
Lenovo’s Alan Browning discusses the benefits of Hyperconverged Infrastructure

October 31, 2018


Alan Browning, Hyperconverged Solutions Leader, Data Center Group, Lenovo META.

Many organisations today are looking to optimise their data centre management by moving toward a single, unified hyperconverged platform. The challenge is finding the least complicated and most cost-efficient way to achieve next-generation infrastructure without disrupting the business.

Alan Browning, Hyperconverged Solutions Leader, Data Center Group, Lenovo META (Middle East, Turkey and Africa), explains in the Q&A below why hyperconvergence is more than just hype. With the right technology and a trusted partner, hyperconvergence can deliver significant operational improvements and cost savings.

As the technology matures, what benefits can hyperconvergence bring to businesses?
Hyperconvergence is an innovative IT framework that combines storage, computing and networking into a single system, simplifying data management and enhancing productivity. One example of its benefit to the function and productivity of a business is the dramatic reduction in turnaround time for delivering products and systems to customers: the environment scales simply by adding a new node via an IP address, which also lets IT departments forecast their immediate requirements more accurately. Operational costs drop as well, because the work can be executed during normal office hours, taking overtime out of the equation. In simple terms, hyperconvergence by design allows enterprise customers to break the cycle of procuring hardware platforms, and allows IT departments to focus on delivering services to the business rather than on operational issues.
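A minimal, vendor-neutral sketch of that scale-out idea in Python: an HCI cluster pools the compute and storage of its member nodes, so capacity grows simply by registering one more node by its IP address. All class and field names here are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative model of HCI scale-out: capacity is the pooled
# resources of all nodes, and growing the cluster is just
# registering another node by IP. Hypothetical names throughout.
from dataclasses import dataclass, field

@dataclass
class Node:
    ip: str
    cpu_cores: int
    storage_tb: float

@dataclass
class HciCluster:
    nodes: list[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        """Scaling out is simply adding one more node to the pool."""
        self.nodes.append(node)

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

cluster = HciCluster()
cluster.add_node(Node("10.0.0.11", cpu_cores=32, storage_tb=20))
cluster.add_node(Node("10.0.0.12", cpu_cores=32, storage_tb=20))
print(cluster.total_cores, cluster.total_storage_tb)  # 64 40.0
```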

How has HCI become a growing trend in today's data centres?
A number of reports and reliable studies predict that HCI will be the fastest-growing technology in the modern data centre since the advent of virtualisation, which in the mid-2000s transformed the data centre from a physical platform into a widely adopted virtual environment. Just last year, the HCI market touched USD 4 billion, and analysts predict it will exceed USD 10 billion by 2021. HCI has become a critical asset to data centres today, protecting data, increasing scalability and reducing costs.
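For context, those figures imply compound annual growth of roughly 26 percent, assuming the USD 4 billion value refers to 2017 and the forecast to 2021 (an assumption, since the article does not date the figures). A quick check:

```python
# Implied compound annual growth rate (CAGR) of the HCI market,
# assuming USD 4bn in 2017 growing to USD 10bn by 2021 (4 years).
start, end, years = 4.0, 10.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 25.7%
```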

As noted at a recent press briefing, hyperconvergence represents a decision point for enterprises: whether to follow the traditional server, networking and storage architecture or to deploy a single hyperconverged system. Which, if either, is best?

As technology evolves, hyperconvergence remains the simplest and most sophisticated path to the new data centre; it proves that simplicity is the ultimate sophistication. Organisations around the world continuously demand technologies and systems that enhance productivity and increase scalability while remaining cost-efficient. Hyperconvergence offers just that, taking the complexity of managing high-end systems away from the internal IT operations team and making data handling less complex.

The greatest challenge hyperconvergence solves is breaking down the silos within organisations. The virtualisation administrator becomes the storage guy, the network guy and the compute guy, all rolled into one, able to provide multiple services at the same rate as public cloud providers. In conclusion, given the way hyperconvergence addresses business challenges, it is by far the superior solution compared with the old, converged way of managing and deploying infrastructure. The complexity of managing the solution is reduced, allowing the business to react at the speed customers require and, ultimately, allowing your IT organisation to operate at a higher level in the value chain.

How is artificial intelligence driving the adoption of hyperconverged cloud infrastructure?
Simply put, these two technologies are deeply intertwined. The challenge is deciding which applications, or subset of applications, to move to the public cloud and which to continue managing internally. The ongoing debate is summed up as on-premise vs cloud, but in reality it is a case of on-premise plus the cloud.

Broken down to its essence, IoT means that every device we interact with has an IP address and generates data, which must be moved to a central repository and interpreted for machine learning and decision-making, which is essentially artificial intelligence. The public cloud is a great fit for any decision that does not need to be made in real time and can afford latency, offering a purpose-built, cost-effective and scalable route. However, for solutions that cannot tolerate latency and require "real-time" decisions, such as self-driving cars or pilotless aircraft, the cloud simply won't suffice: no technology today can tolerate uploading 10GB of data to a public cloud every 30 seconds, which would be the case for a self-driving car with over 128 sensors. Often the truest things are said in jest, and in this case it could be said that "when microseconds matter, the public cloud is only 88 milliseconds away". This is how AI and the IoT are driving mass adoption of HCI technologies deployed "at the edge".
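A back-of-the-envelope check on that self-driving-car example shows why the numbers rule out the cloud: 10GB every 30 seconds works out to a sustained uplink of roughly 2.7 Gbit/s per vehicle, far beyond what any mobile network can guarantee, which is exactly the case for processing at the edge.

```python
# Sustained uplink implied by the article's figures:
# 10 GB generated per 30-second window, uploaded continuously.
data_gb = 10    # GB per window (from the article)
window_s = 30   # seconds per window
bits = data_gb * 8 * 10**9
print(f"Required uplink: {bits / window_s / 10**9:.2f} Gbit/s")
# Required uplink: 2.67 Gbit/s
```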

Edited by Darryl Linington
