Member Article
How edge computing and hyperconvergence help your business remain “always-on”
Alan Conboy, Office of the CTO at Scale Computing
In today’s technology- and data-driven world, IT downtime is bad news for any business, not least because of its impact on reputation, revenue, and customer satisfaction. Recent high-profile cases, such as the data centre outage that led British Airways to pay out an estimated £100 million after cancelling more than 400 flights and stranding 75,000 passengers in a single day, show just how much downtime can cost a business.
A well-established enterprise like British Airways may be able to absorb the financial and reputational backlash and make a comeback, but smaller businesses can be hit so hard that they never fully recover. An established business is also more likely to be able to afford the latest technology for detecting sophisticated threats and protecting valuable data, as well as the expert IT staff to manage it all.
For smaller businesses, and for distributed enterprises in industries such as retail or financial services, there is an answer to the challenge of keeping pace not only with increasingly sophisticated cyber-threats but also with the growing requirement to remain “always-on”. Smaller organisations can look to the latest technologies, such as hyperconvergence and edge computing, which offer high availability, lower total cost of ownership (TCO), and easy deployment and management. By investing in technologies that are simple to manage and do not require on-site IT experts, distributed organisations can achieve a sophisticated cyber-security strategy that mitigates the risk of costly downtime.
Say goodbye to the single point of failure

Edge computing is all about putting computing resources close to where they are needed. In a traditional architecture, devices at branch locations, such as point-of-sale cash registers in retail stores, all connect to a centralised data centre. That data centre becomes a single point of failure: an outage there can affect every branch location.
By putting an edge computing platform at each branch location, however, a failure at the central data centre need not bring everything down, because each branch can run independently of it. A solid virtualised environment at the branch can run all of the applications needed to provide customers with the high-tech services they have come to expect.
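As a minimal sketch of that independence, a branch application might prefer its local edge platform and treat the central data centre as a fallback rather than a dependency. The hostnames and function names below are invented for illustration, not drawn from any particular product:

```python
import socket

# Hypothetical endpoints for the sketch: a branch-local edge node and the
# central data centre. Neither hostname refers to a real system.
LOCAL_EDGE = ("edge.branch.local", 8443)
CENTRAL_DC = ("services.headoffice.example", 8443)

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_endpoint():
    """Prefer the branch-local edge platform, so the till keeps working
    even when the central data centre is unreachable."""
    for endpoint in (LOCAL_EDGE, CENTRAL_DC):
        if reachable(*endpoint):
            return endpoint
    raise RuntimeError("no service endpoint reachable")
```

Because the local edge node is tried first, an outage at head office never appears in the branch’s critical path.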
Many may wonder why this hasn’t been done before, and there is a simple answer: until very recently, the highly available infrastructure needed to make it work was cost-prohibitive. Building a highly available virtual infrastructure meant a sizeable investment in a shared-storage appliance, multiple host servers, hypervisor licensing, and then a disaster recovery solution on top.
Hyperconvergence: it just works

This is where hyperconvergence comes into play: it consolidates those components into an easy-to-deploy, low-cost solution. Still, not all hyperconverged infrastructure (HCI) solutions are cost-effective for edge computing. Some are still designed like traditional virtualisation architectures and emulate SAN technology to support that legacy design. The storage emulation wastes resources, requiring bigger systems that are too large and expensive for edge deployments.
The solution is HCI with hypervisor-embedded storage, which offers smaller, cost-effective, highly available infrastructure that lets each branch location run independently even if the central data centre goes down. A small cluster of three HCI appliances can keep running despite drive failures or even the failure of an entire appliance. Downtime can never be prevented completely, but edge computing on the right highly available infrastructure insulates branches so they can continue operating on their own.
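The arithmetic behind that three-appliance figure is worth spelling out. Assuming a majority-quorum cluster design, a common though not universal approach in HCI, a cluster keeps running only while a strict majority of its nodes survives:

```python
def tolerable_failures(nodes: int) -> int:
    """Appliances that can fail while a strict majority (quorum) survives."""
    return (nodes - 1) // 2

for n in (2, 3, 4, 5):
    print(f"A {n}-appliance cluster tolerates {tolerable_failures(n)} failure(s)")
# A 2-appliance cluster tolerates 0 failures; 3 appliances tolerate 1,
# which is why three is the usual floor for high availability.
```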
With HCI, the central data centre remains a vital piece of the overall IT infrastructure: it consolidates data from all of the branch locations for the analysis behind key business decisions. That doesn’t need to change with edge computing. On-site edge platforms can provide local computing while communicating key data back to the central data centre for reporting and analytics. With the central data centre no longer a single point of failure, an outage at any one location need not ripple across the whole organisation.
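One common pattern for that reporting path is store-and-forward: the edge platform records key data locally and ships it to the central data centre whenever the link is up. The queue and send hook below are a hypothetical sketch of the pattern, not any specific product’s API:

```python
from collections import deque

class EdgeReporter:
    """Buffer branch records locally and forward them to the central data
    centre when it is reachable, so the branch never depends on the link."""

    def __init__(self, send):
        self._send = send        # callable that raises ConnectionError on failure
        self._queue = deque()    # records buffered locally, not yet shipped

    def record(self, item) -> None:
        self._queue.append(item) # the local write always succeeds
        self.flush()             # opportunistically ship whatever we can

    def flush(self) -> None:
        while self._queue:
            try:
                self._send(self._queue[0])
            except ConnectionError:
                return           # central data centre is down; keep buffering
            self._queue.popleft()  # drop a record only after a confirmed send
```

If the central data centre is offline, record() still succeeds and flush() simply retries later, so head-office reporting and analytics catch up as soon as the link returns.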
The logical solution

As more and more technologies become commonplace across business, industry, and daily life, high availability is only going to become more important. HCI and edge computing are quickly replacing traditional virtualisation infrastructure and making that high availability more accessible.
Whether they run one small location or thousands of distributed branches, many organisations will want to consider HCI for highly available edge computing to ensure their “always-on” business goal is achieved.