We’ve been hearing about edge computing as a trend for the better part of a decade now. Edge computing was supposed to be an emerging paradigm that processed data closer to where it was generated, allowing a variety of networks and devices to process data at greater speed and volume. Heath Thompson, president and GM of Quest Software, explores why those promises have not been fully met and how edge computing could still live up to expectations in the near future.
With edge computing, the appeal was always the possibility of making greater use of “big data” (a term we rarely hear now) for AI, new types of applications and greater efficiencies. However, edge computing has gained no significant traction outside of specific, well-known cases. As network speeds have increased, and cloud and SaaS infrastructure has become more robust, ubiquitous and secure, the advantages of edge computing have become less compelling.
Edge computing works by moving the computing resource closer to the data. It makes sense to do this with things like the IoT, and we can all appreciate iPhones and wearable devices like watches that can process data in real time without an Internet connection. In general, the benefits of edge computing are about response time, cost savings, data aggregation and consolidation, privacy and reducing the threat of security breaches. In many locales, data sovereignty is also a key concern; organizations cannot, by law or policy, allow data to move outside of certain domains. While this is critical, public cloud providers have largely solved for it with the geographic distribution of their data centers. Sovereignty remains an important driver for some edge deployments, but it is not universal.
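As a rough sketch of what “processing close to the source” looks like in practice, consider a device that summarizes its own sensor readings and sends only the result upstream. The field names and threshold below are illustrative assumptions, not any particular platform’s API:

```python
# Hypothetical sketch: aggregate sensor readings on the device and send only
# a compact summary upstream, rather than streaming every raw sample.
from statistics import mean

ALERT_THRESHOLD_C = 85.0  # assumed threshold, purely illustrative

def process_locally(readings: list[float]) -> dict:
    """Reduce raw readings to a small summary on the edge device."""
    return {
        "count": len(readings),
        "avg_temp_c": round(mean(readings), 2),
        "max_temp_c": max(readings),
        "alert": max(readings) > ALERT_THRESHOLD_C,  # real-time decision, no round trip
    }

# Only the summary (a few bytes) leaves the device; the raw data never does,
# which is where the latency, bandwidth and privacy benefits come from.
print(process_locally([71.2, 73.9, 86.4, 72.0]))
```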
Navigating the Complexity and Cost of the Edge
While the concept of edge computing is sound and the benefits of processing data close to the source are undeniable, the reality is that this approach also brings a higher degree of complexity and management cost. As with any other part of an organization’s infrastructure, edge computing platforms require management, maintenance and security. Organizations already struggle with the scale of their networks, IoT devices and users, and edge computing adds to this burden. Each edge deployment becomes another “node” in the network that requires patching and opens new attack surfaces for threats such as ransomware and data breaches.
Advances in Technology Continue to Undercut the Case for Edge Computing
Simply put, edge computing is not simplifying anyone’s life; it is potentially making it more difficult. Beyond security and management considerations, edge computing can also add complexity to data governance and compliance. The data processed by edge computing platforms may include locally stored personal information (PI) subject to privacy regulations such as GDPR or CCPA. In addition, edge computing can perform significant data transformations, which need to be managed and governed as part of data lineage considerations in an overall governance strategy.
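To make that lineage point concrete, here is a minimal sketch of what governing an edge-side transformation might involve: the device records what it changed so a central governance tool can account for it. All function and field names here are assumptions for illustration, not a real product’s API:

```python
# Hypothetical sketch: wrap an edge-side transformation so it emits a lineage
# record that a central governance system could ingest.
from datetime import datetime, timezone

def anonymize(record: dict) -> dict:
    """Example transformation: strip direct identifiers before upload."""
    return {k: v for k, v in record.items() if k not in {"name", "email"}}

def transform_with_lineage(record: dict, device_id: str) -> tuple[dict, dict]:
    transformed = anonymize(record)
    lineage = {
        "device_id": device_id,
        "transformation": "anonymize_v1",
        "fields_removed": sorted(set(record) - set(transformed)),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return transformed, lineage

clean, lineage = transform_with_lineage(
    {"name": "Ada", "email": "ada@example.com", "temp_c": 72.4}, device_id="edge-017"
)
print(clean)    # the data that leaves the edge
print(lineage)  # the record the governance layer needs to see
```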
None of these points are intended to be deal-breakers for edge computing, but rather to call out the obligations organizations take on when they decide to adopt it. There is a dance of opposing forces here: the ever-increasing volume of data from our ability and desire to measure everything, versus the need to produce actionable results from that data, versus the challenges and complexities of managing edge infrastructure, versus the maturity and availability of other computing approaches. Indeed, we are deploying edge computing in some critical use cases, but we are also realizing that it is not as cheap and cheerful, or as widely deployed, as we thought it might be.
If not Edge Computing, then what?
Edge computing is a bit of an anachronistic term, if truth be told. Modern companies don’t really think about “compute” as the way to architect for growth in 2023. Rather, organizations think about their desired outcomes and put data at the center of their thinking. Organizations are building data meshes, or data fabrics, that act as the nervous systems driving business operations.
Data meshes make data more accessible and available, directly connecting data to those who use it: data owners, data producers and data consumers. They also give organizations better decision-making power, allowing the teams that generate data to create usable data products for other teams. A data mesh solves problems such as data bottlenecks globally across the enterprise; where edge computing does it locally, a mesh does it from anywhere that makes sense. It can connect cloud applications to sensitive data residing with a customer in a safe and secure manner. It can create virtual data catalogs from sources that cannot be centralized, and it can give developers the ability to query data across a variety of storage systems without access problems.
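A minimal sketch of that virtual-catalog idea shows the shape of it: consumers query by source name, while the data itself stays with its owner. The connectors below are purely hypothetical stand-ins, not any data mesh product’s interface:

```python
# Hypothetical sketch of a "virtual catalog": one query interface over
# sources that stay where they are.
from typing import Callable, Iterable

class VirtualCatalog:
    def __init__(self) -> None:
        self._sources: dict[str, Callable[[str], Iterable[dict]]] = {}

    def register(self, name: str, query_fn: Callable[[str], Iterable[dict]]) -> None:
        """Each data owner exposes a query function; the data never moves."""
        self._sources[name] = query_fn

    def query(self, source: str, expr: str) -> list[dict]:
        return list(self._sources[source](expr))

catalog = VirtualCatalog()
catalog.register("warehouse", lambda expr: [{"sku": "A1", "on_hand": 40}])    # e.g. a SQL backend
catalog.register("edge_site", lambda expr: [{"sku": "A1", "sold_today": 7}])  # e.g. an on-prem store

# A consumer queries by source name, without caring where or how the data is stored.
print(catalog.query("warehouse", "sku = 'A1'"))
print(catalog.query("edge_site", "sku = 'A1'"))
```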
The technology follows four key principles that deliver many of the promised but never delivered business benefits of edge computing: decentralized domain-oriented data ownership; data as a product; self-serve data infrastructure as a platform; and federated data governance. Most importantly, a data mesh puts data where it belongs: at the center of any strategy that enables the growth and transformation of modern business.
Shifting the focus from computing to data is essential, and it is also the right lens through which to weigh edge computing against cloud computing. Organizations today are focused on data, and computing is a means to an end: how to get results from that data. We have long recognized that we live in a “hybrid” world, where the answer is not either/or but both/and. The modern business is focused on its data, and empowering organizations to use their data is where our focus needs to be in 2023 and beyond.
How do you shift focus to the edge and beyond to leverage data? Share with us on Facebook, Twitter, and LinkedIn.