Why edge computing is gaining popularity in a Covid-affected world

21st June 2021

By Neelesh Kripalani, Sr. VP & Head - Centre of Excellence, Clover Infotech
The onset of the pandemic has compelled enterprises to rapidly move their critical workloads to the cloud to ensure the seamless functioning of business.
Hence, we see a plethora of new cloud trends emerging in the market. One such trend is edge computing.
Not long ago, edge computing was considered a futuristic concept, something that was attractive to talk about but lacked real-world examples. That is no longer the case. Amidst spiralling numbers of COVID-19 cases, organisations are re-examining their operational structures to meet the new challenges that the second wave has brought about. In this space, remote working has once again become a buzzword, paving the way for cloud-enabled technologies to shape the new normal. As cloud gains momentum and enterprises frantically look for ways to optimize their network, storage and agility, edge computing has turned out to be the perfect solution.
To understand where edge computing fits in the broader spectrum of IT infrastructure, we need to begin with the basics: what exactly is edge computing?
“Edge computing” is a type of distributed architecture in which data processing occurs close to the source of data, i.e., at the “edge” of the system. This approach reduces the need to bounce data back and forth between the cloud and device while maintaining consistent performance. 
In terms of infrastructure, edge computing is a network of local micro data centers used for storage and processing. At the same time, the central data center oversees the proceedings and gets valuable insights into the local data processing. However, we need to be mindful that edge computing is an extension of cloud computing architecture: an optimized solution for decentralized infrastructure.
The main difference between edge and cloud computing lies in where processing happens. In the cloud, data is processed away from the source, so data transmission can become a bottleneck, which in turn leads to latency. In edge computing, the processing occurs closer to the data source. This reduces latency in data transmission and computation, thereby enhancing agility.
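To make the latency difference concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures and the helper function are illustrative assumptions, not measurements from any particular deployment; they simply show how the extra network distance and constrained bandwidth of a round trip to the cloud add up compared with processing nearby.

# Illustrative sketch: comparing a round trip to a distant cloud region
# with processing on a nearby edge node. All numbers are assumptions.

def round_trip_ms(payload_kb: float, network_latency_ms: float,
                  bandwidth_mbps: float, processing_ms: float) -> float:
    # One network traversal out, one back, plus transfer and processing time.
    transfer_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return 2 * network_latency_ms + transfer_ms + processing_ms

payload_kb = 512  # one batch of raw sensor readings

cloud_ms = round_trip_ms(payload_kb, network_latency_ms=60,
                         bandwidth_mbps=50, processing_ms=5)
edge_ms = round_trip_ms(payload_kb, network_latency_ms=2,
                        bandwidth_mbps=500, processing_ms=8)

print(f"cloud round trip: ~{cloud_ms:.0f} ms")  # roughly 200 ms
print(f"edge round trip:  ~{edge_ms:.0f} ms")   # roughly 20 ms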
While conversations about the advantages of edge computing are exciting, to drive real value from it an organization needs to begin by identifying the pain points it addresses. The ultimate purpose of edge computing is to bring compute, storage, and network services closer to endpoints and end users to improve overall application performance. Based on this knowledge, IT architects must identify and document instances where edge computing can address existing network performance problems.
How does edge computing work?
In traditional enterprise computing, data is produced at a user's computer. That data is moved across a Wide Area Network (WAN) such as the internet and through the corporate LAN, where it is stored and worked upon by an enterprise application. The results of that work are then conveyed back to the end user. However, when we consider the number of devices connected to a company's servers and the volume of data they generate, it is far too much for a traditional IT infrastructure to accommodate.
So, IT architects have shifted focus from the central data center to the logical edge of the infrastructure -- taking storage and computing resources from the data center and moving those resources to the point where the data is generated. The idea is very simple: if we can't get the data closer to the data center, we get the data center closer to the data.
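As a simple illustration of that shift, the following Python sketch (hypothetical function and variable names, written purely for illustration) contrasts the traditional flow, in which every raw reading crosses the WAN to be processed centrally, with an edge flow in which only the computed result travels to the central data center.

from statistics import mean

def traditional_flow(raw_readings):
    # Every raw reading crosses the WAN; the enterprise application in the
    # central data center computes the result and ships it back.
    items_sent_over_wan = len(raw_readings)
    result = {"average": mean(raw_readings)}   # computed in the data center
    return items_sent_over_wan, result

def edge_flow(raw_readings):
    # Compute and storage sit next to the source; only the result
    # travels to the central data center.
    result = {"average": mean(raw_readings)}   # computed at the edge
    items_sent_over_wan = 1                    # just the summary
    return items_sent_over_wan, result

readings = [21.4, 21.9, 22.1, 21.7] * 250      # 1,000 raw sensor readings
print(traditional_flow(readings)[0])           # 1000 values cross the WAN
print(edge_flow(readings)[0])                  # 1 summary crosses the WAN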
Why edge computing is gaining popularity
There are several reasons for the growing adoption of edge computing:
- With emerging technologies such as IoT and IoB, data is being generated in real time. Devices enabled by these technologies require fast response times and considerable bandwidth for proper operation.
- Cloud computing is centralized. Transmitting and processing massive quantities of raw data puts a significant load on the network's bandwidth.
- The incessant movement of large quantities of data back and forth is not cost-effective and leads to latency.
- Processing data at the source and then sending only the valuable data to the center is a more efficient solution.
As organisations increasingly return to remote working models, we will witness wide adoption of edge computing, as it empowers remote work infrastructure with greater computation and storage capabilities. When millions of end-user devices operating across geographic locations connect to a central data centre, they put tremendous strain on the IT infrastructure. In such a scenario, edge computing has emerged as a viable architecture because it supports distributed computing, deploying compute and storage resources closer to the data source. In doing so, it not only enables seamless decentralisation of IT but also eliminates data congestion and latency issues. It allows enterprises to deploy local storage to collect and protect raw data, while local servers perform the essential analytics to enable faster decision-making before sending the result to the central data centre.
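A minimal sketch of that pattern, assuming a hypothetical edge server class and an illustrative anomaly threshold (not any specific product or API), might look like this in Python:

from statistics import mean

class EdgeAnalytics:
    # Buffers raw readings locally and forwards only a distilled summary.

    def __init__(self, anomaly_threshold):
        self.anomaly_threshold = anomaly_threshold
        self.local_store = []          # raw data stays at the edge site

    def ingest(self, reading):
        self.local_store.append(reading)

    def summary_for_central(self):
        # Only the essential analytics leave the site for the central data centre.
        anomalies = [r for r in self.local_store if r > self.anomaly_threshold]
        return {
            "count": len(self.local_store),
            "average": round(mean(self.local_store), 2),
            "anomalies": anomalies,
        }

edge = EdgeAnalytics(anomaly_threshold=75.0)
for reading in [61.2, 63.8, 80.5, 62.0, 64.4]:
    edge.ingest(reading)

print(edge.summary_for_central())
# {'count': 5, 'average': 66.38, 'anomalies': [80.5]}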

TechNote: Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth. It is a topology rather than a technology. (Source: Wikipedia)