What is cloud computing? It’s the practice of storing, managing, and processing large amounts of data on remote servers and data centers, usually over the internet. However, cloud technologies have their drawbacks, in particular, increased latency of information processing. So when it comes to time-critical applications, there’s a need for a faster and more flexible solution. Edge computing is a response to this challenge.
In this article, we focus on what edge computing is and compare it with cloud computing. We discuss the benefits of moving data processing to the edge of the network. We also take a closer look at the main challenges edge computing faces today and ask whether it can replace cloud computing.
- Edge computing vs. cloud computing
- Why is cloud computing not enough?
- What is edge computing?
- In what areas can you use edge computing?
- Main pros and cons of edge computing
- Benefits of edge computing
- Drawbacks of edge computing
- Edge computing security issues
- Requirements for implementing edge computing
Edge computing vs. cloud computing
When talking about cloud computing vs. edge computing, the main focus is on where data processing takes place. Currently, the majority of existing Internet of Things (IoT) systems perform all of their computations in the cloud using massive centralized servers. As a result, low-level end devices, as well as gateway devices that have somewhat more storage and processing resources, are used mostly for aggregating data and performing low-level processing.
Edge computing offers a completely different approach: it moves most of these processes – from computing to data storage to networking – away from the centralized data center and closer to the end user. According to a study by IDC, by 2020, 45 percent of all data generated by IoT devices will be stored, processed, and analyzed at the edge of a network or close to it.
Why is cloud computing not enough?
With the amount of data we currently process, cloud computing isn’t the best choice for latency-intolerant and computation-intensive applications. When you compute and analyze all data in the cloud, you inevitably face two problems: increased latency, and wasted resources at the edge of the network, where decentralized data centers, cloudlets, and mobile edge nodes sit largely unused.
This is especially obvious in IoT: the number of intelligent devices is growing rapidly. Here are some predictions of how fast this industry will grow by 2020:
- Gartner predicts that by 2020, the number of connected devices will reach 20.8 billion, compared to only 6.4 billion connected devices in 2016.
- IDC says that in 2020, the total installed base of IoT devices will reach as high as 28.1 billion. At the same time, IDC predicts that by the same year, 67% of enterprise IT infrastructure and software will be used to serve cloud-based offerings.
- IHS Markit claims that by 2020, the total installed base of IoT devices will grow to 30.7 billion, compared to 15.4 billion devices in 2015.
The main problem is that all of these devices generate huge amounts of data but don’t process it. Instead, they send all this information to the cloud, which overloads both data centers and networks. The resulting increase in latency is a major challenge for IoT and mobile virtual reality products.
Performing computation closer to the source of data can help lower the general dependence of your service or app on the cloud and make data processing faster.
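As a simple illustration of this idea, an edge device can reduce a raw sensor stream to a compact summary and upload only that summary. The Python sketch below is hypothetical: the sensor values and the payload format are made up for illustration.

```python
import json
import statistics

def summarize_readings(readings):
    """Reduce a raw sensor stream to a compact summary on the edge device."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# Hypothetical raw stream from a temperature sensor (values in °C).
raw_stream = [21.4, 21.6, 21.5, 35.2, 21.5, 21.4]

summary = summarize_readings(raw_stream)
payload = json.dumps(summary)  # only this small payload travels to the cloud
print(payload)
```

In a real deployment the summary interval and the statistics chosen would depend on the application; the point is that the cloud receives one small record instead of the full stream.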
What is edge computing?
So, what is edge computing and how does it differ from cloud computing? The main difference between edge computing and cloud computing stems from the fact that edge computing offers a more decentralized architecture in which most processes are performed on the devices themselves. Instead of one centralized data center, edge computing uses a mesh network of smaller data centers that can store and process data locally, at the “edge” of the network.
There are two forms of edge computing:
- Cloud edge, when the public cloud is extended to several point-of-presence (PoP) locations
- Device edge, when a custom software stack emulating cloud services runs on existing hardware
The main difference between these two forms is in their deployment and pricing models. Since cloud edge is just an extended form of the traditional cloud model, the cloud provider is responsible for the entire infrastructure.
Device edge lives on the customer’s hardware, making it possible to perform near real-time processing of requests.
There are three ways of implementing edge computing:
- Fog computing – a decentralized computing infrastructure in which all data, storage, and computing applications are distributed in the most efficient way between the cloud and end devices
- Mobile edge computing (MEC) – an architecture that brings computational and storage capacities of the cloud closer to the edge of the mobile network
- Cloudlet computing – an infrastructure that uses smaller data centers to offload central data centers and bring the cloud closer to end users
Edge computing opens new possibilities for IoT applications that rely on machine learning, boosting the speed of performance for language processing, facial recognition, obstacle avoidance, and other tasks. Plus, it helps take some of the load off the main data center and network, reducing the total amount of centralized processing.
In what areas can you use edge computing?
Below, we give several edge computing examples and list the areas where it appears to be reasonable to move computational processes to the network edge.
- Smart homes and cities. With a constantly growing number of sensors, smart cities cannot rely only on cloud computing. Processing and analyzing information closer to the source may help reduce latency in smart cities and for community services that require fast responses, such as law enforcement and medical teams.
- Smart transportation. Edge computing allows you to process all information locally, sending to the cloud only the most important data. This technology is already used to improve the quality of both commercial and public intelligent transportation systems. It also helps increase the safety and efficiency of travel by feeding local information patterns into larger and more widely accessible network systems.
- Drones and remotely operated vehicles. When a self-driving car must react to the data it collects, even the smallest delay can lead to a potentially dangerous situation. Moving computation closer to the data source helps significantly reduce latency and improve quality of service.
Many industry giants are moving away from the cloud and closer to the edge, including such companies as Microsoft (with its Azure IoT Edge), Amazon (with its AWS Greengrass), Google, Dropbox, and Nest Security Systems. At the same time, there are several startups such as Vapor IO trying to build new networks of distributed data centers.
Main pros and cons of edge computing
Of course, moving computation away from the cloud has its benefits and drawbacks. Below, we list the most relevant pros and cons of using an edge computing approach for business.
Benefits of edge computing
Decreased latency is the most important benefit of performing computation at the edge of a network. Reducing latency is especially important for applications requiring an immediate response. As a result of edge computing, connected applications become more responsive and robust.
Prevention of network and data center overload – by processing data at the point of origin instead of making a device-cloud-device round trip, you can lift the burden from both data centers and networks. The load also scales better with respect to the number of endpoints, since computing resources grow proportionally with them. Avoiding round trips to and from the cloud or a centralized data center is especially important for applications that use machine learning or computer vision.
Increased redundancy and availability for businesses – when computation is performed at the edge, any possible disruption can be limited to one point in the network instead of the entire system, as is the case with cloud computing.
Lower management and connectivity costs – when you send only significant information instead of raw streams of sensor data and send it only short distances, it saves a lot of network and computing resources.
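To get a feel for the potential savings, here is a back-of-envelope calculation in Python. All of the sizes and rates are assumed figures for illustration, not measurements:

```python
# Back-of-envelope estimate of upstream traffic saved by edge aggregation.
# Every figure below is an illustrative assumption, not a measurement.

READING_BYTES = 200          # assumed size of one raw JSON sensor reading
READINGS_PER_SECOND = 10     # assumed sensor sampling rate
SUMMARY_BYTES = 300          # assumed size of one aggregated summary
SUMMARIES_PER_MINUTE = 1     # edge node uploads one summary per minute

raw_per_minute = READING_BYTES * READINGS_PER_SECOND * 60   # 120,000 bytes/min
edge_per_minute = SUMMARY_BYTES * SUMMARIES_PER_MINUTE      # 300 bytes/min

reduction = 1 - edge_per_minute / raw_per_minute
print(f"Upstream traffic reduced by {reduction:.1%}")
```

Under these assumptions the edge node sends a small fraction of one percent of the raw volume upstream; the exact figure in practice depends entirely on the workload.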
Drawbacks of edge computing
Processing large amounts of data – edge computing has a strictly limited capacity for storing and processing huge sets of information, since all computation takes place on resource-constrained edge nodes.
Real-time data processing – while decreasing latency is one of the main purposes of implementing edge computing in the first place, ensuring real-time data processing in practice may be a challenge. The connectivity among a large number of end devices makes it challenging for edge nodes to perform high-quality real-time data processing. Furthermore, an increase in the amount of data produced may lead to overloading of the network edge and degraded performance of some applications.
Programming – in the cloud, the majority of programs are compiled for a particular target platform and written in one programming language. But in edge computing, computation processes run on a variety of platforms, and the runtime can be different for each. As a result, writing an application that can be deployed in an edge computing environment is challenging.
Naming – there’s no standardized naming mechanism for the edge computing paradigm yet, so you may need to learn different network and communication protocols in order to communicate with the many different things in your system.
While traditional naming mechanisms such as the Uniform Resource Identifier (URI) and Domain Name System (DNS) satisfy the majority of current networks, these mechanisms aren’t flexible enough to serve a dynamic edge network. Furthermore, a complex IP-based naming mechanism can be too heavy for some resource-constrained items at the edge of the network to support. Instead of DNS, you can use new naming mechanisms such as MobilityFirst and Named Data Networking (NDN).
Optimization metrics – workload allocation becomes a serious challenge in edge networks because there are many layers with different computation capabilities.
Intelligent connection scheduling policy – developing an efficient method for scheduling computing tasks and splitting them between different edge nodes is challenging as well. Not only do you need to know whether a particular edge node is able to perform the required computation, you also need to make sure that the productivity of the node won’t be affected by doing it.
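A minimal scheduling policy along these lines would check both constraints mentioned above: whether a node can fit the task, and how loaded it already is. The Python sketch below is a deliberately naive illustration; the node names and capacity units are invented:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_cpu: float   # available compute units (invented scale)
    load: float       # current utilization, 0.0–1.0

def schedule(task_cpu, nodes):
    """Pick the least-loaded node that can still fit the task, or None.

    A deliberately naive policy: real schedulers also weigh network
    distance, energy budgets, and data locality.
    """
    candidates = [n for n in nodes if n.free_cpu >= task_cpu]
    if not candidates:
        return None  # fall back to the cloud
    return min(candidates, key=lambda n: n.load)

nodes = [
    EdgeNode("gateway-1", free_cpu=2.0, load=0.7),
    EdgeNode("gateway-2", free_cpu=4.0, load=0.3),
    EdgeNode("camera-hub", free_cpu=0.5, load=0.1),
]
chosen = schedule(task_cpu=1.5, nodes=nodes)
print(chosen.name)  # gateway-2: enough capacity, lowest load among the fits
```

Note that the policy also answers the second question from the paragraph above: by preferring the least-loaded candidate, it avoids degrading the productivity of busy nodes.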
Lightweight database and kernel – since edge nodes’ capabilities are limited, there may be additional limitations on what software is compatible. Therefore, it’s preferable to use less resource-intensive, lightweight databases and software for edge computing.
Privacy and security – there are a lot of concerns about ensuring privacy and security of data in edge computing. One of these concerns is that end devices are more vulnerable to attacks than a centralized data center. In the next section, we take a closer look at this particular problem.
Edge computing security issues
When it comes to edge computing and information security, there are two completely opposite points of view. On the one hand, storing, processing, and analyzing sensitive information close to the source rather than sending it over the network may help improve data protection. On the other hand, there is the possibility of end devices being even more vulnerable to attacks than a network of centralized data centers.
As a result, when building an edge computing solution, you need to pay special attention to ensuring the protection of the entire system. Access control, data encryption, and virtual private network tunneling may be of help.
The major challenge with developing a reliable lightweight security solution is the diversity of edge computing environments.
In general, there are three major security challenges with edge computing:
- Creating an efficient authentication mechanism – giving access to edge computing services only to authorized end devices is crucial for preventing the entry of unauthorized nodes. The problem is that standard solutions applicable in the cloud can’t be used at the network edge because of the significant difference in end devices’ power, storage, and processing capabilities.
- Protecting against malicious attacks – an edge computing environment is vulnerable to different types of malicious attacks such as spoofing, intrusions, and Denial-of-Service (DoS). Therefore, proper security measures need to be taken in order to ensure the protection of network capabilities.
- Ensuring end users’ privacy – since sensitive data may be collected by edge nodes, there’s a need to ensure the protection of such data. In addition, users’ habits can be revealed to an adversary who analyzes their usage of edge services.
Also keep in mind that the majority of end users’ devices have limitations because of their battery-powered nature. As a result, you need to implement lightweight security and privacy solutions.
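One common lightweight approach that suits battery-powered devices is message authentication with a pre-shared key and HMAC, which is far cheaper than full public-key infrastructure. The Python sketch below is illustrative only; the key and payload are made up:

```python
import hmac
import hashlib

# Hypothetical pre-shared key provisioned on both the device and the edge node.
DEVICE_KEY = b"example-preshared-key"

def sign_message(key: bytes, payload: bytes) -> str:
    """Device side: compute an HMAC-SHA256 tag for each payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_message(key: bytes, payload: bytes, tag: str) -> bool:
    """Edge-node side: constant-time comparison against the expected tag."""
    expected = sign_message(key, payload)
    return hmac.compare_digest(expected, tag)

payload = b'{"sensor": "temp-01", "value": 21.5}'
tag = sign_message(DEVICE_KEY, payload)

assert verify_message(DEVICE_KEY, payload, tag)             # authentic message accepted
assert not verify_message(DEVICE_KEY, payload + b"x", tag)  # tampering detected
```

This only authenticates messages; it does not encrypt them, handle key distribution, or prevent replay, all of which a production scheme would also need to address.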
Requirements for implementing edge computing
There are several requirements for ensuring a quality implementation of the edge computing model:
- Scalability
- Reliability
- Resource management
- Interoperability
- Security
Below, we take a closer look at each of these requirements.
Scalability comes first: edge computing needs to provide steady performance even if the number of end devices and applications increases significantly. This can be achieved by adding new points of service and therefore expanding edge computing geographically, or by adding new service nodes to an existing network. You can also use cloud interplay and make your edge computing model work as an independent mini cloud.
Reliability is one of the main problems you need to think of when moving computation to the network edge. Reliable edge computing ensures failover mechanisms for different cases, from failure of an individual node to failure of the entire edge network and service platform to lack of network coverage.
You need to implement protocols similar to packet pathfinding in order to ensure that you can find available services in the event of a failure. Additionally, significant information such as client session data stored on edge devices needs to be consistently backed up so it can be recovered in case of device or network failure. Plus, in contrast to cloud computing, you can’t immediately add new nodes to existing edge computing infrastructure.
However, you can increase the reliability of your edge computing model by implementing the three following techniques:
- Reschedule failed tasks
- Create checkpoints by periodically saving the state of user services and end devices
- Replicate edge servers and mini data centers in multiple geographical locations
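The checkpointing technique can be sketched as periodically persisting node state so it survives a restart. The Python example below is a simplified illustration; the file path and session fields are hypothetical:

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    """Write state atomically so a crash mid-write can't corrupt the checkpoint."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp_path, path)  # atomic rename over the old checkpoint

def load_checkpoint(path, default=None):
    """Recover the last saved state after a node restart."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default

# Hypothetical session state kept on an edge node.
state = {"session": "abc123", "last_seq": 42}
ckpt = os.path.join(tempfile.gettempdir(), "edge_node.ckpt")
save_checkpoint(ckpt, state)
print(load_checkpoint(ckpt))  # {'session': 'abc123', 'last_seq': 42}
```

In a real system the checkpoint would typically be replicated to a neighboring node or a mini data center as well, combining this technique with the replication point above.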
Creating an efficient resource management model is another crucial issue posed by the edge computing paradigm. Edge computing requires adopting multi-level resource management techniques that can be applied at the edge network level as well as in coordination with remote cloud providers and edge networks.
One of the main requirements for an efficient edge computing model is to provide interoperability among various edge devices. Therefore, edge computing needs to define standards of interoperability for both applications and data exchange between users. Standard protocols that apply to all devices within the network also need to be devised.
Security in edge computing has two aspects:
- Isolating data paths and memory
- Ensuring protection of edge devices against malicious users
While cloud computing ensures data isolation by using virtualized devices, edge computing needs to define sandboxes for user applications in order to monitor resource usage and ensure the isolation of sensitive data. Network function virtualization appears to be one possible solution to this problem. This concept can be applied to network nodes to ensure the protection of user domains. But since NFV is still a new technology, not all network vendors adhere to its standards.
While some believe that edge computing can replace cloud computing altogether, the two approaches are complementary rather than substitutes. Used in conjunction, cloud and edge computing architectures allow businesses to store and process data more effectively.
Furthermore, many businesses are already using a hybrid architecture to improve performance and customer satisfaction. Cloud computing remains one of the best ways to safely back up and store large sets of information. It also can be effectively used for performing less time-sensitive data processing. And for operations requiring faster data processing, performing computation on distributed data centers or devices themselves appears to be the best solution.