Edge Computing Platform

Optimize and deliver fast, responsive user experiences


Edge Computing Platform delivers ultra-low-latency, high-performance computing.

CDNetworks Edge Computing Platform (ECP) enables customers to meet growing business demands by effortlessly deploying and scaling up container-based applications. ECP places high-performance compute, storage and network resources as close as possible to end users. Doing so lowers the cost of data transport, decreases latency, and increases locality. ECP is a container orchestration system built on Kubernetes and Docker for customers to write container-based applications once and deploy them everywhere.

Resources

ECP Free Tier Program

Sign Up and get $500 credit

Product Highlights

1500+ PoPs Global Presence

CDNetworks provides unmatched scale for rapidly expanding your container-based applications

50+ Tbps High Bandwidth

Aggregated bandwidth ensures high performance and availability, even with peak traffic

< 50 ms Ultra Low Latency

Fast application processing and communication between the edge and endpoints

Distributed PoP coverage to ensure ultra-low latency

Compatible with the TCP protocol

Automated deployment, self-healing, auto scaling, application monitoring & reporting

Comprehensive technical support

Edge Computing Platform Solution

ECP is an Infrastructure as a Service (IaaS) offering that provides Compute, Network, and Storage resources for container instances, along with Kubernetes (K8s) container management at the edge.

Compute

CPU
Memory

Network

Public IPv4 and IPv6 network interface
Static IPs
Load Balancing

Storage

High performance local SSD persistent storage
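For illustration, a container workload would claim this persistent local SSD storage through a standard Kubernetes PersistentVolumeClaim. The sketch below is minimal; the storage class name local-ssd is an assumption and may differ from the class an ECP cluster actually exposes.

```yaml
# Minimal PersistentVolumeClaim sketch. The storage class name
# "local-ssd" is an assumption for illustration; substitute the
# class exposed by your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # mounted read-write by a single node
  storageClassName: local-ssd
  resources:
    requests:
      storage: 10Gi        # illustrative size
```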

Product Highlights

Automated Application Deployment

When developers specify a Pod, they can optionally declare the resources each container needs. Kubernetes runs a scheduler that automatically decides which node to place each Pod on, based on those resource requests as well as predefined scheduling policies and preferences. Manual placement planning is not required.
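As a minimal sketch of what this looks like, a Pod spec declares per-container requests (used by the scheduler for placement) and limits (enforced at runtime); the image and values here are illustrative only.

```yaml
# Pod with per-container resource requests and limits. The scheduler
# places the Pod on a node with at least the requested capacity.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25          # illustrative image
      resources:
        requests:                # used for scheduling decisions
          cpu: 250m
          memory: 128Mi
        limits:                  # hard ceilings enforced at runtime
          cpu: 500m
          memory: 256Mi
```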

Self-healing

Kubernetes automatically restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that stop responding to health checks.
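A minimal sketch of how self-healing is typically wired up: a liveness probe tells Kubernetes when to kill and restart an unresponsive container. The /healthz path and the timings are assumptions for illustration.

```yaml
# Container restarted automatically when its liveness probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  restartPolicy: Always          # kubelet restarts containers that exit
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /healthz         # assumed health-check endpoint
          port: 80
        initialDelaySeconds: 5   # grace period after startup
        periodSeconds: 10        # probe interval
```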

Automatic Rolling Updates

The Deployment controller lets developers perform application rollouts and rollbacks with ease.
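A minimal Deployment sketch with an explicit rolling-update strategy; the names and replica counts are illustrative.

```yaml
# Deployment performing zero-downtime rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one Pod down during a rollout
      maxSurge: 1          # at most one extra Pod created during a rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Updating the container image then triggers a rollout, and kubectl rollout undo deployment/web reverts to the previous revision.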

Horizontal Pod Autoscaling (HPA)

Scale applications up and down automatically based on resource usage such as CPU and memory.
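A minimal HPA sketch targeting the Deployment above; the 70% CPU target and replica bounds are illustrative, and resource-based scaling assumes a metrics source such as metrics-server is running in the cluster.

```yaml
# Autoscaler that keeps average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # illustrative target
```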

Users at the edge


ECP Global Coverage

ECP places high-performance compute, storage, and network resources as close as possible to end users, allowing customers to write container-based applications once and deploy them everywhere. CDNetworks’ global points of presence (PoPs) are organized into four “server groups” based on cost.

Standard

Premium

Premium+

Ultra

Traffic served from each of the four groups is priced differently, so customers can fully customize the balance of performance and cost for different regions of the world. ECP covers the most significant areas on the planet and continues to expand its global network reach at a rapid pace.

What is Edge Computing?

Edge computing is a network philosophy that aims to bring computing power, memory and storage as close to the end users as possible. The “edge” refers to the edge of the network, the location where the network’s servers can deliver computing functionalities to customers most expediently.

Instead of relying on a server at a centralized location like a data center, edge computing moves processing physically closer to the end user. The computation is done locally, such as on a user’s computer, an IoT device, or an edge server.

Edge computing minimizes the amount of long-distance communication that has to happen between a client and a centralized cloud or server. This results in lower latency, faster response times, and reduced bandwidth usage.


Frequently Asked Questions

How does edge computing work?

Edge computing works by allowing data from local devices to be analyzed at the edge of the network in which they operate before being sent to a centralized cloud or edge cloud ecosystem. A network of data centers, servers, routers, and network switches distributed across the globe processes and stores data locally, and each location can replicate its data to other locations. These individual locations are called Points of Presence (PoPs). Edge PoPs are physically closer to the device, unlike cloud servers, which could be far away.

What are containers and Kubernetes?

Traditionally, organizations ran multiple applications on physical servers. There was no easy way to allocate resources to all applications to ensure they all performed equally well. Then came virtual machines (VMs), which allowed applications to be isolated for better utilization of a server’s resources on the same hardware infrastructure.

Containers are similar to VMs, except that they can share the operating system (OS) among the applications. This makes containers portable across clouds and OS distributions. Developers can bundle and run applications effectively and in an agile manner, with no downtime.

In fact, the open-source platform Kubernetes helps developers automate much of the management of container applications. For example, it allows developers to distribute network traffic when one container is receiving high traffic, automate rollouts and rollbacks, restart containers that fail, run health checks, and more.
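Spreading traffic across a set of identical containers, for instance, is expressed as a Kubernetes Service; this minimal sketch assumes Pods labeled app: web that listen on port 8080.

```yaml
# Service that load-balances requests across all matching Pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # traffic is spread across Pods with this label
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 8080 # assumed container port
```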

Developers can deploy applications on the edge by building pods, small units of computing that group together one or more containers with shared storage and network resources. Kubernetes, or K8s as it is known, can be deployed on every edge PoP to allow developers to build these pods at the edge themselves.
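A hedged sketch of such a pod: two containers sharing an emptyDir volume (and, implicitly, the Pod’s network namespace). The images and commands are purely illustrative.

```yaml
# Two containers in one Pod sharing a scratch volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}               # storage shared by both containers
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /data/log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```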

How does edge computing improve cloud gaming?

Consider a cloud gaming company whose users across the world stream graphics-intensive content to their devices from a centralized cloud. The game has to respond to user keystrokes and mouse actions, and the data must travel to and from the cloud, in milliseconds or even faster. This continual interactivity requires immense amounts of data to be stored, fetched, and processed by the company’s servers. Additionally, modern cloud gaming requires 5G networks because of the stable ultra-low latency they promise.

The greater the distance to the servers, the further the data has to travel and the higher the chances for latency and jitter. This could lead to delays and a poor gaming experience for users.

By moving the computing closer to the edge and to users, data travels the minimum possible distance and players get a near latency-free experience. This makes the processing power of the user’s own device, whether a console or a personal computer, far less important. Running the data workloads at the edge thereby makes it possible to render graphically intensive video and create a superior gaming experience overall. It also helps companies reduce the costs of running a centralized infrastructure.

What are the security concerns with edge computing?

Edge computing does come with some security concerns. Since the edge nodes are closer to the end users, edge computing often deals with large volumes of highly sensitive data. If this data leaks, there can be serious concerns about privacy violations.

As more IoT and connected devices join the edge network, the potential attack surface also expands. The devices and users in the edge computing environment could also be moving. These factors make it difficult to design security rules to thwart attacks.

One approach to ensure security with edge computing is to minimize the processing done on the devices themselves. The data can be collected from the device, and then packaged and routed to an edge node for processing. This may not always be possible, though, such as when sensors on self-driving cars or building-automation systems need to process data and make decisions in real-time.

Encryption of data at rest and in transit can help address some security concerns with edge computing. This way, even if the data from the devices is leaked, hackers will not be able to decipher any personal information.
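One common way to encrypt data in transit at an edge PoP is to terminate TLS in a Kubernetes Ingress. This is a sketch only: the hostname, the backing Service, and the web-tls Secret (which must hold a valid certificate and key) are all hypothetical.

```yaml
# Ingress terminating TLS so client traffic to the edge is encrypted.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
    - hosts:
        - app.example.com      # hypothetical hostname
      secretName: web-tls      # hypothetical Secret with cert and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # hypothetical backing Service
                port:
                  number: 80
```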

The edge devices can also differ in their requirements for power and network connectivity. This raises concerns about their availability and about what happens when a node goes down. Edge computing addresses this using Global Server Load Balancing (GSLB), a technology that distributes traffic among several different edge nodes. With GSLB, when one node is overwhelmed and about to go down, other nodes can step in and continue to fulfill user requests.

How is edge computing different from cloud computing?

Cloud computing is a technology that allows for the delivery of storage, applications, and processing power on an on-demand service basis over the internet. In the early days of computing, businesses had to set up data centers, hardware, and other computing infrastructure to run their applications. This meant upfront costs, managing complexity, and spending manpower to maintain the infrastructure, all of which multiplied with scale.

Cloud computing essentially lets businesses “rent” access to data storage and applications from cloud service providers. The providers own and manage the centralized applications in their data centers, while businesses pay according to their usage of these resources. Edge computing differs in that the applications and computation move closer to users.

Stateless vs. Stateful

Another crucial difference between cloud computing and edge computing lies in how they handle stateful and stateless applications.

Stateful applications are those that store information on previous transactions. Online banking or email are examples where new transactions are performed in the context of what happened before. Since these applications need to store more data about their state, they are better suited to the conventional cloud.

Stateless applications are those that don’t store any information in reference to past transactions. For example, entering a query in a search engine is a stateless transaction. If the search is interrupted or closed, you will start a new one from scratch. Applications that run on the edge are often stateless, as they need to be moved around and require less storage and computation.

Bandwidth Requirements

Cloud computing and edge computing also differ in the bandwidth requirements of the applications they handle. Bandwidth refers to the amount of data that can travel between a user and the servers across the internet. The more bandwidth an application consumes, the greater its impact on performance and cost.

Since data must travel significantly farther to reach a centralized cloud than to reach an edge server, applications served from the cloud require more bandwidth to maintain performance and avoid packet loss. When an application’s performance depends on high bandwidth, edge computing is the clear way to go.

While edge computing and cloud computing may differ in many aspects, utilizing one does not preclude the use of the other. For example, to address the latency issues in a public cloud model, you can offload processing for mission-critical applications closer to the source of the data.

Latency

One main difference between cloud computing and edge computing pertains to latency. Cloud computing can introduce latency because of the distance between users and the cloud. Edge infrastructure moves computing power closer to end users to minimize the distance that data has to travel, while retaining the centralized management model of cloud computing. Thus, edge computing is better for latency-sensitive applications, while cloud computing suits applications for which latency is not a major concern.
