Optimize and deliver lightning-fast user experiences


Edge Computing ensures ultra-low latency and high bandwidth/performance computing.

CDNetworks Edge Computing Platform (ECP) enables customers to meet growing business demands by effortlessly deploying and scaling up container-based applications. ECP places high-performance compute, storage and network resources as close as possible to end users. Doing so lowers the cost of data transport, decreases latency, and increases locality. ECP is a container orchestration system built on Kubernetes and Docker for customers to write container-based applications once and deploy them everywhere.

Product Highlights

1500+ PoPs Global Presence

CDNetworks provides unmatched scale for rapidly expanding your container-based applications

50+ Tbps High Bandwidth

Aggregated bandwidth ensures high performance and availability, even with peak traffic

< 50 ms Ultra Low Latency

Fast application processing and communication between edge and end points

Distributed PoP coverage to ensure ultra-low latency

Compatible with the TCP protocol

Automated deployment, self-healing, auto scaling, application monitoring & reporting

Comprehensive technical support

Edge Computing Platform Solution

ECP is an Infrastructure as a Service (IaaS) offering that provides compute, network and storage resources for container instances, along with Kubernetes (K8s) container management at the edge.




Public IPv4 and IPv6 network interface
Static IPs
Load Balancing


High performance local SSD persistent storage


Automated Application Deployment

When developers specify a Pod, they can optionally specify the resources each container needs. Kubernetes runs a scheduler that automatically decides which nodes to place Pods on, based on these requests as well as predefined scheduling policies and preferences. Manual application placement is not required.
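For illustration, resource requests and limits on a Pod might look like the following sketch (the name, image and values are hypothetical; the requests are what the scheduler uses to choose a node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # any container image works here
    resources:
      requests:             # used by the scheduler for node placement
        cpu: "250m"         # a quarter of one CPU core
        memory: "128Mi"
      limits:               # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```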


Self-Healing

Kubernetes automatically restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that don't respond to health checks.
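Self-healing is typically driven by probes. A minimal sketch of a liveness probe (the name, image, path and timings are hypothetical) might look like this; if the probe fails repeatedly, the kubelet restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: self-healing-demo    # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:           # kubelet restarts the container if this fails
      httpGet:
        path: /healthz       # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```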

Automatic Rolling Updates

The Deployment controller allows developers to roll out and roll back applications with ease.
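A rolling-update strategy can be sketched in a Deployment like the one below (names, replica counts and image are hypothetical). Pods are replaced gradually, so the application stays available throughout the rollout:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy            # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one Pod down during the rollout
      maxSurge: 1             # at most one extra Pod created at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If a rollout misbehaves, `kubectl rollout undo deployment/web-deploy` reverts to the previous revision.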

Horizontal Pod Autoscaling (HPA)

Scale applications up and down automatically based on resource usage such as CPU and memory.
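A CPU-based HPA might be sketched as follows (names and thresholds are hypothetical; the `autoscaling/v2` API assumes a metrics source such as metrics-server is available in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deploy               # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add Pods when average CPU exceeds 70%
```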

ECP Global Coverage

ECP places high-performance compute, storage and network resources as close as possible to end users, allowing customers to write container-based applications once and deploy them everywhere. CDNetworks' global points of presence (PoPs) are organized into four "server groups" based on cost.









We define different prices for traffic served from each of the four groups. In this way, customers can fully customize performance and cost for different regions of the world. ECP covers the most significant areas on the planet, and continues to expand its global network reach at a rapid pace.


  • Ashburn, USA 
  • Atlanta, USA 
  • Boston, USA 
  • Chicago, USA 
  • Dallas, USA 
  • Denver, USA 
  • Los Angeles, USA 
  • Miami, USA 
  • Montreal, Canada 
  • New York City, USA 
  • San Jose, USA 
  • Seattle, USA 
  • Toronto, Canada 


  • Amsterdam, Netherlands  
  • Ankara, Turkey  
  • Bucharest, Romania  
  • Frankfurt, Germany  
  • Istanbul, Turkey  
  • London, United Kingdom  
  • Madrid, Spain  
  • Milan, Italy  
  • Moscow, Russia  
  • Paris, France  
  • Stockholm, Sweden  
  • Warsaw, Poland  


  • Chennai, India   
  • Delhi, India    
  • Hong Kong, Hong Kong  
  • Incheon, South Korea  
  • Jakarta, Indonesia  
  • Manila, Philippines  
  • Mumbai, India    
  • Surabaya, Indonesia  
  • Seoul, South Korea  
  • Singapore, Singapore    
  • Sydney, Australia  
  • Taipei, Taiwan  
  • Tokyo, Japan  

What is Edge Computing?

Edge computing is a network philosophy that aims to bring computing power, memory and storage as close to the end users as possible. The “edge” refers to the edge of the network, the location where the network’s servers can deliver computing functionalities to customers most expediently.

Instead of relying on a server at a centralized location like a data center, edge computing moves processing physically closer to the end user. The computation is done locally, like on a user’s computer, an IoT device or an edge server.

Edge computing minimizes the amount of long-distance communication that has to happen between a client and a centralized cloud or server. This results in lower latency, faster response times, and reduced bandwidth usage.



Edge computing works by allowing data from local devices to be analyzed at the edge of the network they are in, before being sent to a centralized cloud or an edge cloud ecosystem. A network of data centers, servers, routers and network switches distributed across the globe processes and stores data locally, and each can replicate its data to other locations. These individual locations are called Points of Presence (PoPs). Edge PoPs are physically closer to the device, unlike cloud servers, which could be far away.

Traditionally, organizations ran multiple applications on physical servers. There was no easy way to allocate resources to all applications to ensure they all performed equally well. Then came virtual machines (VMs), which allowed applications to be isolated for better utilization of a server's resources on the same hardware infrastructure.

Containers are similar to VMs, except that they can share the operating system (OS) among the applications. This makes containers portable across clouds and OS distributions. Developers can bundle and run applications effectively and in an agile manner, with no downtime.

In fact, the open-source platform Kubernetes helps developers automate much of the management of container applications. For example, it allows you to distribute network traffic in case one container is receiving high traffic, automate rollouts and rollbacks, restart containers that fail, check on their health and more.

Developers can deploy applications on the edge by building pods – small units of computing that group together one or more containers with shared storage and network resources. Kubernetes, or K8s as it is known, can be deployed on every edge PoP, allowing developers to build these pods on the edge themselves.

Consider a cloud gaming company whose users across the world stream graphics-intensive content to their devices from a centralized cloud. The game has to respond to users' keystrokes and mouse actions, and the data must travel to and from the cloud in milliseconds. This continual interactivity requires immense amounts of data to be stored, fetched and processed by the company's servers. Additionally, modern cloud gaming often calls for 5G networks because of the stable ultra-low latency they promise.

The greater the distance to the servers, the farther the data has to travel and the greater the latency and jitter. This can lead to delays and a poor gaming experience for users.
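A rough back-of-envelope calculation shows why distance alone sets a hard floor on latency. Light travels through optical fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so propagation delay grows linearly with distance; the distances below are hypothetical examples:

```python
# Back-of-envelope: propagation delay over optical fiber.
# Light travels at roughly 200,000 km/s in fiber, i.e. 200 km per ms.
FIBER_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in ms for a server at the given distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A centralized data center 8,000 km away vs an edge PoP 100 km away:
print(f"centralized: {round_trip_ms(8000):.1f} ms")  # 80.0 ms
print(f"edge PoP:    {round_trip_ms(100):.1f} ms")   # 1.0 ms
```

Real-world round trips are higher still, since routing, queuing and processing add delay on top of this physical floor.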

By moving the computing closer to the edge and the users, data travels the minimum possible distance and players get a near latency-free experience. This makes the actual user device, such as a console or personal computer, largely irrelevant. Running the data workloads at the edge makes it possible to render graphics-intensive video, creating a better gaming experience overall, and also helps the company do away with the costs of running a centralized infrastructure.

Edge computing does come with some security concerns. Since the edge nodes are closer to the end users, edge computing often deals with large volumes of highly sensitive data. If this data leaks, there can be serious concerns about privacy violations.

As more IoT and connected devices join the edge network, the potential attack surface also expands. The devices and users in the edge computing environment could also be moving. This makes it difficult to design security rules to thwart attacks.

One approach to ensure security with edge computing is to minimize the processing done on the devices themselves. The data can be collected from the device, packaged and routed to an edge node for processing. This may not always be possible though, such as when sensors on self-driving cars or building-automation systems need to process data and make decisions in real-time.

Encryption of data at rest and in transit can help address some of the security concerns with edge computing. This way, even if data from the devices is leaked, attackers will not be able to decipher any personal information.

The edge devices can also differ in their requirements for power, electricity and network connectivity. This raises concerns about their availability and what happens when one of the nodes goes down. Edge computing addresses this using Global Server Load Balancing (GSLB), a technology which distributes traffic among the several different edge nodes. So when one node is overwhelmed and about to go down, others can take over and continue to fulfill user requests.
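The failover idea behind GSLB can be sketched as follows: route each request to the nearest healthy node, falling back to the next-nearest when a node goes down. The node names, latencies and health-check mechanism here are all hypothetical simplifications:

```python
# Minimal sketch of GSLB-style failover: pick the lowest-latency node
# that is currently healthy. Real GSLB also weighs load, geography and
# DNS-level routing; this only illustrates the failover logic.

def pick_node(nodes, healthy):
    """Return the name of the lowest-latency healthy node."""
    candidates = [n for n in nodes if healthy.get(n["name"], False)]
    if not candidates:
        raise RuntimeError("no healthy edge nodes available")
    return min(candidates, key=lambda n: n["latency_ms"])["name"]

nodes = [
    {"name": "frankfurt", "latency_ms": 8},
    {"name": "london", "latency_ms": 15},
    {"name": "paris", "latency_ms": 22},
]

health = {"frankfurt": True, "london": True, "paris": True}
print(pick_node(nodes, health))    # frankfurt

health["frankfurt"] = False        # node goes down
print(pick_node(nodes, health))    # london takes over
```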

Edge Computing vs. Cloud Computing

Cloud computing is a technology that allows for the delivery of storage, applications and processing power on an on-demand basis over the internet. In the early days of computing, businesses had to set up data centers, hardware and other computing infrastructure to run their applications. This meant upfront costs, managing complexity and spending manpower on maintaining the infrastructure, all of which multiplied with scale.

Cloud computing essentially lets businesses "rent" access to data storage and applications from cloud service providers. The providers are responsible for owning and managing the centralized applications in their data centers, while businesses pay only for their usage of these resources. Edge computing is different in that the applications and computation are moved closer to users.

Stateless vs. Stateful

Another crucial difference between cloud computing and edge computing lies in how they handle stateful and stateless applications.

Stateful applications are those that store information on previous transactions. Online banking and email are examples, where new transactions are performed in the context of what has happened before. Since these applications need to store more data about their state, they are better suited to running on the conventional cloud.

Stateless applications are those that don’t store any information in reference to past transactions. For example, entering a query in a search engine is a stateless transaction. If the search is interrupted or closed, you will start a new one from scratch. The applications which run on the edge are often stateless as they need to be moved around and require less storage and computation.


Bandwidth

Cloud computing and edge computing also differ in the bandwidth requirements of the applications they handle. Bandwidth refers to the amount of data that can travel between the user and the servers across the internet. The more bandwidth an application requires, the greater the impact on its performance and the resulting costs.

Since data has to travel much farther to a centralized cloud, applications require higher bandwidth to maintain performance and avoid packet loss. When you have applications that require high bandwidth for their performance, edge computing is the way to go.

While edge computing and cloud computing may differ in many aspects, utilizing one does not preclude the use of the other. For example, to address the latency issues in a public cloud model, you can move processing for mission-critical applications closer to the source of the data.

Latency

One of the main differences between cloud computing and edge computing pertains to latency. Cloud computing can introduce latency because of the distance between users and the cloud, while edge infrastructure moves computing power closer to end users to minimize the distance that data has to travel. Thus edge computing is better for latency-sensitive applications, while cloud computing works for those for which latency is not a major concern.
