Standardization Progress Of Edge Computing In 3GPP - IOTROUTER

Introduction

Edge computing in 3GPP is becoming increasingly important as 5G networks evolve. With mobile data traffic growing rapidly due to IoT, AR/VR, and high-definition video applications, network operators face the dual challenge of reducing latency and increasing network flexibility. To address these challenges, 3GPP introduced CUPS (Control and User Plane Separation) for the EPC (Evolved Packet Core) in Release 14.

CUPS allows control plane (CP) and user plane (UP) functions to scale independently and be deployed closer to base stations or edge nodes. By separating these planes, operators can optimize data routing, reduce end-to-end latency, and deliver more reliable services, especially for applications that require real-time responsiveness. This architecture also lays the foundation for future edge computing in 3GPP, enabling operators to deploy distributed services without impacting the existing core network.


Overview of Edge Computing in 3GPP

The primary goal of edge computing in 3GPP is to enhance network performance by moving user-plane functions closer to the end user. This approach reduces the load on centralized network elements and enables low-latency service delivery.

By separating CP and UP, operators can deploy user plane nodes flexibly:

  • At the network edge near 4G/5G base stations to reduce latency.

  • In distributed locations for load balancing and regional optimization.

  • Centrally for scenarios where centralized control is preferable.

This flexibility is particularly important for latency-sensitive applications such as autonomous vehicles, industrial automation, augmented reality, and real-time analytics. The separation also allows independent scaling of CP and UP functions, which means operators can handle increasing data traffic without over-provisioning the entire network.
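The independent-scaling point above can be made concrete with a small sketch: CP capacity is driven by session counts, while UP capacity is driven by traffic volume, so the two are dimensioned separately. The capacity figures below are hypothetical, chosen purely for illustration.

```python
from math import ceil

# Hypothetical per-instance capacities, for illustration only.
SESSIONS_PER_CP_INSTANCE = 200_000   # sessions one CP node can manage
GBPS_PER_UP_INSTANCE = 40            # throughput one UP node can forward

def required_instances(active_sessions: int, traffic_gbps: float) -> tuple[int, int]:
    """Scale CP by session count and UP by traffic volume, independently."""
    cp = ceil(active_sessions / SESSIONS_PER_CP_INSTANCE)
    up = ceil(traffic_gbps / GBPS_PER_UP_INSTANCE)
    return cp, up

# A traffic surge (e.g. video) grows UP needs without touching CP sizing:
print(required_instances(500_000, 120))   # (3, 3)
print(required_instances(500_000, 400))   # (3, 10)
```

Note how the second call triples the user-plane traffic but leaves the control-plane sizing unchanged, which is exactly what avoids over-provisioning the entire network.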

CP/UP Separation in EPC

In the CUPS architecture, key EPC nodes—including Serving Gateway (S-GW), Packet Gateway (P-GW), and Traffic Detection Function (TDF)—are split into separate control and user plane entities. This separation introduces several changes to the network:

  1. Independent modules: Each S-GW, P-GW, and TDF has separate CP and UP modules.

  2. New interfaces: New Sx reference points (Sxa for S-GW, Sxb for P-GW, Sxc for TDF) connect each CP node to its UP counterpart, carrying session control via the Packet Forwarding Control Protocol (PFCP).

  3. Interface modifications: Existing interfaces are either split (e.g., S2a → S2a-C for control and S2a-U for user plane) or retained if functional differentiation is not required (e.g., S12, Gx, Gy).

  4. Partial or merged deployments: Operators can separate some nodes while keeping others combined, or merge CP and UP into a single entity when operational efficiency is desired.

This flexibility in architecture deployment is a key enabler for edge computing in 3GPP, as it allows operators to place user plane functions where they are most effective, based on latency, traffic demand, and service requirements.
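The node split described above can be sketched as a simple data model: each CP node holds an Sx association (Sxa/Sxb/Sxc, per the correspondence in the list) with one or more UP nodes it steers. The class and instance names are hypothetical, chosen only to illustrate the one-CP-to-many-UP relationship.

```python
from dataclasses import dataclass, field

@dataclass
class UserPlaneNode:
    name: str            # e.g. "S-GW-U-edge-1" (hypothetical)
    location: str        # "edge", "distributed", or "central"

@dataclass
class ControlPlaneNode:
    name: str            # e.g. "S-GW-C"
    sx_interface: str    # Sxa for S-GW, Sxb for P-GW, Sxc for TDF
    user_planes: list[UserPlaneNode] = field(default_factory=list)

    def associate(self, up: UserPlaneNode) -> None:
        """Model an Sx association: one CP node may steer many UP nodes."""
        self.user_planes.append(up)

# One S-GW-C controlling both an edge and a central S-GW-U:
sgw_c = ControlPlaneNode("S-GW-C", sx_interface="Sxa")
sgw_c.associate(UserPlaneNode("S-GW-U-edge-1", "edge"))
sgw_c.associate(UserPlaneNode("S-GW-U-central", "central"))
print([up.name for up in sgw_c.user_planes])
```

This mirrors the "partial or merged deployments" option: the same CP node can steer an edge UP for latency-sensitive traffic and a central UP for everything else.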


Functional Changes in S-GW, P-GW, and TDF

After CP/UP separation, the roles and responsibilities of each module are redistributed:

S-GW

  • S-GW-U functions are fully controlled by S-GW-C.

  • Functions such as load management, overload control, and operations, administration, and maintenance (OAM) are either removed or shifted to other nodes, particularly the P-GW.

  • UE mobility endpoints are no longer managed by S-GW-U; this functionality is inherited by the P-GW.

P-GW

  • P-GW-U functions are controlled by P-GW-C.

  • Mobility support and packet forwarding are largely inherited by S-GW.

  • Policy and Charging Control (PCC) remains on P-GW, while other management functions are offloaded.

TDF

  • TDF follows a similar separation model, enabling flexible deployment of traffic detection and service-level control at the edge.

  • The separation ensures that traffic management can be distributed to user plane nodes close to the service or UE, further reducing latency and improving throughput.

This functional redistribution allows operators to optimize resources and ensure that edge nodes handle traffic-intensive or latency-sensitive functions, leaving centralized nodes for policy enforcement and broader network management.

Selecting User Plane Nodes

For edge computing in 3GPP, CP nodes must carefully select UP nodes based on multiple factors:

  1. Correspondence principle:

    • S-GW-C selects S-GW-U

    • P-GW-C selects P-GW-U

    • TDF-C selects TDF-U

  2. Merged CP scenarios: CP can choose a merged UP or multiple independent UPs depending on traffic requirements.

  3. Selection criteria:

    • UE location: closer nodes reduce latency and improve service quality.

    • UP capacity: the selected UP must support all required functions and traffic volume.

    • Deployment type: centralized, distributed, or edge.

    • UE service requirements: for example, ultra-low-latency applications require nearby UP nodes.

These selection rules ensure that edge computing in 3GPP is not only flexible but also optimized for real-world performance needs.

Deployment Scenarios

Operators have several deployment options for CUPS-based edge computing:

  1. Full separation: All CP and UP nodes are separated, maximizing flexibility and scalability.

  2. Partial separation: Some nodes are separated while others remain combined, balancing complexity and operational efficiency.

  3. Merged architecture: CP and UP of S-GW and P-GW are merged for simpler management.

Distributed or edge deployments provide additional benefits, such as reducing backhaul traffic and improving latency for regional services. This is especially critical for real-time applications, industrial automation, and IoT use cases.

Benefits of Edge Computing in 3GPP

  • Reduced latency: Deploying UP nodes near users minimizes data travel time.

  • Scalability: CP and UP can scale independently to handle traffic surges.

  • Operational flexibility: Operators can deploy UP nodes where needed and upgrade nodes independently.

  • 5G readiness: Supports high-throughput, low-latency services essential for autonomous vehicles, AR/VR, and industrial IoT.

  • Efficient resource allocation: Network resources can be focused on edge nodes handling latency-sensitive functions.

Conclusion

Edge computing in 3GPP through CUPS provides a flexible, scalable, and low-latency architecture for 5G networks. By separating control and user planes, operators can efficiently deploy user plane functions at the network edge, reduce latency, and optimize resources for demanding applications. This architecture ensures that networks are prepared for the future of 5G and IoT services, laying a solid foundation for advanced edge computing deployments.
