Dedicated edge computing node deployments are already seen in evolving vertical sectors such as manufacturing, transportation logistics and smart city gateways, and their number is expected to increase significantly with the advent of new user-oriented services. Fog computing, on the other hand, relies on larger processing nodes (e.g. mini-DCs) deployed between the edge and the cloud segments of the network, allowing infrastructure and computing resources to be shared among many attached subnetworks and hosting a variety of shared software services, similar to what the cloud offers but at a downscaled local level. Fog computing (better denoted as near-edge computing, in contrast to the far-edge computing described above) provides a more pragmatic approach for services with moderate latency requirements, which benefit from better deployment economics thanks to its resource-sharing principles. It is noted that the typical 5G concept, in which a pool of collocated base-band units (BBUs) at the central office (CO) site is connected to remote radio head (RRH) units over the operator’s fronthaul infrastructure, essentially enables a type of fog computing deployment when processing units are additionally collocated with the BBU pool. Recently formed partnerships between major cloud service providers and telecom operators plan to realise this concept as a step towards edge computing service deployment [[i]].
Despite the move of cloud service providers closer to the edge, the distributed edge computing concept at the end-user site remains a necessity for fulfilling advanced 5G requirements in terms of reliability and security, in addition to ultra-low latency. This applies to a number of envisioned services, primarily in manufacturing, automated mobility and critical services such as eHealth and public protection and disaster relief (PPDR). In addition, the sharing of edge computing nodes over the telecom operator infrastructure provides a powerful new business model through which the infrastructure owner and the independent end users, application developers and service providers can mutually benefit [[ii]], while the operator can easily expand its portfolio towards new types of tailored, user-centric services [[iii]].
Figure 1 – Envisioned 5G system from the end-user to the fog/edge computing level and the cloud level at the core, with intelligent network control and service deployment and monitoring capabilities per level
According to the Int5Gent vision, which is strongly supported by its telecom operator partners and is depicted in Figure 1, the edge and fog computing nodes coexist in a 5G fronthaul-backhaul infrastructure and support the vertical services and IoT devices of the attached access networks. The edge/fog level comprises diverse node types and infrastructures with different processing capabilities, split in general into pure edge nodes (e.g. industrial or enterprise nodes, smart city or smart home gateways, private WiFi infrastructure servers) and fog-based service delivery nodes (e.g. mini-DCs, application servers, content delivery and data storage nodes). A special case is the node attached to the pool of BBUs handling the information processing of the mobile end users in the RAN. The overall 5G system is controlled by a network orchestrator with distributed edge/fog node management capabilities and centrally located application deployment and monitoring, which is also linked to the cloud for the delivery of high-level processes such as web services and access to data pools.
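The far-edge / near-edge / cloud split described above can be pictured as a small inventory of compute tiers with very different latency and capacity profiles. The sketch below is purely illustrative: the node names, core counts and round-trip times are assumptions chosen to reflect the hierarchy, not figures from the Int5Gent platform.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    EDGE = "far-edge"    # end-user site: gateways, enterprise nodes
    FOG = "near-edge"    # shared mini-DCs, e.g. collocated with the BBU pool
    CLOUD = "cloud"      # core-network data centres

@dataclass
class ComputeNode:
    name: str
    tier: Tier
    cpu_cores: int
    typical_rtt_ms: float  # round-trip time from the access network (assumed)

# Hypothetical inventory mirroring the node types named in the text.
nodes = [
    ComputeNode("factory-gateway", Tier.EDGE, 8, 1.0),
    ComputeNode("bbu-pool-mini-dc", Tier.FOG, 64, 5.0),
    ComputeNode("core-cloud", Tier.CLOUD, 1024, 30.0),
]

for n in nodes:
    print(f"{n.name}: {n.tier.value}, {n.cpu_cores} cores, ~{n.typical_rtt_ms} ms RTT")
```

The monotonic trade-off (latency grows, but shared capacity and economies of scale grow with it) is the design point the rest of the section builds on.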
What is important in such a network is the capability, on the one hand, to seamlessly interconnect access nodes supporting any type of IoT device and related services over a bandwidth-flexible and adaptive fronthaul/backhaul infrastructure and, on the other hand, to control and manage the network and computational resources, as well as to orchestrate the lifecycle of the deployed service functions. This necessitates key advances at both the hardware level and the network and service orchestration level, which are summarized below and constitute the main innovation targets of the envisioned 5G system platform.
- The infrastructure should be capable of commonly supporting distributed edge nodes and low-cost, simple RRHs linked dynamically to a centrally located pool of BBUs at the CO. In turn, this requires advances in the optimal support of spectrally tuneable elements in both the wireless and optical domains, as well as resolving the resulting data synchronization issues through time-sensitive networking solutions.
- The use of mmWave technologies is essential in order to address the high capacity requirements of enhanced fixed-wireless access nodes (beyond 10 Gb/s), wireless node extensions and bandwidth-hungry IoT devices offering video streaming services. These should be accompanied by efficient data transport solutions with low integration complexity with the mmWave radio front-ends and reduced power consumption.
- Processing at the edge should be able to accommodate the needs of intelligent AI-based applications, the support of enhanced security protocols and flexibility in terms of supported functions and linked resources. The processing hardware should be efficiently interfaced to a network orchestrator (VNF manager) with distributed allocation capabilities in order to efficiently manage the set of processes running at the edge.
- Multiple radio access technologies (RATs) should be integrated within the base-band processing engine of the BBUs in order to achieve interoperability between the various fronthaul interfaces, functional splits, optical transceivers and microwave/mmWave radio front-ends, each tailored to the specific needs of the attached access networks. Such a RAT-agnostic approach offers increased flexibility at the network design and operation level.
- The data plane infrastructure should be supported by an NFV orchestration platform capable of operating a mix of virtual and physical network functions in distributed NFV infrastructures. Heterogeneous processing resources at the cloud, fog and edge node level should be optimally controlled by specialized virtual infrastructure managers, while transport SDN RAN controllers are required for managing the network resources over integrated fronthaul/backhaul and core segments.
- A service deployment platform is required for the instantiation of service requests from end users in the form of dynamic slices, adaptive to changing application requirements. A more efficient service-level platform should also be able to implement policy criteria and provide intelligent analytics to the attached end users.
- The monitoring of resources and infrastructure at the network orchestration level should be continuously linked to the service functions, feeding cognitive network strategies and responding to changes in a timely manner.
- The different platform layers should be properly integrated and interfaced with easily expandable solutions for the seamless inclusion of new technology blocks in the data plane and the deployment of new services and applications in the user plane.
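As a rough illustration of the orchestration logic implied by the points above, the sketch below places a service function on the cheapest tier (cloud, then fog, then edge, reflecting the resource-sharing economics) that still meets its latency bound. All node names, capacities, latencies and costs are hypothetical values invented for this example; a real NFV orchestrator would obtain them from its virtual infrastructure managers and monitoring feeds rather than from a static table.

```python
# Hypothetical node table: (name, tier, free_cpu_cores, round_trip_ms, relative_cost).
# Cost decreases towards the cloud, capturing the better economics of shared
# fog/cloud resources noted in the text.
NODES = [
    ("enterprise-edge", "edge", 4, 1.0, 3.0),
    ("mini-dc-fog", "fog", 32, 5.0, 2.0),
    ("central-cloud", "cloud", 512, 30.0, 1.0),
]

def place(service_cpu, max_latency_ms):
    """Return the cheapest node meeting the CPU and latency constraints,
    or None when no tier can host the service."""
    feasible = [n for n in NODES
                if n[2] >= service_cpu and n[3] <= max_latency_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda n: n[4])  # prefer the cheapest tier

# An ultra-low-latency function is forced to the edge (only tier within 2 ms),
# while a relaxed analytics job rides the cheaper cloud tier.
print(place(2, 2.0)[0])    # 'enterprise-edge'
print(place(8, 50.0)[0])   # 'central-cloud'
```

The same greedy rule generalizes to the slice-level view of the platform: a slice request carries latency and compute demands, and the orchestrator resolves them per function against the current edge/fog/cloud inventory.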
[i] Business Insider, Hirsh Chitkara, “AWS’s new partnership with network operators will help enable latency-sensitive 5G applications”, 5/12/19.
[ii] STL Partners, “Edge Computing: 5 viable telco business models”, Nov. 2017.