This blog is for Communication Services professionals to discuss and share perspectives, points of view and best practices around key trending topics.


June 26, 2019

vRAN Progression in 5G Part-2

 

vRAN progression in 5G - Is it worth the effort? (PART-2)

 

This blog, continuing from the first part, further elucidates the different standard and non-standard approaches adopted by the telecom industry to make the vRAN concept a reality. It also touches upon the realistic challenges that lie ahead with the adoption of multiple parallel approaches.

 

Different approaches for 3GPP-defined split RAN

 

3GPP has defined eight different functional split options, which create additional choices for distributing the protocol layers between the CU and DU. These allow splitting at the RRC, PDCP, RLC, MAC and L1 levels by creating a lower and a higher version of each layer as separate modules. Each of these options has advantages and disadvantages in terms of bandwidth- and latency-related impact.
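As a quick reference, the eight options can be summarized as layer boundaries. This is an illustrative summary based on 3GPP TR 38.801; the exact sub-layer naming varies across documents.

```python
# Illustrative map of the eight 3GPP functional split options (TR 38.801):
# each value names the boundary where CU-side processing ends and
# DU-side processing begins. Sub-layer names are indicative only.
SPLIT_OPTIONS = {
    1: "RRC / PDCP",
    2: "PDCP / high RLC",                  # the split adopted in dual connectivity
    3: "high RLC / low RLC (intra-RLC)",
    4: "RLC / high MAC",
    5: "high MAC / low MAC (intra-MAC)",
    6: "MAC / high PHY",
    7: "high PHY / low PHY (intra-PHY)",
    8: "PHY / RF",
}

for option, boundary in SPLIT_OPTIONS.items():
    print(f"Option {option}: {boundary}")
```

Lower option numbers keep more of the stack centralized (and virtualizable); higher numbers push more processing towards the radio site at the cost of fronthaul bandwidth and latency.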

 

Fig 1: Functional split diagram (options shown in green are accepted as reference solutions in 5G NR)

 

The concept of numerology, introduced in 5G, can support over-the-air latency as low as 62.5 microseconds. This is the minimum slot-level duration in 5G, compared to 1 ms in LTE, which means it must be honored at the MAC-PHY interface for DL/UL resource allocation to the UE by the RAN scheduler. Due to this very low latency, any split at a lower layer makes physical layer processing (precoding, channel and layer mapping) and scheduling decisions at the MAC very complex. Because of this, the most favored approach is option 2 (see details in Fig 2), which has already been adopted in dual connectivity solutions with a PDCP-RLC split. Here, the CU consists of L3+PDCP while the DU consists of the remaining L2+L1+RF modules. Most of the initial commercial 5G deployments across the world have currently adopted this option.
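The 62.5-microsecond figure follows directly from the numerology: subcarrier spacing is 15 kHz x 2^mu, and the slot duration shrinks by the same factor from LTE's 1 ms baseline. A minimal sketch of that arithmetic:

```python
# Slot duration for a 5G NR numerology mu (per 3GPP TS 38.211):
# subcarrier spacing = 15 kHz * 2**mu, slot duration = 1 ms / 2**mu.
def slot_duration_us(mu: int) -> float:
    """Slot duration in microseconds for numerology mu."""
    return 1000.0 / (2 ** mu)

for mu in range(5):
    scs_khz = 15 * 2 ** mu
    print(f"mu={mu}: SCS={scs_khz} kHz, slot={slot_duration_us(mu)} us")
# mu=0 matches LTE's 1 ms; mu=4 gives the 62.5 us slot cited above.
```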

 

 

Fig 2: vRAN split option 2 detailed at the protocol stack level

 

 

To unlock maximum value from vRAN solutions, many protocol stack developers have adopted different approaches, which are illustrated below.

 

Benefits and disadvantages of different vRAN options

 

  • Software and hardware remodeling: This option creates a multi-threaded software model for the lower layers (MAC/PHY), where threads are carefully segregated, mounted on compatible kernel versions (RT/non-RT), and bound to dedicated or shared CPU cores based on the real-time and non-real-time operations they execute. For example, in a typical MAC layer solution, the MAC-PHY interface thread or resource scheduler thread should be real-time and run on a dedicated core, while the MAC-L3 interface thread can be non-real-time and run on a shared CPU core. This also engenders the need for RT-enabled kernel patches for such threads, for the Data Plane Development Kit (DPDK), whose fast-path mechanisms bypass the kernel-to-user-space transfer of data during user plane packet processing, or for hyperthreading on x86-based platforms. These options can accelerate packet-level processing in the lower layers. Any commercial 5G RAN software requires several hardware CPU cores to run the multi-threaded software, and binding processes/threads with the right affinity and priority is also important for efficient software performance. However, the complexity and cost of ensuring that the software and hardware comply with these requirements are high.

 

  • Entity-level segregation: Some architectures propose a two-level split at the RAN level by introducing an additional entity in between. This is done by adding signal processing units (SPUs) that may host the DU components of RLC, MAC and higher L1, depending on design and requirements (Fig 3). All L3+PDCP components run on the CU as virtualized baseband units (vBBUs) connected to the SPU over Ethernet. The SPU, in turn, is connected to the RRU that hosts the lower L1 along with the RF and antenna parts, again over Ethernet. This type of architecture provides latency benefits at the fronthaul interface, which is important for URLLC-based applications in the future (in 3GPP Rel-16). However, it also requires an extra node to be designed at both the hardware and software levels, which can increase cost. Another overhead in this arrangement is the connectivity cost between all physical and remote (virtual) nodes, which can become considerable if dark fibre is used to connect them.

     

 

Fig 3: Additional SPU approach to cater to RAN components
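The core-pinning described under software and hardware remodeling can be sketched with Linux's scheduling APIs. Python is used here for brevity; a real DU would pin native RT threads (e.g. via pthread_setaffinity_np), and SCHED_FIFO real-time priority, omitted here, requires elevated privileges and an RT-patched kernel.

```python
import os

def pin_to_core(core_id: int) -> set:
    """Bind the calling process to a single CPU core (Linux only).

    A latency-critical thread such as the MAC-PHY interface handler would
    be pinned like this to a dedicated core, while non-real-time threads
    (e.g. the MAC-L3 interface) stay on a shared pool of cores.
    """
    os.sched_setaffinity(0, {core_id})   # pid 0 = the calling process
    return os.sched_getaffinity(0)

# Example: dedicate core 0 to the real-time scheduler thread.
print(pin_to_core(0))
```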

 

Open RAN (O-RAN) impact: Another aspect of segregation comes from the O-RAN-defined architecture, which further breaks down the RAN into five logical parts and standardizes some newly proposed interfaces with the aim of creating a complete and open (proprietary-agnostic) RAN solution. O-RAN is an initiative by a consortium of companies to create open RAN solutions across interfaces, thereby reducing dependency on a single vendor for end-to-end solutions. O-RAN defines a RAN intelligent controller (RIC), with the non-real-time and near-real-time RIC functions connected through a defined A1 interface (Fig 4). Most of the AI/ML application-level and L3+ (RRM) protocol-level data processing functionalities lie in these two logical entities. Further, the near-real-time RIC is connected to the conventional CU and DU nodes through another proposed interface called E2. O-RAN also advocates an open, white-box fronthaul interface between the O-DU (Open Distributed Unit) hosting the L2+ entities and the O-RU (Open Radio Unit) hosting the L1+RF entities. Theoretically, with this kind of standardization, it should be possible for an operator to connect BBUs from one vendor with RRUs from another. In reality, however, these interfaces are connected through the common public radio interface (CPRI) over fibre cables, and this link is usually highly customized to the L1 design and mount architecture. Standardizing these interfaces can therefore be a real challenge for the O-RAN consortium.

 


Fig 4: O-RAN architecture
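The logical connectivity described above can be captured as a simple lookup table. Names follow the O-RAN Alliance architecture as summarized in this blog; this is a reference aid, not an API.

```python
# O-RAN logical interfaces and the entities they connect
# (per the O-RAN architecture described above).
ORAN_INTERFACES = {
    "A1": ("non-RT RIC", "near-RT RIC"),
    "E2": ("near-RT RIC", "CU/DU"),
    "F1": ("O-CU", "O-DU"),
    "Open Fronthaul": ("O-DU", "O-RU"),
}

def endpoints(interface: str) -> tuple:
    """Return the pair of logical entities an O-RAN interface connects."""
    return ORAN_INTERFACES[interface]

for name in ORAN_INTERFACES:
    a, b = endpoints(name)
    print(f"{name}: {a} <-> {b}")
```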

 

To summarize, implementing virtualized RAN comes with its own challenges as well as benefits. It is most useful in conventional macro base station solutions where baseband units and radio remote units are separate entities. However, the advent of the millimeter-wave frequency spectrum, whose very short wavelengths travel only short distances, is driving most 5G base stations to be designed as dense in-building solutions or small cell (femto/pico) deployments where baseband, radio and antenna units are all integrated into a single box. With such miniature hardware presenting a one-stop solution, the significance of vRAN may diminish. Nevertheless, I believe the evolution of RAN architecture will remain quite dynamic in the near future, thanks to innovation in 5G, wireless and virtualization technologies that will simplify complexity and improve performance.

 


vRAN Progression in 5G Part-1

vRAN progression in 5G - Is it worth the effort? (PART-1)

 

At MWC Barcelona 2019, more than 90% of demonstrations and use case conceptualizations were around 5G. The emergence of 5G unlocks significant disruption in the radio access network (RAN) space on top of existing 4G LTE networks, thanks to the non-standalone mode dual-connectivity architecture. A key application is the potential to move the RAN to the cloud, in what is called virtualized RAN (vRAN). This two-part blog covers various solution aspects of vRAN with respect to evolving standards and their relevance to the 5G landscape.

 

Evolution of vRAN architecture

The main driver for vRAN is the tremendous savings achieved in operational and capital expenditure for network service providers (NSPs) and network equipment providers (NEPs). But, before examining the pros and cons of vRAN, it is important to understand how vRAN-based architecture has evolved by focusing on RAN components.

 

 

Fig 1: Basic vRAN architecture network

 

A conventional radio access network or base station comprises two parts:

 

  1. Baseband unit (BBU) - The BBU is the brain of the RAN. It runs the complete protocol stack software that is responsible for allocating radio resources to connected UEs (User Equipment) in both downlink and uplink directions. It also controls mobility procedures like basic attach, handover, etc., and provides connectivity to the backhaul core network for internet access.

 

  2. Radio remote unit (RRU) - The RRU comprises the actual radio frequency (RF) hardware along with the transmitting and receiving antennas, and is responsible for broadcasting the electromagnetic radio signals. The link between the BBU and the RRU is known as the fronthaul network of the RAN.

 

Traditionally, deploying RAN base stations is a distributed process wherein BBUs and RRUs are physically installed at every cell site, which involves considerable cost to provision and run the hardware. The emergence of 4G led to some improvement: the BBU could be centralized at one location while RRUs were installed at physical cell sites. Now, the next phase of improvement brought about by 5G involves creating a 'BBU hotel' where a group of BBUs is housed together on shared hardware while exposing multiple ethernet/fibre links to connect to multiple RRUs.

 

The challenge here is the steep cost of the fibre cables that connect to the RRUs and transport data signals using the Common Public Radio Interface (CPRI) protocol. Another challenge is supporting the high bandwidth needed for 5G while ensuring low latency (of the order of microseconds in 5G, as against milliseconds in LTE) over the current interface.
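To see why CPRI bandwidth becomes the bottleneck, a back-of-the-envelope estimate helps. The sample rates, bit widths and overhead factors below are typical illustrative values, not a full CPRI line-rate calculation.

```python
def cpri_rate_gbps(sample_rate_msps: float, bits_per_sample: int,
                   antennas: int, ctrl_overhead: float = 16 / 15,
                   line_coding: float = 10 / 8) -> float:
    """Rough CPRI fronthaul bit rate in Gbps.

    I/Q sample pairs (2 * bit width) per antenna, scaled by the CPRI
    control-word overhead (16/15) and 8b/10b line coding (10/8).
    All figures are illustrative assumptions.
    """
    iq_bits = 2 * bits_per_sample
    bits_per_s = sample_rate_msps * 1e6 * iq_bits * antennas
    return bits_per_s * ctrl_overhead * line_coding / 1e9

# LTE 20 MHz carrier: 30.72 Msps, 15-bit I/Q, 1 antenna.
print(round(cpri_rate_gbps(30.72, 15, 1), 2))    # 1.23 Gbps
# 5G 100 MHz carrier with 64 antenna ports: 122.88 Msps.
print(round(cpri_rate_gbps(122.88, 15, 64), 1))  # 314.6 Gbps
```

The jump from roughly 1 Gbps per LTE antenna to hundreds of Gbps for a massive-MIMO 5G carrier is what motivates the higher-layer splits and Ethernet-based fronthaul discussed in Part-2.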

 

vRAN architecture proposes to move the 'BBU hotel' to centralized data sites using network functions virtualization (NFV) platforms and virtual network function (VNF) software that run on industry-standard silicon like Intel x86, Cavium, Freescale, etc. Such a model can also use optimized servers to scale baseband capabilities as well as AI/ML-based cloud orchestrators to intelligently build high processing capabilities for smart radio resource allocation algorithms.

 

 

Fig 2: Different types of RAN field deployment scenarios

 


 

Upgrading RAN to vRAN architecture and its impact

Typically, a 5G RAN protocol stack comprises L3+ (OAM, RRM, SON, RRC, X2, S1, Ng, Xn), L2+ (PDCP, RLC, MAC) and L1 (physical layer + RF) components and protocols. These components and protocol layers support both control plane (CP) and user plane (UP) traffic. While signaling and user plane traffic flows between the backhaul and the RAN over the S1-SCTP and GTP protocols, the fronthaul traffic towards the UE goes over the air (OTA).

In order to maximize the output of vRAN systems, considerable design changes in the RAN architecture are required. 3GPP, the global standards organization for mobile telephony, has introduced many concepts to address this by defining several split architecture options in the RAN. For instance, it has defined the centralized unit (CU) and distributed unit (DU) as two logical entities (see Fig 3). The idea is that all the protocol layers and components mounted on the CU can be virtualized using different cloud options, while all the protocol layers mounted on the DU run at physical sites along with the RRU. The CU is further segregated into the CU-CP (Centralized Unit Control Plane), hosting signaling plane protocol entities, and the CU-UP (Centralized Unit User Plane), hosting data plane protocol entities. Another new standard interface, E1, has been defined between the CU-CP and CU-UP; it is SCTP-based, in line with existing interfaces (S1/X2, etc.). Moreover, the CU and DU are connected through a standard F1 interface, with F1-C and F1-U acting as the control and user plane interfaces respectively. The F1 interface also uses SCTP for transport.

 

Fig 3: DU/CU-level segregation in RAN

 

The whole idea of having a centralized unit is to run most of the non-real-time processes that do not require strict latency timelines. Therefore, the radio resource manager (RRM), which is responsible for dynamically allocating common and dedicated radio resources to the UE along with admission and bearer control functionalities, can run its resource allocation algorithms here. Designers also have the option to use machine learning, artificial intelligence and predictive analytics tools to design the most optimized algorithms for the RRM. For example, the RRM can support the MAC (Medium Access Control) scheduler in allocating the correct resource blocks to a particular UE. It does this by finding the most accurate UE radio conditions within the cell, based on multiple CQI/PMI/RI feedback reports, possibly using AI-based logic to process large chunks of reports efficiently.

There is a one-to-many mapping between the CU and DU: one CU entity (possibly hosted on the cloud or virtualized) can control multiple DU entities (hosted on actual physical cell sites). The size of this mapping is driven by multiple factors such as the software design, the physical link used to connect the CU and DU, and the hardware and processing capabilities of the Linux/DSP/FPGA-based platforms hosting these entities.
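How CQI feedback might feed a scheduling decision can be illustrated with a toy proportional-fair metric: serve the UE with the best ratio of instantaneous achievable rate to long-term average throughput. The linear CQI-to-rate mapping below is a placeholder, not the 3GPP MCS table.

```python
# Toy proportional-fair (PF) scheduler sketch. For each UE we track its
# latest CQI report and its long-term average throughput; the PF metric
# balances channel quality against fairness to starved UEs.
def schedule_rb(ues: dict) -> str:
    """ues maps ue_id -> (cqi, avg_throughput_mbps). Returns the UE to serve."""
    def pf_metric(ue_id: str) -> float:
        cqi, avg = ues[ue_id]
        inst_rate = cqi * 1.0              # placeholder CQI -> rate mapping
        return inst_rate / max(avg, 1e-9)  # guard against division by zero
    return max(ues, key=pf_metric)

ues = {"ue1": (12, 8.0), "ue2": (7, 2.0), "ue3": (15, 20.0)}
print(schedule_rb(ues))  # ue2: modest CQI but starved, so highest PF metric
```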

This concludes the first part of this blog, which explained the terminologies and recent augmentations in standards that are helping to evolve a virtualized RAN solution. The next part explains the approaches proposed for vRAN deployment and their challenges.

 

