Artificial Intelligence (AI) and Machine Learning (ML) are gaining momentum in the upcoming 5G-NR Release 17 (R17), expected in mid-2022. Led by the 3GPP RAN3 working group, a new study item has recently been raised to bring attention to the application of AI/ML to the NG-RAN (i.e. 5G-Advanced) network. The working group is now focusing on the functionality and the corresponding types of inputs and outputs, i.e. the massive volumes of data collected from the RAN, the core network, and terminals. It is also studying the potential impacts on existing nodes and interfaces, whereas the detailed AI/ML algorithms themselves are out of scope and expected to be left to the implementations of network vendors and operators. This article first gives an overview of the status of the study item, followed by predictions of the AI/ML algorithms most likely to be adopted in the future. The challenges of integrating AI/ML algorithms into the NG-RAN are reviewed in the following sections.
In the first draft of the technical report, TR 37.817 – Study on enhancement for Data Collection for NR and EN-DC (early version as of June 15, 2021), a functional framework for RAN intelligence (Figure 1) has been proposed. In this framework, data are collected from network nodes (e.g. base stations or terminals) as the basis for model training and model inference. The framework provides two separate data paths: one for training-based ML algorithms (e.g. supervised learning) and one for inference-based ML algorithms such as unsupervised learning and reinforcement learning. As expected, the details of the ML algorithms are expected to be left to the implementations of network vendors and operators. The output of the ML algorithms is delivered to entities such as the RAN RU, DU, and OAM, which are labeled as Actors that take actions in the framework. The delay in one loop is defined as the difference between the moment when feedback from the current actions becomes available and the moment when those actions are delivered by the Actors. In applications requiring low latency, this loop delay must be minimized; for applications focusing on network throughput or energy efficiency, however, the priority of the ML algorithms is to extract the maximum gain from the data.
Among supervised ML algorithms, deep neural networks are the most promising. In their simplest form, deep neural networks consist of multiple layers of interconnected nodes, each layer building upon the previous one to refine the representation from the input to the output. More complex neural network architectures, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), address specific problems in computer vision and natural language processing, respectively. How to apply CNNs and RNNs to network data is still an open question for researchers.
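To make the "multiple layers of interconnected nodes" concrete, a minimal sketch of a forward pass through a fully connected network is shown below. The layer sizes and random weights are purely illustrative, not taken from any 3GPP proposal:

```python
import numpy as np

def relu(x):
    # Rectified linear unit, a common activation between layers
    return np.maximum(0.0, x)

def mlp_forward(x, layers):
    """Forward pass through a stack of (weights, bias) layers.

    Each hidden layer builds on the previous one; the last layer
    is a plain linear output.
    """
    h = x
    for w, b in layers[:-1]:
        h = relu(h @ w + b)   # hidden layers with nonlinearity
    w, b = layers[-1]
    return h @ w + b          # linear output layer

# Hypothetical toy network: 4 input features -> 8 hidden units -> 1 output
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 1)), np.zeros(1)),
]
y = mlp_forward(rng.normal(size=(4,)), layers)
```

Training such a network (backpropagation, loss functions) is omitted here; the sketch only illustrates how stacked layers transform an input vector into an output.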
In unsupervised learning, ML algorithms based on Bayesian inference are gaining attention. Candidates include the Expectation Maximization (EM) algorithm and Principal Component Analysis (PCA). Unsupervised learning avoids the training and validation steps that are standard for neural networks, which makes it more attractive than supervised learning in some deployments. By incorporating prior knowledge, these algorithms are well positioned to harvest the benefits of intelligence while leveraging years of field experience in network implementation.
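As a sketch of the kind of unsupervised technique mentioned above, the snippet below implements PCA directly from the covariance eigendecomposition. The two-dimensional toy data set is hypothetical, chosen so that one principal component captures almost all of the variance:

```python
import numpy as np

def pca(data, n_components):
    """Project data onto its top principal components.

    data: (n_samples, n_features) array. Returns the projected data
    and the fraction of total variance the kept components explain.
    """
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)       # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    projected = centered @ components
    ratio = eigvals[order].sum() / eigvals.sum()
    return projected, ratio

# Toy data: two strongly correlated features, so 1 component suffices
rng = np.random.default_rng(1)
t = rng.normal(size=(200, 1))
data = np.hstack([t, 2 * t + 0.05 * rng.normal(size=(200, 1))])
proj, ratio = pca(data, 1)
```

Note that no labels and no training/validation split are involved, which is exactly the property the paragraph above highlights.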
Reinforcement learning, however, is believed to play the most important role in the NG-RAN. In general, network performance optimization is an inherently complex task, with multiple objectives including latency, reliability, connection density, and user experience. Meanwhile, it must respect the constraints arising from interdependent NR-enabling features such as beamforming, carrier aggregation, and network slicing. Such an engineering problem can be treated as a 'black box'. Any attempt to model the behaviour of the 'black box' will inevitably introduce bias and behave poorly across different scenarios. Reinforcement learning (RL), however, can provide solutions to such a 'black box' problem. Classic RL algorithms, e.g. the Q-learning algorithm, are goal-oriented, model-free, bias-resistant, and adaptable. Unlike supervised learning algorithms, reinforcement learning algorithms do not require retraining because they adapt to new environments automatically on the fly. Reinforcement learning algorithms can also balance exploration and exploitation in real time: during exploration, they search the solution space to escape local optima; during exploitation, they refine the neighbourhood of the best solutions found so far.
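The Q-learning update and the exploration/exploitation trade-off described above can be sketched in a few lines. The single-state toy environment below (one action always pays a reward of 1) is purely illustrative; a real NG-RAN actor would have far richer states and actions:

```python
import random

def q_learning_step(q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

def epsilon_greedy(q, state, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best action."""
    if random.random() < epsilon:
        return random.randrange(len(q[state]))                    # exploration
    return max(range(len(q[state])), key=lambda a: q[state][a])   # exploitation

# Hypothetical toy problem: one state, two actions; action 1 always pays 1.0
random.seed(0)
q = [[0.0, 0.0]]
for _ in range(200):
    a = epsilon_greedy(q, 0, epsilon=0.2)
    r = 1.0 if a == 1 else 0.0
    q_learning_step(q, 0, a, r, 0)
```

Because the update is model-free, nothing about the environment's dynamics is assumed; the agent learns to prefer action 1 purely from the rewards it observes, while epsilon-greedy exploration keeps it from locking onto an early suboptimal choice.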
In deploying AI/ML algorithms in the NG-RAN, one challenge is the growing complexity of the algorithms. How to split this complexity between base stations and terminals, and among different network entities, is still an open question. In general, computational resources are far cheaper at the base station, so the major components of the algorithms will reside there. However, the communication protocol that carries the input data from the terminals to the base stations should be standardized and optimized to limit the impact on network performance.
Another challenge is the feature engineering in the AI/ML algorithms. A study item on enhancement for data collection for NR and EN-DC has been initiated with the objective of identifying the data (i.e. the features of an ML algorithm) that may be needed by an AI function as input. It is expected that some data features will be mandatory while others are optional. Selecting the most relevant features for a specific optimization objective is critical to both the runtime complexity and the performance of any AI/ML algorithm.
The third challenge is privacy and security. In supervised learning, to obtain a more accurate classifier, an ML/AI algorithm is expected to collect sufficient training data from a set of data owners, and that data usually contains sensitive information. Storing the data in a data center also raises data security concerns. Researchers are likewise focusing on mitigating the vulnerabilities of AI/ML algorithms under attack, especially in open, wireless communication environments.
In summary, research on the application of AI/ML in the NG-RAN has just begun, and this article provides only some initial considerations. Though the detailed AI/ML algorithms are out of the RAN3 scope, this article has offered a prediction of the AI/ML algorithms that are likely candidates for implementation by network vendors and operators. Stay tuned for further updates as the study progresses in 3GPP, and join us on this journey into the 'uncharted' territory of AI/ML in the NG-RAN.