IEEE PIMRC 2017 is pleased to announce the sixteen tutorials scheduled within the technical program of the conference. They cover a wide range of timely and disruptive topics and will be presented by top experts in their fields, from both academia and industry:
|Sunday 8 October 2017|||
|---|---|---|
|Room|Morning session|Afternoon session|
|Westmount|Tutorial 1|Tutorial 9|
|Outremont|Tutorial 2|Tutorial 10|
|Fontaine C|Tutorial 3|Tutorial 11|
|Fontaine D|Tutorial 4|Tutorial 12|
|Fontaine E|Tutorial 5|Tutorial 13|
|Fontaine F|Tutorial 6|Tutorial 14|
|Fontaine G|Tutorial 7|Tutorial 15|
|Fontaine H|Tutorial 8|Tutorial 16|

Each session includes a coffee break; the lunch break (on your own) separates the morning and afternoon sessions.
The future of wireless cellular networks is challenged by a plethora of new applications and services for which legacy networks, e.g., LTE, were not originally designed. The Internet of Things (IoT), or machine-to-machine (M2M) communications, represents one such amalgam of emerging services. Key IoT use cases include smart metering, industrial automation, video surveillance, environment sensing, wearable sensing and computing, vehicular sensing, intelligent transport systems, and participatory sensing and crowdsourcing. Historically, cellular systems have been designed and optimized mainly to serve traffic from human-to-human (H2H) communications. IoT poses new challenges that cannot be overlooked, mainly due to the contrasting nature and density of H2H and IoT devices and of the traffic they offer. Designing future cellular networks, vis-à-vis 5G and beyond, to meet the requirements of both H2H and IoT simultaneously is therefore a great challenge.
The required paradigm shift in cellular system design was well summarized in the European Commission's official 5G vision, revealed at Mobile World Congress 2015, according to which "5G infrastructure should be flexible and rapidly adapt to a broad range of requirements. It should be designed to be a sustainable and scalable technology." In other words, unlike its predecessors, the 5G network needs to be designed to be Lean, Elastic, Agile and Proactive (i.e., LEAP). The term lean characterizes low set-up time and low signalling and control overheads. The term elastic characterizes flexibility and adaptability to provide resources where and when needed. The term agile characterizes the ability to rapidly reconfigure the network as service requirements change. The term proactive denotes the ability to predict and pre-empt instead of reacting to a situation.
- Why do future cellular networks need to be lean, elastic and proactive to support IoT? (10 min)
- IoT/H2H Use cases demanding leanness
- IoT/H2H Use cases demanding elasticity
- IoT/H2H Use cases demanding proactivity
- IoT/H2H Use cases demanding agility
- New architectures for IoT: Control and Data Plane Split in RAN (CDSA) (15 min)
- What is the optimal split of functionalities between the data base station (DBS) and the control base station (CBS), and does this split have to change with service type, e.g., IoT or H2H?
- What are the best measurements for building the database behind a database-aided CDSA (D-CDSA) implementation, and does the optimal set of measurements change with service type, e.g., IoT or H2H?
- How much capacity can be gained with D-CDSA under ideal conditions?
- How much energy can be saved with D-CDSA under ideal conditions?
- How does D-CDSA compare with conventional HetNet and CDSA schemes?
- Why is state-of-the-art (SoTA) SON not good enough for IoT? (15 min)
- Why does 5G SON need exclusive treatment of energy efficiency to support IoT?
- Why does IoT need conflict-free SON?
- Why does SON for IoT need more focus on large-time-scale dynamics?
- Why does SON for IoT need to be more transparent?
- Why does IoT need faster SON, i.e., proactive instead of reactive SON?
- Why does the IoT SON engine need end-to-end network visibility?
- What is dark data in cellular networks, and what are its uses in enabling IoT? (25 min)
- User/device/thing level dark data
- Cell level dark data
- Core network level dark data
- Dark data from peripheral sources
- Implementing BSON to achieve leanness, elasticity and proactivity in cellular networks (50 min)
- Collecting and preprocessing dark data
- Transforming the raw data into the right data
- Knowledge building perspective
- Analytics and machine learning perspective
- Building user and network behavior models using the right data
- Integrating models into SON engine
- Reviewing the holistic BSON framework
- Case studies on BSON/CDSA as enablers of IoT (45 min)
- Case study 1: CDSA for lean and elastic cell-less deployment
- Case study 2: CDSA for proactive mobility management
- Case study 3: BSON for proactive energy efficiency
- Case study 4: BSON for proactive self-healing
- Open research challenges in CDSA and BSON as enablers for IoT (20 min)
- Challenges in determining the optimal split between control and data planes for different IoT use cases
- Challenges in collecting, mining and inferring from IoT data for BSON implementation
- Challenges in coupling the big-data-based knowledge base with the SON engine
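The proactive mobility management case study above hinges on behaviour models learned from network data. As a purely illustrative sketch (our own, not the presenters' implementation), even a first-order Markov model trained on handover logs can predict a user's most likely next cell, the kind of "right data" model a BSON engine could consume:

```python
# Hypothetical sketch: first-order Markov model of user cell transitions,
# trained from handover logs, for proactive mobility prediction.
from collections import Counter, defaultdict

def train_transition_model(handover_logs):
    """handover_logs: iterable of per-user cell-ID sequences."""
    counts = defaultdict(Counter)
    for seq in handover_logs:
        for src, dst in zip(seq, seq[1:]):
            counts[src][dst] += 1
    # Normalise transition counts into probabilities.
    return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
            for src, c in counts.items()}

def predict_next_cell(model, current_cell):
    """Most likely next cell, or None if the cell was never observed."""
    dist = model.get(current_cell)
    return max(dist, key=dist.get) if dist else None

logs = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = train_transition_model(logs)
print(predict_next_cell(model, "B"))  # prints C (B->C seen twice, B->D once)
```

In a real deployment the cell-ID sequences would come from the measurement database discussed above, and the model would feed the SON engine's handover preparation.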
Next-generation wireless networks need to accommodate around 1000x higher data volumes and 50x more devices than current networks. Since spectral resources are scarce, particularly in bands suitable for wide-area coverage, the main improvements need to come from a more aggressive spatial reuse of the spectrum; that is, many more concurrent transmissions are required per unit area. This can be achieved by massive MIMO (massive multi-user multiple-input multiple-output) technology, where the access points are equipped with hundreds of antennas and can serve tens of users on each time-frequency resource by spatial multiplexing. The large number of antennas provides great separation of users in the spatial domain, which is a paradigm shift from conventional multi-user technologies that mainly rely on user separation in the time or frequency domains.
In recent years, massive MIMO has gone from being a mind-blowing theoretical concept to one of the most promising 5G-enabling technologies. Everybody seems to talk about massive MIMO, but do they all mean the same thing? What is the canonical definition of massive MIMO? What are the main differences from the classical multi-user MIMO technology of the nineties? What are the key characteristics of the transmission protocol? How does the channel model impact the spectral and energy efficiency? How can massive MIMO be deployed, and what is the impact of hardware impairments? Is pilot contamination a problem in practice?
This tutorial provides answers to these questions and to other doubts that attendees might have. We begin by covering the main motivation and properties of massive MIMO in depth. Next, we describe basic communication-theoretic results that are useful for quantifying the fundamental gains, behaviors, and limits of the technology. The second half of the tutorial provides a survey of the state of the art regarding spectral efficiency, energy-efficient network design, and practical deployment considerations.
- Massive MIMO: What and why (30 min)
- Introduction: Trends and 5G goals
- Evolving cellular networks for higher area throughput
- Key aspects of having massive antenna numbers
- Achieving a scalable Massive MIMO protocol
- Spectral efficiency (60 min)
- Basic communication theoretical results
- Methodology for performance evaluation
- Channel estimation
- Spectral efficiency in uplink and downlink
- The limiting factors of Massive MIMO
- Asymptotic analysis
- Practical deployment considerations (30 min)
- Channel modeling
- Array deployments: different antenna geometries, effect of antenna element spacing
- Massive MIMO at mmWave frequencies
- Co-existence with heterogeneous networks
- Energy efficiency (45 min)
- Why care about energy efficiency?
- Mathematical definition of energy efficiency
- Importance of accurate power consumption modeling
- Optimizing networks for energy efficiency: How many users and antennas?
- Fundamental insights
- Interplay between design parameters: power scaling laws
- Radiated vs. total power
- Massive MIMO vs. small cells
- Future predictions
- Key open problems (15 min)
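A property underlying several of the topics above is channel hardening: for an i.i.d. Rayleigh channel h ~ CN(0, I_M), the normalised gain ||h||^2 / M concentrates around 1 as the number of antennas M grows, with standard deviation 1/sqrt(M). A minimal Monte Carlo sketch (our own illustration, not from the tutorial material):

```python
# Illustrative sketch of channel hardening with i.i.d. Rayleigh fading:
# the std of the normalised channel gain shrinks as 1/sqrt(M).
import numpy as np

def gain_std(M, trials=20000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Complex Gaussian entries with unit variance (Rayleigh envelope).
    h = (rng.standard_normal((trials, M))
         + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1) / M   # normalised channel gain
    return g.std()

for M in (1, 10, 100):
    print(M, gain_std(M))   # std shrinks roughly as 1/sqrt(M)
```

This concentration is what makes the effective channel nearly deterministic, simplifying resource allocation and enabling the scalable protocols discussed in the outline.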
TU-03 SPECTRUM MANAGEMENT AND SHARING FOR FUTURE WIRELESS NETWORKS: DYNAMIC RADIO SPECTRUM ACCESS AS A SERVICE
In this tutorial we introduce concepts and methods for providing spectrum-access-as-a-service (SaaS) to coexisting wireless systems that utilize the same radio spectrum, each with different connectivity requirements. We then present a multi-objective optimization framework to investigate the fundamental performance bounds in a fully dynamic SaaS system. Various system architectures based on full/partial virtualisation are then investigated and their performance is compared. We then look at SaaS as a digital ecosystem and an innovation framework, where we discuss its horizontal scalability and self-organization behaviour as well as its resiliency, robustness, utility and pricing. SaaS-based techniques are then discussed for providing connectivity to mission-critical autonomous objects such as autonomous vehicles and robots. Applications of data science and machine learning to future planning of such systems are also explained. We then present several use cases, followed by conclusions and discussions of the challenges and open problems in service-oriented provisioning of radio spectrum.
In this tutorial we define spectrum-access-as-a-service (SaaS) for coexisting wireless systems. Coexisting wireless systems are modelled as independent systems, each with its own, often conflicting, performance objective, accessing a shared and limited radio spectrum. Multi-objective optimization is used as the analytical tool: we define the notion of optimality in such problems and briefly review the main concepts, including the 'efficient set'. The efficient set provides quantitative insight into the achievable system trade-offs. The characteristics of this set are reviewed, followed by analytical and heuristic methods for obtaining it. In particular, we investigate the robustness of efficient sets against the inaccurate measurements that often arise in system functions such as spectrum sensing. In some cases the set of achievable trade-offs provided by the efficient set is not 'good enough' from either the users' or the operators' perspective; for such cases we present methods to engineer the efficient set. A number of use cases are provided in which advanced communication techniques such as coding, beamforming, and relaying are adopted in dynamic spectrum access systems to achieve better trade-offs. The proposed multi-objective optimization formulation is capable of quantifying the techno-economic aspects of dynamic spectrum access. We further look at system design issues, discuss alternative virtualization scenarios, and compare their performance. Based on these investigations, we then propose SaaS as a digital ecosystem. The following topics are presented in this tutorial:
- Theoretical background: Optimization with multiple objectives
- Notions of optimality: Various notions of optimality are defined, including fuzzy dominance and strong and weak Edgeworth-Pareto optimality; the concepts of local and global efficiency are also discussed. The efficiency concept is then examined from the trade-off point of view, i.e., the achievable compromises in system design. Examples from wireless communications, including the multiplexing-diversity trade-off and the spectral-versus-energy-efficiency trade-off, are discussed within this mathematical framework.
- Classification of multi-objective optimization problems: Different convex and non-convex multi-objective problems are investigated, and the duality theorem for such problems is presented. Extensions of the KKT conditions to multi-objective optimization problems are presented.
- Obtaining the set of optimal solutions: Classic scalarisation techniques as well as heuristic and evolutionary algorithms are reviewed, and their pros and cons are investigated.
- Approximations and computational complexity: Methods for approximating the efficient set are presented, and their computational complexities are discussed.
- Robustness in multi-objective optimization: The concept of robustness in such problems is discussed. Robustness is particularly important in wireless communications, as factors such as delay and inaccurate measurements contribute to uncertainty in the decision space. Analytical methods are presented for obtaining a robust efficient set. The cost of being robust, in terms of the gap to the non-robust efficient set, is also discussed.
- Engineering the efficient set: In cases where the set of trade-offs is not 'good enough' for the system design, we discuss how the efficient set can be engineered to provide a better set of achievable system trade-offs. Various examples are provided in which techniques such as coding, beamforming, and relaying are adopted to engineer the efficient set.
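As a toy illustration of the efficient set discussed above (our own sketch, with made-up objective values), the non-dominated points of a finite set of candidate configurations follow directly from the definition of weak dominance:

```python
# Illustrative sketch: extract the efficient (Pareto) set of a finite
# two-objective maximisation problem, e.g. trading spectral efficiency
# against energy efficiency across candidate configurations.
def efficient_set(points):
    """Return the points not weakly dominated by any other point.
    points: list of (f1, f2) tuples, both objectives to be maximised."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

# Hypothetical (f1, f2) values for five candidate configurations.
candidates = [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (1.5, 2.5), (2.0, 4.0)]
print(efficient_set(candidates))  # -> [(3.0, 1.0), (2.0, 4.0)]
```

Every other candidate is dominated by (2.0, 4.0); the surviving two points are the achievable trade-offs a designer would choose between.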
Based on the analytical insight provided by the multi-objective optimization framework, we then present a cloud-based spectrum-access-as-a-service platform designed for providing dynamic spectrum access. In such a system, other information can also be incorporated into decision making via the cloud, which makes the whole process more efficient. In particular, we discuss the following:
- Techniques for efficient monitoring of spectrum status at various time scales, combining a priori knowledge of the radio coverage map, information provided by other users/networks at the same or a similar location/circumstance (i.e., crowd-sourcing), and prediction of objective-function behaviour based on data or models,
- Techniques to maintain access after perceiving/predicting an interference risk,
- Collaborative connectivity techniques to facilitate spectrum access,
- Design of scalable receivers capable of substituting full connectivity with a combination of on-board processing, inter-user collaboration, and reduced spectrum access,
- Various system architectures based on full/partial virtualisation and evaluation of their performance using simulations and analysis,
- Techniques for coordinating various levels of collaboration,
- Cross-layer techniques to capture wireless networks’ and users’ characteristics across protocol layers,
- SaaS horizontal scalability,
- Design guidelines to meet the demands efficiently,
- Protocols and methods for designing a SaaS Application Program Interface (API),
- Design guidelines/specifications for radio access networks (RANs) and/or 5G and beyond,
- Techniques to incorporate proactive caching into SaaS,
- Exploring design alternatives for the transitional phase
We then discuss the challenges and open problems in the convergence of computing and communications in general, and in providing spectrum access in particular.
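As one concrete, generic example of the spectrum-monitoring techniques listed above (our assumption, not necessarily a method from the tutorial), a simple energy detector declares a band occupied when the average received power exceeds a threshold calibrated to a target false-alarm probability:

```python
# Generic energy-detector sketch for spectrum monitoring: compare the
# average received power against a threshold set from the noise variance.
import numpy as np

Z_95 = 1.645  # standard normal 95% quantile, for a ~5% false-alarm target

def occupied(samples, noise_var):
    """True if the band looks occupied. Gaussian approximation of the
    noise-only statistic: mean = noise_var, std = noise_var / sqrt(n)."""
    thr = noise_var * (1 + Z_95 / np.sqrt(len(samples)))
    return np.mean(np.abs(samples) ** 2) > thr

rng = np.random.default_rng(4)
n = 4096
# Unit-variance complex Gaussian noise, plus a tone at roughly -6 dB SNR.
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
signal = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))
print(occupied(noise, 1.0), occupied(noise + signal, 1.0))
```

In the SaaS setting, such per-band decisions would be only one input, fused in the cloud with coverage maps and crowd-sourced reports as described above.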
Unmanned aerial vehicles (UAVs) are expected to become an integral component of future smart cities. In fact, UAVs are expected to be widely and massively deployed for a variety of critical applications that include surveillance, package delivery, disaster recovery, remote sensing, and transportation, among others. More recently, new possibilities for commercial applications and public services for UAVs have begun to emerge, with the potential to dramatically change the way in which we lead our daily lives. For instance, in 2013, Amazon announced a research and development initiative focused on its next-generation Prime Air delivery service. The goal of this service is to deliver packages into customers' hands in 30 minutes or less using small UAVs, each with a payload of several pounds. 2014 was a pivotal year that witnessed an unprecedented proliferation of personal drones, such as the Phantom and Inspire from DJI, the Loon Project from Google, the AR Drone and Bebop Drone from Parrot, and the IRIS drone from 3D Robotics. Such a widespread deployment of UAVs will require fundamentally new tools and techniques to analyze the possibilities of wireless communications using UAVs and among UAVs.
In the telecom arena, flying drones are already envisioned by operators to help provide broadband access to under-developed areas or hot-spot coverage during sporting events. More generally, flying drones are expected to become widespread in the foreseeable future. These flying robots will provide a unique capability: a rapidly deployable, highly flexible wireless relaying architecture that can strongly complement small cell base stations. UAVs can provide "on-demand" densification, help push content closer to the end user at reduced cost, and be made autonomous to a large extent: airborne relays can self-optimize their positioning based on safety constraints, on learned propagation characteristics (including maximizing the line-of-sight probability), and on ground-user traffic demands. Finally, UAVs can act as local storage units making smart decisions about content caching. Thus, airborne relays offer a promising solution for ultra-flexible wireless deployment without the prohibitive costs related to fiber backhaul upgrades. Yet another example is the use of UAVs as flying base stations that can serve hotspots and highly congested events, or provide critical communications in areas where no terrestrial infrastructure exists (e.g., in public safety scenarios or in rural areas). Clearly, UAVs will revolutionize the wireless industry, and there is an ever-increasing need to understand the potential and challenges of wireless communications using UAVs.
To this end, this tutorial will provide a comprehensive introduction to wireless communications using UAVs while delineating the potential opportunities, roadblocks, and challenges facing the widespread deployment of UAVs for communication purposes. First, the tutorial will shed light on the intrinsic properties of the air-to-ground and air-to-air channel models while pinpointing how such channels differ from classical terrestrial wireless channels. Second, we will introduce the fundamental performance metrics and limitations of UAV-based communications. In particular, using tools from communication theory and stochastic geometry, we will provide insights on the quality of service that UAV-based wireless communications can provide in the presence of various types of ground and terrestrial networks. Then, we will analyze and study the performance of UAV-to-UAV communications. Subsequently, having laid out the fundamental performance metrics, we will introduce the analytical and theoretical tools needed to understand how to optimally deploy and operate UAVs for communication purposes. In particular, we will study several specific UAV deployment and mobility scenarios and present mathematical techniques from optimization, game theory, and probability theory that enable one to dynamically deploy and move UAVs to optimize wireless communications. Moreover, we will study, in detail, the challenges of resource allocation in networks that rely on UAV-based communications. Throughout the tutorial, we will highlight the various performance tradeoffs pertaining to UAV communications, ranging from energy efficiency to mobility and coverage. The tutorial concludes by overviewing future opportunities and challenges in this area.
- UAVs and wireless communications: a closer union
- Brief history of UAV communications and the current state of the art
- The distinction between conventional cellular/wireless systems and UAV-based wireless networking
- Basic issues and challenges of UAV communications
- Channel modeling for UAV communications
- UAV channels vs. terrestrial channels
- Air-to-ground channel: characteristics and existing models
- Air-to-air channel: characteristics and existing models
- Shortcomings of existing models
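One widely used family of air-to-ground models makes the line-of-sight (LoS) probability a logistic function of the elevation angle and adds separate excess losses to free-space path loss for LoS and NLoS links. The sketch below uses illustrative urban-environment parameter values (a, b and the excess losses are assumptions for illustration, not values from the tutorial):

```python
# Sketch of a common air-to-ground channel model: logistic LoS probability
# in the elevation angle, plus free-space path loss with separate excess
# losses for LoS and NLoS. Parameter values are illustrative only.
import math

def mean_path_loss_db(d_m, theta_deg, f_hz=2e9, a=9.61, b=0.16,
                      eta_los_db=1.0, eta_nlos_db=20.0):
    # LoS probability rises with the elevation angle theta (degrees).
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))
    # Free-space path loss at distance d_m (metres) and frequency f_hz.
    fspl = (20 * math.log10(d_m) + 20 * math.log10(f_hz)
            + 20 * math.log10(4 * math.pi / 3e8))
    # Average over LoS/NLoS excess losses.
    return fspl + p_los * eta_los_db + (1 - p_los) * eta_nlos_db

# Higher elevation angle -> higher LoS probability -> lower mean excess loss.
print(mean_path_loss_db(1000, 10), mean_path_loss_db(1000, 80))
```

This is the basic trade-off behind optimal UAV altitude placement: flying higher raises the elevation angle (more LoS) but also lengthens the link.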
- Fundamental performance tradeoffs for UAV-based wireless communication
- Performance metrics and parameters for UAV-based communication
- Tools needed for modeling and analyzing the performance of UAV-based communication
- Several case studies for quantifying the performance of UAV-based communications in practical scenarios such as UAV with terrestrial networks and UAV with device-to-device communications
- Performance analysis of UAV communications under flight time constraints
- Optimal deployment and network operation
- Overview of UAV deployments for wireless communication purposes: opportunities and challenges
- UAV deployment optimization in terms of location, motion and coordination
- Resource allocation in UAVs: challenges and case studies
- Game-theoretic methods for optimizing UAV deployment and operation
- Cooperation between UAVs as well as between UAV and ground network elements
- 5G applications of UAV Communications
- Machine learning for cache-enabled UAV communications
- UAV-enabled networks for energy-efficient Internet of Things (IoT) communication
- Other applications in 5G and beyond
- Conclusions and future directions
- Conclusions and summary
- Discussion of future directions and open opportunities: toward a massive deployment of UAVs for wireless communications
For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks and to develop tractable models that characterize and improve our understanding of their performance. Recently, stochastic geometry models have been shown to provide tractable and accurate performance bounds for cellular wireless networks, including multi-tier and cognitive cellular networks, underlay device-to-device (D2D) communications, energy harvesting-based communication, coordinated multipoint (CoMP) transmission, full-duplex (FD) communications, etc. These technologies will enable the evolving fifth-generation (5G) cellular networks. Stochastic geometry, and the theory of point processes in particular, can capture the location-dependent interactions among coexisting network entities. It provides a rich set of mathematical tools to model and analyze cellular networks with different types of cells (e.g., macro, micro, pico, or femto cells) with different characteristics (e.g., transmission power, cognition capabilities) in terms of several key performance indicators such as SINR coverage probability, link capacity, and network capacity.
For the analysis and design of interference avoidance and management techniques in such multi-tier cellular networks (also referred to as small cell networks or HetNets), rigorous yet simple interference models are required. However, interference modeling has always been a challenging problem, even in traditional single-tier cellular networks. For interference characterization, assuming that the deployment of the base stations (BSs) in a cellular network follows a regular grid (e.g., the traditional hexagonal grid model) leads either to intractable results that require massive Monte Carlo simulation or to inaccurate results due to unrealistic assumptions (e.g., the Wyner model). Moreover, due to the variation of capacity demands (both network and link capacities) across the service area (e.g., downtowns, residential areas, parks, suburban and rural areas), the BSs will not exactly follow a grid-based model. That is, for snapshots of a cellular network at different locations, the positions of the BSs with respect to (w.r.t.) each other will have random patterns. By capturing the spatial randomness of the BSs as well as of other network entities, including network users, stochastic geometry analysis provides general and topology-independent results. When applied to networks modeled as spatial Poisson point processes (PPPs) with Rayleigh fading, simple closed-form expressions can be obtained which help us better understand the network performance behavior in response to variations in design parameters. Stochastic geometry-based analysis and optimization of future-generation cellular networks is a very fertile area of research and has recently attracted significant interest from the research community.
The aim of this tutorial is to provide an extensive overview of the stochastic geometry modeling approach for next-generation cellular networks and of the state-of-the-art research on this topic. After motivating the requirement for spatial modeling of the evolving 5G cellular networks, it will introduce the basics of stochastic geometry modeling tools and the related mathematical preliminaries. Then, it will present a comprehensive survey of the literature on stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks, underlay D2D communication, and cognitive and energy-harvesting D2D communication. It will also present a taxonomy of stochastic geometry modeling approaches based on the target network model, the point process used, and the performance evaluation technique. Finally, it will discuss the open research challenges and future research directions.
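The closed-form tractability of the PPP-plus-Rayleigh model mentioned above can be checked numerically. For nearest-BS association in an interference-limited network with path-loss exponent 4, the classical result gives P(SIR > T) = 1/(1 + sqrt(T)(pi/2 - arctan(1/sqrt(T)))); the Monte Carlo sketch below (our own illustration) reproduces it:

```python
# Monte Carlo check of the classical PPP downlink coverage result:
# nearest-BS association, Rayleigh fading, interference-limited, alpha = 4.
import numpy as np

def coverage_closed_form(T):
    return 1.0 / (1.0 + np.sqrt(T) * (np.pi / 2 - np.arctan(1.0 / np.sqrt(T))))

def coverage_sim(T, lam=1.0, radius=15.0, trials=4000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(1)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)   # PPP point count
        r = radius * np.sqrt(rng.random(n))          # uniform in a disk
        h = rng.exponential(size=n)                  # Rayleigh power gains
        k = np.argmin(r)                             # serve the nearest BS
        signal = h[k] * r[k] ** -4.0
        interference = np.sum(h * r ** -4.0) - signal
        hits += signal > T * interference
    return hits / trials

print(coverage_closed_form(1.0))   # ~0.56 at a 0 dB SIR threshold
print(coverage_sim(1.0))
```

The finite simulation window slightly truncates far-away interference, but with exponent 4 the bias is negligible, illustrating why the PPP model is both tractable and accurate.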
- Overview of 5G Cellular Networks and Spatial Modeling Techniques (20 minutes)
- 5G visions and requirements and enabling technologies
- Key performance indicators (KPIs): SINR outage/coverage, average rate, transmission capacity
- SINR modeling techniques
- Stochastic geometry modeling
- Point Process and Interference Modeling (30 minutes)
- Point processes (PPP, clustered processes, repulsive processes)
- Campbell theorem and probability generating functional
- Neyman-Scott processes: the Matérn cluster process and the modified Thomas cluster process
- Laplace transform of the pdf of interference
- Performance Evaluation Techniques (40 minutes)
- Technique #1: Rayleigh fading assumption
- Technique #2: Region bounds and dominant interferers
- Technique #3: Fitting
- Technique #4: Plancherel-Parseval theorem
- Technique #5: Inversion
- Modeling Large-Scale Single and Multi-Tier Cellular Networks (60 minutes)
- Modeling downlink transmissions
- Modeling uplink transmissions
- Single-tier networks with frequency reuse
- Biasing and load balancing
- Optimal deployment of BSs
- Large-scale multiple-input multiple-output cellular systems
- Modeling Cognitive Small Cells in Multi-Tier Cellular Networks (20 minutes)
- Spectrum sensing range and spectrum reuse efficiency
- Spectrum access schemes by cognitive small cells
- Network modeling
- Outage probability (channel outage and SINR outage) analysis for downlink transmissions in cognitive small cells
- Modeling Mode Selection and Power Control for Underlay D2D Communication (30 minutes)
- Biasing-based mode selection and channel inversion power control for underlay D2D communication
- Network modeling and stochastic geometry analysis
- Cognitive and energy harvesting-based D2D communication
- Network modeling and stochastic geometry analysis
- Open Issues and Future Research Directions (10 minutes)
In recent years, energy harvesting (EH) solutions have emerged as a paradigm for powering future wireless sensing systems. Instead of relying entirely on a fixed battery or power from the grid, nodes with EH capabilities collect energy from the environment, such as solar power or power from radio-frequency signals. Energy harvesting constitutes a key enabling technology for Internet of Things (IoT) applications, including smart homes and cities. This aspect is particularly important given that over 16 billion devices are expected to be connected by 2022; powering these devices and providing energy-autonomous systems is therefore a central concern.
This tutorial focuses on cross-layer approaches that treat the communications, control and estimation aspects together. In contrast to approaches that solely focus on communication aspects, this framework emphasizes the underlying sensing and control problem in wireless sensing applications. This unified framework enables researchers to efficiently bridge the gap between the fundamental signal processing results, such as the sampling theorems in signal processing, and the practical limitations imposed by energy harvesting capabilities.
The tutorial will start with an overview of energy harvesting technologies and their applications in sensing. This part will give the audience a high-level grasp of energy harvesting technologies, and in particular of the possibilities and practical limitations these technologies bring to sensor networks. We will then move to theoretical results regarding estimation and control in energy harvesting systems. We will cover both offline and online optimization approaches, and both single- and multi-user systems with centralized and decentralized approaches. We will conclude with a summary and a discussion of open research topics.
- Overview of Energy Harvesting Sensing Systems
- Application Examples
- Remote Estimation with Sensor Networks Powered by Energy Harvesting – Offline Optimization
- Single User Systems
- Multi-user Systems
- Centralized Approaches
- Decentralized Approaches
- Remote Estimation and Control with Sensor Networks Powered by Energy Harvesting – Online Optimization
- Dynamical Systems
- Kalman Filtering in Dynamical Systems
- Control of Dynamical Systems
- Single User Systems
- Multi-user Systems
- Centralized Approaches
- Decentralized Approaches
- Open Research Topics and Concluding Remarks
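As a generic sketch of the online remote-estimation setting outlined above (our own toy model, not the presenters'), consider scalar Kalman filtering where measurements arrive only when the sensor has harvested enough energy to transmit, modelled here simply as a Bernoulli arrival process:

```python
# Toy sketch: scalar Kalman filtering with Bernoulli packet arrivals,
# mimicking a sensor that transmits only when energy is available.
# Missed packets leave the estimator in prediction-only mode.
import numpy as np

def remote_estimation(p_tx, a=0.95, q=1.0, r=0.5, steps=5000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(2)
    x, xhat, P = 0.0, 0.0, 1.0
    sq_err = 0.0
    for _ in range(steps):
        x = a * x + rng.normal(0, np.sqrt(q))     # plant: x_{k+1} = a x_k + w_k
        xhat, P = a * xhat, a * a * P + q         # time update (always runs)
        if rng.random() < p_tx:                   # packet delivered this slot
            y = x + rng.normal(0, np.sqrt(r))     # measurement y_k = x_k + v_k
            K = P / (P + r)                       # Kalman gain
            xhat, P = xhat + K * (y - xhat), (1 - K) * P
        sq_err += (x - xhat) ** 2
    return sq_err / steps

# More harvested energy (higher transmit probability) -> lower estimation error.
print(remote_estimation(0.9), remote_estimation(0.2))
```

The energy-harvesting design problem is then to choose when to spend the battery, i.e., to shape p_tx over time, which is exactly the online optimization question the tutorial addresses.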
Fully dynamic or flexible time division duplexing (TDD) is an essential 5G ingredient, e.g., in the 3GPP New Radio (NR) specification. Especially in small cell scenarios, the amount of instantaneous uplink (UL) and downlink (DL) traffic may vary significantly with time and among adjacent cells. In such cases, dynamic TDD allows full flexibility for resources to be adapted between the UL and DL at each time instant, thus providing vastly improved overall resource utilisation. However, the dynamic variation of resource allocation will change the interference seen by neighbouring cells and users, drastically complicating overall interference management. In particular, this variation can impact systems that employ coordinated beamforming or cooperative multi-cell transmission, which require sufficiently reliable channel state information (CSI) between the mutually interfering network nodes.
The target of the tutorial is to provide a holistic view of the design of interference management in 5G-and-beyond networks based on dynamic, traffic-aware TDD, particularly addressing relevant technology components such as beamformer training, CSI acquisition, resource allocation and interference control. The methods discussed will account for variations in user traffic as well as the associated overhead from adapting UL/DL resources. First, an overview of 3GPP NR physical layer aspects is provided, with a special focus on key technology components enabling dynamic TDD operation in NR. After this, the theoretical performance limits of dynamic TDD systems using scheduling and coordinated beamforming are briefly explored. Subsequently, low-complexity, near-optimal distributed solutions that account for users' traffic dynamics are considered. Particular emphasis is put on iterative forward-backward (F-B) training-based CSI acquisition and direct beamformer estimation mechanisms using precoded pilots, as well as methods to compensate for pilot non-orthogonality and the associated errors due to imperfect channel measurements. The feasibility of the proposed F-B training schemes in the context of 5G radio access, covering impacts on frame structure design, UE operation, etc., will be discussed. Finally, the proposed training schemes are extended to network-controlled device-to-device (D2D) and cooperative transmission scenarios. The tutorial concludes with some highlights of future research directions.
- Objective, introduction and outline
- Network densification – challenges for interference management and potential solutions
- Dynamic and flexible TDD
- Overview of physical layer aspects of 3GPP New Radio (NR) in Rel-15
- Key technology components and procedures enabling dynamic TDD in NR
- Performance bounds and decentralized approaches in dense multiuser MIMO networks
- Linear transmitter-receiver design for multi-cell multiuser MIMO communication
- Effective CSI signaling and backhauling
- Traffic aware linear transceiver design for different system optimization criteria and Quality of Service constraints
- Joint UL/DL mode selection and transceiver design
- Coordinated interference management in dense TDD based 5G networks
- Distributed transmission with Forward-Backward (F-B) training
- Direct beamformer estimation with over-the-air (OTA) F-B training
- Impact of limited pilot resources
- Extension to network controlled device-to-device (D2D) and cooperative transmission scenarios
- OTA TX-RX training schemes in the context of 5G radio access
- Impact to frame structure design, UE operation
- Impact on reference signals, and control signaling
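As a toy illustration of the over-the-air F-B training idea in this outline, the sketch below (our own construction, not the presenters' actual scheme; the channel size, iteration count and noiseless-pilot assumption are ours) alternates matched-filter updates from forward and backward precoded pilots on a reciprocal TDD MIMO link. This is a power iteration, so the transmit/receive beamformers converge to the channel's dominant singular vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 MIMO channel between a BS and a UE (TDD: reciprocal).
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

# Random unit-norm initial transmit beamformer.
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v /= np.linalg.norm(v)

for _ in range(20):
    # Forward pilot: UE observes the precoded pilot H @ v and
    # matched-filters it to obtain its receive beamformer.
    u = H @ v
    u /= np.linalg.norm(u)
    # Backward pilot: BS observes H^H @ u (channel reciprocity)
    # and updates its transmit beamformer the same way.
    v = H.conj().T @ u
    v /= np.linalg.norm(v)

# Effective link gain after training vs. the best possible (top singular value).
gain = abs(u.conj() @ H @ v)
sigma_max = np.linalg.svd(H, compute_uv=False)[0]
print(round(gain, 4), round(sigma_max, 4))
```

No explicit CSI is exchanged: each node only filters the pilot it receives, which is the appeal of the bi-directional training approach in dense networks.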
The tutorial focuses on the main principles of the converged edge cloud as the cornerstone of the next-generation network architecture. The term “converged” refers to two basic trends: (a) some of the traditional Radio Access Network (RAN) functions (e.g., baseband processing functions) are moved to the edge cloud for better scalability, pooling and resiliency, and (b) some of the traditional core functions are decomposed, virtualized and relocated to the edge cloud, allowing for unmatched application support. Hence, the converged edge cloud is designed to support Mobile Edge Computing, Cloud RAN and virtualized core functions.
The tutorial will describe the architectural concepts of the converged edge cloud and will address its key attributes, which include:
- Providing the compute and storage infrastructure (edge data centers) serving as flexible application platform at the edge of the network.
- Enabling low-latency applications and massive capacity through localized delivery (e.g., localized services, caching, location awareness).
- Supporting special use cases of ultra-low latency and high reliability for vertical markets.
- Exposing network information to applications to improve user experience and overall network efficiency.
- Enabling support for smart connectivity through algorithms and network intelligence in cloud-based multi-connectivity environments.
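The low-latency argument for edge placement above can be made concrete with a back-of-envelope budget; all distances, the fibre propagation speed and the processing time below are illustrative assumptions, not figures from the tutorial:

```python
# Back-of-envelope latency budget: why edge placement matters for
# ultra-low-latency services. All figures are illustrative assumptions.
C_FIBER_KM_PER_MS = 200.0  # ~2/3 of c, typical propagation speed in fibre

def rtt_ms(distance_km, processing_ms):
    """Round-trip time: two-way propagation plus server processing."""
    return 2 * distance_km / C_FIBER_KM_PER_MS + processing_ms

edge = rtt_ms(20, 1.0)       # edge data centre ~20 km from the user
central = rtt_ms(1500, 1.0)  # centralised cloud ~1500 km away

print(edge, central)
```

With these assumptions only the edge deployment stays within a few milliseconds, which is the regime the ultra-low-latency vertical use cases require.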
Machine learning algorithms such as clustering and classification on graphs are highly versatile tools applicable to many disciplines. Their application to wireless communications, and in particular to wireless sensor networks, is growing. In this tutorial we present important mathematical tools and algorithmic principles for performing learning on data modeled as graphs, with a special focus on applications to wireless communications. We discuss basic graph concepts, graph community models, and spectral graph theoretic tools, among others.
- Introduction to fundamentals of graphs and graph matrix representations
- Sparse graphs and dense graphs
- Basic Random Graph Models
- Community Detection Problem: Community partitioning vs hidden community detection
- Challenges in community detection: NP Hardness
- Machine learning on graphs
- Machine learning applications in Wireless communications
- Semi-supervised vs unsupervised clustering
- Relevant random graph models
- Unsupervised Clustering
- Spectral clustering based community detection
- Anomaly detection and clique detection
- Kernel Spectral Clustering
- Semidefinite relaxation
- Semisupervised Learning/Clustering
- Diffusion/Random-walk and Message-Passing based methods
- Personalized PageRank
- Belief Propagation
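As a minimal example of the spectral clustering items listed above, the following sketch (the graph size and edge probabilities are our own choices) plants two communities in a stochastic block model and recovers them from the sign of the Fiedler vector of the graph Laplacian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic block model: two planted communities of 30 nodes each,
# edge probability 0.5 within a community, 0.05 across communities.
n = 30
labels = np.array([0] * n + [1] * n)
prob = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
A = (rng.random((2 * n, 2 * n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric adjacency, no self-loops

# Unnormalised graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# The Fiedler vector (eigenvector of the 2nd smallest eigenvalue of L)
# separates the two communities by sign.
eigvals, eigvecs = np.linalg.eigh(L)
pred = (eigvecs[:, 1] > 0).astype(int)

# Labels are recovered only up to a global flip, hence the max.
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))
print(accuracy)
```

With this much separation between the intra- and inter-community edge probabilities, recovery is essentially exact; the interesting regimes discussed in the tutorial are the ones where the two probabilities are close.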
After turbo codes and LDPC codes, polar codes have emerged as a new coding method in theory and practice, with completely different principles for the design of the code, encoder and decoder. Since their invention by Arikan in 2008, polar codes have been regarded as a breakthrough in channel coding and have spawned great academic interest. In late 2016 they were accepted for the eMBB control channel within the 5G standardisation, and have thus also established their role as practically relevant codes. This tutorial will provide the theoretical principles and practical design approaches for polar codes, and will give an insight into the 5G polar coding standardisation process.
- Basic methods
- Channel polarisation
- Principle of channel polarisation
- Capacity-achieving coding scheme
- Polarisation exponent
- Encoder and decoder
- Kernel and transformation matrix
- Tanner graph
- Successive-cancellation decoder
- Frozen set design
- Density evolution for BEC
- Density evolution for AWGNC with GA
- Finite-length codes
- List decoding
- SC decoder structures
- List decoder structures
- CRC for list decoding
- Code design concepts
- Frozen / information sequence
- Puncturing / shortening patterns
- CRC / distributed CRC / PC
- Special code designs
- Polar sub-codes
- Multi-kernel polar codes
- Parity-check polar codes
- Polar codes in 5G
- Coding in 5G
- Coding standardisation overview
- Where to find information
- Agreements and discussion
- Agreements on polar codes
- Discussion of open issues
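Two of the building blocks listed above, the polar transform and BEC density evolution for frozen-set design, fit in a few lines. The sketch below is a minimal illustration (code length N = 8 and erasure probability 0.5 are arbitrary choices), not a complete polar coding chain:

```python
import numpy as np

def polar_transform(u):
    """Polar encoding x = u F^{(x)n} over GF(2), kernel F = [[1,0],[1,1]]."""
    x = np.array(u, dtype=int)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # Butterfly: upper branch gets the XOR, lower branch passes through.
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def bec_bhattacharyya(n_bits, eps):
    """Density evolution for the BEC: erasure probability of each synthetic
    bit-channel after polarisation (the basis of frozen-set design)."""
    z = np.array([eps])
    while len(z) < n_bits:
        z = np.concatenate([2 * z - z**2, z**2])  # worse / better channel split
    return z

# N = 8 polar code on a BEC with erasure probability 0.5:
# freeze the 4 worst synthetic channels, put information on the 4 best.
z = bec_bhattacharyya(8, 0.5)
frozen = sorted(np.argsort(z)[-4:].tolist())
print(frozen)

print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```

Note that the transform is an involution over GF(2) (applying it twice recovers the input), which is why the same butterfly network appears in both encoder and decoder structures.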
TU-11 LOW-POWER WIRELESS COMMUNICATION TECHNOLOGIES FOR CONNECTING EMBEDDED SENSORS IN THE IOT: A JOURNEY FROM FUNDAMENTALS TO HANDS-ON
Embedded sensors are enabling a wide range of emerging smart services in domains ranging from healthcare to smart homes and cities. They are waiting to be connected to the internet and rapidly becoming crucial components of a valuable Internet of Things (IoT).
The variety of wireless sensor system applications demands appropriate wireless connectivity. Several new technologies and standards are emerging, suited to short or long range and to various data rate requirements. Dedicated networks are being deployed for Machine Type Communication.
This tutorial will bring a theoretical and practical initiation to wireless technologies tailored for connecting embedded sensors. It will explain fundamental concepts of wireless propagation, highlighting the challenges and opportunities to realize low power connections. Several actual technologies and standards for different categories of connections will be introduced. A few illustrative use cases will be presented. A hands-on session will allow the participants to experiment with EFM32 Happy Gecko developer boards, cooperating in small teams. In a final session a glance on future trends will be given. The tutorial will be concluded with an overview of interesting relevant resources and a discussion with the participants on the expectations for follow up beyond the tutorial.
- Introduction and fundamental knowhow (45 min)
- IoT wireless technologies overview (45 min)
- Low and lower – reducing power consumption of IoT nodes: a hands-on (60 min)
- Lessons learned and future trends (30 min)
The lecture will start with a sketch of the context and expectations for connecting embedded sensors in the IoT. The typical anatomy of a connected embedded system will be introduced, along with the challenges associated with its design and operation.
The basics of wireless communication will be explained, starting with free-space wave propagation and discussing the influence of multipath, blockage, operating frequency, and antennas. The impact of the fundamental physics on connecting sensors will be highlighted, particularly with respect to energy consumption. Technological solutions to connect remote sensors reliably with a low power budget will be introduced.
It is raining IoT technologies – which is very good news! A variety of solutions exists and is emerging, with different characteristics to meet the requirements of the many different applications and embedded sensors waiting to be connected to the internet. IoT developers and integrators are often overwhelmed by the number of standards and technologies available for IoT. This session will cover the variety of state-of-the-art wireless radio protocols (e.g., Bluetooth Low Energy, SIGFOX, LoRa, NB-IoT), which support different ranges, data rates, energy constraints, and business needs.
Appropriate use cases for the different connectivity options will be illustrated from an IoT perspective.
In this hands-on session the attendees will experience the development of a wireless sensor node, and more specifically how low-power operation can be achieved by clever utilization of the available resources. Such design choices include selecting the right wireless technology, duty-cycled operation, hardware acceleration, etc.
The participants will experiment with a custom LoRa-based sensor, built with a Semtech SX1272 radio chip and an EFM32 Cortex-M0 processor. The way operations are performed on the node can be customized in the Silabs Simplicity Studio IDE, and the effect of these changes on the energy consumption can be observed through the IDE’s built-in energy profiler.
By the end, the lecturers will have shown that true low-power wireless sensors require thoughtful design: software and hardware need to work together seamlessly within the boundaries of a specific application.
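The pay-off of duty cycling can be sketched with a back-of-envelope battery-life estimate. The current draws and battery capacity below are loose, made-up figures in the spirit of a LoRa-class radio with a Cortex-M0 MCU, not datasheet values:

```python
# Hypothetical current draws for a duty-cycled sensor node; real
# datasheet values will differ.
I_SLEEP_MA = 0.002    # deep-sleep current (mA)
I_ACTIVE_MA = 45.0    # MCU awake + radio transmitting (mA)
T_ACTIVE_S = 0.1      # one 100 ms wake-up per reporting period
BATTERY_MAH = 1000.0  # assumed battery budget

def battery_life_days(period_s):
    """Average current of a duty-cycled node and the resulting lifetime in days."""
    duty = T_ACTIVE_S / period_s
    i_avg_ma = duty * I_ACTIVE_MA + (1 - duty) * I_SLEEP_MA
    return BATTERY_MAH / i_avg_ma / 24.0

# Reporting less often shifts the energy budget from the radio to the
# sleep current, which then dominates the lifetime.
print(round(battery_life_days(10.0), 1))   # report every 10 s
print(round(battery_life_days(600.0), 1))  # report every 10 min
```

The point of the exercise: once the radio duty cycle is low enough, further lifetime gains come from the sleep current, i.e., from the hardware/software co-design the session emphasises.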
In the closing lecture we will first review the main lessons of the tutorial and point to further resources of interest. Next, we will take a glance at the future and specifically at how Machine Type Communication is expected to evolve in the broader evolution towards 5G communication systems.
To conclude, we will propose a plan to continue the discussion on wireless connectivity for embedded sensors, and poll for the participants’ interests and feedback.
Nowadays, the mobile network no longer just connects people: it is evolving to interconnect billions of devices, such as sensors, controllers, machines, autonomous vehicles and drones, with people and things, and to turn that connectivity into information and intelligence. From a planning and optimization perspective, this means that mobile networks also need far more flexibility to address these future needs.
Next-generation (5G) wireless systems are characterized by three key features: heterogeneity, in terms of technologies and services; dynamics, in terms of rapidly varying environments and uncertainty; and size, in terms of the number of users, nodes, and services. The need for smart, secure, and autonomic network design has become a central research issue in a variety of applications and scenarios. Ultra-dense networks (UDNs) have attracted intense interest from both academia and industry for their potential to improve spatial reuse and coverage, allowing cellular systems to achieve higher data rates while retaining the seamless connectivity and mobility of cellular networks. However, considering the severe inter-tier interference and the limited cooperative gains resulting from constrained and non-ideal transmissions between adjacent base stations, the practical evolution of UDN needs a new paradigm for improving both spectral efficiency and energy efficiency by suppressing inter-tier interference and enhancing cooperative processing capabilities.
This tutorial will identify and discuss technical challenges and recent results related to UDN in 5G mobile networks. The tutorial is divided into four parts. In the first part, we will introduce UDN, discuss the UDN system architecture, and present the main technical challenges. In the second part, we will focus on resource management in UDN and present recent research findings that help develop engineering insights. In the third part, we will address the signal processing and PHY layer design of UDN and discuss some key research problems. In the last part, we will conclude with a future outlook on UDN.
- Overview of UDN and System Architecture
- RAN Evolutions: Brief introduction of UDN, SON, C-RANs, LTE-U and their potential evolution
- Introduction of UDN: Basic features and definitions, challenges, and state of the art
- System architecture: Fronthaul, Fog/cloud computing, heterogeneous networks, performance metrics
- Resource Management in UDN
- Resource Allocation: A cooperative bargaining game theoretic approach
- Resource allocation with heterogeneous services
- Secure resource allocation without and with cooperative jamming
- Cross layer optimization in UDN
- Interference Management in UDN
- Interference-limited resource optimization with fairness and imperfect spectrum sensing
- Coexistence of Wi-Fi and UDN with LTE-U
- Cooperative interference mitigation and handover management
- Incomplete CSI based resource optimization in SWIPT
- Outlook of UDN
- Evolution of UDN: Future research challenges
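One concrete way to see the cooperative-game view of resource allocation listed above is the proportional-fair scheduler, whose log-utility objective coincides with the Nash bargaining solution with a zero disagreement point. The sketch below (the user count, fading model and smoothing constant are our own assumptions, not the presenters' setup) shows it equalising time shares across users with very different channels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy proportional-fair (PF) scheduler over one shared resource block.
n_users, n_slots = 4, 5000
mean_rate = np.array([1.0, 2.0, 4.0, 8.0])  # users 0..3: increasingly good channels

avg = np.full(n_users, 1e-3)   # exponentially smoothed throughput per user
share = np.zeros(n_users)      # fraction of slots each user wins
for _ in range(n_slots):
    inst = rng.exponential(mean_rate)   # per-slot achievable rates (Rayleigh-like)
    k = np.argmax(inst / avg)           # PF metric: instantaneous over average
    share[k] += 1 / n_slots
    served = np.zeros(n_users)
    served[k] = inst[k]
    avg = 0.999 * avg + 0.001 * served  # update smoothed throughputs

# PF gives each user roughly an equal share of slots despite the 8x
# disparity in mean channel quality (unlike max-rate scheduling).
print(np.round(share, 2))
```

Maximising sum(log R_k) is what makes this "fair": a user's marginal utility grows as its throughput shrinks, so no user is starved, which is the bargaining interpretation.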
In spite of its 30-year history, Wi-Fi is continuously evolving and is today the most widely used wireless technology for Internet access. Recently, the fifth generation of the Wi-Fi standard, IEEE 802.11-2016, was published. It includes advanced techniques for multigigabit communications, more flexible QoS provisioning, power management, etc. In this tutorial, we will look into the next generation of Wi-Fi, which should be ready by 2021, focusing on the hottest topics currently being discussed in the IEEE 802.11 Working Group. We will start with the recently developed 802.11ah, aka Wi-Fi HaLow, which extends transmission range up to 1 km and makes Wi-Fi suitable for Internet of Things and Industrial Internet applications. Then we consider 802.11ax, which improves user experience in dense Wi-Fi networks and introduces OFDMA to Wi-Fi. We will also discuss mmWave communications with 802.11ay, whose data rates exceed 250 Gbps. Finally, we study how 802.11ba enables extremely low power communications and how Wi-Fi becomes smarter by providing new services, e.g. very accurate positioning, in addition to the plain cable replacement for which it was originally designed. For each topic, we will consider key features, review existing studies, and list open issues and possible problem statements of high interest for both academia and industry.
- Trends in wireless technology evolution
- Expansion of LTE and Wi-Fi, and their intersection
- 802.11ah aka Wi-Fi HaLow
- Use cases
- Short frames
- New channel access methods
- RAW (Restricted Access Window)
- TWT (Target Wake Time)
- Improving power efficiency
- Implementation (existing and expected chipsets)
- Use cases
- Uplink MU MIMO
- OFDMA Random access (when and why it should be used)
- New modulation and coding schemes
- Improving performance in dense networks (a survey of discussed approaches)
- Customizable NAVs, RTS/CTS
- Use cases
- The peculiarities of mmWave communication
- New network architecture (PBSS)
- New PHY
- New channel access for directional communication
- How 11ay extends 11ad
- Other Wi-Fi improvements/amendments
- Pre-association discovery
- Fast Initial Link Setup
- Next Generation Positioning
- Low Power Wake up Radio
- Bonus: Li-Fi
Sections b-e will be enriched with analysis of existing studies and performance evaluation results. We will also discuss open issues and possible problem statements.
Sections b-g will be enriched with analysis of existing studies and performance evaluation results. We will also discuss open issues and possible problem statements. For example, Wi-Fi OFDMA basic principles differ from those of LTE, which leaves the scheduling problem open.
Sections b-f will be enriched with analysis of existing studies and performance evaluation results. We will also discuss open issues and possible problem statements.
For each of the amendments we will briefly discuss use cases and key solutions, either approved or under consideration in the working group.
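As a small illustration of why Wi-Fi OFDMA scheduling is an interesting open problem, the sketch below assigns 802.11ax resource units (RUs) to stations with a simple greedy rule. The RU count and per-RU rates are invented illustration values, and the rule is deliberately naive, not an optimal or standardised scheduler:

```python
# A 20 MHz 802.11ax channel can be split into resource units; one simple
# (and certainly not optimal) scheduler gives each RU to the still-unserved
# station with the best per-RU rate. Rates below are made-up values.

n_ru, n_sta = 4, 4  # e.g., four 52-tone RUs in 20 MHz, four stations
# rate[s][r]: achievable rate (Mb/s) of station s on RU r
# (frequency-selective fading makes the best RU differ per station).
rate = [
    [20, 5, 10, 8],
    [7, 25, 6, 9],
    [12, 11, 30, 10],
    [6, 8, 9, 22],
]

assigned = {}
free_sta = set(range(n_sta))
for r in range(n_ru):
    s = max(free_sta, key=lambda s: rate[s][r])  # greedy per-RU choice
    assigned[r] = s
    free_sta.remove(s)

total = sum(rate[s][r] for r, s in assigned.items())
print(assigned, total)
```

A real 11ax scheduler must additionally handle variable RU sizes, trigger-frame overhead and random-access RUs, which is exactly where the LTE scheduling literature stops applying directly.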
TU-14 NOMA FOR NEXT GENERATION WIRELESS NETWORKS: STATE OF THE ART, RESEARCH CHALLENGES, AND FUTURE TRENDS
Non-orthogonal multiple access (NOMA) is an essential enabling technology for fifth generation (5G) wireless networks to meet the heterogeneous demands for low latency, high reliability, massive connectivity, improved fairness, and high throughput. The key idea behind NOMA is to serve multiple users in the same resource block, such as a time slot, subcarrier, or spreading code. The NOMA principle is a general framework in which several recently proposed 5G multiple access techniques can be viewed as special cases. Recent demonstrations by industry show that the use of NOMA can significantly improve the spectral efficiency of mobile networks. Because of its superior performance, NOMA has also recently been proposed for downlink transmission in 3rd Generation Partnership Project Long-Term Evolution (3GPP-LTE) systems, where the considered technique was termed multiuser superposition transmission (MUST). In addition, NOMA has been included in the next-generation digital TV standard ATSC (Advanced Television Systems Committee) 3.0, where it was termed Layered Division Multiplexing (LDM). This tutorial provides an overview of the latest research results and innovations in NOMA technologies as well as their applications. Future research challenges regarding NOMA in 5G and beyond are also presented.
- Review of the overall 5G requirements to support massive connectivity and realize spectrally and energy efficient communications.
- Single-carrier NOMA is introduced first, where two types of NOMA, power-domain NOMA and cognitive-radio (CR) inspired NOMA, are described and their capabilities to meet different quality of service (QoS) requirements are compared.
- Multi-carrier (MC) NOMA is then presented as a special case of hybrid NOMA. The impact of user grouping and subcarrier allocation on the performance of MC-NOMA is illustrated. A few special cases of MC-NOMA, such as sparse code multiple access (SCMA), low-density spreading (LDS), and pattern division multiple access (PDMA), are discussed and compared.
- The combination of orthogonal MIMO technologies and NOMA will be investigated. Unlike conventional multiple access techniques, the design of MIMO-NOMA is challenging. For example, power allocation in NOMA requires the ordering of the users based on their channel conditions. This user ordering is straightforward for the single-input single-output (SISO) case since it is easy to compare scalar channel coefficients, but it is difficult in MIMO scenarios in the presence of channel matrices/vectors. A few MIMO-NOMA designs with different trade-offs between system performance and complexity will be discussed.
- The design of cooperative NOMA schemes will be explained. In a NOMA system, successive interference cancellation is used, which means that some users know the other users’ information perfectly. Such a priori information should be exploited, e.g., some users can act as relays to help other users which experience poorer channel conditions. A few examples of cooperative NOMA protocols will be introduced and their advantages/disadvantages will be illustrated.
- The application of NOMA in mmWave networks will be investigated. Similar to NOMA, the motivation for using mmWave communications is the spectrum crunch, but the solution provided by mmWave communications is to use the mmWave bands which are less crowded compared to those used by the current cellular networks. We will show that the use of NOMA is still important in mmWave networks, in order to fully exploit the bandwidth resources available at very high frequencies.
- The practical implementation of NOMA will be discussed as well. The existing coding and modulation designs for NOMA will be described first, where another practical form of NOMA based on lattice coding, termed lattice partition multiple access (LPMA), will be introduced. The impact of imperfect channel state information (CSI) on the design of NOMA will then be investigated. Various approaches for cross-layer resource allocation in NOMA networks will be discussed and compared.
- Recent standardization activities related to NOMA will be reviewed as well. Particularly the tutorial will focus on the implementation of multi-user superposition transmission (MUST), a technique which has been included into 3GPP LTE Release 13. Different designs of MUST and their relationship to the basic form of NOMA will be illustrated. In addition, the application of NOMA in the digital TV standard ATSC 3.0 will also be explained.
- Finally, challenges and open problems for realizing spectrally efficient NOMA communications in the next generation of wireless networks will be discussed.
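The basic power-domain NOMA gain over orthogonal access can be reproduced in a few lines. The channel gains and the 0.8/0.2 power split below are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

# Two-user downlink power-domain NOMA vs. orthogonal access (OMA).
g_near, g_far = 10.0, 1.0  # |h|^2 / noise for the near and far user (assumed)
a_far = 0.8                # power fraction given to the weak (far) user
P = 1.0                    # total transmit power

# Far user decodes its own signal, treating the near user's as noise.
r_far = np.log2(1 + a_far * P * g_far / ((1 - a_far) * P * g_far + 1))
# Near user first cancels the far user's signal via successive
# interference cancellation (SIC), then decodes its own interference-free.
r_near = np.log2(1 + (1 - a_far) * P * g_near)

# OMA baseline: each user gets half the time with full power.
r_far_oma = 0.5 * np.log2(1 + P * g_far)
r_near_oma = 0.5 * np.log2(1 + P * g_near)

print(round(r_far, 3), round(r_near, 3))
print(round(r_far_oma, 3), round(r_near_oma, 3))
```

With this split, the far user's rate and the sum rate both exceed the OMA baseline, which is the fairness-plus-throughput argument made for NOMA; how to choose the power split and user pairing in general is one of the open design problems the tutorial covers.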
In wireless networking, one of the most pressing and challenging problems today is to keep up with demand by scaling wireless capacity. State-of-the-art wireless communication already operates close to Shannon capacity, and one of the most promising options to further increase data rates is to increase the communication bandwidth. Very high bandwidth channels are only available in the extremely high frequency part of the radio spectrum, the mm-wave band. The commercial potential of mm-wave networks has initiated several standardization activities within wireless personal area networks (WPANs) and wireless local area networks (WLANs), such as IEEE 802.15.3 Task Group 3c (TG3c), the IEEE 802.11ad standardization task group, the WirelessHD consortium, and the Wireless Gigabit Alliance (WiGig). The first IEEE 802.11ad devices hit the market in 2016, and several large (European and US) research projects are currently investigating the use of mm-wave communication for backhaul, fronthaul, and even access in mobile networks. Ericsson Research has announced that a mm-wave cellular standard is expected to be released around 2020. Despite these ongoing standardization efforts and projects, much research is still needed. Communication at such high frequencies suffers from high attenuation and signal absorption, often restricting communication to line-of-sight (LOS) scenarios and requiring the use of highly directional antennas. This in turn requires a radical rethinking of wireless network design. For these reasons, the topic is of extreme relevance and timeliness to wireless networking and communication.
- Mm-wave specific characteristics: channel, propagation, deafness, blockage, directionality
- Hardware: beam-forming antennas, ADC/DAC challenges, number of transceiver chains, integration, etc.
- Interference modelling
- Beam-forming, access initialization
- MAC layer
- Medium access, packet aggregation, failure of CSMA/CA, specifics of directional medium access
- Association and relaying; multi-hop aspects
- Mm-waves for cellular networks
- Physical control channels
- Initial access and mobility management
- Resource allocation and interference management
- Coexistence and fallback to lower frequency mobile networks
- Control plane/user plane-split
- Spectrum sharing
- Mm-waves for short range networks
- IEEE 802.15.3c
- IEEE 802.11ad
- IEEE 802.11ay
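The need for directionality discussed above follows directly from the Friis free-space equation. The link budget below is a sketch: the transmit power, noise floor and per-end array gains are assumed figures chosen for illustration, not values from any standard:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss (Friis equation), in dB."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Illustrative 100 m link; all power, noise and gain figures are assumptions.
tx_dbm = 10.0
noise_dbm = -84.0     # thermal noise + noise figure over a very wide channel
beam_gain_dbi = 25.0  # per-end beamforming gain from a large antenna array

loss_24 = fspl_db(2.4e9, 100.0)
loss_60 = fspl_db(60e9, 100.0)

snr_60_omni = tx_dbm - loss_60 - noise_dbm
snr_60_beam = tx_dbm + 2 * beam_gain_dbi - loss_60 - noise_dbm

# ~28 dB extra free-space loss at 60 GHz vs 2.4 GHz; directional gain at
# both ends more than recovers it, which is why mm-wave links are beamformed.
print(round(loss_60 - loss_24, 1), round(snr_60_omni, 1), round(snr_60_beam, 1))
```

The same small aperture that makes 60 GHz lossy for omnidirectional antennas also allows many elements in a compact array, so the deficit and its cure arrive together.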
In recent years we have seen an explosion of location-based services. These services mostly rely on an accuracy that was “envisioned” two decades ago, when the FCC demanded such performance from network operators to determine the whereabouts of 911 callers. Neither dedicated positioning systems, such as GPS, nor cellular systems could deliver the required performance indoors or in urban canyons, which led to an evolution of existing networks (GSM, UMTS, LTE) to provide network-based localization. Furthermore, in communication networks geo-location is identified as a useful input that represents, e.g., past and current channel state information and conditions (short- and long-term statistics) and network constellations. Current information combined with mobility information leads to short-term predictions that impact the different communication layers such as PHY, MAC or network management. Challenging applications of today and the future demand far higher accuracy, in the cm range. We will survey the cellular network evolution of localization, outline potential lessons for future cellular generations, and give a timely status of cellular localization within the 5G standard.
- Introduction and Motivation (10 min)
- Location-awareness in cellular communication systems (40 min)
- Physical layer
- MAC layer
- Network and transport layers
- Higher layers
- Fundamentals of cellular localization (40 min)
- Scene analysis (or fingerprinting)
- Range-free localisation methods
- Message passing (distributed vs centralized)
- Non-radio based localization techniques
- Challenges of localization methods
- Communication resources
- Standardization (key issues that hinder the evolution of different methods: too complicated and too many different devices)
- GNSS – Satellite navigation systems
- WLAN systems
- Proprietary systems
- Cellular systems
- Cellular standardization bodies related to localization (ETSI GSM; 3GPP: UMTS / LTE / NB-IoT; 3GPP2: CDMA 2000)
- Other standardisation bodies interacting with 3GPP
- IEEE 802.11
- Bluetooth SIG
- 1G: Introduction of localization methods in cellular systems
- 2G: Initial standardization of location methods
- 3G: Consolidation of standard methods and its enhancements
- 4G: Further enhancements of location methods
- Indoor positioning
- Role of governmental bodies on the standardization
- FCC (United States)
- EC (Europe)
- Russian Federation
- Current standardization of 5G positioning
- New research trends for cellular localization
- Massive MIMO
- Cooperative positioning
- Multipath-assisted positioning
- Lessons learned
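A minimal example in the spirit of the fundamentals section above: linearised least-squares trilateration from noisy range measurements. The anchor geometry, true position and the 1 m ranging error are our own assumptions for illustration:

```python
import numpy as np

# Anchor positions (e.g., base stations) and the true, unknown UE position.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
x_true = np.array([30.0, 60.0])

rng = np.random.default_rng(3)
# Measured ranges (e.g., from timing-based methods) with 1 m std. deviation.
d = np.linalg.norm(anchors - x_true, axis=1) + rng.normal(0, 1.0, len(anchors))

# Linearise by subtracting the first anchor's equation:
# ||x - a_i||^2 - ||x - a_0||^2 = d_i^2 - d_0^2  ->  A x = b, linear in x.
a0 = anchors[0]
A = 2 * (anchors[1:] - a0)
b = (d[0]**2 - d[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(x_hat, 1))
```

With metre-level ranging this simple estimator lands within a few metres of the true position; the cm-level accuracy targeted by 5G requires the more advanced techniques listed above, such as multipath-assisted and cooperative positioning.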