In this chapter, we study the technoeconomic challenges of one of the most promising new caching paradigms, the elastic wireless edge caching solution, by which third parties dynamically lease storage resources in a wireless cloud. The main idea is the following: a mobile network operator (MNO) advertises storage prices for servers placed in proximity to the end users, and various content providers lease on-demand capacity to improve the quality of their services. We describe the main concepts and existing business models for the elastic CDN solution, provide an overview of the related work, and discuss the key differences between in-network and edge caching. We then present a detailed model for this system in which the caches reside in cellular base stations. We formulate a problem where cache dimensioning, content caching, and request routing decisions are jointly optimized by a central processor (CP) to reduce content delivery delay, subject to a given leasing budget. We design a suite of dynamic solution algorithms based on the Lyapunov drift-minus-benefit technique, and present numerical experiments that quantify the benefits of elastic caching over typical static cache deployments.
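The drift-minus-benefit idea can be illustrated with a minimal sketch: a virtual queue tracks the cumulative budget deficit, and in each slot the controller picks the leasing option that minimizes a queue-weighted drift-minus-benefit score. All names and numbers here (`V`, `budget`, the `(cost, benefit)` options) are illustrative assumptions, not the chapter's actual formulation.

```python
# Hedged sketch of a Lyapunov drift-minus-benefit rule for budget-constrained
# cache leasing. The virtual queue Q absorbs spending above the per-slot budget.

def drift_minus_benefit_step(Q, options, budget, V):
    """Pick the (cost, benefit) option minimizing Q*cost - V*benefit,
    then update the virtual budget-deficit queue."""
    cost, benefit = min(options, key=lambda o: Q * o[0] - V * o[1])
    Q_next = max(Q + cost - budget, 0.0)  # queue grows when over budget
    return Q_next, cost, benefit

Q = 0.0
total_benefit = 0.0
# (leasing cost, delay-reduction benefit) per slot -- toy values
options = [(0.0, 0.0), (1.0, 3.0), (2.0, 4.0)]
for t in range(50):
    Q, cost, benefit = drift_minus_benefit_step(Q, options, budget=1.2, V=5.0)
    total_benefit += benefit
```

Larger `V` favors immediate benefit over budget compliance; the queue-based update keeps long-run average spending near the budget without forecasting future prices.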
Wireless edge caching for mobile social networks (MSNs) has emerged as one of the prospective solutions to provide reliable and low-latency communication services for mobile users of social networks. In this chapter, we first give an overview of MSNs, including their development and challenges. We then discuss mobile edge caching (MEC) paradigms to address emerging issues for MSNs, e.g., service delay, users’ experience, and economic efficiency. Alongside these advantages, the development of MEC networks also poses some key challenges, such as the hierarchical architecture of MEC networks, proactive caching, and privacy and security issues. To address these challenges, we present a framework that can authenticate MSN users based on public-key cryptography and predict their content demands using a matrix factorization method. Based on the prediction, an optimal content caching policy for an MEC node is presented to minimize the average latency of all MSN users under the MEC nodes’ storage capacity constraints. Furthermore, this framework provides an optimal business model to maximize the revenue of MSN service providers based on the demands of the MSN users and the obtained optimal caching policy.
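As a rough sketch of the matrix-factorization step, a user-by-content demand matrix with unobserved entries can be approximated by low-rank factors trained with stochastic gradient descent; the toy matrix, rank, and hyperparameters below are illustrative assumptions, not the chapter's model.

```python
import random

def factorize(R, k=2, steps=5000, lr=0.02, reg=0.02, seed=0):
    """Approximate R (None = unobserved demand) as U @ V^T via SGD
    on the observed entries, with L2 regularization."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    U = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(n)]
    V = [[rng.uniform(0, 1) for _ in range(k)] for _ in range(m)]
    obs = [(i, j) for i in range(n) for j in range(m) if R[i][j] is not None]
    for _ in range(steps):
        i, j = rng.choice(obs)  # sample one observed entry
        err = R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
        for f in range(k):
            u, v = U[i][f], V[j][f]
            U[i][f] += lr * (err * v - reg * u)
            V[j][f] += lr * (err * u - reg * v)
    return U, V

# Toy user-content demand matrix; None marks demands to predict.
R = [[5, 3, None], [4, None, 1], [1, 1, 5]]
U, V = factorize(R)
pred = lambda i, j: sum(U[i][f] * V[j][f] for f in range(2))
```

The `None` entries, once filled in by `pred`, stand in for the predicted demands that would drive the caching policy.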
Large-scale data analysis is becoming an important source of information for mobile network operators (MNOs). MNOs can now investigate the feasibility of possible new technological advances such as storage/memory utilization, context awareness, and edge/cloud computing using analytic platforms designed for big data processing. Within this context, studying caching from a mobile data traffic analytical perspective can offer rich insights on evaluating the potential benefits and gains of proactive caching at base stations. In this chapter, we study how data collected from MNOs can be leveraged using machine learning tools in order to infer insights into the benefits of caching. Through our practical architecture, vast amounts of data can be harnessed for content popularity estimation and for strategically placing content at base stations (BSs). Our results demonstrate several gains in terms of both content demand satisfaction and backhaul offloading rates, utilizing real-world data sets collected from a major MNO.
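A bare-bones version of this popularity-driven placement can be sketched by counting requests in a log, caching the most popular items, and measuring the fraction of traffic the cache offloads from the backhaul; the toy log and names are assumptions for illustration.

```python
from collections import Counter

def plan_cache(requests, capacity):
    """Estimate popularity from a request log and cache the top items."""
    counts = Counter(requests)
    return {item for item, _ in counts.most_common(capacity)}

def offload_ratio(requests, cache):
    """Fraction of requests served locally (backhaul traffic avoided)."""
    hits = sum(1 for r in requests if r in cache)
    return hits / len(requests)

log = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "e"]
cache = plan_cache(log, capacity=2)   # the two most-requested items
ratio = offload_ratio(log, cache)
```

In practice the counts would come from the MNO's traffic traces and the estimator would be a learned popularity model rather than raw frequencies.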
This chapter presents a content-centric framework for transmission optimization in cloud radio access networks (RANs) by leveraging wireless edge caching and physical-layer multicasting. We consider a cache-enabled cloud RAN, where each base station (BS) is equipped with a local cache and connected to a central processor (CP) via a backhaul link. The BSs acquire the requested contents either from their local caches or from the core network via the backhaul links. We first study the effects of caching on the multicast-enabled access downlink, where users requesting the same content are grouped together and served by the same BS or BS cluster using multicasting. We study the cache-aware joint design of content-centric BS clustering and multicast beamforming to minimize the total system power cost and backhaul cost subject to the quality-of-service (QoS) constraints of each multicast group.
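The content-centric grouping step can be sketched as follows: users requesting the same content are collected into one multicast group, which would then be served by a single beamformed transmission rather than per-user unicast streams. User and content identifiers here are illustrative.

```python
from collections import defaultdict

def multicast_groups(requests):
    """Group users by requested content; each group becomes one
    multicast transmission target for the BS cluster."""
    groups = defaultdict(list)
    for user, content in requests:
        groups[content].append(user)
    return dict(groups)

reqs = [("u1", "A"), ("u2", "B"), ("u3", "A")]
groups = multicast_groups(reqs)  # {"A": ["u1", "u3"], "B": ["u2"]}
```

The subsequent clustering and beamforming design then decides, per group, which BSs participate and with what beam weights, which is where the cache state and backhaul cost enter.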
This chapter investigates the impact of caching in interference networks. First, we briefly review the basics of some classic interference networks and the corresponding interference management techniques. Then we review an interference network with caches equipped at all transmitters and receivers, termed the cache-aided interference network. The information-theoretic metric normalized delivery time (NDT) is introduced to characterize the system performance. The NDT in the cache-aided interference network is discussed for both single-antenna and multiple-antenna cases. It is shown that, with different cache sizes, the network topology can be opportunistically transformed into different classic interference networks, which leverages the local caching gain, coded multicasting gain, and transmitter cooperation gain (via interference alignment and interference neutralization). Finally, the NDT results are extended to the partially connected interference network.
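For reference, the NDT is commonly defined in the cache-aided interference network literature as the high-SNR delivery time normalized by the time to deliver one file at the interference-free point-to-point rate; a hedged sketch of that definition reads

```latex
\tau(\mu_T, \mu_R) \;=\; \lim_{P \to \infty} \lim_{F \to \infty} \frac{\mathbb{E}[T]}{F / \log P},
```

where $T$ is the delivery time, $F$ the file size in bits, $P$ the transmit power (so $\log P$ scales as the interference-free capacity), and $\mu_T$, $\mu_R$ the normalized transmitter and receiver cache sizes; the chapter's exact definition may differ in minor details.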
In this chapter, a novel framework is proposed to address critical mobility management challenges, including frequent handovers (HOs), handover failures (HOFs), and excessive energy consumption for seamless HO, in emerging dense wireless cellular networks. In particular, we develop a model that exploits broadband millimeter wave (mmW) connectivity, whenever available, to cache content that mobile user equipments (MUEs) are interested in, thus enabling the MUEs to use the cached content and avoid unnecessary HOs to small cell base stations (SCBSs) with relatively small cell sizes. First, we develop a geometric model to derive tractable, closed-form expressions for key performance metrics, such as the caching probability, the cumulative distribution function of the caching duration, and the average data rate for content caching over an mmW link. In addition, we provide insight into the performance gains that caching in mmW–μW (microwave) networks can yield in terms of reducing the number of HOs and the average HOF rate.
We consider joint caching, routing, and channel assignment for video delivery over coordinated small-cell cellular systems of the future Internet. We formulate the problem of maximizing the throughput of the system as a linear program in which the number of variables is very large. To address channel interference, our formulation incorporates the conflict graph that arises when wireless links interfere with each other due to simultaneous transmission. We utilize the column generation method to solve the problem by breaking it into a restricted master problem that involves a select subset of variables and a collection of pricing subproblems that select the new variables to be introduced into the restricted master problem, if doing so leads to a better objective function value.
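To make the conflict-graph notion concrete: the candidate "columns" in such a formulation correspond to sets of links that can transmit simultaneously, i.e., independent sets in the conflict graph. A brute-force enumeration sketch follows (link names are illustrative; a real instance would price columns on demand rather than enumerate them all):

```python
from itertools import combinations

def independent_sets(links, conflicts):
    """Enumerate all nonempty sets of links with no conflicting pair,
    i.e., the feasible simultaneous-transmission configurations."""
    conflict = {frozenset(c) for c in conflicts}
    feasible = []
    for r in range(1, len(links) + 1):
        for combo in combinations(links, r):
            if all(frozenset(p) not in conflict for p in combinations(combo, 2)):
                feasible.append(set(combo))
    return feasible

links = ["l1", "l2", "l3"]
conflicts = [("l1", "l2")]  # l1 and l2 interfere if activated together
isets = independent_sets(links, conflicts)
```

Column generation avoids this exponential enumeration: the pricing subproblem searches for a single new independent set with negative reduced cost under the current dual prices.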
Edge-caching has received much attention as an efficient technique to reduce delivery latency and network congestion during peak-traffic times by bringing data closer to end users. Existing works usually design caching algorithms separately from the physical layer design. In this chapter, we analyze edge-caching wireless networks by taking the caching capability into account when designing the signal transmission. In particular, we investigate multi-layer caching, where both the base station (BS) and the users are capable of storing content data in their local caches, and analyze the performance of edge-caching wireless networks under two notable caching strategies, uncoded and coded caching. We first calculate the backhaul and access throughputs of the two caching strategies for arbitrary values of the cache size. The required backhaul and access throughputs are derived as a function of the BS and user cache sizes. Then closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Based on the derived formulas, the system EE is maximized by designing and optimizing the precoding vectors while satisfying a predefined user request rate. Two optimization problems are proposed to minimize the content delivery time for the two caching strategies.
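The system EE referred to above follows the generic bits-per-Joule definition; as a hedged sketch (symbols are illustrative, not the chapter's exact notation),

```latex
\mathrm{EE} \;=\; \frac{\sum_{k} R_k}{P_{\mathrm{BS}} + P_{\mathrm{backhaul}} + P_{\mathrm{circuit}}} \qquad \text{[bits/Joule]},
```

where $R_k$ is the rate delivered to user $k$ and the denominator aggregates transmit, backhaul, and circuit power; caching improves EE chiefly by shrinking the backhaul term for locally stored content.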
Video data has been shown to account for a dominant portion of mobile data traffic and to strongly contribute to backhaul congestion in cellular networks. To tackle this problem, proactive caching is considered a prominent candidate in terms of cost efficiency. In this chapter, we study a novel popularity-prediction-based caching procedure that takes raw video data as input to determine an optimal cache placement policy and that deals with both published and unpublished videos. To deal with unpublished videos, whose statistical information is unknown, features are extracted from the video content and condensed into a high-dimensional vector. This vector is then mapped to a lower-dimensional space. This process not only alleviates the computational burden but also creates a new vector that is more meaningful and comprehensive. At this stage, different types of prediction models can be trained to anticipate popularity, using information from published videos as training data.
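The mapping of a high-dimensional content-feature vector to a lower-dimensional space can be sketched with a fixed random projection, a cheap stand-in for the learned dimensionality reduction described above; the dimensions and the toy feature vector are assumptions.

```python
import random

def random_projection(vec, out_dim, seed=0):
    """Project a high-dimensional feature vector to out_dim dimensions
    using a fixed random Gaussian matrix (seeded, hence reproducible)."""
    rng = random.Random(seed)
    proj = [[rng.gauss(0, 1) for _ in range(len(vec))] for _ in range(out_dim)]
    return [sum(w * x for w, x in zip(row, vec)) for row in proj]

features = [0.0] * 100   # toy 100-dim feature vector for one video
features[3] = 1.0
z = random_projection(features, out_dim=8)
```

The compact vector `z` would then feed a popularity regressor trained on published videos, whose view counts supply the labels.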
In this chapter, we discuss the application of edge caching to enhance the physical layer security of cellular networks with limited backhaul capacity. By proactively sharing the same content across a subset of base stations (BSs) through both caching and backhaul loading, secure cooperative multiple-input multiple-output (MIMO) transmission of several BSs can be dynamically enabled in accordance with the cache status, the channel conditions, and the backhaul capacity. We formulate a two-stage nonconvex optimization problem for minimizing the total transmit power while providing quality of service (QoS) and guaranteeing communication secrecy during content delivery, where the caching and the cooperative MIMO transmission policy are optimized in an offline caching stage and an online delivery stage, respectively. Caching is shown to be beneficial as it reduces the data sharing overhead imposed on the capacity-constrained backhaul links, introduces additional secure degrees of freedom, and enables a power-efficient communication system design.