Abstract

Combining multiaccess edge computing (MEC) technology with wireless virtual reality (VR) games is a promising computing paradigm. Offloading rendering tasks to edge nodes can compensate for the limited computing resources of mobile devices. However, current offloading works ignore the fact that, when rendering is enabled at a MEC server, the rendering operation depends heavily on the environment deployed on that MEC server. In this paper, we propose a dynamic rendering-aware service module placement scheme for wireless VR games over MEC networks. In this scheme, the rendering tasks of VR games are offloaded to the MEC server and closely coupled with service module placement. At the same time, to further optimize the end-to-end latency of VR video delivery, the routing delay of the rendered VR video stream and the cost of service module migration are jointly considered with the proposed placement scheme. The goal of this scheme is to minimize the sum of the network costs over a long time while satisfying the delay constraint of each player. We formulate our strategy as a high-order, nonconvex, and time-varying optimization problem. To solve this problem, we transform the placement problem into a min-cut problem by constructing a series of auxiliary graphs. Then, we propose a two-stage iterative algorithm based on convex optimization and graph theory to solve our objective function. Finally, extensive simulation results show that our proposed algorithm achieves lower end-to-end latency for players and lower network costs than the baseline algorithms.

1. Introduction

Wireless virtual reality (VR) games are becoming increasingly popular, and the global VR gaming market is projected to reach 45 billion dollars by 2025. A wireless VR game application is generally composed of two parts: a collection module and a service module. The collection module collects the geographic locations and actions of players and then delivers the collected information to the service module. The service module encapsulates all the necessary environments to perform logical calculations, render the scene, and synchronize game information among players [1]. Players of different VR games need different service modules to perform their respective rendering tasks. Players of the same VR game use the same service module and need to synchronize the information of that VR game with each other (such as character positions and scores). However, offering low-latency and high-quality VR gaming services to massive numbers of wireless players anytime and anywhere remains a major challenge [2–8].

Recently, introducing multiaccess edge computing (MEC) technology into wireless VR games has become a promising computing paradigm to address the above challenges [9–14]. By offloading the rendering tasks from mobile devices (e.g., VR headsets) to proximal MEC servers, the players’ requirements for ultrahigh computational capacity and strict response latency can be satisfied. Rendering refers to the process of generating images from a model, which is a representation of a 3D object or virtual environment defined by a programming language or data structure. Specifically, since the MEC server has higher computing power than the mobile device, rendering a VR task on the MEC server incurs less delay than rendering the same task on the mobile device [15–19].

However, edge rendering inevitably introduces edge computing delay and the transmission delay incurred by sending the rendered VR game video stream back to the mobile terminal. In particular, since the data volume of VR video streams is generally huge, the increase in delay is even more pronounced. It is therefore important to optimize the routing of rendered VR game video streams and to reasonably allocate edge resources, including wireless spectrum and computation. In addition, deploying service modules on MEC servers increases placement costs, and, limited by storage capacity, the service modules of all kinds of VR games cannot be deployed on every MEC server at the same time [20–23]. Moreover, a player’s rendering task can be performed on a MEC server only if the service module of the VR game that this player participates in has already been deployed on that MEC server [24–26]. Based on the above discussion, service module placement and computation resource allocation should be closely coupled to jointly optimize the wireless VR game delivery performance [27–29].

Moreover, in a MEC network scenario where multiple kinds of wireless VR games run concurrently, the geographical positions of players may change over time, and their access base stations (BSs) may change as they move. To keep the routing cost of the rendered VR video streams of a group low, the VR service module serving this group may need to migrate to a new base station. This situation increases migration costs [30–34], including hardware wear-and-tear costs and data migration delay costs. Dynamically optimizing the trade-off between the routing cost and the migration cost is therefore necessary.

In this paper, we propose a dynamic rendering-aware service module placement scheme. In this scheme, the rendering tasks of VR games are offloaded to the MEC server and closely coupled with service module placement. At the same time, to further optimize the end-to-end latency of VR video delivery, the routing delay of the rendered VR video stream and the service module migration costs are considered together with the proposed placement scheme. Specifically, the strategy jointly considers the bandwidth, computing, and storage resource allocation within each time slot and the service module migration between different base stations in adjacent time slots. The goal of this scheme is to minimize the sum of the network costs over a long time while satisfying the delay constraint of each player. The main contributions of this paper are as follows:
(i) We propose a dynamic rendering-aware service module placement scheme, which jointly optimizes service module placement and the associated rendering computation allocation. The goal of this scheme is to minimize the whole network cost while satisfying the players’ low end-to-end delay and high computing requirements.
(ii) We study the problem of how to dynamically place the VR service modules to achieve a good balance between the routing delay cost of the rendered VR video stream and the migration cost of the corresponding service module.
(iii) We transform our placement problem into the minimum cut problem by developing algebraic conversions and constructing a series of auxiliary graphs. Then, we propose a two-stage iterative algorithm based on convex optimization and graph theory to solve our objective function in polynomial time.

The rest of this paper is organized as follows. Section 2 introduces the system model. Section 3 presents the problem formulation. The proposed solution is presented in Section 4. In Section 5, simulation results are presented and discussed. Finally, the conclusion is given in Section 6.

1.1. Related Work

At present, most research on placement strategies focuses on reducing network delay and network overhead for users by reasonably deploying services, data, or virtual machines in suitable locations under limited network resources. However, current works ignore the dependency relationship between computing and storage. Paper [24] proposes a two-time-scale framework that jointly optimizes service placement and request scheduling while considering system stability and operation cost. Paper [1] provides a mix of cost models to optimize the deployment of collaborative edge applications and achieve the best overall system performance. Paper [25] proposes a distributed algorithm based on game theory to optimize virtual machine placement in mobile cloud gaming through resource competition, meeting the overall requirements of players in a cost-effective manner. Paper [35] proposes novel offline community discovery and online community adjustment schemes to reduce internode traffic and system overhead, solving the replica placement problem in a scalable and adaptive way. Paper [36] has some similarities with our work; it studies the joint optimization of service placement and request routing in MEC networks with multidimensional (storage-computation-communication) constraints. In paper [5], the authors propose a MEC-based dynamic caching strategy and an optimized offloading strategy to minimize system delay and energy. Paper [27] proposes a rendering-aware tile caching scheme to optimize the end-to-end latency of VR video delivery over multicell MEC networks. Paper [28] designs a view synthesis-based 360° VR caching system that supports MEC and hierarchical caching to meet the requirements of wireless VR applications and enhance the quality of the VR user experience.

Recent research on wireless VR mainly focuses on improving the quality of service (QoS), reducing network overhead, or both, through proper resource allocation, transcoding technology, the introduction of edge networks, etc. However, insufficient consideration is given to player mobility and to network scenarios where multiple kinds of wireless VR games run concurrently. Paper [4] proposes a blockchain-supported task offloading scheme to resist malicious attacks, which reduces the computing load of virtual machines and satisfies the high QoE requirements of VR users. Paper [10] proposes a MEC-enabled wireless VR network that uses a recurrent neural network (RNN) to predict each VR user’s field of view in real time and transfers VR rendering tasks from the VR device to the MEC server through a rendering model migration function. Paper [16] proposes an adaptive MEC-assisted virtual reality framework that can adaptively assign real-time virtual reality rendering tasks to MEC servers; meanwhile, the caching capability of MEC servers can further improve network performance. Paper [37] proposes a task offloading and resource management scheme for wireless virtual reality, which comprehensively considers caching, computing, and spectrum allocation and minimizes the content delivery delay while guaranteeing quality. Paper [38] studies a multilayer wireless VR video service scenario based on a MEC network; its main goal is to minimize system energy consumption and delay and to find a balance between these two indicators. Paper [11] proposes to minimize the long-term energy consumption of MEC systems based on THz wireless access by jointly optimizing viewport rendering offloading and downlink transmission power control to support high-quality immersive VR video services. Paper [39] proposes a novel transcoding-enabled VR video caching and delivery framework for edge-enhanced next-generation wireless networks. Paper [40] investigates the optimal wireless streaming of a multi-quality-tiled VR video from a server to multiple users by effectively utilizing the characteristics of multi-quality-tiled VR videos and the computation resources at the users’ side.

2. System Model

The MEC server is a microdata center that is typically deployed with a cellular base station or WiFi access point. Some lightweight virtualization technologies are used to virtualize the hardware resources in the MEC server to realize the flexible sharing of resources.

In this section, as illustrated in Figure 1, we consider a scenario of concurrent multiple kinds of VR games under the cellular network equipped with MEC servers. In this network scenario, there are players and base stations (BSs), where each BS is deployed with a MEC server. We represent the set of BSs as and represent the set of users as . The base stations are connected to each other in a wired way. We assume that there are kinds of VR games in this scenario, denoted by the set . Therefore, different service modules are required to support these VR games. In addition, to make dynamic decisions, we model our problem as a time-slotted system, where we use to denote the set of consecutive time slots under consideration. We assume that each time slot is much larger than the delay caused by transmission and processing.

In the remaining subsections, the mathematical models for communication, dynamic placement, rendering computation, and whole network cost are discussed. Some important notations are summarized in Table 1.

2.1. Placement Cost

In this section, we investigate the dynamic placement scheme of all VR service modules in the system.

We assume that the set of service module placement strategies can be denoted as , where represents that the VR service module is stored in BS at time ; otherwise, .

The cost for using the storage resources when placing a service module on an edge node is characterized by . The cost of placing the VR service modules can be expressed by the following formula:

We assume the storage capacity of BS is , and the size of the VR service module is . Since the total size of the VR service modules deployed in BS m should not exceed the maximum storage capacity of BS m, the constraint can be expressed as
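For illustration, the short Python sketch below evaluates the placement cost and checks the per-BS storage constraint for a given placement matrix. It is only a sketch, not the paper's implementation; all names and numerical values (x, c_store, module_size, S_max) are assumptions introduced for this example.

import numpy as np

M, K = 4, 3                                   # number of BSs and VR game types (hypothetical)
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(M, K))           # x[m, k] = 1 if service module k is placed on BS m
c_store = rng.uniform(1.0, 2.0, size=(M, K))  # storage cost of placing module k on BS m
module_size = rng.uniform(50, 150, size=K)    # size of each service module (e.g., in GB)
S_max = np.full(M, 600.0)                     # storage capacity of each BS (e.g., 600 GB)

placement_cost = float(np.sum(c_store * x))   # sum of c_store[m, k] * x[m, k] over all (m, k)
feasible = np.all(x @ module_size <= S_max)   # storage constraint holds on every BS
print(placement_cost, feasible)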

2.2. Migration Cost

When players move, their geographical locations change, and the BS that transmits the rendered data to them may also change. At the same time, the BS that originally provided the rendering service for a game group may no longer be the best choice to provide that service. The group may need to select a suitable new BS to perform rendering and may even need to deploy the corresponding VR service module on the newly selected BS. That is to say, the data of the service module may need to be migrated from the old MEC server to the new one, and the environment rebuilt on the new MEC server. However, migrating the VR service module causes hardware wear-and-tear costs and imposes data migration latency costs. The migration delay is the same for all players belonging to the same group and can be expressed as

In addition, the total migration cost can be expressed as where and can be, respectively, defined as where represents the migration delay of the VR service module and represents the cost of reconfiguring the VR service module . To keep the formulation reasonable, we bring and to the same order of magnitude by adjusting the parameter .
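As an illustration of this migration model between two adjacent time slots, the sketch below charges a migration delay plus a weighted reconfiguration (wear-and-tear) cost whenever a module's serving BS changes. The function and variable names (prev_bs, curr_bs, mig_delay, reconf_cost, beta) are hypothetical and only illustrate the structure of the cost.

def migration_cost(prev_bs, curr_bs, mig_delay, reconf_cost, beta=1.0):
    """prev_bs[k] and curr_bs[k]: BS hosting service module k at slots t-1 and t."""
    total = 0.0
    for k in range(len(curr_bs)):
        if prev_bs[k] != curr_bs[k]:              # module k was migrated in this slot
            total += mig_delay[k] + beta * reconf_cost[k]
    return total

# Example: module 1 moves from BS 0 to BS 2; module 0 stays in place.
print(migration_cost([0, 0], [0, 2], mig_delay=[0.5, 0.8], reconf_cost=[1.0, 1.2]))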

2.3. Rendering Cost

Players in the same group may have overlapping computational tasks; therefore, in this section, we assume that the MEC server performs rendering centrally after collecting all the information of the players in the group. Accordingly, we allocate the computing resources on each server at the group level.

In the MEC network, when the MEC server is serving only one group, that group can certainly get more computing resources to perform rendering, resulting in a low processing latency experience. However, in general, each MEC server needs to serve multiple groups at the same time, which can lead to competition for computation resources. In particular, if too many groups render on the same MEC server, the delays for all groups connected to this server will increase dramatically.

is the indicator that represents whether player joins VR game . Since one player can only join one kind of game, the corresponding constraints can be, respectively, formulated as

We use to denote the set of the rendering base station selection strategies. When the group selects the MEC server to perform the rendering task, at the time ; otherwise, .

In order to ensure the information synchronization between users in the same group, we assume that a group can only select one MEC server to process tasks at a time slot, so the corresponding constraints can be formulated as

Since the cost of placing the VR service module on a server is high, we place the VR service module on the BS that has been selected to process the group’s tasks. So, we can get the following formula:

We assume that the maximum computing capability of the MEC server is (Hz) and that the computing resource of BS allocated to group at time is . We use to represent the computing resource allocation scheme, and represents the computing resource needed by group at time . The rendering delay is the same for all players belonging to the same group. So, the rendering delay of player at time slot can be expressed as

So, the rendering cost can be denoted by the sum of the rendering latencies of all groups, which can be expressed by

At the same time, a MEC server cannot allocate more computing resources to the groups it serves than its maximum computing capacity. Therefore, the corresponding computing resource constraint can be formulated as
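A minimal sketch of this rendering model follows: each group's rendering delay is its required computation divided by the CPU frequency allocated to it by the serving MEC server, and the per-server allocations must not exceed the server's capacity. The numbers and names (cycles_req, f_alloc, F_max) are assumptions used only for illustration.

import numpy as np

F_max = 60e9                              # total computing capability of one MEC server (Hz)
cycles_req = np.array([8e9, 12e9, 5e9])   # CPU cycles needed by each group rendered on this server
f_alloc = np.array([20e9, 25e9, 15e9])    # frequency allocated to each group (Hz)

assert f_alloc.sum() <= F_max             # per-server computing resource constraint

render_delay = cycles_req / f_alloc       # rendering delay of each group (s)
render_cost = render_delay.sum()          # rendering cost: sum of the groups' rendering latencies
print(render_delay, render_cost)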

2.4. Communication Cost

In this section, we present the communication model of the mmWave-based mobile edge computing network, concentrating on the downlink transmission. We also introduce the routing transmission delay.

2.4.1. Downlink Delay

We use as the access scheme, where means that player is associated with BS at time to obtain the rendered game video stream, while denotes that player is not served by BS at time .

Moreover, a player cannot connect to multiple base stations at the same time, and we need to ensure that each player connects to a suitable BS. So we get the following constraint:

We adopt an orthogonal spectrum reuse scheme in this system; i.e., all BSs share the total frequency bandwidth, and there is no interference between the users served by the same BS. The data volume of the uplink transmission is small, including only some player information such as commands and actions, so the delay and cost of this process are ignored in this paper.

The downlink transmission is used to deliver the rendered VR video stream, whose data volume is much larger. Therefore, millimeter wave technology with large bandwidth is adopted for downlink transmission. We assume that all channels are subject to independent and identically distributed quasistatic Rayleigh block fading. The path loss can be expressed as follows: where is the downlink constant related to frequency, is the downlink path loss exponent at time , and is the distance between player and BS at time .

Millimeter waves have the characteristics of short wavelength, small power, and directional antennas. The interference between beams of the same frequency can be well reduced by millimeter wave interference cancelation technology. Since interference cancelation is not the focus of this paper and millimeter wave transmission tends to be noise-limited with weak interference, the interference in the millimeter wave transmission process is ignored in this paper, following papers [16, 41, 42]. So, the signal-to-interference-plus-noise ratio received by player from BS is expressed as follows: where is the downlink antenna gain using directional beamforming between player and BS at time , is the transmission power between player and BS , and is the variance of the additive white Gaussian noise (AWGN).

We assume that the spectrum bandwidth allocated to player by BS at time is and use as the bandwidth allocation scheme. Since the total bandwidth that a BS allocates to its associated players cannot exceed the whole bandwidth of the wireless access network at time , which is , the corresponding bandwidth constraint can be formulated as

Then, the downlink transmission rate between player and BS at time is

We assume that the size of the video images to be transmitted to player at time is , so the downlink transmission delay for player at time is

The delay of downlink transmission for all players at time , i.e., the downlink communication cost of the network, is
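The downlink model above can be illustrated with the short sketch below, which chains the path loss, the noise-limited SNR (interference is ignored, as assumed above), the Shannon rate of the allocated band, and the resulting transmission delay. All parameter values and names (antenna_gain, pl_const, pl_exp, noise_w, the 25 Mbit frame) are illustrative assumptions, not values from the paper.

import numpy as np

def downlink_delay(data_bits, bandwidth_hz, tx_power_w, distance_m,
                   antenna_gain=10.0, pl_const=1e-4, pl_exp=2.5, noise_w=1e-10):
    path_loss = pl_const * distance_m ** (-pl_exp)         # distance-dependent path loss
    snr = antenna_gain * tx_power_w * path_loss / noise_w  # noise-limited SNR (interference ignored)
    rate = bandwidth_hz * np.log2(1.0 + snr)               # Shannon rate of the allocated band (bit/s)
    return data_bits / rate                                # downlink delay of the rendered frame (s)

# Example: a 25 Mbit rendered VR frame over 200 MHz of mmWave bandwidth at 30 m.
print(downlink_delay(25e6, 200e6, 1.0, 30.0))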

2.4.2. Routing Delay

In this section, we divide the players into groups according to the VR games they participate in. Different groups need different service modules to perform rendering. We need to select an appropriate MEC server to perform rendering for group and route the rendered video stream quickly to the access BSs of the players belonging to group . The selected MEC server needs to have the corresponding VR service module deployed and sufficient computing resources to perform the rendering tasks.

According to the above assumptions, at time slot , the delay of routing the rendered VR content requested by player from the working (rendering) BS to this player’s access BS can be expressed as where is the delay of routing one bit of data from BS to BS ; when , .

The routing delay of all players at time , i.e., the routing cost of the network, is

So, the communication cost at time can be expressed as the sum of downlink transmission delay and routing delay.
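For illustration, the sketch below evaluates this routing cost for a small instance: each player's rendered stream is routed from its group's rendering BS to the player's access BS with a per-bit inter-BS delay (zero when the two BSs coincide); the total communication cost would then add the downlink delays computed above. The data and names (access_bs, group_of, render_bs, d_route) are hypothetical.

def routing_cost(access_bs, group_of, render_bs, data_bits, d_route):
    """Sum over players of (per-bit routing delay from rendering BS to access BS) * stream size."""
    total = 0.0
    for u, bs_u in enumerate(access_bs):
        bs_r = render_bs[group_of[u]]          # BS that renders for this player's group
        total += d_route[bs_r][bs_u] * data_bits[u]
    return total

d_route = [[0.0, 2e-9, 3e-9],                  # seconds per bit between BS pairs (hypothetical)
           [2e-9, 0.0, 1e-9],
           [3e-9, 1e-9, 0.0]]
print(routing_cost(access_bs=[0, 2, 1], group_of=[0, 0, 1],
                   render_bs=[1, 2], data_bits=[25e6, 25e6, 20e6], d_route=d_route))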

3. Problem Formulation

Our goal is to develop dynamic rendering-aware service module placement strategies. The goal of these strategies is to minimize the sum of the whole network costs over a long time while satisfying the delay constraint of each player. The strategies jointly consider the resource allocation scheme within each time slot and the service module migration scheme between different base stations in adjacent time slots.

We assume that the maximum tolerable delay of group is . According to the above formulas, the actual end-to-end delay of player at time slot can be expressed by the following:

We define as the weight coefficients, which represent the proportion of communication cost, rendering cost, placement cost, and migration cost in the objective function, respectively. So, the optimization problem can be formulated as follows:

Constraint ensures that a player cannot connect to multiple base stations at the same time and that each player connects to one BS. Constraint ensures that a group can only select one MEC server to perform rendering tasks in a time slot. Constraint ensures that the total bandwidth that a BS allocates to its associated players does not exceed the whole bandwidth of the wireless access network at time . Constraint ensures that a MEC server does not allocate more computing resources to the groups it serves than its maximum computing capacity. Constraint ensures that the total size of the VR service modules stored in BS does not exceed the maximum storage capacity of BS . Constraint ensures that the total delay of each group cannot exceed its maximum tolerable delay.
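To make the delay constraint concrete, the sketch below composes a player's end-to-end delay from the rendering, routing, and downlink delays modeled in Section 2 (plus the migration delay in slots where the serving module moved) and checks it against the group's deadline. The numbers are placeholders chosen only for illustration.

def end_to_end_delay(t_render, t_route, t_downlink, t_migrate=0.0):
    """Per-player delay in one slot: rendering + routing + downlink (+ migration if it occurred)."""
    return t_render + t_route + t_downlink + t_migrate

def meets_deadline(delays, deadline):
    return all(d <= deadline for d in delays)

delays = [end_to_end_delay(0.004, 0.0008, 0.011),           # player whose group's module did not move
          end_to_end_delay(0.004, 0.0008, 0.011, 0.002)]    # player whose group's module migrated
print(meets_deadline(delays, deadline=0.020))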

4. Solution

In this section, in order to solve the original problem efficiently, we decompose it into two subproblems: the dynamic access and service module placement problem and the quasistatic resource allocation problem. Then, we use minimum cut theory and convex optimization to solve these subproblems, respectively.

4.1. Problem Reformulation

Firstly, to remove constraint 1 and constraint 2, we redefine the sets and as and , respectively, where is the set of access decisions at time and represents the BS accessed by player , with a one-to-one mapping between it and the set . That is, and when . This encoding satisfies the constraint that a player can only access one base station at a time.

In the same way, is the set of BS selection decisions at time . represents the BS serving group at time , and there is a one-to-one mapping between it and the set . This encoding satisfies the constraint that a group can only select one MEC server to perform rendering tasks in a time slot.

So, the can be redefined as

Moreover, is the set of bandwidth allocation decisions at time . is the bandwidth that BS allocates to player at time . is the set of computing resource allocation decisions at time . is the computing resource that BS allocates to group at time .

Thus, we transform the original problem into the following problem:

where represents the set of all the groups that render on BS and represents the set of all the players that access BS at time . Constraint 5 can be satisfied by the -size minimum cut algorithm. is a binary function that equals if the specified condition holds and otherwise, and is the penalty function, which can be expressed as

Since our objective function contains both dynamic optimization and quasistatic optimization, we divide the objective function into two parts.

For part one,

We design an iterative algorithm to update the access decisions of players and the placement schemes of the VR service module in each round by performing an operation called expansion. Furthermore, we optimize the expansion by minimizing graph cuts.

For part two,

We use convex optimization to solve the resource allocation problem at each time slot.

4.2. Optimizing Dynamic Access and Placement Strategies by Graph Cuts

In this section, we introduce the expansion algorithm, explain how to construct a helper graph and encode the costs of part one into weights on the graph edges, and then demonstrate that the min-cut of the graph corresponds to the optimal decisions for the expansion.

4.2.1. Expansion

An expansion can be defined as a binary optimization; it reflects the tendency of moving the module serving group from its current base station to base station and of players switching their access from their current base station to base station . As shown in Figure 2, when we select BS as the expansion target, has a binary choice: stay as or change to . In the same way, has a binary choice: stay as or change to .

For ease of calculation, the result after an expansion can also be expressed by two indicator vectors with binary decision variables: (1) , where for all , we define if ; otherwise, ; (2) , where for all , we define if ; otherwise, . Note that if the module serving group is already on BS , , and if player already accesses BS , .

4.2.2. Transforming the

After performing an “ expansion,” we reconstruct the as using binary variables and ; at the same time, we define and . And we can get

Then, based on the definition of , we can rewrite it as

4.2.3. A Simple Example of Graph Cut

Based on the derivation above, we find that and correspond to the sum of the products of pairs of binary variables; corresponds to the sum of binary variables.

Taking and as simple examples, we next introduce how to minimize them by constructing a graph. The basic idea is to construct a helper graph such that the sum of the weights of the min-cut of the graph equals the optimal value of the objective function. The cut edges divide the nodes of the graph into two parts: the nodes on the side of node s take the value 0, and the nodes on the side of node take the value 1. In addition, the minimum cut can be computed in polynomial time only if all the edge weights are nonnegative. Next, we introduce how to build the graph for our example.

For , we reformulate the expression to construct each edge in a subgraph.

As illustrated in the first figure in Figure 3, the weight of edge between node and node is , the weight of edge between node and node is , and the weight of edge between node and node is , where . For example, when we divide the first graph’s nodes in Figure 3 into two parts by cutting the edge between nodes and , the edge between nodes and , and the edge between nodes and , node and node are in the same part, and node and node are in the same part (i.e., , , and , ). The value of the first graph function is , which is equal to the sum of the weights of the cut edges. In the last figure in Figure 3, the weight of edge between node and node is , and the weight of edge between node and node is .
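The sketch below reproduces this kind of toy construction with a generic max-flow/min-cut solver (networkx is used here purely as an assumption; any solver works). Edges from the source encode the cost of a node taking value 1, edges to the terminal encode the cost of taking value 0, and a symmetric pair of edges between two nodes encodes a nonnegative pairwise term; the cut value equals the encoded objective, and the side of the cut gives each node's binary value. All weights are hypothetical.

import networkx as nx

G = nx.DiGraph()
# Unary terms (hypothetical weights): s->v is cut when v takes value 1, v->t when v takes value 0.
G.add_edge('s', 'a', capacity=3.0)
G.add_edge('a', 't', capacity=1.0)
G.add_edge('s', 'b', capacity=1.0)
G.add_edge('b', 't', capacity=4.0)
# Pairwise term between a and b (must be nonnegative for polynomial-time solvability).
G.add_edge('a', 'b', capacity=2.0)
G.add_edge('b', 'a', capacity=2.0)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
labels = {v: (0 if v in source_side else 1) for v in G.nodes if v not in ('s', 't')}
print(cut_value, labels)   # cut weight and the binary value assigned to each node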

4.2.4. Constructing a Graph to Solve the Subproblem

In this section, we construct a graph such that the sum of the edge weights in the minimal cut set equals the optimal value of our objective function. In this graph, there are vertices corresponding to the players and vertices corresponding to the groups. Moreover, a source vertex and a terminal vertex are also included in the vertex set. As a result, the set of vertices of is given by .

Next, we add edges to the graph and assign each edge an appropriate weight. Firstly, based on the example of the last figure in Figure 3, the weights of the edges between node and node can be represented as , and the weights of the edges between node and node can be represented as .

Next, we rewrite formulas (30) and (31) to formulas (40) and (41) based on the example of the first figure in Figure 3.

Therefore, the weight of the edge between the vertex and vertex is where is always satisfied, which can be proved by the triangle inequality.

In the same way, the weight of the edge between the vertex and vertex is where is always satisfied, which can be proved by the triangle inequality.

In addition, based on the above derivation, we can also obtain that part of the weight of the edge between vertex and vertex is

Part of the weight of the edge between vertex and vertex is

Moreover, part of the weight of the edge between vertex and vertex is

Therefore, we can perform the following transformation of the objective function based on the above analysis:

The detailed process of the auxiliary graph construction is summarized in Algorithm 1.

Input: The network delay between BS and BS ; The
   switching cost of group at time ; The migration
   delay of group at time ;
Output: The value of binary variables and ; The auxiliary
   graph ; the variables and
1: Initialization; ;
2: fordo
3:   fordo
4:      do
5:         ;
6:         ;
7:      end for
8:      for Algorithm 1 do
9:         ;
10:         
           ;
11:         
12:      end for
13:   end for
14: end for
15: Solve the k-size s-t min cut [43] of ;
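The following schematic sketch outlines the kind of auxiliary graph one expansion step of Algorithm 1 builds: a vertex per player and per group, terminal edges whose weights come from the costs of keeping versus switching to the expansion BS, and pairwise edges coupling players to their groups; a plain s-t min cut is used here as a stand-in for the k-size s-t min cut of [43]. The weight formulas and all names (stay_cost, move_cost, couple_cost) are placeholders, not the paper's exact edge weights.

import networkx as nx

def build_expansion_graph(players, groups, stay_cost_p, move_cost_p,
                          stay_cost_g, move_cost_g, couple_cost):
    """stay/move costs: cost of keeping the current BS vs. switching to the expansion BS;
    couple_cost[u][g]: pairwise (routing-type) cost linking player u and group g."""
    G = nx.DiGraph()
    for u in players:
        G.add_edge('s', ('p', u), capacity=move_cost_p[u])   # cut if player u switches to the expansion BS
        G.add_edge(('p', u), 't', capacity=stay_cost_p[u])   # cut if player u keeps its current BS
    for g in groups:
        G.add_edge('s', ('g', g), capacity=move_cost_g[g])
        G.add_edge(('g', g), 't', capacity=stay_cost_g[g])
    for u in players:
        for g in groups:
            w = couple_cost[u][g]
            if w > 0:                                        # symmetric pairwise term
                G.add_edge(('p', u), ('g', g), capacity=w)
                G.add_edge(('g', g), ('p', u), capacity=w)
    return G

G = build_expansion_graph([0, 1], [0], stay_cost_p=[2.0, 1.0], move_cost_p=[1.0, 3.0],
                          stay_cost_g=[2.5], move_cost_g=[1.5], couple_cost=[[0.5], [0.8]])
cut, (src_side, _) = nx.minimum_cut(G, 's', 't')
print(cut, {v: int(v not in src_side) for v in G if v not in ('s', 't')})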
4.3. Resource Allocation Scheme Based on Convex Optimization

In this section, we mainly focus on the optimization of , that is, minimizing the total transmission and rendering delay in each time interval through the reasonable allocation of computing and spectrum resources. When and are determined, the original optimization problem can be expressed in the following form: where is the penalty function, which can be expressed as where goes to infinity, and and are constants when and are fixed.

Since the structure like is a well-known convex function, the optimization problem can be proved to be a convex problem.

Since the variable can affect multiple spectrum allocation variables, we denote these as global variables. Next, local copies of the global variables are introduced. Each base station can then obtain a distributed feasible solution by decoupling the above problem.

For BS , we introduce the new variables as the local information.

is the local variable and represents the bandwidth resource allocation scheme of BS . Thus, the feasible local variables of BS can be denoted as , and the constraint set of the objective function can be denoted as .

Let be the penalty function: when belongs to the constraint set , i.e., , we have ; otherwise, . So, the objective function is equivalent to where , and in the above objective function we can view as a constant.

We separate the objective function into multiple local functions, each corresponding to a BS. Each local function can determine its local variables using local information. The augmented Lagrangian of the problem is where are the vectors of Lagrange multipliers and is the penalty parameter.

In order to solve the above problem (46), the iterative process is as follows.

4.3.1. Local Variables

where denotes the iteration times.

Since the updating process of at each BS is independent, we can decouple the problem into independent subproblems. We can update the local variables by solving the following problem:

Since the above problem is convex, we solve it with CVX and then broadcast the decision of each BS to the other BSs.
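A minimal sketch of one such local update is given below, written with CVXPY as a stand-in for CVX: a single BS splits its bandwidth among its attached players to minimize the sum of downlink delays plus the ADMM augmented term, subject to its bandwidth budget. The objective form and all inputs (data_mbit, log_snr, z, lam, rho, W_total) are illustrative assumptions rather than the paper's exact problem (48).

import cvxpy as cp
import numpy as np

n_players = 3
data_mbit = np.array([25.0, 20.0, 30.0])     # rendered video size per player (Mbit)
log_snr = np.array([8.0, 6.5, 9.0])          # spectral efficiency log2(1 + SNR) per player
W_total = 200.0                              # total bandwidth of this BS (MHz)
z = np.full(n_players, W_total / n_players)  # current global copy of the allocation (MHz)
lam = np.zeros(n_players)                    # Lagrange multipliers
rho = 1e-3                                   # ADMM penalty parameter (tuning choice)

w = cp.Variable(n_players, pos=True)         # local bandwidth allocation (MHz)
delay = cp.sum(cp.multiply(data_mbit, cp.inv_pos(cp.multiply(log_snr, w))))
augmented = lam @ (w - z) + (rho / 2) * cp.sum_squares(w - z)
prob = cp.Problem(cp.Minimize(delay + augmented), [cp.sum(w) <= W_total])
prob.solve()
print(np.round(w.value, 2))                  # local decision broadcast to the other BSs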

4.3.2. Global Variables

The above problem is a strictly convex and unconstrained quadratic problem, because we add a quadratic regularization term to the augmented Lagrangian. Setting the gradient of to zero, we can get the following results:

And then, we can derive

By using , we can derive

In other words, we can obtain global variables by averaging the corresponding updated local variables in each iteration.

4.3.3. Lagrange Multipliers

At each iteration, we can calculate the Lagrange multipliers directly by using the updated local variables and global variables . The formulation can be represented as follows:

4.3.4. Stopping Criterion and Convergence

The above problem is a convex problem with strong duality. As the number of iterations approaches infinity, the algorithm converges. Therefore, reasonable stopping criteria are given as follows: where and indicate the tolerances of the primal feasibility and dual feasibility conditions, respectively, which are small positive constant scalars.

The above iteration process based on convex optimization is summarized in Algorithm 2.

1: Initialization the number of iterations , global variables
  and Lagrange multipliers ;
2: Set the maximum number of iterations and the stopping criterion threshold ;
3: while, and
4:   Each BS update by solving problem (48), and share the local solution to other BSs;
5:   Update the global variables according to the formula (52);
6:   Update the Lagrange multipliers according to the formula (54);
7:   ;
8: end while
9: Output the optimal solution;
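The consensus structure of Algorithm 2 (local update, global averaging, multiplier update, and residual-based stopping) can be sketched as below. The local objective here is a toy quadratic that stands in for problem (48) so that the local update has a closed form; rho, the tolerances, and the data are illustrative assumptions.

import numpy as np

n_bs, dim = 3, 4
rng = np.random.default_rng(1)
targets = rng.uniform(0.0, 1.0, size=(n_bs, dim))   # toy local data of each BS

rho, eps_pri, eps_dual, max_iter = 1.0, 1e-4, 1e-4, 200
z = np.zeros(dim)                                   # global variable
lam = np.zeros((n_bs, dim))                         # one multiplier vector per BS
x = np.zeros((n_bs, dim))                           # local copies

for it in range(max_iter):
    # Local update at each BS: argmin_x 0.5*||x - target||^2 + lam.(x - z) + rho/2*||x - z||^2
    x = (targets + rho * z - lam) / (1.0 + rho)
    z_old = z
    z = x.mean(axis=0)                              # global update: average the updated local copies
    lam = lam + rho * (x - z)                       # multiplier update with the primal residual
    r = np.linalg.norm(x - z)                            # primal residual
    s = rho * np.sqrt(n_bs) * np.linalg.norm(z - z_old)  # dual residual
    if r <= eps_pri and s <= eps_dual:              # stopping criterion
        break

print(it, np.round(z, 3))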
4.3.5. Two-Stage Iterative Algorithm Based on Expansion

Because there are many optimization variables in the original problem, the complexity of solving it directly is high. In order to reduce the algorithm complexity and obtain the solution of the original problem, we solve it in two steps by integrating the above two subalgorithms. Firstly, we feed the result of Algorithm 1 as a fixed input into Algorithm 2 and solve Algorithm 2; then, we compare the result of Algorithm 2 with the historical best result and update the related variables accordingly. The above process is summarized in Algorithm 3.

Input: Set of BSs, Set of players, Set of groups, Set of consecutive time slots;
Output: The variable , and the minimum value of the objective function ;
1: Initialization the variable , , , , and
2: for iter=1:do
3:   fordo
4:      run Algorithm 1, obtain , and
5:      for iter=1:T do
6:         run Algorithm 2, obtain , and
7:      end for
8:      ifthen
9:         ;
10:         , ;
11:      else
12:         ;
13:         , ;
14:      end if
15:   end for
16: end for

Since we traverse each MEC (Line 3 in Algorithm 3), the caching size can be restricted under at each round of expansion.
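The control flow of this two-stage iteration can be sketched as follows, where run_graph_cut and run_resource_allocation are stubs standing in for Algorithms 1 and 2 (the latter returns a random placeholder cost here). It only illustrates how the expansion loop traverses the candidate MECs and keeps the historically best decisions.

import math, random

def run_graph_cut(beta, current_best):
    """Stage 1 stub: would build the auxiliary graph for expansion BS beta and
    return access/placement decisions from its k-size s-t min cut."""
    return {'expansion_bs': beta}

def run_resource_allocation(decisions):
    """Stage 2 stub: would run the ADMM allocation (Algorithm 2) for the given
    decisions and return the resulting network cost; a random placeholder here."""
    return random.uniform(0.0, 1.0)

def two_stage(bs_set, max_rounds=5, seed=0):
    random.seed(seed)
    best_cost, best_decisions = math.inf, None
    for _ in range(max_rounds):                          # outer expansion rounds
        for beta in bs_set:                              # traverse every candidate MEC (Line 3)
            decisions = run_graph_cut(beta, best_decisions)    # Algorithm 1
            cost = run_resource_allocation(decisions)          # Algorithm 2
            if cost < best_cost:                         # keep the historical best
                best_cost, best_decisions = cost, decisions
    return best_cost, best_decisions

print(two_stage(bs_set=range(4)))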

4.4. Algorithm Complexity Analysis

Since Algorithms 1 and 2 are modules invoked by Algorithm 3 times, where is the number of MECs and is the maximum number of iterations in Algorithm 3, we investigate the complexity of Algorithms 1 and 2 separately. According to paper [44], the complexity of Algorithm 1 can be expressed as , where is the number of vertices and is the number of edges in the constructed graph. In our case, is bounded by and is bounded by . Therefore, the complexity of Algorithm 1 is , since . For Algorithm 2, the variables and are fixed, and the remaining problem can be broken down into solving the local optimization problem (48) at each BS using the ADMM algorithm, whose complexity is . With denoting the number of iterations required for Algorithm 2 to converge, the total computational complexity is . Therefore, the overall complexity of Algorithm 3 is .

5. Simulation Results and Discussions

In a wireless cellular network, it is assumed that players and base stations are randomly distributed in a circle with a radius of 100 m; other major simulation parameters are shown in Table 2.

To evaluate the performance of our proposed approach, we compare our expansion-based two-stage approach with two other approaches: (1) placing each VR service module randomly on a MEC server at each time slot, labeled “random placement,” and (2) solving the objective function with particle swarm optimization, labeled “particle swarm optimization.”

In Figure 4, we iteratively find the minimum value of the total network overhead when the maximum computing capacity of each MEC server is 60 GHz and the maximum storage capacity of each MEC server is 600 GB, where the total network overhead is the sum of the adjusted placement cost, communication cost, migration cost, and rendering cost, i.e., this paper’s objective function. As shown in the figure, the total network overhead of our proposed scheme and of particle swarm optimization decreases rapidly as the number of iterations increases at the beginning, and then converges to an almost constant value. Moreover, it can be seen from the iteration diagram that our proposed algorithm converges in about 18 generations, while particle swarm optimization converges in about 25 generations. Thus, compared with the other schemes, our proposed algorithm converges faster during the iteration process and maintains the lowest total network overhead.

Figure 5 shows the relationship between the computing power of the MEC server and the average user latency. Figure 6 shows the relationship between the computing power of the MEC server and the total network overhead. In these two figures, as the computing power of the MEC server increases, the average user latency and the total network overhead are greatly reduced. This is mainly because the more computing resources a MEC server can provide to the players, the lower the latency needed to perform rendering. At the same time, the richer the computing resources on the MEC servers, the more MEC servers the system can choose from to provide rendering services for a group of VR players, which saves routing costs.

Figure 7 shows the relationship between the storage capacity of the MEC server and the average user latency. Figure 8 shows the relationship between the storage capacity of the MEC server and the total network overhead. As shown in these two figures, the proposed placement strategy can effectively reduce the total network overhead. Moreover, as the storage capacity of the MEC server increases, the average user latency and the total network overhead are greatly reduced. This is mainly because the larger the storage capacity of the MEC server, the more VR service modules can be placed on each edge node, which reduces the migration costs between base stations to a certain extent. In particular, when only a few VR service modules can be placed on each MEC server, the VR service modules need to migrate frequently between base stations to meet the video processing requirements of constantly moving players. As shown in Figure 8, when the storage capacity of the MEC server is less than 600 GB, VR service module migration between base stations becomes frequent, and the total network overhead increases greatly.

In Figure 9, we compare the average user delay and the total network overhead without the delay constraint against those with the delay constraint. The network parameters are a maximum computing capacity of 60 GHz and a storage capacity of 600 GB for each MEC server. When the delay constraint of each user does not need to be satisfied, the feasible domain of the target problem becomes larger and the total network cost is lower than when the delay constraint is considered, but the average user delay increases. At the same time, some users cannot complete their corresponding video processing tasks within the tolerable delay, as shown in Figure 10.

6. Conclusion

In this paper, we develop dynamic rendering-aware service module placement strategies to minimize the sum of the network costs over a long time while satisfying the delay constraint of each player. The strategies jointly consider the resource allocation scheme within each time slot and the service module migration scheme between different base stations in adjacent time slots. Moreover, we propose a two-stage algorithm based on graph cuts and convex optimization to solve the objective function. In future work, we will study the online placement strategy of VR service modules to further improve user experience and reduce network overhead in the process of VR video stream delivery and computing. In addition, we will extend our work to the security [45] and low-delay delivery of various kinds of very large video streams.

Data Availability

The simulation data used to support the findings of this study are included in the article. The research status data used to support the findings of this study are available in the references of this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 62171061).