Abstract

To support the different QoS requirements of diverse types of services, a cross-layer QoS scheme providing differentiated QoS guarantees is designed. The scheme sets service priorities according to services’ data arrival rates and required end-to-end delays, so that different services receive different scheduling priorities. To better support QoS requirements and maintain fairness, the scheme introduces delay and throughput weight coefficients, and methods for calculating these coefficients are proposed. By decomposing an optimization problem whose objective is the weighted network utility using the Lyapunov optimization technique, the scheme can simultaneously support the different QoS requirements of various services. The throughput utility optimality of the scheme is also proved. To reduce the computational complexity of the scheme, a distributed medium access control scheme is proposed. A power control algorithm for this cross-layer scheme is also designed; the algorithm transforms power control into the solution of a multivariate equation. Simulation results obtained with Matlab show that, compared with existing works, the algorithm presented in this paper can simultaneously satisfy the delay demands of different services while maintaining high throughput.

1. Introduction

With the appearance of multimedia applications with diverse QoS requirements in multihop wireless networks [1], satisfying the different QoS demands of diverse services has become a hot research topic. Considering that wireless channels are shared among nodes and that different services have diverse QoS requirements, assigning transmission priorities to applications according to their QoS demands is an effective way to utilize wireless resources and provide QoS guarantees. Several algorithms follow this line of thinking. EDCA of the IEEE 802.11e standard [2] defines four access categories, corresponding to voice, video, best effort, and background, to give different applications different priorities in medium access control. Reference [3] adopts the 802.11e Access Control (AC) queue structure: routing control packets are prioritized according to the type of traffic associated with them, ensuring that high-priority packets are not penalized by control traffic. In [4], the delay of data transmission is chosen as the QoS metric, and packets in the queues of flows with a higher QoS level are delivered with higher priority. To support efficient video transmission, the scheduling algorithm in [5] assigns priority depending on the type of video frame; this video-based scheduling algorithm is combined with the 802.11e protocol. Designed for video transport, the policy of [6] computes counter values based on delay estimates and packet importance; under this policy, the packets with the lowest counter values gain the transmission opportunity. However, since no corresponding routing or flow control scheme is combined with the above priority-based scheduling algorithms, congestion of high-priority data packets may occur.
Service-differentiated routing algorithms [7, 8], which select routes in different ways depending on the type of traffic so that higher-priority packets are transmitted over higher-quality links, have also been proposed. However, these algorithms may cause an unbalanced distribution of packets in the network.

Different from the above layered QoS schemes, the Backpressure policy [9] applies the Lyapunov optimization technique to jointly determine routes and schedules. The policy can also be combined with flow control [10] to ensure that the admitted rate injected into the network layer lies within the network capacity region, as well as with the MAC [11], TCP [12], and application layers [13]. Owing to its throughput-optimal behavior across different network structures, the Backpressure cross-layer control scheme is a promising way to provide QoS guarantees. However, there is still little research on Backpressure policies designed to support the different QoS requirements of different types of traffic [14, 15]. In [14], services are divided into classes according to their QoS demands, and QoS requirements are supported by solving an optimization problem whose objective is to maximize the weighted utility of the classes under the QoS constraints of each class. However, under high traffic loads the fairness of the policy declines. Reference [15] proposes a Backpressure cross-layer algorithm which ensures that the delays of flows are proportional to the priorities of services while keeping optimal throughput utility. However, [15] does not consider the situation in which services have different arrival data rates.

In this paper, we consider both the arrival data rate and the QoS demands when setting priorities, and we study the effect of service priorities on QoS performance. We propose a cross-layer QoS scheme which can provide QoS guarantees for different types of services simultaneously. The key contributions of this paper can be summarized as follows.
(i) The paper proposes a Lyapunov optimization-based cross-layer scheme which can satisfy the different QoS requirements of various applications through priority differentiation. A method for calculating service priorities is also designed.
(ii) The paper introduces throughput and delay weight coefficients, updated according to measured QoS performance, to meet QoS demands better and maintain fairness.
(iii) To reduce the computational complexity, a distributed medium access control scheme is proposed. A power control algorithm that keeps the data transmission rates of all wireless links equal is also designed; it treats power control as the solution of a multivariate equation.
(iv) The utility optimality of the scheme is demonstrated with rigorous theoretical analysis. The policy is shown to achieve a time-average throughput utility that can be made arbitrarily close to the optimal value.

The rest of the paper is structured as follows. Section 2 introduces the system model and problem formulation. In Section 3, the algorithm is designed using Lyapunov optimization. The performance analysis of the proposed algorithm is presented in Section 4. Simulation results are given in Section 5. Conclusions are provided in Section 6.

2. Model and Problem Formulation

2.1. Network Model

Consider a multihop wireless network consisting of several nodes. Let the network be modeled by a directed connectivity graph , where is the set of nodes and represents a unidirectional wireless link between node and node j. M denotes the set of unicast sessions between source-destination pairs in the network. K denotes the set of services in each session. is the set of source nodes of service in session m. is the set of destination nodes of service in session . Packets generated at the source nodes traverse multiple wireless hops before arriving at the destination nodes. The system is assumed to run in a time-slotted fashion. There are two channels, a common control channel and a data channel, which use different communication frequencies. Each node can broadcast control packets consisting of channel access negotiation information, queue lengths, and node weight values on the common control channel, and can gain control information by monitoring that channel. The data channel is used for data communication. In this model, scheduling is subject to the following constraints [16]: is used to indicate whether link is used to transmit packets in time slot t. if , and if . denotes the transmit power from node to node in time slot . Constraint (1) means that, on the data channel, each node can either transmit or receive data in a given time slot, but not both. The SINR (Signal to Interference plus Noise Ratio) of link at node in time slot is calculated as follows: Node is the sending node, and node is the destination node of packets from node . Node denotes a neighbor node of node . When node sends packets, node is the destination node of packets from node z. denotes the transmission loss from node to node y. is the receiver noise at node . The achievable capacity of link in time slot is calculated as follows: represents the bandwidth of the data channel. Two necessary constraints must be satisfied for successful data transmission on link .
The first constraint can be expressed asThis constraint states that the SINR of link at node must be above the predefined SINR threshold .

However, if a new link is built, the transmission power from node to may cause additional interference at the receiving node of an existing link , and the SINR of link at node will decrease. To ensure that the new transmission does not impair existing transmissions and that the SINR of each existing link stays above the predefined SINR threshold , the second constraint is expressed as denotes the additional interference at node caused by the data transmission from node to in time slot . According to constraints (4) and (5), the maximum and minimum transmit powers of node to node in time slot can be written as
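As a concrete illustration of the SINR and capacity expressions (2) and (3), the sketch below computes the SINR at a receiver and the resulting Shannon capacity of a link. The function and parameter names are illustrative, since the paper's own symbols do not survive extraction here.

```python
from math import log2

def sinr(p_tx, gain, noise, interference):
    # SINR at the receiver: received signal power over noise plus
    # the aggregate interference from other concurrent transmissions.
    return (p_tx * gain) / (noise + interference)

def link_capacity(bandwidth_hz, sinr_value):
    # Shannon capacity of the link (bits/s) for a given bandwidth and SINR.
    return bandwidth_hz * log2(1 + sinr_value)
```

A link is then admitted only if its SINR stays above the threshold of constraint (4) without pushing any existing link below that threshold, per constraint (5).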

2.2. Virtual Queue at the Transport Layer

denotes the arrival rate of service in session injected into the transport layer from the application layer at the source node. is the maximum arrival rate of session m. is the admitted rate of session injected into the network layer. is an auxiliary variable called the virtual input rate. There is a virtual queue for every service in session at the service’s source node. The virtual queue at the transport layer of the source node is denoted by and is updated as follows: If each virtual queue is guaranteed to be stable, then by the necessary and sufficient condition for queue stability [17], it is apparent that , where denotes the time average of the time-varying variable . Therefore, the lower bound of can be derived from , which is calculable.
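The virtual-queue update (7) is not legible above, but in the Lyapunov drift-plus-penalty framework such queues conventionally take the max-plus form sketched below, with the admitted rate as the service process and the virtual input rate as the arrival process. This is a sketch under that assumption, with illustrative names.

```python
def update_virtual_queue(z, admitted_rate, virtual_input_rate):
    # Z(t+1) = max(Z(t) - r(t), 0) + gamma(t):
    # the admitted rate drains the virtual queue, the virtual input
    # rate fills it. Stability then forces the time-average admitted
    # rate to dominate the time-average virtual input rate.
    return max(z - admitted_rate, 0) + virtual_input_rate
```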

2.3. Data Queue at the Network Layer

The data backlog queue for service in session at the network layer of node is denoted by . In each slot , the queue is updated as where represents the set of nodes with . represents the set of nodes with . is the amount of data of service in session to be forwarded from node to in time slot . is an indicator function that equals 1 if and 0 otherwise. In addition, must not be greater than the transmission capacity of link in time slot .
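The backlog update (8) is likewise not legible above; the sketch below shows the conventional network-layer form it describes, under the assumption that outgoing service drains the queue while incoming forwarded data and (at the source node only) the admitted rate fill it. Names are illustrative.

```python
def update_data_queue(q, outgoing, incoming, admitted, is_source):
    # Q(t+1) = max(Q(t) - data forwarded to next hops, 0)
    #          + data received from upstream nodes
    #          + admitted rate (only at the service's source node).
    return max(q - outgoing, 0) + incoming + (admitted if is_source else 0)
```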

2.4. Design of Priorities of Services

represents the priority of service , which denotes the importance degree of service in scheduling. is calculated as follows: is the average data arrival rate of service . represents the maximum allowable end-to-end delay bound of service k. denotes the basic average data arrival rate. is the basic allowable end-to-end delay. and can be calculated as
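The explicit priority formula does not survive extraction here. Based on the surrounding text, the priority plausibly grows with the service's arrival rate and shrinks with its allowed delay, each normalized by the basic values. The following is a hypothetical instantiation under that assumption, not the paper's exact formula.

```python
def service_priority(arrival_rate, delay_bound, base_rate, base_delay):
    # Assumed form: a service with a higher arrival rate or a tighter
    # end-to-end delay bound (relative to the basic values) gets a
    # higher scheduling priority.
    return (arrival_rate / base_rate) * (base_delay / delay_bound)
```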

2.5. Design of Throughput and Delay Weight Coefficients

represents the delay weight coefficient of service . In every interval, the destination nodes of each service calculate the average end-to-end delay of the corresponding service and the delay weight coefficient. Taking service k as an example, at the destination nodes of service k, is calculated as Here, is the average end-to-end delay of service in interval . Similarly, the throughput weight coefficient of service k, , is calculated as follows: represents the average throughput of service in interval . denotes the required average throughput of service k.

By introducing the throughput and delay weight coefficients into the optimization objective, the QoS performance of each service is taken into account in the optimization. According to the calculation methods above, if the QoS performance of a service, including average end-to-end delay and average throughput, fails to meet its threshold values in an interval, the delay and throughput weight coefficients increase sharply. Meanwhile, the transmission probability of the packets of this service increases, which helps to better support its QoS requirements.
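The exact coefficient formulas are not legible above. The sketch below merely illustrates the described behavior, that a coefficient rises sharply once the measured QoS misses its threshold, using an exponential penalty as an assumed form.

```python
from math import exp

def delay_weight(avg_delay, delay_bound):
    # Stays at 1 while the measured delay meets the bound and grows
    # sharply once the bound is exceeded (assumed exponential form).
    return exp(max(0.0, avg_delay / delay_bound - 1.0))

def throughput_weight(avg_throughput, required_throughput):
    # Stays at 1 while the requirement is met and grows sharply once
    # throughput falls below the required value.
    return exp(max(0.0, required_throughput / max(avg_throughput, 1e-9) - 1.0))
```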

2.6. Throughput Utility Optimization Problem

Similar to the design of utility function in [18], let the utility function of service in session m, , be a concave, differentiable, and nondecreasing utility function with . The throughput utility maximization problem P1 can be defined as follows:Similar to the definition in Section 2.2, is the time average value of time-varying variable , and is calculated according to . Here, , , , and . is the capacity region of the network. The constraint is used to guarantee the stability of the network.

3. Dynamic Algorithm via Lyapunov Optimization

The Lyapunov optimization technique is applied to solve P1. and are used in the dynamic algorithm. Let be the network state vector in time slot . Define the Lyapunov function as The conditional Lyapunov drift in time slot is To maximize a lower bound for , the drift-plus-penalty function can be defined as where is the user-defined weight of the utility. The following inequality can be derived: here, , , and can be evaluated as follows: is a constant that satisfies In this paper, the transmission capacity of each link is assumed to be held constant by the power control algorithm. According to , , and , the constant must exist.
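The drift-plus-penalty expression minimized in each slot follows the standard form of [17]. A sketch in generic notation (the symbols $\Theta(t)$ for the network state, $\Delta(\Theta(t))$ for the conditional drift, $V$ for the utility weight, and $\alpha_k$, $\beta_k$ for the throughput and delay weight coefficients are assumptions, since the paper's exact symbols do not survive here):

```latex
\min \;\; \Delta(\Theta(t)) \;-\; V\,\mathbb{E}\!\left[\sum_{m \in M}\sum_{k \in K} \alpha_k \beta_k\, U_m^k\!\big(\gamma_m^k(t)\big) \;\middle|\; \Theta(t)\right]
```

Greedily minimizing (an upper bound on) this expression in every slot is what yields the separate rate control, routing, and scheduling subproblems of Section 3.1.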

The CADSP (Cross-Layer Algorithm with Differentiated Service Prioritization) scheme is based on the drift-plus-penalty framework [17]. The main design principle of the algorithm is to minimize the right-hand side of (17). The scheme consists of three parts: a joint flow control, routing, and scheduling scheme; a medium access control scheme; and a power control algorithm.

3.1. Joint Flow Control, Routing, and Scheduling Scheme

This scheme includes four parts as follows.

Source Rate Control. For sessions and at source node , the admitted rate is chosen to solve Problem (20), which is a linear optimization problem: if , is set to ; otherwise, it is set to zero.
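The bang-bang solution described for the linear problem (20) can be sketched as follows. Names are illustrative; the comparison between the transport-layer virtual queue and the network-layer backlog follows the structure of the text.

```python
def admitted_rate(z_virtual, q_network, r_max):
    # Linear objective => extreme-point solution: admit at the maximum
    # rate when the virtual queue exceeds the network-layer backlog
    # (admission reduces the bound), otherwise admit nothing.
    return r_max if z_virtual > q_network else 0.0
```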

Virtual Input Rate Control. For sessions and at source node , the virtual input rate is chosen to solve If is strictly concave and twice differentiable, (21) is a concave maximization problem with a linear constraint. can be chosen by where is the inverse of , the first-order derivative of . Since the utility function is strictly concave and twice differentiable, is monotonic, and therefore exists. If is a linear function, suppose . Then can be calculated as
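As a concrete case, take the logarithmic utility used later in the simulations, U(γ) = log γ, so that U'(γ) = 1/γ and its inverse is (U')⁻¹(x) = 1/x. A sketch of the resulting virtual input rate choice, assuming the weight coefficients are folded into a single `weight` factor and the rate is clipped to the feasible range:

```python
def virtual_input_rate(z, v, weight, r_max):
    # Maximizer of V*weight*log(gamma) - Z*gamma over [0, r_max]:
    # interior optimum at gamma = V*weight / Z, clipped to the range.
    if z <= 0:
        return r_max
    return min(v * weight / z, r_max)
```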

Joint Routing and Scheduling. At the node , routing and scheduling decisions for each service in session can be made by solving the following: denotes the capacity of link in time slot t, and is calculated according to (3). The first constraint of (24) indicates that the amount of data to be forwarded from one node to another node in a time slot should not be greater than the capacity of the link between these two nodes in time slot . The second constraint of (24) is built according to constraint (1) given in Section 2.1. The third constraint of (24) is built according to constraint (6) given in Section 2.1.

First, the best service and the best session whose data should be transmitted on link are chosen as The weight value of link is calculated as So the joint routing and scheduling problem reduces to the following problem: Transmission rates are chosen based on (27), which is hard to solve because it requires global knowledge and a centralized algorithm. We define as the set of transmit powers on each link and as the set of links that can be used for data transmission simultaneously when is the set of transmit powers. is defined as the set of . In each slot, the that maximizes is chosen as the set of scheduled links and transmit powers.
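The per-link selection in (25)-(26) can be sketched as follows, in the usual backpressure fashion: pick the (session, service) pair with the largest backlog differential across the link, then weight the link by that differential times the link capacity. The differentials are assumed precomputed, and names are illustrative.

```python
def link_weight(differentials, capacity):
    # differentials: dict mapping (session, service) -> Q_i - Q_j,
    # the queue-backlog difference between the link's endpoints
    # (possibly already scaled by the service priority).
    # Returns the best (session, service) pair for this link and the
    # link weight max(0, best differential) * capacity used in (27).
    best = max(differentials, key=differentials.get)
    return best, max(0.0, differentials[best]) * capacity
```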

Update of Queues. and are updated using (7) and (8) in each time slot.

3.2. Distributed Medium Access Control Scheme

Solving (27) is an NP-hard problem whose computational complexity is , where denotes the number of nodes in the network. Obviously, the computational complexity increases sharply with . To reduce it, a distributed medium access control scheme for routing and link scheduling is proposed in this section. The design principle of this distributed scheme is that nodes with higher weight values get higher probabilities of accessing the medium and transmitting data. When the runtime is long enough, the distributed medium access control scheme plays the same role as the GMS (Greedy Maximal Scheduling) algorithm, a centralized algorithm whose capacity region can reach 1/2 of the capacity region of MWM (Maximal Weighted Matching) [19], which is the basis of the centralized cross-layer routing and scheduling scheme proposed in Section 3.1.

The medium access control scheme is implemented in a time-slotted fashion on the common control channel. Nodes contend for the control channel in a manner similar to the IEEE 802.11 two-way RTS/CTS handshake. The medium access control logic is illustrated in Figure 1. The details of the scheme are as follows. (i) A central control node implements the power control algorithm and records the state information of existing links, including transmit powers, node positions, and noise at the receiving nodes. (ii) At the beginning of each slot, each node trying to send data chooses a random waiting time . The value of is calculated at the central control node and is related to the number of nodes in the network. (iii) After waiting for , each node sends on the control channel an IU packet that includes its weight value, the chosen next-hop node, its current position, and its noise. For the sending node , the receiving node of node is , and the weight value of node is . Each node also monitors the IU packets from other nodes to learn their weight values. (iv) Every backlogged node calculates its contention window and backoff counter [20] as follows: is randomly chosen from the range . (v) After from the beginning of the slot, each backlogged node continues monitoring the control channel. If the node senses an idle control channel for a period of , it can send an RTS packet that includes , along with information about the intended receiving node. (vi) After receiving an RTS packet from node i, the central control node checks whether the intended receiving node of node is already in transmission. The control node also runs the power control algorithm to decide whether the new link may be established.
If the new link is allowed to be established, the control node responds, after a period of SIFS, with a CTS packet that includes the new transmit powers and transmission time lengths of all sending nodes; otherwise, it responds with an NCTS packet indicating that the new link is not allowed. (vii) The sending nodes update their transmit powers after receiving the CTS packet. The intended receiving node of node prepares for data reception and responds with an ACK packet after successful reception. Without considering the weight value of node i, idle nodes update their contention windows and backoff counters after receiving the CTS packet. (viii) Node and the other idle nodes begin monitoring the control channel for further negotiation after receiving an NCTS packet. (ix) Each node is allowed to send at most three RTS packets per time slot.
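Step (iv) is only partially legible above. The following is a hypothetical sketch of a weight-dependent contention window, consistent with the stated design principle that higher-weight nodes should win the channel with higher probability; the exact formula from [20] may differ.

```python
import random

def contention_window(own_weight, max_weight, cw_min=16, cw_max=1024):
    # Assumed form: a node whose weight is close to the largest weight
    # it has overheard gets a small contention window (fast access);
    # low-weight nodes get a large window.
    if max_weight <= 0:
        return cw_max
    cw = int(cw_min + (cw_max - cw_min) * (1.0 - own_weight / max_weight))
    return max(cw_min, min(cw, cw_max))

def backoff_counter(cw):
    # Backoff counter drawn uniformly from [0, CW).
    return random.randrange(cw)
```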

3.3. Power Control Algorithm

The power control algorithm is implemented at the central control node of the network. The design objective of the algorithm is to ensure that the SINR at every receiving node equals . Assume that there are already links in the network. The links from sending nodes to receiving nodes are represented by . When node tries to transmit data packets to node , it sends an RTS packet on the control channel. After receiving the RTS packet from by monitoring the common control channel, the central control node computes transmit powers for all sending nodes that guarantee the following equalities: where . If can be obtained, the new link can be established. Equation (29) can be transformed into the following multivariate equations: where If , the link is allowed to be established. Here , and represents the maximum transmit power that node can support. The central control node then broadcasts , the new transmit powers of the sending nodes, on the common control channel.
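Setting every link's SINR equal to a common threshold β turns the power control into a square linear system in the transmit powers: for link i with direct gain g_ii, receiver noise N_i, and cross gain g_ij from the sender of link j, the equality SINR_i = β gives p_i·g_ii − β·Σ_{j≠i} p_j·g_ij = β·N_i. The sketch below builds and solves that system with plain Gaussian elimination; names and the gain-matrix convention are illustrative assumptions.

```python
def solve_powers(gains, noises, beta):
    """Solve for transmit powers making every link's SINR equal beta.

    gains[i][j]: channel gain from the sender of link j to the receiver
                 of link i (gains[i][i] is the direct-link gain).
    noises[i]:   receiver noise on link i.
    Returns the power vector, or None if the system is singular or any
    power is negative (the link set is infeasible at this SINR target).
    """
    n = len(noises)
    # Build A p = b: g_ii * p_i - beta * sum_{j != i} g_ij * p_j = beta * N_i.
    a = [[gains[i][i] if j == i else -beta * gains[i][j] for j in range(n)]
         for i in range(n)]
    b = [beta * noises[i] for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        if abs(a[col][col]) < 1e-12:
            return None
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    p = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * p[j] for j in range(i + 1, n))
        p[i] = (b[i] - s) / a[i][i]
    return p if all(x >= 0 for x in p) else None
```

The new link is then admitted only if the solved powers also stay below each sender's maximum supported transmit power.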

4. Performance Analysis

Theorem 1 (algorithm performance). Define and the optimization problem P2 as Define as the optimal value of and as the solution of P2. Under the central CADSP scheme proposed in Section 3.1, the following inequality holds:

Proof. According to Lemma 4 in [18] and similar to Theorem 3 in [16], the following inequality holds when : According to the equalities which can be obtained, and , (34) can be transformed into the following inequality: Since the CADSP scheme guarantees that and the function is nondecreasing, the following inequality can be obtained: Inequality (36) means that the overall throughput utility achieved by the algorithm in this paper is within a constant gap of the optimal value.

5. Simulation

5.1. Simulation Setup

The network used for the simulations contains 20 nodes randomly distributed in a square area of 900 m2. There are two unicast sessions with randomly chosen sources and destinations, and each session includes three services. Data are injected at the source nodes following Poisson arrivals. The simulation lasts 10000 time slots. All initial queue sizes are set to 0. The throughput utility function is . Table 1 summarizes the simulation parameters.

In this paper, the performance of CADSP is compared with the Backpressure scheme [21] and the PDA-PMF scheme [6]. The Backpressure scheme is a classical joint routing and scheduling algorithm that provides throughput utility optimality. The PDA-PMF scheme is a service-differentiated scheduling policy; in this simulation, it is combined with the AODV routing algorithm.

5.2. Simulation of Services with Different Delay Requirements

In this section, the average data arrival rates of all the services are the same. The maximum allowable end-to-end delay bounds of the services are set as  s,  s, and  s. The required average throughputs of the services are set as = 0.5 × 10^5 bits/s.

We compare the three solutions while varying the average data arrival rate and plot the average throughput of service 1 in Figure 2, which shows that CADSP outperforms Backpressure and PDA-PMF. When the average data arrival rate is lower than 5 × 10^5 bits/s, the three schemes obtain similar throughput performance. However, at higher average data arrival rates, CADSP and PDA-PMF perform much better than Backpressure: service 1 under CADSP and PDA-PMF is assigned the highest priority of the three services and thus gets more transmission opportunities than service 1 under Backpressure, which has the same priority as the other two services. Since the Backpressure-based algorithm has throughput optimality, CADSP outperforms PDA-PMF in throughput.

Figure 3 shows the average end-to-end delay performance of service 1 for the three solutions. When the average data arrival rate is lower than 5.5 × 10^5 bits/s, PDA-PMF performs best. But when the average data arrival rate is above 6.5 × 10^5 bits/s, the average end-to-end delay of service 1 under PDA-PMF exceeds the maximum allowable end-to-end delay bound of service 1. The average end-to-end delay under CADSP always stays below the bound, since CADSP uses the delay weight coefficient to provide a better delay guarantee. When the average data arrival rate is in the range from 0.5 × 10^5 bits/s to 4 × 10^5 bits/s, the average end-to-end delay of CADSP and Backpressure decreases. The reason is that the end-to-end delay is high when the traffic load is too low to form "queue length pressure" from the source nodes to the destination nodes.

The performance of the three solutions in terms of average throughput of service 2 is compared in Figure 4, which shows that CADSP outperforms Backpressure and PDA-PMF. When the average data arrival rate is lower than 4 × 10^5 bits/s, the three schemes obtain similar throughput performance. However, at higher average data arrival rates, CADSP performs better than Backpressure and PDA-PMF. Comparing with Figure 2, we can see that even under high traffic load, CADSP still maintains good average throughput for service 2.

Figure 5 shows the average end-to-end delay performance of service 2 for the three solutions. Since the priority of service 2 in CADSP is lower than that of service 1, its average end-to-end delay performance deteriorates. However, the average end-to-end delay of service 2 is still kept below its maximum allowable end-to-end delay bound through the delay weight coefficient. We can also see that, under low traffic load, PDA-PMF achieves the best average end-to-end delay.

In Figure 6, the average throughput of service 3 under the three solutions is compared. The figure shows that Backpressure, which gives every service the same priority, outperforms CADSP and PDA-PMF. Since service 3 in CADSP and PDA-PMF is scheduled with the lowest priority, its average throughput cannot increase with the average data arrival rate when the traffic load is high. However, thanks to the throughput weight coefficient, the average throughput of service 3 under CADSP always stays above the required average throughput of service 3.

The average end-to-end delay performance of service 3 for the three solutions is shown in Figure 7. Though CADSP performs worse than Backpressure in most conditions, its average end-to-end delay is kept below the maximum allowable end-to-end delay bound of service 3.

From the simulation results above, we can see that CADSP can support QoS requirements of all services.

We plot the overall throughput of the three solutions in Figure 8, which shows that Backpressure outperforms CADSP and PDA-PMF. The reason is that Backpressure provides throughput optimality, while the throughput optimality of CADSP is weakened by introducing the throughput and delay weight coefficients into the optimization objective.

5.3. Simulation of Services with Different Average Data Arrival Rates

The average data arrival rate of service 1 is four times that of service 3. The average data arrival rate of service 2 is two times that of service 3. The maximum allowable end-to-end delay bounds of the services are set as  s. The required average throughputs of the services are set as = 0.5 × 10^5 bits/s.

In Figure 9 the average throughputs of the three services using CADSP are compared. From the figure we can see that the ratio among the average throughputs of the three services is close to the ratio among the average arrival data rates of the three services.

The average end-to-end delay performances of the three services under CADSP are compared in Figure 10. The average end-to-end delays of service 1 and service 2 remain below the maximum allowable end-to-end delay bound. When the average data arrival rate is lower than 2 × 10^5 bits/s, the average end-to-end delay of service 3 exceeds its maximum allowable end-to-end delay bound. The reason is that the average data arrival rate of service 3 is too low to form the "queue length pressure" that pushes packets of service 3 from the source node to the destination node, which increases its average end-to-end delay.

6. Conclusions

This paper proposed a cross-layer QoS scheme which can provide different QoS guarantees for diverse types of services. By setting service priorities according to services’ data arrival rates and end-to-end delay demands, services with higher QoS demands gain better QoS performance. The delay and throughput weight coefficients in the objective of the optimization problem help to maintain the fairness of the policy and make the scheme support QoS requirements better, while the throughput utility optimality of the scheme is preserved up to a constant gap. A distributed medium access control scheme and a power control algorithm are designed to reduce the computational complexity of the scheme. Compared with existing works, the policy presented in this paper can simultaneously support the delay requirements of different services while maintaining high throughput.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of the manuscript.