Abstract

This paper presents the performance of a hard decision belief propagation (HDBP) decoder for Luby transform (LT) codes over additive white Gaussian noise (AWGN) channels and proposes three improved HDBP decoders. We first analyze the performance improvement obtained by sorting the ripple and delaying the decoding process in an HDBP decoder; accordingly, we propose the ripple-sorted belief propagation (RSBP) and the ripple-sorted and delayed belief propagation (RSDBP) decoders to improve the bit error rate (BER). Based on an analysis of the distribution of erroneous encoded symbols, we propose a ripple-sorted and threshold-based belief propagation (RSTBP) decoder, which deletes low-reliability encoded symbols to further improve the BER. Because the degree distribution significantly affects the performance of LT codes, we also propose a method for designing optimal degree distributions for the proposed decoders. Simulation results demonstrate that the proposed RSBP and RSDBP decoders provide significantly better BER performance than the HDBP decoder. RSDBP and RSTBP combined with the proposed degree distributions outperform state-of-the-art degree distributions in terms of the number of encoded symbols required to recover an input symbol correctly (NERRIC) and the frame error rate (FER). For a hybrid decoder formed by combining RSDBP with a soft decision belief propagation decoder, the proposed degree distribution also outperforms the other degree distributions in terms of decoding complexity.

1. Introduction

The Luby transform (LT) codes proposed in [1] are the first practical fountain codes and perform well for reliable communication over a binary erasure channel (BEC). Successful hard decision belief propagation (HDBP) decoding is possible when $(1+\epsilon)k$ encoded symbols are available, where $k$ is the number of input symbols and $\epsilon$ is the decoding overhead. With the advantage of being rateless, LT codes have been introduced in broadcast services and noisy channels [2]. The performance of LT codes over additive white Gaussian noise (AWGN) channels was investigated in [3]. To improve decoding performance, soft information is used in a soft decision belief propagation (SDBP) decoder, which serves as the decoding algorithm over noisy channels [4].

Different strategies have been proposed to improve the performance of LT codes over AWGN channels. A Gauss–Jordan-elimination-assisted belief propagation (BP) decoder was proposed to address the premature termination of BP decoding [5]; however, it is practical only for short LT codes. Generally, an SDBP decoder begins when all encoded symbols are available. Therefore, in greedy spreading serial decoding, encoded symbols are processed at once, and messages are propagated greedily to improve the convergence speed [6]. However, an increase in decoding complexity was demonstrated in [5, 6]. A cross-level decoding scheme that combines LT codes with low-density parity check (LDPC) codes was proposed in [7]. Although this method provided an effective decoding scheme, it required additional bit decoding from the LDPC code, thereby increasing the decoding complexity. The piggybacking BP decoding algorithm, which decreases the decoding overhead and decoding delay, was proposed for repeat-accumulate (RA) rateless codes [8]; however, it is useful only for RA rateless codes. A parallel soft iterative decoding algorithm was proposed for satellite systems [9]. Similar to the study in [7], it is effective only when LDPC codes are combined with LT codes in the physical layer. A low-complexity BP decoder was proposed to improve performance by deleting low-reliability symbols at the cost of a slight loss in transmission efficiency [10]. The BP-based algorithm was combined with the log-likelihood-ratio- (LLR-) based adaptive demodulation (ADM) algorithm to further reduce the decoding complexity [11]. The maximum a posteriori probability-based ADM algorithm was proposed to improve performance by discarding incorrect bits [12]. An adaptive decoding algorithm was proposed to reduce the decoding complexity by reducing the number of active check nodes [13], which degraded the performance of LT codes. In [10–13], the decoding complexity was reduced at the expense of increased overhead because unreliable symbols were deleted. The trade-off between performance and decoding complexity was analyzed in [14]. Reducing the decoding complexity is important for the practicability of LT codes over noisy channels; however, the decoding complexity of the SDBP decoder remains high.

Several degree distributions have been proposed for LT codes over AWGN channels. An optimization process was formulated to design a new degree distribution that improves the performance of LT codes over AWGN channels [15]. Three types of check-node degree distributions were proposed to improve the performance of systematic LT codes over AWGN channels [16]. A novel optimization model was proposed to design degree distributions over AWGN multiple access channels [17]. A ripple-based design of the degree distribution for AWGN channels was proposed in [18]. However, designing a good degree distribution and improving the performance of HDBP decoding over noisy channels remain open problems.

Compared with SDBP decoding, HDBP decoding significantly reduces the decoding complexity, which is extremely important for battery-powered equipment. HDBP decoding can effectively reduce the decoding complexity of a hybrid decoding scheme in which SDBP decoding is invoked only when HDBP decoding fails. Herein, the performance of HDBP decoding is analyzed, and improved HDBP decoders and their corresponding degree distributions are proposed. First, we investigate the ripple size throughout the decoding process and argue that sorting the encoded symbols in the ripple improves decoding performance; accordingly, we propose a ripple-sorted BP (RSBP) decoder. Based on the RSBP decoder, we discovered that the decoding performance improves when more encoded symbols are available before decoding starts; hence, we propose an improved BP decoder known as the ripple-sorted and delayed BP (RSDBP) decoder. Based on an analysis of the distribution of erroneous encoded symbols, we argue that low-reliability encoded symbols should be deleted to improve decoding performance and propose a ripple-sorted and threshold-based BP (RSTBP) decoder. Second, by analyzing a random walk model, we propose a method to generate a set of candidate ripple-size evolutions. A ripple-based design of the degree distribution known as the generalised degree distribution algorithm (GDDA) is used to generate the degree distribution [19], and the optimal degree distribution for a specific BP decoder is obtained via the Monte Carlo method. Simulation results demonstrate that our proposed RSBP and RSDBP decoders outperform the BP decoder in terms of the bit error rate (BER). Additionally, RSDBP and RSTBP combined with the proposed degree distributions outperform state-of-the-art degree distributions in terms of the number of encoded symbols required to recover an input symbol correctly (NERRIC) and the frame error rate (FER). For the hybrid decoder formulated by combining RSDBP with an SDBP decoder, the proposed degree distribution outperforms state-of-the-art degree distributions in terms of decoding complexity.

The remainder of this paper is organised as follows. In Section 2, the system model and the encoding and decoding of LT codes are reviewed. In Section 3, the performance of HDBP decoding is analyzed. In Section 4, our RSBP, RSDBP, and RSTBP decoders are presented. In Section 5, the performance of the proposed decoders is analyzed. In Section 6, a method to generate the optimal degree distribution for a specific BP decoder is proposed. In Section 7, our experimental design is outlined, and the efficiency of the proposed decoders and degree distributions is demonstrated by experimental results. Finally, our study is summarised in Section 8.

2. Background

2.1. System Model

Information messages must be transmitted from the source to the destination over AWGN channels. Messages are partitioned into blocks, and each block is partitioned into symbols. The input symbols of the LT codes are denoted as $X = \{x_1, x_2, \dots, x_k\}$, which is a combination of original symbols and a cyclic redundancy check (CRC). Typically, a single input symbol can be one bit or even a packet; for simplicity, one bit is regarded as an input symbol in this study. At the source, a stream of encoded symbols $c_1, c_2, \dots$ is generated from the $k$ input symbols. Each encoded symbol $c_i$ is modulated by binary phase shift keying (BPSK) and transmitted to the destination independently as $s_i = 1 - 2c_i$. At the destination, the output of the AWGN channel for each symbol is as follows:
$$r_i = s_i + n_i,$$
where $n_i \sim \mathcal{N}(0, \sigma^2)$, with $\mathcal{N}$ being the normal distribution. At the destination, encoded symbols are received to recover the input symbols. Generally, soft demodulation is performed on the encoded symbols. The log-likelihood ratio (LLR) of the encoded symbol $c_i$ is defined as
$$L_i = \ln \frac{P(c_i = 0 \mid r_i)}{P(c_i = 1 \mid r_i)} = \frac{2 r_i}{\sigma^2}.$$
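As a concrete illustration, the following minimal C++ sketch performs this soft demodulation under the BPSK mapping above, assuming a known noise variance; the function name and interface are ours, not the paper's.

#include <cstddef>
#include <vector>

// Soft demodulation for BPSK over AWGN. With s = 1 - 2c and noise
// variance sigma2, the LLR of each received sample r is 2 * r / sigma2.
std::vector<double> demodulate_llr(const std::vector<double>& received,
                                   double sigma2) {
    std::vector<double> llr(received.size());
    for (std::size_t i = 0; i < received.size(); ++i)
        llr[i] = 2.0 * received[i] / sigma2;
    return llr;
}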

In this study, HDBP decoding is concatenated with SDBP decoding to reduce the decoding complexity. The encoded symbols with their LLRs are passed to HDBP decoding, and the output of decoding is verified by a CRC. Decoding is successful if the CRC check passes; otherwise, SDBP decoding is invoked.
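The control flow of this hybrid scheme is summarized by the following C++ sketch. The decoder and CRC callables are placeholders, since the paper does not fix their interfaces; only the try-HDBP-then-fall-back logic is from the text.

#include <functional>
#include <vector>

using Bits = std::vector<int>;

// Run the cheap HDBP decoder first; accept its output only if the CRC
// passes, and fall back to the costly SDBP decoder otherwise.
Bits hybrid_decode(const std::vector<double>& llrs,
                   const std::function<Bits(const std::vector<double>&)>& hdbp,
                   const std::function<Bits(const std::vector<double>&)>& sdbp,
                   const std::function<bool(const Bits&)>& crc_ok) {
    Bits hard = hdbp(llrs);
    if (crc_ok(hard)) return hard;  // HDBP succeeded: no soft decoding needed
    return sdbp(llrs);              // HDBP failed the CRC: invoke SDBP
}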

2.2. LT Encoding

Given $k$ input symbols and a degree distribution $\Omega(d)$, a potentially infinite stream of encoded symbols can be generated according to Algorithm 1.

Pseudocode of LT encoding
Input: input symbols X = {x_1, ..., x_k}, degree distribution Ω(d)
Output: an encoded symbol c
1: initialize an encoded symbol c = 0
2: select a degree d from [1, k] according to Ω(d)
3: select d different input symbols from X and add them to a neighbor set V
4: for each input symbol v in V do
5:  c = c XOR v
6: end for
7: return c
Algorithm 1.
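A minimal C++ realization of Algorithm 1 for bit-valued input symbols is sketched below. The degree d is assumed to be drawn externally from Ω(d), and all names are ours; the partial shuffle simply selects d distinct neighbors uniformly at random.

#include <numeric>
#include <random>
#include <vector>

struct EncodedSymbol {
    int value = 0;                 // XOR of the selected input bits
    std::vector<int> neighbors;    // indices of the d input symbols
};

// Encode one symbol of degree d (d <= k assumed) from input bits x.
EncodedSymbol lt_encode(const std::vector<int>& x, int d, std::mt19937& rng) {
    std::vector<int> idx(x.size());
    std::iota(idx.begin(), idx.end(), 0);
    EncodedSymbol c;
    for (int j = 0; j < d; ++j) {
        // Partial Fisher-Yates shuffle: pick the j-th distinct neighbor.
        std::uniform_int_distribution<int> pick(j, (int)idx.size() - 1);
        std::swap(idx[j], idx[pick(rng)]);
        c.neighbors.push_back(idx[j]);
        c.value ^= x[idx[j]];      // c = c XOR v, as in Algorithm 1
    }
    return c;
}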
2.3. Hard Decision BP Decoding

BP decoding is widely used for LT codes and is implemented in different variants for different channels. HDBP decoding is used over the BEC, whereas SDBP decoding is used over noisy channels. We discovered that HDBP decoding concatenated with SDBP decoding can be used over noisy channels, which is analyzed herein. In HDBP decoding, the encoded symbols participating in the decoding process are regarded as correct symbols; hence, simple inverse XOR operations are performed. The pseudocode of HDBP decoding is shown in Algorithm 2, where decoding proceeds as the encoded symbols arrive. Decoding is completed when sufficient encoded symbols have been received.

Pseudocode of hard decision BP decoding
Input: encoded symbols received from channels
Output: recovered input symbols X̂
1: initialize ripple R as an empty queue
2: initialize recovered input symbols X̂ as an array
3: initialize waiting encoded symbols Y as an array
4: while not all input symbols are recovered do
5:  receive an encoded symbol y from channels
6:  if the degree of y is one then push(R, y)
7:  else add y to Y
8:  while R is not empty do
9:   dequeue a degree-one symbol from R and recover the corresponding input symbol x
10:   add x to X̂
11:   for each encoded symbol y in Y do
12:    if x is a neighbor of y then y = y XOR x and remove x from the neighbors of y
13:    if the degree of y is one then move y from Y to R
14:   end for
15:  end while
16: end while
17: return X̂
Algorithm 2.
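The following C++ sketch implements the peeling process of Algorithm 2, assuming each received symbol carries its hard-decided value and neighbor set; the data structures are illustrative rather than the paper's implementation.

#include <deque>
#include <unordered_set>
#include <vector>

// Encoded symbol as used by Algorithm 2: a hard-decided value plus the
// indices of its still-unresolved input-symbol neighbors.
struct Enc {
    int value = 0;
    std::unordered_set<int> nbrs;
};

// Peeling decoder sketch; returns how many of the k input symbols were
// recovered into x_hat (which must have size k).
int hdbp_decode(std::vector<Enc>& Y, std::vector<int>& x_hat, int k) {
    std::vector<char> known(k, 0);
    std::deque<int> ripple;                       // indices of degree-one symbols
    for (int i = 0; i < (int)Y.size(); ++i)
        if (Y[i].nbrs.size() == 1) ripple.push_back(i);

    int recovered = 0;
    while (!ripple.empty()) {
        int i = ripple.front();
        ripple.pop_front();
        if (Y[i].nbrs.size() != 1) continue;      // already consumed
        int v = *Y[i].nbrs.begin();
        if (known[v]) { Y[i].nbrs.clear(); continue; }  // redundant symbol
        x_hat[v] = Y[i].value;                    // recover input symbol v
        known[v] = 1;
        ++recovered;
        for (int j = 0; j < (int)Y.size(); ++j) { // inverse XOR into waiting symbols
            if (Y[j].nbrs.erase(v)) {
                Y[j].value ^= x_hat[v];
                if (Y[j].nbrs.size() == 1) ripple.push_back(j);
            }
        }
    }
    return recovered;
}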

3. Analysis of Hard Decision BP Decoding

For HDBP decoding, suppose $n$ encoded symbols are sufficient to recover the $k$ input symbols. The relationship between the input symbols and encoded symbols can be expressed by a Tanner graph. For example, the Tanner graph of four input symbols and five encoded symbols is shown in Figure 1.

3.1. Error Probability of Hard Decision BP Decoding

Let $p_i$ denote the error probability of the received encoded symbol $y_i$. Generally, the larger $|L_i|$ is, the smaller $p_i$ is, and vice versa. A decreasing function $f$ thus exists such that
$$p_i = f(|L_i|).$$

Let $q_j^{(i)}$ denote the error probability of the input symbol $x_j$ if it is recovered by the encoded symbol $y_i$, which is shown in formula (4):
$$q_j^{(i)} = \frac{1}{2}\Big(1 - (1 - 2p_i)\prod_{x_l \in N(y_i)\setminus x_j}(1 - 2q_l)\Big), \qquad (4)$$
where $N(y_i)\setminus x_j$ denotes the neighbors of $y_i$ except $x_j$. In HDBP decoding, input symbols are recovered individually in sequence; in Figure 1, more than one reasonable recovery sequence of the input symbols exists. For a given sequence, we define $S_j$ as the set of encoded symbols that are the only remaining neighbors of the input symbol $x_j$ at the end of decoding, so an input symbol may be recoverable by more than one encoded symbol. Let $D_j$ denote the set of encoded symbols supporting the decoding of $x_j$: we have $y_i \in D_j$ if $x_j$ is recovered by $y_i$; otherwise, $y_i$ is redundant. Therefore, we have
$$q_j = q_j^{(i)}, \quad y_i \in D_j.$$

Averaging over the $k$ input symbols, the error probability of HDBP decoding is shown in formula (6):
$$P_e = \frac{1}{k}\sum_{j=1}^{k} q_j. \qquad (6)$$

For a Tanner graph, several different supporting sets exist. Our aim is to optimize the supporting set of each input symbol to reduce the error probability of decoding; that is, each input symbol should be recovered by a supporting symbol with a lower error probability. As an example, a Tanner graph annotated with LLR values is shown in Figure 2, where the LLRs of two encoded symbols that can recover the same input symbol were set as -0.1 and 0.2, respectively. As shown in Figure 2(a), the input symbol is incorrect if it is recovered by the symbol with LLR -0.1; it is correct if it is recovered by the symbol with LLR 0.2, as shown in Figure 2(b).

3.2. Improvement in Error Probability

In HDBP decoding, each input symbol can be recovered by roughly two encoded symbols on average; in other words, we assume that an input symbol is recoverable by two encoded symbols. This is a valid assumption because the probability of an input symbol being recoverable by more than two encoded symbols is small. Consider the case in which the encoded symbols $y_1$ and $y_2$ both have only the input symbol $x$ as their remaining neighbor at the end of decoding. The error probability of $x$ is shown in formula (7) if it is recovered by $y_1$ or $y_2$ at random:
$$q_x = \frac{1}{2}(p_1 + p_2). \qquad (7)$$

Otherwise, the error probability of $x$ is as shown in formula (8) if it is recovered by the encoded symbol with the lower error probability:
$$q_x = \min(p_1, p_2). \qquad (8)$$

Because $\min(p_1, p_2) \le \frac{1}{2}(p_1 + p_2)$, the error probability of decoding is reduced whenever the encoded symbol with the lower error probability is selected to recover the corresponding input symbol.
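As a numerical illustration with assumed reliabilities $p_1 = 0.05$ and $p_2 = 0.20$, formulas (7) and (8) give
$$q_{\text{random}} = \tfrac{1}{2}(0.05 + 0.20) = 0.125, \qquad q_{\text{sorted}} = \min(0.05, 0.20) = 0.05,$$
so selecting the more reliable symbol more than halves the error probability of this input symbol.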

4. Improved Hard Decision BP Decoders

As shown above, the error probability of an input symbol can be reduced by selecting the supporting symbol with a lower error probability. In this section, we propose three improved HDBP decoders to reduce the error probability of decoding.

4.1. Ripple-Sorted BP Decoder

The structure of the HDBP decoder is shown in Figure 3. First, degree-one encoded symbols are added to the ripple to start the decoding. The symbols in the ripple are then processed individually until the ripple is empty. Two methods can be used to reduce the error probability of the recovered symbols in the decoding process: the first is to sort the symbols in the ripple; the second is to sort the symbols in the waiting array.

Lemma 1. Sorting the symbols in the waiting array can be replaced by sorting the symbols in the ripple, and both the RSBP decoder and waiting-array-sorted BP (WSBP) decoder can reduce the error probability of decoding.

Proof. The symbols released in each step depend only on the symbol being processed and the symbols in the waiting array; they are irrelevant to the order of the waiting array. In the WSBP decoder, the released symbols are sorted, whereas in the RSBP decoder, the released symbols are sorted after being added to the ripple. Assume, without loss of generality, that two symbols $y_1$ and $y_2$ with $p_1 \le p_2$ are released simultaneously. If the remaining neighbor of both $y_1$ and $y_2$ is the same input symbol $x$, then $x$ is recovered by $y_1$, and its error probability is reduced in both the RSBP and WSBP decoders. Hence, Lemma 1 is proven.

Lemma 2. For HDBP decoding, sorting the symbols in the ripple is better than sorting the symbols in the waiting array.

Proof. We assume that two symbols $y_1$ and $y_2$ exist in the ripple with $p_1 \le p_2$, and that the remaining neighbors of $y_1$ and $y_2$ are $x_1$ and $x_2$, respectively. Clearly, the error probability of any symbol released in this step is equal to or greater than $p_1$; therefore, the symbol with the minimal error probability should recover the corresponding input symbol in each decoding step. However, the error probability of a symbol released in a later step may be less than $p_2$. If such a symbol $y_3$ with $p_3 < p_2$ is released and added to the ripple when $y_1$ is processed, and the remaining neighbor of $y_3$ is also $x_2$, then $x_2$ should be recovered by $y_3$. The WSBP decoder cannot exploit such later releases because it fixes the order before release; in this case, the performance of the RSBP decoder is better than that of the WSBP decoder. Hence, Lemma 2 is proven.

The design of the ripple size evolution assumes that the ripple size should remain greater than one throughout the decoding process. Therefore, in theory, the performance of the RSBP decoder is better than that of the HDBP decoder. To analyze the performance improvement, the ripple size and waiting array size were analyzed by Monte Carlo simulations. The result is shown in Figure 4 for the degree distribution in [18], where the average ripple size and average waiting array size in each decoding step were calculated over 100000 simulations. The percentage of ripple sizes greater than one exceeds 80%, which means that the symbols in the ripple can be sorted based on the absolute LLR value. As shown in Figure 4, the waiting array size is large at the beginning of decoding, which means that the probability of two symbols being released in the same decoding step is high. Additionally, the number of symbols in the waiting array is larger than that in the ripple except near the end of decoding. Therefore, sorting symbols in the ripple is more efficient than sorting symbols in the waiting array. The proposed RSBP decoder can be implemented by replacing push(R, y) with pushAndSort(R, y) in Algorithm 2, as sketched below.
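A possible pushAndSort keeps the ripple ordered by descending absolute LLR so that the most reliable symbol is always processed first; the Sym record and all names below are assumptions, not the paper's code.

#include <algorithm>
#include <cmath>
#include <deque>

struct Sym { int index; double llr; };

void push_and_sort(std::deque<Sym>& ripple, const Sym& s) {
    // Binary search keeps each push at O(log |R|) comparisons (cf. Lemma 4).
    auto pos = std::lower_bound(
        ripple.begin(), ripple.end(), s,
        [](const Sym& a, const Sym& b) {
            return std::fabs(a.llr) > std::fabs(b.llr);  // higher |LLR| first
        });
    ripple.insert(pos, s);
}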

4.2. Ripple-Sorted and Delayed BP Decoder

For HDBP decoding, the number of symbols released in each step increases with the size of the waiting array; therefore, the performance also increases with the size of the waiting array. For example, as shown in Figure 5(a), an input symbol recovered by an erroneous encoded symbol is itself incorrect, and as a result of error propagation, the input symbols recovered subsequently are also incorrect. In Figure 5(b), the decoding process is delayed until sufficient encoded symbols are available; the same input symbol is then recovered by a correct encoded symbol, so the subsequently recovered input symbols are correct as well. Consequently, the encoded symbol with a high error probability becomes redundant.

Lemma 3. The more encoded symbols are available before decoding starts, the better the BER performance of decoding.

Proof. We assume that the input symbol $x$ can be recovered by one of the encoded symbols $y_1$ and $y_2$ with $p_1 < p_2$. If $y_2$ is processed before $y_1$ is available, the error probability of $x$ equals $p_2$; it is reduced to $p_1$ if decoding is delayed until $y_1$ is available. If more encoded symbols are available before decoding starts, the error probability of more input symbols is reduced in this way. Hence, Lemma 3 is proven.

Based on Lemma 3, we propose our RSDBP decoder, which delays the start of the decoding until $(1+\delta)k$ encoded symbols are received. The parameter $\delta$ depends on $k$ and the degree distribution; for example, $\delta$ is set as 0.16 for the degree distribution in [20]. The proposed RSDBP decoder can be implemented by starting the RSBP decoding process only after sufficient encoded symbols have been added to the waiting array, as sketched below.
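A sketch of the delay logic under these assumptions follows; SymbolT stands for the encoded-symbol record (e.g., Enc above), and $\delta = 0.16$ follows the example in the text.

#include <cstddef>
#include <functional>
#include <vector>

// Buffer arriving symbols and trigger the RSBP pass only once
// (1 + delta) * k symbols are waiting.
template <typename SymbolT>
void rsdbp_receive(std::vector<SymbolT>& waiting, const SymbolT& incoming,
                   int k, double delta,
                   const std::function<void(std::vector<SymbolT>&)>& rsbp_run) {
    waiting.push_back(incoming);
    if (waiting.size() >= static_cast<std::size_t>((1.0 + delta) * k))
        rsbp_run(waiting);   // start sorted decoding only after the delay
}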

4.3. Ripple-Sorted and Threshold-Based BP Decoder

Let $\rho$ denote the ratio of erroneous encoded symbols, which increases as the SNR decreases. The BER performance of the RSDBP decoder degrades as $\rho$ increases. To reduce the probability of incorrect encoded symbols participating in decoding, the encoded symbols with a high error probability should be deleted. The distribution of erroneous symbols can be analyzed using Monte Carlo simulations. For example, $M$ encoded symbols are generated and sorted by error probability, yielding a sorted set $S$. We partitioned $S$ into 100 segments. The ratio of erroneous encoded symbols in each segment is shown in Figure 6. As shown, only a small number of erroneous encoded symbols exist, and the ratio of erroneous encoded symbols increases with the segment index. Therefore, most erroneous encoded symbols can be removed from decoding if the tail of the sorted encoded symbols is deleted.

For HDBP decoding, a received encoded symbol $y$ will be deleted if $|L(y)| < t$, where $t$ denotes the threshold; otherwise, it will participate in decoding. Let $P_d$ denote the probability that an erroneous symbol will be deleted. For a given deletion probability $P_d$, the threshold $t$ can be calculated by Monte Carlo simulations, as shown in Algorithm 3.

Pseudocode for calculating the threshold
Input: deletion probability P_d
Output: threshold t
1: initialize an ordered array S (kept sorted by ascending error probability)
2: initialize i = 0 and n = 0
3: while i < M do
4:  generate a packet p and add it to S
5:  if p is erroneous then
6:   n = n + 1
7:  end if
8:  i = i + 1
9: end while
10: e = ⌈P_d · n⌉
11: i = M
12: while i > 0 do
13:  p = S[i]
14:  if p is erroneous then
15:   e = e - 1
16:  end if
17:  if e <= 0 then
18:   t = |L(p)|
19:   break
20:  end if
21:  i = i - 1
22: end while
23: return t
Algorithm 3.
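A compact C++ analogue of Algorithm 3 is sketched below. It estimates the threshold from M simulated BPSK transmissions of bit 0 and sorts by ascending |LLR| (the least reliable symbols first, mirroring the tail of S in Algorithm 3); M, the channel model, and all names are assumptions for illustration.

#include <algorithm>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

double estimate_threshold(double sigma, double p_delete, int M,
                          std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, sigma);
    std::vector<std::pair<double, bool>> syms;   // (|LLR|, is_erroneous)
    int errors = 0;
    for (int i = 0; i < M; ++i) {
        double r = 1.0 + noise(rng);             // received sample, s = +1
        bool err = (r < 0.0);                    // hard decision flips the bit
        syms.push_back({std::fabs(2.0 * r / (sigma * sigma)), err});
        errors += err;
    }
    std::sort(syms.begin(), syms.end());         // ascending |LLR|
    int to_cover = (int)std::ceil(p_delete * errors);
    if (to_cover <= 0) return 0.0;               // nothing to delete
    // Walk from the least reliable end until a fraction p_delete of the
    // erroneous symbols has been covered.
    for (const auto& s : syms)
        if (s.second && --to_cover == 0)
            return s.first;  // delete symbols whose |LLR| is at or below this
    return 0.0;
}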

Let $\gamma$ denote the ratio of encoded symbols deleted before decoding, which depends on $P_d$. Figure 7 shows the deletion ratio $\gamma$ as a function of the SNR. As shown, the deletion ratio decreases as the SNR increases, whereas it increases with $P_d$. Therefore, the trade-off between the BER performance and the overhead can be adjusted through $P_d$.

Based on the analysis of symbol deletion, we propose a new decoder, the RSTBP decoder, in which encoded symbols with higher error probabilities are deleted before decoding. The proposed RSTBP decoder can be implemented by deleting the encoded symbols whose absolute LLR falls below the threshold $t$.

5. Analysis of the Improved BP Decoder

Lemma 4. The decoding complexity of sorting the ripple satisfies the constraint $C_{\text{sort}} = O(n \log n)$.

Proof. The ripple size decreases as the decoding process proceeds. Initially, $r_0$ degree-one symbols are released and sorted, and the complexity of this sort is $O(r_0 \log r_0)$. Each of the remaining released symbols is subsequently inserted into the sorted ripple at a cost of at most $O(\log n)$ comparisons, and at most $n$ such insertions occur. Hence, Lemma 4 is proven.
The computational complexities of the four decoders are shown in Table 1, and the numerical results of the computational complexities obtained by simulations are shown in Table 2. It is noteworthy that the number of XOR operations depends only on the average degree and that the number of SORT operations in RSDBP is the same as that in RSTBP. The number of SORT operations in RSBP is slightly smaller than that in RSDBP because a small number of input symbols are recovered before all encoded symbols become available.

Lemma 5. For a given error-symbol ratio $\rho$ and deletion probability $P_d$, the number of erroneous encoded symbols participating in decoding is $(1 - P_d)\rho N$.

Proof. Let $N$ denote the number of encoded symbols received; we then have $\rho N$ erroneous encoded symbols. Since a fraction $P_d$ of them is deleted, the number of erroneous encoded symbols participating in decoding is $(1 - P_d)\rho N$. Hence, Lemma 5 is proven.

Lemma 6. The number of input symbols recovered directly by erroneous encoded symbols satisfies the constraint $N_e < \frac{1}{2}(1 - P_d)\rho N$.

Proof. On average, the encoded symbols form pairs that can recover the same input symbol. Since $\rho$ is small compared with 1, the probability of an erroneous encoded symbol being paired with a correct encoded symbol is high. Let $(y_1, y_2)$ with $p_1 \le p_2$ denote such a pair, without loss of generality. If one of $y_1$ and $y_2$ is erroneous, the probability that $y_2$ is the erroneous symbol exceeds 0.5, because erroneous symbols tend to have lower reliability, and the sorted decoder then recovers the input symbol from $y_1$. Therefore, more than half of the erroneous encoded symbols will be treated as redundant symbols. Hence, Lemma 6 is proven.

Definition 7 (error propagation probability). The neighbors of an encoded symbol are selected randomly. Therefore, the probability that an encoded symbol with degree $d$ is affected by a given erroneous input symbol satisfies the constraint
$$P_{ep}(d) = 1 - \left(1 - \frac{1}{k}\right)^{d} \le \frac{d}{k}.$$

We observed that the error propagation probability grows with the average degree; for example, no error propagation occurs when the average degree is one, because each encoded symbol then depends on a single input symbol. Hence, a trade-off exists between the error propagation probability and the overhead.

Definition 8 (number of affected encoded symbols). Let $d_1, d_2, \dots, d_k$ denote the degrees of the encoded symbols that recover the input symbols. The number of encoded symbols directly affected by an erroneous symbol in step $j$ satisfies the constraint
$$A_j \le \sum_{y \in Y_j} P_{ep}(d_y),$$
where $Y_j$ denotes the waiting array in step $j$.

Lemma 9 (total number of affected encoded symbols). Let $T$ denote the set of steps affected by the erroneous symbol in steps $j$ to $k$. The total number of encoded symbols affected by an erroneous symbol satisfies the constraint
$$A \le \sum_{t \in T} A_t.$$

Proof. Compared with $k$, the average degree of the encoded symbols is small; hence, $|T|$ is relatively small, and the number of encoded symbols affected by an erroneous symbol, directly and indirectly, is small. Therefore, the double counting problem can be disregarded; hence, the lemma is proven.

To evaluate the performance of LT codes over AWGN channels, we propose a new indicator known as the NERRIC, which is defined as follows:
$$\mathrm{NERRIC} = \frac{n}{k(1 - P_b)},$$
where $n$ denotes the number of received encoded symbols and $P_b$ denotes the BER of decoding.
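Under the definition above, the NERRIC of a simulation run can be computed directly; this helper and its interface are ours.

// Encoded symbols sent per correctly recovered input symbol. n, k, and
// ber come from a simulation run; ber < 1 is assumed.
double nerric(int n, int k, double ber) {
    return static_cast<double>(n) / (k * (1.0 - ber));
}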

6. The Optimal Degree Distribution for a Specific BP Decoder

Studies on the design of an optimal degree distribution for a specific BP decoder over AWGN channels are limited, as previously discussed. Herein, a method for designing a degree distribution for a specific goal is proposed. The ripple size evolution is important for the design of a degree distribution. A random walk was used to model the number of encoded symbols released in each step, and we assumed that this number follows a Poisson distribution.

Lemma 10 (symbol release). Let $P(m)$ denote the probability that $m$ encoded symbols are released in a decoding step. It satisfies the constraint
$$P(m) = \frac{e^{-1}}{m!}.$$

Proof. The number of encoded symbols released in a step follows a Poisson distribution. Because the $k$ input symbols are recovered in $k$ decoding steps, the expectation of this distribution is one, and substituting a unit mean into the Poisson probability mass function yields $P(m) = e^{-1}/m!$. Hence, Lemma 10 is proven.

Let $w$ denote the maximum number of encoded symbols released in a single decoding step. For a fixed $w$, Monte Carlo simulations can be used to generate a large number of ripple size evolutions, with each ripple addition modeled as a random walk with probability distribution $P(m)$. The ripple size evolution is modeled as follows:
$$R(j) = \mu_j + c\,\sigma_j,$$
where $\mu_j$ and $\sigma_j$ denote the average ripple size and the standard deviation of the simulation results in decoding step $j$, respectively, and $c$ denotes a parameter that adjusts the ripple size evolution. The ripple size as a function of the decoding step for different values of $c$ is shown in Figure 8. Clearly, the expected ripple size evolution can be generated by carefully adjusting the parameter $c$. Given the ripple size evolution, the degree distribution is obtained by applying the GDDA [19] to $R(j)$, the ripple size evolution determined by the parameters $c$ and $w$.
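The adjustment of the target ripple size evolution by the parameter $c$, under the reconstructed model $R(j) = \mu_j + c\sigma_j$, can be sketched as follows; the Monte Carlo estimates $\mu_j$ and $\sigma_j$ are assumed to be computed beforehand.

#include <algorithm>
#include <vector>

// Build the target ripple size evolution fed to the GDDA from per-step
// Monte Carlo estimates of the ripple mean and standard deviation.
std::vector<double> ripple_evolution(const std::vector<double>& mu,
                                     const std::vector<double>& sigma,
                                     double c) {
    std::vector<double> R(mu.size());
    for (std::size_t j = 0; j < mu.size(); ++j)
        R[j] = std::max(1.0, mu[j] + c * sigma[j]);  // keep ripple size >= 1
    return R;
}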

Let $\Omega_1$ denote a degree distribution designed to minimize the average overhead, and let $\Omega_2$ denote another well-designed degree distribution that decreases the average degree at the expense of increasing the average overhead. The average overhead and average degree as functions of the parameter $c$ are shown in Figure 9. The BER decreases as the average degree decreases for two reasons. First, the more encoded symbols participate in decoding, the more encoded symbols can recover the same input symbol, resulting in a decreased error probability of decoding. Second, error propagation decreases with the average degree. Therefore, the BER is in conflict with the average overhead. Additionally, the average degree directly determines the number of operations in the encoding and decoding processes.

Let $F(\Omega)$ denote the objective function of the degree distribution $\Omega$ (for example, the NERRIC or FER obtained by simulation). The search for the optimal parameters can then be converted into a pure optimization problem as follows:
$$(c^{*}, w^{*}) = \arg\min_{c,\, w} F\big(\Omega_{c,w}\big).$$

The variable $c$ ranges over $[-1, 1]$, and $w$ takes integer values of at least 3. Generally, for a fixed $w$, a lower value of $c$ appears desirable, as it decreases both the average degree and the BER at the expense of an increased average overhead.
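A brute-force search over this parameter grid might look as follows; the objective functor F, which would evaluate a candidate degree distribution by Monte Carlo simulation, is an assumed interface.

#include <functional>
#include <limits>

struct Params { double c; int w; };

// Exhaustive grid search over c in [-1, 1] (discretized) and integer w >= 3.
Params optimize(const std::function<double(double, int)>& F,
                int w_max, double c_step = 0.1) {
    Params best{0.0, 3};
    double best_val = std::numeric_limits<double>::infinity();
    for (int w = 3; w <= w_max; ++w)
        for (double c = -1.0; c <= 1.0 + 1e-9; c += c_step) {
            double v = F(c, w);          // e.g., simulated NERRIC or FER
            if (v < best_val) { best_val = v; best = {c, w}; }
        }
    return best;
}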

7. Numerical Results

In this section, simulation results are provided to validate our study. The decoding algorithms were implemented in C++ and executed on a computer with a Xeon E3-1505M CPU and 16 GB of RAM under Windows 10. The degree distributions proposed in [18, 20] were used in our simulations and are denoted as $\Omega_{18}$ and $\Omega_{20}$, respectively; our proposed degree distribution is denoted as $\Omega_p$. The BER as a function of the SNR is shown in Figure 10. The BERs of the RSBP and RSDBP decoders were better than that of the BP decoder, consistent with our analyses; for example, at one simulated operating point, the BER of the BP decoder was 0.115, whereas the BER of the RSDBP decoder was 0.082. The computational times of BP, RSBP, and RSDBP are shown in Table 3; as shown, the computational times of the three decoders were similar.

The RSDBP decoder combined with the proposed degree distribution was compared with the other decoders. The degree distribution $\Omega_p$ was designed to optimize the NERRIC by selecting appropriate values of $c$ and $w$; the optimal parameters and average degree of $\Omega_p$ are shown in Table 4, whereas the average degrees of $\Omega_{18}$ and $\Omega_{20}$ are shown in Table 5. As shown, the average degree of $\Omega_p$ is smaller than those of the others.

Figures 11 and 12 illustrate the NERRIC achieved by the different decoders and degree distributions for two values of $k$, respectively. Clearly, the RSBP and RSDBP decoders outperformed the BP decoder, which is consistent with the theoretical analysis. The improvement decreased as the SNR increased because barely any erroneous encoded symbols occur in channels with higher SNRs. Furthermore, RSDBP combined with the proposed degree distribution outperformed the other methods, and the improvement increased with the SNR. For example, at one operating point, the NERRIC of the RSDBP decoder combined with a state-of-the-art degree distribution was 1.241, whereas that of RSDBP combined with the proposed degree distribution was 1.217. This is because the optimization goal was to minimize the NERRIC, and the smaller average degree of $\Omega_p$ reduces error propagation.

The RSTBP decoder combined with the proposed degree distribution was compared with the other decoders; the optimal parameters of $\Omega_p$ under the different simulation settings are listed in Table 6.

Figures 13 and 14 show the NERRIC as a function of the SNR for the two values of $k$, respectively. As shown in the figures, the proposed degree distribution, designed to minimize the NERRIC, yielded better results than the others for both values of $k$. As the SNR increased, the advantage of $\Omega_p$ grew because its average degree was smaller. Furthermore, as the SNR increased, the NERRIC decreased more slowly; this is because the number of erroneous encoded symbols decreased as the SNR increased, and the numbers of encoded symbols deleted under the different settings approached one another.

In hybrid decoding, the decoding complexity decreases as the FER of the HDBP stage decreases, because the costly SDBP stage is invoked less often. For the RSDBP decoder, the degree distribution $\Omega_p$ can be tuned to achieve a lower FER at a fixed overhead. The optimal parameters of the degree distribution under the different settings are shown in Table 7.

Figures 15 and 16 show the FER as a function of the SNR for the two values of $k$, respectively. The proposed optimal degree distribution outperformed the others for the different fixed overheads. For instance, at one operating point, the FERs of a state-of-the-art degree distribution and of $\Omega_p$ were 0.0232 and 0.0138, respectively. This is because a better trade-off between the average overhead and the average degree was achieved, which reduced the effect of error propagation.

A hybrid decoder can be formulated by combining the RSDBP and SDBP decoders. Figures 17 and 18 show the decoding time as a function of the SNR for the two values of $k$, respectively. It was observed that $\Omega_p$ outperformed the others in terms of decoding complexity, as its FER was lower in the HDBP decoding stage.

8. Conclusions

Herein, we first analyzed the improvement of BP decoding obtained by sorting the ripple, delaying the decoding process, and deleting low-reliability symbols. Subsequently, we proposed three improved HDBP decoders, namely, the RSBP, RSDBP, and RSTBP decoders. We demonstrated that both RSBP and RSDBP outperform BP decoding in terms of the NERRIC, although the decoding complexity increases slightly. Compared with the RSDBP decoder, the RSTBP decoder further improves the NERRIC, but the average overhead increases. Furthermore, a ripple-size-evolution-based design of the optimal degree distribution was proposed. Numerical simulations demonstrated that the proposed degree distribution outperforms the others in terms of both the NERRIC and the FER. The proposed scheme is not limited to AWGN channels and LT codes; it can be readily extended to other noisy channels and to Raptor codes. In future work, the energy consumption of LT codes will be investigated to identify a balance among the FER, average overhead, and average degree.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the Beijing Natural Science Foundation (4194073).