Abstract

In the cloud computing era, a paper recommender system is usually deployed on a cloud server and returns recommendation results to readers directly. However, processing tremendous paper citation data on the cloud cannot provide fine-grained, personalized, and real-time recommendations for each reader: the recommended papers are far from readers and are often not strongly correlated with each other, which hinders deeper research in a reader's field of interest. Recently, edge-cloud collaboration-based recommender systems have been used to offload parts of the cloud computing task to the edge and provide recommendations near the client. Building on this idea, a keywords-driven and weight-aware paper recommendation approach, namely, LP-PRk+ (link prediction-paper recommendation), is presented to enable intelligent, personalized, and efficient paper recommendation services in the mobile edge computing environment. Specifically, the whole paper recommendation process covers two parts: optimizing the existing paper citation graph by introducing a weighted similarity (i.e., building a weighted paper correlation graph) and then recommending a set of correlated papers according to the weighted paper correlation graph and the reader's query keywords. Experiments on a real-world paper citation dataset, Hep-Th, show the capability of our proposal to improve paper recommendation performance and its superiority over related solutions.

1. Introduction

Generally, a recommender system is based on the cloud-client architecture, which raises the risk of delays due to network bandwidth and latency, data leakage during remote transmission, and uncorrelated recommendation results caused by processing tremendous multivariate heterogeneous data. In recent years, edge-cloud collaboration-based recommender systems have been proposed to solve these problems [1, 2]. In the mobile edge computing environment, a recommender system can utilize the real-time information of users on the edge end to provide better recommendations.

Currently, the paper recommender system is one of the major tools for readers to find the papers they require. For example, popular academic paper search tools on the mobile edge end (e.g., Baidu Academic and Google Scholar) allow readers to look for papers of interest among the massive papers registered on the web, based on a set of desired keywords. In practice, a single paper often covers only part of a reader's query keywords. Therefore, a paper search tool should recommend a set of papers that collectively contain all the query keywords to meet the reader's needs.

As shown in the right part of Figure 1, the fundamental paper recommendation process mainly consists of three phases in the cloud computing environment. First, a reader types a set of query keywords into the paper recommender system (e.g., {k1, k2, k3, k4, k5}). The second phase is papers' discovery, in which a vast number of candidate papers are identified by using traditional keyword search methods [3]. The third phase is papers' recommendation, in which the recommender system outputs all candidate papers to the reader.

However, finding a set of desirable papers from massive candidates on the cloud server is often a sophisticated job even for an experienced reader. The reasons are mainly threefold:

(1) As the paper recommender system in cloud computing usually returns recommendation results to readers directly, it cannot provide fine-grained, personalized, and real-time recommendations for readers.

(2) The returned paper list should meet two conditions. First, the papers should collectively cover all the query keywords from the reader. Second, the recommended papers should be correlated with each other so as to aid the reader in launching in-depth and continuous research on an identical topic.

(3) Traditional keywords-based paper recommendation approaches hardly analyze the possible correlations among different papers. Although paper citation graphs provide a good indicator of paper correlation relationships, they still face the serious problem of sparse data.

Currently, link prediction approaches are the best alternatives for addressing the sparse data problem of graphs (networks) [4]; they aim to find missing links and forecast future links based on known graph (network) information [5]. Specifically, link prediction approaches estimate the likelihood of building a new link between two unconnected nodes according to their graph structure and attribute information. Hence, applying link prediction to paper recommendation is of theoretical and practical significance.

To satisfy readers' requirements for paper search, we propose a novel keywords-driven and weight-aware paper recommendation approach deployed on the mobile edge end, that is, LP-PRk+ (link prediction-paper recommendation). As shown in the left part of Figure 1, LP-PRk+ integrates multiple operations: query keywords, link prediction, papers' discovery, and papers' recommendation. On the mobile edge end [6], by analyzing a reader's query keywords, LP-PRk+ can return a set of correlated papers (depicted by a connected subgraph) by mining the potential correlation patterns hidden in the weighted paper correlation graph. As output, the returned subgraph interconnects the target papers (containing query keywords) and bridging papers (containing no query keywords). Particularly, we treat a paper and its corresponding node as interchangeable in this paper.

Generally, we achieve the following main contributions:

(1) We propose a novel keywords-driven and weight-aware paper recommendation approach based on the mobile edge computing framework, i.e., LP-PRk+, which enables intelligent, personalized, and efficient paper recommendation services in the mobile edge computing environment.

(2) We optimize the existing paper citation graph model by introducing weighted similarity-based link prediction, thus constructing a weighted paper correlation graph.

(3) To evaluate the usefulness and feasibility of LP-PRk+, we conduct a group of experiments on a real-world paper citation dataset, Hep-Th.

The rest of the paper is organized as follows. Related research is summarized in Section 2. The paper's motivation is demonstrated in Section 3. In Section 4, we introduce the weighted similarity-based link prediction approach. Section 5 answers readers' keywords queries via the proposed solution. Section 6 analyzes and evaluates LP-PRk+ through experimental comparisons. Finally, we summarize the paper and point out future research directions in Section 7.

2. Related Work

2.1. Paper Recommendation

In general, the Collaborative Filtering (CF) approach calculates similarity scores among different items; thus, paper recommender systems can apply the CF approach to ratings matrices. Furthermore, the CF approach can also work on ratings matrices created from a paper citation graph [7]. Therefore, early works on paper recommendation mainly used the CF approach. With the application of the CF approach to paper recommendation, researchers found that it generally suffers from cold-start and data sparsity problems [8]. The Content-Based Filtering (CBF) approach [9] is similar to the CF approach but mainly focuses on the content relevance [10, 11] among papers. However, papers recommended in this way hardly support readers' deep and continuous research around an identical focus. Furthermore, the CBF approach encounters the semantic ambiguity problem.

As the citation relationships between papers can reflect the correlations among papers' research content, graph-based approaches have become a research focus. For example, Meng et al. [12] regarded authors, papers, topics, and keywords as nodes and multiple relationships as edges; their approach executes a random walk on the constructed four-layer heterogeneous graph to recommend papers. Furthermore, [13] proposed a graph-based PageRank-like paper recommendation approach, which mainly executes a biased random walk on paper citation graphs to recommend papers. Although [12] and [13] take papers' citation relationships into account, these approaches do not tackle the data sparsity in the existing paper citation graphs.

2.2. Link Prediction

Link prediction [14] is an important approach for resolving the link sparsity problem, as it can compute the likelihood of building new links between two unconnected nodes. At present, three types of link prediction have been identified: the similarity-based method, the maximum likelihood approach, and the probabilistic method. The similarity-based method is used in large-scale networks as it can compute a similarity score between any two nodes. The maximum likelihood approach predicts links by utilizing specific parameters, and the probabilistic method employs a trained model to forecast links [15]. However, these latter two approaches do not scale to large networks. In this paper, our proposal mainly employs the similarity-based method, considering that an existing paper citation graph is a large-scale paper relationship network [16].
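As a concrete illustration of the similarity-based family, the classical (unweighted) common-neighbor score can be computed directly from the adjacency structure. The sketch below uses plain Python sets; the graph and node labels are a toy example of ours, not the paper citation graph used later.

```python
def common_neighbor_score(graph, u, v):
    """Similarity-based link prediction: the more neighbors two
    unconnected nodes share, the more likely a future link."""
    return len(graph[u] & graph[v])

def rank_candidate_links(graph):
    """Rank all currently unconnected node pairs by their score,
    highest first; the top-ranked pairs become predicted links."""
    nodes = sorted(graph)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
             if v not in graph[u]]
    return sorted(pairs, key=lambda p: common_neighbor_score(graph, *p),
                  reverse=True)
```

For example, in a four-node graph where only `a` and `d` are unconnected but share two neighbors, `rank_candidate_links` returns that single pair as the best candidate link.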

Currently, more researchers have considered utilizing the weight of links in link prediction approaches. For example, the work of [17] tried to identify strong ties (e.g., spouses or romantic partners) in a social network, and these strong ties were treated as different link weights among users. Furthermore, [18] investigated the use of link strength in link prediction; their proposed weighting criterion was based purely on topological data (i.e., the frequency of interactions among nodes). In addition, link prediction models can be categorized into two types: unsupervised and supervised. For example, [18] and [19] made use of users' attribute information (e.g., gender and age) and interactive activities to estimate social relationship strength in an unsupervised link prediction model. Furthermore, [20] proposed a supervised link prediction model that used similarity metrics to calculate the similarity eigenvectors between a pair of nodes; these eigenvectors were regarded as new data for constructing the supervised model in predictive tasks [21].

In view of the above research, a novel keywords-driven and weight-aware paper recommendation approach, that is, LP-PRk+, is proposed in this paper to cope with the sparse data problem and recommend a set of correlated papers. Next, concrete examples are presented in Section 3 to further demonstrate the research motivation of the paper.

3. Research Motivation

In this section, the examples of Figures 2 and 3 are used to directly demonstrate our research motivation. Figure 2 describes a scenario in which a reader needs to research five keywords before writing a new paper: (1) keyword search (i.e., k1) is used by readers to search for interesting papers by typing query keywords; (2) the paper citation graph (i.e., k4) is for finding the correlation relationships among different papers (i.e., mining the potential correlation patterns); (3) link prediction (i.e., k2) is used to solve the data sparsity of the paper citation graph; (4) the Steiner tree (i.e., k3) [22] is utilized to find a set of correlated papers; and (5) dynamic programming (i.e., k5) [23] is applied to solve the Steiner tree problem. Thus, the reader obtains five corresponding query keywords, Q = {k1, k2, k3, k4, k5}.

Figure 3 is a part of an undirected paper citation graph. It contains 14 nodes (papers) covering diverse keywords. The annotation beside each node lists its keywords, its authors (e.g., a1), and its published time (e.g., 2021). An edge between two nodes indicates that they have an undirected citation relationship.

According to the query keywords, the reader easily obtains a set of papers in Figure 3. However, these returned papers fail to support the reader's deep and continuous academic research around an identical focus. In fact, a paper recommender system generally recommends a large number of candidate papers to readers, and the correlation relationships among candidate papers are invisible to readers. Therefore, readers can hardly select a set of correlated papers from these candidates. Fortunately, a paper citation graph depicts citation relationships among diverse papers, so it provides a promising way to mine the potential correlation patterns. However, the paper citation graph faces the sparse data problem; that is, it does not capture the potential correlations among papers not connected in the graph. For example, two nodes in Figure 3 may share common research content yet fail to build a correlation relationship.

Following the above examples, in the mobile edge computing environment, we first address the data sparsity of the paper citation graph and then recommend a set of correlated papers; these two steps are presented in detail in Sections 4 and 5, respectively.

4. Link Prediction of LP-PRk+

4.1. Weighted Similarity-Based Link Prediction Approach

According to the analysis of the research motivation, we propose a weighted similarity-based link prediction schema that follows a sequence of tasks [24]. In Figure 4, the link prediction process mainly comprises the following activities.

Activity 1. Graph Preprocessing. For simplicity, the directed paper citation graph is treated as an undirected graph.

Activity 2. Nodes' Weighting. This activity calculates the actual weight between node pairs of the undirected paper citation graph. The weights of both connected and unconnected node pairs are computed by employing the following KTA weighting criterion.

Criterion (Keywords, Time, and Authors (KTA)). As the number of common keywords of two papers increases and their published times get closer, their weight becomes larger. Furthermore, the theory of [25] states that two different papers sharing common authors tend to refer to each other, so common authorship also raises the weight. In the criterion, β, λ, and α ∈ (0, 1) are tunable parameters: higher (or lower) values of β, λ, and α intensify (or attenuate) the influence of keywords, published time, and authors, respectively. The criterion combines the keyword similarity, the published-time closeness, and the author similarity of the two nodes.

Activity 3. Score Calculation and Ranking.

(1) First, we obtain the actual weights of connected node pairs. Then, we calculate the weight of each unconnected node pair by using the weighted similarity function (i.e., weighted common neighbor). Finally, we sort the scores in descending order, and the maximum score [26] is saved.

Weighted Common Neighbor (WCN) [20]. The WCN computes the average weight between two unconnected nodes over their common neighbors, normalized by the number of common neighbor nodes; the maximum WCN score over the undirected paper citation graph is retained.

(2) Here, we directly use the actual weights of all unconnected node pairs to produce a descending-ranking list.

Activity 4. Links' Establishment. This activity connects the pairs of unconnected nodes whose scores rank highest, yielding the link prediction (LP) result.
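The weighting and scoring activities can be sketched as follows. The concrete functional form below (Jaccard similarity on keywords and authors, exponential decay on the publication-time gap) is one plausible instantiation of the KTA criterion, assumed for illustration; the parameter defaults mirror the values used in the experiments.

```python
import math

def kta_weight(p1, p2, alpha=0.3, beta=0.5, lam=0.9):
    """One plausible instantiation of the KTA (Keywords, Time, Authors)
    criterion: Jaccard similarity on keywords and authors, exponential
    decay on the publication-time gap. The exact functional form is an
    assumption; each paper is a dict with 'keywords', 'authors', 'year'."""
    kw = len(p1["keywords"] & p2["keywords"]) / max(len(p1["keywords"] | p2["keywords"]), 1)
    au = len(p1["authors"] & p2["authors"]) / max(len(p1["authors"] | p2["authors"]), 1)
    t = math.exp(-abs(p1["year"] - p2["year"]))
    return beta * kw + lam * t + alpha * au

def wcn_score(u, v, graph, weights):
    """Weighted Common Neighbor: average path weight through the common
    neighbors of two unconnected nodes u and v. `weights` maps ordered
    node pairs (u, z) to their KTA weight."""
    common = graph[u] & graph[v]
    if not common:
        return 0.0
    return sum(weights[(u, z)] + weights[(z, v)] for z in common) / (2 * len(common))
```

Two identical papers then receive the maximal weight β + λ + α, and the WCN score of an unconnected pair averages the weights along each two-hop path through their common neighbors.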

4.2. Weighted Paper Correlation Graph

In the mobile edge computing environment, LP-PRk+ builds a weighted paper correlation graph by using weighted similarity-based link prediction. The weighted paper correlation graph is defined as follows.

Definition 1. (Nodes). For each paper p, the weighted paper correlation graph has a mapped node. The node covers diverse keywords, which denote its main research content.

Definition 2. (Edges). For any pair of correlated nodes, the weighted paper correlation graph has a corresponding edge, which denotes the fact that the two papers have a correlation relationship.

Definition 3. (Weights). For any two connected nodes, the weight of their edge (larger than 0) directly indicates the strength of correlation between them. For example, given two edges sharing a node, the neighbor connected by the larger-weight edge has better "similarity" (correlation) with that node.

Definition 4. (Weighted Paper Correlation Graph (W-PCG)). The W-PCG is expressed as a triple consisting of a set of nodes, a set of edges, and a set of weights.
According to Definition 1, each node of the W-PCG contains diverse keywords. To efficiently answer the keywords queries of Section 5, LP-PRk+ needs to preestablish an inverted index Sk. Concretely, given a query keyword k, we can quickly retrieve all papers containing k. For example, Sk{k1} returns the set of nodes covering keyword k1.
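A minimal sketch of the inverted index Sk, assuming papers are given as a mapping from paper id to keyword set (ids and keywords below are illustrative):

```python
from collections import defaultdict

def build_inverted_index(papers):
    """Map each keyword k to the set of paper ids whose keyword set
    contains k, so a query keyword resolves to its candidate papers
    in a single lookup."""
    index = defaultdict(set)
    for pid, keywords in papers.items():
        for k in keywords:
            index[k].add(pid)
    return index
```

For example, `build_inverted_index({"p1": {"k1", "k2"}, "p2": {"k1"}})["k1"]` yields `{"p1", "p2"}`.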

5. Paper Recommendation of LP-PRk+

5.1. Problem Formalization of Keyword-Driven Pattern Mining

On the mobile edge end [27], we now introduce our paper recommendation approach based on a W-PCG, driven by a reader's query keywords. Specifically, given a query Q containing l (l ≥ 2) query keywords, i.e., Q = {k1, …, kl}, our proposal finds an optimal answer tree on the W-PCG, denoted as T(Q), where T(Q) is a connected tree that contains all query keywords and has the highest correlation. To better clarify the paper, we summarize the symbols in Table 1.

Figure 5 is a part of the W-PCG and contains the same nodes as Figure 3. In addition, a new predicted link (i.e., the blue line) is added in Figure 5. According to the query keywords of Figure 2, i.e., Q = {k1, k2, k3, k4, k5}, different groups of nodes contain the different query keywords. Thus, an answer tree T(Q) connects one node from each keyword group; furthermore, T(Q) may also connect bridging nodes that do not contain any query keywords.

Therefore, the reader will obtain a set of papers that differs from that of Figure 3.

According to the above example, in the mobile edge computing environment, the reader initially pursues a weighted Steiner tree T(Q) [28], defined as follows.

Definition 5. (Weighted Steiner Tree). Given a W-PCG and a set of query nodes, when T(Q) is a connected subgraph covering all these nodes, T(Q) is a weighted Steiner tree.
Considering the inverted index of Section 4, we recognize multiple groups of nodes according to the diverse query keywords from Q = {k1, …, kl}, where the nth group (1 ≤ n ≤ l) is the set of nodes covering the query keyword kn. Next, we need to find a group weighted Steiner tree that covers all query keywords. The definition is as follows.

Definition 6. (Group Weighted Steiner Tree). Given the W-PCG and multiple groups of nodes, when a weighted Steiner tree T(Q) selects exactly one node from each group (1 ≤ n ≤ l), T(Q) is a group weighted Steiner tree.
Multiple diverse group weighted Steiner trees may be obtained in answer to a reader's keywords query. In fact, our recommendation goal, in the mobile edge computing environment, is to recommend the "most correlative" set of papers covering all query keywords. Here, the weight of an edge denotes the "similarity" (correlation) of its two nodes. Hence, the recommendation goal is to find a group weighted Steiner tree with maximal weight. In practice, LP-PRk+ treats the weight-aware (i.e., correlation-aware) paper recommendation problem as an optimization problem whose objective follows "the smaller the weight, the better." To this end, we transform Figure 5 into Figure 6: in Figure 6, the weight of each edge is replaced by its reciprocal. For example, an edge weight of 0.5 in Figure 5 becomes 2 in Figure 6. In the remainder of this paper, we only use the converted W-PCG as illustration. Next, we must search for a group weighted Steiner tree with minimum weight. The definition is as follows.
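The similarity-to-cost conversion above can be sketched in one function; the edge labels below are illustrative only:

```python
def to_cost_graph(weights):
    """Convert similarity weights (larger = more correlated) into costs
    (smaller = better) by taking reciprocals, so the recommendation goal
    becomes a minimum group weighted Steiner tree."""
    return {edge: 1.0 / w for edge, w in weights.items() if w > 0}
```

Thus an edge with similarity 0.5 receives cost 2, matching the Figure 5 to Figure 6 transformation.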

Definition 7. (Minimum Group Weighted Steiner Tree). Given a set of alternative group weighted Steiner trees T1(Q), …, Tm(Q), the tree Ti(Q) (1 ≤ i ≤ m) whose weight equals min(w(T1(Q)), …, w(Tm(Q))) is a minimum group weighted Steiner tree, denoted as Tmin(Q).

5.2. Searching for Optimal Answer via Pattern Mining

According to a reader's keywords query Q, LP-PRk+ must search for a minimum group weighted Steiner tree Tmin(Q), representing the final solution for the reader, via mining on the W-PCG. However, computing Tmin(Q) is an NP-complete problem. Therefore, we utilize a dynamic programming (DP) technique [28] to solve it.

Generally, the DP technique first divides the MGWST (minimum group weighted Steiner tree) problem into a set of easier subproblems. Next, each distinct subproblem is addressed only once and its result is stored. Finally, the DP technique provides exact optimal solutions to readers by combining previously saved results.

In the DP model, T(v, K′) (K′ ⊆ K = Q) is a state: a tree rooted at node v that covers a set of query keywords K′, and w(T(v, K′)) denotes its weight. Letting N(v) be the set of neighbor nodes of node v and wu,v the weight of edge (u, v), the state transition is

w(Tmin(v, K′)) = min( min_{u ∈ N(v)} (w(Tmin(u, K′)) + wu,v), min_{K1 ∪ K2 = K′, K1 ∩ K2 = Ø} (w(Tmin(v, K1)) + w(Tmin(v, K2))) ).

Initially, a tree that does not contain any query keyword has infinite weight, and a tree consisting of a single query node has weight 0. The first term of the transition is the tree growth operation, which creates a new tree by adding a new node u to an existing tree, as shown in Figure 7(a); its pseudocode is specified in Algorithm 1. The second term is the tree merging operation, which creates a new tree by merging two trees rooted at the same node v, as shown in Figure 7(b); its pseudocode is specified in Algorithm 2. When K′ = K, LP-PRk+ returns the minimum group weighted Steiner tree Tmin(v, K), which contains at least one node.

Input: a tree T(v, K′); priority queue Q1
Output: Q1
(1) For each u ∈ N(v) do
(2)   If w(T(v, K′)) + wu,v < w(T(u, K′))
(3)     w(T(u, K′)) = w(T(v, K′)) + wu,v
(4)     enqueue T(u, K′) into Q1
(5)     update Q1
(6)   End If
(7) End For
(8) Return Q1
Input: a tree T(v, K′); priority queue Q1
Output: Q1
(1) For each T(v, K″) in Q1 s.t. K′ ∩ K″ = Ø do
(2)   If w(T(v, K′)) + w(T(v, K″)) < w(T(v, K′ ∪ K″))
(3)     w(T(v, K′ ∪ K″)) = w(T(v, K′)) + w(T(v, K″))
(4)     enqueue T(v, K′ ∪ K″) into Q1
(5)     update Q1
(6)   End If
(7) End For
(8) Return Q1

By repeatedly applying the tree growth and tree merging operations according to the above state transitions, Tmin(Q) is returned; the pseudocode of the MGWST algorithm is specified in Algorithm 3. The algorithm maintains two queues, Q1 and Q2: Q1 records intermediate trees containing only partial query keywords, and Q2 records qualified trees covering all query keywords K.

Input: query keywords K = {k1, …, kl}; the converted W-PCG
Output: Tmin(Q)
(1) Let Q1 = Ø, Q2 = Ø
(2) For each node v do
(3)   If v contains a nonempty query keyword set Kv ⊆ K
(4)     w(T(v, Kv)) = 0
(5)     enqueue T(v, Kv) into Q1
(6)   End If
(7) End For
(8) While Q1 ≠ Ø do
(9)   dequeue the minimum-weight tree T(v, K′) from Q1
(10)  If K′ = K
(11)    enqueue T(v, K′) into Q2
(12)    Continue
(13)  End If
(14)  apply tree growth (Algorithm 1)
(15)  apply tree merging (Algorithm 2)
(16) End While
(17) Return Tmin(Q) = Q2.top()

Note that, in the worst-case scenario, the MGWST algorithm may return the entire W-PCG as its output. In addition, the algorithm does not take into account the role of synonymy and word inflections in the paper recommendation [29–35] process.
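The MGWST procedure can be sketched in Python as a bitmask DP over states (v, S), in the spirit of Algorithms 1-3. The function and variable names are ours; using lazy deletion in the priority queue instead of an explicit "update Q1" is one implementation choice, and for brevity the sketch returns the optimal weight rather than the tree itself.

```python
import heapq

def mgwst_weight(graph, costs, keyword_nodes):
    """Bitmask DP for the minimum group weighted Steiner tree (MGWST).

    graph: node -> set of neighbor nodes.
    costs: frozenset({u, v}) -> converted edge weight (1/similarity).
    keyword_nodes: list of node sets, one set per query keyword.
    Returns the weight of T_min(Q), or inf if no covering tree exists.
    """
    full = (1 << len(keyword_nodes)) - 1
    best = {}  # state (v, S) -> best known weight; S is a keyword bitmask
    heap = []
    for i, nodes in enumerate(keyword_nodes):
        for v in nodes:  # a single query node covers keyword i at weight 0
            if best.get((v, 1 << i), float("inf")) > 0.0:
                best[(v, 1 << i)] = 0.0
                heapq.heappush(heap, (0.0, v, 1 << i))
    while heap:
        w, v, s = heapq.heappop(heap)
        if w > best.get((v, s), float("inf")):
            continue  # stale queue entry (lazy deletion)
        if s == full:
            return w  # first full-coverage state popped is optimal
        # Tree growth: extend the tree rooted at v along an incident edge.
        for u in graph[v]:
            nw = w + costs[frozenset((u, v))]
            if nw < best.get((u, s), float("inf")):
                best[(u, s)] = nw
                heapq.heappush(heap, (nw, u, s))
        # Tree merging: union two trees rooted at v with disjoint keywords.
        for (u, s2), w2 in list(best.items()):
            if u == v and s & s2 == 0:
                nw = w + w2
                if nw < best.get((v, s | s2), float("inf")):
                    best[(v, s | s2)] = nw
                    heapq.heappush(heap, (nw, v, s | s2))
    return float("inf")
```

On a toy path 1-2-3 with unit edge costs and query keywords held by nodes 1 and 3, the minimum answer tree is the whole path with weight 2; when a single node holds every query keyword, the returned weight is 0, matching the observation that one paper covering all keywords is the ideal answer.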

6. Experiments

To evaluate the usefulness and feasibility of LP-PRk+, large-scale experiments are conducted on the Hep-Th dataset [36].

6.1. Experimental Environment

Dataset. We select part of the Hep-Th dataset for the experiments (papers published between 1997 and 2003); this partial data contains 8721 papers and forms a paper citation graph. Each paper node stores the paper's published time and author information. Furthermore, the keyword information of each paper is constructed by employing the RAKE (rapid automatic keyword extraction) technique.

Experiment Settings. To obtain an optimal W-PCG, we execute the link prediction process depicted in Figure 4. Furthermore, we set the parameter values of the KTA weighting criterion as α, β ∈ {0.3, 0.5, 0.7, 0.9} and λ ∈ {0.3, 0.9}. In the keywords query experiments, three diverse sets of keywords are used, i.e., sets A, B, and C. In set A, the keywords of one paper are regarded as the query keywords, modeling readers who can provide all query keywords for their research content. In set B, the query keywords are randomly selected from more than one paper, as a reader's research content may cover diverse research topics. In set C, the query keywords are randomly selected from any two papers to further verify the feasibility of the MGWST algorithm. Furthermore, an author typically provides up to six keywords per paper, so there are up to six query keywords in each set. Each experiment is repeated 100 times and the average results are adopted.

Evaluation Criteria. We compare the following evaluation criteria:

(1) Number of new edges (the larger, the better): more new edges denote that our link prediction approach better alleviates the sparsity of the existing paper citation graph.

(2) Number of nodes, i.e., the number of recommended papers (the smaller, the better): fewer papers denote higher correlation.
Note that the returned solution contains at least one paper.

(3) Success rate [37] (the larger, the better): a recommendation result is successful when its number of papers is less than twice the number of the reader's query keywords.

(4) Weight: the weight of a recommendation result (the smaller, the better).

(5) Computation time: the time for generating an answer tree (the smaller, the better). Here, the computation time is reported on a logarithmic scale (log2).

(6) Precision [20]: the fraction of the recommendation result whose papers contain query keywords, i.e., Precision = |TP|/|Rp|, where TP denotes the set of recommended papers containing query keywords and Rp denotes the recommendation result.

(7) Recall [38]: defined over set C as the fraction of the papers cited by the source papers that appear in the recommendation result.

(8) F1 score: the harmonic mean of precision and recall, i.e., F1 = 2 × Precision × Recall/(Precision + Recall).
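Under the standard definitions assumed above, the three retrieval metrics can be sketched as follows (the set arguments in the example are illustrative):

```python
def precision(recommended, papers_with_query_keywords):
    """Precision: fraction of the recommendation result whose papers
    contain query keywords (|TP| / |recommendation result|)."""
    tp = recommended & papers_with_query_keywords
    return len(tp) / len(recommended)

def recall(recommended, relevant):
    """Recall: fraction of the relevant papers (here, the papers cited
    by the source papers of set C) that appear in the result."""
    return len(recommended & relevant) / len(relevant)

def f1_score(p, r):
    """F1: harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)
```

For instance, a result of four papers of which two contain query keywords has precision 0.5; if two of three relevant papers are recommended, recall is 2/3 and F1 = 4/7.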

Here, we compare LP-PRk+ with several similar paper recommendation approaches.

Baseline 1. Link Prediction-Random (LP-Random) [28]. The approach randomly finds a set of nodes from the W-PCG that collectively cover the reader's query keywords and then grows a minimum weighted spanning tree.

Baseline 2. Link Prediction-Greedy (LP-Greedy) [28]. Likewise, the approach selects a set of nodes containing all query keywords from the W-PCG, regards these nodes as initial root nodes, and continuously grows a tree until the nodes are interconnected.

Baseline 3. Link Prediction-Random Walk (LP-RW) [13]. The approach first builds a 2-layer graph using papers' keywords and the correlation relationships of the W-PCG and then runs a random walk on the 2-layer graph to recommend papers. Each query only uses the reader's entered keywords.

Baseline 4. Link Prediction-Random Walk Restart (LP-RWR) [12]. Likewise, the approach also uses the 2-layer graph for paper recommendation. If the state vector of LP-RWR grows steadily and linearly in the experiment, we consider that the approach achieves linear convergence.

Experimental Tools. All experiments are implemented in Python and executed on an Intel® Core™ CPU @ 3.0 GHz with 16 GB RAM and the Windows 10 (1809) 64-bit operating system.

6.2. Experimental Results

(1) Profile 1: The Number of New Edges. In this profile, we compare the number of new edges to select a set of appropriate parameter values that better alleviate the sparsity of the existing paper citation graph. Tables 2 and 3 present different experimental results, as paper keywords, published time, and author information all play a significant role in the link prediction process. According to these results, when α = 0.3, β = 0.5, and λ = 0.9, we obtain the best result; that is, the number of new edges is 348. Therefore, we select this set of parameter values to build the W-PCG.

(2) Profile 2: The Number of Nodes of Different Approaches. In this experiment, we test the number of recommended nodes (papers) of different approaches. Here, the number of readers' query keywords ranges from 2 to 6. As shown in Figure 8, recommending more papers is needed to satisfy readers' requirements as the number of query keywords grows. When readers can accurately provide all query keywords, Figure 8(a) shows that our approach can exactly answer readers' keywords queries; that is, the number of papers is 1. Furthermore, the experimental results of Figure 8(b) demonstrate that LP-PRk+ can find the necessary bridging nodes between target nodes. For the same keywords query, Figure 8 shows that the number of recommended papers of LP-PRk+ is smaller than that of the other two approaches (i.e., LP-Random and LP-Greedy). In fact, a recommendation result containing only a handful of papers means that these papers have higher correlation. Thus, the experimental results of Figure 8 directly show that our approach is superior to LP-Random and LP-Greedy.

(3) Profile 3: The Success Rate of Different Approaches. As shown in Figure 9, we compare the success rate of the different approaches on the different sets.
According to [39], if the number of query keywords equals 6, then the number of papers of a successful recommendation result must range from 1 to 12. When readers can accurately provide all query keywords, Figure 9(a) shows that the success rate of our approach is 100%. For the other keywords query cases, the experimental results of Figures 9(b) and 9(c) again show that the success rate of LP-PRk+ is 100%. However, LP-Random and LP-Greedy hardly obtain successful recommendation results, especially when the number of query keywords equals 6. In conclusion, the experimental results on set C verify the feasibility of our paper recommendation algorithm (i.e., the MGWST algorithm). Furthermore, the results of Figure 9 show that LP-PRk+ can recommend sets of papers with higher correlations than the other two approaches.

(4) Profile 4: The Weight of Different Approaches. In this paper, LP-PRk+, LP-Random, and LP-Greedy all return a tree containing all query keywords to readers. The weight of a tree can reflect the correlation of a set of papers: the smaller the weight, the higher the correlation. Hence, we compare the weights of the different approaches in this experiment. As shown in Figure 10, the weights of all approaches increase as the number of query keywords increases, since the approaches need to find more papers to meet readers' query requirements. For the same keywords query, the weight of our approach is less than that of the other two approaches (i.e., LP-Random and LP-Greedy), as LP-PRk+ adopts the weight optimization strategy. Furthermore, Figure 10(a) shows that the weight of LP-PRk+ equals 0, which further shows that our proposal can find one paper containing all query keywords.
According to the above experimental results, LP-PRk+ can guarantee to return a set of papers with the minimum weight; that is, these papers have the highest correlation.

(5) Profile 5: The Computation Time of Different Approaches. In this paper, the computation time is defined as the time for finding paper recommendation results. Thus, we compare the computation time of the different approaches. As LP-RW and LP-RWR only perform iterative and matrix operations during recommendation, their computation times are fixed values on sets B and C. Moreover, we only count the time until LP-RW achieves linear convergence, so the computation time of that approach is smaller than ours. As shown in Figure 11, the computation time of LP-PRk+, LP-Random, and LP-Greedy generally increases with the number of query keywords; that is, these three approaches all take more time to find an answer tree as the number of query keywords grows, and their computation time increases exponentially. As LP-Random and LP-Greedy both use extremely simple heuristics for selecting bridging nodes from the W-PCG, they find an answer tree faster than our approach. In fact, the recommendation results of LP-Random and LP-Greedy are not ideal for readers, as they contain unnecessary bridging nodes (see Figure 8). In most real cases, the computation time of LP-PRk+ is acceptable, which is the price to pay for saving readers more time and energy in achieving their research goals.

(6) Profile 6: The Precision of Different Approaches. For the diverse paper recommendation approaches, we compare the precision of their recommendation results. As shown in Figure 12, the precision of the approaches differs greatly for the same keywords queries.
Whether readers provide their query keywords accurately or randomly, LP-PRk+ can answer the readers' keyword query exactly, with a precision of 100%. However, the precision of both LP-Random and LP-Greedy ranges from 22% to 33% in the experiment. Therefore, the results in Figure 12 further show that readers can more easily realize their research aims by using our recommendation results.

(7) Profile 7: The Recall and F1 Score of Different Approaches. In this profile, we further compare the recall and F1 score of the different approaches on sets B and C. Here, the number of recommended papers is set to 20 for both LP-RW and LP-RWR. As can be seen in Figures 13(a) and 13(b), the recall and F1 score of LP-PRk+ range from 17% to 69% and from 29% to 81%, respectively; meanwhile, the other four approaches have lower recall and F1 scores than our approach. Likewise, Figures 13(c) and 13(d) also show that the recall and F1 scores of the other four approaches are lower than those of LP-PRk+. Therefore, the results in Figure 13 not only demonstrate that our proposal can satisfy readers' keyword query requirements but also further verify the feasibility of the MGWST algorithm (i.e., Algorithm 3).
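The precision, recall, and F1 score compared above follow the standard set-based definitions. The sketch below illustrates how these metrics are computed, assuming a recommended set and a ground-truth relevant set of paper IDs (the sample IDs are hypothetical):

```python
def precision_recall_f1(recommended, relevant):
    """Standard set-based retrieval metrics.

    precision = |recommended ∩ relevant| / |recommended|
    recall    = |recommended ∩ relevant| / |relevant|
    F1        = harmonic mean of precision and recall
    """
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1


# Example: 3 of the 4 recommended papers are actually relevant
p, r, f = precision_recall_f1(["a", "b", "c", "d"], ["a", "b", "c", "e", "f"])
print(p, r)        # 0.75 0.6
print(round(f, 2)) # 0.67
```

Fixing the recommendation list size of LP-RW and LP-RWR at 20, as in this profile, caps their achievable precision when fewer than 20 papers are relevant, while recall depends only on how many relevant papers the list captures.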

7. Conclusions

In this paper, a novel keywords-driven and weight-aware paper recommendation approach based on the mobile edge computing framework, namely, LP-PRk+, is put forward to address the sparsity and compatibility issues existing in paper recommendation. The recommended papers not only collectively contain the readers' query keywords but also have the highest correlation degree. Therefore, the recommended papers can significantly promote readers' in-depth and continuous research around an identical focus. Furthermore, our proposal indirectly mitigates the latency of recommendation results by adjusting the order of paper recommendation in the mobile edge computing environment. Finally, the experimental results demonstrate the feasibility of LP-PRk+ in terms of multiple evaluation metrics. In summary, our proposal can provide intelligent and personalized paper recommendation services in the mobile edge computing environment and further improve the quality-of-retrieval experience of readers.

Although our work shows desirable results, several issues remain unsolved. First, the link prediction approach adopted in our recommender system [40–46] is somewhat naive and straightforward; more refinements are needed in the future. Second, readers' paper search is often a multiobjective decision-making problem involving a number of influencing factors [47–54] in the mobile edge computing environment. We will further improve our proposal by taking these factors into account.

Data Availability

The experimental dataset Hep-Th used to support the findings of this study is available at http://snap.stanford.edu/data/cit-HepTh.html.

Additional Points

This work is intended to improve and refine the model in [40], thereby enabling intelligent, personalized, and efficient paper recommendation services in the mobile edge computing environment.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Key Research Base of Philosophy and Social Sciences in Jiangsu Universities “Academic Center of Huang Yanpei Vocational Education Thought Research Association,” Jiangsu University Philosophy and Social Science Research Project (2020SJA0675), Scientific Research Project of Nanjing Vocational University of Industry Technology (2020SKYJ03), Jiangsu Province Modern Education Technology Research Project (2020-R-84365), and National Vocational Education Teacher Enterprise Practice Base “Integration of Industry and Education” Special Project (Study on Evaluation Standard of Artificial Intelligence Vocational Skilled Level).