Peer review is a standard professional practice designed to assess the quality and feasibility of scientific and other academic projects and papers. Since appointed peer review experts directly create and/or contribute to the review results, and thus influence responses to the applicants' project proposals, expert assignment has been one of the most important tasks in project management (Huang & Zhong, 2016).

The core issue in peer review expert assignment is how to ensure its accuracy and impartiality (Gandhi & Sauser, 2008). Accuracy in this context means that the experts appointed for project review should be very familiar with the related research field and evaluate the project correctly and precisely. Impartiality means that these experts should make unbiased (independent and fair) reports on all the projects they review (Wu, 1996). A suitable expert, one who is informed and honest, can be expected to make more objective and impartial comments on the quality of the reviewed project. A good fit between peer reviewers and the applicants' work they assess helps maintain the prestige of the reviewers and better ensures the reputation of the project designers, authors, and affiliated institutions.

The expert assignment problem (EAP) as a common phenomenon has attracted considerable research interest in recent years. Most of these works focus on the accuracy of the expert assignment (Wang, 2007). Aiming to guarantee such assignment accuracy, Li and colleagues proposed two heuristic algorithms to solve the EAP: a genetic algorithm (Kumar et al., 2010; Li et al., 2007) and an ant colony optimization algorithm (Dorigo & Blum, 2005; Li et al., 2008). Li, Peng, and Wei (2013) further proposed an adaptive parallel genetic algorithm focused on assignment accuracy and computational efficiency. While these algorithms can fulfill the fundamental task of expert assignment to some extent, two main problems remain. First, all of these algorithms assume a closeness (or similarity) measurement between the research fields of every applicant and expert; a formal definition and a calculation method for this measurement, however, are needed to clarify these relationships. Second, reviewer impartiality has not been considered in these methods. To address the closeness/similarity limitation, Ho et al. (2017) recently created a proposal reviewer recommendation system using keywords with fuzzy weights based on big data, so the topic is beginning to receive the attention it needs.

Impartiality in peer reviews is supposed to be supported or guaranteed by project sponsors or government agencies using administrative means (Wu, 1996). For example, the National Natural Science Foundation of China (NSFC), the most important science foundation in China, requires that applicants' relatives be excluded from their peer review expert teams (Zhang et al., 2016). But the professional or personal relationships that can affect peer review dynamics may be more numerous and meaningful (if less obvious) than family ties. Other criteria for judging potential conflicts of interest are therefore necessary to ensure impartiality. Outside of certain discussions on rules and regulations (Agee, 2007; Wang et al., 2002), however, the impartiality of expert assignment has rarely been considered in designing algorithms or data assessment techniques to address EAP concerns.

To tackle the above problems, we propose a peer review expert assignment method that considers both accuracy and impartiality. During the expert assignment, the method simultaneously takes into account the fitness degree, research intensity, academic association, and conflict of interest between the candidate experts and the applicants.

The remainder of this paper is organized as follows. Section 2 defines the four criteria, gives the formal definition of the expert assignment problem, and presents the proposed randomized algorithm. Section 3 presents the simulation analysis of the algorithm. Section 4 offers the conclusion, limitations, discussion, and future research directions.

An appropriate peer review expert needs to have three main qualifications: (1) sufficient knowledge of the area/field under review; (2) enough front-line research experience to grasp the area/field’s key research points and frontiers to better guarantee an accurate understanding of the project proposal quality; and (3) few or no interest associations with the applicant to ensure the fairness of the project review. Interest association in this context refers to (a) the applicants’ academic association (e.g. same organization, co-authored/co-funded works, and other relationships/links, past or present) with the review experts, and (b) other interest associations like similar project proposals of the experts related to the applicants.

We then map the three qualifications onto four assessment criteria: fitness degree (qualification 1), research intensity (qualification 2), and academic association and conflict of interest (qualification 3), which are formally defined below.

The reviewing experts who fit the project proposal's research background are first selected. The most intuitive way of measuring the fitness between an expert and a project proposal is to assume each expert and proposal applicant has a vector of description keywords (related to the research area); the similarity between the two vectors then serves as the fitness degree. The problem is that keywords provided by different researchers may be highly idiosyncratic, and the same research point may be expressed differently depending on researchers' word choice. We thus make two assumptions about the research descriptions: (1) the topic description is hierarchically structured, in that it has more than one level of detail; and (2) the keywords in the description are (semi-)controlled, so that most of the words can be matched during similarity calculation.

Actually, these assumptions about the research descriptions have been met by the NSFC Committee, which set up an Internet-based Science Information System (ISIS) to manage users' research resumes. A user of ISIS can log in as either an applicant or an expert (only researchers who have been selected as peer review experts have the role of expert). With either role, the system requires users to register their research resumes, which include, among other variables, a familiarity code, a research direction, and a set of keywords. To link with appropriate peer review experts, applicants will usually find or accept recommendations on keywords that best fit their project proposal's research area. Consequently, we calculate the two parties' fitness degree hierarchically as follows. If the proposal applicant and the expert have the same familiarity code, they gain a fitness score of 0.2. Further, if they have the same research direction, they gain a fitness score of 0.3. Given the two keyword vectors, the cosine similarity of the vectors is calculated and then scaled by 0.5 as the keyword fitness score. The three fitness scores are then added to give the final fitness degree between the applicant and the expert. Considering that the same keywords may have completely different meanings across research fields, the keywords only contribute to the fitness degree if the score of the familiarity code is not 0.

Definition 1 (Fitness degree). Let c_i, d_i, and k_i denote the familiarity code, research direction, and keyword vector of applicant a_i, and let c_j, d_j, and k_j denote those of expert e_j. The fitness degree between a_i and e_j is defined as

F_{ij} = 0.2 · Γ(c_i, c_j) + 0.3 · Γ(d_i, d_j) + 0.5 · ∠(k_i, k_j) · Γ(c_i, c_j),

where ∠(.,.) denotes cosine similarity and Γ(.,.) is a determination equation with Γ(x, y) = 1 if x = y and Γ(x, y) = 0 otherwise.
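As a concrete sketch of the hierarchical fitness calculation described above, the following might be one implementation. The dictionary keys and the decision to gate only the keyword score on the familiarity code (as the text states) are assumptions, not the paper's code:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two keyword weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm > 0 else 0.0

def fitness_degree(applicant, expert):
    """Hierarchical fitness: 0.2 for a matching familiarity code,
    0.3 for a matching research direction, and up to 0.5 from the
    cosine similarity of the keyword vectors.  Per the text, keywords
    only contribute when the familiarity codes match."""
    code_match = 1.0 if applicant["code"] == expert["code"] else 0.0
    dir_match = 1.0 if applicant["direction"] == expert["direction"] else 0.0
    keyword_sim = cosine_similarity(applicant["keywords"], expert["keywords"])
    return 0.2 * code_match + 0.3 * dir_match + 0.5 * keyword_sim * code_match
```

Identical resumes thus yield the maximum fitness degree of 1.0, while a familiarity-code mismatch suppresses the keyword contribution entirely.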

A researcher’s research status is described herein so that when the researcher is appointed as a peer review expert, the confidence level in relation to the value of the expert’s comments can be identified. Because an expert should have enough front-line research experience to grasp the key research points and frontiers of the field and make accurate review comments, we use the recency and ranking of the expert’s research output to define research intensity as follows.

Definition 2 (Research intensity). Given an expert e_j's ranking percentage p_j and the year t_j of his/her most recent research output, the research intensity of e_j is defined as

RI_j = (1 − p_j) · λ^{Δt},

where Δt = t_c − t_j is the number of years since the most recent output (t_c denotes the current year) and λ ∈ (0, 1) is a decay coefficient. For example, with λ = 0.825, an expert whose most recent output was in 2015 and who is evaluated in 2017 has a decay factor of 0.825^{2017−2015} = 0.681.
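Assuming research intensity takes the form (1 − ranking percentage) discounted by a decay factor raised to the years of inactivity, a minimal sketch follows. The decay coefficient 0.825 is inferred from the 0.681 example in the text (0.825² ≈ 0.681) and is an assumption:

```python
def research_intensity(ranking_pct, last_active_year, current_year, decay=0.825):
    """Research intensity: the (1 - ranking percentage) term rewards
    highly ranked output, and the decay term penalises inactivity.
    With decay = 0.825, an expert last active in 2015 evaluated in
    2017 carries a decay factor of 0.825 ** 2, roughly 0.681."""
    dt = current_year - last_active_year
    return (1.0 - ranking_pct) * decay ** dt
```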

Note that in practice, however, an expert's research status changes over time, so the information used to calculate research intensity needs to be kept up to date.

Up to now, fitness degree and research intensity can be used to assign an adequately accurate expert for peer review. It is then necessary to assure impartiality, where there should be little academic association and no conflict of interest between an applicant and the expert.

Academic association is defined based on the network distance between the applicant and expert on academic social networks, which are set up based on various relationships among applicants and experts. Experts having the least degree of association with the applicant can thus be assigned for peer review, where academic association is defined as follows.

Definition 3 (Academic association). Given applicant a_i and expert e_j, let H_{ij} denote the number of hops on the shortest path between a_i and e_j in the academic social network. The academic association between a_i and e_j is defined as

A_{ij} = 2^{−H_{ij} + 1},

where H_{ij} ≥ 1. If the shortest path between a_i and e_j has only 1 hop, then A_{ij} = 2^{−1+1} = 1, the highest degree of association two researchers can have. If the shortest path instead has 4 hops, then A_{ij} = 2^{−4+1} = 0.125, indicating very little association between them.
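The hop-based association measure translates directly into code; a minimal sketch:

```python
def academic_association(hops):
    """Academic association A = 2 ** (-hops + 1), where `hops` is the
    shortest-path length between applicant and expert in the academic
    social network.  1 hop -> 1.0 (strongest association);
    4 hops -> 0.125 (very little association)."""
    return 2.0 ** (-hops + 1)
```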

It is now important to ensure that a review expert does not review a proposal in a research area in which he/she has also applied for support. Since it is hard to say to what extent the similarity between the expert's own proposal and the proposal under review can influence the review result, an alarm value is set: the proposal will not be assigned to an expert if the similarity is larger than this threshold. Otherwise, the expert is regarded as having no potential or actual conflict of interest with the applicant. Conflict of interest is defined as follows.

Definition 4 (Conflict of interest). Given applicant a_i's proposal and expert e_j's own proposal, let sim_{ij} denote the similarity between the two proposals and θ the alarm value. The conflict of interest between a_i and e_j is defined as

C_{ij} = 1 if sim_{ij} > θ, and C_{ij} = 0 otherwise,

where C_{ij} = 1 means the proposal will not be assigned to e_j.

Based on Definitions 1 to 4, we can assign the expert that best fits the proposal, has the highest research intensity, least degree of academic association, and no conflict of interest with the applicant for project review. The selectivity degree is used to unify the four concepts as follows.

Definition 5 (Selectivity degree). Given applicant a_i and expert e_j, the selectivity degree between them is defined as

S_{ij} = 0 if F_{ij} = 0 | C_{ij} = 1, and S_{ij} = F_{ij} · RI_j · min(ξ^{H_{ij} − 1 − s}, 1) otherwise,

where | denotes logical OR, min(.,.) returns the smaller of its two arguments, ξ > 1 is a base parameter, and s is a hop threshold beyond which academic association is treated as negligible.

Under Definition 5, if S_{ij} = 0, expert e_j is never assigned to review a_i's proposal; otherwise, a larger S_{ij} indicates a more suitable expert.

That is, suppose ξ = 2 and s = 3. If H_{ij} = 1, then min(ξ^{H_{ij} − 1 − s}, 1) = min(2^{1−1−3}, 1) = 0.125; if H_{ij} = 4, then min(ξ^{H_{ij} − 1 − s}, 1) = min(2^{4−1−3}, 1) = 1; further, if H_{ij} = 5, then min(ξ^{H_{ij} − 1 − s}, 1) = min(2^{5−1−3}, 1) = 1. From these examples it can be seen that the selectivity degree increases with larger H_{ij} (i.e. weaker academic association) until H_{ij} exceeds s + 1, after which the association factor stays at 1.
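Assuming the selectivity degree multiplies fitness degree, research intensity, and the saturating association factor min(ξ^{H−1−s}, 1), and drops to zero on a conflict of interest (a reconstruction consistent with the worked examples above), a sketch is:

```python
def selectivity_degree(fitness, intensity, hops, conflict, xi=2.0, s=3):
    """Selectivity degree combining the four criteria (reconstruction).
    The association factor min(xi ** (hops - 1 - s), 1) grows with
    network distance and saturates at 1 once the expert is more than
    s + 1 hops from the applicant; a conflict of interest (or zero
    fitness) zeroes the score."""
    if conflict or fitness == 0:
        return 0.0
    assoc = min(xi ** (hops - 1 - s), 1.0)
    return fitness * intensity * assoc
```

With xi = 2 and s = 3, the association factor alone reproduces the worked values: 0.125 at one hop, and 1 at four or more hops.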

Figure 1 gives an example of expert assignment based on selectivity degree. Figure 1(a) shows an academic social network among six researchers, where applicant a_1 (who submitted proposal p_1) has not been selected as a peer review expert, e_2 to e_6 are review experts, and e_2 and e_3 submitted project proposals p_2 and p_3, respectively. Figure 1(a) also gives the research intensity values of all the experts, calculated using the information provided in Figure 1(b). If only fitness degree and research intensity are considered, p_1 will be assigned to e_6 and e_3, and p_2 and p_3 will both be assigned to e_4 and e_5. Figure 1(d) shows, however, that a_1 is closely related academically to e_6, just as e_2 is related to e_5, and e_3 is related to e_4 and e_5. Compared with e_2 and e_3, e_6 may be more willing to support a_1. Furthermore, it can be seen that e_5 has not directly contributed to research work for five years, so it may not be appropriate for him/her to be a review expert. By also considering academic association and conflict of interest, p_1 will be assigned to e_3 and e_4, and p_2 and p_3 to e_4 and e_6. It can be seen that the experts with considerable research intensity, an adequate fitness degree with the proposals, and relatively less academic association and conflict of interest are selected for review.

Note that if we do not consider academic association and conflict of interest, then S_{12} = 0.63 > 0.56 = S_{14}, and e_2 would be preferred over e_4 for reviewing p_1.

In practice, a number of proposals need to be reviewed at the same time. Moreover, every proposal needs to be reviewed by more than one expert, and every expert can only receive a limited number of proposals. Hence, given the selectivity degree S_{ij} between every applicant a_i and expert e_j, the expert assignment problem is formally defined as follows.

Definition 6 (Expert assignment problem). Let x_{ij} = 1 if expert e_j is assigned to review applicant a_i's proposal, and x_{ij} = 0 otherwise. The expert assignment problem is to maximize Σ_i Σ_j S_{ij} · x_{ij}, subject to (1) Σ_j x_{ij} = q for every proposal (each proposal is reviewed by exactly q experts); (2) Σ_i x_{ij} ≤ u for every expert (each expert receives at most u proposals); and (3) x_{ij} ∈ {0, 1}.

Definition 6 defines a version of the 0-1 knapsack problem (Freville, 2004), a famous NP-complete problem in computer science, for which no polynomial-time exact algorithm is known. Hence, the 0-1 knapsack problem is usually handled using dynamic programming, greedy algorithms, or randomized algorithms (Martello & Toth, 1987). In this paper, we adopt a randomized algorithm to solve the problem, since it can efficiently find an acceptable solution within a reasonable time period.

Before carrying out the proposed algorithm, an example of possible assignments is given (Figure 2). Based on the selectivity degrees presented in Figure 1(g), Figure 2 presents two possible assignments (where each assignment contains a set of 0/1 appointments), assuming every proposal should be reviewed by two experts.

The number of possible assignments increases exponentially with the numbers of experts and applicants, so it is hard to find the best assignment in a limited amount of time when these numbers are large. Hence, we design a randomized algorithm to find an adequately good assignment in a relatively short period of time. The idea is that proposals are randomly assigned to experts until every proposal has enough reviewers; this sampling is repeated many (e.g. 10^5) times, and the assignment with the largest total selectivity degree is employed, as noted in Algorithm 1 (Figure 3).
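The sampling idea can be sketched as follows; function and parameter names are illustrative, and the feasibility handling is a simplification of Algorithm 1:

```python
import random

def randomized_assignment(S, reviews_per_proposal, max_per_expert,
                          n_samples=1000, seed=0):
    """Repeatedly draw a feasible random assignment and keep the one
    with the largest total selectivity degree.  S[i][j] is the
    selectivity degree of expert j for proposal i; entries with
    S[i][j] == 0 are never assigned (no fit, or conflict of interest)."""
    rng = random.Random(seed)
    n_props, n_experts = len(S), len(S[0])
    best_score, best = -1.0, None
    for _ in range(n_samples):
        load = [0] * n_experts          # proposals already given to each expert
        assignment, score, feasible = [], 0.0, True
        for i in range(n_props):
            candidates = [j for j in range(n_experts)
                          if S[i][j] > 0 and load[j] < max_per_expert]
            if len(candidates) < reviews_per_proposal:
                feasible = False        # this random draw cannot be completed
                break
            chosen = rng.sample(candidates, reviews_per_proposal)
            for j in chosen:
                load[j] += 1
                score += S[i][j]
            assignment.append(chosen)
        if feasible and score > best_score:
            best_score, best = score, assignment
    return best, best_score
```

On small instances the sampler quickly finds the optimum; on large instances it trades optimality for a bounded running time, which is the point of the randomized approach.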

In this section, we perform simulations to verify the effectiveness and feasibility of the proposed algorithm. During the simulation, four matrices are randomly generated, corresponding to the four proposed criteria. Then the matrix of selectivity degree is calculated, after which the randomized algorithm is run.

For the generation of fitness degree, we assume applicants and experts are from the same research field, i.e. they share the same familiarity code.

As to research intensity, we assume a uniform distribution U(0, 1) for generation. According to Definition 2, the ranking percentage part (i.e. 1 − p_j) is itself uniformly distributed, and scaling it by the decay factor λ^{Δt} does not affect the distribution of research intensity as a whole.

For academic association, we generate network distances based on the Six Degrees of Separation theory (or Small World Phenomenon), which proposes that in a social network there will be no more than six hops before a person can reach any stranger in the network (Milgram, 1967). This phenomenon applies closely to academic social networks, due to the particularity of academic circles (which consist of people with similar research backgrounds who are very willing to know each other) (Cainelli et al., 2015). Hence, we assume a researcher in the academic social network can reach 70% of other researchers in less than three hops, and can reach anyone in the network in less than six hops. More specifically, we assign the probability of reaching another researcher within three hops to be 70%, with the remaining pairs reachable in four to six hops.
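The hop-distance generation might be sketched as follows; the uniform split within each hop range is an illustrative assumption, not from the paper:

```python
import random

def sample_hops(rng):
    """Draw a network distance between two researchers under the
    small-world assumption: 70% of pairs are within three hops, and
    all pairs are reachable within six hops."""
    if rng.random() < 0.70:
        return rng.choice([1, 2, 3])   # the 70% of close pairs
    return rng.choice([4, 5, 6])       # remaining pairs, still within six hops
```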

During the simulation, the algorithm parameters are set with 200 applicants and 10^8 random samples. Note that 200 is a sufficiently large number of applicants in practice: according to NSFC statistics, the management science division received 8,293 proposals in 2016 across its 57 familiarity codes, an average of about 145 proposals per familiarity code. Also note that while the proposed algorithm is expected to find a better result with a larger number of samples, 10^8 samples can already return a good assignment for a problem of this size.

A larger number of samples increases the chance of finding a better assignment, at the cost of a longer running time.

Since the output of the randomized algorithm is non-deterministic, we run the algorithm 100 times (see Figures 4 and 5 for analysis). For each run, we record the average fitness degree, research intensity, and academic association of the best assignment (best value), of all the sampled assignments (overall value), and of the worst assignment (worst value).

Figure 4 plots the maximum, average, and minimum values of best, overall, and worst FD/RI (of the 100 runs), where it is found that (1) after large amounts of sampling, the average values of the overall FD and RI converge to their corresponding theoretical value, and these values are the expected values when the experts are randomly assigned for peer review; (2) the maximum, average, and minimum values of the best FD and RI are significantly larger than their corresponding overall and worst values; (3) the average fitness degree among the experts and applicants of the best assignment is approximately 10% higher than the overall assignments, and 22% higher than the worst assignment; and (4) the average research intensity of the experts of the best assignment is approximately 28% higher than the overall assignments, and 56% higher than the worst assignment. That is, the proposed algorithm can always find experts with considerable research intensity having a better fit with proposal applicants.

For academic association, we plot the maximum, average, and minimum hops of the best, overall, and worst assignments (see Figure 5). The results are very similar to those of fitness degree and research intensity. The converged (i.e. expected) average number of hops between an expert and the applicant is 3.2. Compared to the overall assignments and the worst assignment, our algorithm can on average find peer review experts one or two hops farther away from the applicants.

The time overhead of the algorithm is analyzed to verify whether it is feasible for expert assignment in practice. The same set of parameters is used here as in Section 3.1, except that the problem size is varied.

In this paper, we formally define the expert assignment problem as an optimization problem while considering both accuracy and impartiality of experts based on four carefully designed criteria. The criteria characterize the properties that a good peer review expert should have. With the help of the criteria, the integrated criterion (i.e. selectivity degree) is defined for expert selection, where a randomized algorithm is proposed to solve the optimization problem. Simulation results show that the proposed method can always identify experts with considerable research intensity, as well as adequate fitness degree and relatively fewer academic associations or conflict of interest with the proposal applicants for project review. Furthermore, the proposed algorithm can return results in an acceptable amount of time.

A limitation of this study is that real data (rather than simulation data) may be more convincing in proving the effectiveness of the proposed method. The authors are actively contacting the officers of project funding agencies (e.g. NSFC) for potential collaboration. Hence, our future work will be dedicated to improving the proposed method in terms of practical applications. For example, more criteria may be considered and adopted for expert characterization to further promote the accuracy and impartiality of the assignment based on real data. Also, an algorithm with more sophisticated strategies (e.g. backtracking) may be designed to further improve the efficiency of the assignment when the data volume is extremely large.

^{th} International Symposium on Computational Intelligence and Design (pp 23–26). Washington, DC: IEEE Computer Society.