Ramanujacharyulu (1964) provided a graph-theoretical algorithm for selecting the winner of a tournament on the basis of the total scores of all the matches, whereby both gains and losses are taken into consideration. Prathap & Nishy (under review) proposed to use this power-weakness ratio (PWR) for citation analysis and journal ranking. PWR has been advocated for measuring journal impact because of its symmetrical handling of the rows and columns in the asymmetrical citation matrix, its recursive algorithm (which it shares with other journal indicators), and its mathematical elegance. However, Ramanujacharyulu (1964) developed the algorithm for scoring tournaments (Prathap, 2014). Can journal competitions be compared to tournaments? In our opinion, journals compete in incomplete tournaments: in a round-robin tournament, all the teams are completely connected, and if one team wins, the other loses. This constraint is not valid for journals.
More recently, Prathap, Nishy, and Savithri (in press) claim to have shown that “the Power-weakness Ratio becomes arguably the best quantifiable size-independent network measure of quality of any journal which is a node in a journal citation network, taking into account the full information in the network.” Does PWR indeed improve on the influence weights proposed by Pinski and Narin (1976), the Eigenfactor and Article Influence Scores (Bergstrom, 2007; West, Bergstrom, & Bergstrom, 2010), PageRank (Brin & Page, 1998), and the Hubs-and-Authorities thesis of Hypertext Induced Topic Search (HITS; Kleinberg, 1999)? PWR shares with these algorithms the ambition to develop a size-independent metric based on recursion in the evaluation of accumulated advantages (Price, 1976). Unlike these other measures, however, PWR weighs the disadvantages equally with the advantages: the “power” (gains) is divided by the “weakness” (losses). In studies of sporting tournaments (e.g. cricket), rankings based on PWR were found to outperform other rankings (Prathap, 2014).
In this study, we respond to this proposal in detail by testing PWR empirically in the citation matrix of 83 journals assigned to the Web-of-Science (WoS) category “Library and Information Science” (LIS) in the Journal Citation Reports 2013 of Thomson Reuters. This set is known to be heterogeneous (Leydesdorff & Bornmann, 2016; Waltman, Yan, & van Eck, 2011a): in addition to a major divide between a set of LIS journals (e.g.
We focus the discussion first on the two sub-graphs of journals: (1) seven journals which cited
In our opinion, one is not allowed to compare impact across the borders of homogeneous sets, because citation impact can be expected to mean something different in another system of reference. More recently, Todeschini, Grisoni, and Nembri (2015) proposed a weighted variant of PWR (“wPWR”) for situations where the criteria can have different meanings and relevance. However, we have no instruments for weighting citations across disciplines, and the borders of specialties in terms of journal sets are fuzzy and not given (Leydesdorff, 2006).
In other words, scholarly publishing can perhaps be considered in terms of tournaments, but only within specific domains. Journals do not necessarily compete in terms of citations across domains. Citation can be considered as a non-zero-sum game: if one player wins, the other does not necessarily lose, and thus the problem is not constrained as it is in tournaments. Since there are no precise definitions of homogeneous sets, interdisciplinary research can be at risk, while the competition is intellectually organized mainly within core set(s) (Rafols et al., 2012).
The numbers of publications and citations are size-dependent: large journals (e.g.
Pinski and Narin (1976; cf. Narin, 1976) proposed to improve on the JIF by normalizing citations not by the number of publications, but by the aggregated number of (“citing”) references in the articles during the publication window of the citation analysis. Yanovski (1981, at p. 229) called this quotient between citations and references the “citation factor.” The citation factor was further elaborated into the “Reference Return Ratio” by Nicolaisen and Frandsen (2008). In the numerator, however, Pinski & Narin (1976) used a recursive algorithm similar to the one used for the numerator and denominator of PWR. This example of an indicator based on a recursively converging algorithm was later followed, with modifications, by the above-mentioned authors of PageRank, HITS, Eigenfactor, and the SCImago Journal Rank (SJR; Guerrero-Bote & Moya-Anegón, 2012).
“Eigenfactor,” for example, can, as a numerator, be divided by the number of articles in the set in order to generate the so-called “article influence score” (West, Bergstrom, & Bergstrom, 2010; cf. Yan & Ding, 2010). Using Ramanujacharyulu’s (1964) PWR algorithm, however, the same recursive algorithm is applied in the cited direction to the numerator and in the citing direction to the denominator. “Being cited” is thus considered as contributing to “power,” whereas citing is considered as “weakness” in the sense of being influenced. Let us assume that these are cultural metaphors (we return to this in the discussion) and first continue to investigate the properties of the indicator empirically. For a mathematical elaboration, the reader is referred to Todeschini, Grisoni, and Nembri (2015). In another context, Opthof and Leydesdorff (2010) noted that indicators based on the ratio between two numbers (such as “rates of averages”) are no longer amenable to statistical analyses such as significance testing of differences among the resulting values (Gingras & Larivière, 2011). More recently, other indicators based on comparing observed with expected values have also been introduced (e.g.
Let Z be the asymmetrical citation matrix of the journal set, with “cited” journals as rows and “citing” journals as columns (as in Table 3). In social network analysis, the matrix is usually transposed so that action (“citing”) is considered as the row vector.
Using graph theory, the power of each journal is obtained by iterating the matrix: at iteration k, the power is the corresponding row sum of Z^k.
For obtaining weakness, the same operations are carried out column-wise by first using the transposed matrix Z^T.
In more formal terminology: the vector of power indexes is the (dominant) solution to the eigenvector equation Z p = λ p, and the vector of weakness indexes solves the corresponding equation for the transposed matrix, Z^T w = λ w.
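These recursions can be written compactly as iterated matrix-vector products (a restatement consistent with the row-wise and column-wise operations described above; the notation p, w, and the unit start vector are our own):

```latex
p^{(k)} = Z\,p^{(k-1)}, \qquad
w^{(k)} = Z^{\mathsf{T}}\,w^{(k-1)}, \qquad
p^{(0)} = w^{(0)} = \mathbf{1}, \qquad
\mathrm{PWR}_j \;=\; \lim_{k \to \infty} \frac{p_j^{(k)}}{w_j^{(k)}}
```

In this form, p^(k) collects the row sums of Z^k and w^(k) the row sums of (Z^T)^k, and the ratio converges as the powers of the matrix are iterated.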
Note that a journal is thus considered powerful when it is cited by other powerful journals, and weak when it cites other weak journals. This dual logic of PWR is similar to the Hubs-and-Authorities thesis of Hypertext Induced Topic Search (HITS), a ranking method for Web pages proposed by Kleinberg (1999), but with one major difference. In the HITS paradigm as applied to a bibliometric context, good authorities would be those journals that are cited by good hubs, and good hubs the journals that cite good authorities; among other things, this allows one to discuss the elite structure of science. Using PWR, however, good authorities are journals that are cited by good authorities, and weak hubs are journals that cite weak hubs. Using CheiRank (e.g. Zhirov, Zhirov, & Shepelyansky, 2010), the two dimensions of power and weakness can also be considered as
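The mutual reinforcement in HITS can be made concrete with a minimal sketch (the 3 × 3 matrix below is an illustrative toy, not data from this study; the orientation follows the convention used here, with rows as “cited” and columns as “citing”):

```python
# Minimal HITS sketch on a toy citation matrix (illustrative data only).
# Z[i][j] = citations received by journal i from journal j
# (rows are "cited", columns are "citing").
Z = [[0, 5, 5],
     [1, 0, 1],
     [1, 1, 0]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

Zt = transpose(Z)
auth = [1.0] * 3   # authority: being cited by good hubs
hubs = [1.0] * 3   # hub: citing good authorities
for _ in range(50):
    auth = normalize(matvec(Z, hubs))   # a <- Z h
    hubs = normalize(matvec(Zt, auth))  # h <- Z^T a

# Journal 0, which is heavily cited by the other two, emerges as the top authority.
top_authority = auth.index(max(auth))
```

The contrast with PWR is in the updates: HITS couples the two scores (authorities feed hubs and vice versa), whereas PWR iterates power and weakness independently in the cited and citing directions.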
We study the effectiveness of PWR as an indicator using journal ecosystems drawn from the LIS set of the WoS (83 journals) as an example. Two local ecosystems (sub-graphs) are isolated from this larger scientific network, and the cross-citation behavior within each sub-graph is analyzed. Can the indicator measure the standing of each journal in the cross-citation activity within a sub-graph more finely than, for example, the journal impact factor or other indicators defined at the level of the total set? We also compare our results with the SCImago Journal Rank (SJR), because this indicator uses a recursive algorithm similar to PageRank.
One can perform the recursive multiplication of a matrix to the power k in a spreadsheet program such as Excel. Excel 2010 provides the function MMult() for matrix multiplication, but this function operates with a maximum of 5,460 cells (or
A macro (PWR.MCR) for Pajek is specified in Appendix 1 and provided online. In Excel, we use the so-called Stodola method, which simplifies the computation (e.g. Dong, 1977). However, upon extension to the full set and
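Why such a rescaling is harmless can be illustrated in a short sketch (an assumption on our part: we take the “Stodola method” here to mean power iteration with rescaling after every multiplication; the 3 × 3 matrix is an illustrative toy). Because the grand total of Z^k is the same whether accumulated row-wise or column-wise, dividing the power and weakness vectors by their sums at each step leaves the power-weakness ratios unchanged while keeping the numbers small:

```python
# Sketch: power iteration with per-step rescaling (Stodola-style normalization)
# preserves the power/weakness ratios exactly while avoiding numerical overflow.
# Toy matrix, not data from the paper.
Z = [[2, 5, 1],
     [3, 1, 4],
     [2, 2, 2]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

Zt = [list(col) for col in zip(*Z)]
k = 6

# Raw iteration: p_k = Z^k 1 (row sums), w_k = (Z^T)^k 1 (column sums).
p = [1.0] * 3
w = [1.0] * 3
for _ in range(k):
    p = matvec(Z, p)
    w = matvec(Zt, w)
raw_ratio = [pi / wi for pi, wi in zip(p, w)]

# Normalized iteration: rescale both vectors by their sums after every step.
pn = [1.0] * 3
wn = [1.0] * 3
for _ in range(k):
    pn = matvec(Z, pn)
    wn = matvec(Zt, wn)
    sp, sw = sum(pn), sum(wn)   # sp == sw: both equal the grand total of Z^k
    pn = [x / sp for x in pn]
    wn = [x / sw for x in wn]
norm_ratio = [pi / wi for pi, wi in zip(pn, wn)]
```

The two ratio vectors agree to floating-point precision; only the magnitudes of the intermediate vectors differ.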
The values on the main diagonal represent within-journal self-citations. One can argue that self-citations should not be included in subsets, since the number of self-citations is global: it remains the same in the total set and in subsets, and may therefore distort the subsets (Narin & Pinski, 1976, p. 302; cf. Price, 1981, p. 62). In a second sheet of the Excel file, named “without self-citations,” we show that in this case the effects are only marginally different. In Appendices 1 and 2, the procedures for using Pajek and Excel, respectively, are specified in more detail.
Among the 83 journals assigned to the journal category LIS by Thomson Reuters, one is not cited within this set, and four journals do not cite any of the other journals in the set. For these four non-citing journals, the weakness score is determined by the number of self-citations on the main diagonal and is otherwise zero.
Table 1 lists ranked PWR values for 15 of the 75 journals in the central component after 20 iterations (after removing the four non-citing journals).
Table 1. Fifteen journals ranked highest on PWR among 83 LIS journals.

| Abbreviation of journal name | PWR |
|---|---|
| | 59.52 |
| | 15.62 |
| | 11.31 |
| | 8.84 |
| | 6.96 |
| | 6.15 |
| | 5.53 |
| | 5.01 |
| | 4.40 |
| | 4.20 |
| | 3.57 |
| | 3.38 |
| | 3.15 |
| | 3.09 |
| | 2.90 |
As noted above, some journals never cited another journal in this set, and one journal never received any citations from the other journals in the set. Analytically, PWR would be zero in the latter case and may go to infinity in the former. However, a structural analysis of the LIS set shows that there are two main sub-graphs in this set. These can, for example, be visualized by using the cosine values between the citing patterns of 78 (of the 83) journals (Figure 1).
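These two analytical limits are easy to see in a small sketch (illustrative toy data; power and weakness are taken, as described above, as the row and column sums produced by the first iteration):

```python
# Toy illustration of the analytical limits of PWR (illustrative data only):
# journal 0 is cited but never cites (its column is all zeros), and
# journal 2 cites but is never cited (its row is all zeros).
Z = [[0, 3, 1],    # rows: cited
     [0, 2, 2],    # columns: citing
     [0, 0, 0]]

power = [sum(row) for row in Z]            # first-iteration power: row sums
weakness = [sum(col) for col in zip(*Z)]   # first-iteration weakness: column sums

ratios = [p / w if w else float("inf") for p, w in zip(power, weakness)]
# journal 0: 4 / 0 -> infinite PWR (cites nothing, so weakness stays zero)
# journal 2: 0 / 3 -> PWR = 0      (never cited, so power stays zero)
```

Further iterations do not repair either case: a zero column stays zero under Z^T, and a zero row stays zero under Z.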
Figure 1
Two groups of journals within the WoS category LIS; cosine > 0.01;

Using the Louvain algorithm for the decomposition of this cosine-normalized matrix, 40 of these journals are assigned to partition 1 (LIS) and 38 to partition 2 (MIS: Management Information Systems; cf. Leydesdorff & Bornmann, 2016). From these two subsets, we further analyzed two ecosystems, which were selected because they are well-connected homogeneous sets.
Table 2 shows the two homogeneous journal ecosystems chosen for further study (using abbreviated journal names).
Table 2. The two homogeneous journal sub-graphs chosen for further analysis, and their abbreviated journal names.

| Sub-graph | Abbreviated name | Sub-graph | Abbreviated name |
|---|---|---|---|
For each ecosystem, we take the year 2012 as the “citing” year and we use “total cites” to all (preceding) years as the variable on the “cited” side. Since all journals are well connected within the sub-graphs, there are no dangling nodes (where the journals are cited within the ecosystem but hardly cite any other journals in the same system). Using PWR, no damping or normalization (as is used in the PageRank approach) is proposed: one uses the cross-citation matrix without further tuning of parameters. In each case, when
Table 3 shows the citation matrix Z for the JASIST+ set of seven journals.
Table 3. Citation matrix Z for the JASIST+ set of seven journals (rows: cited; columns: citing).

| Cited \ Citing | | | | | | | |
|---|---|---|---|---|---|---|---|
| | 132 | 165 | 49 | 86 | 68 | 46 | 23 |
| | 120 | 756 | 107 | 495 | 189 | 139 | 319 |
| | 12 | 66 | 89 | 72 | 26 | 26 | 30 |
| | 48 | 320 | 34 | 1542 | 13 | 25 | 552 |
| | 14 | 43 | 29 | 8 | 93 | 39 | 4 |
| | 26 | 96 | 44 | 69 | 128 | 108 | 29 |
| | 29 | 91 | 2 | 269 | 4 | 3 | 302 |
In Table 4 we report the convergence of the size-independent power-weakness ratio with the iteration number k.
Table 4. Convergence of PWR with iteration k for the JASIST+ journals, with and without self-citations.

With self-citations:

| PWR | k = 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| | 1.49 | 1.72 | 1.76 | 1.76 | 1.76 | 1.76 | 1.77 |
| | 1.30 | 1.38 | 1.52 | 1.60 | 1.63 | 1.64 | 1.64 |
| | 1.38 | 1.48 | 1.51 | 1.53 | 1.53 | 1.53 | 1.54 |
| | 0.91 | 1.19 | 1.36 | 1.43 | 1.45 | 1.46 | 1.46 |
| | 1.00 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 |
| | 0.56 | 0.48 | 0.47 | 0.47 | 0.47 | 0.47 | 0.47 |
| | 0.44 | 0.37 | 0.39 | 0.40 | 0.41 | 0.41 | 0.41 |

Without self-citations:

| PWR | k = 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| | 1.76 | 1.93 | 1.75 | 1.80 | 1.78 | 1.79 | 1.79 |
| | 1.75 | 1.52 | 1.61 | 1.57 | 1.59 | 1.58 | 1.58 |
| | 1.41 | 1.46 | 1.46 | 1.48 | 1.47 | 1.48 | 1.48 |
| | 0.88 | 1.23 | 1.23 | 1.25 | 1.25 | 1.25 | 1.25 |
| | 0.99 | 0.99 | 0.98 | 0.99 | 0.98 | 0.99 | 0.98 |
| | 0.42 | 0.49 | 0.48 | 0.48 | 0.48 | 0.48 | 0.48 |
| | 0.32 | 0.43 | 0.41 | 0.42 | 0.42 | 0.42 | 0.42 |
Table 4 shows, among other things, that the inclusion of self-citations affects PWR values in this case only in the second decimal.
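The convergence reported in Table 4 can be reproduced directly from the citation matrix of Table 3. The sketch below assumes the iteration described above (power and weakness as iterated row sums of Z^k and (Z^T)^k, starting from a unit vector); the rows follow the journal order of Table 3, whereas Table 4 sorts the journals by their converged PWR:

```python
# Citation matrix Z from Table 3 (rows: cited; columns: citing), self-citations included.
Z = [[132, 165,  49,   86,  68,  46,  23],
     [120, 756, 107,  495, 189, 139, 319],
     [ 12,  66,  89,   72,  26,  26,  30],
     [ 48, 320,  34, 1542,  13,  25, 552],
     [ 14,  43,  29,    8,  93,  39,   4],
     [ 26,  96,  44,   69, 128, 108,  29],
     [ 29,  91,   2,  269,   4,   3, 302]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

Zt = [list(col) for col in zip(*Z)]

p = [1.0] * 7   # power:    p_k = Z^k 1 (iterated row sums)
w = [1.0] * 7   # weakness: w_k = (Z^T)^k 1 (iterated column sums)
pwr_by_iteration = []
for k in range(7):
    p = matvec(Z, p)
    w = matvec(Zt, w)
    pwr_by_iteration.append([round(pi / wi, 2) for pi, wi in zip(p, w)])

# First journal of Table 3 (top row of Table 4): 1.49 at k = 1, 1.72 at k = 2, ...
```

Zeroing the main diagonal of Z before iterating reproduces, analogously, the “without self-citations” panel of Table 4.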
Figure 2 graphically displays the convergence of PWR with iteration number
Figure 2
Convergence of PWR with iteration number

Figure 3
Convergence of PWR with iteration number

But can the converged values of PWR also be considered as impact indicators of the journals? In our opinion, one can envisage three different options to interpret, for example, the results in Table 4:
(1) Since the authors of this paper are knowledgeable in information science (and scientometrics), the ranking of LIS journals can be interpreted on the basis of our professional experience. However, we were not able to provide the rank-ordering of the LIS journals by PWR with an interpretation. One does not expect

(2) Another way of interpreting the results is to compare PWR with the most similarly designed journal metric. The SCImago Journal Rank (SJR), for example, uses an algorithm similar to PageRank; for the sake of comparison, the SJR values for these seven journals are included in Table 5. The columns for PWR and SJR correlate negatively with

(3) A third way of interpreting the results is to compare the metric with an external criterion. For example, we could ask a sample of information scientists to assess the journals. However, we did not expect other assessments to differ from our own, and therefore did not pursue this option.
Table 5. Seven strongly connected journals in LIS (JASIST+) ranked on their PWR within this group. For comparison, the SJR values from 2013 are included (see http://www.journalmetrics.com/values.php).

| Journal | PWR | SJR2013 |
|---|---|---|
| | 1.79 | 0.751 |
| | 1.58 | 1.745 |
| | 1.48 | 0.876 |
| | 1.25 | 1.008 |
| | 0.99 | 1.412 |
| | 0.48 | 2.541 |
| | 0.42 | 0.475 |
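The negative association between the two rankings can be checked directly from the values listed in Table 5 (a sketch; journal names are omitted, as in the table):

```python
# PWR and SJR (2013) values for the seven JASIST+ journals, as listed in Table 5.
pwr = [1.79, 1.58, 1.48, 1.25, 0.99, 0.48, 0.42]
sjr = [0.751, 1.745, 0.876, 1.008, 1.412, 2.541, 0.475]

def pearson(x, y):
    # Plain Pearson correlation coefficient, computed from first principles.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(pwr, sjr)   # negative: the two rankings disagree
```

The coefficient is negative, consistent with the observation in the text that the PWR and SJR columns correlate negatively.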
In sum, the indicator did not perform convincingly for journal ranking even in homogeneous sets.
Let us complete the analysis by combining the
Figure 4 shows the convergence of PWR for the After twenty iterations, the
Figure 4
Convergence of PWR with iteration number

Figure 5
Convergence of PWR with iteration number

In other words, Ramanujacharyulu’s PWR paradigm may offer a diagnostic tool for determining whether a journal set is homogeneous or not, but it may also fail to converge or to provide meaningful results in the case of heterogeneous sets. As noted, the application of PWR may have to be limited to strong components.
We investigated whether Ramanujacharyulu’s (1964) power-weakness ratio could also be used as a meaningful indicator of journal status on the basis of the aggregated citation relations among journals. As noted, PWR was considered an attractive candidate for measuring journal impact because of its symmetrical handling of the rows and columns in the asymmetrical citation matrix, its recursive algorithm (which it shares with other journal indicators), and its mathematical elegance (Prathap & Nishy, in preparation). Ramanujacharyulu (1964), however, developed the algorithm for scoring tournaments (Prathap, 2014), whereas journals compete in incomplete tournaments: in a round-robin tournament, all the teams are completely connected, and if one team wins, the other loses; this constraint is not valid for journals.
In order to be able to appreciate the results, we experimented with a subset of the Journal Citation Reports 2013: the 83 journals assigned to the WoS category LIS. One advantage of this subset is our familiarity with these journals, so that we were able to interpret empirical results (Leydesdorff & Bornmann, 2011; 2016). Used as input into Pajek, the 83 × 83 citation matrix led to convergence, but not to interpretable results. Journals that are not represented on the “citing” dimension of the matrix—for example, because they no longer appear, but are still registered as “cited” (e.g.
In a further attempt to find interpretable results, we focused on two specific subsets, namely all the journals citing
The PWR model should also work in the extreme cases: journals that are cited by the group but do not cite any journal of the group, and the opposite case. In these cases, however, the conceptual failure is clear: journals that receive many citations from the group without citing the group will occupy the first positions of the final ranking. In addition to the examples mentioned (
In summary, the indicator may be mathematically elegant, but it did not perform convincingly for journal ranking. This may also be due to the assumption of an equal gain or loss when a citation is added on the cited or the citing side, respectively. Using PWR, journal
In other words, the citation to Ramanujacharyulu (1964) is interesting and historically relevant to eigenvector-centrality methods that predate Pinski and Narin (1976). However, the PWR method was conceived in 1964 as a way to evaluate round-robin tournaments, and “wins” and “losses” do not translate into citations. Citations have to be normalized because of field-specificity, and the discussion of damping factors cannot be ignored either, since the transitivity among citations is not unlimited (Brin & Page, 1998). With this study, we have sought to show how a newly proposed indicator can be critically assessed.