The dilemmas of systematic literature review: the context of crowdsourcing in science

INTRODUCTION

Conducting a literature review is a basic process in research projects across disciplines and fields, because effective research requires mapping previous work in order to identify the findings of one's predecessors and the recommendations they propose. A literature review can prove useful for identifying both the researchers who have dealt with a topic and the leading, most cited, and most influential publications, such as seminal studies (Webster & Watson, 2002). In addition, a literature review is helpful in assessing the state of knowledge on a specific topic or research problem. However, the development of science requires transparent and repeatable methods (Czakon et al., 2019) that minimise the bias of scientists, thus assuring reliable results from which conclusions can be drawn and decisions can be made about the pursuit of further research (Moher et al., 2009). In a narrower sense, a systematic literature review is seen as a form of research that deals with existing publications and applies a systematic methodology to synthesise data that has already been published. It is also a research method undertaken to review the scientific literature using systematic and rigorous methods (Gough et al., 2012), as well as a logical, linear process in which each part is informed by what precedes it. In a broader sense, however, a systematic literature review aims to comprehensively identify, evaluate, and synthesise all relevant research on a given topic. It is a clear, comprehensive, and reproducible research process based on a transparent and reproducible methodology (Kraus et al., 2020), used to identify relevant studies related to a specific review question and to evaluate and synthesise existing collections of completed and documented work.

Although the methodology of the systematic literature review was developed primarily in the medical sciences (Davis et al., 2014), there have been many attempts to create guidelines for the social sciences, including management (Davis et al., 2014; Palmatier et al., 2018).

More than 4 million scientific articles and various guides on systematic literature review methodology were found during a search of the Google Scholar database (as of 17 October 2021; keywords: “systematic literature review” AND “research methodology” AND “social science”). It should be noted, however, that previous work on systematic literature reviews in the management sciences offers general guidance on what a literature review is and what stages it consists of (Moher et al., 2009; Rojon et al., 2011; Short, 2009). In contrast, less attention has been paid to the potential mistakes that can be made in planning a systematic literature review, particularly in developing a search strategy. It is accepted in the literature that planning is a critical component of a systematic literature review (Bramer et al., 2018).

The aim of the article is to identify, assess, and classify dilemmas that researchers may face during the planning stage of a systematic literature review. Based on the literature review and the author’s own experience, a list of dilemmas was formulated and illustrated using the example of a systematic literature review in the context of crowdsourcing in science.

DILEMMA 1. INCORRECTLY SELECTED TYPE OF LITERATURE REVIEW

There are various approaches to a literature review, each of which has its own advantages and limitations in terms of its potential usefulness (Paré & Templier, 2015). Despite this diversity, the three most useful types from the point of view of the management sciences are the semi-systematic, systematic, and integrative review (Snyder, 2019). As Snyder (2019, p. 336) points out, “it can be difficult to select the review that is the most appropriate type for the study, but the research question and the specific purpose of the review always determine the right strategy to apply”.

Bearing in mind the above, with regard to the empirical context of this study, it should be noted that crowdsourcing itself is not a new concept. The first publications appeared in 2006, following Howe's article entitled “The Rise of Crowdsourcing”. In turn, crowdsourcing in science was introduced to the literature in 2007 with the publication by Lakhani et al. (2007) entitled “The value of openness in scientific problem solving”, in which the authors postulated that openness, the sharing of information and knowledge, and involving the community in generating ideas in science should become the norm for researchers. However, research on crowdsourcing in science has not been conducted on a large scale, and the existing achievements are still insufficient and fragmented (Uhlmann et al., 2019). This is confirmed by the findings of Bassi et al. (2019, p. 302), who state that “the use of crowdsourcing in research is still being tested, and the literature is evolving, and researchers have an exciting opportunity to rethink, redesign and reinvent the way research is conducted”. All this made it necessary to identify, evaluate, and interpret all available publications, which was associated with the desire to find cognitive and research gaps. The fact that a systematic literature review is based on an objective, transparent, and rigorous approach to the entire research process may increase the likelihood that all relevant publications will be included in the review (Linnenluecke et al., 2020).

We decided not to conduct a semi-systematic or an integrative review for several reasons. A semi-systematic review is generally intended for topics that have been conceptualised and researched by researchers from different disciplines. Crowdsourcing in science arouses the interest of practitioners and researchers from various disciplines and fields: research has been conducted in the biological and natural sciences, earth and environmental sciences, medical and health sciences, the social sciences (including psychology, sociology, and political science), as well as the engineering and technical sciences. This has led to a variety of perceptions of crowdsourcing in science, but in this case a semi-systematic review may not be useful, as it comes down to finding out how research on a given topic has developed over time and across different research traditions (Lenart-Gansiniec, 2021).

The ever-growing popularity of crowdsourcing in science and the need for academics to use it mean that its terminological boundaries go beyond the concept of crowdsourcing, and there are discrepancies as to what is considered academic crowdsourcing and what the future directions of research are. The purpose of an integrative review is to evaluate, critique, and synthesise the literature on a given research topic (for mature topics), to develop new theoretical frameworks and perspectives, or to create preliminary conceptualisations and theoretical models (for emerging topics) (Torraco, 2005). In the case of crowdsourcing in science, an integrative review seems insufficient. It should be noted that the selection of an appropriate approach to the literature review is important for obtaining answers to the research question or questions (Webster & Watson, 2002). Thus, a mistake made at the stage of selecting the type of literature review may have further consequences in the form of omitting publications that are important to a given topic.

With regard to the analysis of the publications collected as part of a systematic literature review, it is possible to choose between a qualitative and a quantitative approach. We agree that the choice of analysis depends on the personal judgement of the analyst, the researcher's understanding of the research, and the purpose of the research (Shelby & Vaske, 2008). With regard to crowdsourcing in science, we decided to employ a qualitative approach. This resulted from the intended purpose of our review: we wanted to identify the current state of research and the directions of future research on crowdsourcing in science. A qualitative systematic review combines research on a topic, systematically seeking research evidence from primary qualitative research and pooling the results (Seers, 2015).

DILEMMA 2. THE NEED FOR A SYSTEMATIC LITERATURE REVIEW

The systematic literature review is considered in the management sciences to be the “new standard” (Hiebl, 2021, p. 1), the gold standard (Davis et al., 2014), the cornerstone of any research process (Williams et al., 2021), and “the most reliable and comprehensive report on what is done” (Petrosino et al., 2001, p. 20) for several reasons. First, a systematic literature review is combined with an objective, transparent, and rigorous approach to the entire research process. The requirement to formulate a research question or questions and to adopt specific search strategies and inclusion/exclusion criteria makes this type of literature review clearly show the path to identifying previous research, integrating it, and summarising what is known in a given area (Linnenluecke et al., 2020). Secondly, a systematic literature review helps identify gaps in the literature that provide space for developing or testing new research ideas. Finally, it may prove useful for assessing the quality of publications, including their internal validity, i.e. the extent to which the publications are free from major methodological errors, such as selection bias (the selective choice of publications for research, which causes a difference in their characteristics), response bias (the tendency to give inaccurate or false answers to questions), attrition bias (errors related to the withdrawal or exclusion of a publication from the study), and observer bias (the unconscious distortion of the review results by a researcher expecting a particular result).

In addition to the many postulates encouraging systematic literature reviews, the literature also points to an excess (Ioannidis, 2016), multiplication, and duplication of such reviews. The question then arises of whether another systematic review of the crowdsourcing in science literature is needed. The literature indicates that a review is needed if one has not already been done. There is a high probability that someone has already published a systematic review of the literature on the topic of interest. Even if one or more systematic literature reviews on a given topic exist, the researcher can still conduct their own. In the systematic literature reviews to date, the researcher should pay attention to the approaches used by other researchers, the limitations signalled, and the conclusions drawn. The limitations in particular may form the basis of the researcher's current efforts; on their basis, the researcher can build their review questions. During the initial search, no other systematic reviews of the crowdsourcing in science literature were identified, which formed the basis for further work.

In the event of an increase in the number of available literature reviews, the appropriate step is to compare and contrast their results. In this approach, umbrella reviews adopt explicit and systematic methods of searching for and identifying multiple systematic reviews and meta-analyses in order to compare and contrast the results of individual reviews and to provide an overall picture of the results for a specific research question (Fusar-Poli & Radua, 2018). Although this type of review is most popular in medicine, management science already has a solid and rigorous umbrella review (see Klimas, Czakon & Friedrich, 2021).

DILEMMA 3. ERRORS IN THE IMPLEMENTATION OF THE SYSTEMATIC LITERATURE REVIEW PROCEDURE

The systematic literature review procedure includes several stages during which the researcher decides whether to include or exclude each publication. Various procedures are recommended in the literature, comprising ten, eight, six, five, or three stages (after: Klimas et al., 2020). Despite the differences between the procedures adopted by different authors, three main stages and nine steps are usually indicated (for more, see Lenart-Gansiniec, 2021): (1) planning the review, (2) conducting the review, and (3) reporting the review (Xiao & Watson, 2019). At each of these stages, the researcher makes choices (steps) that are important to the success of the entire literature review.

As part of the review planning process, the first stage is to develop a review protocol (Czakon, 2011), taking into account the purpose of the study, the research question or questions, inclusion and exclusion criteria, search strategies, criteria for assessing the quality of the collected publications, and data extraction, synthesis, and reporting strategies (Linnenluecke et al., 2020). Mistakes made in this process may contribute to obtaining different answers to the same research questions. For example, by selecting a particular keyword or source, a researcher may obtain an erroneous or distorted sample and thus accumulate literature that is irrelevant to the purpose of the literature review; subsequently, a wrong conclusion about the cognitive gap may be reached. The second stage includes the following steps: collecting the basic literature based on the review protocol, the selection and preliminary evaluation of the collection of publications, their quality evaluation, data extraction, and analysis and synthesis (Czakon, 2011). Once the goal has been established, specific research questions chosen, and the review protocol developed, it is time to conduct the actual review. Before this happens, however, a pilot test is recommended to verify the correctness of the developed protocol. At this early stage of the search, comprehensiveness is more important than precision. The inclusion criteria for a systematic review require a level of detail that many peer-reviewed articles do not contain, which ultimately contributes to their omission. It should be noted that the process of a systematic literature review may be iterative, because during the second stage it may be necessary to narrow down or expand the research question or change the inclusion criteria (Xiao & Watson, 2017). There may also be a risk of researcher bias when searching for and coding publications (Papaioannou et al., 2010).
In addition, the assessment of the suitability of a given publication for further analysis is based on its abstract (Brereton et al., 2007). If the abstract does not contain sufficient information, it is necessary to read the part of the publication containing the conclusions.
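The protocol components listed at the start of this stage can be collected into a simple checklist. A minimal sketch in Python: the field names and example values here are assumptions for illustration, not the protocol actually used in this review; only the list of components itself follows the source (Linnenluecke et al., 2020).

```python
# A hypothetical review-protocol template; field names and example values
# are assumptions for illustration only.
review_protocol = {
    "purpose": "identify the state of research on crowdsourcing in science",
    "research_questions": [
        "What is the current state of research on crowdsourcing in science?",
        "What are the directions of future research on crowdsourcing in science?",
    ],
    "inclusion_criteria": ["peer-reviewed", "published in English"],
    "exclusion_criteria": ["editorials", "book reviews"],
    "search_strategies": ["database-based", "complementary", "alternative"],
    "quality_criteria": "internal validity checklist",
    "extraction_fields": ["authors", "year", "topic", "study type", "findings"],
    "synthesis": "qualitative",
    "reporting": "narrative report",
}

def protocol_is_complete(protocol):
    """Return True if every required protocol component is present and non-empty."""
    required = {
        "purpose", "research_questions", "inclusion_criteria",
        "exclusion_criteria", "search_strategies", "quality_criteria",
        "extraction_fields", "synthesis", "reporting",
    }
    return required <= protocol.keys() and all(protocol[k] for k in required)

print(protocol_is_complete(review_protocol))  # True
```

A pilot test of the protocol, as recommended above, can then amount to running the search with these settings on a small sample and checking that the hits match expectations.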

As part of the last, third stage, the report is prepared and the results are disseminated (Czakon, 2011). The data extracted during the second stage can take the form of descriptive information, such as authors, years of publication, topic, or type of study, or can take the form of outcomes and findings; it can also take the form of a conceptualisation of an idea or a theoretical perspective. It is also indicated that conducting a systematic literature review may turn out to be a tedious and time-consuming method for a novice researcher. On average, the process takes between 12 and 24 months, with a researcher spending about 60–100 hours developing the search strategy for a systematic review (Khangura et al., 2012). Additionally, selection, response, and attrition errors may occur during the preparation of the report.

DILEMMA 4. INCORRECT FORMULATION OF THE RESEARCH QUESTION(S)

According to the methodology of the systematic literature review, before starting to identify literature related to a given topic, it is necessary to select the research question or questions (Briner & Denyer, 2012), which constitute the basis for establishing a search strategy. In other words, research questions clearly reveal and define the boundaries of the research and ensure its consistency. Bearing in mind the above, in the analysed empirical context of crowdsourcing in science, the following two research questions were proposed:

P1. What is the current state of research on crowdsourcing in science?

P2. What are the directions of future research on crowdsourcing in science?

The form of the formulated research questions resulted from the perceived need to identify, evaluate, and interpret all available publications and the willingness to find cognitive and research gaps. After defining the research question or questions, the keywords are then identified.

The literature indicates that keywords should be related to the research question or questions (Xiao & Watson, 2017). Given the decision to perform a preliminary search, the first keyword was “crowdsourcing in science”. Before proceeding with the initial search, it is also important to choose the electronic databases. The literature indicates the necessity of referring to at least two databases (Green et al., 2006), because the use of too few databases may threaten the generalisability and accuracy of the results. Drawing on several databases can compensate for the weaknesses and limitations of any single database (Younger, 2010). Besides, there is no perfect database (Shaffril et al., 2021), and no single database is comprehensive (Xiao & Watson, 2019). The choice of database largely depends on the examined entity (Tielen et al., 2016). The literature indicates that publisher databases tend to provide fewer hits than multidisciplinary databases (Hiebl, 2021).

The choice of electronic databases must be transparent and substantively justified. Thus, when selecting the databases, their comprehensiveness was taken into account. Ultimately, the initial search was based on foreign electronic databases, namely Scopus and Web of Science. This approach was adopted for several reasons. Scopus is a multi-domain database that covers a wide range of publications and offers quick basic and advanced searches (Falagas et al., 2008). Web of Science, in turn, compared to other databases such as ProQuest or Emerald, is recommended for its robustness, convenient interface, and various sorting functions. Based on the initial search, in which the filtering criterion was a keyword derived from the research question (Kitchenham & Charters, 2007), the following numbers of hits were obtained: 3 (Web of Science) and 8 (Scopus). Taking into account the obtained results and their redundancy, a potential error related to the selection of an overly broad research question was identified. As mentioned, the research question is the driving force of the entire systematic literature review procedure, because the correctness of the review depends on its formulation (Kitchenham & Charters, 2007; Shaffril et al., 2021). Some authors argue that research questions should not be too general (Cronin et al., 2008), as this may result in obtaining too much data that does not answer the research question, as well as difficulties in comparing and managing the data in question. However, bearing in mind that crowdsourcing in science is a relatively new concept, it was assumed that too detailed a research question might result in obtaining too few publications, which is in line with the findings of Petticrew and Roberts (2012).

In response to these limitations, some researchers suggest formulating a specific research question (Okoli, 2015) or selecting a sub-topic for the review (Xiao & Watson, 2019). In order to identify sub-topics, Brereton et al. (2007) suggest preliminary mapping. Considering that the search for the right terminology becomes a search for certain ideas and concepts, and that searches should be optimised in order to obtain the largest possible number of (presumably) relevant documents, it was decided that the research questions would not be changed, and work proceeded to the next requirement under the protocol, i.e. developing a search strategy.

DILEMMA 5. INCORRECT SELECTION OF THE SEARCH STRATEGY

In addition to the research question, a systematic literature review requires an a priori literature search strategy (Atkinson & Cipriani, 2018). The search strategy is an iterative and flexible process and should define how the relevant literature will be identified. The search strategy used must be clear and detailed enough that it could be reproduced using the same methodology, with exactly the same results, or updated at a later date. This not only allows the verification of its course, but also the identification of potential sources of search bias that may be manifested in the review results and in the quality of the conclusions drawn from them.

Researchers identify three strategies to be used when searching for literature: (1) database-based, (2) complementary, and (3) alternative (Ferguson & Brannick, 2012). The database-based strategy takes into account the following activities: database selection; setting inclusion and exclusion criteria (e.g. publication language, publication type, status); and the use of keywords, masking characters, and Boolean expressions (Moher et al., 2009). With respect to Boolean logic, the operators “AND”, “OR”, and “NOT” may be used in database searches. When designing and testing suitable search combinations, it may be helpful to use approaches described in the literature or to adapt them from previous systematic literature reviews on similar topics. However, the search strategy reported in another review describes what was done at a given time and may not be what the authors would do if they decided to repeat the review in retrospect. In addition, the number of publications grows incrementally, and the same search terms may behave differently in the same database over time.
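As an illustration of the database-based strategy, the sketch below composes a Boolean search string from groups of synonyms. The helper function and the example terms are assumptions for illustration; the exact syntax for quoting phrases, masking characters, and operators varies between databases.

```python
# Sketch: build a Boolean search string. Synonyms within a group are joined
# with OR, groups are joined with AND, and excluded terms are appended with NOT.
def build_query(synonym_groups, exclude=()):
    groups = [
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in synonym_groups
    ]
    query = " AND ".join(groups)
    for term in exclude:
        query += f' NOT "{term}"'
    return query

query = build_query(
    [["crowdsourcing", "crowd science"], ["research", "academ*"]],  # "academ*" uses a masking character
    exclude=["citizen"],
)
print(query)
# ("crowdsourcing" OR "crowd science") AND ("research" OR "academ*") NOT "citizen"
```

Keeping the query assembled from explicit term lists in this way makes the search string easy to report verbatim, which supports the reproducibility requirement discussed above.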

Complementary strategies support the identification of publications that were omitted for various reasons in database searches, for example due to differences in terminology or database coverage. These strategies include backward search and forward search. When searching backwards, the researcher refers to the list of references at the end of a given publication. This strategy is based on discovering and obtaining publications crucial for the current research and then “wandering” through subsequent publications in order to find other relevant ones. This process should continue until a clear saturation effect appears, i.e. the researcher detects fewer and fewer new, previously unfound publications on a given topic.

Two techniques are possible within the backward search: “pearl farming” and the “snowball” technique (Greenhalgh & Peacock, 2005). The pearl-farming technique consists of isolating the article most important from the point of view of the systematic literature review, which then becomes the reference point in the search. The snowball technique, on the other hand, consists of locating the relevant source article and using it as a starting point from which to move back through the references in that article.
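The snowball technique described above can be sketched as a simple traversal of reference lists that stops at saturation, i.e. when no new relevant publications appear. The `references_of` lookup, the `relevant` filter, and the toy citation graph below are hypothetical; in practice the references would come from the publications themselves or from a bibliographic database.

```python
# Sketch of backward snowballing: starting from seed ("pearl") publications,
# follow reference lists, keeping only relevant items, until none are new.
def backward_snowball(seeds, references_of, relevant):
    found = set(seeds)
    frontier = list(seeds)
    while frontier:  # the loop ends at saturation: no unexplored publications remain
        paper = frontier.pop()
        for ref in references_of(paper):
            if relevant(ref) and ref not in found:
                found.add(ref)
                frontier.append(ref)
    return found

# Toy citation graph for illustration only.
refs = {"pearl": ["a", "b"], "a": ["c"], "b": [], "c": ["a"]}
result = backward_snowball(["pearl"], lambda p: refs.get(p, []), lambda p: True)
print(sorted(result))  # ['a', 'b', 'c', 'pearl']
```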

A forward search, on the other hand, involves searching the later publications of the leading authors who have contributed to the body of work, so that the researcher can ensure that all relevant publications are included. Additionally, as part of complementary strategies, searches for non-indexed publications, i.e. grey literature, are indicated (Ferguson & Brannick, 2012). Alternative strategies involve searching for potential publications on non-academic websites, in database archives and grey literature repositories (e.g. OSF Preprints, osf.io/preprints; OpenGrey, opengrey.eu; China Knowledge Resource Integrated Database; Korean Studies Information Service System; Sabinet for Southern African research; EThOS; Theses Canada; Australian Policy Online; VHL Regional Portal; NORART), and on conference agendas, or contacting other researchers and experts via mailing lists. Although these strategies improve the comprehensiveness of the review, they make the collection process more blurred and thus make it more difficult to ensure the replicability criterion (note: replicability is one of the underlying criteria of an SLR; Denyer & Tranfield, 2009).

The choice of a specific literature search strategy depends on the research question or questions and on the researcher, including time, access to databases, and other sources of information. As indicated by Egger et al. (2003), it is important that “researchers consider the type of literature search and the degree of understanding that is appropriate for a given review, given budget and time constraints”. It is also important to avoid the omission of key studies and to minimise bias that may affect the precision and accuracy of the results, the interpretation of the data, and the conclusions drawn from the research results. In practice, this means that the chosen strategy should make it possible to find all records relevant from the point of view of the research while devoting as little time and as few resources as possible. In the context of crowdsourcing in science, it was decided to choose all three strategies, which resulted from the desire to gain access to all potential sources of information.

DILEMMA 6. INCORRECT MATCHING OF KEYWORDS

In order to extract publications from databases for analysis, it is necessary to identify keywords. The response to the results of the preliminary search was the possibility of extending the previously adopted keyword “crowdsourcing in science” with synonyms, alternative spellings, or related terms (Kitchenham & Charters, 2007). The process of identifying synonyms or related words can be carried out using a thesaurus. With regard to crowdsourcing, thesaurus searches did not produce results; therefore, it was decided to identify synonyms and related concepts in another way. The suggestion of Shaffril et al. (2021, p. 1331) that “researchers should choose suggested synonyms carefully as not all suggested terms are appropriate” and that they can “refer to keywords used in previous studies (. . .) to get some ideas” was taken into account. Therefore, a search for synonyms and related terms was carried out. For this purpose, publications on crowdsourcing in science were used, and in identifying them a mixed approach was adopted (searching electronic databases and the so-called grey literature using the Google Scholar search engine). Such multiple paths are important because only on the basis of an objective and representative body of literature is it possible to generalise about the state of a specific research field or answer the formulated research question(s).

In the first stage, a dictionary of synonyms was used (https://www.collinsdictionary.com/), which made it possible to find the following synonyms: “academic”, “science”, and “research”. Then, the databases (Web of Science, Scopus) and the Google Scholar search engine were searched using Boolean logic and the “OR” operator. The following keywords were finally identified: “scientific crowdsourcing”, “crowdsourcing”, “online citizen science”, “crowdsourcing citizen science”, “crowdsourced science”, “crowdsourcing science”, “citizen cyberscience”, “virtual citizen science”, “crowd science”, “crowd research”, “crowdsourcing in the science”, “crowdsourcing for research”, “crowdsourcing for science”, and “academic crowdsourcing”. After identifying the keywords, the databases (Web of Science and Scopus) were searched again, and the following results were finally obtained: 13 (Web of Science) and 10 (Scopus).

DILEMMA 7. LITERATURE DEFICIENCY

Due to the number of hits obtained, it was considered necessary to identify potentially relevant keywords that would be used in the proper search and collection of literature (Hiebl, 2021). It was taken into account that researchers should strike a balance between exhaustiveness and precision when choosing keywords, and that at this early stage of the search it is more important to obtain comprehensive hits that will identify the current state of knowledge on the topic. It was nevertheless decided to remove the keywords containing the word “citizen”, because the literature indicates that crowdsourcing in science is not rooted in citizens' involvement in the scientific process. It was therefore concluded that crowdsourcing in science and citizen science are not synonymous or related concepts. Using the two databases (Scopus and Web of Science), the following search results were obtained: 32 (Web of Science) and 20 (Scopus).
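The narrowing step described above, i.e. dropping the keywords that contain the word “citizen” from the synonym set identified earlier, can be sketched as a simple filter; the final OR-combined string shown here assumes a generic database syntax.

```python
# The fourteen keywords identified in the previous step.
keywords = [
    "scientific crowdsourcing", "crowdsourcing", "online citizen science",
    "crowdsourcing citizen science", "crowdsourced science",
    "crowdsourcing science", "citizen cyberscience", "virtual citizen science",
    "crowd science", "crowd research", "crowdsourcing in the science",
    "crowdsourcing for research", "crowdsourcing for science",
    "academic crowdsourcing",
]

# Drop every keyword containing "citizen", then combine the rest with OR.
narrowed = [k for k in keywords if "citizen" not in k]
query = " OR ".join(f'"{k}"' for k in narrowed)

print(len(narrowed))  # 10 keywords remain
```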

DILEMMA 8. OMISSION OF SIGNIFICANT PUBLICATIONS

As Randolph (2009, p. 7) points out, “electronic searches only get you about ten percent of your articles”. This is confirmed by the findings of Greenhalgh and Peacock (2005), who believe that electronic databases can identify only 30 percent of the scientific publications on a given topic. Additionally, no database covers the full set of publications (Xiao & Watson, 2019), and the use of even several databases will not eliminate the risk of missing important publications. Moreover, when a given issue is relatively new, it is necessary to identify all sources (Kraus et al., 2020). Bearing in mind the above limitations, further searches were carried out based on the recommended complementary and alternative strategies. At the same time, the researcher must be aware that the use of multiple search strategies may reduce the transparency of a systematic review. This does not mean that these strategies should be abandoned; rather, the emphasis is placed on providing detailed information on the search strategies used in the report on the systematic literature review.

First, the reverse search was supplemented with a forward search by searching the publications of the main authors who contributed to crowdsourcing in science (Webster & Watson, 2002). For this purpose, as recommended, the Google Scholar search engine was used (Hiebl, 2021). Google Scholar was employed because it is one of the most popular search engines in the world and can supplement and support the literature search process. In addition, Google Scholar provides access to content other than that offered in multi-publisher databases. Second, as an alternative strategy, conference agendas were reviewed, and speakers were then contacted through the ResearchGate portal. Under the complementary and alternative strategies, a total of 60 hits were obtained.

DILEMMA 9. CONSTANT SEARCHING

Searching for literature using a variety of strategies should be rigorous and exhaustive, but it is difficult to know when to stop searching while still striving for rigour and saturation (Shaffril et al., 2021). One solution to this problem is to stop the search when using the same keywords in different databases and search engines no longer brings new results. Another solution is the Capture-Mark-Recapture technique (Kastner et al., 2007), which consists of initially searching, downloading, and marking publications, and then re-searching and checking how many previously marked publications reappear. This technique is widely used in ecology to estimate the number of fish in water reservoirs and has been adapted for research in science and health (Kastner et al., 2007). Because the Capture-Mark-Recapture technique has not been used in systematic literature reviews in management sciences, in the context of crowdsourcing in science it was ultimately decided to rely on theoretical saturation: the researcher notices during the search that the same results keep recurring, which justifies a statement about saturation. Thus, if no new information can be obtained from new results, the search can be terminated. As a result, 65 hits were obtained.
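The Capture-Mark-Recapture idea can be made concrete with the classic Lincoln-Petersen estimator: if M publications are found and marked in a first search, a second independent search returns C publications, and R of those were already marked, the total relevant population is estimated as N ≈ (M × C) / R. A minimal sketch with hypothetical counts (the figures below are illustrative only, not the counts reported in this study):

```python
def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Lincoln-Petersen population estimate: N ~= (M * C) / R."""
    if recaptured == 0:
        raise ValueError("no recaptures: the estimate is undefined")
    return marked_first * caught_second / recaptured

# Hypothetical numbers: 40 publications marked in the first search,
# 30 found in a second independent search, 24 of which were already marked.
estimate = lincoln_petersen(40, 30, 24)
print(estimate)  # 50.0 -> roughly 50 relevant publications are estimated to exist
```

The closer the estimate is to the number of publications already collected, the stronger the case for stopping the search; a large gap suggests further searching is needed.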

DILEMMA 10. IDENTIFICATION OF PUBLICATIONS INCONSISTENT WITH THE RESEARCH QUESTION

After determining the keywords and search strategy, it is necessary to select the inclusion and exclusion criteria for publications to be analysed further. These criteria allow the identification of potentially relevant research items from the point of view of the research question(s) and enable readers to understand the potential sources of bias in the researcher’s search (Snyder, 2019). These criteria should reveal the exact reasons why a particular piece of research is included in or excluded from the review.

The literature indicates that the selection of inclusion and exclusion criteria should take into account the aspect of complexity (Xiao & Watson, 2019), practicality (Kitchenham & Charters, 2007; Okoli & Schabram, 2010), and the possibility of answering the formulated research question(s) (Kitchenham & Charters, 2007; Okoli, 2015; Johnson & Hennessy, 2019). Some authors suggest that the inclusion criteria should cover the language, type of research, type of publication, and date of publication (Kraus et al., 2020). With regard to language, it is postulated to choose one that allows for international reach. In the context of the type of research, it is pointed out that it is possible to include only empirical research and to exclude other reviews in order to avoid double counting of studies. In addition, it is possible to include post-conference materials and book chapters in order to increase the chances of reaching items valuable from the point of view of the research question(s). In the case of the timeline, less mature research needs a longer time horizon to trace a large number of articles (Kraus et al., 2020). However, if the period covered is limited, it is necessary to disclose the precise and well-structured reasons for such a limitation in order to make the systematic review transparent (Hiebl, 2021). Regardless of the choice of criteria, it is important to include the justification and motives for the decisions made (Snyder, 2019).

In order to further narrow the search to publications that directly focus on research on crowdsourcing in science, the following five inclusion criteria were used:

(1) subject – publications should include in the title and/or abstract the word “crowdsourced science” OR “crowdsourcing science” OR “crowd science” OR “crowd research” OR “crowdsourcing” OR “scientific crowdsourcing in the science” OR “crowdsourcing for research” OR “crowdsourcing for science” OR “academic crowdsourcing”;

(2) study design – empirical research only, as empirical evidence on crowdsourcing in science was found interesting. Systematic reviews have been excluded to avoid double counting of studies;

(3) year of publication – works published in the period from 2006 to October 2021 were taken into account, where the starting date is related to the first publication on crowdsourcing;

(4) language – only publications in English were included, as efforts were made to obtain publications in a generally accepted language;

(5) publication status – only international, peer-reviewed, full-text publications were included, as the emphasis was on their reliability.
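The five criteria above amount to a screening predicate applied to each record. A minimal Python sketch, assuming hypothetical field names (`type`, `year`, `language`, `peer_reviewed`) and an abbreviated keyword list; a real screening pass would use the full query terms and typically involve manual abstract checks as well:

```python
# Screening records against the five inclusion criteria.
# Field names and sample records are hypothetical.

KEYWORDS = ("crowdsourced science", "crowdsourcing science", "crowd science",
            "crowd research", "crowdsourcing", "academic crowdsourcing")

def meets_criteria(rec: dict) -> bool:
    text = (rec.get("title", "") + " " + rec.get("abstract", "")).lower()
    return (any(kw in text for kw in KEYWORDS)        # (1) subject
            and rec.get("type") == "empirical"        # (2) study design
            and 2006 <= rec.get("year", 0) <= 2021    # (3) year of publication
            and rec.get("language") == "en"           # (4) language
            and rec.get("peer_reviewed", False))      # (5) publication status

records = [
    {"title": "Crowd science at scale", "abstract": "", "type": "empirical",
     "year": 2018, "language": "en", "peer_reviewed": True},
    {"title": "Crowd science: a review", "abstract": "", "type": "review",
     "year": 2019, "language": "en", "peer_reviewed": True},
]
included = [r for r in records if meets_criteria(r)]
print(len(included))  # 1 record passes; the review is excluded by criterion (2)
```

Encoding the criteria as an explicit predicate also serves the transparency goal discussed above: the exact reasons for inclusion or exclusion are stated unambiguously.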

The literature postulates the need to conduct interdisciplinary research on crowdsourcing, spanning sociology, psychology, management sciences, economics, computer science, and artificial intelligence; hence it was decided not to narrow the search to the category of “business, economy, management”. During this step, 60 hits were identified and further analysed. When analysing previous publications on the methodology of systematic literature review, it is difficult to find a definitive position on the minimum number of literature items to be analysed (Kitchenham & Charters, 2007; Shaffril et al., 2021). However, there are some recommendations. For example, Short (2009, p. 1312) suggests that a review is warranted when “there is an accumulation of many conceptual and empirical articles without prior review efforts or synthesis of previous works” and points to 50 publications as the minimum number of research items (Short, 2016). Shaffril et al. (2021) recognise that this threshold (up to 50 publications) may seem arbitrary. Where fewer publications are identified, some authors believe that a systematic literature review may constitute material for assessing the state of the field (Linnenluecke et al., 2020; Rojon et al., 2011) but cannot stand as a separate scientific publication (Hiebl, 2021). According to the methodology of systematic literature review, abstracts must next be verified, which will narrow the publication base down to items focused strictly on crowdsourcing in science. Thus, the review should move to the second stage of a systematic literature review.

CONCLUSION

A properly conducted systematic review of the literature can, among other things, be the basis for identifying gaps in the literature and developing or testing new ideas for research or assessing the quality of publications. However, the value and quality of a systematic review depends on rigorous methods and the correctness of organisation and planning. Dilemmas may arise at the stage of planning a systematic literature review. They are related to the multitude of decisions that the researcher makes before starting a literature review. Based on the empirical context of crowdsourcing in science, the following ten dilemmas were identified: (1) incorrectly selected type of literature review, (2) the need for a systematic literature review, (3) errors in the implementation of the systematic literature review procedure, (4) incorrect formulation of the research question(s), (5) incorrect selection of the search strategy, (6) incorrect matching of keywords, (7) literature deficiency, (8) omission of significant publications, (9) constant searching, (10) identification of publications inconsistent with the research question.

The identified dilemmas are consistent with the findings of other authors (Hiebl, 2021; Shaffril et al., 2021) and supplement the existing literature on planning a systematic literature review (Bramer et al., 2018). However, the presented findings are not without limitations. The main limitation is the focus on the empirical context of crowdsourcing in science, which is a relatively new concept. As a result, dilemmas that researchers face when analysing mature concepts may have been overlooked. It is also important to be aware of the dilemmas that researchers may face during the two remaining stages of the systematic literature review (conducting and reporting). All of this constitutes a promising direction for further research on such dilemmas.