Journal and Issue

AHEAD OF PRINT

Volume 8 (2023): Issue 3 (June 2023)

Volume 8 (2023): Issue 2 (April 2023)

Volume 8 (2023): Issue 1 (February 2023)

Volume 7 (2022): Issue 4 (November 2022)

Volume 7 (2022): Issue 3 (August 2022)

Volume 7 (2022): Issue 2 (April 2022)

Volume 7 (2022): Issue 1 (February 2022)

Volume 6 (2021): Issue 4 (November 2021)

Volume 6 (2021): Issue 3 (June 2021)

Volume 6 (2021): Issue 2 (March 2021)

Volume 6 (2021): Issue 1 (February 2021)

Volume 5 (2020): Issue 4 (November 2020)

Volume 5 (2020): Issue 3 (August 2020)

Volume 5 (2020): Issue 2 (April 2020)

Volume 5 (2020): Issue 1 (February 2020)

Volume 4 (2019): Issue 4 (December 2019)

Volume 4 (2019): Issue 3 (August 2019)

Volume 4 (2019): Issue 2 (May 2019)

Volume 4 (2019): Issue 1 (February 2019)

Volume 3 (2018): Issue 4 (November 2018)

Volume 3 (2018): Issue 3 (August 2018)

Volume 3 (2018): Issue 2 (May 2018)

Volume 3 (2018): Issue 1 (February 2018)

Volume 2 (2017): Issue 4 (December 2017)

Volume 2 (2017): Issue 3 (August 2017)

Volume 2 (2017): Issue 2 (May 2017)

Volume 2 (2017): Issue 1 (February 2017)

Volume 1 (2016): Issue 4 (November 2016)

Volume 1 (2016): Issue 3 (August 2016)

Volume 1 (2016): Issue 2 (May 2016)

Volume 1 (2016): Issue 1 (February 2016)

Journal Details
Format
Journal
eISSN
2543-683X
First Published
30 Mar 2017
Publication timeframe
4 times per year
Languages
English


Volume 1 (2016): Issue 4 (November 2016)



Perspective

Open Access

Data-driven Discovery: A New Era of Exploiting the Literature and Data

Published online: 01 Sep 2017
Pages: 1–9

Abstract

In the current data-intensive era, the traditional hands-on method of conducting scientific research by exploring related publications to generate a testable hypothesis is well on its way to becoming obsolete within just a year or two. Analyzing the literature and data to automatically generate a hypothesis might become the de facto approach to inform the core research efforts of those trying to master the exponentially rapid expansion of publications and datasets. Here, viewpoints are provided and discussed to aid understanding of the challenges of data-driven discovery.

The Panama Canal, the 77-kilometer waterway connecting the Atlantic and Pacific oceans, has played a crucial role in international trade for more than a century. However, digging the Panama Canal was an exceedingly challenging process. A French effort in the late 19th century was abandoned because of equipment issues and a significant loss of labor due to tropical diseases transmitted by mosquitoes. The United States officially took control of the project in 1902. The United States replaced the unusable French equipment with new construction equipment that was designed for a much larger and faster scale of work. Colonel William C. Gorgas was appointed as the chief sanitation officer and charged with eliminating mosquito-spread illnesses. After overcoming these and additional trials and tribulations, the Canal successfully opened on August 15, 1914. The triumphant completion of the Panama Canal demonstrates that using the right tools and eliminating significant threats are critical steps in any project.

More than 100 years later, a paradigm shift is occurring, as we move into a data-centered era. Today, data are extremely rich but overwhelming, and extracting information out of data requires not only the right tools and methods but also awareness of major threats. In this data-intensive era, the traditional method of exploring the related publications and available datasets from previous experiments to arrive at a testable hypothesis is becoming obsolete. Consider the fact that a new article is published every 30 seconds (Jinha, 2010). In fact, for the common disease of diabetes, there have been roughly 500,000 articles published to date; even if a scientist reads 20 papers per day, they would need 68 years to wade through all the material. The standard method simply cannot sufficiently deal with the large volume of documents or the exponential growth of datasets. A major threat is that the canon of domain knowledge cannot be consumed and held in human memory. Without efficient methods to process information and without a way to eliminate the fundamental threat of limited memory and time to handle the data deluge, we may find ourselves facing failure as the French did on the Isthmus of Panama more than a century ago.
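The arithmetic behind the 68-year figure is easy to verify; the article count and reading rate are the estimates quoted above:

```python
# Back-of-the-envelope check of the reading-time claim:
# ~500,000 diabetes articles, read at 20 papers per day.
ARTICLES = 500_000
PAPERS_PER_DAY = 20

days = ARTICLES / PAPERS_PER_DAY   # 25,000 days of reading
years = days / 365.25              # convert to years
print(f"{years:.1f} years")        # → 68.4 years
```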

Scouring the literature and data to generate a hypothesis might become the de facto approach to inform the core research efforts of those trying to master the exponentially rapid expansion of publications and datasets (Evans & Foster, 2011). In reality, most scholars have never been able to keep completely up-to-date with publications and datasets considering the unending increase in quantity and diversity of research within their own areas of focus, let alone in related conceptual areas in which knowledge may be segregated by syntactically impenetrable keyword barriers or an entirely different research corpus.

Research communities in many disciplines are finally recognizing that with advances in information technology there needs to be new ways to extract entities from increasingly data-intensive publications and to integrate and analyze large-scale datasets. This provides a compelling opportunity to improve the process of knowledge discovery from the literature and datasets through use of knowledge graphs and an associated framework that integrates scholars, domain knowledge, datasets, workflows, and machines on a scale previously beyond our reach (Ding et al., 2013).

Expert Review

Open Access

Under-reporting of Adverse Events in the Biomedical Literature

Published online: 01 Sep 2017
Pages: 10–32

Abstract

Purpose

To address the under-reporting of research results, with emphasis on the under-reporting/distorted reporting of adverse events in the biomedical research literature.

Design/methodology/approach

A four-step approach is used: (1) to identify the characteristics of literature that make it adequate to support policy; (2) to show how each of these characteristics can become degraded, yielding inadequate literature; (3) to identify incentives to prevent inadequate literature; and (4) to show the policy implications of inadequate literature.

Findings

This review has provided reasons for, and examples of, adverse health effects of myriad substances (1) being under-reported in the premier biomedical literature, or (2) entering this literature in distorted form. Since there is no way to gauge the extent of this under-reporting and distorted reporting, the quality and credibility of the ‘premier’ biomedical literature are unknown. Therefore, any meta-analysis or scientometric analysis of this literature will likewise have unknown quality and credibility. The most sophisticated scientometric analysis cannot compensate for a highly flawed database.

Research limitations

The main limitation is in identifying examples of under-reporting. There are many incentives for under-reporting and few disincentives.

Practical implications

Almost all research publications, addressing causes of disease, treatments for disease, diagnoses for disease, scientometrics of disease and health issues, and other aspects of healthcare, build upon previously published healthcare-related research. Many researchers will not have laboratories or other capabilities to replicate or validate the published research, and depend almost completely on the integrity of this literature. If the literature is distorted, then future research can be misguided, and health policy recommendations can be ineffective or worse.

Originality/value

This review has examined a much wider range of technical and non-technical causes for under-reporting of adverse events in the biomedical literature than previous studies.

Keywords

  • Under-reporting
  • Publication bias
  • Reporting bias
  • Manufactured research
  • Research misconduct
  • Research malfeasance
  • Biomedical literature

Research Paper

Open Access

Topic Detection Based on Weak Tie Analysis: A Case Study of LIS Research

Published online: 01 Sep 2017
Pages: 81–101

Abstract

Purpose

Based on the weak tie theory, this paper proposes a series of connection indicators of weak tie subnets and weak tie nodes to detect research topics, recognize their connections, and understand their evolution.

Design/methodology/approach

First, keywords are extracted from article titles and preprocessed. Second, high-frequency keywords are selected to generate weak tie co-occurrence networks. By removing the internal lines of clustered sub-topic networks, we focus on the analysis of weak tie subnets’ composition and functions and the weak tie nodes’ roles.
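The keyword co-occurrence step described above can be sketched in a few lines; the keyword lists and the frequency threshold below are invented for illustration, and the paper's actual clustering and parameters are not reproduced here:

```python
from collections import Counter
from itertools import combinations

# Hypothetical preprocessed title keywords, one list per article.
articles = [
    ["citation", "analysis", "bibliometrics"],
    ["citation", "network", "bibliometrics"],
    ["topic", "detection", "network"],
]

# Step 1: keep only high-frequency keywords (threshold is illustrative).
freq = Counter(k for kws in articles for k in kws)
frequent = {k for k, n in freq.items() if n >= 2}

# Step 2: count pairwise co-occurrences of frequent keywords per article.
edges = Counter()
for kws in articles:
    kept = sorted(set(kws) & frequent)
    for a, b in combinations(kept, 2):
        edges[(a, b)] += 1

# A "weak tie" in this sketch is a low-weight edge between keywords.
weak_ties = {pair for pair, w in edges.items() if w == 1}
print(edges, weak_ties)
```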

Findings

The clusters and themes of research topics changed yearly; the subnets clustered around technique- and methodology-related topics have remained the core, important subnets for years; although close subnets are highly independent, research topics are generally concentrated and most are application-related; and the roles and functions of nodes and weak ties are diverse.

Research limitations

The parameter values are somewhat inconsistent; the weak tie subnets and nodes are classified based on empirical observations, and the conclusions are not verified or compared to other methods.

Practical implications

The research is valuable for detecting important research topics as well as their roles, interrelations, and evolution trends.

Originality/value

To contribute to the strength of weak tie theory, the research translates weak and strong ties concepts to co-occurrence strength, and analyzes weak ties’ functions. Also, the research proposes a quantitative method to classify and measure the topics’ clusters and nodes.

Keywords

  • Research topics
  • Weak tie network
  • Weak tie theory
  • Weak tie nodes
  • Library and Information Science (LIS)

Open Access

Open Peer Review in Scientific Publishing: A Web Mining Study of PeerJ Authors and Reviewers

Published online: 01 Sep 2017
Pages: 60–80

Abstract

Purpose

To understand how authors and reviewers are accepting and embracing Open Peer Review (OPR), one of the newest innovations in the Open Science movement.

Design/methodology/approach

This research collected and analyzed data from the Open Access journal PeerJ over its first three years (2013–2016). Web data were scraped, cleaned, and structured using several Web tools and programs. The structured data were imported into a relational database. Data analyses were conducted using analytical tools as well as programs developed by the researchers.

Findings

PeerJ, which supports optional OPR, has a broad international representation of authors and referees. Approximately 73.89% of articles provide full review histories. Of the articles with published review histories, 17.61% had identities of all reviewers and 52.57% had at least one signed reviewer. In total, 43.23% of all reviews were signed. The observed proportions of signed reviews have been relatively stable over the period since the Journal’s inception.
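The three reported proportions can be illustrated with a toy recomputation; the per-article review histories below are invented (an empty string marks an anonymous reviewer), but the three measures mirror those in the abstract: articles where all reviewers signed, articles with at least one signed reviewer, and the overall share of signed reviews.

```python
# Made-up review histories: one list of reviewer names per article.
histories = [
    ["A. Smith", "B. Jones"],   # all reviewers signed
    ["", "C. Lee"],             # at least one signed
    ["", ""],                   # none signed
]

# Articles where every reviewer signed, and where at least one signed.
all_signed = sum(all(r != "" for r in h) for h in histories)
any_signed = sum(any(r != "" for r in h) for h in histories)

# Share of signed reviews across all reviews.
reviews = [r for h in histories for r in h]
signed_share = sum(r != "" for r in reviews) / len(reviews)

print(all_signed, any_signed, round(signed_share, 2))  # → 1 2 0.5
```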

Research limitations

This research is constrained by the availability of the peer review history data. Some peer reviews were not available when the authors opted out of publishing their review histories. The anonymity of reviewers made it impossible to give an accurate count of reviewers who contributed to the review process.

Practical implications

These findings shed light on the current characteristics of OPR. Given the policy that authors are encouraged to make their articles’ review history public and referees are encouraged to sign their review reports, the three years of PeerJ review data demonstrate that there is still some reluctance by authors to make their reviews public and by reviewers to identify themselves.

Originality/value

This is the first study to closely examine PeerJ as an example of an OPR model journal. As Open Science moves further towards open research, OPR is a final and critical component. Research in this area must identify the best policies and paths towards a transparent and open peer review process for scientific communication.

Keywords

  • Open Peer Review (OPR)
  • Adoption of OPR
  • Open Access
  • Open Science
  • Open research
  • Scientific communication

Open Access

Mapping Diversity of Publication Patterns in the Social Sciences and Humanities: An Approach Making Use of Fuzzy Cluster Analysis

Published online: 01 Sep 2017
Pages: 33–59

Abstract

Purpose

To present a method for systematically mapping diversity of publication patterns at the author level in the social sciences and humanities in terms of publication type, publication language and co-authorship.

Design/methodology/approach

In a follow-up to the hard partitioning clustering by Verleysen and Weeren in 2016, we now propose the complementary use of fuzzy cluster analysis, making use of a membership coefficient to study gradual differences in publication styles among authors within a scholarly discipline. The analysis of the probability density function of the membership coefficient allows us to assess the distribution of publication styles within and between disciplines.
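A membership coefficient of the kind described can be sketched along the lines of fuzzy c-means, where an author's membership in a cluster falls off with distance to that cluster's centre; the distances, the two-cluster setup, and the fuzzifier value below are illustrative assumptions, not the study's actual data or parameters:

```python
def memberships(dists, m=2.0):
    """Fuzzy c-means-style membership of one point in each cluster.

    dists: distances from the point to each cluster centre (all > 0).
    m: fuzzifier; m = 2 gives membership proportional to 1 / d^2.
    """
    weights = [(1.0 / d) ** (2.0 / (m - 1.0)) for d in dists]
    total = sum(weights)
    return [w / total for w in weights]

# Author equidistant from the "international" and "domestic" styles:
print(memberships([1.0, 1.0]))   # → [0.5, 0.5]
# Author much closer to the first style (hypothetical distances):
print(memberships([0.5, 2.0]))
```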

Findings

As an illustration we analyze 1,828 productive authors affiliated in Flanders, Belgium. Whereas a hard partitioning previously identified two broad publication styles, an international one vs. a domestic one, fuzzy analysis now shows gradual differences among authors. Internal diversity also varies across disciplines and can be explained by researchers’ specialization and dissemination strategies.

Research limitations

The dataset used is limited to one country for the years 2000–2011; a cognitive classification of authors may yield a different result from the affiliation-based classification used here.

Practical implications

Our method is applicable to other bibliometric and research evaluation contexts, especially for the social sciences and humanities in non-Anglophone countries.

Originality/value

The method proposed is a novel application of cluster analysis to the field of bibliometrics. Applied to publication patterns at the author level in the social sciences and humanities, for the first time it systematically documents intra-disciplinary diversity.

Keywords

  • Bibliometrics
  • Social sciences and humanities
  • Publication patterns
  • Dissemination
  • Cluster analysis