
Thursday, February 3, 2022

Most ‘preprints’ were indeed reliable during the pandemic

 


A new review indicates that more than 83% of COVID-19 papers posted without peer review (‘preprints’) did not change in their final published versions, despite having been heavily questioned.


Los "preprints" el intercambio de manuscritos disponibles en repositorios antes de la revisión por pares, ha ido en aumento en las biociencias desde 2013 y experimentó un aumento durante la pandemia de COVID-19, lo que aceleró la difusión de investigaciones oportunas. Pero, ¿cómo se relacionan los denominados “preprints” con los artículos finales revisados por pares? 

Two new studies published this Tuesday in the open-access journal PLOS Biology took different approaches to explore how preprints posted on bioRxiv and medRxiv compare with their published versions.

The first study, led by Jonathon Coates of Queen Mary University of London, manually compared more than 180 preprints with their published versions over the first four months of the COVID-19 pandemic. The second, led by David Nicholson of the University of Pennsylvania, used machine learning and textual analysis to explore the relationships between nearly 18,000 bioRxiv preprints and their published versions.

Trust in science

Concerns about the quality of preprints have existed since the format emerged in the sciences. As Coates notes, “roughly 40% of early COVID-19 research was first shared as a preprint and was used in policy and public health decisions. Knowing the quality of these preprints is therefore vital for trust in science at a time when many are trying to erode that trust.”

Analysing public scientific preprint repositories also has the potential to illuminate many previously hidden details of the peer review process.

“Knowing the quality of these preprints is vital for trust in science at a time when many are trying to erode it”

Coates and colleagues compared all COVID-19 preprints posted within the first four months of the pandemic and found that more than 83% of COVID-related life-science papers, and 93% of those unrelated to COVID, did not change from the final published versions of the studies.

Comparing the full bioRxiv corpus with the eventually published versions, Nicholson and colleagues found that many of the differences involve typesetting and the addition of supplementary materials; for most manuscripts, linguistic features changed only modestly during publication and peer review.

In addition, Nicholson and his team built a website that uses their machine learning tool to recommend candidate journals that publish linguistically similar articles.
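The recommendation idea is straightforward: represent each manuscript as a vector and rank journals by how close their typical papers sit to it. A minimal sketch of that idea follows; the published tool is built on trained document embeddings, so the TF-IDF vectors and the toy journal corpus below are stand-in assumptions, not the tool's actual method.

```python
# Minimal sketch of "recommend journals that publish linguistically similar
# papers". The real tool uses trained document embeddings; here a TF-IDF
# vectoriser and an invented toy corpus stand in for both.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one representative text per journal.
journal_texts = {
    "Journal A": "genome sequencing variant calling assembly pipelines",
    "Journal B": "epidemiology transmission modelling case fatality rates",
    "Journal C": "protein structure folding molecular dynamics simulations",
}

query = "SARS-CoV-2 transmission dynamics estimated from case data"

# Vectorise the journal corpus and the query manuscript together.
matrix = TfidfVectorizer().fit_transform(list(journal_texts.values()) + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank journals by linguistic similarity to the query.
for name, score in sorted(zip(journal_texts, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

Swapping the TF-IDF step for a trained embedding model changes the representation but not the nearest-neighbour ranking logic.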

Reliable preprints

“Together, our studies provide evidence supporting the reliability and use of preprints both during a global pandemic and for scientific results in general,” says Casey Greene, who also took part in the work. “Examining preprint–publication pairs offers an opportunity to study the peer review process, and taken together our results should prompt a rethink of the role and importance of peer review in the current publishing system.”

"Nuestros resultados deberían provocar un replanteamiento del papel y la importancia de la revisión por pares"

“With such a large proportion of the early COVID-19 literature shared as preprints that had not been peer reviewed, it is essential to know whether or not those studies are reliable,” adds Coates. “By manually comparing preprints with their published, peer-reviewed versions, we show that more than 83% of COVID-19 preprints and 93% of non-COVID preprints are reliable and trustworthy.”

References: Tracking changes between preprint posting and journal publication during a pandemic. PLoS Biol 20(2): e3001285. https://doi.org/10.1371/journal.pbio.3001285 | Examining linguistic shifts between preprints and publications. PLoS Biol 20(2): e3001470. https://doi.org/10.1371/journal.pbio.3001470

https://www.vozpopuli.com/next/preprints-fiables-covid.html

https://greenelab.github.io/preprint-similarity-search/

Tracking changes between preprint posting and journal publication during a pandemic

Amid the Coronavirus Disease 2019 (COVID-19) pandemic, preprints in the biomedical sciences are being posted and accessed at unprecedented rates, drawing widespread attention from the general public, press, and policymakers for the first time. This phenomenon has sharpened long-standing questions about the reliability of information shared prior to journal peer review. Does the information shared in preprints typically withstand the scrutiny of peer review, or are conclusions likely to change in the version of record? We assessed preprints from bioRxiv and medRxiv that had been posted and subsequently published in a journal through April 30, 2020, representing the initial phase of the pandemic response. We utilised a combination of automatic and manual annotations to quantify how an article changed between the preprinted and published version. We found that the total number of figure panels and tables changed little between preprint and published articles. Moreover, the conclusions of 7.2% of non-COVID-19–related and 17.2% of COVID-19–related abstracts undergo a discrete change by the time of publication, but the majority of these changes do not qualitatively change the conclusions of the paper.
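The abstract above mentions automatically quantifying figure panels and tables. The following is a hypothetical sketch of one flavour such automatic annotation can take; the study's actual pipeline combined automatic and manual annotation and is more involved than this.

```python
# Hypothetical sketch: count figure-panel and table mentions in a manuscript's
# text with regular expressions. This illustrates the idea of automatic
# annotation only; it is not the study's actual pipeline.
import re

text = """Results are shown in Fig 1A and Fig 1B. Demographics are
summarised in Table 1, and sensitivity analyses in Table 2 and Fig 2."""

figures = set(re.findall(r"\bFig(?:ure)?\.?\s*(\d+[A-Z]?)", text))
tables = set(re.findall(r"\bTable\s*(\d+)", text))

print(f"figure panels referenced: {sorted(figures)}")  # ['1A', '1B', '2']
print(f"tables referenced: {sorted(tables)}")          # ['1', '2']
```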

Introduction

Global health and economic development in 2020 were overshadowed by the Coronavirus Disease 2019 (COVID-19) pandemic, which grew to over 3.2 million cases and 220,000 deaths within the first 4 months of the year [1–3]. The global health emergency created by the pandemic has demanded the production and dissemination of scientific findings at an unprecedented speed via mechanisms such as preprints, which are scientific manuscripts posted by their authors to a public server prior to the completion of journal-organised peer review [4–6]. Despite a healthy uptake of preprints by the bioscience communities in recent years [7], some concerns persist [8–10]. In particular, one such argument suggests that preprints are less reliable than peer-reviewed papers, since their conclusions may change in a subsequent version. Such concerns have been amplified during the COVID-19 pandemic, since preprints are being increasingly used to shape policy and influence public opinion via coverage in social and traditional media [11,12]. One implication of this hypothesis is that the peer review process will correct many errors and improve reproducibility, leading to significant differences between preprints and published versions.

Several studies have assessed such differences. For example, Klein and colleagues used quantitative measures of textual similarity to compare preprints from arXiv and bioRxiv with their published versions [13], concluding that papers change “very little.” Recently, Nicholson and colleagues employed document embeddings to show that preprints with greater textual changes compared with the journal versions took longer to be published and were updated more frequently [14]. However, changes in the meaning of the content may not be directly related to changes in textual characters, and vice versa (e.g., a major rearrangement of text or figures might simply represent formatting changes, while the position of a single decimal point could significantly alter conclusions). Therefore, sophisticated approaches aided or validated by manual curation are required, as employed by 2 recent studies. Using preprints and published articles, both paired and randomised, Carneiro and colleagues employed manual scoring of methods sections to find modest, but significant improvements in the quality of reporting among published journal articles [15]. Pagliaro manually examined the full text of 10 preprints in chemistry, finding only small changes in this sample [16], and Kataoka compared the full text of medRxiv randomised controlled trials (RCTs) related to COVID-19, finding in preprint versions an increased rate of spin (positive terms in the title or abstract conclusion section used to describe nonsignificant results) [17]. Bero and colleagues [18] and Oikonomidi and colleagues [19] investigated changes in conclusions reported in COVID-19–related clinical studies, finding that some preprints and journal articles differed in the outcomes reported. However, the frequency of changes in the conclusions of a more general sample of preprints remained an open question. We sought to identify an approach that would detect such changes effectively and without compromising on sample size. We divided our analysis between COVID-19 and non-COVID-19 preprints, as extenuating circumstances such as expedited peer review and increased attention [11] may impact research related to the pandemic.
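Quantitative textual similarity of the kind Klein and colleagues used can be as simple as a character-level diff ratio. A minimal sketch using the Ratcliff/Obershelp gestalt algorithm (reference [24] below), as implemented in Python's standard difflib, with invented example strings:

```python
# Minimal sketch: a character-level similarity ratio between two versions of a
# sentence, using the Ratcliff/Obershelp gestalt algorithm (reference [24])
# as implemented in Python's standard difflib. The example strings are invented.
from difflib import SequenceMatcher

preprint = "The drug reduced viral load significantly in treated patients."
published = "The drug reduced viral load significantly in all treated patients."

ratio = SequenceMatcher(None, preprint, published).ratio()
print(f"similarity: {ratio:.3f}")  # ~0.97, i.e., a minor textual change
```

As the caveat above notes, a high character-level ratio does not guarantee unchanged meaning: moving a single decimal point barely moves the ratio, which is why manual curation was needed.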

To investigate how preprints have changed upon publication, we compared abstracts, figures, and tables of bioRxiv and medRxiv preprints with their published counterparts to determine the degree to which the top-line results and conclusions differed between versions. In a detailed analysis of abstracts, we found that most scientific articles undergo minor changes without altering the main conclusions. While this finding should provide confidence in the utility of preprints as a way of rapidly communicating scientific findings that will largely stand the test of time, the value of subsequent manuscript development, including peer review, is underscored by the 7.2% of non-COVID-19–related and 17.2% of COVID-19–related preprints with major changes to their conclusions upon publication.

Results

COVID-19 preprints were rapidly published during the early phase of the pandemic

The COVID-19 pandemic has spread quickly across the globe, reaching over 3.2 million cases worldwide within 4 months of the first reported case [1]. The scientific community responded concomitantly, publishing over 16,000 articles relating to COVID-19 within 4 months [11]. A large proportion of these articles (>6,000) were manuscripts hosted on preprint servers. Following this steep increase in the posting of COVID-19 research, traditional publishers adopted new policies to support the ongoing public health emergency response efforts, including efforts to fast-track peer review of COVID-19 manuscripts (e.g., eLife [20]). At the time of our data collection in May 2020, 4.0% of COVID-19 preprints had been published by the end of April, compared to 3.0% of non-COVID-19 preprints, a significant association between COVID-19 status (COVID-19 or non-COVID-19 preprint) and published status (chi-squared test; χ2 = 6.78, df = 1, p = 0.009, n = 14,812) (Fig 1A). When broken down by server, 5.3% of COVID-19 preprints hosted on bioRxiv were published compared to 3.6% of those hosted on medRxiv (S1A Fig). However, a greater absolute number of medRxiv versus bioRxiv COVID-19 preprints (71 versus 30) were included in our sample for detailed analysis of text changes (see Methods), most likely a reflection of the different focal topics between the 2 servers (medRxiv has a greater emphasis on medical and epidemiological preprints).
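For readers unfamiliar with the test reported above, a chi-squared test of independence on the 2×2 table of COVID-19 status versus published status takes a few lines. The cell counts below are hypothetical, chosen only to be consistent with the reported rates and total n; they will not reproduce the paper's exact χ2 = 6.78.

```python
# Sketch of the 2x2 chi-squared test behind the reported association between
# COVID-19 status and published status. The cell counts are hypothetical,
# chosen only to match the reported rates (4.0% vs 3.0%) and n = 14,812;
# they will not reproduce the paper's exact chi2 = 6.78.
from scipy.stats import chi2_contingency

#            published  not published
table = [[180, 4320],     # COVID-19 preprints: 180/4500 = 4.0% published
         [309, 10003]]    # non-COVID preprints: 309/10312 ~ 3.0% published

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```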

References

  1. WHO. COVID-19 situation report 19. 2020 Aug 2 [cited 2020 May 13]. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200501-covid-19-sitrep.pdf
  2. Zhu N, Zhang D, Wang W, Li X, Yang B, Song J, et al. A Novel Coronavirus from Patients with Pneumonia in China, 2019. N Engl J Med. 2020;382:727–33. pmid:31978945
  3. Coronaviridae Study Group of the International Committee on Taxonomy of Viruses. The species Severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2. Nat Microbiol. 2020;5:536–44. pmid:32123347
  4. Sever R, Roeder T, Hindle S, Sussman L, Black K-J, Argentine J, et al. bioRxiv: the preprint server for biology. bioRxiv. 2019:833400.
  5. Kaiser J. bioRxiv at 1 year: A promising start. In: Science | AAAS [Internet]. 2014 Nov 11 [cited 2020 May 13]. https://www.sciencemag.org/news/2014/11/biorxiv-1-year-promising-start
  6. Rawlinson C, Bloom T. New preprint server for medical research. BMJ. 2019;365. pmid:31167753
  7. Abdill RJ, Blekhman R. Tracking the popularity and outcomes of all bioRxiv preprints. Pewsey E, Rodgers P, Greene CS, editors. Elife. 2019;8:e45133. pmid:31017570
  8. Bagdasarian N, Cross GB, Fisher D. Rapid publications risk the integrity of science in the era of COVID-19. BMC Med. 2020;18:192. pmid:32586327
  9. Majumder MS, Mandl KD. Early in the epidemic: impact of preprints on global discourse about COVID-19 transmissibility. Lancet Glob Health. 2020. pmid:32220289
  10. Sheldon T. Preprints could promote confusion and distortion. Nature. 2018;559:445–6. pmid:30042547
  11. Fraser N, Brierley L, Dey G, Polka JK, Pálfy M, Nanni F, et al. The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLoS Biol. 2021;19:e3000959. pmid:33798194
  12. Adie E. COVID-19-policy dataset. 2020.
  13. Klein M, Broadwell P, Farb SE, Grappone T. Comparing published scientific journal articles to their pre-print versions. Int J Digit Libr. 2019;20:335–50.
  14. Nicholson DN, Rubinetti V, Hu D, Thielk M, Hunter LE, Greene CS. Linguistic Analysis of the bioRxiv Preprint Landscape. bioRxiv. 2021;2021.03.04.433874.
  15. Carneiro CFD, Queiroz VGS, Moulin TC, Carvalho CAM, Haas CB, Rayêe D, et al. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Res Integr Peer Rev. 2020;5:16. pmid:33292815
  16. Pagliaro M. Preprints in Chemistry: An Exploratory Analysis of Differences with Journal Articles. Preprints. 2020.
  17. Kataoka Y, Oide S, Ariie T, Tsujimoto Y, Furukawa TA. COVID-19 randomized controlled trials in medRxiv and PubMed. Eur J Intern Med. 2020;81:97–9. pmid:32981802
  18. Bero L, Lawrence R, Leslie L, Chiu K, McDonald S, Page MJ, et al. Cross-sectional study of preprints and final journal publications from COVID-19 studies: discrepancies in results reporting and spin in interpretation. BMJ Open. 2021;11:e051821. pmid:34272226
  19. Oikonomidi T, Boutron I, Pierre O, Cabanac G, Ravaud P, the COVID-19 NMA Consortium. Changes in evidence for studies assessing interventions for COVID-19 reported in preprints: meta-research study. BMC Med. 2020;18:402. pmid:33334338
  20. Eisen MB, Akhmanova A, Behrens TE, Weigel D. Publishing in the time of COVID-19. Elife. 2020;9:e57162. pmid:32209226
  21. Horbach SPJM. Pandemic publishing: Medical journals strongly speed up their publication process for COVID-19. Quant Sci Stud. 2020;1:1056–67.
  22. Lee C, Yang T, Inchoco G, Jones GM, Satyanarayan A. Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online. Proc 2021 CHI Conf Hum Factors Comput Syst. 2021:1–18.
  23. Vale RD. Accelerating scientific publication in biology. Proc Natl Acad Sci U S A. 2015;112:13439–46. pmid:26508643
  24. Ratcliff JW. Pattern Matching: the Gestalt Approach. In: Dr. Dobb's [Internet]. 1988 Jul 1 [cited 2021 Feb 15]. http://www.drdobbs.com/database/pattern-matching-the-gestalt-approach/184407970
  25. Malički M, Costello J, Alperin JP, Maggio LA. From amazing work to I beg to differ—analysis of bioRxiv preprints that received one public comment till September 2019. bioRxiv. 2020.
  26. Horbach SPJM. No time for that now! Qualitative changes in manuscript peer review during the Covid-19 pandemic. Res Eval. 2021 [cited 2021 Feb 17].
  27. Sumner JQ, Haynes L, Nathan S, Hudson-Vitale C, McIntosh LD. Reproducibility and reporting practices in COVID-19 preprint manuscripts. medRxiv. 2020.
  28. Klein M, de Sompel HV, Sanderson R, Shankar H, Balakireva L, Zhou K, et al. Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot. PLoS ONE. 2014;9:e115253. pmid:25541969
  29. Besançon L, Peiffer-Smadja N, Segalas C, Jiang H, Masuzzo P, Smout C, et al. Open Science Saves Lives: Lessons from the COVID-19 Pandemic. bioRxiv. 2020.
  30. Ding Y, Zhang G, Chambers T, Song M, Wang X, Zhai C. Content-based citation analysis: The next generation of citation analysis. J Assoc Inf Sci Technol. 2014;65:1820–1833.
  31. Paul M, Girju R. Topic Modeling of Research Fields: An Interdisciplinary Perspective. Proceedings of the International Conference RANLP-2009. Borovets, Bulgaria: Association for Computational Linguistics; 2009. p. 337–342. https://www.aclweb.org/anthology/R09-1061
  32. Knoth P, Herrmannova D. Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication's Contribution. D-Lib Mag. 2014;20. Available from: http://oro.open.ac.uk/42527/
  33. Wadden D, Lin S, Lo K, Wang LL, van Zuylen M, Cohan A, et al. Fact or Fiction: Verifying Scientific Claims. arXiv:2004.14974 [cs]. 2020 [cited 2021 Feb 9]. http://arxiv.org/abs/2004.14974
  34. Stab C, Kirschner C, Eckle-Kohler J, Gurevych I. Argumentation Mining in Persuasive Essays and Scientific Articles from the Discourse Structure Perspective. In: Cabrio E, Villata S, Wyner A, editors. Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing. Bertinoro, Italy: CEUR-WS; 2014. http://ceur-ws.org/Vol-1341/paper5.pdf
  35. Bronner A, Monz C. User Edits Classification Using Document Revision Histories. Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Avignon, France: Association for Computational Linguistics; 2012. p. 356–366. https://www.aclweb.org/anthology/E12-1036
  36. Schiermeier Q. Initiative pushes to make journal abstracts free to read in one place. Nature. 2020 [cited 2021 Feb 9]. pmid:33046879
  37. Le Q, Mikolov T. Distributed Representations of Sentences and Documents. International Conference on Machine Learning. PMLR; 2014. p. 1188–1196. http://proceedings.mlr.press/v32/le14.html
  38. Chamberlain S, Zhu H, Jahn N, Boettiger C, Ram K. rcrossref: Client for Various “CrossRef” “APIs.” 2020. https://CRAN.R-project.org/package=rcrossref
  39. Agirre E, Bos J, Diab M, Manandhar S, Marton Y, Yuret D, editors. *SEM 2012: The First Joint Conference on Lexical and Computational Semantics—Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012). Montréal, Canada: Association for Computational Linguistics; 2012. Available from: https://aclanthology.org/S12-1000
  40. Venables WN, Ripley BD. Modern Applied Statistics with S. 4th ed. New York: Springer-Verlag; 2002.
  41. Fox J, Weisberg S. An R Companion to Applied Regression. Thousand Oaks, CA: Sage; 2019. https://socialsciences.mcmaster.ca/jfox/Books/Companion/
  42. Fraser N, Kramer B. covid19_preprints. 2020.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001285

 

 

 
