130 A COMPARISON OF THE QUALITY OF DATA GRAPHS IN INITIAL SUBMISSIONS AND PUBLISHED VERSIONS OF RANDOMIZED CONTROLLED TRIALS
  1. R. Sinha,
  2. P. Liu,
  3. D. Schriger
  1. UCLA Emergency Medicine Center, UCLA School of Medicine, Los Angeles, CA

Abstract

Background Graphic displays in scientific manuscripts convey complex relationships not easily described in text. They are the ideal format for showing by-subject data, and they provide far more information than a summary statistic or p value. Digital imaging and computer software have simplified the creation of graphs and decreased the cost of publishing them. While the beneficial effects of peer review on the quality of prose have been documented, it is not known to what extent peer review improves the quality of tables and figures submitted for publication. Access to randomized controlled trial (RCT) manuscripts submitted to the British Medical Journal (BMJ) allows us to compare each submitted version with the subsequently published version. The objective of this study was to characterize the quantity and quality of data graphs in RCTs submitted to the BMJ and published in peer-reviewed journals. An additional objective was to compare the quality of submitted and published data graphs and to investigate to what extent the peer review process affected quality.

Methods We reviewed the data graphs in a cohort of RCTs submitted to the BMJ during 2001 using a validated scoring form. Information was collected on the type of graph, use of advanced features, clarity of each graph, and discrepancies within the graph. The quality of submitted figures was compared with that of published figures. We also reviewed the BMJ reviewer and editor comments for each RCT to investigate the extent of peer review.

Results There were 41 graphs in the 62 published RCTs (35 with no graphs, 18 with 1 graph, 4 with 2 graphs, 5 with 3 graphs). Despite the ability of graphs to convey complex information, 56% of RCTs submitted to the BMJ had no data graphs. Although graphs had few internal errors or contradictions and were visually clear, 72% were simple univariate displays. Graphs failed to portray by-subject data and rarely displayed advanced features such as pairing, symbolic dimensionality, or small multiples. These graphs did not depict data at an appropriate level of detail and did not meet their data presentation potential.

Conclusion There was no evidence that the peer review process attempted to improve these graphs, despite ample room for improvement.
