
Science Limitations

By Arsène Kanyamibwa - September 19, 2020

“Not only is it important to ask questions and find the answers, as a scientist I felt obligated to communicate with the world what we were learning” ―Stephen Hawking

In 2017, Pierre Chirsen, one of the founders of Indésciences, declared while trying to define science popularization: “It’s a term that we have long been afraid to use. We try to avoid using it because it doesn’t really correspond to what we want to transmit. Research can be ‘straightforward’. Research is for everyone, that’s a fact. Sometimes you just have to slightly adjust the discourse so that it’s for everyone. It’s not a desire to make something extra playful by discrediting it. We want to stick to a scientific rigor we’re very attached to. Making scientific discourse accessible while maintaining rigor is what we are trying to do” (“Vulgariser sans vulgarité” 2017). I would venture that this statement perfectly sums up the ambition of magazines, blogs, TV and radio programs, conference organizers, and everyone else involved in scientific mediation.

Accurately transmitting the information produced by researchers

However, today more than ever, there is a lack of understanding of scientific issues and of epistemic knowledge about science (Scheufele 2013; Scheufele and Krause 2019). This gap includes public perceptions of scientific uncertainty, media promotion techniques, and the distance between what research says and what the public hears (National Academies of Sciences, Engineering, and Medicine 2017; Scheufele and Krause 2019; Schmidt 2009), compounded by the multiplication of sources and organizations involved (Bubela et al. 2009). All of this shows that achieving the ultimate goal of civic education (Schmidt 2009) is no easy task. The danger and explosion of “fake news” have even prompted researchers to study the phenomenon at an accelerated pace since 2016 (Alonso García et al. 2020). Moreover, research has shown that insufficient communication between the scientific sphere and the public can significantly hinder social and political debates (Bubela et al. 2009; National Academies of Sciences, Engineering, and Medicine 2017; Scheufele 2013; Scheufele and Krause 2019; Schmidt 2009), and may even increase the prevalence of certain conditions such as cancer (Johnson et al. 2018). One might assume that good science outreach practices, education, quality, and development depend solely on science-trained individuals. The progress we see around us every day can lead us to believe, often erroneously, that if scientists maintain this passion, perseverance, and dedication, it is because they are “limitless” and, by extension, so is science. In this article, I will present some of the elements that limit the legitimate dissemination of science to the general public and briefly suggest some avenues we need to address.

Study limitations 

The preferred means of communicating scientific or technological advancement is the scientific article (Crookes 1986; Peterson 1961). Initially created to disseminate scientific discoveries among researchers, such articles have become open to the public and are now a major source of refutable information with a considerable impact on our decision-making in all aspects of life (Hoogenboom and Manske 2012; Nahata 2008). As a result, to limit confusion, they have a very rigid form and publisher-specific guidelines that must be followed (Bates College 2011; Hoogenboom and Manske 2012; Kenyon College n.d.; Mack 2018; Nahata 2008). The abstract, a general summary of the research project, is followed by the introduction, which sets the context and purpose of the study. The materials and methods then explain in detail how the study was carried out, and the results section objectively presents the outcomes of the experiments. The discussion interprets these results and places them in a broader context. Finally, the conclusion summarizes the findings and their relevance and proposes future directions, followed by the references used. In order to avoid any misinterpretation, it is essential, in each study, to clarify the importance of the contribution to science and especially its limits, to ensure total transparency without harming the research. To this end, there is a very important paragraph in the discussion section that scientists may sometimes be a little reluctant to share (“How To Present Study Limitations and Alternatives” 2018; Puhan et al. 2012; Ross and Bibler Zaidi 2019; ter Riet et al. 2013; Tigre Moura 2017). This section, entitled “Study Limitations,” refers to any issue that impedes a study and its results (Ross and Bibler Zaidi 2019; Tigre Moura 2017). Regrettably, a study on biomedical research published in 2007 showed that only 17% of respondents reported limitations to their study (Ioannidis 2007).
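The canonical section order described above can be sketched as a simple check. This is only an illustration: the section names follow the generic convention described in the paragraph, not any particular publisher's guidelines, and the helper function is hypothetical.

```python
# Illustrative sketch: checking a manuscript outline against the canonical
# section order described above. Section names are generic conventions
# (assumption), not any specific journal's requirements.

CANONICAL_ORDER = [
    "abstract",
    "introduction",
    "materials and methods",
    "results",
    "discussion",
    "conclusion",
    "references",
]

def check_structure(sections):
    """Return the canonical sections missing from a manuscript outline."""
    present = {s.strip().lower() for s in sections}
    return [s for s in CANONICAL_ORDER if s not in present]

draft = ["Abstract", "Introduction", "Results", "Discussion", "References"]
print(check_structure(draft))  # → ['materials and methods', 'conclusion']
```

A draft missing its methods and conclusion is flagged immediately, which mirrors how rigid publisher templates catch structural gaps before review.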
But, as Dr. Puhan and his collaborators have noted: “we need scientists who understand and note the potential for frank and unbiased discussion of study limitations and for journals to recognize and accept such manuscripts” (Puhan et al. 2012).


Furthermore, students in scientific training are the first to learn this: when writing an article or presenting their work at the end of an internship, or even during their studies, they must be honest, precise, and extremely rigorous. This requirement is perfectly useful and necessary for the communication and progress of science between researchers, but how can this rigor be applied to communication with the general public? Quite simply, a scientist must be able to condense and simplify, in other words “popularize,” their discourse in order to get the message across. They must strip the technical terms from their speech, not only to make themselves understood but also to hold the public’s attention (Alley 2014). The introduction of magazines and television programs devoted to science outreach has eased the task of many researchers and made science more accessible by increasing its dissemination. Even so, condensing months of work or a ten-page article often carries a very real risk: oversimplification (Doumont 2010; Schmidt 2009). For several years now, some scientific-educational programs in France have included communication courses and require their students to practice this skill fairly regularly. But again, this has only recently been introduced into science curricula. Most of the people who successfully communicate scientific progress nowadays have not had this kind of training but have seized and exploited opportunities outside their usual curriculum. These people have often shown creativity, and their passion has led them to take up science outreach, such as Dr. Elodie Chabrol, director of Pint of Science France, Dr. Sébastien Carassou, or platforms like “Café des Sciences.”

Then we have journalists trained in science mediation. Having one person taught to write and communicate science eloquently to the public is an idea that has greatly increased the visibility and funding of research projects. Even so, with few exceptions, a journalist is not a scientist and a scientist is not a journalist. Researchers are certainly very grateful to magazines and newspapers dedicated to science outreach. Nevertheless, as mentioned earlier, the writing of original articles follows inflexible rules and, as a result, offers a very different version of what we see in the media columns. For those trained in the technical sciences, reading such a column can sometimes raise doubts about the scope or consequences of the research mentioned. It is at this point that checking the original article may reveal a lack of precision or an oversimplification of the researcher’s assessment. However, this problem can be solved by effective cooperation between the two parties. The journalist should communicate regularly with the scientist and submit the article to them before publication in order to agree on the final version. The researcher, for their part, must remain available, be cautious in what they say, and ensure that their team’s work is not excessively distorted or overly simplified.


Another important thing to consider when sharing research with the public is that, despite all the checks in place, scientists are human and can therefore, like everyone else, make mistakes. Whether it is personality, funding size and policies, unconscious bias, or lack of integrity, many elements can hinder the dissemination of solid and accurate information. A term that frequently recurs in discussions of scientific integrity is pseudoscience. To borrow a now-common expression, pseudoscience can be characterized as science’s fake news; the prefix “pseudo” comes from the Greek for “false.” Nonetheless, despite the practicality of the term and the general consensus around it, extreme caution must always be exercised when using it. Most importantly, scientists who have spent most of their adult lives learning to practice science ought to explain to non-experts how pseudoscience is first defined and then recognized.

The major difference lies in the ground rules established to produce rigorous scientific work. Pseudoscience is not based on the scientific method (Coker 2003). Specifically, it is often based on subjective values and starts from an attractive hypothesis that depends on the arbitrary conventions of a culture rather than on facts, in which case there is no evidence to support it (Coker 2003). That is to say, even when it respects the classic structure of an article, it lacks credible evidence based on (objective) knowledge of nature and ignores the overwhelming presence of contradictory data (Coker 2003). In theory, the various processes in place prior to the publication of an article, and especially peer review, should provide a solid barrier against pseudoscience (Marcus and Oransky 2017). Peer review exists in several forms, but in essence it allows journals and platforms to solicit two or three researchers with expertise in the field to assess the quality and validity of the paper a scientist wishes to publish. Unfortunately, this is not always enough, and journals, reviewers, and publishers alike must be vigilant in order to stop giving a forum to non-validated ideas (Marcus and Oransky 2017). A quick search will even turn up a Wikipedia list of topics that have been characterized as pseudoscience (“List of Topics Characterized as Pseudoscience” 2020). Any well-informed person can easily see that some of the topics on this list are certainly not science, and as Michael D. Gordin put it so well in his book The Pseudoscience Wars: “The more attractive science is, the more people with unorthodox ideas want to model themselves upon it, and the greater the public appetite for doctrines with the appearance of science” (Gordin 2017). Does this mean that any unorthodox or unconventional idea is necessarily wrong or fake? Absolutely not!
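The peer-review gate described above can be modeled as a toy workflow. Everything here is a simplifying assumption for illustration: real journals do not reduce manuscripts to boolean flags, and the unanimous-acceptance rule is one of several possible decision policies, not a standard.

```python
# Toy model of the peer-review gate: solicit a few expert reviewers and
# collect their verdicts. All criteria and the decision rule are
# illustrative assumptions, not any journal's actual policy.
import random

def peer_review(manuscript, reviewer_pool, n_reviewers=3):
    """Pick n reviewers at random and accept only if none rejects."""
    reviewers = random.sample(reviewer_pool, n_reviewers)
    verdicts = [review(manuscript) for review in reviewers]
    return all(v == "accept" for v in verdicts), verdicts

# Hypothetical reviewers, each checking one hallmark that separates
# science from pseudoscience per the discussion above.
def checks_evidence(ms):
    return "accept" if ms["cites_credible_evidence"] else "reject"

def checks_contradictions(ms):
    return "accept" if ms["addresses_contradictory_data"] else "reject"

def checks_method(ms):
    return "accept" if ms["follows_scientific_method"] else "reject"

pool = [checks_evidence, checks_contradictions, checks_method]

# A submission with the classic article structure but none of the substance:
pseudo = {
    "cites_credible_evidence": False,
    "addresses_contradictory_data": False,
    "follows_scientific_method": False,
}
accepted, verdicts = peer_review(pseudo, pool)
print(accepted)  # → False
```

The point of the sketch is the structural one made in the text: the form of an article is easy to imitate, so the reviewers must test the substance behind it.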
From the young Albert Einstein to less well-known names such as the 1975 Nobel Prize winners David Baltimore, Howard M. Temin, and Renato Dulbecco, the history of science offers an abundance of stories in which brilliant people had revolutionary ideas yet were mistreated and had their scientific integrity questioned. As Schopenhauer described, when an opinion becomes general, adhering to it is a duty, and people are hated when they dare to formulate their own judgments (Schopenhauer 1830). Even today there are scientists, such as Dr. Ruth Itzhaki, who conducts research on Alzheimer’s disease: although she respected scientific guidelines and procedures, the fact that she dared to stray from the norm made people question the importance and validity of her work (Begley 2019). These examples show that what matters is not our opinion of an article’s title or results, but a focus on the method used to generate those results.

However, like Michael D. Gordin, it is reasonable to assume that those who have been called pseudo-scientists (whether or not history has proven them right) may have considered themselves true scientists (Gordin 2017). This does not mean that everyone who has found themselves in this category is a marginalized and unrecognized pioneer. Unfortunately, as always, there are charlatans and ill-intentioned opportunists. This is why it is crucial to regularly double-check our sources and examine all our information. Once again, it is a matter of proper communication and the use of modern tools. First of all, the non-expert public needs the keys to assess the reliability of information and to probe the validity of any new hypothesis. With these means, everyone will be able to understand which hypotheses are widely accepted and which ideas are controversial, inaccurate, or false. It is obvious that science evolves every day and that research must be constant. But if this is done correctly, and everyone who opens a debate supports the scientific community and ultimately listens to its conclusions, personal and societal progress in this area will be the inevitable reward.

Cognitive Bias

Now, let us assume that the problems mentioned above disappear. We are still human beings, and whether we like it or not, the way we establish our beliefs is biased. It is not always our fault, but it is our responsibility to evaluate our behavior and to identify what may prevent us from doing, writing, speaking about, or disseminating science. Whether unconscious or conscious, prejudice and cognitive biases shape the way we view the world and, by extension, science.

Let’s start with cognitive biases; once again, Wikipedia has a non-exhaustive list of them (“List of Cognitive Biases” 2020). To put it plainly, it is not a question of intelligence or education, but simply of how our brains function to allow us to perform and think clearly on a daily basis (Benson 2019). You may find it absurd that the driver of the car that is your body deliberately takes easy but “unreasonable” turns to reach its destination, but if you are driving and your GPS tells you there is a shortcut, won’t you follow it? You will probably save time and energy, and that is exactly what your brain does. According to author Buster Benson (Benson 2019), there are four main problems that cognitive biases help solve, and here is how he listed them:

  1. Too much information
    Your brain picks the information that is likely to be most useful, meaning: we notice things that are already primed in memory or repeated often; bizarre, funny, visually striking, or anthropomorphic things stick out more than non-bizarre, unfunny things; we notice things that have changed; we notice details that confirm our existing beliefs; and we notice flaws in others more easily than flaws in ourselves.
  2. Not enough meaning
    The world is very confusing and we need to make sense of things, meaning: we find stories and patterns even in sparse data; we fill in characteristics from stereotypes, generalities, and prior histories whenever there are new specific instances or gaps in information; we simplify probabilities and numbers to make them easier to think about; we think we know what others are thinking; and we project our current mindset and assumptions onto the past and future.
  3. Need to act fast
    We are limited by time and information, and we can’t let that paralyze us, meaning: in order to act, stay focused, avoid mistakes, and get things done, we need to be confident in our ability to make an impact and to feel that what we do is important; we favor the immediate, relatable thing in front of us over the delayed and distant; we are motivated to complete things we have already invested time and energy in; we are motivated to preserve our autonomy and status in a group and to avoid irreversible decisions; and we favor options that appear simple or have more complete information over more complex, ambiguous options.
  4. What should we remember?
    Once again, the plethora of information around us is too much to remember, so we keep the most useful bits, meaning: we edit and reinforce some memories after the fact; we discard specifics to form generalities; we reduce events and lists to their key elements; and we store memories differently based on how they were experienced.

Can you imagine what these biases do to the information we consume? In research, despite all the checks and balances in place, bias can very often and unconsciously influence the way research is conducted. Psychologist Brian Nosek has dedicated his career to this issue in order to create a more transparent science, or “scientific utopia” (Ball 2015). Nosek and his colleague Jeff Spies created the Open Science Framework, which essentially allows scientists to submit a plan of their work prior to its completion (Stage 1). They then develop their respective projects and, when the results arrive, compare them with the previously submitted hypotheses (Stage 2) (Ball 2015). This approach limits the presentation of unexpected results as if they had been predicted and “forces” scientists to respect their initial plan. It avoids bias in the scientist’s reasoning, since they cannot reshape the project by analyzing everything differently to fit their original idea (Ball 2015). Is this the ultimate solution? Nosek states that for the time being no such assertion can be made (Ball 2015), although the approach has other advantages. And even if “open” science has not yet convinced everyone (Hocquet 2018; Swan 2017), the European Commission’s initiative to launch a new open science platform, “Open Research Europe,” shows that its development is on the right track.

The Registered Reports process. Note: figure taken from the Center for Open Science Registered Reports website.
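The two-stage logic behind preregistration can be sketched in a few lines. To be clear, this is my own illustration of the idea, not how the Open Science Framework actually stores preregistrations; the hashing trick simply makes the "frozen plan" property concrete: any silent change to the Stage 1 plan becomes detectable at Stage 2.

```python
# Illustrative sketch of the two-stage registered-report idea.
# The hash-based "freezing" is the author's illustration (assumption),
# not the Open Science Framework's actual mechanism.
import hashlib
import json

def register_plan(hypotheses, methods):
    """Stage 1: submit hypotheses and methods before data collection.
    The digest fingerprints the plan so later changes are detectable."""
    plan = {"hypotheses": hypotheses, "methods": methods}
    serialized = json.dumps(plan, sort_keys=True).encode()
    return plan, hashlib.sha256(serialized).hexdigest()

def verify_plan(plan, digest):
    """Stage 2: before comparing results, confirm the plan is unchanged."""
    serialized = json.dumps(plan, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest() == digest

# Hypothetical study used purely as an example:
plan, digest = register_plan(
    hypotheses=["Intervention X improves outcome Y"],
    methods={"design": "randomized controlled trial", "n": 120},
)

print(verify_plan(plan, digest))      # → True: the untouched plan passes
plan["hypotheses"].append("post-hoc idea")  # quietly adding a hypothesis...
print(verify_plan(plan, digest))      # → False: ...is caught at Stage 2
```

This is exactly the constraint the text describes: once the hypotheses are on record, results must be compared against them as submitted, not against a retrofitted story.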

Lastly, I would like to briefly discuss attentional bias. This type of bias is part of cognitive bias, but it is worth saying a little more about its impact. “Attention is the cognitive process of selectively concentrating on one thing while ignoring other things”; it refers to all the mechanisms by which the brain selects information, amplifies it, channels it, and deepens its processing (“Attention” 2020; Dehaene 2020). The phrase “selects information” implies a process in which our mind favors some of the information it receives, focuses on it, and eventually stores it. When we focus on intellectual tasks in this way, we become blind to other stimuli and reduce the impact of information deemed irrelevant (Dehaene 2020). The selectivity behind attentional bias can thus be useful, as when driving a car. In other cases, it can prevent us from collecting essential information, as when we concentrate on our phone while crossing a road. When it comes to science, such information, even when it contradicts what we think we know, is necessary to arrange the puzzle of our beliefs and experiences.

However, as the current crisis illustrates, we cannot afford misinformation and, above all, we cannot afford to limit the progress and scope of science: first as individuals, scientists and non-scientists alike, and second as a society. We must constantly and respectfully question ideas, methods, and people. We must take responsibility and consciously challenge our prejudices, accept the judgment and open discussion of our work, and learn to evaluate and interpret the work of others. Only under these circumstances can we hope to avoid oversimplification, discuss a study’s limitations productively, weaken the influence and visibility of pseudoscience, and lessen the impact of bias, for a science “without limits.”

For more information on:

  1. The Story of the 1975 Nobel Prize Winners: Listen to this podcast by Malcolm Gladwell.
  2. Dr. Ruth Itzhaki: You can view the article she struggled to publish here and her subsequent results here.
  3. Brian Nosek and Jeff Spies and their platform read Nosek’s article on psychological science & the piece on  along with their strategic plan.
  4. Michael D. Gordin and his book and other related works, see The University of Chicago Press
  5. The importance of presentation/publication of negative results in science, see the article by D.Mehta Nature Careers.
  6. The peer-review process, see Elsevier.
  7. Open science, see the Business Science report.
  8. The war against misinformation during the COVID19 crisis, you can refer to this article.


  1. Alley, Michael. 2014. “The Craft of Scientific Presentations: Critical Steps to Succeed and Critical Errors to Avoid.” Choice Reviews Online 51 (05): 51-2411-51–2411.
  2. Alonso García, Santiago, Gerardo Gómez García, Mariano Sanz Prieto, Antonio José Moreno Guerrero, and Carmen Rodríguez Jiménez. 2020. “The Impact of Term Fake News on the Scientific Community. Scientific Performance and Mapping in Web of Science.” Social Sciences 9 (5): 73.
  3. “Attention.” 2020. ScienceDaily. March 28, 2020.
  4. Ball, Philip. 2015. “The Trouble With Scientists.” Nautilus. May 14, 2015.
  5. Bates College. 2011. “How to Write Guide: Sections of the Paper.” July 3, 2011.
  6. Begley, Sharon. 2019. “How an Alzheimer’s ‘Cabal’ Thwarted Progress toward a Cure.” June 25, 2019.
  7. Benson, Buster. 2019. “Cognitive Bias Cheat Sheet.” Medium. October 11, 2019.
  8. Bubela, Tania, Matthew C. Nisbet, Rick Borchelt, Fern Brunger, Cristine Critchley, Edna Einsiedel, Gail Geller, et al. 2009. “Science Communication Reconsidered.” Nature Biotechnology 27 (6): 514–18.
  9. Coker, Rory. 2003. “Différence entre la science et la pseudoscience.” December 31, 2003.
  10. Crookes, Graham. 1986. “Towards a Validated Analysis of Scientific Text Structure.” Applied Linguistics 7 (1): 57–70.
  11. Dehaene, Stanislas. 2020. “How We Pay Attention Changes the Very Shape of Our Brains.” Literary  Hub (blog). January 30, 2020.
  12. Doumont, Jean-Luc. 2010. “English Communication for Scientists: Unit 2.1 Scientific Papers.” 2010.
  13. Gordin, Michael D. 2017. “The Problem with Pseudoscience.” EMBO Reports 18 (9): 1482–85.
  14. Hocquet, Alexandre. 2018. “Débat : L’« open Science », Une Expression Floue et Ambiguë.” The Conversation. 2018.
  15. Hoogenboom, Barbara J., and Robert C. Manske. 2012. “How to Write a Scientific Article.” International Journal of Sports Physical Therapy 7 (5): 512–17.
  16. “How To Present Study Limitations and Alternatives.” 2018. Wordvice (blog). November 4, 2018.
  17. Ioannidis, John P. A. 2007. “Limitations Are Not Properly Acknowledged in the Scientific Literature.” Journal of Clinical Epidemiology 60 (4): 324–29.
  18. Johnson, Skyler B., Henry S. Park, Cary P. Gross, and James B. Yu. 2018. “Use of Alternative Medicine for Cancer and Its Impact on Survival.” JNCI: Journal of the National Cancer Institute 110 (1): 121–24.
  19. Kenyon College. n.d. “Sections of a Paper: Structure of a Scientific Paper.” Accessed June 1, 2020.
  20. “List of Cognitive Biases.” 2020. In Wikipedia.
  21. “List of Topics Characterized as Pseudoscience.” 2020. In Wikipedia.
  22. Mack, Chris A. 2018. How to Write a Good Scientific Paper. Bellingham, Washington: SPIE Press.
  23. Marcus, Adam, and Ivan Oransky. 2017. “Why Garbage Science Gets Published.” December 7, 2017.
  24. Nahata, Milap C. 2008. “Tips for Writing and Publishing an Article.” Annals of Pharmacotherapy 42 (2): 273–77.
  25. National Academies of Sciences, Engineering, and Medicine. 2017. Communicating Science Effectively: A Research Agenda. National Academies Press (US).
  26. Peterson, Martin S. 1961. “Scientific Thinking and Scientific Writing.” Soil Science 92 (5): 357.
  27. Puhan, Milo A, Elie A Akl, Dianne Bryant, Feng Xie, Giovanni Apolone, and Gerben ter Riet. 2012. “Discussing Study Limitations in Reports of Biomedical Studies- the Need for More Transparency.” Health and Quality of Life Outcomes 10 (February): 23.
  28. Riet, Gerben ter, Paula Chesley, Alan G. Gross, Lara Siebeling, Patrick Muggensturm, Nadine Heller, Martin Umbehr, et al. 2013. “All That Glitters Isn’t Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLoS ONE 8 (11).
  29. Ross, Paula T., and Nikki L. Bibler Zaidi. 2019. “Limited by Our Limitations.” Perspectives on Medical Education 8 (4): 261–64.
  30. Scheufele, Dietram A. 2013. “Communicating Science in Social Settings.” Proceedings of the National Academy of Sciences 110 (Supplement 3): 14040–47.
  31. Scheufele, Dietram A., and Nicole M. Krause. 2019. “Science Audiences, Misinformation, and Fake News.” Proceedings of the National Academy of Sciences 116 (16): 7662–69.
  32. Schmidt, Charles W. 2009. “Communication Gap: The Disconnect Between What Scientists Say and What the Public Hears.” Environmental Health Perspectives 117 (12): A548–51.
  33. Schopenhauer, Arthur. 1830. “L’art d’avoir Toujours Raison.”
  34. Swan, Alma. 2017. “Open Access and the Progress of Science.” American Scientist. February 6, 2017.
  35. Tigre Moura, Francisco. 2017. “Don’t Worry! And Write the LIMITATIONS of Your Research!” July 25, 2017.
  36. “Vulgariser sans vulgarité.” 2017. July 11, 2017.
