Topic outline

  • HOME




    Welcome to A Framework for Scientific Papers. 


The overall goal of "A Framework for Scientific Papers" (AFSP) is to support curricula that use hypothesis-testing and written communication to learn scientific methods and content. The module is based on three general hypotheses:


    1) Using a clear hypothesis-testing framework is a powerful method for scientific discovery (Shavelson and Towne, 2002).

2) Using available frameworks for both deductive and inductive reasoning can help to create and defend scientific models (Platt, 1964; Hill, 1965).

    3) Using a single framework (hypothesis testing) can simplify the process of writing all sections of scientific papers (Introduction, Methods, Results and Discussion).


    The AFSP module builds on principles introduced in the "Reasoned Writing" module. AFSP focuses on two specific objectives:


    A) To examine how to use reasoning to develop and defend specific hypotheses. The AFSP module uses distinct terminology to reduce confusion between two important roles for scientific hypotheses: (1) as generalized models that explain natural phenomena; and (2) as measurable predictions that can be experimentally tested. 


    B) To help use hypothesis testing as a consistent framework to simplify the process of writing specific and persuasive scientific papers. 


    The AFSP module begins with a brief discussion of what hypotheses are, and why hypotheses provide a useful framework for papers. Most of AFSP explains one hypothesis-centered, reasoned framework to simplify writing papers in the Introduction, Methods, Results and Discussion (IMRaD) format. The module ends with some brief thoughts about how to increase the impact of scientific communication. To continue, follow the links to each area: 

HYPOTHESES

    STRUCTURING SCIENTIFIC PAPERS

    INCREASING IMPACT


    For those interested in examples of how the Reasoned Writing / A Framework For Scientific Papers modules could be used to help structure coursework, below are some examples:


    EXAMPLES



    Reasoned Writing / A Framework for Scientific Papers

    © 2018, Devin Jindrich

    All rights reserved.

  • PREFACE


    Why "A Framework for Scientific Papers?"

    "A Framework for Scientific Papers?" is intended to help students use one framework to structure scientific papers.


    There are many ways to do science. One of the strengths of science is that each individual scientist makes different contributions in different ways. Curiosity-based research widens the scope of science to the bounds of the human imagination, and provides vital opportunities for serendipity and opportunism. Moreover, the diversity of science contributes to its evolution. Scientific selection acting on diverse ideas helps to adapt science to the current social, factual, and technological environment. Therefore, it would be impossible and misguided to try to fit science into any single process or formula.


    However, norms are also important to science. For example, the peer-review process depends on common expectations for methodology and rigor. Scientific norms include statistical conventions that span many scientific fields. Norms can also be confined to specific practices or fields. Across science and within scientific disciplines, some shared acknowledgement of current best practices is essential for advancing research while maintaining reasonable expectations of quality. Common practices in science provide an opportunity to help students practice skills and approaches that they can transfer to many scientific and professional contexts.


    "A Framework for Scientific Papers" (AFSP) makes an effort to present the most uncontroversial but useful principles for science practice and communication possible. For example, the Aristotelian logic adopted in "Reasoned Writing" and applied in AFSP is time-honored and ubiquitous. AFSP does not seek to do justice to the history, or current debates in the philosophy of science and scientific reasoning (e.g. Giere, 2001). AFSP also does not seek to address different statistical frameworks that are available for designing experiments and interpreting data (Taper and Ponciano, 2016). Instead, AFSP seeks to present basic principles of reasoning and interpreting statistical tests using available and commonly used methods (e.g. t-tests, ANOVA, etc.).


    Most of the recommendations in AFSP generally match the many "guides" for scientific writing available to students. My hope in presenting more normative material is to provide brief, practical advice that can be understood in the context of the reasoning principles introduced in "Reasoned Writing." Therefore, students may be able to gain a deeper understanding of some of the reasons for current scientific practices.


    However, some of the instructional approaches that I have found most useful for helping students improve scientific reasoning and writing deviate somewhat from convention. Three important areas where AFSP may seem (or be) different from other sources of guidance are:




    1) A focus on quantitative, hypothesis-driven research

    In the interests of simplicity and brevity, AFSP focuses on one framework for scientific papers. The goal of the AFSP module is NOT to survey many kinds of research methods. The goal of AFSP is to present one framework that is specific enough to help students structure scientific papers when time and effort are constrained. 


    The National Research Council identified six guiding principles that underlie scientific inquiry (Shavelson and Towne, 2002):

    1) Pose Significant Questions That Can Be Investigated Empirically.

    2) Link Research to Relevant Theory.

    3) Use Methods That Permit Direct Investigation of the Question.

    4) Provide a Coherent and Explicit Chain of Reasoning.

    5) Replicate and Generalize Across Studies.

    6) Disclose Research to Encourage Professional Scrutiny and Critique.


    Quantitative, hypothesis-driven research satisfies all six principles identified by the NRC, and is therefore an appropriate framework for student inquiry. Hypothesis-driven research is also an important component of the Scientific and Engineering Practices identified by the National Research Council (NRC, 2012).




    2) Distinction between "General" and "Measurable" Hypotheses


    The term "hypothesis" is used in many ways. On one hand, "hypothesis" is used for very general models of the world or universe (Giere, 2001Lovelock and Margulis, 1974). On the other hand, hypotheses refer to specific predictions that can be statistically tested (Sokal and Rohlf, 1987). In my experience, many students are legitimately confused about the many types of hypotheses that they encounter. 


    When asked to develop a hypothesis, many students write vague statements somewhere between general models and testable predictions. I hypothesize that confusion about the different roles of hypotheses may contribute to the difficulties that students have with developing specific hypotheses. Therefore, AFSP establishes a dichotomy: dividing hypotheses into two separate types that reflect two important roles for hypotheses:


    1) "General Hypotheses," that are specific, explanatory models for natural phenomena, and 

    2) "Measurable hypotheses," that are specific, testable predictions of the outcomes of individual experiments.


The General/Measurable dichotomy supports the application of the reasoning principles reviewed in "Reasoned Writing." General Hypotheses are typically tested (supported or rejected) by many experimental studies through induction (although Strong Inference can also be used to test General Hypotheses; Platt, 1964). In contrast, Measurable Hypotheses are typically rejected using data from a specific experiment through deduction. Similarly, developing and testing General Hypotheses contributes to evaluation skills, whereas developing and testing Measurable Hypotheses contributes to analytical skills (Bloom et al., 1956).


    Using two distinct forms of hypotheses also helps to explicitly structure the very different sections of scientific papers around the common framework of hypotheses. The Introduction and Discussion primarily focus on General Hypotheses, whereas the Methods and Results are structured around Measurable hypotheses.


Practically, the dichotomy of General vs. Measurable Hypotheses is useful for helping students write simply and specifically. In my experience, students start developing hypotheses with statements that are closer to General than to Measurable Hypotheses. Distinguishing between General and Measurable Hypotheses helps to keep feedback positive and constructive: students can be commended for making a strong start towards a General Hypothesis, then encouraged to create separate, testable Measurable Hypotheses.


Therefore, AFSP distinguishes between General and Measurable Hypotheses even though I readily acknowledge that the explicit distinction and terminology are NOT common in scientific papers. In scientific papers, the roles of hypotheses are often indicated contextually. However, nuanced scientific context is often difficult for students to understand. Using distinct terminology for different types of hypotheses is a simple approach to help students develop useful hypotheses.


    The primary goal of AFSP is not to help students write publishable scientific papers, but to understand some of the fundamental elements of scientific reasoning and writing. Therefore, I consider some departure from common scientific conventions to be justified. Clearly-written papers with the General/Measurable hypothesis distinction could easily be revised to submit for publication if desired.




    3) Using a reasoned framework to structure the Methods and Results sections


    Books and resources on scientific writing may provide different guidance for writing different sections of a scientific paper. For example, writers may be encouraged to answer the question "Why was the problem studied?" in the Introduction, "How was the problem studied?" in the Methods, "What were the findings?" in the Results and "What do the findings mean?" in the Discussion (Bolt and Bruins, 2012). I hypothesize that structuring different sections around very general questions that are different for each section is confusing for students.


    Therefore, AFSP specifies a single type of question ("Why") and a single overall goal (hypothesis testing) for each section of the paper. The questions posed by AFSP are:


    INTRODUCTION: WHY does an important GAP in current scientific understanding lead reasonably to the General and Measurable hypotheses?


    METHODS: WHY are the chosen methods necessary and appropriate to test the Measurable Hypotheses?


    RESULTS: WHY do the data lead to the conclusion to reject or support each Measurable Hypothesis?


    DISCUSSION: WHY do the results (i.e. the conclusions about the Measurable Hypotheses) either support existing General Hypotheses or lead us to propose new General Hypotheses? 


Although the questions for the Introduction and Discussion may be uncontroversial, the questions for the Methods and Results may seem (at first) to be unconventional. Specifically, the question for the Methods section may seem different from the common recommendation for the Methods to be primarily descriptive. Similarly, testing hypotheses in the Results section may seem counter to the common recommendation that the Results present data without interpretation (Holstein et al., 2015). However, I consider the recommendations here in the AFSP module to be consistent with common recommendations for the following reasons:


A) Methods. In my estimation, an explanatory Methods section is more complete than a descriptive Methods section. A purely descriptive Methods section omits important information, even if the section describes all experimental techniques in sufficient detail for rote replication. The reasons for choosing the selected methods instead of alternatives can also represent substantial time, effort and cost. The reasons for selecting methods can also reflect assumptions of a study that may not be explained elsewhere. Therefore, the AFSP module recommends a framework for the Methods section that includes the justification for each method as a part of its presentation. In my estimation, including the reasoning that leads to specific methods is a necessary part of explaining the methods in sufficient detail to be fully understood and replicated.


B) Results. Collecting data as objectively as possible is clearly important for quantitative research. Likewise, presenting data clearly and with as little subjective interpretation as possible is an important goal for the Results section of a scientific paper. However, in my estimation there is a reason that the Results section is called the "Results" section and not simply the "Data" section. The reason is that, for hypothesis-driven studies, Data alone are not Results. The framework presented in AFSP involves three elements that constitute a "result":


    1) A Measurable Hypothesis that is specific enough to be testable using data collected and analyzed by the experiment.

    2) Data that are objectively collected and analyzed, and suitable for testing the Measurable Hypothesis.

    3) The application of the data to testing the Measurable Hypothesis to yield a result.


    In a strongly-structured Results section, the process of applying data to a specific and testable Measurable Hypothesis is not interpretation because the conclusion does not require subjective judgment. For example, the hypothesis and data can be expressed as the two premises of a deductive syllogism. If the syllogism is strongly structured in the form of modus tollens, the conclusion directly follows from the premises. Therefore, the conclusion does not require interpretation and reasonably belongs in the Results section.


    I acknowledge that some writers and journals prefer to place the conclusions of arguments in the Discussion section. In my estimation, the location of the conclusions is a stylistic issue that does not affect the underlying substance of the reasoning. I consider it clearer to locate the conclusions close to the arguments and data that support them. However, reasonable people differ, and diversity is important for science.


Structuring each section around "Why" questions and consistent, specific reasoning can simplify the process of scientific writing. Therefore, the AFSP module seeks to use the most consistent frameworks possible to structure each section of a scientific paper.




Although there are many ways to conduct research, a comprehensive or comparative review of research techniques is not an objective of the RW/AFSP modules. Instead, the modules seek to help students add ONE framework for structuring scientific papers to their toolbox, with the sincere hope that students will engage in life-long learning to understand the diversity of scientific approaches.



    ABOUT THE AUTHOR



  • TOPIC OUTLINE







    A Framework For Scientific Papers Topic Outline
1) Hypotheses
    2) What Are Hypotheses?
    3) General Hypotheses
    4) Measurable Hypotheses
    5) How to Test Hypotheses?
    6) Deductive Testing
    7) Modus Tollens
    8) Strong Inference
    9) Inductive Testing
    10) Inductive Frameworks
    11) Limitations to Hypothesis Testing
    12) Scientific Papers
    13) IMRaD Introduction
    14) Introduction Why
    15) Introduction What
    16) Introduction How
    17) IMRaD Methods
    18) IMRaD Results
    19) Results What
    20) Results How
    21) Results Why
    22) IMRaD Discussion
    23) Discussion What
    24) Discussion Why
    25) Supporting General Hypotheses
    26) Revising General Hypotheses
    27) Discussion How
    28) Increasing Impact
    29) Title and Abstract
    30) Narrative Communication
    31) Spoken Communication


  • ABOUT THE AUTHOR





Devin Jindrich is an Associate Professor of Kinesiology at California State University, San Marcos, where he directs the Laboratory for Integrative Motor Behavior (LIMB) lab. Research in the LIMB Lab focuses on the interactions between biomechanics and motor control that result in effective movement, or “neuromechanics.” We seek to advance our fundamental understanding of how biomechanical and neural systems interact during movement, and apply neuromechanical principles to biomedical applications. Whereas simple mechanical models can describe important aspects of constant-speed forward locomotion, the mechanics and control of maneuvering (changing movement direction) or remaining stable (maintaining a desired movement direction) are less well understood. Consequently, we investigate the mechanisms used by insects, humans, and other animals to maneuver and remain stable during rapid locomotion, towards developing a general framework for understanding the control of maneuverability and stability. The results of these experiments support the hypothesis that musculoskeletal design and physiology can simplify the control requirements for maneuvering and remaining stable.

    A second focus of the LIMB lab is on using neuromechanics to prevent injuries. We use experimental studies and computer simulations to assess the potential for injuries associated with emerging multitouch computer input devices, with the ultimate goal of helping designers create sets of multitouch gestures that minimize future injury risk. Neuromechanical principles can also make important contributions to improving motor function following injuries. Using rodent and primate animal models, we use neuromechanical techniques to develop more effective therapies and technological interventions for restoring function after neuromotor injury.

    Dr. Jindrich has authored over 37 peer-reviewed publications in established scientific journals, in addition to numerous successful grant applications and conference presentations. Dr. Jindrich has over 10 years of diverse teaching experiences at both research universities (U.C. Berkeley, Arizona State University) and teaching/research institutions (CSU San Marcos). He has mentored undergraduate, Master's, and Ph.D. students at U.C. Berkeley, Harvard School of Public Health, UCLA, Arizona State University, and CSU San Marcos. Dr. Jindrich has successfully implemented the principles of Reasoned Writing / A Framework for Scientific Papers in several content-based courses (Motor Control and Biomechanics). With guidance and feedback, he has observed dramatic improvements in student reasoning and writing over the course of a single semester.

  • 1) HYPOTHESES


    Hypotheses are central to the modern scientific method


    "all observation must be for or against some view if it is to be of any service" -- Charles Darwin


    Many methods of inquiry are useful for science. Science has a long tradition of using inductive reasoning to create explanations of the world using careful observations (Bacon, 1620). Modern science also uses deductive reasoning to systematically test and improve scientific explanations (Popper, 1959). Therefore, both inductive and deductive reasoning are important for scientific inquiry.


    Hypotheses can help to structure both inductive and deductive reasoning. Hypotheses can also bind scientific papers together into consistent frameworks (Lobban and Schefter, 1992). Therefore, it will be useful to discuss what hypotheses are, and how we can test hypotheses to build knowledge about the world.



WHAT ARE HYPOTHESES?

    HOW CAN WE TEST HYPOTHESES?

  • 2) WHAT ARE HYPOTHESES?


    Hypotheses are testable explanations and predictions


    Many people think of a hypothesis as "an educated guess" (Moriarty, 1997). Is "an educated guess" a good definition of a hypothesis?


    An "educated guess" is actually not a bad definition for a hypothesis -- simply an incomplete definition.


In practice, hypotheses are not really "guesses," but hypotheses are tentative statements. We don't know if hypotheses are true or not. Therefore, one of the most important aspects of hypotheses is that we must be able to determine whether hypotheses are true or not true: hypotheses must be testable.


    For example, a testable hypothesis could be:


    "If I write my paper a week before the deadline and discuss it with my instructor before revision, I will get an A on the paper."


    The hypothesis is specific enough to make a testable prediction. However, the hypothesis is tentative: discussing a paper with the instructor is not a guarantee of an A (although still a good strategy to get a higher grade). 


    An example of a statement that is NOT testable would be:


    "I was a falcon in my past life." 


    Because there is no way to measure anything about past lives, we cannot test a statement about past lives. Therefore, the statement cannot be a hypothesis.


    DEFINITION:  A useful one-sentence definition of a hypothesis is: "A tentative, specific explanation or prediction of a phenomenon or an observation that can be rejected by experimental data."


Expressing a definition using only one sentence is concise. However, this one-sentence definition of a hypothesis expresses two ideas, which can be confusing. Therefore, it is useful to analyze (break apart) the idea of a hypothesis by defining two separate terms:


    DEFINITION:  A "General Hypothesis" is a tentative, specific explanation of a phenomenon that can be rejected by experimental data."


    DEFINITION:  A "Measurable Hypothesis" is a tentative, specific prediction that can be rejected by experimental data."


    Given our definitions, it will be useful to explore General and Measurable hypotheses in more detail. The section "How to test hypotheses" will explain why rejection is such an important attribute for hypotheses.


GENERAL HYPOTHESES

    MEASURABLE HYPOTHESES
  • 3) GENERAL HYPOTHESES


    General Hypotheses are explanatory models that apply to large categories of observations.


    DEFINITION: General Hypotheses are tentative, specific explanations of phenomena that can be rejected by experimental data.


    General Hypotheses take many different forms. Some General Hypotheses are relatively simple statements. For example, consider a General Hypothesis:


    General Hypothesis (GH) 1: "Soda consumption is one cause of childhood obesity in the United States."

    GH1 is tentative, because we are not sure that soda consumption contributes to obesity. GH1 is specific because it focuses on one factor (soda) out of many that could contribute to obesity. GH1 is part of an explanation of the phenomenon of childhood obesity (a current public health problem). It would be possible to demonstrate that soda consumption does not result in childhood obesity and reject GH1. Therefore, GH1 fits the definition of a General Hypothesis.


General Hypotheses typically apply to large categories of observations. For example, GH1 applies to the entire population of obese American children, a category that includes millions of people. Millions of children may seem like a large population, but General Hypotheses often have even wider scope. For example, the General Hypothesis "The interaction of actin and myosin is the basis for muscle contraction" explains how muscle contracts not only for all humans on Earth, but also for all other animals that have ever lived on Earth (many millions of species). Therefore, General Hypotheses can have scopes that extend over wide ranges of entities, space, or time.


    General Hypotheses express models of different types.


    It is useful to think of General Hypotheses as models that explain aspects of the natural world (Giere, 2001). For example, engineers frequently use scale models to design cars, airplanes, buildings, and other physical structures. Engineering models may be physical models, constructed from materials such as wood or plastic. More recently, engineers frequently use computer models, which simulate material properties and physical laws to design structures (or evaluate existing structures).


    Physical models are also used in some fields of science (Vogel, 1999). However, even physical models require mathematical relationships that enable scale models to make accurate predictions about the objects that the models represent. Moreover, constructing physical models is not feasible for many natural systems. Therefore scientists often express General Hypotheses as conceptual or mathematical models (Braaten and Windschitl, 2011). Conceptual models are typically simplified representations of natural systems. Conceptual models are similar to frameworks: structures of assumptions, facts, and rules that are connected using logical relationships. Similar to frameworks, it is often helpful to express conceptual models using pictures or graphical representations.


    For example, the interaction of actin and myosin (or "sliding filament") hypothesis for muscle function can be thought of as a conceptual model of muscle structure at the molecular level. The sliding filament hypothesis is primarily a structural model of muscle based on microscopic visualization (Huxley and Hanson, 1954). However, even structural models can often lead to functional predictions (such as the trapezoidal relationship between muscle force and length; Morgan et al., 2002). 


Other important models seek to predict function with only a very basic representation of structure. For example, Newton's Laws of Motion describe how objects behave mechanically. With Newton's second law:


    F = m*a


    we can predict how much an object will accelerate based on a force and the mass of the object. Similarly, Newton's law of gravitation 


F = G * (M1 * M2) / r^2


allows us to predict gravitational forces between objects based on mass and distance. However, Newton's laws do not explain why objects behave as they do. Moreover, Newton's laws are limited to objects moving slowly relative to the speed of light (Einstein's model, "relativity," is necessary to describe the behavior of fast-moving objects). Nevertheless, Newton's laws are useful examples of conceptual and mathematical General Hypotheses that can predict the behavior of physical objects moving relatively slowly.
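
    To make the predictive character of such mathematical models concrete, here is a minimal Python sketch, added for illustration (the specific masses, radius, and force values are assumptions, not part of the original module), that uses Newton's second law and law of gravitation to generate quantitative predictions that could be compared to measurements:

    # Minimal sketch (illustrative, assumed values): mathematical models make
    # quantitative predictions that experiments can check.

    G = 6.674e-11  # gravitational constant, N*m^2/kg^2

    def acceleration(force_n, mass_kg):
        # Newton's second law: a = F / m
        return force_n / mass_kg

    def gravitational_force(m1_kg, m2_kg, r_m):
        # Newton's law of gravitation: F = G * m1 * m2 / r^2
        return G * m1_kg * m2_kg / r_m**2

    earth_mass = 5.972e24   # kg
    earth_radius = 6.371e6  # m

    # Predicted gravitational force on a 1 kg object at Earth's surface (about 9.8 N),
    # and the predicted acceleration of a 1 kg object under a 9.8 N force.
    print(gravitational_force(1.0, earth_mass, earth_radius))
    print(acceleration(9.8, 1.0))

    The sketch predicts a force of roughly 9.8 N on a 1 kg object at Earth's surface, a quantitative prediction that measurements can support or reject.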


    "General" does not mean "vague."

    For writing, the distinction between "general" statements and "vague" statements is very important. 


    DEFINITION: "General" statements apply to a large range of people, places, or things; widespread.


    DEFINITION: "Vague" statements are uncertain, indefinite, or of unclear character or meaning.

    Clearly, "General" statements and "Vague" statements are very different things. However, many people mistake vague statements for general statements when writing hypotheses. For example, consider the statement:


    GH2: "Desirable Difficulties affect test performance" 

    (Remember that "Desirable Difficulties" are study or practice strategies hypothesized to make study or practice more difficult, but are desirable because the difficulties contribute to learning).


    Is GH2 a general statement, a vague statement, or both?


    GH2 is definitely a general statement. GH2 implies that Desirable Difficulties affect test performance in ALL situations: for all people, for all tests, for all types of performance. GH2 is also a vague statement. In addition to not specifying types of tests or performance, we also do not know what types of study strategy GH2 refers to. Therefore, GH2 is a vague over-generalization.


    A more specific statement would specify the aspect of study hypothesized to underlie a "Desirable Difficulty" compared to a specific type of study that does not involve Desirable Difficulties. For example, we could create a dichotomy to identify two potential sources of Desirable Difficulty:

[Figure: a dichotomy identifying two potential sources of Desirable Difficulty]

    Study involving Desirable Difficulties can be compared to "blocked study," where learners repetitively study a single subject for a block of time. Blocked study is both repetitive and predictable.


    GH3: "Non-repetitive study results in lower performance during practice, but more learning, than blocked study of mathematics skills." 


GH3 is general because it applies to all mathematics skills. However, GH3 also specifies the types of study strategies being compared (non-repetitive vs. blocked) and the outcome measures (performance during practice and learning). Therefore, we can envision testing GH3, provided that we more specifically define the category "mathematics skills" and the assessments of "performance" and "learning."


    General Hypotheses are explanations based on deductive reasoning, inductive reasoning, and assumptions to fill a "gap" in knowledge.


    How can we create General Hypotheses?


Creating General Hypotheses is challenging and requires extensive research and reasoning. The motivation for creating a General Hypothesis is most often to fill a "gap" in understanding. A "gap" in understanding is an area of inquiry that is (1) important; (2) NOT sufficiently understood; and (3) surrounded by areas that we do understand well enough to create explanations that include the gap in understanding.


    The purpose of a General Hypothesis is to provide one explanation that potentially fills the "gap" in understanding.


    There are many ways to create General Hypotheses. However, at the broadest level, creating a General Hypothesis involves creating an explanatory scientific model using deductive reasoning (based on known principles) and inductive reasoning (previous observations). We also cannot avoid making some assumptions. Known assumptions can be stated in a forthright manner. The fact that we also make assumptions that are unknown to us can temper our confidence in hypotheses.

    Whence Hypotheses?


    General Hypotheses do not need to be one sentence.


    There is no reason that General Hypotheses must be expressed in a single sentence! If a General Hypothesis is more than a one sentence explanation, then it is acceptable to use as many sentences as necessary to express the General Hypothesis. For example, it might be clearer to express GH3 with two separate sentences that each express one idea:


    GH4: "Non-repetitive practice results in lower performance during practice than blocked study of mathematics skills. However, non-repetitive practice results in more learning than blocked study of mathematics skills." 


Explanations (or models) may involve multiple steps or separate elements that require many sentences to explain. For example, the sliding-filament hypothesis involves myosin, actin, binding sites, ATP, etc. Models may be expressed using mathematical relationships. Including or explaining a model can be part of a General Hypothesis.


    Testing General Hypotheses requires many studies.


Although General Hypotheses have many forms, General Hypotheses have one property in common: testing General Hypotheses nearly always requires many studies. Theoretically, it may be possible to use modus tollens to reject a General Hypothesis using a single experiment. However, in practice even rejecting General Hypotheses requires multiple experiments (Giere, 2006). Experiments are not perfect, and depend on known and unknown assumptions. It is difficult or impossible to design an experiment that completely isolates a single variable and definitively tests a hypothesis. Therefore, rejecting a General Hypothesis typically requires the "consilience" of MANY studies that consistently point to a single conclusion to reject the General Hypothesis.


    MANY studies are also required to support General Hypotheses. Strong inference requires many studies to exclude many possible alternative hypotheses. Inductive reasoning involves considering evidence from many studies of different types to support or reject hypotheses. Even with many studies, scientists can never be 100% confident in conclusions about hypotheses. Science can therefore be seen as a continual quest to construct more useful (but always tentative to some degree) models of the universe.


    Therefore, many studies are needed to either reject or support General Hypotheses.

APPLICATION: General Hypotheses are testable models that apply to large categories of observations. Strong General Hypotheses are NOT vague, but as specific as possible.

  • 4) MEASURABLE HYPOTHESES


    Measurable hypotheses express predictions that can be experimentally tested.


    DEFINITION:  A "Measurable Hypothesis" is a tentative, specific prediction that can be rejected by experimental data.


    Measurable Hypotheses have a very consistent form: Measurable Hypotheses involve a prediction that can be directly compared to an experimental outcome to result in a conclusion. 


    An example of a Measurable Hypothesis is:


    MH1: "We hypothesize that students who serially practice math skills (algebra, geometry, and word problems) will have significantly higher performance on retention and transfer tests than students who use blocked practice of each math skill."


    Can you think of an experiment that would be able to test MH1?

    Strong Measurable Hypotheses predict the outcome of experiments.

Simply reading MH1 suggests that it could be tested using a cross-sectional design, where one group of students uses serial practice and one group of students uses blocked practice of math skills (algebra, geometry, and word problems). Average performance on two tests (retention and transfer) could be compared between the two groups using a statistical test (e.g. a t-test).

    There are still quite a few areas that need to be specified. For example, the appropriate student population to recruit, the amount of practice (e.g. how much practice per day for how many days), the degree of interleaving (e.g. how much time to practice each subject before moving on to the next), whether practice will be spaced with rest breaks or not, the specific type of retention and transfer tests to use, and many others. The approaches chosen for each area, and the reasons for choosing each approach, can be specified and justified in the Methods section of the paper. Therefore, although Measurable Hypotheses cannot express all details of an experiment, strong Measurable Hypotheses predict an experimental design and outcome that depend on a limited number of the most relevant variables.


    Measurable Hypotheses are Predictions.

    A rule of thumb for writing Measurable Hypotheses is to keep in mind that Measurable Hypotheses are predictions. Therefore, it should be easy to write a Measurable Hypothesis as a prediction. For example, if we simply change the word "hypothesis" to "prediction" in MH1, the statement should still make sense:


    MH1: "We predict that students who serially practice math skills (algebra, geometry, and word problems) will have significantly higher performance on retention and transfer tests than students who use blocked practice of each math skill."


    One approach to writing strong Measurable Hypotheses is simply to begin writing with the words "We predict," and revise the prediction until it is specific enough to be compared directly to the outcome of an experiment that you can actually perform. Once a statement makes a prediction specific enough to directly test, then the statement is ready to be a hypothesis, and you can simply replace the word "predict" with "hypothesize."


    Measurable Hypotheses are based on General Hypotheses.

    Although the distinction between General and Measurable Hypotheses is useful, General and Measurable Hypotheses are closely linked. Specifically, Measurable Hypotheses are predictions that come from General Hypotheses. For example, let's re-visit our third General Hypothesis:

    GH3: "Non-repetitive study results in lower performance during practice, but more learning, than blocked study of mathematics skills."


    GH3 is a reasonably specific General Hypothesis, but still does not make predictions that we can directly compare to the outcomes of experiments. To make testable predictions, we can specify that serial study is one specific approach (out of many possible ways) to make study non-repetitive. We can also specify that retention and transfer tests are techniques for assessing learning. Therefore, we can create at least 3 Measurable Hypotheses from GH3:


[Figure: three Measurable Hypotheses derived from General Hypothesis GH3]

    Useful Measurable Hypotheses use variables that we can directly measure.

    To an extent, Measurable Hypotheses are operational: strong Measurable Hypotheses are expressed in terms of measurements sufficient to test the hypothesis. Three common ways of testing Measurable Hypotheses are:


    1) Significant differences among groups (requires STATISTICAL comparisons, t-tests, ANOVA, etc.).
    2) Significant differences over time (requires STATISTICAL comparisons, paired t-tests, repeated-measures ANOVA, etc.).
    3) Significant correlations (requires STATISTICAL comparisons, coefficients of determination, etc.).

Although it is not necessary to include details of which statistical tests will be performed, writing Measurable Hypotheses that clearly state statistical comparisons (or other objective criteria) is helpful.
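
    As an illustration of what "operational" can mean in practice, the following minimal Python sketch (using the SciPy library; the data and variable names are invented assumptions for illustration only) shows the three kinds of comparisons listed above:

    # Minimal sketch of three common statistical comparisons (invented data).
    from scipy import stats

    serial_scores  = [78, 85, 90, 72, 88, 81, 94, 69]   # retention scores after serial study
    blocked_scores = [70, 74, 79, 68, 72, 75, 80, 66]   # retention scores after blocked study

    # 1) Significant differences among groups (independent-samples t-test).
    t_between, p_between = stats.ttest_ind(serial_scores, blocked_scores)

    # 2) Significant differences over time (paired t-test: the same students, pre vs. post).
    pre_scores  = [60, 55, 70, 65, 58, 62, 68, 59]
    post_scores = [72, 63, 78, 70, 66, 71, 75, 64]
    t_within, p_within = stats.ttest_rel(pre_scores, post_scores)

    # 3) Significant correlations (e.g. hours of practice vs. post-test score).
    hours_practiced = [1, 2, 3, 4, 5, 6, 7, 8]
    r, p_corr = stats.pearsonr(hours_practiced, post_scores)

    print(p_between, p_within, p_corr)

    Each P value would then be compared to the chosen significance level (e.g. 0.05) when testing the corresponding Measurable Hypothesis.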


    Measurable Hypotheses do not need to be one sentence.

    Similar to General Hypotheses, Measurable Hypotheses can be as many sentences as necessary to explain the Measurable Hypothesis. For example, we may choose to clarify Measurable Hypothesis 2 (above):


    "MH 2: Serial study will result in significantly higher scores on algebra, geometry, and word problem tests than blocked study during retention tests. We will test for retention one day following practice and 10 days following practice."


    Additional clarification may require additional sentences. However, all clarifications are part of the same Measurable Hypothesis.

APPLICATION: Write Measurable Hypotheses as specific predictions that reasonably follow from a General Hypothesis. Each General Hypothesis can result in many Measurable Hypotheses. Express Measurable Hypotheses operationally, in terms of specific (e.g. statistical) tests that can be directly applied to data.



  • 5) HOW TO TEST HYPOTHESES?


    Both deductive and inductive reasoning are useful for testing hypotheses.


    Creating specific General and Measurable Hypotheses is important for scientific progress. However, even if hypotheses are clear and specific, testing the hypotheses involves careful thinking and reasoning. For example, the capabilities and limitations of statistical tests can influence decisions about hypotheses. Therefore, it will be useful to discuss how hypotheses can be tested in more detail.


    Testing hypotheses involves making reasoned arguments. A hypothesis and experimental data form the premises of arguments that lead to a conclusion about whether to reject or support the hypothesis. Two basic types of reasoning can lead to conclusions: deductive reasoning and inductive reasoning.


    The following sections apply the concepts of deductive and inductive reasoning to testing hypotheses. Testing hypotheses can also be limited by the biases and desires of researchers.



DEDUCTIVE TESTING

    INDUCTIVE TESTING

    LIMITATIONS TO HYPOTHESIS TESTING

  • 6) DEDUCTIVE TESTING


    Deductive reasoning is useful both for rejecting and supporting hypotheses.


    Deductive reasoning leads to conclusions based on premises that can be demonstrated to be true. Deductive reasoning is a central part of the "Hypothetico-Deductive" model for hypothesis testing and scientific discovery. However, deductive arguments must be valid and sound to lead to strong conclusions. 


    How can we use deductive reasoning to test hypotheses? Deductive reasoning can be used to reject and to support hypotheses:


    1) The logical syllogism modus tollens can reject hypotheses.
    2) Strong inference using a tree structure can support hypotheses.



    MODUS TOLLENS

    STRONG INFERENCE




  • 7) MODUS TOLLENS


    The syllogism modus tollens can be used to reject hypotheses.


    Modus tollens is a valid deductive syllogism that takes the form:


    PREMISE: If A then B.

    PREMISE: B is NOT true.
    CONCLUSION: Therefore, A is NOT true.


    How can we use modus tollens to test hypotheses? 

    For our first premise, we could imagine making a specific prediction based on a General Hypothesis. For example, we could predict:


    PREMISE 1: IF non-repetitive study results in more learning than blocked study of mathematics skills,
    THEN  serial study (one type of non-repetitive study) will result in significantly higher scores on algebra, geometry, and word problem tests than blocked study during retention tests.


The first part of the premise is very general, implying that all types of non-repetitive study result in more learning than blocked study of mathematics skills. Therefore, the first part of the premise could be thought of as a General Hypothesis. The second part of the premise is a specific prediction (out of MANY possible). The second part of the premise is therefore one possible Measurable Hypothesis.


    We could then perform an experiment, collect data, perform statistical tests, and find:


PREMISE 2: Retention test scores on algebra, geometry, and word problem tests after serial study were NOT significantly higher than after blocked study (t-tests; P > 0.05).


    Using modus tollens, we could come to the conclusion:


    CONCLUSION: Serial study does NOT result in more learning than blocked study of mathematics skills. Non-repetitive study does not always result in more learning than blocked study of mathematics skills.


    Is the argument a valid deductive argument and a form of modus tollens?


Logically, the argument is valid because it has the form of modus tollens. If our second premise is also true, then the conclusion follows that non-repetitive study does not always result in more learning of mathematics skills than blocked study, and modus tollens provides the opportunity to reject General Hypotheses even based on a single experiment.


    Is the argument a sound deductive argument?


    The argument will be sound if the second premise is true. You might argue: "how could we question its truthfulness without actually seeing the data?" You would have a legitimate point. HOWEVER, there is one problem with Premise 2 that doesn't depend on the data.


The problem with Premise 2 is that, in common practice, statistical tests (like t-tests) are asymmetrical. Statistical tests CAN test for differences among groups to a specified level of confidence (e.g. 95%). However, if a statistical test fails to find significant differences among groups, then the statistical test has simply failed. A failed statistical test is NOT strong evidence of the absence of differences among groups. Additional analyses such as interval estimation or power analysis can determine the probability of Type II error (Giere, 2006), and completely different statistical frameworks such as Bayesian statistics can provide less categorical statistical comparisons (Höfler et al., 2018). However, a more extensive or nuanced approach to statistics is outside the scope of the current module.


    A failed statistical test is commonly interpreted as: we still don't know if there is a significant difference between groups or not.


For example, solely because our t-test failed to find a significant difference between the serial study and blocked study groups (Premise 2), we cannot conclude (within our agreed-upon 95% confidence) that serial study and blocked study are NOT different. All we can conclude is that our t-test failed to find a significant difference between groups: we still do not know if there is a difference between serial and blocked practice or not! Therefore, Premise 2 is a non sequitur. The failure of a statistical test does NOT reasonably lead to the conclusion that serial study results in the same amount of learning as blocked study of mathematics skills.


Why can we NOT conclude that two groups are the same if a statistical test fails to find a significant difference?


    The reason that we cannot come to firm conclusions based solely on the absence of significant differences is that there are many ways for statistical tests to fail. A true lack of statistical differences between groups is only one potential reason that a statistical test can fail. Other common reasons for a "false negative" are:


    * Sample sizes too small to detect differences between groups (lack of statistical "power").

    * Violating one of the assumptions of parametric statistical tests (e.g. non-normal distribution).

    * Outliers in the dataset that substantially increase the variance of one or more groups.

* Co-variation among variables that increases variance.


    Strong study design and data analysis can mitigate some of the problems that affect statistical tests. However, for the parametric statistical tests commonly used in educational settings, failing to reject a hypothesis does not provide sufficient evidence to "accept" the hypothesis.
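
    A small simulation can make the risk of "false negatives" concrete. The minimal Python sketch below is an illustration added here; the effect size, variability, and sample size are assumptions chosen for the example, not values from the module. It repeatedly draws small samples from two populations that truly differ and counts how often a t-test fails to find a significant difference:

    # Minimal sketch: with small samples, a t-test often fails to detect a real difference.
    # The population means truly differ by 5 points; all values here are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments = 2000
    n_per_group = 8          # small sample size per group
    failures = 0

    for _ in range(n_experiments):
        serial  = rng.normal(loc=80, scale=10, size=n_per_group)   # true mean 80
        blocked = rng.normal(loc=75, scale=10, size=n_per_group)   # true mean 75
        _, p = stats.ttest_ind(serial, blocked)
        if p >= 0.05:
            failures += 1

    # Fraction of simulated experiments that fail to find the real difference (Type II errors).
    print(failures / n_experiments)

    With these assumed values, most simulated experiments fail to reach P < 0.05 even though a real difference exists, which is why a failed test alone cannot justify "accepting" a hypothesis of no difference.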


    "Null" Hypotheses allow us to reject hypotheses based on the statistical finding of significant differences.


    If a statistical test fails to find a statistically significant difference between groups, without additional analysis we cannot be confident that there is no actual difference between groups. However, if statistical tests are performed correctly, we can be confident (to a specified confidence level) that finding a statistically significant difference between groups indicates that an actual difference exists between groups. The level of our confidence is related to the "P value," which indicates the potential for a "false positive." A false positive means that even though there is NO difference between two groups, our statistical test finds one. P < 0.05 means that there is less than a 5% chance that our statistical test found a difference between groups that wasn't actually there. 


    Therefore, to use statistics with modus tollens, we must select a reasoning structure that allows us to use significant differences to reject hypotheses. So-called "Null" hypotheses allow us to use modus tollens to reject hypotheses. 


    DEFINITION: A "null" hypothesis is a prediction of NO differences between or among groups. 


    Null Hypotheses may seem awkward because we are predicting the absence of differences instead of the presence of differences between groups (even though the presence of differences is commonly why we create the hypotheses in the first place). However, null hypotheses can help to clarify arguments. For example, we could frame our first premise as a null hypothesis:


    PREMISE 1: IF non-repetitive study does NOT result in more learning than blocked study of mathematics skills,
    THEN  serial study will result in scores that are NOT significantly higher than blocked study on algebra, geometry, and word problems during retention tests.


    If we conduct an experiment and find:


PREMISE 2: Retention test scores on algebra, geometry, and word problem tests after serial study were significantly higher than after blocked study (t-tests; P < 0.05),


    we can use modus tollens to come to the conclusion:


    CONCLUSION: We reject our null hypothesis. Serial study results in more learning than blocked study of mathematics skills.


The argument is both valid and sound because we use the valid syllogism modus tollens to reject a hypothesis based on a statistically significant difference.
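
    As a minimal sketch of how this argument maps onto an actual analysis (the data, the one-sided test, and the 0.05 threshold are illustrative assumptions, not part of the original module), the Python code below rejects the null Measurable Hypothesis only when a significant difference in the predicted direction is found:

    # Minimal sketch of modus tollens with a null hypothesis (invented data).
    # Null Measurable Hypothesis: serial-study scores are NOT significantly higher
    # than blocked-study scores.
    from scipy import stats

    serial_scores  = [78, 85, 90, 72, 88, 81, 94, 69]
    blocked_scores = [70, 74, 79, 68, 72, 75, 80, 66]
    alpha = 0.05

    # One-sided test of the prediction that serial > blocked
    # (the 'alternative' argument requires SciPy 1.6 or later).
    t, p = stats.ttest_ind(serial_scores, blocked_scores, alternative="greater")

    if p < alpha:
        # Premise 2 is true (a significant difference was found), so by modus tollens
        # we reject the null hypothesis.
        print("Reject the null hypothesis: serial study scored significantly higher.")
    else:
        # A failed test is NOT evidence that the groups are the same.
        print("Inconclusive: the test failed to find a significant difference.")

    Note that the else branch reports an inconclusive test rather than "accepting" the null hypothesis, for the reasons discussed above.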


    A reasonable question might be: what if we performed our experiment and still didn't find a significant difference between groups? In the case of the lack of a significant difference, the argument becomes:


    PREMISE 1: IF non-repetitive study does NOT result in more learning than blocked study of mathematics skills,
    THEN  serial study will result in scores that are NOT significantly higher than blocked study on algebra, geometry, and word problems during retention tests.


    PREMISE 2: Retention test scores on algebra, geometry, and word problem tests after serial study were NOT higher than scores after blocked study during retention tests (t-tests; P > 0.05).


    CONCLUSION: We support our null hypothesis that serial study does NOT result in more learning than blocked study of mathematics skills.


    Is there a problem with the final argument?


    The problem with the final argument is that it is in the form of a logical fallacy: affirming the consequent. We do not even need to think about the limitations of statistical tests to know that the argument is invalid and cannot be sound. Therefore, null hypotheses can help to clarify reasoning.

APPLICATION: The deductive syllogism modus tollens allows us to reject hypotheses. Using null hypotheses can help to construct valid syllogisms and account for the limitations of statistical tests.


  • 8) STRONG INFERENCE


    "Strong inference" can increase confidence in hypotheses by rejecting alternative hypotheses.


    Rejecting null hypotheses using modus tollens seems like quite a negative project. How can scientists "build models of the universe and its inhabitants" if all scientists can reasonably do is reject hypotheses?


    One process for using deductive reasoning for scientific discovery has been called "strong inference" (Platt, 1964). Strong Inference repeats a single framework with only three steps:


    1) Devising alternative hypotheses.
    2) Designing one or more experiments with at least two feasible outcomes. Every feasible outcome rejects one or more of the hypotheses.
    3) Carrying out the experiment. Using the data to reject all hypotheses that can reasonably be rejected.


    It would be useful to define some terminology before discussing Strong Inference in more detail. 




    1) Devising alternative hypotheses


    Often, people think that "alternative" hypotheses must be the opposite of a General or Measurable Hypothesis that is the focus of a study. However, alternative hypotheses are strongest if they are NOT simply the negation of a hypothesis, but another plausible explanation of a phenomenon or outcome of an experiment. Often, alternative hypotheses are the result of reasoning from a different set of assumptions than the main hypotheses.


    For example, we could consider alternative predictions for study strategy based on how repetitive and predictable the study is. Repetitive study involves doing the same types of problems over and over again, whereas non-repetitive study involves switching among different types of problems. Predictable study is when a person knows the order of problem type, whereas unpredictable study is when a person cannot predict which problem they will work on next. 


    Blocked practice is both repetitive and predictable. Serial practice is non-repetitive but IS predictable. Random practice is neither repetitive nor predictable. We could ask the question: "Do repetitiveness, predictability, both, or neither affect learning?"


    Three alternative General Hypotheses that might reflect different assumptions about which type of study contributes most to learning might be:


    GH1: Blocked study results in higher performance during practice and more learning than non-repetitive or unpredictable study of mathematics skills.
    GH2: Non-repetitive study results in lower performance during practice, but more learning, than blocked study of mathematics skills.
    GH3: Unpredictable study results in lower performance during practice, but more learning, than blocked study of mathematics skills.


    More General Hypotheses could also be possible based on other assumptions (e.g. both non-repetitive and unpredictable study might contribute to learning but not either on their own). 


    Developing and testing viable alternative hypotheses are important for at least two reasons:


A) Strong alternative hypotheses can provide an important "hedge," or safeguard in case the data do not turn out as predicted. The safest experiments involve alternative hypotheses that ensure interesting conclusions no matter what the data are. The effort necessary to carefully design experiments and create alternative hypotheses can substantially reduce the time and stress, and increase the probability of success, when analyzing data and making conclusions.


    B) Strong alternative hypotheses can prevent emotional "attachment" to hypotheses (Platt, 1964). Scientists are human, and cannot be purely objective observers and decision-makers. Scientists who have invested considerable time and effort into a single hypothesis will potentially have difficulty rejecting their hypothesis. The scientists may not analyze and interpret the data in the most reasonable way, but in the way most favorable to their hypothesis. However, having alternative hypotheses increases the probability that some hypotheses will not be rejected, making it easier to reject others as necessary.


    APPLICATION: Creating substantive alternative hypotheses has both practical and scientific value. Practically, alternative hypotheses can reduce the possibility for inconclusive experiments. Scientifically, alternative hypotheses can contribute to reasoning and objectivity.




    2) Designing one or more experiments with at least two feasible outcomes.

    The first step of experimental design is to develop Measurable Hypotheses. Based on each General Hypotheses, we could create several Measurable Hypotheses.


    For example, Measurable Hypotheses corresponding to GH1 could be:


    MH1a: Blocked study [both repetitive and predictable] will result in significantly higher performance during practice than both serial  [not repetitive but predictable] and random [not repetitive or predictable] study of mathematics skills. 

    MH1b: Blocked study will result in significantly higher performance during retention and transfer tests than both serial and random study of mathematics skills. 


    Measurable hypotheses corresponding to the alternative General Hypothesis GH2 could be:


    MH2a: Serial study will result in significantly lower performance during practice than blocked study of mathematics skills. However, serial study will not result in performance during practice that is significantly different from random study.


MH2b: Serial study will result in significantly higher performance during retention and transfer tests than blocked study of mathematics skills. However, serial study will not result in performance during retention and transfer tests that is significantly different from random study.


    Measurable hypotheses corresponding to the alternative General Hypothesis GH3 could be:


    MH3a: Random study will result in significantly lower performance during practice than both serial and blocked study of mathematics skills. 


    MH3b: Random study will result in significantly higher performance during retention and transfer tests than both serial and blocked study of mathematics skills. 


    A Graphical Framework can help to visualize our hypotheses. A useful framework for Strong Inference is a tree structure (Platt, 1964).


    Strong Inference Tree 01

    Starting from an overall question at the "trunk" of the tree, we can imagine that different branches of the tree represent different possibilities (General Hypotheses). Each General Hypothesis sprouts at least one Measurable Hypothesis. 
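    As a minimal illustrative sketch (not part of the original framework; the wording is condensed from the hypotheses above), the branching structure can be written down as a small nested data structure:

    ```python
    # Illustrative sketch: a Strong Inference "tree" as nested Python dictionaries.
    # The question forms the trunk; each General Hypothesis (GH) is a branch that
    # sprouts one or more Measurable Hypotheses (MH). Labels follow the
    # study-schedule example above (wording condensed).

    strong_inference_tree = {
        "question": "Do repetitiveness, predictability, both, or neither affect learning?",
        "branches": {
            "GH1": {"claim": "Blocked (repetitive, predictable) study yields the most learning.",
                    "measurable": ["MH1a", "MH1b"]},
            "GH2": {"claim": "Non-repetitive study yields more learning than blocked study.",
                    "measurable": ["MH2a", "MH2b"]},
            "GH3": {"claim": "Unpredictable study yields more learning than blocked study.",
                    "measurable": ["MH3a", "MH3b"]},
        },
    }

    # Print the tree: trunk, branches, and the Measurable Hypotheses on each branch.
    print(strong_inference_tree["question"])
    for gh, branch in strong_inference_tree["branches"].items():
        print(f"  {gh}: {branch['claim']} -> tests: {', '.join(branch['measurable'])}")
    ```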


    If the tree seems somewhat complicated, then perhaps we should trim it! 


    Trimming the tree involves designing an experiment that can allow us to cut away one or more branches. For example, we could test mathematics performance during practice and also performance on retention and transfer tests to measure learning. We could compare three separate groups of students: students who engaged in blocked study, students who engaged in serial study, and students who engaged in random study. Significant differences in performance among groups could lead us to reject some hypotheses (keep in mind that conversion to null hypotheses may be necessary to properly perform statistics).
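    To make the trimming step concrete, the sketch below shows one way the three groups could be compared statistically. The scores are invented placeholders, and the specific tests (one-way ANOVA, pairwise t-tests, and any correction for multiple comparisons) would depend on the actual design; scipy is assumed to be available.

    ```python
    # Sketch only: comparing three study-schedule groups on a retention test.
    # Significant pairwise differences are what would justify "trimming" branches.
    from scipy import stats

    retention_scores = {
        "blocked": [62, 58, 65, 60, 57, 63],   # hypothetical retention-test scores
        "serial":  [74, 71, 78, 69, 75, 72],
        "random":  [73, 70, 76, 71, 74, 69],
    }

    # Overall test: is there any difference among the three groups?
    f_stat, p_overall = stats.f_oneway(*retention_scores.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

    # Pairwise comparisons (a correction for multiple comparisons, e.g. Bonferroni,
    # would be appropriate in a real analysis).
    for a, b in [("blocked", "serial"), ("blocked", "random"), ("serial", "random")]:
        t_stat, p_val = stats.ttest_ind(retention_scores[a], retention_scores[b])
        print(f"{a} vs {b}: t = {t_stat:.2f}, p = {p_val:.4f}")
    ```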


    APPLICATION: To use Strong Inference and deductive reasoning, design experiments that are capable of rejecting one or more alternative hypotheses.


    Horizontal Divider

    3) Carrying out the experiment. Using the data to reject all hypotheses that can reasonably be rejected.

    With a strong framework of General and Measurable hypotheses, carrying out an experiment can be relatively straightforward (although experiments are often more complicated than predicted). Imagine that we performed the experiment with results shown in the following table. For the table, only statistically significant comparisons are indicated with a ">" sign (non-significant comparisons not listed).
    Study Schedule Results
    Based on the results, can we reject any of our General Hypotheses?

    Yes. Based on our Results, we can reject both GH1 and GH3. Blocked practice is not as effective as non-repetitive practice for math skills. However, unpredictability did not improve learning outcomes relative to simply non-repetitive practice. Therefore, we can "trim" two branches from our tree:
    Strong Inference Tree 02

    Once we have removed the branches of the tree that we have rejected, we can continue with a new set of questions based on GH2:


    Strong Inference Tree 03
    We can then repeat the procedure (starting from step 1). Each question (branch) of the tree can give rise to several alternative General Hypotheses, Measurable Hypotheses, and experiments. With every experiment, we use deductive reasoning to reject as many hypotheses as possible. Hypotheses that have not been rejected after many experiments have tried (and failed) to reject them can be considered to be "supported." However, even hypotheses that have survived many tests are still hypotheses. There is no time when it is possible to stop and declare that a hypothesis has been "proven" to be true, because it is always possible that other alternative hypotheses exist.


    APPLICATION: Use experimental evidence to reject as many hypotheses as possible. Hypotheses that survive an experiment without being rejected can be thought of as "supported."

    Horizontal Divider

  • 9) INDUCTIVE TESTING


Inductive reasoning is a common part of scientific decision-making.


DEFINITION: Inductive reasoning constructs arguments that are "knowledge expanding." Knowledge expanding means that the conclusions of arguments exceed the sphere of the premises (Okasha, 2016). In experimental science, inductive reasoning typically involves generalizing based on a set of observations (although other forms of induction are possible; Moore and Parker, 2017).


Inductive reasoning has long been a part of scientific inquiry (Bacon, 1620). However, critics have questioned whether inductive reasoning should be used to defend scientific conclusions (Popper, 1959). Inductive reasoning is not truth preserving, and inductive arguments cannot be "valid" or "sound." Therefore, inductive reasoning cannot lead to definitive conclusions like valid deductive arguments can.


Nevertheless, inductive reasoning continues to be used widely in science (Okasha, 2016). Inductive reasoning is important for several reasons. First, scientists, engineers, policy-makers and others often must make evidence-based decisions based on existing information. It may not be possible for decision-makers to wait for critical experiments to reject alternative hypotheses and deductively converge on strongly-supported models. Second, it may simply not be possible to enumerate and clearly test all necessary alternative hypotheses (O'Donohue and Buchanan, 2001). Scientists may not have the required conceptual or experimental frameworks or capability to perform experiments that can discriminate between alternatives. Third, science may benefit from a diversity of approaches (O'Donohue and Buchanan, 2001). Scientific discoveries may sometimes emerge from departures from the hypothetico-deductive system exemplified by Strong Inference.


    Inductive reasoning can also help to clarify the logic and presentation of individual studies. Three specific uses for Inductive reasoning are to:


    1) Generate General Hypotheses to test deductively.

    2) Resolve conflicts among data and weigh evidence to defend results. 

    3) Support General Hypotheses that no single experiment can test.


    The Introduction and Discussion sections will explain each of the three specific uses for inductive reasoning in more detail. In both the Introduction and the Discussion, using specific frameworks can help to structure inductive reasoning.


INDUCTIVE FRAMEWORKS

  • 10) INDUCTIVE FRAMEWORKS


    Frameworks can help structure and simplify inductive reasoning.


There are many ways to reason using induction. Inductive arguments can be based on many types of observations, use different numbers of observations, and include different specific observations. Instead of being "valid" or "invalid," "sound" or "unsound," inductive arguments fall on a continuum of persuasiveness: from weak to strong (Layman, 2005). Because of the "open-ended" nature of induction, it can be difficult to organize information into simple and compelling arguments. 


    Structure is one key to clarity despite complexity.

    Using a framework to structure inductive reasoning can be helpful. For example, frameworks for establishing causation can be generalized to help make inductive arguments (Mill, 1843).


    Many people are familiar with the phrase "correlation does not imply causation." Simply because two events are correlated with each other does NOT mean that one event causes the other to happen. Concluding that a causal relationship exists solely from a correlation is an example of the logical fallacy Affirming the Consequent.

    However, sometimes correlations DO reflect causal relationships! More generally, sometimes data are consistent with predictions because the predictions come from valid scientific models! How can predictions from valid models be separated from spurious coincidence?


    One way to gain confidence in hypothesized causal relationships or scientific models is to repeatedly test predictions of the models and reject alternative models using Strong Inference. Another possibility is to inductively support validity. Inductive reasoning has limitations that constrain the strength of inductive conclusions. However, structuring inductive reasoning using different types, or categories, of evidence can strengthen inductive arguments. 


When evidence from different categories of investigation is consistent with a hypothesis, we can be more confident in the hypothesis. 

One useful set of categories for evidence was explained by Bradford Hill, and is referred to as "Hill's Criteria" (Hill, 1965; Fedak et al., 2015). Although I have modified Hill's criteria from the original 9 to 8 and changed some terminology, the following categories are generally consistent with the original criteria.


    Modified Hill's Criteria.

1) Reliability – Do repeated studies all lead to the same conclusions?

    For General Hypotheses to be useful, they must be capable of predictions that apply in different contexts. The most basic requirement for General Hypotheses is to be "reliable:" for the hypotheses to make predictions that match evidence when the same experiment is repeated. For example, consider a new teaching strategy hypothesized to result in better academic performance than a standard strategy. If the teaching strategy truly is effective, then the strategy should result in better performance than a standard strategy when repeated with many different groups of students. 

2) Diversity – Does evidence from many different approaches all support the hypothesis?

General Hypotheses (i.e. scientific models) can be tested with many different types of evidence. If a General Hypothesis can lead to predictions that are consistent with evidence from many different types of measurements, then it is more likely that the General Hypothesis is a valid representation of underlying phenomena. For example, anthropogenic (human-caused) climate change is supported by evidence from many fields of science: direct measurements of temperature, mathematical models, complex computer simulations, biological measurements (e.g. changes to animal and plant distributions, flowering times, etc.), ocean chemistry measurements, and more (Intergovernmental Panel on Climate Change, 2016). A diversity of evidence supports the hypothesis of anthropogenic climate change. Similarly, the Theory of Evolution is consistent with every aspect of biology: from paleontology to anatomy to physiology to ecology to molecular and cell biology. There is an overwhelming diversity of evidence that past and present biological variety results from evolution (Dawkins, 2009).

3) Plausibility – Are there reasonable mechanisms that underlie observed outcomes? Are the mechanisms consistent with, and not in conflict with, other knowledge?

    "Plausability" means that hypothesized mechanisms or relationships are consistent with other known processes. Examples of "known" processes include laws of physics, chemistry, mechanics, and other fundamental laws. Known processes also include more specific information. For example, the hypothesis that smoking causes cancer is plausible because smoke contains mutagens that damage DNA, and damaged DNA is one mechanism for the development of cancer. 


    4) Experimental Interventions – Can direct interventions produce predicted outcomes?


    Hypotheses can be supported using direct experiments to test predictions of the hypotheses. Using Strong Inference and deductive reasoning to experimentally test the predictions of a hypothesis can therefore be one contributor to an inductive argument for the validity of the hypothesis.


    5) Temporality – Are there time-based dependencies (e.g. causes precede effects)?

    For causal relationships, causes must precede effects. For example, if depression causes disruptions to sleep, then other symptoms of depression should precede sleep problems. If depression and sleep problems are concurrent (as they often are), then causality is more difficult to establish.


6) Strength – Is there a strong association between variables?

    A strong observed relationship among variables that make up a hypothesis (e.g. a correlation) can support the validity of the relationship (e.g. that the relationship reflects causality). For example, if a small exposure to a chemical consistently leads to a large outcome, then there is a strong association that suggests a causal relationship.
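    As an illustrative sketch (the exposure and outcome values are invented), the strength of an association can be quantified with a correlation coefficient:

    ```python
    # Illustrative only: quantifying the strength of an association between an
    # exposure and an outcome with a Pearson correlation (values are invented).
    from scipy import stats

    exposure = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # hypothetical exposure levels
    outcome  = [1.1, 1.9, 3.2, 4.1, 4.8, 6.2, 7.0]   # hypothetical measured outcome

    r, p_value = stats.pearsonr(exposure, outcome)
    print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
    # A large |r| indicates a strong association; by itself it does not establish
    # causation, but it contributes one line of evidence under Hill's criteria.
    ```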

7) Specificity – Are there specific factors (i.e. not all factors) that result in observed outcomes?

    Just as strong relationships among variables can support the validity of a hypothesis, specific relationships among variables can also support hypotheses. For example, if exposure to a chemical consistently results in specific consequences that are not otherwise observed, then there is a specific association that suggests a causal relationship.

8) Biological gradient – Are there biological gradients or dose-response relationships?

Biological gradients can be naturally-occurring, or be part of experimental design. A "gradient" is an increase or decrease of one factor associated with a change in another factor. Experiments sometimes involve "dose-response" tests, where experimental systems are systematically exposed to different levels of a factor, and the responses of the system are measured. Consistent biological gradients can support causality. In the simplest case of a "linear" gradient, the response will change directly with the change of dose. However, dose-response relationships do not need to be linear, and often involve thresholds or non-linear associations.
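    A brief sketch of checking for a gradient (the dose and response values are invented) is shown below; a real analysis might require non-linear models such as thresholds or sigmoidal curves.

    ```python
    # Illustrative sketch: checking for a dose-response gradient with a simple
    # linear trend (doses and responses are invented placeholders).
    import numpy as np

    dose     = np.array([0, 1, 2, 4, 8, 16])              # hypothetical dose levels
    response = np.array([2.0, 2.4, 3.1, 4.2, 6.0, 9.5])   # hypothetical responses

    slope, intercept = np.polyfit(dose, response, 1)
    predicted = slope * dose + intercept
    r_squared = 1 - np.sum((response - predicted) ** 2) / np.sum((response - response.mean()) ** 2)

    print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")
    # A consistent positive slope with a reasonable fit is evidence of a gradient;
    # the absence of any dose-response trend would weaken a causal interpretation.
    ```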

    Horizontal Divider


    How can we use Hill's Criteria to help construct arguments to support hypotheses?


Most importantly, inductive arguments should faithfully represent the available evidence, including a discussion of research findings that may NOT support a General Hypothesis in addition to findings that DO support the hypothesis. In particular, the inductive arguments that we construct must avoid inductive fallacies and confirmation bias. 


Hill's Criteria can help organize information relevant to a General Hypothesis. For example, to evaluate the General Hypothesis "Non-repetitive practice results in more learning than blocked practice," we can organize our research according to Hill's Criteria (facilitated by specific categories in our literature grids). For simplicity, let's consider only three of Hill's Criteria: Diversity, Strength, and Plausibility. Expressing our research as a graphical outline could look like:


    Induction Practice Tree

    Having more supporting evidence in any one of Hill's criteria can clearly contribute to stronger inductive arguments. However, supporting the validity of a hypothesis using evidence from many of Hill's criteria contributes to even stronger arguments.  Strong evidence that supports many or all of Hill's criteria can result in strong support for a General Hypothesis or Scientific Model.
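    A minimal sketch of this kind of organization is shown below. The evidence entries are generic placeholders (not real citations), and the tally at the end is a heuristic for seeing coverage across criteria, not a statistical test.

    ```python
    # Illustrative sketch: organizing evidence for a General Hypothesis by a few of
    # Hill's criteria. Entries are placeholders standing in for literature-grid items.
    evidence = {
        "Diversity":    ["classroom study finding", "laboratory motor-skill finding"],
        "Strength":     ["large performance difference between practice schedules"],
        "Plausibility": ["proposed memory-retrieval mechanism consistent with known learning processes"],
    }

    hypothesis = "Non-repetitive practice results in more learning than blocked practice."

    # More criteria with supporting evidence -> a stronger inductive argument.
    supported = [criterion for criterion, items in evidence.items() if items]
    print(f"Hypothesis: {hypothesis}")
    print(f"Criteria with supporting evidence: {len(supported)} of {len(evidence)} -> {supported}")
    ```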

APPLICATION: Inductive arguments must faithfully represent available evidence both in support of and in opposition to a hypothesis. Frameworks such as Hill's Criteria can help organize information and structure inductive reasoning.

  • 11) LIMITATIONS TO HYPOTHESIS TESTING


    "Confirmation Bias" is a pervasive danger to hypothesis creation and testing.


    The information that we sense and perceive is filtered through our beliefs and experiences (Elstein, 1999). For example, how people perceive and act on information is often based on heuristics (simple rules) instead of strong reasoning (Tversky and Kahneman, 1974). Heuristics are one example of "cognitive biases," or tendencies for people to think and act in consistently distorted ways (Hicks and Kluemper, 2011). Therefore, cognitive biases can affect scientific, clinical, and political decision making.


    One cognitive bias that is particularly relevant to science is "confirmation bias" (Nickerson, 1998).


    DEFINITION: Confirmation bias is seeking or interpreting evidence in ways that confirm existing beliefs, expectations, or hypotheses.


    Confirmation bias reflects the tendency for people to resist changes to their preconceptions by selectively focusing on information consistent with previous beliefs and expectations while ignoring information that conflicts with their preconceptions (Stanovich et al., 2013). For example, the pre-conception that the world is flat can make it difficult for children to conceptualize a spherical world. Children try to interpret new information (e.g. the world is round) to support their preconception of a flat earth, and think of the world as a pancake shape instead of a sphere (Vosniadou and Brewer, 1989). Therefore, confirmation bias can prevent or hinder learning.


    Confirmation bias is sometimes confused with skepticism (being critical of evidence). Skepticism is an important part of scientific reasoning. For example, the author Carl Sagan famously wrote: "Extraordinary claims require extraordinary evidence." However, whereas skepticism is being critical of ALL evidence, confirmation bias is being selectively critical of evidence that doesn't match preconceptions or expectations (Taber and Lodge, 2006). Therefore, confirmation bias actually hinders skepticism by limiting critical thinking to pre-determined areas.


    Confirmation bias affects many aspects of science. For example, when making basic measurements, researchers may double-check measurements that conflict with the expectations of the researchers, but may not double-check measurements consistent with expectations. Therefore, errors that make measurements more consistent with expectations are less likely to be caught than errors that make measurements less consistent with expectations. Confirmation bias therefore affects premises used for both deductive and inductive reasoning.


Confirmation bias can also affect deductive and inductive arguments in other ways. Confirmation bias can influence the questions and hypotheses that individuals (or even entire communities) develop. Questions that challenge existing beliefs or expectations may simply not be asked in favor of questions structured to support existing ideas. For example, science reflects prevailing social and cultural assumptions. When racial prejudices were common and more widely accepted, many researchers sought scientific evidence to confirm prevailing biases (Gould, 1996). Objective data, internally-consistent biological models, and quantitative research have ultimately led to the rejection of most race-based hypotheses. However, some social "scientists" and others continue to use confirmation bias (among other fallacies) to promote prejudiced viewpoints (Herrnstein and Murray, 1994).


Confirmation bias is a particular concern for inductive reasoning. Inductive reasoning often draws from large bodies of information, presenting the possibility of "cherry picking" information to support pre-determined conclusions. For example, opponents of efforts to reduce climate change cherry-pick data to make misleading arguments. Representatives of the extractive industries have used a single year (1998) that was abnormally warm to argue that global temperatures are not rising, despite data from more than a century that clearly show increases in global temperature (Temple, 2013). Therefore, confirmation bias can lead to unreasonable judgments, particularly for individual inductive arguments. 


    Horizontal Divider


    Encouraging and increasing diversity in science can mitigate some of the problems associated with cognitive biases. 


    Science is an evolutionary process that requires extensive communication among individuals and communities of scientists. Scientific discoveries typically "emerge" from interactions among many scientists. Although scientists often pride themselves on individuality, scientific progress is more than the sum of contributions by individual scientists. Instead, scientific progress depends on the composition of the entire scientific community.


    A diverse scientific community helps to mitigate Confirmation Bias and facilitates scientific progress. A diverse scientific community is more likely to result in many viable alternative hypotheses for any particular problem. Scientists can be attached to particular hypotheses so long as the scientific community is diverse, and there are other groups of scientists in the scientific community attached to other hypotheses. Even if each group explicitly champions a particular hypothesis, over time the hypothesis most consistent with data will prevail. Therefore, all forms of diversity (of scientific perspective, gender, race/ethnicity, background, etc.) strengthen scientific inquiry.

     

    If the overall scientific community is diverse, then individual scientists may not need to be completely objective when they interpret data (objective data collection remains critical for science, however). Inductive reasoning can help scientists evaluate which hypotheses are most consistent with objectively-collected knowledge given a diversity of alternative hypotheses. 


APPLICATION: Cognitive biases like Confirmation Bias can affect scientific judgment. Individual scientists can reduce the impact of cognitive biases by understanding what cognitive biases are, and how biases can affect reasoning. Diversity within scientific communities can reduce the impacts of cognitive biases by increasing the number and range of alternative hypotheses.

  • 12) SCIENTIFIC PAPERS


    A Framework can help to structure scientific papers.


    There are many approaches to science. Both a diversity of scientists and a diversity of approaches strengthen the challenging undertaking of science. Therefore, the fact that scientific papers all differ in many respects is important.


However, for people beginning the process of scientific writing, the diversity of scientific styles can be overwhelming. Therefore, it can be useful to focus on a single framework for structuring scientific papers. The framework proposed here is not the only framework for presenting research, but simply a framework sufficient for many questions. Using a relatively specific framework can help to clearly identify some of the most common elements of scientific papers. More importantly, using a specific framework can maintain focus on the content, not the format, of scientific writing.


    Although there are exceptions, many scientific journals organize papers into 4 main sections: Introduction, Methods, Results and Discussion (the "IMRaD" format). 


    How can we apply the principles for reasoning and writing from the "Reasoned Writing" module, and the guidelines for hypothesis testing discussed thus far, to the IMRaD format for scientific publication?


    Central to applying principles of reasoning to writing different parts of scientific papers is to recognize that each section of a scientific paper has the same overall goal.


    GOAL: Every section of a scientific paper contributes to developing, defending and testing hypotheses


    Every section of a scientific paper makes a different contribution to the goal of defending and testing hypotheses:


    The Introduction section explains WHY an important GAP in current scientific understanding leads reasonably to the General and Measurable hypotheses.


    The Methods section explains WHY the chosen methods are necessary and appropriate to test the Measurable Hypotheses.


    The Results section explains WHY the data lead to the conclusion to reject or support each Measurable Hypothesis.


    The Discussion section explains WHY the results (i.e. the conclusions about the Measurable Hypotheses) either support existing General Hypotheses or lead to new General Hypotheses.


    The links below explain how reasoned arguments can help to structure each section to achieve its goal:



    INTRODUCTION

    METHODS

    RESULTS

    DISCUSSION



  • SUMMARY of A Framework For Scientific Papers

    FRAMEWORK FOR A SCIENTIFIC PAPER: SUMMARY

INTRODUCTION

Section 1 – Importance. Justify why the research topic is important (e.g. relevant to many people, answering a critical research question, etc.). Identify a GENERAL question. (1 paragraph)

Section 2 – The GAP in understanding. Explain why past discoveries lead to CONCLUSIONS about current understanding using REASONED ARGUMENTS (DEDUCTIVE and/or INDUCTIVE) and LOGICAL TRANSITIONS (THEREFORE, BUT, AND, OR) between ideas. Reference each statement of fact, definition, or example using peer-reviewed, quantitative studies (all references at the END of sentences). Laws of physics, mathematical derivations, or reasoned conclusions do not require references. Explain current understanding as a logical progression that uses the RESULTS of previous studies as PREMISES (i.e. statements of fact) of arguments. The CONCLUSIONS of the arguments are POSITIVE (supported by evidence), but separated by a clearly-stated and specific GAP in understanding. DEFINE all terms necessary to understand your arguments within the context of each argument. (3-5 paragraphs)

Section 3 – Hypotheses. Briefly state the OVERALL GOAL to fill the gap in understanding. Explain how testing the GENERAL hypotheses will achieve the goal. Explain how each GENERAL hypothesis directly leads to one or more MEASURABLE predictions (Hypotheses). Explain ALTERNATIVE hypotheses (e.g. hypotheses that would arise from different assumptions). (Optional) briefly preview the specific approach (e.g. experiment) used to test each Measurable Hypothesis. (1-3 paragraphs)

METHODS (sections can have more than one paragraph if necessary; use subheadings to identify sections)

Section 1 – Study participants. How many participants enrolled, and why the participant number was appropriate. Age, sex and other important characteristics of the participant population (e.g. mass, anthropometry, etc.), and reasons why the population was appropriate.

Section 2 – Procedures and Protocols. Overall design of the study (cross-sectional, cohort, etc.) and why. Procedures for group selection and why. Treatments used and the purpose of each treatment, explained in detail. Explain procedures used for controls and why necessary and appropriate. Explain all specific testing procedures and their purpose. Data collection: measurements employed and why chosen over other measurement methods, where appropriate. Specific equipment used and for what purpose. Calibrations employed and why necessary. Use a REASONED framework that explains how each procedure contributes to testing the Measurable Hypotheses (use a chronology only when time is critical).

Section 3 – Data Analysis. How and why collected data were conditioned (e.g. filtered) and reduced. Normalizations employed and why appropriate. Mathematical calculations employed and why (use an Appendix for long derivations). Statistical tests employed and why each test is most appropriate.

Final paragraph (Section 4) – Testing Criteria. The specific criteria (calculations, statistics, and judgments) that will be used to support or reject each Measurable Hypothesis.

RESULTS

Summary (optional). Brief summary of data and conclusions (i.e. hypothesis tests).

Sections (delimited by bold/italicized subheadings that directly relate to hypotheses) – Sub-conclusion Sections. Start each section with a bold/italicized subheading that concisely states the conclusion of the section using a complete sentence. The conclusion directly relates to a Measurable Hypothesis (e.g. explicitly states the reasons for rejecting or supporting the Measurable Hypothesis). The body of each section defends WHY the data lead to the conclusion using deductive and/or inductive reasoning (e.g. modus tollens). Link ideas with logical transitions. If comparisons among sampled data are statistically (significantly) different, (1) put differences into PERSPECTIVE by expressing them as percentages, and (2) report statistical tests (e.g. P-values, etc.). Comparisons that are not significantly different are NOT different (no "trends," "non-significant differences," etc.). Place references to figures, tables, and the results of statistical comparisons only at the END of sentences.

DISCUSSION (sections delimited by bold/italicized subheadings)

Section 1 – Concise summary of the Results. (1 paragraph)

Section 2 – Defending the Conclusions. For each limitation of the methodology or analysis, explain reasons why the limitations are unlikely to affect the conclusions of the study. (1 paragraph)

Section 3 – Supporting General Hypotheses and/or generating new General Hypotheses. Explain how the results support a change to our understanding. Explain how the results are CONSISTENT or CONFLICT with existing understanding (e.g. previous research findings that led to the General Hypotheses).

* If the results are consistent with past understanding and the General Hypotheses, explain why the assumptions are unlikely to affect the conclusions. Explain how placing the study results in the context of other research findings strengthens confidence in the General Hypotheses. Hill's criteria (Reliability, Diversity, Plausibility, Experimental Interventions, Temporality, Strength, Specificity, Biological Gradient) can be useful for organizing reasoned arguments that support the General Hypotheses.

* If the results conflict with past understanding and the General Hypotheses, explain how the assumptions of the study or other research findings are potential reasons for conflicts with the original General Hypothesis. Use the results of the current study and other research findings to construct reasoned arguments that support the plausibility of NEW General Hypotheses. (3-5 paragraphs)

Section 4 – Implications of the study. Why the findings are important. Potential contributions to future research or applications (e.g. to clinical practice, technology development, public policy, etc.). (1 paragraph)


    SUMMARY FLOWCHART for A Framework For Scientific Papers


    AFSP Summary Flowchart 

  • EXAMPLES






    Reasoned Writing and A Framework for Scientific Papers are intended to encourage active learning through scientific writing (and reading). The RW/AFSP modules can help students learn fundamentals of scientific reasoning and writing while completing assignments that focus on specific content areas. I hypothesize that students learn more course content when the students are actively engaged in using content knowledge to construct and test scientific models and predictions.


    For example, I base both the "Laboratory" and "Lecture" sections of my courses around specific projects that are assessed by written assignments. 


    I design Laboratory activities not only to demonstrate course concepts, but to place concepts in the context of the scientific process used for discovery. Students generate hypotheses, collect and analyze data, and use their data to defend reasonable conclusions.


    In "Lecture," I ask students to perform scientific "Case Studies" that focus on using peer-reviewed research studies to generate plausible and testable hypotheses.


    Therefore, whereas case studies involve practicing writing effective Introduction sections, laboratories focus on interpreting Results. Both case studies and laboratories consider experimental methodology (i.e. the Methods section). However, due to time constraints, I do not require students to write extensive Discussion sections to interpret their data in context.


    Below are links to pages with examples of laboratories and Case studies:


    LABORATORIES
    1) LABORATORY: Perceptual Adaptation

    CASE STUDIES
    1) CASE STUDY: Practice and Learning


  • EXAMPLE LAB: Perceptual Adaptation






    Laboratory courses often involve data collection, analysis, and interpretation. The Reasoned Writing / A Framework for Scientific Papers modules can help place laboratory activities in the context of the scientific method. 
    The pdf files at the bottom of the page represent an example of a laboratory that uses RW/AFSP to perform a simple experiment on perceptual adaptation. 


    In the experiment, students begin by reading, analyzing, and interpreting a paper that involves perceptual adaptation and practice (Bock et al., 2005). I selected the paper primarily for brevity, but also because it identifies some clear alternative hypotheses for perceptual adaptation. Based on their analysis of the paper, I ask the students to formulate General and Measurable Hypotheses. The students then design an experiment to test their hypotheses.


Reasonable experiments can be performed with a limited number of low-cost components, including:
1) Prism Goggles that alter vision (http://www.psychkits.com/)
2) Magnetic Dart boards (e.g. Doinkit Darts).

    Data analysis involves simple calculations and statistical tests (e.g. t-tests) that can be performed using free spreadsheet software such as Google Sheets.
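    For students or instructors who prefer scripting to spreadsheets, an equivalent analysis can be written in a few lines of Python. The sketch below assumes a within-subject design and uses invented dart-throw errors; the actual measures and comparisons would follow each group's experimental design.

    ```python
    # Sketch only: a paired t-test comparing each student's throwing error before
    # and after adapting to the prism goggles (values are invented placeholders).
    from scipy import stats

    error_before_cm = [12.0, 15.5, 9.8, 14.2, 11.1, 13.6]   # hypothetical mean radial error
    error_after_cm  = [6.5, 9.0, 7.2, 8.8, 6.9, 9.4]

    t_stat, p_value = stats.ttest_rel(error_before_cm, error_after_cm)
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
    ```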


    Although collecting data with the goggles and darts is fun and engaging, the experiment involves careful planning and reasoning. Importantly, students can go through much of the scientific process (from concept to conclusion) in a reasonable amount of time.


    Files: 2
  • EXAMPLE CASE STUDY: Practice and Learning






    The Reasoned Writing / A Framework for Scientific Papers modules have allowed me to structure even my "lecture" courses around specific projects or problems that I term "Case Studies." In my estimation, Case Studies can improve instruction relative to lectures alone for several reasons, including:


    1) Students learn some or much of the course content from primary sources. Derivative sources like textbooks support student understanding of scientific problems instead of being definitive, exclusive sources of information.
2) Case Studies encourage active learning. Lectures can be important for providing context, focus, and explanations. However, lectures often must compete with many other distractions, and do not always actively engage students in learning. Using Case Studies in addition to lectures can help students directly engage with course content and contribute to their learning.
    3) Written papers involve analysis, synthesis, and evaluation. The process of writing contributes to important critical reasoning skills.


    The pdf file below is an example of a Case Study that I have found useful for my "Motor Control and Learning" course. 


    File: 1
  • 13) THE INTRODUCTION


    The Introduction explains why an important GAP in scientific understanding can be filled by testing specific General and Measurable hypotheses.


    Why do papers have an Introduction section?


    Clearly, the Introduction section "introduces" a research project. However, "introduces" is a fairly vague term. What is the specific objective of an Introduction?


    A "funnel" framework can be a difficult approach to the Introduction.


Many resources suggest using an "Inverted pyramid" or "funnel" model for the Introduction, where the Introduction begins with general principles and moves to more specific topics (Bolt and Bruins, 2012; Plaxco, 2010). However, the "general" vs. "specific" distinction can be a false dichotomy. 

     

    The definition of the word "specific" includes accuracy: specific statements are clearly-defined and unambiguous. Contrasting "general" statements  with "specific" statements implies that general statements need NOT be accurate. Therefore, the "general" vs. "specific" dichotomy suggests that general statements can be vague: having unclear or indefinite meaning. 


    However, vague statements convey little or no information to audiences and are NOT useful elements of reasoned arguments (Layman, 2005). One goal of scientific communication is to ensure that ALL statements are NOT vague but specific (in the sense of being accurate), whether the statements apply widely or to particular situations or experiments.


    If vague statements are unacceptable, can "general" principles have any place in scientific communication? 


    Yes! Discovering general principles is one of the most important GOALS of science! However, instead of vague statements, general principles are specific statements that apply widely (e.g. to all people, all animals, etc.). Scientific generalities allow for specific predictions in MANY different situations. Therefore, "general" principles are extremely valuable, provided that the general principles are also specific: accurate and unambiguous.


    Would a "funnel" approach be an appropriate framework for an Introduction if the "funnel" involves progressing from accurate statements about general principles to specific topics?


    Yes! However, a "funnel" approach is still a challenging framework to implement. Although evidence-based generalities are the goal of science, general statements are often difficult to defend with quantitative evidence. Non-trivial generalities are very rare and can be very difficult to find.  Therefore, basing an Introduction on evidence-based generalities using a "funnel approach" is attractive in principle, but often a difficult approach to implement in practice.


    The Introduction need not address all potential introductory questions.


In part because of the difficulty of implementing a broadly-defined "funnel" framework, some resources suggest that the Introduction address several more specific questions. Examples of recommended questions include: "What exactly is the work?" "Why is the work important?" "What is needed to understand the work?" "How will the work be presented?" "What was the study's motivating research issue?" "What was novel and unique about the study?" "What hypotheses guided the study?" "What were the specific purposes of the study?" (Alley, 1996; Greene, 2010). 


    However, answering eight or more questions is complex and does not conform to the "Rule of Three."  Moreover, not all questions potentially addressed in an Introduction are equally important. Therefore, Introduction sections will be strongest if they focus on only the three (or fewer) most important questions. What are the most important questions for the Introduction?


    The Introduction can focus on defending one important question. 


    Introduction sections can be simplified by focusing on the specific objective of answering a single question:


    WHY does an important GAP in current scientific understanding lead reasonably to the General and Measurable hypotheses?


    To develop a specific Introduction, it is useful to analyze (break up) the central question into three separable sections:


    A) WHY is the area of research important? (Paragraph 1)
    B) WHAT is the GAP in scientific understanding? (3-5 paragraphs)
    C) HOW do the proposed General and Measurable hypotheses FILL the gap in understanding? (1-3 paragraphs)

    Introduction Flowchart


    Using the WHY, WHAT, HOW framework can help to organize and simplify the Introduction:


WHY is research IMPORTANT?

WHAT is the GAP in understanding?

HOW can HYPOTHESES fill the gap?
  • 14) WHY IS THE RESEARCH IMPORTANT?


    Establishing why the overall research question is important can help to establish common ground with readers.


    Audiences interpret new information by connecting new information to previous knowledge and assumptions (National Research Council, 2000). Therefore, two principles can help clarify the beginning of a written (or spoken) presentation:


    1) Clearly defining the target audience.
    2) Providing the audience with clear connections to their previous knowledge and assumptions. 

    1) Clearly defining the target audience. A reasonable target audience for a scientific paper is a scientist in a different research field. A scientist in a different field can be expected to understand fundamental principles of the scientific method, physical and biological sciences, and statistics. However, a scientist in a different field cannot be expected to understand specific technical terminology. More importantly, a scientist in a different field cannot be assumed to consider research outside their field important enough to justify the time and effort necessary to understand the research. Therefore, it is incumbent on authors to make a compelling argument that their research is important to readers.

2) Providing the audience with clear connections to their previous knowledge and assumptions. Estimating the previous knowledge and assumptions of an audience can be challenging. For example, audiences may have values that differ from those of an author in unknown ways. To connect new information in a paper to the previous knowledge of an audience, the author must establish reasonable common ground with the audience. Once again, strongly arguing for the conclusion that a research study is important to many or all members of an audience is one potential strategy for establishing common ground. 

Arguing for the importance of a research project can help audiences commit to understanding a research paper. Therefore, one strong beginning for a scientific paper is an argument that the research study is important. 


    Generalities, chronologies, and absence alone are NOT strong arguments for importance.

    Making general and potentially vague statements is often NOT a strong approach to beginning an Introduction. Using a "funnel approach" and beginning with general statements might seem like a reasonable strategy for establishing common ground with an audience. However, scientists are not convinced by unsupported generalities. Beginning a paper with general statements that are not supported by specific arguments may accomplish little more than eroding the reader's trust. General statements may be appropriate as premises for deductive arguments, but factual premises require references (and non-trivial, evidence-based generalities can be difficult to find). Moreover, although generality may be part of an argument for importance, generality is not a sufficient argument for importance. Strong arguments support specific conclusions. Therefore, we cannot expect readers to conclude that research is important from general premises alone. 

    Using a chronological framework may also seem like a reasonable approach for establishing a common ground with an audience. For example, beginning papers with time-based generalities such as "Recent research has demonstrated that..." is a tempting approach for an Introduction. However, a chronology is often the weakest framework for structuring presentations. Chronologies can be important when time is the most important variable. However, time cannot be assumed to be the most important variable in scientific research. Simply because research has been performed recently is not by itself a strong argument that the research is important.

Similar to chronologies, arguments about the absence of research studies are NOT strong arguments for importance. There are an infinite number of research questions that have not been studied. Simply finding that a question is little understood or studied is not sufficient justification for research on the question. The fact that little or nothing is known about a research topic is only relevant if the research topic is also important for other reasons. Moreover, arguments for the absence of research are difficult to make because there could simply be research that the authors are unaware of. Therefore, arguments solely from the absence of research are not sufficient justifications for research.

    Introduction strategies to avoid 1


    Strong introductory arguments are specific and reasoned arguments for importance.

    Arguing for specific reasons that answering a research question is important is a strong framework for the beginning of an Introduction.  Research questions are important if they have the potential to substantially impact science or society. Examples of areas that are widely understood to have a potentially large impact are:


    A) Contributing to the scientific understanding of a fundamental research question (e.g. how does the brain represent memories?). The argument can identify the specific contribution of the research to improving scientific models.

    B) Contributing to new technology (e.g. how to increase battery storage capacity?). The argument can identify the specific technological advances that could result from the research.

    C) Reducing medical costs (e.g. how can we prevent costly falls in the elderly?). The argument can identify the specific costs that the research could potentially reduce, and the specific contribution of the research to reducing costs.

    D) Improving quality of life, either generally or for specific populations. For example: how can we restore motor function after spinal cord injury? The argument can identify how research discoveries will substantially improve quality of life for people.

    E) Reducing risk, either generally or for specific populations. For example: how can we reduce the risk of opioid addiction among teens? The argument can identify how research will reduce risks of injury, illness, or other adverse events.

    F) Addressing fundamental issues of fairness, equity, equality, etc. For example, how can we identify and reduce systemic biases in the workplace? Research can be important to help individuals or populations have rights and opportunities.

    G) Helping to solve long-term problems or reduce long-term risks (e.g. how to prevent climate change from resulting in massive ecological and economic disruption?). Issues that may not seem important now could potentially be much more important in the future.
    H) Improving human performance (e.g. how to improve workplace communication and productivity). Research can contribute to improving performance for many people, or for particular populations.


    Of course, many other arguments for importance can also be compelling.


    To make a strong argument, the beginning of the Introduction can use deductive reasoning, inductive reasoning, or both. To effectively argue for importance, premises should be based on facts and NOT assumptions. Therefore, premises should either (1) end with a parenthetical reference to one or more peer-reviewed, quantitative studies; or (2) be the sound or strong conclusion of a specific deductive or inductive argument. 


    For most papers, one paragraph is sufficient to argue for the importance of the research. 


APPLICATION: A strong framework for the first paragraph of the Introduction is an argument for a specific contribution that the research will make to an important research topic.

  • 15) WHAT IS THE GAP IN UNDERSTANDING?


    Positive, interesting, and truthful statements can effectively support an argument that filling a GAP in understanding is necessary to advance scientific understanding.


To justify the time and effort necessary for research, scientists must propose to study a problem that many people would agree is important. Once scientists have identified an important research problem, the objective of scientific research is typically to make discoveries and generate new understanding (although replicating experiments is a valuable and under-represented effort; Nosek, 2015). Identifying "Gaps" in our understanding is important for making discoveries (Weissberg and Buker, 1990).


    DEFINITION: "Gaps" in understanding are aspects of the world (or universe) where science has limited understanding, but enough information and understanding surround the "Gap" to create reasonable hypotheses. 


    Identifying gaps in understanding involves surveying the published literature, and using published findings to construct deductive or inductive arguments to support the existence of a reasonable gap. Most often, scientists have informally identified a gap in understanding before writing the Introduction section of a paper. Therefore, making arguments for a gap in understanding can use the strategy of reverse engineering: using the proposed gap in understanding to determine what information readers need in order to understand the proposed gap. 


    For example, consider the gap in understanding: "We do not know if interleaving academic training with motor skills training results in more learning than blocked training for either academics or motor skills."


    The gap statement suggests that the Introduction would need to define and/or review:


    A) What "interleaving" practice is.
    B) What "learning" is, and how learning can be measured.
    C) Why specific types of academic study and motor skills are most relevant to investigate.
    D) Current knowledge of the effects of interleaving on academic study relative to blocked study.
    E) Current knowledge of the effects of interleaving on motor skills relative to blocked study.

    F) In addition, the Introduction may choose to review other relevant evidence, such as the plausibility that interleaving academic and motor training increases learning for both relative to blocked training (e.g. by investigating the neural mechanisms of learning). 


    However, the Introduction must do much more than simply define terms and describe current knowledge!


    An effective Introduction identifies concepts, reviews research, and defines terms in the context of an ARGUMENT that filling the gap in understanding is necessary to advance scientific understanding. 


Making an argument that an experiment is necessary to advance scientific understanding is not easy. Creating a reasoned outline can help us to construct our overall argument that filling the identified gap in understanding is necessary to advance scientific understanding. A reasoned outline can be built from the conclusions of reasoned sub-arguments. Once a strong overall outline has been created, specific references to previous research can be used as premises to support the sub-conclusions of the outline. Both the outline and the full text of the paper will benefit from using clear logical transitions. Moreover, the challenging process of creating strong arguments typically requires many revisions.


    Three principles are useful for constructing arguments to defend the necessity to fill an important gap in understanding:


    1) Write positively. Construct arguments from information that is known, where premises can be directly supported.
    2) Write in a compelling way. Use interesting logical transitions:  the conjunction "but" and disjunction "or."
    3) Write truthfully, using statements of appropriate scope.


    Horizontal Divider


    1) Write positively.


    DEFINITION: "Positive" indicates that the information is known (not unknown). Positive information comes from conclusive (sound or strong) research findings.


To identify and defend a "gap" in understanding, many writers are tempted to directly argue that there is an absence of scientific understanding in a particular area. For example, writers may make statements like "There are few studies that have investigated X..." Arguments about the absence of understanding may be strong when made after a comprehensive review or meta-analysis of the scientific literature by a highly-experienced expert in a field. However, most writers have not reviewed ALL of the scientific literature in an area, and do not typically have recognized expertise in a field. Therefore, arguments for the absence of understanding or information are NOT likely to be strong.


Gap In Understanding

Instead, students (and scientists who are not writing comprehensive reviews) must make arguments based on what is KNOWN, NOT what is unknown. 


    One strategy for identifying a gap in knowledge is to use a dichotomy:

    A) Identify two important areas of inquiry that are related, but different in defined aspects. 


    B) For each area, construct positive, reasoned arguments to support a conclusion that represents the current state of understanding of the area.


    C) Identify aspects where the two conclusions do not overlap, or potentially even conflict, as GAPS in understanding.


    A final argument that one gap in understanding is particularly important to fill constitutes a useful transition to the hypotheses of the study. 


    Therefore, even though gaps in understanding represent areas where more research is necessary, arguments to support gaps can (and should) be constructed from premises based on existing data. Positive arguments can provide strong evidence for gaps in understanding.


    When possible, use repetition and consistent frameworks to simplify the presentation of the Introduction. For example, arguments can be organized around a framework such as:


    * Introduce a concept and define necessary terminology.

* Present evidence supporting the concept in the form of an inductive or deductive argument.

    * Make a reasonable conclusion from the evidence.


    APPLICATION: Base arguments on information that you have, not on speculation about the absence of information.


    Horizontal Divider


    2) Write in a compelling way


    Identifying a gap in understanding involves contrast or disjunction. However, Introductions need not be limited to a single contrast or disjunction! Conflict and opposition are interesting for readers, and potentially encourage understanding and learning.


    Constructing positive arguments often involves reviewing research findings that are consistent with each other or with more general hypotheses. However, "laundry lists" of research findings connected with "and" logical transitions may not be the most engaging arguments for the Introduction (even if the arguments are logically sound or strong). Therefore, it is helpful to not only review evidence consistent with overall findings, but to also identify and discuss areas of conflict.


    Of course, the most important aspect of presenting research is that the research faithfully represents current understanding. Compelling and interesting writing are desirable, but are only appropriate when the arguments are valid representations of current understanding.


    APPLICATION: Use contrasts and disjunctions to faithfully represent current understanding in an engaging way.


    Horizontal Divider


    3) Write truthfully, using statements of appropriate scope.


    Arguments in the Introduction should faithfully represent the current state of understanding. Two important requirements for faithfully representing current understanding are:


    A) Selecting research that reasonably represents the current understanding of a research area. 

Often, more research has been conducted in an area than can be included in the Introduction of a paper. Therefore, selecting relevant research findings is necessary. Selected research findings should be an unbiased (or balanced) representation of current understanding.


    B) Using statements of appropriate scope.

    Authors must be careful to avoid inaccurate or overly-general statements about research findings. For example, generalities about "all" or "most" research findings are seldom justified. Authors should select appropriate modifiers and helping verbs to ensure that statements have the scope justified by the data.


    APPLICATION: Revise writing to make sure that evidence justifies the scope of premises and conclusions.


    Horizontal Divider


    Using positive arguments to strongly argue for a GAP in understanding leads naturally to the primary objective of scientific research: to develop and test hypotheses that can help fill the gap in understanding.


    The amount of information necessary to identify a gap in understanding differs for each study. However, typically 3-5 paragraphs are sufficient to construct positive arguments and identify reasonable gaps.

    Application Button

    The second section of the Introduction presents arguments that there is a GAP in understanding in an important topic. Positive arguments can use existing information to identify gaps in understanding.

  • 16) OBJECTIVES AND HYPOTHESES

    Pen
    Scientific Papers Button
     IMRaD Introduction Button  Introduction Hypotheses Button  Topic Outline Thin Button

    The objective of the Introduction is to provide evidence that justifies the hypotheses.


    Each section of the Introduction builds on previous understanding. The first section of the Introduction builds a common understanding with the readers: a reasonable argument that the research is important. The second section of the Introduction builds positive arguments on the foundation of the shared understanding of importance. By identifying an unknown aspect of the important topic, the second section identifies a "gap" in understanding. The third section of the Introduction uses the gap in understanding to make a specific proposal to make a clear discovery about the important topic.


    The third section of the Introduction involves three steps:


    1) Clearly stating the overall goal or objective of the study.
    2) Explaining how testing the General Hypotheses achieves the identified goal.
    3) Explaining how the General Hypotheses lead to specific predictions (Measurable Hypotheses).


    Horizontal Divider

    1) Clearly stating the overall goal or objective of the study.

    The third section of an Introduction typically begins by stating the overall goal or objective for the proposed research. The research objective is the "deliverable" of the research project: the specific, important finding that readers can expect to take from the paper.


    Research objectives can be written in different ways. One way to write a research objective is declaratively. For example, "Our objective is to determine the movement strategies used to maintain dynamic balance during walking in the elderly." The deliverable is a set of movement strategies used by the elderly. 


    In some contexts, research objectives can be posed as a question. For example, "Our research seeks to address the question: do elderly individuals use the same strategies to maintain dynamic balance as young individuals do?" The deliverable is the answer (yes or no) to the question of whether young and elderly use the same movement strategies.

    Statements of research goals/objectives are typically one sentence long. However posed, clear and specific research goals are important touchstones for readers to understand the motivation for the study.


    Horizontal Divider

    2) Explaining how testing the General Hypotheses achieves the identified goal.

Once a goal/objective of the study has been identified, a reasonable next step is to explain how testing the General Hypotheses achieves the goal.


    An explanation is typically more than simply stating the General Hypotheses. Instead, the General Hypotheses are the conclusions of reasoned arguments with the goal of the study as the first premise. For example:


    PREMISE 1: Our objective is to determine the movement strategies used to maintain dynamic balance during walking in the elderly.


    PREMISE 2: Movement strategies for dynamic balance affect range of motion, movement variability, and responses to perturbations (The second section of the Introduction, identifying the gap in understanding, defends Premise 2).


    CONCLUSION: Therefore, we hypothesize that elderly individuals will have significantly lower range of movement, higher movement variability, and less effective responses to perturbations than young individuals.

    The goal of the study (e.g. Premise 1), conclusions from previous research (e.g. Premise 2), and other constraints (e.g. existing General Hypotheses, relevant study populations, experimental capabilities and limitations, etc.) can be premises for an argument that testing the proposed General Hypotheses is the most reasonable approach to achieving the goal.


    Horizontal Divider

    3) Explaining how the General Hypotheses lead to testable predictions (Measurable Hypotheses).


    Once the Introduction has defended the General Hypotheses, the Introduction can conclude by explaining the specific Measurable Hypotheses that can be reasonably predicted from the General Hypothesis. Measurable Hypotheses do not typically require much explanation. Often, the Measurable Hypotheses for a study are the predictions from the General Hypothesis that can be tested using feasible and valid quantitative measurements. However, it remains important to explain HOW each Measurable Hypothesis represents a prediction of the General Hypothesis.  


    General to Measurable Hypotheses

    For example, if our General Hypothesis is "we hypothesize that elderly individuals will have significantly lower range of movement, higher movement variability, and less effective responses to perturbations than young individuals," then we could create three Measurable Hypotheses:


    MH1: Elderly individuals will have significantly lower range of hip, knee, and ankle flexion-extension movement during moderate speed walking than young individuals.


    MH2: Elderly individuals will have significantly higher movement variability of hip, knee and ankle angles during moderate speed walking than young individuals.


    MH3: Elderly individuals will take significantly more time to recover from lateral waist-pulls during moderate speed walking than young individuals.


    Horizontal Divider

    Some authors include a brief paragraph summarizing the Methods after the hypotheses (at the end of the Introduction). However, if Measurable Hypotheses are sufficiently specific, then a summary of methodology should not be necessary.

    Application Button

    Relatively short reasoned arguments should be sufficient to explain and justify how the General Hypothesis naturally follows from the overall objective of the study. Measurable Hypotheses should be clearly identifiable as specific predictions of the General Hypothesis.

  • 17) THE METHODS

    Pen
     A Framework for Science Logo  Scientific Papers Button  IMRaD Methods Button  Topic Outline Thin Button

    The Methods explains why measurements and data analysis contribute to testing Measurable Hypotheses.

     

    Why do papers have a Methods section?


The Methods section is commonly thought of as answering the question: "How was the problem studied?" (Bolt and Bruins, 2012). The purpose of the Methods section is often considered to be describing the procedures and materials used to perform experiments, perhaps using a chronological framework (Greene, 2013). 


However, a purely descriptive Methods section omits an important aspect of the Methods: the reasons (or "rationale") for the selected procedures and materials (Greene, 2010). Choosing appropriate procedures and materials can represent a substantial investment of time and effort. In established fields, there may be many viable options for methods (e.g. different commercial suppliers, different techniques, etc.). In less-established fields, experiments may require custom-developed procedures and techniques that require design, development, and refinement. In either case, the reasons for choosing particular methods over alternatives are an important component of scientific methods. Therefore, the Methods must explain, not simply describe, the methods of a study. 


    A more appropriate overall question for the Methods section of a scientific paper is:


    WHY are the chosen methods necessary and appropriate to test the Measurable Hypotheses?


    Using clear frameworks can help simplify a Methods section.


    At the broadest level, Methods sections are commonly structured using a list framework. Methods sections often have at least three subheadings that identify the major components of experimental research (the list of elements within each section below is not exhaustive). For studies involving human participants, the Methods may include the sections:


    1) Study Participants.


    How many participants enrolled, and why the number of participants was appropriate. 

    Specific recruitment methods if relevant.

    Age (mean +/- standard deviation), sex distribution and other important characteristics of participant population (e.g. mass, anthropometry, etc.), and reasons why the population was appropriate.

    Strategies for ensuring that groups are comparable to each other and/or representative of a larger population (balancing, randomization, etc.).

    Evidence that ethical and appropriate procedures were used. Assurance that all procedures were approved by relevant and required governing bodies (e.g. IRB, IACUC) in accordance with all relevant laws and regulations.


    2) Procedures and Protocols.

    Overall design of study (cross-sectional, cohort, etc.). 

    Treatments used and the purpose of each treatment, explained in detail. 

    Procedures used for controls and why necessary and appropriate. 

    All specific testing procedures and their purpose. 

    Data collection: measurements employed and why chosen over other measurement methods, where appropriate. 

    Specific equipment used and for what purpose.

    Calibrations employed and why necessary.


    3) Data Analysis.


    How and why collected data were conditioned (e.g. filtering) and reduced (e.g. calculating means, etc.). 

    Normalizations employed and why appropriate. 

    Mathematical calculations employed and why (detailed mathematical derivations can be placed in an Appendix). 

Statistical tests employed and why they were the most appropriate tests.


    Particularly for papers by students, a fourth section can help organize and clarify thinking and presentation:


4) Hypothesis Tests.

The specific criteria (calculations, statistics, and judgments) that will be used to support or reject each Measurable Hypothesis. Hypothesis tests can be simple, declarative statements. For example: "If the ankle, knee, and hip range of motion for elderly participants are all significantly less than ankle, knee, and hip motion for young participants, it will support the measurable hypothesis that elderly individuals will have significantly lower range of hip, knee, and ankle flexion-extension movement during moderate speed walking than young individuals."
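For students who analyze their data programmatically, a hypothesis-test criterion like the example above can also be written as an explicit check in analysis code. The following Python sketch is only an illustration under assumed inputs (dictionaries of range-of-motion samples keyed by joint name) and an assumed choice of one-sided t-tests; it is not part of the AFSP framework itself.

```python
# Minimal sketch of a hypothesis-test criterion, assuming SciPy >= 1.6.
# The input format (dicts of range-of-motion samples keyed by joint name)
# and the one-sided t-test are illustrative assumptions.
from scipy import stats

ALPHA = 0.05  # pre-determined significance threshold

def rom_lower_in_elderly(rom_elderly, rom_young):
    """Return True if elderly range of motion is significantly lower than
    young range of motion at the hip, knee, AND ankle."""
    for joint in ("hip", "knee", "ankle"):
        result = stats.ttest_ind(rom_elderly[joint], rom_young[joint],
                                 alternative="less")  # test: elderly < young
        if result.pvalue >= ALPHA:
            return False  # criterion fails for this joint
    return True  # all three joints significantly lower: hypothesis supported
```

Writing the criterion down this explicitly is one way to make sure the Hypothesis Tests section of the Methods matches what the analysis actually does.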

    AFSP Methods Flowchart


    Using repeated frameworks can simplify the Methods.


    Repetition can be a powerful tool to simplify presentation.  A simple three-part, hierarchical framework that can be helpful for structuring subsections of the Methods is:


    Goal - Procedure - Rationale


    The "goal" is typically a measurement that is necessary to test one or more Measurable Hypotheses. For clarity, major goals that require several procedures and/or materials can be identified with a subheading delimiting a group of related procedures. For example:


    "Metabolic Measurements

    We measured oxygen consumption with indirect calorimetry during rest and during moderate-speed walking. Indirect calorimetry allows for measurements of metabolic energy expenditure and also the relative utilization of fat and carbohydrate (Ferrannini, 1988). We used a ...


    The goal is to measure oxygen consumption. The procedure is indirect calorimetry (during rest and walking). The rationale for using indirect calorimetry is that the technique measures energy expenditure and the fuel used to power metabolism. The rationale is supported with a reference to a more detailed explanation of indirect calorimetry (Ferrannini, 1988). 

    The Procedure-Rationale part of the framework can be repeated for each method used to achieve a goal:

    Methods Framework


    References to past research can provide justification for methods.


    Some techniques (such as indirect calorimetry) become so widely-used that they do not require an extensive rationale. Common techniques can be justified simply by using a reference to a past study that explains the technique in more detail (e.g. Ferrannini, 1988).

    References can help justify most or all choices explained in the Methods. For example, selecting procedures that have been demonstrated to be effective by past studies can help to make audiences confident that the procedures are reliable and valid. Alternatively, study populations or procedures may be selected to differ from past research (requiring references to previous studies). Therefore, references to past studies strengthen the argument of the Methods section, by helping to explain why each method is appropriate for testing the Measurable Hypotheses.


All procedures in the Methods section contribute to testing one or more Measurable Hypotheses.


The purpose of the procedures and materials in the Methods section is to make measurements sufficient to test the Measurable Hypotheses of the study. Therefore, all explanations in the Methods section must be clearly necessary to test one or more Measurable Hypotheses.


    Because Methods are conventionally structured around a list of standard sections, the connections between methods and hypotheses are often indirect and contextual. For example, the Methods may explain a procedure that results in a measurement used to test a hypothesis.


The fourth section of the Methods (Hypothesis Tests) can help readers understand more directly how each method contributes to testing a Measurable Hypothesis. By clearly stating the Measurable Hypotheses and explaining the specific criteria for supporting or rejecting the hypotheses, the Hypothesis Tests section can provide a valuable summary of the Methods section. The Hypothesis Tests section also provides a clear framework for the Results section.


    Methods or measurements that do not contribute to testing hypotheses result in unnecessary text and can be removed.


    Methods must be explained in sufficient detail for competent scientists to repeat the experiment.


    A reasonable target audience for a scientific paper is a scientist in a different field. Therefore, methods must be explained with enough detail that competent scientists can replicate the procedures. Competent scientists can reasonably be expected to know how to perform basic mathematical and statistical calculations. Competent scientists can reasonably be expected to be able to look up references and use procedures explained in other studies. However, any terminology, procedures, or materials that are not common knowledge, or not justified with a reference, must be explained in the Methods section.


    Application Button

    Strong Methods sections are NOT simply descriptive, but explanatory. Using strong, repeated frameworks can help explain (1) WHY each procedure or material is justified based on reason and/or previous successful studies; and (2) HOW each procedure contributes to testing the Measurable Hypotheses.

  • 18) THE RESULTS

    Pen
     A Framework for Science Logo  Scientific Papers Button  IMRaD Results Button  Topic Outline Thin Button

    Results are applications of data to Measurable Hypotheses that result in conclusions.


    Why do scientific papers have a Results section?


    The purpose of the Results section is commonly explained as describing data without interpretation. The Results section is thought to answer the question: "What were the findings?" (Bolt and Bruins, 2012).


    If the purpose of the Results section is simply to describe data, then why is the section not simply called the "Data" section?


The reason scientific papers have a "Results" section is that data alone are NOT results.


    Whereas "data" refers to objective, quantitative measurements (or "facts"), "results" are outcomes. But outcomes of what?


    Results are the outcomes of arguments that use data to test Measurable Hypotheses.

    Therefore, a clearer question that the Results section answers is:


    WHY do the data lead to the conclusion to reject or support each Measurable Hypothesis?

    A single "result" can be considered the outcome of a comparison: between the specific prediction of a Measurable Hypothesis and the measured data.



    AFSP Results Flowchart

    Being "measurable" means that Measurable Hypotheses can be directly tested by experimental data. Experiments typically use statistical tests or other objective comparisons.


Supporting or testing Measurable Hypotheses does NOT require interpretation or judgment IF the Measurable Hypotheses are specific and the Results section employs strong reasoning (e.g. rejecting hypotheses using modus tollens). Therefore, the Results can be explanatory without involving interpretation.


    The central question of the Results section includes three components:


    1) WHAT the data are.
    2) HOW the data compare with the Measurable Hypotheses.
3) WHY the data either support or reject each Measurable Hypothesis.


WHAT the data are
HOW to compare data with Measurable Hypotheses
WHY data support or reject Measurable Hypotheses


    Results What Button Results How Button Results Why Button

  • 19) WHAT are the DATA?

    Pen
     Scientific Papers Button  IMRaD Results Button  Results What Button  Topic Outline Thin Button

    Data support factual premises.


In the Results section, data support the factual premises necessary to test Measurable Hypotheses. Therefore, data do not need to be isolated and presented without context. Data can be objectively collected and presented, but also placed in context by contributing to strong reasoned arguments that help readers understand conclusions.


    Clearly presenting data involves presenting data as simply as possible. Three principles can help present data effectively and simply:


    1) Present only data necessary to test Measurable Hypotheses.
    2) Present data as completely and objectively as possible.
    3) Help readers put data into perspective.


    Horizontal Divider

    1) Present only data necessary to test Measurable Hypotheses.


    Collecting and analyzing data can be grueling, and involve years or decades of work. Consequently, when preparing a paper to disseminate their findings, researchers often (understandably) have a desire to present all of the data they labored so hard to collect. However, the purpose of scientific papers is to advance understanding, not simply to archive data (online databases like Genbank and supplemental data available with many journals provide opportunities to archive raw data). Therefore, the Results section needs to present ONLY data that are used as premises for arguments to test Measurable Hypotheses.


    Even with modestly complex datasets, there are many ways to present data, and many comparisons that could potentially be made. Presenting only data necessary to test Measurable Hypotheses is a reasonable standard for deciding which data to include in the Results. Therefore, data that do not support premises for reasoned arguments need not be included in a paper (or can be removed if included). 


    One way to determine if data support premises is to ensure that all data are referred to by premises in the Results. Figures and tables should contain only the data necessary to support the premises that reference the figure or table. 


    Horizontal Divider

    2) Present data as completely and objectively as possible.


    A scientific paper must present ALL data relevant to testing measurable hypotheses. Datasets sometimes contain data from individual participants, or individual experimental trials, that are outliers that do not seem representative of the remainder of the data. If data are outliers for clear reasons that affect the validity of experimental measurements (e.g. malfunctioning equipment), then there is cause to exclude outliers from a dataset. However, when excluding outliers, it is important to verify the validity of all other measurements at the same time to avoid confirmation bias. Without definitive and justified cause, data cannot be removed from quantitative datasets.


Data are typically presented in one of three ways: in the body text of the paper, in tables, and in figures.


    The primary purpose of the body text is to put data into the context of arguments to test the Measurable Hypotheses. The amount of data that can be included in body text is limited. Moreover, extensive data can be difficult to read and thus detract from the reasoning of the Results. Therefore, scientific papers typically use tables and figures to report the bulk of the data.


    Tables are typically used for data where reporting precise values is important. Data in tables must be clearly-presented and labeled. All labels must include the units used to express numbers. Numbers in tables should be presented to an appropriate number of significant figures based on the precision of underlying measurements. Only data used to support the premises of reasoned arguments should be included in a table. For example, the following table reports numeric data from several different variables, along with the results of statistical tests:


    Sample Table


    Figures can help readers understand the data. Unlike tables, the primary purpose of figures is often not to report data (although figures can be economical ways of reporting large data sets such as time series or continuous relationships between variables), but to provide a convincing visual representation of data to help readers understand an argument. Therefore, figures are strongest when they clearly convey at least one "main message" that is self-evident in the figure.


    For example, even without providing context, some things are evident from the figure below.


    Figure 1. Mean braking force for different mass (M) and rotational inertia (I) conditions. Values represent averages across individuals for each condition. Error bars represent one standard deviation.
    Vertical Forces Figure
    Clearly, the two dashed lines (red squares and blue triangles) are more similar to each other than they are to the solid black line. Moreover, the dashed lines remain relatively constant over the five conditions on the abscissa (x-axis), whereas the black line clearly decreases. Of course, comparing the three variables with each other or among conditions requires statistics. However, the figure suggests that there are unlikely to be significant or substantial differences between the three variables in the M 0%, I 1 condition to the left, but the potential for substantial differences among variables for the M 17%, I 4 condition at the right. Therefore, there is at least one clear "main message" conveyed by the figure.


    Just as with tables, axes and other elements must be clearly labeled and include units. Data that are presented without labels and corresponding units are meaningless and accomplish nothing more than to confuse readers. Therefore, it is essential to ensure that all figures are properly labeled with appropriate units.


    Figure titles can be descriptive.


    Tables and figures include titles that describe the table or figures. Titles can be either above or below the table or figure. Figure titles are the ONE place in a scientific paper where text can be purely descriptive, where text is not part of an argument or other framework. Table and figure titles typically start with a one-sentence summary description of the figure, then provide concise descriptions of the elements of the table or figure.


    Use repetition to help readers understand figures.


Use the principle of repetition in figures. Within a figure, when possible use the SAME scale for each axis when plotting data of the same units against one another. Use a systematic labeling convention that is consistent with the text of the paper. When creating multiple figures, make all figures as consistent with each other as possible. Variables, symbols, colors, and order of presentation should be as consistent as possible across figures.


Finally, color information can be unreliable. For example, printers may not print color, or computer screens may not render colors accurately. More importantly, not all people are capable of distinguishing among all colors. Therefore, although using color figures is acceptable, color should not be the ONLY way to differentiate variables.
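As an illustration of these figure principles (labeled axes with units, error bars showing one standard deviation, and series distinguished by marker and line style rather than color alone), a minimal Python sketch using matplotlib might look like the following. The numeric values and the "body weights" unit are invented for the example; the condition labels only loosely echo the sample figure above.

```python
# Minimal plotting sketch, assuming numpy and matplotlib are available.
# All numeric values below are placeholders invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

conditions = ["M 0%, I 1", "M 4%, I 2", "M 9%, I 3", "M 13%, I 3.5", "M 17%, I 4"]
x = np.arange(len(conditions))
mean_a, sd_a = np.array([1.2, 1.1, 1.0, 0.8, 0.6]), np.full(5, 0.1)
mean_b, sd_b = np.array([1.1, 1.1, 1.1, 1.0, 1.1]), np.full(5, 0.1)

fig, ax = plt.subplots()
# Distinguish series by marker AND line style, not only by color.
ax.errorbar(x, mean_a, yerr=sd_a, color="black", linestyle="-",
            marker="o", label="Condition A")
ax.errorbar(x, mean_b, yerr=sd_b, color="tab:red", linestyle="--",
            marker="s", label="Condition B")
ax.set_xticks(x)
ax.set_xticklabels(conditions, rotation=30)
ax.set_xlabel("Mass (M) and rotational inertia (I) condition")
ax.set_ylabel("Braking force (body weights)")  # axis label includes units
ax.legend()
fig.tight_layout()
fig.savefig("figure1.png", dpi=300)
```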


    Horizontal Divider

    3) Help readers put data into perspective.

    If tables and figures present the majority of data in a paper, is there any reason to include data in the text?


Yes. Including data in the text can be helpful for several reasons. Individual measurements that are not repeated or numerous enough to warrant a table, for example, are appropriate for the text. In the "Study participants" section of the Methods, participant characteristics like age, body mass, and gender are commonly reported in the text. 

More importantly, presenting data in the body text of a paper can help readers put the data into context, or perspective. Putting data into perspective involves helping readers to understand the importance of individual elements of the data. Examples of perspective include expressing the magnitude of data relative to some baseline, or comparing different measurements to each other. Therefore, the purpose of data presented in body text is not simply to report numbers, but to help readers gain a conceptual understanding of the contribution of the values to the conclusions of the Results.


    One effective way to put data into perspective is to express data as percentages, or relative changes. For example, writing "leg ground-reaction force during acceleration was 50 N more than during constant-speed running" is a reasonably specific statement (N refers to "Newtons", or units of force). However, how much is 50 N? A little? A lot? Enough to potentially cause injury? Even experienced scientists may have difficulty understanding how important a change of 50 N represents. 


    However, expressing the statement as "leg ground-reaction force was 25% higher during acceleration than during constant-speed running" expresses the change in force as a percentage that readers can clearly understand (25% is a fairly substantial change). Therefore, percentages are an effective way to report data that help readers conceptually understand the Results.
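As a trivial worked example of the conversion: the two statements above (50 N more, 25% higher) together imply a constant-speed baseline of about 200 N. A sketch of the arithmetic in Python, with that inferred baseline as an assumption:

```python
# Expressing a difference as a percentage of a baseline.
# The 200 N baseline is inferred from the example (50 N more = 25% higher).
baseline_force = 200.0                      # N, constant-speed running
acceleration_force = baseline_force + 50.0  # N, during acceleration

percent_change = (acceleration_force - baseline_force) / baseline_force * 100
print(f"Ground-reaction force was {percent_change:.0f}% higher during acceleration")
```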


    Although providing perspective can help readers understand data, the scope of the Results section is limited to the data collected in the study and the Measurable Hypotheses. Putting data into perspective does not include comparisons to other research or any information outside the data collected for the study (providing broader context is an important objective of the Discussion). Therefore, the Results section typically does not contain references to other studies.


    Horizontal Divider

The purpose of text in the Results section is not solely to report data. Most sentences in the Results section of a scientific paper are premises that support conclusions about Measurable Hypotheses. The purpose of data is to support premises in the text. Therefore, representations of data that help place the data into perspective (e.g. percentages) can be included in the text of the premises. However, place references to tables, figures, and statistical tests parenthetically at the END of sentences.


    For example, a paragraph from the Results section could read:


    "Sinusoids were sufficient to reconstruct forces in the initial movement direction (imd) over stance. Sinusoidal reconstructions of ground-reaction forces (GRF) accounted for 64 ± 11 % of the variance in GRF over the entire step (Table 1). However, sinusoidal reconstructions did not capture transients associated with leg impact in the first 50 ms of stance (Fig. 2 B). After the first 50 ms of stance, reconstructions accounted for 78 ± 6 % of GRF (Table 1). The variance accounted for (VAF) of sinusoidal reconstructions after the first 50 ms was significantly greater than for the entire stance period (P<0.001). Therefore, although sinusoids did not model transient forces at leg impact, sinusoids could accurately reconstruct leg forces during 85% of stance."


    Application Button

    The purpose of text in the Results section is to explain how measured data support or reject Measurable Hypotheses. Therefore, the majority of data can be presented descriptively in tables and figures. Text in the Results can include data that help readers put findings into perspective. Otherwise, references to data or calculations should be placed parenthetically at the END of sentences.

  • 20) HOW data compare with hypotheses

    Pen
     Scientific Papers Button  IMRaD Results Button  Results How Button  Topic Outline Thin Button

    Comparisons require statistics.


Comparison is central to hypothesis testing. Specifically, the predictions made by Measurable Hypotheses must be compared to experimentally-collected data. 


    Typically, comparisons involve reasoning like:


    "Our General Hypothesis is: the average value for [some measurement] will be higher for individuals who experience Condition X than the average value for [some measurement] in individuals who experience Condition Y.

    Question: Is the average value of [some measurement] actually higher for experimental Group X (who experienced Condition X) than for Group Y (who experienced Condition Y)?" 


    Answering the question posed by the General Hypothesis could potentially involve several additional questions, including:


    1) How much higher must an average value of [some measurement] BE for Group X relative to Group Y in order to have confidence that Condition X actually leads to higher average values than Condition Y?


    2) Even if we can be confident that Condition X leads to higher average values than Condition Y, do the higher values matter in some practical sense?


    BOTH Question (1) and Question (2) should be addressed in a scientific paper! Testing hypotheses (Question 1) is necessary, but arguing that research findings are important (Question 2) is also necessary.


Statistics allows Question (1) to be answered in a (relatively) objective way that does not require subjective judgment. Therefore, statistics makes Question (1) appropriate for the Results section of a scientific paper.


    However, statistics cannot help to answer Question (2). Addressing Question (2) most often requires interpretation of data with reference to other studies or points of reference. Therefore, Question (2) is completely separate from Question (1). Putting data into perspective in the Results can provide information useful to address the importance of any differences (i.e.  Question 2) in the Discussion.


Finding statistical differences between or among groups does not in and of itself indicate whether the differences matter.

    Significant differences do not necessitate substantial differences. For example, some studies have suggested that eating meat significantly decreases life expectancy relative to diets low in meat (Singh et al., 2003). Should people stop eating meat to increase their life expectancy?

If decreases in life expectancy average approximately 3 years, different people might make very different decisions about how much 3 years of life matters relative to the quality of life gained from eating meat. The statistical question of whether eating meat significantly lowers life expectancy is entirely separate from the question of the value of eating meat relative to lifespan.


    There is no such thing as a "non-significant difference" or "trend" in the Results section.


    There are many statistical frameworks that involve different approaches to analyzing and interpreting data (Goodman, 2016). Different statistical frameworks can result in fundamentally different ways of thinking about and conducting science. Statistics could potentially help to formalize the process of making decisions based on probabilities that account for sources and consequences of uncertainty (Goodman, 2016).


    However, experimental research most often uses relatively simple, criterion-based statistical tests to evaluate hypotheses. Therefore, our discussion will be limited to the most common statistical procedures: parametric tests and evaluations of P values (e.g. t-tests, ANOVA, etc.).


    Correctly performing even standard, parametric statistical tests is notoriously tricky. There are many variables to consider: properties of the data, a variety of possible statistical tests (all with different assumptions), different ways to normalize and transform data, etc. Moreover, there are many opportunities for confirmation bias to affect statistical tests and interpretation (sometimes called "P hacking"). For example, researchers may collect data until comparisons achieve statistical significance and then stop, or perform many statistical or experimental tests and only report significant outcomes. "P hacking" can contribute to the larger problem of "publication bias," where only significant results are successfully published. Clearly, statistical tests are not the simple, unambiguous criteria for making comparisons that we would like them to be.


    Ethical scientists try to perform the most appropriate statistical tests possible (often working with statisticians to do so). The study design and the data determine the appropriate type of statistical tests. Moreover, there is a common convention that the probability of Type I error (finding a significant difference when none actually exists) should be at most 5% (P < 0.05). In some situations, 5% is too high of a potential for error, and lower thresholds (e.g. 1% or 0.1%) are more appropriate. 
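For readers who want to see what a criterion-based test looks like in practice, below is a minimal Python sketch (assuming SciPy) of an independent-samples t-test evaluated against a pre-determined threshold. The group values are placeholders, not data from any study.

```python
# Minimal criterion-based comparison of two groups, assuming SciPy.
# The measurements below are placeholders for illustration only.
from scipy import stats

ALPHA = 0.05  # pre-determined Type I error threshold

group_x = [12.1, 13.4, 11.8, 14.0, 12.7]
group_y = [10.2, 11.0, 10.8, 11.5, 10.1]

t_stat, p_value = stats.ttest_ind(group_x, group_y)
if p_value < ALPHA:
    print(f"Significant difference between groups (t = {t_stat:.2f}, P = {p_value:.3f})")
else:
    print(f"No significant difference detected (P = {p_value:.3f})")
```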


    Performing statistical tests that result in P values slightly above the conventional threshold of 0.05 (for example, 0.06) is frustrating. P values close to 0.05 are so frustrating that some scientists try to consider them as statistically significant based on their proximity to 0.05, referring to "non-significant differences" or "trends" in the data.


    A "non-significant difference" is simply an oxymoron. Determining what constitutes a "trend" involves interpreting the results of the statistical tests. Therefore discussing "non-significant differences" or "trends" is NOT appropriate for the Results section, and "trends" can NOT be used to test Measurable Hypotheses in the Results. The Discussion section provides opportunities for more complex arguments that can weigh the probability that the statistical tests resulted in Type II errors based on assumptions or limitations of the study. The Discussion also provides opportunities for interpreting statistical probabilities in a more nuanced way than simply criterion-based rejection (Goodman, 2016).


    When using criterion-based statistical tests, differences among groups must be supported by statistical tests that demonstrate significant differences (to a confidence level of at least 0.05). The terms "different," "higher," "lower," "increased," "decreased," "greater than," "less than," and any other comparisons are only acceptable when referring to statistically significant differences. Comparisons without statistical support cannot be objectively described as "different." Comparisons that do not reach the (pre-determined) confidence level also cannot be objectively described as "different."
     
    Scientists often use statistics to support Measurable Hypotheses.

    An extensive discussion of statistics is clearly outside the scope of our discussion. Appropriate study design and selecting appropriate statistical tests are important issues, and require considerable training and thought (and often consultation with a statistician). However, one limitation to statistics that is relevant to our discussion is:

    Statistical tests alone can be evidence for differences between or among groups to a particular level of confidence (often indicated by the P value). However, the failure of criterion-based statistical tests (alone) is NOT strong evidence for the absence of differences between or among groups (without additional analyses such as interval or power analysis; Amrhein et al., 2019). The asymmetry of statistical tests is analogous to the aphorism, "the absence of evidence is not the evidence of absence." Therefore, statistics are formally used to test null hypotheses: the hypothesis of NO difference between or among groups. 
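As one illustration of the "additional analyses" mentioned above, a power analysis estimates how likely a study was to detect a difference of a given size. The Python sketch below (assuming the statsmodels package, with a placeholder effect size and sample size) shows the idea; it is not a prescription for any particular study.

```python
# Minimal power-analysis sketch, assuming statsmodels is installed.
# Effect size (Cohen's d) and group size are placeholders for illustration.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(effect_size=0.5,  # assumed Cohen's d
                                    nobs1=20,         # participants per group
                                    alpha=0.05)       # significance threshold
print(f"Power to detect d = 0.5 with 20 participants per group: {power:.2f}")
```

A power well below 1 means that failing to find a significant difference says little about whether a difference actually exists.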

    However, null hypotheses are cumbersome. For example, consider our General Hypothesis "the average value for [some measurement] will be higher for individuals who experience Condition X than the average value for [some measurement] in individuals who experience Condition Y." To pose our General Hypothesis as a Null hypothesis, we could negate the hypothesis: "the average value for [some measurement] will NOT be higher for individuals who experience Condition X than the average value for [some measurement] in individuals who experience Condition Y." We could then construct a deductive argument using modus tollens:


    PREMISE: If our General NULL Hypothesis is true then we would NOT expect a significant difference in [some measurement] between experimental groups X and Y.

    PREMISE: We DO find a significant difference in [some measurement] between Group X and Group Y.
    CONCLUSION: Therefore, our experiment rejects our Measurable (and General) Null Hypothesis.


    The syllogism is reasonable, but involves a lot of negatives (rejecting a null hypothesis). Instead of sticking with the formality of Null hypotheses, scientists often take some shortcuts. Scientists may take an "inverse" (of sorts) of the above syllogism, to create the more positive deductive argument: 


    PREMISE: If our General Hypothesis is true then we would expect to observe a significant difference in [some measurement] between experimental groups X and Y.
PREMISE: We DO find a significant difference in [some measurement] between Group X and Group Y.
    CONCLUSION: Therefore, our experiment supports our Measurable (and General) Hypothesis.


    Does the second syllogism seem like a valid argument?


If you object that the second syllogism seems an awful lot like affirming the consequent... your concerns are warranted! The argument IS structured similarly to affirming the consequent, and is therefore in danger of being a logical fallacy.

    Why would scientists routinely use arguments that could be fallacies? 

    The shortcut that scientists are actually using is combining modus tollens and Strong Inference. Scientists are considering the Null hypotheses to be alternatives to the General and Measurable Hypotheses. Rejecting the Null Hypotheses (i.e. in the first syllogism) DOES reject an alternative hypothesis, and can therefore be considered to "support" the Measurable and General Hypotheses through Strong Inference (second syllogism).

    Hypotheses cannot be experimentally "accepted" or "proven."


Terminology becomes (unfortunately) important when discussing hypotheses. The conclusion to "support" a hypothesis is acceptable if we consider the word "support" to mean rejecting at least one alternative (like the Null Hypothesis). However, "supporting" a hypothesis does NOT imply a claim that a hypothesis is true -- simply that the hypothesis has not been rejected YET.

Stronger terminology like "accepting" or "proving" hypotheses is NOT appropriate, because "accepting" or "proving" implies that the hypothesis has been found to be true. Hypotheses cannot be declared unquestionably true using either deductive or inductive reasoning. Strong Inference cannot reject all possible alternatives, and inductive reasoning cannot lead to proof or truth. Therefore, although "proof" is available in closed systems like mathematics, and "accepting" hypotheses may be terminology used in statistics, "proving" or "accepting" hypotheses is not possible in the messy world of experimental research.


    The Results can include comparisons to test Measurable Hypotheses, but not General Hypotheses.


    For specificity, both General and Measurable hypotheses have been part of our present discussion of hypothesis testing. However, the Results section of a scientific paper need only address the Measurable Hypotheses. One task of the Introduction (or potentially Methods) is to explain how each General Hypothesis leads to each measurable prediction (Measurable Hypothesis). Because Measurable Hypotheses can be tested using objective, statistical comparisons that do not require interpretation, testing Measurable Hypotheses is appropriate for the Results. However, testing General Hypotheses most often requires judgment, and therefore must be left to the Discussion section.


    Application Button

    Deductively testing hypotheses requires comparisons. Comparisons most often require statistics. Therefore, testing Measurable Hypotheses in the Results most often requires statistical comparisons. Combining modus tollens and Strong Inference to support hypotheses is generally considered to be acceptable. However, statistical comparisons and hypothesis tests do not necessarily imply that differences among groups are important.

  • 21) WHY data support or reject hypotheses

    Pen
     Scientific Papers Button  IMRaD Results Button  Results Why Button  Topic Outline Thin Button

    Results are conclusions about whether to reject or support Measurable Hypotheses.


    The overall goal of the data and comparisons of the Results section is to explain WHY the data support specific conclusions about the Measurable Hypotheses. Therefore, the conclusions from the tests of the Measurable Hypotheses provide a natural framework for structuring the Results section. Using the conclusions of hypothesis tests as a framework can help focus the text on the most important part of the Results: the specific conclusions that the data support.


A Results section could be organized using a list framework indicated by subheadings, each of which clearly explains the conclusion of testing one Measurable Hypothesis. For example:


    Results

    Turning performance did not differ among inertia conditions (first Measurable Hypothesis)
    [supporting arguments]
    Peak braking forces did not decrease as predicted by the turning model (second Measurable Hypothesis)
    [supporting arguments]
    Force direction relative to the leg did not change with altered inertia (third Measurable Hypothesis)
    [supporting arguments]
    (Qiao et al., 2014). 


    Repetition can help clearly structure the text of the Results.

    Using a list based on the Measurable Hypotheses can clarify the Results using repetition if the arguments are consistent for each section. For example, if each section uses modus tollens/Strong Inference either to reject (or support) a Measurable Hypothesis, then the common reasoning structure can help the reader understand each section. Other forms of repetition (such as consistent figures or tables) can also be helpful. 


    Results sections commonly include factual premises (supported by data in the text and/or references to data in tables and figures) that reasonably lead to conclusions about Measurable Hypotheses. Strong premises and conclusions are simple, specific, and connected using clear logical transitions. All data presented in the Results section should clearly contribute to testing at least one Measurable Hypothesis.


    Because the Results section has a specific objective of presenting data so that the data clearly test the Measurable Hypotheses, the Results section does not include references to other studies.


Clarifications typically have a limited role in the Results.


Including one example (or a very limited number of examples) can help readers better understand the data (e.g. "typical" trials, although "typical" trials should result in measurements that are close to average, not trials that are exceptional in any way). However, scientific conclusions cannot be supported by examples or anecdotal evidence alone. The primary purpose of examples is to help readers understand the data, not to test hypotheses.


    Other clarifications are usually unnecessary in the Results. Terminology is typically defined before the Results. Results sections typically do not require summaries (moreover, Discussion sections commonly begin with a summary of the Results). Therefore, the secondary (supportive) role of clarifications is particularly important in the Results.


    All data belong in the Results. However, not all conclusions of a paper are in the Results.


Scientists commonly expect all data collected during a study and used to test hypotheses to be presented in the Results (and not in the Discussion). Therefore, it is advisable to test all Measurable Hypotheses in the Results, regardless of whether the tests are conclusive. Data used to test Measurable Hypotheses can then be re-visited in the Discussion to support additional conclusions if necessary.


    The Results section uses data to defend conclusions that do not require interpretation or judgment (e.g. conclusions that inevitably result from data through sound deductive reasoning applied to Measurable Hypotheses). However, the Results section does not necessarily contain all of the arguments in a paper. In some cases, even when strong conclusions about Measurable Hypotheses cannot be made in the Results, additional arguments in the Discussion can subsequently lead to strong conclusions.


Application Button
Using data to test Measurable Hypotheses is appropriate in the Results section if tests do not require interpretation or judgment. Therefore, testing Measurable Hypotheses can provide a useful framework to structure the Results.


  • 22) THE DISCUSSION

    Pen
     A Framework for Science Logo  Scientific Papers Button  IMRaD Discussion Button  Topic Outline Thin Button

    The Discussion uses study results to test and create General Hypotheses.


    Why do papers have a Discussion section?


    The purpose of the Results section is to present data and logical conclusions that do not require interpretation or subjective judgment. In contrast, the Discussion section provides an opportunity to interpret data and conclusions and answer the question: "What do the findings mean?" (Bolt and Bruins, 2012). 


    Finding meaning is a very broad (and somewhat vague) mandate. Therefore, writing a Discussion section can be a daunting prospect. 


    One way to analyze and simplify the process of writing a Discussion section is to focus on a more specific question:


    WHY do the results (i.e. the conclusions about the Measurable Hypotheses) either support existing General Hypotheses or lead us to propose new General Hypotheses?

    The main question of the Discussion can be broken down into three sub-questions:


    A) WHY are the conclusions of the experiment justified despite experimental or analytical limitations?


    B) WHY do the conclusions of the Results support or reject the General Hypotheses?


    C) WHY are the conclusions about the General Hypotheses important?

    AFSP Discussion Flowchart

    By focusing on supporting or developing new General Hypotheses, answering the three sub-questions places the results of the current experiment into a specific context of current understanding. 



Experimental Limitations
Supporting or rejecting the General Hypotheses
How findings advance science


    Discussion What Button Discussion Why Button Discussion How Button

  • 23) EXPERIMENTAL LIMITATIONS

    Pen
     Scientific Papers Button  IMRaD Discussion Button  Experimental Limitations Button  Topic Outline Thin Button

    Conclusions about Measurable Hypotheses must be defended before being interpreted.


    The Discussion can be divided into specific sections that make particular arguments. Using subheadings can clarify the purpose of each section of the Discussion.


    Paragraph 1: SUMMARY.


    Particularly for long or complex studies, it can be helpful to begin the Discussion by summarizing the overall reasoning of the paper (Brand and Huiskes, 2001). Leading up to the Discussion, the reasoning includes the overall goal of the study, the General Hypotheses, and the Measurable Hypotheses (explained in the Introduction). If the reasoning in the Results section is strong, the Discussion section can begin with a set of conclusions about Measurable Hypotheses (instead of simply reviewing the Measurable Hypotheses or the data themselves). Summarizing the Results seldom requires more than 1 paragraph (the first paragraph of the Discussion).


    Paragraph 2: DEFENDING the Conclusions Despite Study LIMITATIONS.


After the summary of the Results, there may still be unanswered questions that the Discussion must address before moving on to test General Hypotheses (the main focus of the Discussion). Specifically, experimental research involves limitations. Careful scientific readers will reach the Discussion having already compiled a list of questions associated with the limitations of the study. For example, "How does limitation Y affect the data collected by the study?" "What if the data had been collected or analyzed differently?" Most importantly, readers will ask the question: "Do any of the limitations or choices affect the conclusions of the Results?"


    It is important to DEFEND the conclusions of the study from questions about potential limitations.


All experiments have limitations. Broadly, money and time can limit many aspects of experiments. Using the newest, most powerful equipment is not possible for most researchers. Even well-provisioned laboratories may only be able to perform complex, expensive, or time-consuming analysis on subsets of data. Experimental subjects often have limited time that they can volunteer to participate in research. Access to patient populations may be limited. Participants may drop out of longitudinal studies. Animals show individual physiological and behavioral differences (and cannot be given specific instructions). And many, many more limitations...


Because all studies have limitations, there is no shame in identifying experimental limitations. Scientists cannot expect all researchers to be able to conduct research with the state-of-the-art equipment and infrastructure available to the world's best-supported institutions. Strong reasoning is just as important as technology for scientific progress (Platt, 1964). Moreover, innovative research can be performed with simple, clever techniques (Wigglesworth, 1974). The important criterion that scientists use for evaluating scientific publications is whether the conclusions of the research are justified despite the limitations of the study. Therefore, it is important to make forthright arguments that study limitations do not affect the conclusions of the study.


    It is not possible to anticipate and address every conceivable question that potential readers could ask. However, clearly identifying the rationale for making choices in the Methods can help authors identify some of the questions about study limitations that readers are most likely to have. 


    A useful framework for defending the conclusions of the study despite limitations is:


    1) Identify the limitation, and why the limitation was unavoidable.
    2) Explain with a reasoned argument why the limitation does NOT affect the conclusions of the study (e.g. the tests of the Measurable Hypotheses in the Results).

    Both steps of the framework are important. Identifying limitations without immediately explaining the reasons that the limitations do not affect the conclusions implies that the limitations DO affect the conclusions. Limitations that are allowed to affect study conclusions degrade trust in all arguments of the paper. Therefore, it is important to explain why the conclusions remain justified despite limitations. 


    Arguing that identified limitations do not affect study conclusions can include references to other studies, alternative analysis of data, or limited additional calculations as necessary. For example, studies that were limited to laboratory data collection could identify evidence that laboratory measurements are equivalent to field measurements. Addressing probable questions and explaining how the limitations, or choices made, in the study are not likely to affect the conclusions of the Results can therefore strengthen the interpretation of the data and discussion of the General Hypotheses.


    Arguments that the limitations of the research do not affect the conclusions of the study must be compelling. If a compelling argument is not possible, and a limitation can potentially affect the conclusions of the study, then the study may need substantial changes. Changes may involve revising the hypotheses, re-analyzing data, or even performing additional experiments. 


    Do NOT try to ignore or conceal study limitations. Scientific readers are not likely to be deceived. Whereas limitations are an inevitable aspect of science, deception is a serious breach of scientific trust. Honor and integrity are essential to scientific progress, and deception invites severe consequences.


    Application Button

    Strong Discussions explain WHY the conclusions of the Results are justified and reasonable before interpreting the conclusions in a broader context. Specifically, Discussion sections can summarize the conclusions, identify limitations, and make strong arguments that the limitations do not affect the conclusions of the Results. 


  • 24) WHAT are the conclusions about the General Hypotheses

    Pen
     Scientific Papers Button  IMRaD Discussion Button  General Conclusions Button  Topic Outline Thin Button

    The primary purpose of the Discussion is to test General Hypotheses.


    The Methods and Results sections of scientific papers can focus exclusively on testing Measurable Hypotheses. However, Measurable Hypotheses are simply specific predictions that can be made from broader, explanatory scientific models: General Hypotheses.


    How can we use conclusions about Measurable Hypotheses as evidence to test General Hypotheses?

    The conclusions of the Results most often lead to one of two different types of arguments: Supporting or Revising General Hypotheses. Supporting or Revising General Hypotheses typically requires 3-5 paragraphs of text that form the body of the Discussion.

A) Supporting existing General Hypotheses
B) Revising Hypotheses

    Supporting General Hypotheses Button Revising Hypotheses Button


    The principles of hierarchy and abstraction can clarify arguments of the Discussion.

    Whether supporting or revising General Hypotheses, the principles of hierarchy and abstraction can be important for organizing the overall arguments of the Discussion. Reviewing information from many studies to determine the conclusions most consistent with current data is not easy. Integrating conclusions from the Results into broader arguments to test General Hypotheses is also challenging. Therefore, using strong deductive arguments, or inductive frameworks such as Hill's criteria, can be helpful to organize and simplify the arguments of the Discussion.


    However, individual experimental studies do not need to include all research findings and hypotheses in a particular field in the arguments of the Discussion. For example, individual studies do not need to address all of Hill's Criteria in an inductive argument. Other types of scientific papers (e.g. reviews and meta-analyses) are available to make comprehensive arguments about important research topics. Instead of trying to address many topics in the Discussion, focusing on fewer, stronger arguments can be sufficient to make a positive contribution to scientific understanding.


    The principle of abstraction suggests that each paragraph defend one main argument. One way to structure a strong Discussion section is to relate paragraphs together, hierarchically, into a single argument that focuses on a clear conclusion about the General Hypothesis. 


    Premises of arguments in the Discussion are primarily facts, with each premise supported by a reference either to data or conclusions presented in the Results, or to data and conclusions from other studies. All references in the Discussion should be placed parenthetically at the END of sentences. 

    Application Button The primary purpose of the Discussion section is to test General Hypotheses. Data and/or conclusions about Measurable Hypotheses that are consistent with existing General Hypotheses can be used to support the General Hypotheses. Data or conclusions that conflict with existing General Hypotheses can lead to rejecting the existing General Hypotheses, and justify creating new General Hypotheses.


  • 25) SUPPORTING GENERAL HYPOTHESES

    Pen
     IMRaD Discussion Button  General Conclusions Button  Supporting General Hypotheses Button  Topic Outline Thin Button


    Frameworks can contribute to supporting General Hypotheses.


    If data are consistent with the predictions of Measurable Hypotheses, the data can be considered to support the General Hypotheses that led to the predictions of the Measurable Hypotheses.

    DEFINITION: "Support" for a hypothesis has a specific meaning: the data of the current experiment did not reject the hypothesis. 


    However, simply failing to reject a particular General Hypothesis of a study is only one piece of evidence, and may not alone be sufficient reason to continue research to test and further develop the General Hypothesis. Therefore, one role of the Discussion can be to provide additional support for General Hypotheses.


    Additional support for General Hypotheses can involve:


    1) Defending the assumptions used in the reasoning of the study.

    2) Explaining how the findings of the study and the General Hypotheses are consistent with broader scientific understanding.


    Horizontal Divider


    1) Defending the assumptions used in the reasoning of the study.


    Similar to a forthright discussion of experimental limitations, identifying the major assumptions of the study can help establish the reader's trust. Moreover, clearly identifying assumptions can anticipate probable questions, and prevent unanswered questions from undermining arguments about the General Hypotheses. Therefore, it can be helpful to provide readers with a clear explanation of each major known assumption made in designing and conducting the study.

    The assumptions that could potentially affect research vary considerably by field. Examples of assumptions in research involving humans include assuming that the sex or gender of study participants does not affect physiology or performance, that convenience samples (often college students for university research) represent a broader population, that important variables do not substantially change with age, or that behavior in laboratory settings transfers to behavior outside the laboratory. Although animal models are critical for biomedical research, much research on animals assumes that principles learned from animals also have relevance to humans at the molecular, physiological, or even behavioral levels.

    Assumptions are not limited to biology. For example, the physical sciences and engineering commonly study systems that can be "linearized": investigated in narrow ranges where the responses of systems are linearly related to inputs. Principles like the "Ideal Gas Law" assume that simplified relationships apply broadly to many different compounds. Clearly, researchers make assumptions in almost every field of science.

    There is no shame in making assumptions. However, if authors do not recognize and address important assumptions, readers can be confused, withhold judgment, not agree with the arguments of the study, or lose trust in the competence of the authors (or all at once). Simply avoiding mention of important assumptions is not a viable strategy: competent scientists will be able to "read between the lines," and do not appreciate subterfuge. Therefore, it is in the authors' best interest to voluntarily identify the major assumptions of a study.

    Similar to the limitations, a useful framework for explaining assumptions is:


    1) Identify the assumption made, and why the assumption was necessary.


    2) Explain using a reasoned argument why the assumption does NOT affect the conclusions of the study (e.g. the tests of the Measurable Hypotheses in the Results).

    Many students perform step (1) and identify assumptions without performing step (2) and explaining why the assumptions do NOT affect the conclusions! Readers are therefore forced to come to conclusions on their own (and scientific readers are not inclined to be charitable, particularly when expected to do work for the authors). Therefore, it is critical to perform step (2) and make a clear, evidence-based argument why an assumption is NOT likely to affect the conclusions of the study. 

    Addressing the assumptions can involve references to other studies, alternative analyses of data, or limited additional calculations as necessary. For example, the assumption that sex differences do not affect performance could be supported by the results of similar studies that tested for (and did not find) sex differences.

    Horizontal Divider

    2) Explaining how the findings of the study and the General Hypotheses are consistent with broader scientific understanding.


    One framework that can help to organize arguments to support General Hypotheses is inductive reasoning using Hill's Criteria.


    Examples of how Hill's Criteria could apply to the Discussion include:


    1) Reliability – Do repeated studies all lead to the same conclusions?

    Do the data collected by the present study match data collected in previous studies? Finding that the data are quantitatively consistent with other research can strengthen confidence in the Methods of the study, the resulting data and conclusions of the Results, and also contribute to supporting shared General Hypotheses. An example of an argument for reliability could involve comparing the results of complex calculations of arm movement to previous measurements: "The elbow excursions of 77 ± 11° that the monkeys used for the present task were comparable to the 81 ± 20° excursions reported by Christel and Billard (2002)" (Jindrich et al., 2011).


    2) Diversity – Does evidence from many different approaches all support the hypothesis?

    Do different types of studies all support the same General Hypothesis? If many different approaches are all consistent with a hypothesized explanation, then the explanation is more likely to be a general, valid explanation. The Discussion can make arguments for diversity by surveying a wide range of literature and finding consistent support for a General Hypothesis. For example, "Similar differences between 'massed' and 'distributed' practice were observed in motor learning paradigms other than adaptation (Lee and Genovese 1988), as well as in verbal learning paradigms (Ebbinghaus 1885; Glenberg 1979)" (Bock et al., 2005), or "That exercise was equally effective [in reducing symptoms of depression] as medication after 16 weeks of treatment is consistent with findings of other studies of exercise training in younger depressed adults [14,15,17,18]." (Blumenthal et al., 1999).

    However, as always, it is important to make sure that arguments in the Discussion are a valid representation of the research on a topic. Arguments for Diversity should not represent "cherry picking" in the service of confirmation bias.

    3) Plausibility – Are there reasonable mechanisms that underlie observed outcomes? Are the mechanisms consistent with, and not in conflict with, other knowledge?

    Consistency, or "consilience," of scientific explanations is extremely important for science. For example, proposed biological mechanisms must be consistent with known laws of physics and chemistry (e.g. conservation of energy, entropy, etc.). Physiological or behavioral explanations must be consistent with known physiological or neural processes. Therefore, plausibility is an important and common argument in the Discussion.

    Two approaches to arguments for plausibility are (A) information from other studies suggests reasonable mechanisms to explain data observed in the current study; or (B) data from the current study provide direct mechanistic evidence for General Hypotheses. An example of the first type of argument is: "Animal research suggests that [differences between 'massed' and 'distributed' practice] may be related to differential modulation of protein synthesis-dependent molecular processes which affect the expression of synaptic connectivity (Genoux et al. 2002; Scharf et al. 2002)" (Bock et al., 2005).

    4) Experimental Interventions – Can direct interventions produce predicted outcomes?

    Sometimes General Hypotheses are developed from first principles, physical models, or observed correlations. However, direct experimental testing of General Hypotheses is an indispensable tool for science. The Discussion can include arguments that experimental data support scientific explanations or models. For example, "Our results suggest that humans show body control strategies that result in relationships among movement parameters that are consistent with the distributed feedback rules used by Raibert’s robots" (Qiao and Jindrich, 2012).

    5) Temporality – Are there time-based dependencies (e.g. causes precede effects)?

    Time-based arguments are particularly important for hypotheses that involve causal relationships. Effects are commonly observed after causal phenomena. The Discussion can include time-based arguments to support hypotheses. For example, "It is clear that neuronal processes that precede a self-initiated voluntary action, as reflected in the readiness-potential, generally begin substantially before the reported appearance of conscious intention to perform that specific act" (Libet et al., 1983).

    6) Strength – Is there a strong association between variables? 

    Although statistical tests can test for differences among groups, statistical tests alone do not address whether differences among groups are important. Demonstrating that there are strong associations among variables can be an important part of arguing that statistically-observed differences are important. The Discussion can compare findings to other phenomena to make an argument that observed relationships among variables are strong and important. For example, "The magnitude of reductions in depression scores is also compatible to the levels achieved using sertraline in other clinical trials of depression [45,48]. Moreover, the changes in depressive symptoms found for all treatments in our study are consistent with the extent of improvements reported in more than a dozen studies of psychosocial interventions for MDD [12,49-53]" (Blumenthal et al., 1999).

    7) Specificity – Are there specific factors (i.e. not all factors) that result in observed outcomes?

    Specificity can be important for using Strong Inference to reject alternative hypotheses. If General Hypotheses lead to specific predictions that are consistent with data (whereas the predictions of other hypotheses are not), the General Hypothesis may be stronger than alternatives. For example "It became obvious that the improved stepping associated with step training occurred as a result of the repetitive activity of those spinal locomotor circuits that actually generated the load-bearing stepping, since spinal cats that were trained to stand bilaterally learned to stand but could not step as well as even those spinal cats that were not trained at all" (Edgerton and Roy, 2009).

    8) Biological gradient – Are there biological gradients or dose-response relationships?

    Experimental studies may directly test for dose-response relationships. For example, "Quipazine increased the sensitivity of the spinal cord to ES. The stimulation threshold to elicit muscle twitch as detected visually and by palpation was lower after quipazine administration (Table 1)... There was a significant decrease in the effective ES intensity after administration of quipazine at dosages of 0.2, 0.3, and 0.5 mg/kg (Table 1)" (Ichiyama et al., 2008).
    Even if an experimental study does not directly test for biological gradients, using the results of similar studies can allow for experimental data to contribute to arguments for a biological gradient. 


    The Discussion can focus on making a limited number of strong arguments.

    Papers can typically devote 3-5 paragraphs of the Discussion to supporting General Hypotheses. Three to five paragraphs may not allow strong arguments based on all of Hill's Criteria. Therefore, it can be acceptable to focus on 2 or 3 of the most appropriate and strongest areas. 


    Horizontal Divider

    Inductive reasoning using Hill's Criteria is only one possible framework available to structure a Discussion. Other types of evidence and arguments could also contribute to putting the results of a study and the General Hypotheses that the results support into a broader scientific context.

    Application Button

    The purpose of Discussions that support General Hypotheses is to make strong arguments that the General Hypothesis is a plausible and useful explanation that fills the gap in understanding. A supportive Discussion brings the conclusions of the Results together with conclusions from other studies to make compelling arguments for existing General Hypotheses.

  • 26) REVISING GENERAL HYPOTHESES

    Pen
     IMRaD Discussion Button  General Conclusions Button  Revising General Hypotheses Button  Topic Outline Thin Button


    Rejecting or revising hypotheses involves reasoned arguments.


    Rejecting Measurable Hypotheses involves evidence that the experimental data do NOT match the predictions of the Measurable Hypotheses. 


    Null Hypotheses can be rejected with statistical tests. For example, if a model mathematically predicts that a value will not change over a set of conditions, then finding significant differences among conditions could provide evidence to reject the model (e.g. Qiao et al., 2014). 
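
    As a minimal sketch of what such a test might look like (using hypothetical numbers, and assuming Python with SciPy is available), a one-way ANOVA can test whether a measured value differs among conditions that a model predicts should be identical:

        # Minimal sketch: test a model's prediction of "no change" across conditions.
        # All values below are hypothetical and for illustration only.
        from scipy import stats

        condition_a = [10.1, 9.8, 10.3, 10.0, 9.9]
        condition_b = [10.2, 10.0, 10.1, 9.7, 10.4]
        condition_c = [11.0, 11.2, 10.9, 11.3, 11.1]

        f_stat, p_value = stats.f_oneway(condition_a, condition_b, condition_c)
        if p_value < 0.05:
            print(f"Significant differences among conditions (p = {p_value:.3f}): "
                  "evidence against the model's prediction of no change.")
        else:
            print(f"No significant differences detected (p = {p_value:.3f}).")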


    Rejecting Measurable Hypotheses may require arguments in the Discussion.

    Rejecting Measurable Hypotheses that are not posed as Null Hypotheses is far less straightforward. For example, if a Measurable Hypothesis predicts significant differences among groups, failing to find statistically significant differences among groups does not necessarily provide sufficient evidence to reject a Measurable Hypothesis. There are many reasons that statistical tests can fail, and the true absence of differences among groups is only one reason for failure.


    Imagine that we are interested in testing the General Hypothesis,

    "Soda consumption is one cause of childhood obesity in the United States," by making the Measurable prediction,

    "Removing soda machines from Central High School at the start of the school year will result in significantly lower average Body Mass Index (BMI) for students at the end of the school year relative to BMI at the start of the school year."

    Does the experiment seem a reasonable test of the General Hypothesis? If we perform the experiment and fail to find a significant difference in BMI between the start and end of the school year, can we conclusively reject our Measurable Hypothesis? 

    Not yet. Rejecting the Measurable Hypothesis may require arguments in the Discussion. Addressing the experimental limitations and arguing that the limitations are not likely to affect the conclusions of the study can be a useful start and provide initial arguments. However, we may need to provide additional arguments. For example, we may need to argue that we measured enough students to have statistical power sufficient to resolve potential differences in BMI. We also may need to argue that one school year is sufficient time for differences in BMI to be measurably large (potentially based on other studies that successfully found changes to BMI over a 9-month period). 
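
    One way to support the statistical-power argument above is a simulation-based power estimate. The sketch below (in Python, with NumPy and SciPy) is only illustrative: the assumed effect size, variability, and sample size are placeholders, not values from any actual study.

        # Sketch: estimate power for detecting a change in BMI over one school year.
        # Effect size, variability, and sample size are assumptions for illustration only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_students = 200        # students measured at the start and end of the year
        true_change = -0.5      # assumed true mean change in BMI (kg/m^2)
        sd_change = 2.0         # assumed standard deviation of individual BMI changes
        n_simulations = 5000

        rejections = 0
        for _ in range(n_simulations):
            changes = rng.normal(true_change, sd_change, n_students)
            t_stat, p_value = stats.ttest_1samp(changes, 0.0)
            if p_value < 0.05:
                rejections += 1

        print(f"Estimated power: {rejections / n_simulations:.2f}")

    If the estimated power is high, the Discussion can argue that the study was capable of resolving a meaningful difference, so the non-significant result is informative.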

    Therefore, rejecting Measurable Hypotheses that are not posed as Null Hypotheses can require arguments in the Discussion. 


    Revising Measurable Hypotheses can result in stronger hypotheses.


    If the data of a study do not support the Measurable Hypotheses, or if the data reject the Measurable Hypotheses, then the Discussion can propose ways to revise the Measurable Hypotheses.


    For example, even if we convincingly reject our Measurable Hypothesis that removing soda machines from school significantly decreases BMI, the bulk of evidence from many other studies may still support the General Hypothesis that soda contributes to childhood obesity. Therefore, we may choose to make different measurable predictions: to revise our Measurable Hypotheses. For example, students could document (through food logs) their soda consumption, and we could test for positive correlations between soda consumption and BMI.
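
    A minimal sketch of such a correlation test (in Python with SciPy, using made-up food-log and BMI values purely for illustration) might look like:

        # Sketch: test for a positive correlation between soda consumption and BMI.
        # The food-log and BMI values below are hypothetical.
        from scipy import stats

        sodas_per_week = [0, 2, 3, 5, 7, 10, 12, 14]
        bmi = [19.5, 21.0, 20.8, 22.1, 23.0, 24.2, 25.1, 26.0]

        r, p_value = stats.pearsonr(sodas_per_week, bmi)
        print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")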


    Revising or developing new Measurable Hypotheses can involve re-visiting the assumptions of the study.

    Whereas defending the assumptions of a study can be important for supporting Hypotheses, questioning assumptions can lead to reasonable approaches to revise Measurable Hypotheses. Therefore, identifying and discussing assumptions made in a study can provide a reasonable starting point for arguments for revised Measurable Hypotheses.


    For example, our study on soda consumption was based on the assumption that school vending machines are a major source of sugary drinks in the diet of High School students. Questioning our assumption would require us to revise the Measurable Hypothesis -- i.e. to generate a hypothesis that is NOT based on the assumption that students primarily drink sugary drinks at school. Although testing the new Measurable Hypothesis is outside the scope of the Discussion, the Discussion can present arguments that the revised Measurable Hypothesis is reasonable, and propose subsequent experiments to test the revised hypothesis.


    Horizontal Divider

    Rejecting General Hypotheses typically requires many studies.

    Imagine that we were able to convincingly reject our Measurable Hypothesis that removing soda machines from Central High School would result in significantly lower BMI among students. Based on the rejection of our Measurable Hypothesis, can we therefore reject our General Hypothesis that soda consumption contributes to childhood obesity?

    Again, not yet. It is easy to think of many reasons that removing soda machines would not change BMI, even if overall soda consumption does in fact contribute to obesity. Students could bring soda from home, or buy soda elsewhere, or simply increase soda consumption outside of school. Because it is seldom possible to be confident that an experiment has controlled for all necessary variables, rejecting a single Measurable Hypothesis may not provide convincing evidence to reject a General Hypothesis. 

    Therefore, rejecting a General Hypothesis typically requires evidence from many studies that all conflict with the General Hypothesis. Constructing arguments against General Hypotheses can involve the same types of reasoned arguments as supporting General Hypotheses. Hill's Criteria (or other frameworks) could likewise contribute to rejecting hypotheses. For example, paragraphs of the Discussion could argue that there is not a diversity of studies that support the General Hypothesis, or that there are not plausible mechanisms for the hypothesized explanation, or that there is not an association between variables strong enough to be important.


    If a General Hypothesis consistently fails to make successful predictions, then it may be necessary to revise the General Hypothesis. Revision may involve creating an entirely new General Hypothesis. A revised or new General Hypothesis should lead to a different set of predictions that can subsequently be tested. A reasonable objective for a Discussion section would therefore be to present and defend the revised General Hypothesis.


    Horizontal Divider

    Application Button

    Appropriate statistical tests can be sufficient evidence to reject Null Measurable Hypotheses. However, reasoned arguments are typically necessary to reject Measurable Hypotheses that are not Null Hypotheses or to reject General Hypotheses. Arguments to revise Measurable Hypotheses can start with questioning the assumptions that led to the [rejected] Measurable Hypothesis. Arguments to revise General Hypotheses can employ frameworks such as Hill's Criteria.

  • 27) ADVANCING UNDERSTANDING AND APPLICATION

    Pen
     Scientific Papers Button  IMRaD Discussion Button  Advancing Science Button  Topic Outline Thin Button

    The Discussion can end with arguments for the importance of the conclusions.


    "Closure," where a story resolves conflicts and mysteries before ending, is satisfying. Often stories achieve closure by ending where the story began (Campbell, 1991). The same principle can help the Discussion bring closure to readers of scientific papers.


    In our framework, the first paragraph of the Introduction made an argument for the importance of the research.  Therefore, arguing for importance in the last paragraph of the Discussion can help provide closure to a scientific paper.


    An immediate argument for the importance of the research is a summary of how the conclusions help to fill the gap in understanding identified in the Introduction. If the Introduction successfully argued that the research topic was important, then testing hypotheses to help fill the gap in understanding is also important.


    However, it is often acceptable to make additional arguments for the importance of the research findings in the Discussion. Examples of topics include potential contributions of study conclusions to guiding ongoing or future research, and potential applications of study conclusions (to clinical practice, injury prevention, technology development, public policy, etc.).


    Arguments for potential contributions or applications of research should not be overly speculative or vague. Instead, strong arguments for the importance of research explain specific contributions, and defend the potential contributions with references.


    One short paragraph is typically appropriate for making a final argument for the importance of the research.


    Application Button

    The final paragraph of the Discussion can bring closure to the paper by presenting arguments for the importance of the conclusions. Contributions to future research or application outside of science are common ways to support the importance of study findings.


  • 28) INCREASING IMPACT

    Pen
     A Framework for Science Logo  Increasing Impact Button  Topic Outline Thin Button  

    Catering to audiences can increase the impact of scientific communication.


    The purpose of "A Framework for Scientific Papers" is to help students structure writing with strong, reasoned frameworks. Using hypothesis-driven research and papers that use the IMRaD format is sufficient for satisfying the main goals of AFSP.


    However, some additional considerations may be helpful for revising papers to be as readable as possible. Moreover, some situations may require spoken communication (e.g. summary presentations). Therefore, some very brief remarks about titles and abstracts, narrative communication, and spoken presentation are appropriate.


    Title and Abstract

    Narrative Communication

    Spoken Presentation


    Title and Abstract Button Narrative Writing Button Spoken Communication Button

  • 29) TITLE AND ABSTRACT

    Pen
     A Framework for Science Logo  Increasing Impact Button  Title and Abstract Button  Topic Outline Thin Button

    The title and abstract are the most widely-read sections of a paper.


    Over 2.5 million scientific papers are published every year, and the number of publications increases every year (Ware and Mabe, 2015). Clearly, it is not possible for any scientist to read more than a fraction of the published research. Scientists must be selective, and only read papers with clear and important conclusions.


    Papers are typically judged first on their title and abstract. Potential readers may decide to devote time to a paper based on the title or abstract alone. Therefore, titles and abstracts are important parts of scientific papers.


    Strong titles are as specific as possible.

    One way to convince a potential reader that a paper has valuable information is to provide the reader with specific information in the title.


     For example, the conclusion of a study can make a useful, specific title:


    "Sagittal plane biomechanics cannot injure the ACL during sidestep cutting" (McLean et al., 2004).

    Not all studies lend themselves to clear, one-sentence conclusions. However, one reasonable goal for titles is to convey as much specific information as possible to readers.


    Sometimes titles use specific questions instead of conclusions. For example:


    "Kangaroo rat locomotion: design for elastic energy storage or acceleration?" (Biewener and Blickhan, 1988). 

    Questions can effectively stimulate reader interest. Questions can also introduce a framework to structure a study, such as the dichotomy between elastic energy storage and acceleration.


    Overall, strong titles convey as much information to potential readers using as few words as possible. 


    Abstracts explain the reasoning of the study.

    Abstracts (sometimes called "Summaries") are typically limited to approximately 250 words or fewer. Like titles, abstracts must also convey as much information to potential readers using as few words as possible. 

    Abstracts lead readers through the reasoning of the study, and summarize the Introduction, Methods, Results and Discussion. Each section must typically be explained in only 2-4 sentences.

     
    The abstract of a paper will be read far more often than the rest of the paper. Therefore, the primary purpose of the Abstract is to explain the hypotheses and conclusions of the study in a way that readers can understand.

    Time invested in writing a strong outline can be returned when writing an abstract. If an outline consists of conclusive subheadings, then one place to start writing an abstract is by collecting all of the subheadings of the Introduction, Results and Discussion into a single paragraph. A strong outline will result in a single-paragraph Abstract that summarizes the reasoning of the paper. Brief explanations of methods and definitions may be necessary to clarify the text of the abstract.


    Abstracts can (and should) contain data. However, data should not be the focus of an abstract. Data in a strong abstract clearly and strongly support the conclusions.


    Application Button

    Titles and abstracts convey as much information to readers in as few words as possible. Strong titles have a clear purpose (e.g. conclusion or question), and strong abstracts faithfully summarize the reasoning and conclusions of the study.


  • 30) NARRATIVE COMMUNICATION

    Pen
     A Framework for Science Logo  Increasing Impact Button Narrative Communication Button   Topic Outline Thin Button

    Using tools from narrative storytelling can help make scientific communication more interesting.


    Strongly-reasoned arguments presented in hierarchies will have clear connections among ideas. Connections among ideas create smooth transitions and conceptual "flow" that make text easier to understand. However, although being easy to understand is clearly important, being interesting to audiences can also strengthen scientific communication. Therefore, scientific writing can benefit from strategies to encourage and maintain reader interest.


    Storytelling is one effective way to make communication more interesting to audiences (Olson, 2015; Luna, 2013; Schimel, 2012). Stories are often interesting because they involve conflict. Therefore, including and emphasizing disjunctions (OR dichotomies) and contrasts (BUT conjunctions) in reasoned arguments can help make scientific communication more interesting.


    Having an overall storyline, or "arc," can also help to create smoother transitions between ideas and make communication more interesting (Olson, 2015; Luna, 2013; Schimel, 2012). Many successful stories use a similar arc as a framework to structure the story, the so-called "Hero's Journey" (Campbell, 1991). 


    The Hero's Journey involves an ordinary person who is called to adventure. The Hero finds themselves at a threshold where they must make a decision: go or stay. The hero decides to embark on the adventure, where they face challenges, temptations, and reach a low point (an ordeal). To overcome the ordeal, the hero must undergo a transformation, where they change (for the better). The transformation allows the hero to find their path home and return a hero. 
    Heroes Journey 01
    The scientific process is analogous to the Hero's Journey framework (Olson, 2015). Science begins with mysteries (questions). Scientists create hypotheses, and decide whether to experimentally test the hypotheses. Experiments and data collection are often challenging, and often involve crises or ordeals. Data may not at first make sense, causing some hypotheses to be rejected. However, by transforming their thinking (revising hypotheses), scientists create new hypotheses that DO seem to make sense of the data. With their new hypotheses, the scientists confidently submit their manuscripts (convinced that journals and reviewers will acknowledge the scientists as heroes).
    Heroes Journey 02
    The sections of a scientific paper are also broadly consistent with the Hero's Journey framework. The Introduction introduces the important questions of the study, identifies the gap in understanding (the threshold), and commits to General and Measurable Hypotheses. The Methods explain the challenges of performing the experiments. The Results apply the data to the Measurable Hypotheses in a thorough and exacting way. Finally, the Discussion uses the conclusions of the Results to advance scientific understanding by supporting or revising General Hypotheses.
    Heroes Journey 03
    Of course, there are many ways to use the Hero's Journey (or other frameworks) to make scientific communication more interesting and engaging for audiences. Using logical transitions that present questions, emphasize contrasts and challenges, and emphasize transformations can help scientific communication take advantage of the power of narrative storytelling.


    Application Button

    Narrative frameworks such as the "Hero's Journey" can help improve the overall cohesion and flow of scientific communication. Using interesting logical transitions (OR or BUT) can emphasize conflict and challenge. Strong conclusions (THEREFORE) can emphasize transformations.


  • 31) SPOKEN COMMUNICATION

    Pen
    A Framework for Science Logo Increasing Impact Button Spoken Communication Button  Topic Outline Thin Button

    Strong spoken presentations focus on the audience.


    Spoken presentation is clearly different from written communication. Conveying information involves directly interacting with an audience. For some people, being at the center of a spoken presentation is exhilarating. Others (like myself) may be more reserved and uneasy with spoken presentation. Each individual must approach spoken presentation in their own way.


    Although individual approaches to spoken presentation are all different, spoken presentations share one basic problem:


    Spoken presentations are hard for audiences to understand.

    Some things that are particularly challenging for spoken presentations are:


    * Audiences can only understand a limited amount of information at any one time.


    * Presentations are necessarily sequential (time-based). Audiences cannot easily go back and review information that they may have missed (or forgotten).


    * Audiences have limited attention, and can easily be distracted or bored.


    * Spoken presentations typically have very limited time to convey information.

    Some people have the misperception that scientists are impressed by quantity of data or complexity of analysis. However, scientists do not typically attend presentations to be impressed. Scientists attend presentations to learn. Therefore, the objective of most presentations is to maximize learning.


    Communicating potentially complex arguments in a spoken presentation is clearly a difficult task. However, some principles from "Reasoned Writing" and "A Framework for Scientific Papers" can help to create and present effective spoken presentations. Just as for writing, presentations will be more effective if they are simple and specific.


    Three additional principles for spoken presentation are:


    1) Use strong frameworks.
    2) Use the Rule of Three.
    3) Focus on the audience.

    Horizontal Divider

    1) Use strong frameworks.

    Speakers typically want audiences to focus on the content, not the format, of a presentation. Therefore, it can be helpful to use the simplest format possible for spoken presentations.


    One way to simplify the format of a spoken presentation is to organize information using a simple framework, and repeat the framework throughout the presentation.


    For example, one simple framework might be: QUESTION - EVIDENCE - CONCLUSION. A short presentation might address a single question, and a longer presentation might address two or three questions. The overall structure of a longer presentation could repeat the same framework for each section of the presentation:


    Presentation Framework 1

    Sometimes it may be necessary to explain some detail (like a necessary aspect of the methods) during the course of an argument. Repeating the same framework can also help to explain sub-questions:


    Presentation Framework 2

    Using hierarchies and trees can also help to create strong, organized frameworks. For example, a tree structure based on dichotomies only requires audiences to think about three pieces of information at a time: the main topic and two sub-topics:


    Presentation Framework 3

    For example, 


    Presentation Framework 4

    Each "branch" of the hierarchy could either branch to a more specific dichotomy (for a long presentation), or move on to specific information presented in the QUESTION - EVIDENCE - CONCLUSION framework.


    Of course, other frameworks are also possible. For example, a more methods-oriented presentation (such as a research proposal) might use a QUESTION - PROBLEM - SOLUTION framework.


    APPLICATION: Strong, repeated frameworks can reduce the amount of effort that audiences need to devote to understanding the format of a presentation. Therefore, strong, repeated frameworks can increase the amount of attention that audiences can devote to understanding the content of a presentation.


    Horizontal Divider

    2) Use the Rule of Three.

    The Rule of Three is particularly important for spoken presentations. The capacity of working memory is very limited, and it is simply not possible for most people to hold many pieces of information in mind at once. Moreover, forgetting is a critical part of memory. We forget the vast majority of sensory information that we collect. Even important information in working memory is not necessarily consolidated to long-term memory, and is often forgotten. Therefore, presenters cannot expect audiences to be able to retain information previously discussed in a presentation.


    The Rule of Three has two parts:


    (1) Use 3 or fewer important elements in each level of an argument.
    (2) Repeat elements that are important for audiences to understand and remember 3 or more times.


    The first recommendation, to use 3 or fewer important elements in each level of an argument, can be applied to any part of a spoken presentation. For example, focusing on a single main conclusion can result in the strongest presentation. Frameworks used to explain arguments can involve three or fewer elements. Dichotomies, if appropriate, can provide audiences clear choices. 


    Spoken presentations are hard for audiences to understand. Therefore, audiences appreciate presentations that are easy to understand.

    The second part of the Rule of Three, to repeat elements that are important for audiences to understand and remember 3 or more times, is important in presentations for two main reasons:


    A) There is a reasonable probability that audiences will miss an element of information when the information is presented.


    The probability of missing information clearly depends on the individual audience member and the clarity of the presentation. However, imagine that an audience member has a 30% chance of missing a piece of information each time the information is presented to them. If the information is presented only once, the audience member has a 30% chance of never encountering the piece of information. If the information is presented twice, the audience member has an almost 10% chance of missing both presentations. However, if information is presented three times, there is a less than 3% chance that the audience member will miss the information all three times. Therefore, repeating important elements of a presentation three times (or more) greatly increases the likelihood that audiences will have the opportunity to understand and retain important information.
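
    The arithmetic behind these percentages is simple: if each presentation is missed independently with probability 0.3, the chance of missing the information every time is 0.3 raised to the number of presentations. A short sketch (in Python, with the 30% figure as an illustrative assumption):

        # Probability of missing information presented n times,
        # assuming a 30% chance of missing each individual presentation.
        p_miss = 0.3
        for n in (1, 2, 3):
            print(f"Presented {n} time(s): {p_miss ** n:.1%} chance of missing it every time")
        # Output: 30.0%, 9.0%, 2.7%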


    Moreover, missing information can make it very difficult for someone to understand a presentation. If an audience member misses an important element of information, the person may not be able to fully understand other information that follows. Therefore, repetition provides additional opportunities for audience members to understand potentially large parts of presentations.


    B) Repetition is important for people to retain the information that they hear.


    It is not easy to remember information heard only once. Repetition helps people to remember by emphasizing the repeated information and reinforcing memories. 


    One example of using both repetition and frameworks is to repeat important frameworks multiple times during a presentation. For example, imagine we were creating slides for a presentation using the following framework:


    Presentation Framework 5

    We might be tempted to simply "flatten" our tree into a sequence of slides:


    Presentation Framework 6

    However, by the time the audience has gotten to the 6th slide ("Energy Out") they may have forgotten the dichotomy that motivated the presentation in the first place! Therefore, it would be helpful to repeat the main questions and conclusions of the study:


    Presentation Framework 7

    Repeating the central question and dichotomy of the presentation can help audiences better understand where each argument fits in the overall presentation. Repetition can also prevent audience members from missing information, and facilitate understanding and learning.


    APPLICATION: Reducing and repeating frameworks and elements of arguments is a powerful tool to emphasize important points and help audiences understand and retain information. 


    Horizontal Divider

    3) Focus on the audience.


    Focusing on the audience involves creating presentations designed to help audiences understand and retain conclusions from the presentation. Presentations serve the audience, not the presenter. Therefore, a presenter may not be able to discuss everything that they would like to discuss. 


    Provide enough factual information for scientists in a different field to understand your arguments.


    An appropriate audience for a scientific presentation is a scientist in a different field. Therefore, it is reasonable to expect audiences to understand the scientific method, basic principles of the natural sciences, and basic mathematics and statistics. However, scientists cannot be expected to know the definitions of technical terms. Scientists also cannot be expected to know relevant research relating to the topic of your presentation.


    Therefore, it is important to:


    A) Define all necessary terminology before using it.
    B) Provide specific, factual evidence for all elements of a presentation.


    Just as for written papers, all factual statements of scientific presentations should be supported with references to peer-reviewed, quantitative research. References are clearest when they are parenthetical, and immediately following the statements that the references support. Therefore, references should directly follow premises (i.e. on the same slide). Even definitions typically require references.


    Use a reasoned framework.


    Just as for written communication, reasoning is the strongest framework for scientific presentations. Scientific audiences understand and respect strong reasoning.  Lists can be helpful to create hierarchies such as trees to support reasoning. Just as for written communication, creating a strong reasoned outline can be invaluable. Spoken presentations can directly use many of the subheadings of an outline as bullet points for the presentation. Therefore, developing a strong outline does not require more time to develop a presentation (and can actually save time).


    Only present essential information.


    Audiences have limited capacity to quickly absorb large amounts of information. Scientists are no different. Even audiences of trained scientists or professionals appreciate presentations that reduce the amount of information that members of the audience must process. Therefore, strong presentations present all of the essential information necessary to understand a research project, and only that information.


    Basing a presentation on a strong, reasoned outline can help to limit the presentation to essential information. Text or data can only be added to the presentation if the information has a strong, specific place in an argument. For example, long quotes are seldom necessary in scientific presentations. Most often, very specific conclusions or data from the quotes are sufficient to support premises of an argument.


    One way to reduce non-essential information in a presentation is to use a LARGE text size throughout the presentation. For slide presentations, text can be 32 point bold or larger. If text does not fit on a slide (or does not include adequate spacing between lines), then the solution is to reduce the amount of text, not to decrease the font size (however tempting smaller text may be).


    Some of the most consequential people listening to your presentation may not be able to see or hear as well as you can. Many people cannot distinguish different colors (such as red and green), and projectors may not render colors accurately. Therefore, using large, bold fonts and graphics that do not depend on color to be understood can help everyone understand the information in your presentation.

     
    Capture and direct attention.


    Attention is a limited resource. Audience members can only focus attention on one thing at a time. Therefore, effective presentations directly focus attention on one element at a time. Directing attention involves two components: 


    A) Minimize distractions

    To minimize distractions, introduce and discuss one element at a time. ALL other information available to audiences can be distracting. Therefore, distractions include:

    * Slide backgrounds that contain text or images. Consistent, plain backgrounds are adequate for scientific presentations.

    * Unnecessary text or data. Text or data that do not directly contribute to the current argument should be removed.

    * Unnecessary images. For example, clip art or stock photos can be unnecessarily distracting.


    The presenter themselves can also be a distraction! A presenter cannot expect audiences to be capable of reading text on a slide and listening to the presenter at the same time. People who try to read and listen at the same time are likely to become confused and not understand either the visual or spoken information. Therefore, presentations should be structured so that the presenter can help the audiences read all text, figures, and images before discussing them. 


    For example, when introducing a figure, it is important to first slow down and "walk" the audience through the figure. Explain what each axis is, and what any symbols, lines, or shapes represent. The audience needs time to process the information in a figure. Once the presenter has taken the time to explain WHAT a figure is, then the presenter can move on to explain WHY the information contributes to the argument.


    B) Direct attention.

    Even if presentations effectively minimize distractions in the presentation itself, there are many distractions that presenters do not have control over. Other people in the room, phones, computers, or simply unrelated thoughts can all be distractions. Therefore, it is important for presenters to capture and direct attention.


    One way of directing attention is through movement. Movement can occur on presentation slides. For example, using animation features of most presentation software to progressively introduce text can help capture and direct attention (and help to limit the amount of information that audiences need to process at any one time). Therefore, use animations to only introduce and discuss ONE element of a presentation at a time.


    Movement does not need to be limited to slides! The presenter can also provide a source of movement. When possible, interact with the information in the presentation as much as possible: by pointing, gesticulating, etc.


    Use images and video that are directly relevant to the presentation. An old saying is "A picture is worth a thousand words." The saying applies to scientific presentations as well. Images and videos naturally direct attention.


    For example, below is a slow-motion video of a cockroach with a cannon firing off its back (the video is slowed down about 50 times, and represents about half a second of real time).

      
      

    It may be easier to remember the video than anything else in the module. 


    Images and videos can be extremely powerful. However, images and videos can also be extremely distracting. Therefore, it is important to make sure that images and video directly contribute to supporting the arguments of the presentation.


    Audiences are most likely to retain information from the end of a presentation. Therefore, if you would like audiences to remember the conclusions of your presentation, END the presentation with the conclusions.


    Acknowledgments are not as important as conclusions. Therefore, if a presentation has an acknowledgements slide -- put it at the beginning! Placing an acknowledgments slide at the beginning of the presentation is gracious, and also gets the acknowledgments out of the way. 


    Tell a story.


    Narratives can be particularly important for spoken presentations. Both the presentation content itself and the method of presentation can help create engaging, compelling narratives. The structure of the presentation can use frameworks such as the Hero's Journey. Interesting narratives contain conflict: "but" conjunctions and disjunctions.


    Moreover, verbally and physically illustrating and emphasizing conflict can also be helpful. Be theatrical! Changes to intonation, pregnant pauses, earnest looks, and other elements of stagecraft are entirely appropriate for scientific presentations if they contribute to directing attention and helping audiences understand and retain the conclusions of the presentation.


    One rule of thumb is: the more important the statement, the more slowly it should be made.


    Let audiences come to the conclusions.


    People are quite attached to their own ideas, and most people feel positive and satisfied when they solve problems. Therefore, if you frame a presentation in a way that presents data so that the audiences naturally arrive at the conclusions of the presentations by themselves, then the presentation is likely to be more effective and memorable. “Don’t give the audience 4, give them 2+2" (Andrew Stanton).

    Application Button

    Spoken presentations are challenging because audience members have limits to attention and to the amount of information that they can process at any one time. Therefore, using simple, specific, and strongly-reasoned arguments is particularly important for spoken presentations.


  • SITE NAVIGATION

    Pen
     A Framework for Science Logo  Site Navigation Button    


    There are several possible ways to navigate the "A Framework for Scientific Papers" (AFSP) module. 


    Horizontal Divider
    SEQUENTIALLY


    Each section of the AFSP module is numbered. One potential way to navigate the site is to start at the beginning and sequentially follow the numbered sections. Links at the top and bottom (left and right) of each page will take you to the previous or next section in the module. Alternatively, you can use the navigation buttons at the left of the page.


    Horizontal Divider

    AS A TREE


    AFSP is organized as a tree structure. The tree structure can also be navigated. At the bottom of each section there are either (A) links to sub-branches of the tree; or (B) a short summary indicated by the Application Button (Application Button). It is possible to follow branches until reaching an Application Button, indicating that you have reached the terminal branch (or "leaf," if you will) of the tree. Going back to the branch point will allow other branches of the tree to be navigated.


    Horizontal Divider

    AS A LINEAR SEQUENCE

    It is also possible to access site content in a continuous sequence. To show all sections of the module, click on the following link:

    A Framework for Science Logo

    Show all sections of the AFSP module.


    Horizontal Divider

    DIVIDED INTO SUB-SECTIONS


    For hypothesis-driven scientific papers, every aspect of the paper contributes to testing one or more Measurable and General Hypotheses. Much of the reasoning of scientific papers is directly connected to hypothesis testing. Moreover, focusing on hypotheses can help to make papers more simple and specific. Writing can be simplified by ensuring that all text has a clear purpose related to developing and testing hypotheses. A majority of scientific sentences are premises (with references) or conclusions that deductively or inductively follow from premises. Any text that does not clearly and directly contribute to testing one or more hypotheses can be removed. The clear goal of developing and testing hypotheses can also help to make writing more specific. For example, starting with a hypothesis and reverse-engineering text can help to ensure that all text makes a specific contribution to hypothesis testing.


    Therefore, it is reasonable to divide "A Framework for Scientific Papers" into three sections. The first section reviews what hypotheses are, and how both deductive and inductive reasoning can be used to test hypotheses. The second section addresses how the Introduction and Methods sections explain the development of study hypotheses. Finally, the third section addresses how the Results and Discussion sections explain the outcomes and consequences of testing both General and Measurable hypotheses.


    Clear scientific writing requires considerable practice, reflection, and revision (Alley, 1996). Strong reasoning and simple, specific presentation are the most important contributors to clear scientific writing. Therefore, strongly-reasoned, simple, and specific writing is a sufficient goal for the AFSP module. 


    However, once an author can consistently organize and write clear scientific arguments, it is appropriate to consider ways of increasing the impact of scientific writing. Therefore, the "Impact" section of AFSP is recommended only when authors are comfortable with the reasoning and specific format of scientific papers.


    Three Section Hypotheses Button

    SECTION 1: WHAT HYPOTHESES ARE

    Three Section Developing Button

    SECTION 2: DEVELOPING HYPOTHESES

    Three Section Testing Button

    SECTION 3: TESTING HYPOTHESES

    ADDITIONAL SECTION
     Increasing Impact Button

     SECTION 4: INCREASING IMPACT