{"id":1006,"date":"2019-08-13T19:06:41","date_gmt":"2019-08-13T19:06:41","guid":{"rendered":"https:\/\/opentextbc.ca\/researchmethods\/chapter\/from-the-replicability-crisis-to-open-science-practices\/"},"modified":"2019-11-05T18:02:47","modified_gmt":"2019-11-05T18:02:47","slug":"from-the-replicability-crisis-to-open-science-practices","status":"publish","type":"chapter","link":"https:\/\/opentextbc.ca\/researchmethods\/chapter\/from-the-replicability-crisis-to-open-science-practices\/","title":{"raw":"From the \u201cReplicability Crisis\u201d to Open Science Practices","rendered":"From the \u201cReplicability Crisis\u201d to Open Science Practices"},"content":{"raw":"<div class=\"textbox textbox--learning-objectives\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\">Learning Objectives<\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<ol>\r\n \t<li>Describe what is meant by the \"replicability crisis\" in psychology.<\/li>\r\n \t<li>Describe some questionable\u00a0research practices.<\/li>\r\n \t<li>Identify\u00a0some ways in which scientific rigour may be increased.<\/li>\r\n \t<li>Understand the importance of openness in psychological science.<\/li>\r\n<\/ol>\r\n<\/div>\r\n<\/div>\r\n<a href=\"\/researchmethods\/chapter\/understanding-science\/\">At the start of this book<\/a> we discussed the <a href=\"https:\/\/osf.io\/wx7ck\/\" rel=\"noopener\">\"Many Labs Replication Project\"<\/a>, which failed to replicate the original finding by\u00a0<span>Simone Schnall and her colleagues that washing\u00a0<\/span><span>one\u2019s hands leads people to view moral transgressions\u00a0as less wrong\u00a0<span>(Schnall, Benton, &amp; Harvey, 2008)[footnote]Schnall, S., Benton, J., &amp; Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. <em>Psychological Science, 19<\/em>(12), 1219-1222. doi: 10.1111\/j.1467-9280.2008.02227.x[\/footnote]. 
Although\u00a0this project is a good illustration<\/span><\/span> of the collaborative and self-correcting nature of science,\u00a0it\u00a0also represents one specific response to psychology's\u00a0recent \u201c[pb_glossary id=\"1222\"]replicability crisis[\/pb_glossary],\u201d\u00a0a phrase that\u00a0refers to the inability of researchers to replicate earlier research findings. Consider for example the results of the <a href=\"https:\/\/osf.io\/ezcuj\/\" rel=\"noopener\">Reproducibility Project<\/a>, which involved over 270 psychologists around the world coordinating their efforts to test the reliability of 100 previously published psychological experiments (Aarts et al., 2015)[footnote]Aarts, A. A., Anderson, C. J., Anderson, J., van Assen, M. A. L. M., Attridge, P. R., Attwood, A. S., \u2026 Zuni, K. (2015, September 21). <em>Reproducibility Project: Psychology.<\/em> Retrieved from osf.io\/ezcuj[\/footnote]. Although 97 of the original 100 studies had found statistically significant effects, only 36 of the replications did! Moreover, even the effect sizes of the replications were, on average, half of those found in the original studies (see Figure 13.5). Of course, a failure to replicate a result by itself does not necessarily discredit the original study as differences in the statistical power, populations sampled, and procedures used, or even the effects of moderating variables could\u00a0explain the different results (Yong, 2015)[footnote]Yong, E. (August 27, 2015). How reliable are psychology studies? 
<em>The Atlantic.<\/em> Retrieved from http:\/\/www.theatlantic.com\/science\/archive\/2015\/08\/psychology-studies-reliability-reproducability-nosek\/402466\/[\/footnote].\r\n\r\n[caption id=\"attachment_190\" align=\"aligncenter\" width=\"490\"]<a href=\"http:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2015\/09\/replicatation-graphic-b.png\"><img src=\"https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/replicatation-graphic-b.png\" alt=\"Summary of the results of the reproducibility project. Long description available.\" class=\"wp-image-190\" width=\"490\" height=\"677\" \/><\/a> Figure 13.5 Summary of the Results of the Reproducibility Project [Baker, M. (30 April, 2015). First results from psychology\u2019s largest reproducibility test. Nature News.] <a href=\"#fig13.5\">[Long Description]<\/a>[\/caption]Although many believe that the failure to replicate research results is an expected characteristic of cumulative scientific progress, others have interpreted this situation as evidence of systematic problems with conventional scholarship in psychology, including a publication bias that favours the discovery and publication of counter-intuitive but statistically significant findings instead of the duller (but incredibly vital) process of replicating previous findings to test their robustness (Aschwanden, 2015[footnote]Aschwanden, C. (2015, August 19). Science isn't broken: It's just a hell of a lot harder than we give it credit for. <em>Fivethirtyeight<\/em>. Retrieved from http:\/\/fivethirtyeight.com\/features\/science-isnt-broken\/[\/footnote]; Frank, 2015[footnote]Frank, M. (2015, August 31). <em>The slower, harder ways to increase reproducibility<\/em>. Retrieved from http:\/\/babieslearninglanguage.blogspot.ie\/2015\/08\/the-slower-harder-ways-to-increase.html[\/footnote]; Pashler &amp; Harris, 2012[footnote]Pashler, H., &amp; Harris, C. R. (2012). Is the replicability crisis overblown? 
Three arguments explained. <em>Perspectives on Psychological Science, 7<\/em>(6), 531-536. doi:10.1177\/1745691612463401[\/footnote]; Scherer, 2015[footnote]Scherer, L. (2015, September). <em>Guest post by Laura Scherer.<\/em> Retrieved from http:\/\/sometimesimwrong.typepad.com\/wrong\/2015\/09\/guest-post-by-laura-scherer.html[\/footnote]). Worse still is the suggestion that the low replicability of many studies is evidence of the widespread use of questionable research practices by\u00a0psychological researchers. These may include:\r\n<ol>\r\n \t<li>The selective deletion of outliers in order to influence (usually by artificially inflating) statistical relationships among the measured variables.<\/li>\r\n \t<li>The selective reporting of results, cherry-picking only those findings that support one\u2019s hypotheses.<\/li>\r\n \t<li>Mining the data without an <em>a priori<\/em> hypothesis, only to claim that a statistically significant result had been originally predicted, a practice referred to as \u201c[pb_glossary id=\"1074\"]HARKing[\/pb_glossary]\u201d or hypothesizing after the results are known (Kerr, 1998[footnote]Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. <em>Personality and Social Psychology Review, 2<\/em>(3), 196-217. doi:10.1207\/s15327957pspr0203_4[\/footnote]).<\/li>\r\n \t<li>A practice colloquially known as \u201c<a href=\"http:\/\/journals.plos.org\/plosbiology\/article?id=10.1371\/journal.pbio.1002106\"><em>p<\/em>-hacking<\/a>\u201d (briefly discussed in the previous section), in which a researcher might perform inferential statistical calculations to see if a result was significant before deciding whether to recruit additional participants and collect more data (Head, Holman, Lanfear, Kahn, &amp; Jennions, 2015)[footnote]Head M. L., Holman, L., Lanfear, R., Kahn, A. T., &amp; Jennions, M. D. (2015). The extent and consequences of <em>p<\/em>-hacking in science. <em>PLoS Biol, 13<\/em>(3): e1002106. 
doi:10.1371\/journal.pbio.1002106[\/footnote]. As you have learned, the probability of finding a statistically significant result is influenced by the number of participants in the study.<\/li>\r\n \t<li>Outright fabrication of data (as in the case of Diederik Stapel, described at the start of <a href=\"\/researchmethods\/part\/research-ethics\/\">Chapter 3<\/a>), although this would be\u00a0a case of fraud rather than a \"research practice.\"<\/li>\r\n<\/ol>\r\nIt is important to shed light on these questionable research\u00a0practices to ensure that current and future researchers (such as yourself) understand the damage they wreak to the integrity and reputation of our discipline (see,\u00a0for example,\u00a0the \"<a href=\"https:\/\/replicationindex.com\/\" rel=\"noopener\">Replication Index<\/a>,\" a statistical \"doping test\"\u00a0developed by\u00a0Ulrich Schimmack in 2014 for estimating the replicability of studies, journals, and even specific researchers).\u00a0However, in addition to highlighting <em>what not to do<\/em>, this so-called \u201ccrisis\u201d has also highlighted the importance of enhancing scientific rigour by:\r\n<ol>\r\n \t<li>Designing and conducting studies that have sufficient statistical power, in order to increase the reliability of findings.<\/li>\r\n \t<li>Publishing both null and significant findings (thereby counteracting the publication bias and reducing the file drawer problem).<\/li>\r\n \t<li>Describing one\u2019s research designs in sufficient detail to enable other researchers to replicate your study using an identical or at least very similar procedure.<\/li>\r\n \t<li>Conducting high-quality replications and publishing these results (Brandt et al., 2014)[footnote]Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., \u2026 van\u2019t Veer, A. (2014). The replication recipe: What makes for a convincing replication? <em>Journal of Experimental Social Psychology, 50,<\/em> 217-224. 
doi:10.1016\/j.jesp.2013.10.005[\/footnote].<\/li>\r\n<\/ol>\r\nOne particularly promising response to the replicability crisis has been the emergence of [pb_glossary id=\"1061\"]open science practices[\/pb_glossary] that increase the transparency and openness of the scientific enterprise. For example, <em>Psychological Science<\/em> (the flagship journal of the <a href=\"http:\/\/psychologicalscience.org\/\" rel=\"noopener\">Association for Psychological Science<\/a>) and other journals now issue digital badges to researchers who pre-registered their hypotheses and data analysis plans, openly shared their research materials with other researchers (e.g., to enable attempts at replication), or made their raw data available to other researchers (see Figure 13.6).\r\n\r\n[caption id=\"attachment_1004\" align=\"aligncenter\" width=\"600\"]<a href=\"http:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2015\/09\/view_link_new.jpg\"><img alt=\"Badges that say &quot;Open Data,&quot; &quot;Open Materials,&quot; and &quot;Preregistered.&quot;\" src=\"https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-1024x330.jpg\" class=\"wp-image-1004\" width=\"600\" height=\"193\" \/><\/a> Figure 13.6 Digital Badges from the Center for Open Science[\/caption]\r\n\r\nThese initiatives, which have been spearheaded by the <a href=\"https:\/\/cos.io\/\">Center for Open Science<\/a>, have led to the development of \u201cTransparency and Openness Promotion guidelines\u201d (see Table 13.7) that have since been formally adopted by more than 500 journals and 50 organizations, a list that grows each week. 
When you add to this the requirements recently\u00a0imposed by federal funding agencies in Canada (the Tri-Council) and the United States (National Science Foundation) concerning the publication of publicly-funded research in open access journals, it certainly appears that the future of science and psychology will be one that embraces greater \u201copenness\u201d (Nosek et al., 2015)[footnote]Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., \u2026 Yarkoni, T. (2015). Promoting an open research culture. <em>Science, 348<\/em>(6242), 1422-1425. doi: 10.1126\/science.aab2374[\/footnote].\r\n<table border=\"0\"><caption>Table 13.7 Transparency and Openness Promotion (TOP) Guidelines<\/caption>\r\n<tbody>\r\n<tr>\r\n<th scope=\"col\">Criteria<\/th>\r\n<th scope=\"col\">Level 0<\/th>\r\n<th scope=\"col\">Level 1<\/th>\r\n<th scope=\"col\">Level 2<\/th>\r\n<th scope=\"col\">Level 3<\/th>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Citation Standards<\/th>\r\n<td>Journal encourages citation of data, code, and materials, or says nothing<\/td>\r\n<td>Journal describes citation of data in guidelines to authors with clear rules and examples.<\/td>\r\n<td>Article provides appropriate citation for data and materials used consistent with journal's author guidelines.<\/td>\r\n<td>Article is not published until providing appropriate citation for data and materials following journal's author guidelines.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Data Transparency<\/th>\r\n<td>Journal encourages data sharing, or says nothing<\/td>\r\n<td>Article states whether data are available, and, if so, where to access them.<\/td>\r\n<td>Data must be posted to a trusted repository. 
Exceptions must be identified at article submission.<\/td>\r\n<td>Data must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Analytic Methods (Code) Transparency<\/th>\r\n<td>Journal encourages code sharing, or says nothing<\/td>\r\n<td>Article states whether code is available, and, if so, where to access it.<\/td>\r\n<td>Code must be posted to a trusted repository. Exceptions must be identified at article submission.<\/td>\r\n<td>Code must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Research Materials Transparency<\/th>\r\n<td>Journal encourages materials sharing, or says nothing<\/td>\r\n<td>Article states whether materials are available, and, if so, where to access them.<\/td>\r\n<td>Materials must be posted to a trusted repository. Exceptions must be identified at article submission.<\/td>\r\n<td>Materials must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Design and Analysis Transparency<\/th>\r\n<td>Journal encourages design and analysis transparency, or says nothing<\/td>\r\n<td>Journal articulates design transparency standards<\/td>\r\n<td>Journal requires adherence to design transparency standards for review and publication<\/td>\r\n<td>Journal requires and enforces adherence to design transparency standards for review and publication<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Preregistration of studies<\/th>\r\n<td>Journal says nothing<\/td>\r\n<td>Journal encourages preregistration of studies and provides link in article to preregistration if it exists<\/td>\r\n<td>Journal encourages preregistration of studies and provides link in article and certification of meeting preregistration badge requirements<\/td>\r\n<td>Journal 
requires preregistration of studies and provides link and badge in article to meeting requirements.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Preregistration of analysis plans<\/th>\r\n<td>Journal says nothing<\/td>\r\n<td>Journal encourages preanalysis plans and provides link in article to registered analysis plan if it exists<\/td>\r\n<td>Journal encourages preanalysis plans and provides link in article and certification of meeting registered analysis plan badge requirements<\/td>\r\n<td>Journal requires preregistration of studies with analysis plans and provides link and badge in article to meeting requirements.<\/td>\r\n<\/tr>\r\n<tr>\r\n<th scope=\"row\">Replication<\/th>\r\n<td>Journal discourages submission of replication studies, or says nothing<\/td>\r\n<td>Journal encourages submission of replication studies<\/td>\r\n<td>Journal encourages submission of replication studies and conducts results blind review<\/td>\r\n<td>Journal uses Registered Reports as a submission option for replication studies with peer review prior to observing the study outcomes.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<div class=\"textbox textbox--key-takeaways\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\">Key Takeaways<\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<ul>\r\n \t<li>In recent years psychology has grappled with a failure to replicate research findings.\u00a0Some have interpreted this as a normal aspect\u00a0of science, but others have suggested that it\u00a0highlights problems stemming from questionable research\u00a0practices.<\/li>\r\n \t<li>One response to this \"replicability crisis\" has been the emergence of open science practices, which increase the transparency and openness of the research process. 
These open practices include digital badges to encourage pre-registration of hypotheses and the sharing of raw data and research materials.<\/li>\r\n<\/ul>\r\n<\/div>\r\n<\/div>\r\n<div class=\"textbox textbox--exercises\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\">Exercises<\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n<ol>\r\n \t<li>Discussion: What do you think are some of the key benefits of the adoption of open science practices such as pre-registration and the sharing of raw data and research materials? Can you identify any drawbacks of these practices?<\/li>\r\n \t<li>Practice: Read\u00a0the online article \"<a href=\"http:\/\/fivethirtyeight.com\/features\/science-isnt-broken\/\" rel=\"noopener\">Science isn't broken: It's just a hell of a lot harder than we give it credit for<\/a>\" and use the interactive tool entitled \"Hack your way to scientific glory\" in order to better understand the data malpractice of \"<em>p<\/em>-hacking.\"<\/li>\r\n<\/ol>\r\n<\/div>\r\n<\/div>\r\n<h1>Long Descriptions<\/h1>\r\n<strong id=\"fig13.5\">Figure 13.5 long description:<\/strong> Infographic titled \"Reliability Test.\" It says, \"An effort to reproduce 100 psychology findings found that only 39 held up (based on criteria set at the start of each study). But some of the 61 non-replications reported similar findings to those of their original papers.\"\r\n\r\nThere is a graphic representing these 100 reproductions as squares of various shades of blue and black. The graphic answers the question, \"Did replicate match original's results?\" There are 61 squares on the \"No\" side and 39 on the \"Yes\" side.\r\n\r\nEach square's colour is determined by how closely the findings of the experiment it represents resemble the original study. 
The ratings are:\r\n<ul>\r\n \t<li>Virtually identical<\/li>\r\n \t<li>Extremely similar<\/li>\r\n \t<li>Very similar<\/li>\r\n \t<li>Moderately similar<\/li>\r\n \t<li>Somewhat similar<\/li>\r\n \t<li>Slightly similar<\/li>\r\n \t<li>Not at all similar<\/li>\r\n<\/ul>\r\nFor the \"No\" side, the results break down as such:\r\n<ul>\r\n \t<li>Virtually identical: 1<\/li>\r\n \t<li>Extremely similar: 1<\/li>\r\n \t<li>Very similar: 6<\/li>\r\n \t<li>Moderately similar: 16<\/li>\r\n \t<li>Somewhat similar: 10<\/li>\r\n \t<li>Slightly similar: 12<\/li>\r\n \t<li>Not at all similar: 15<\/li>\r\n<\/ul>\r\nFor the \"Yes\" side, the results break down as such:\r\n<ul>\r\n \t<li>Virtually identical: 4<\/li>\r\n \t<li>Extremely similar: 12<\/li>\r\n \t<li>Very similar: 15<\/li>\r\n \t<li>Moderately similar: 4<\/li>\r\n \t<li>Somewhat similar: 3<\/li>\r\n \t<li>Slightly similar: 1<\/li>\r\n<\/ul>\r\n<a href=\"#attachment_190\">[Return to Figure 13.5]<\/a>\r\n<h3>Media Attribution<\/h3>\r\n<ul>\r\n \t<li><a href=\"http:\/\/www.nature.com\/news\/first-results-from-psychology-s-largest-reproducibility-test-1.17433\">Summary of the Results of the Reproducibility Project<\/a>. Reprinted by permission from Macmillan Publishers Ltd: Nature [Baker, M. (30 April, 2015). First results from psychology\u2019s largest reproducibility test. Nature News], copyright 2015.<\/li>\r\n \t<li>Transparency and Openness Promotion (TOP) Guidelines. 
Reproduced with permission<\/li>\r\n<\/ul>","rendered":"<div class=\"textbox textbox--learning-objectives\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Learning Objectives<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<ol>\n<li>Describe what is meant by the &#8220;replicability crisis&#8221; in psychology.<\/li>\n<li>Describe some questionable\u00a0research practices.<\/li>\n<li>Identify\u00a0some ways in which scientific rigour may be increased.<\/li>\n<li>Understand the importance of openness in psychological science.<\/li>\n<\/ol>\n<\/div>\n<\/div>\n<p><a href=\"\/researchmethods\/chapter\/understanding-science\/\">At the start of this book<\/a> we discussed the <a href=\"https:\/\/osf.io\/wx7ck\/\" rel=\"noopener\">&#8220;Many Labs Replication Project&#8221;<\/a>, which failed to replicate the original finding by\u00a0<span>Simone Schnall and her colleagues that washing\u00a0<\/span><span>one\u2019s hands leads people to view moral transgressions\u00a0as less wrong\u00a0<span>(Schnall, Benton, &amp; Harvey, 2008)<a class=\"footnote\" title=\"Schnall, S., Benton, J., &amp; Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219-1222. doi: 10.1111\/j.1467-9280.2008.02227.x\" id=\"return-footnote-1006-1\" href=\"#footnote-1006-1\" aria-label=\"Footnote 1\"><sup class=\"footnote\">[1]<\/sup><\/a>. Although\u00a0this project is a good illustration<\/span><\/span> of the collaborative and self-correcting nature of science,\u00a0it\u00a0also represents one specific response to psychology&#8217;s\u00a0recent \u201c<a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_1006_1222\">replicability crisis<\/a>,\u201d\u00a0a phrase that\u00a0refers to the inability of researchers to replicate earlier research findings. 
Consider for example the results of the <a href=\"https:\/\/osf.io\/ezcuj\/\" rel=\"noopener\">Reproducibility Project<\/a>, which involved over 270 psychologists around the world coordinating their efforts to test the reliability of 100 previously published psychological experiments (Aarts et al., 2015)<a class=\"footnote\" title=\"Aarts, A. A., Anderson, C. J., Anderson, J., van Assen, M. A. L. M., Attridge, P. R., Attwood, A. S., \u2026 Zuni, K. (2015, September 21). Reproducibility Project: Psychology. Retrieved from osf.io\/ezcuj\" id=\"return-footnote-1006-2\" href=\"#footnote-1006-2\" aria-label=\"Footnote 2\"><sup class=\"footnote\">[2]<\/sup><\/a>. Although 97 of the original 100 studies had found statistically significant effects, only 36 of the replications did! Moreover, even the effect sizes of the replications were, on average, half of those found in the original studies (see Figure 13.5). Of course, a failure to replicate a result by itself does not necessarily discredit the original study as differences in the statistical power, populations sampled, and procedures used, or even the effects of moderating variables could\u00a0explain the different results (Yong, 2015)<a class=\"footnote\" title=\"Yong, E. (August 27, 2015). How reliable are psychology studies? The Atlantic. 
Retrieved from http:\/\/www.theatlantic.com\/science\/archive\/2015\/08\/psychology-studies-reliability-reproducability-nosek\/402466\/\" id=\"return-footnote-1006-3\" href=\"#footnote-1006-3\" aria-label=\"Footnote 3\"><sup class=\"footnote\">[3]<\/sup><\/a>.<\/p>\n<figure id=\"attachment_190\" aria-describedby=\"caption-attachment-190\" style=\"width: 490px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2015\/09\/replicatation-graphic-b.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/replicatation-graphic-b.png\" alt=\"Summary of the results of the reproducibility project. Long description available.\" class=\"wp-image-190\" width=\"490\" height=\"677\" \/><\/a><figcaption id=\"caption-attachment-190\" class=\"wp-caption-text\">Figure 13.5 Summary of the Results of the Reproducibility Project [Baker, M. (30 April, 2015). First results from psychology\u2019s largest reproducibility test. Nature News.] <a href=\"#fig13.5\">[Long Description]<\/a><\/figcaption><\/figure>\n<p>Although many believe that the failure to replicate research results is an expected characteristic of cumulative scientific progress, others have interpreted this situation as evidence of systematic problems with conventional scholarship in psychology, including a publication bias that favours the discovery and publication of counter-intuitive but statistically significant findings instead of the duller (but incredibly vital) process of replicating previous findings to test their robustness (Aschwanden, 2015<a class=\"footnote\" title=\"Aschwanden, C. (2015, August 19). Science isn't broken: It's just a hell of a lot harder than we give it credit for. Fivethirtyeight. 
Retrieved from http:\/\/fivethirtyeight.com\/features\/science-isnt-broken\/\" id=\"return-footnote-1006-4\" href=\"#footnote-1006-4\" aria-label=\"Footnote 4\"><sup class=\"footnote\">[4]<\/sup><\/a>; Frank, 2015<a class=\"footnote\" title=\"Frank, M. (2015, August 31). The slower, harder ways to increase reproducibility. Retrieved from http:\/\/babieslearninglanguage.blogspot.ie\/2015\/08\/the-slower-harder-ways-to-increase.html\" id=\"return-footnote-1006-5\" href=\"#footnote-1006-5\" aria-label=\"Footnote 5\"><sup class=\"footnote\">[5]<\/sup><\/a>; Pashler &amp; Harris, 2012<a class=\"footnote\" title=\"Pashler, H., &amp; Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments explained. Perspectives on Psychological Science, 7(6), 531-536. doi:10.1177\/1745691612463401\" id=\"return-footnote-1006-6\" href=\"#footnote-1006-6\" aria-label=\"Footnote 6\"><sup class=\"footnote\">[6]<\/sup><\/a>; Scherer, 2015<a class=\"footnote\" title=\"Scherer, L. (2015, September). Guest post by Laura Scherer. Retrieved from http:\/\/sometimesimwrong.typepad.com\/wrong\/2015\/09\/guest-post-by-laura-scherer.html\" id=\"return-footnote-1006-7\" href=\"#footnote-1006-7\" aria-label=\"Footnote 7\"><sup class=\"footnote\">[7]<\/sup><\/a>). Worse still is the suggestion that the low replicability of many studies is evidence of the widespread use of questionable research practices by\u00a0psychological researchers. 
These may include:<\/p>\n<ol>\n<li>The selective deletion of outliers in order to influence (usually by artificially inflating) statistical relationships among the measured variables.<\/li>\n<li>The selective reporting of results, cherry-picking only those findings that support one\u2019s hypotheses.<\/li>\n<li>Mining the data without an <em>a priori<\/em> hypothesis, only to claim that a statistically significant result had been originally predicted, a practice referred to as \u201c<a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_1006_1074\">HARKing<\/a>\u201d or hypothesizing after the results are known (Kerr, 1998<a class=\"footnote\" title=\"Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217. doi:10.1207\/s15327957pspr0203_4\" id=\"return-footnote-1006-8\" href=\"#footnote-1006-8\" aria-label=\"Footnote 8\"><sup class=\"footnote\">[8]<\/sup><\/a>).<\/li>\n<li>A practice colloquially known as \u201c<a href=\"http:\/\/journals.plos.org\/plosbiology\/article?id=10.1371\/journal.pbio.1002106\"><em>p<\/em>-hacking<\/a>\u201d (briefly discussed in the previous section), in which a researcher might perform inferential statistical calculations to see if a result was significant before deciding whether to recruit additional participants and collect more data (Head, Holman, Lanfear, Kahn, &amp; Jennions, 2015)<a class=\"footnote\" title=\"Head M. L., Holman, L., Lanfear, R., Kahn, A. T., &amp; Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3): e1002106. doi:10.1371\/journal.pbio.1002106\" id=\"return-footnote-1006-9\" href=\"#footnote-1006-9\" aria-label=\"Footnote 9\"><sup class=\"footnote\">[9]<\/sup><\/a>. 
As you have learned, the probability of finding a statistically significant result is influenced by the number of participants in the study.<\/li>\n<li>Outright fabrication of data (as in the case of Diederik Stapel, described at the start of <a href=\"\/researchmethods\/part\/research-ethics\/\">Chapter 3<\/a>), although this would be\u00a0a case of fraud rather than a &#8220;research practice.&#8221;<\/li>\n<\/ol>\n<p>It is important to shed light on these questionable research\u00a0practices to ensure that current and future researchers (such as yourself) understand the damage they wreak to the integrity and reputation of our discipline (see,\u00a0for example,\u00a0the &#8220;<a href=\"https:\/\/replicationindex.com\/\" rel=\"noopener\">Replication Index<\/a>,&#8221; a statistical &#8220;doping test&#8221;\u00a0developed by\u00a0Ulrich Schimmack in 2014 for estimating the replicability of studies, journals, and even specific researchers).\u00a0However, in addition to highlighting <em>what not to do<\/em>, this so-called \u201ccrisis\u201d has also highlighted the importance of enhancing scientific rigour by:<\/p>\n<ol>\n<li>Designing and conducting studies that have sufficient statistical power, in order to increase the reliability of findings.<\/li>\n<li>Publishing both null and significant findings (thereby counteracting the publication bias and reducing the file drawer problem).<\/li>\n<li>Describing one\u2019s research designs in sufficient detail to enable other researchers to replicate your study using an identical or at least very similar procedure.<\/li>\n<li>Conducting high-quality replications and publishing these results (Brandt et al., 2014)<a class=\"footnote\" title=\"Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., \u2026 van\u2019t Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217-224. 
doi:10.1016\/j.jesp.2013.10.005\" id=\"return-footnote-1006-10\" href=\"#footnote-1006-10\" aria-label=\"Footnote 10\"><sup class=\"footnote\">[10]<\/sup><\/a>.<\/li>\n<\/ol>\n<p>One particularly promising response to the replicability crisis has been the emergence of <a class=\"glossary-term\" aria-haspopup=\"dialog\" aria-describedby=\"definition\" href=\"#term_1006_1061\">open science practices<\/a> that increase the transparency and openness of the scientific enterprise. For example, <em>Psychological Science<\/em> (the flagship journal of the <a href=\"http:\/\/psychologicalscience.org\/\" rel=\"noopener\">Association for Psychological Science<\/a>) and other journals now issue digital badges to researchers who pre-registered their hypotheses and data analysis plans, openly shared their research materials with other researchers (e.g., to enable attempts at replication), or made available their raw data with other researchers (see Figure 13.6).<\/p>\n<figure id=\"attachment_1004\" aria-describedby=\"caption-attachment-1004\" style=\"width: 600px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2015\/09\/view_link_new.jpg\"><img loading=\"lazy\" decoding=\"async\" alt=\"Badges that say &quot;Open Data,&quot; &quot;Open Materials,&quot; and &quot;Preregistered.&quot;\" src=\"https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-1024x330.jpg\" class=\"wp-image-1004\" width=\"600\" height=\"193\" srcset=\"https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-1024x330.jpg 1024w, https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-300x97.jpg 300w, https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-768x247.jpg 768w, 
https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-65x21.jpg 65w, https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-225x72.jpg 225w, https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238-350x113.jpg 350w, https:\/\/opentextbc.ca\/researchmethods\/wp-content\/uploads\/sites\/37\/2019\/08\/view_link_new-e1443336145238.jpg 1279w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><figcaption id=\"caption-attachment-1004\" class=\"wp-caption-text\">Figure 13.6 Digital Badges from the Center for Open Science<\/figcaption><\/figure>\n<p>These initiatives, which have been spearheaded by the <a href=\"https:\/\/cos.io\/\">Center for Open Science<\/a>, have led to the development of \u201cTransparency and Openness Promotion guidelines\u201d (see Table 13.7) that have since been formally adopted by more than 500 journals and 50 organizations, a list that grows each week. When you add to this the requirements recently\u00a0imposed by federal funding agencies in Canada (the Tri-Council) and the United States (National Science Foundation) concerning the publication of publicly funded research in open access journals, it certainly appears that the future of science and psychology will be one that embraces greater \u201copenness\u201d (Nosek et al., 2015)<a class=\"footnote\" title=\"Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., \u2026 Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425. 
doi: 10.1126\/science.aab2374\" id=\"return-footnote-1006-11\" href=\"#footnote-1006-11\" aria-label=\"Footnote 11\"><sup class=\"footnote\">[11]<\/sup><\/a>.<\/p>\n<table>\n<caption>Table 13.7 Transparency and Openness Promotion (TOP) Guidelines<\/caption>\n<tbody>\n<tr>\n<th scope=\"col\">Criteria<\/th>\n<th scope=\"col\">Level 0<\/th>\n<th scope=\"col\">Level 1<\/th>\n<th scope=\"col\">Level 2<\/th>\n<th scope=\"col\">Level 3<\/th>\n<\/tr>\n<tr>\n<th scope=\"row\">Citation Standards<\/th>\n<td>Journal encourages citation of data, code, and materials, or says nothing<\/td>\n<td>Journal describes citation of data in guidelines to authors with clear rules and examples.<\/td>\n<td>Article provides appropriate citation for data and materials used consistent with journal&#8217;s author guidelines.<\/td>\n<td>Article is not published until providing appropriate citation for data and materials following journal&#8217;s author guidelines.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Data Transparency<\/th>\n<td>Journal encourages data sharing, or says nothing<\/td>\n<td>Article states whether data are available, and, if so, where to access them.<\/td>\n<td>Data must be posted to a trusted repository. Exceptions must be identified at article submission.<\/td>\n<td>Data must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Analytic Methods (Code) Transparency<\/th>\n<td>Journal encourages code sharing, or says nothing<\/td>\n<td>Article states whether code is available, and, if so, where to access it.<\/td>\n<td>Code must be posted to a trusted repository. 
Exceptions must be identified at article submission.<\/td>\n<td>Code must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Research Materials Transparency<\/th>\n<td>Journal encourages materials sharing, or says nothing<\/td>\n<td>Article states whether materials are available, and, if so, where to access them.<\/td>\n<td>Materials must be posted to a trusted repository. Exceptions must be identified at article submission.<\/td>\n<td>Materials must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Design and Analysis Transparency<\/th>\n<td>Journal encourages design and analysis transparency, or says nothing<\/td>\n<td>Journal articulates design transparency standards<\/td>\n<td>Journal requires adherence to design transparency standards for review and publication<\/td>\n<td>Journal requires and enforces adherence to design transparency standards for review and publication<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Preregistration of studies<\/th>\n<td>Journal says nothing<\/td>\n<td>Journal encourages preregistration of studies and provides link in article to preregistration if it exists<\/td>\n<td>Journal encourages preregistration of studies and provides link in article and certification of meeting preregistration badge requirements<\/td>\n<td>Journal requires preregistration of studies and provides link and badge in article to meeting requirements.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Preregistration of analysis plans<\/th>\n<td>Journal says nothing<\/td>\n<td>Journal encourages preanalysis plans and provides link in article to registered analysis plan if it exists<\/td>\n<td>Journal encourages preanalysis plans and provides link in article and certification of meeting registered analysis plan badge requirements<\/td>\n<td>Journal requires preregistration of 
studies with analysis plans and provides link and badge in article to meeting requirements.<\/td>\n<\/tr>\n<tr>\n<th scope=\"row\">Replication<\/th>\n<td>Journal discourages submission of replication studies, or says nothing<\/td>\n<td>Journal encourages submission of replication studies<\/td>\n<td>Journal encourages submission of replication studies and conducts results blind review<\/td>\n<td>Journal uses Registered Reports as a submission option for replication studies with peer review prior to observing the study outcomes.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"textbox textbox--key-takeaways\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Key Takeaways<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<ul>\n<li>In recent years psychology has grappled with a failure to replicate research findings.\u00a0Some have interpreted this as a normal aspect\u00a0of science, but others have suggested that it\u00a0highlights problems stemming from questionable research\u00a0practices.<\/li>\n<li>One response to this &#8220;replicability crisis&#8221; has been the emergence of open science practices, which increase the transparency and openness of the research process. These open practices include digital badges to encourage pre-registration of hypotheses and the sharing of raw data and research materials.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div class=\"textbox textbox--exercises\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Exercises<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<ol>\n<li>Discussion: What do you think are some of the key benefits of the adoption of open science practices such as pre-registration and the sharing of raw data and research materials? 
Can you identify any drawbacks of these practices?<\/li>\n<li>Practice: Read\u00a0the online article &#8220;<a href=\"http:\/\/fivethirtyeight.com\/features\/science-isnt-broken\/\" rel=\"noopener\">Science isn&#8217;t broken: It&#8217;s just a hell of a lot harder than we give it credit for<\/a>&#8221; and use the interactive tool entitled &#8220;Hack your way to scientific glory&#8221; in order to better understand the data malpractice of &#8220;<em>p<\/em>-hacking.&#8221;<\/li>\n<\/ol>\n<\/div>\n<\/div>\n<h1>Long Descriptions<\/h1>\n<p><strong id=\"fig13.5\">Figure 13.5 long description:<\/strong> Infographic titled &#8220;Reliability Test.&#8221; It says, &#8220;An effort to reproduce 100 psychology findings found that only 39 held up (based on criteria set at the start of each study). But some of the 61 non-replications reported similar findings to those of their original papers.&#8221;<\/p>\n<p>There is a graphic representing these 100 reproductions as squares of various shades of blue and black. The graphic answers the question, &#8220;Did replicate match original&#8217;s results?&#8221; There are 61 squares on the &#8220;No&#8221; side and 39 on the &#8220;Yes&#8221; side.<\/p>\n<p>Each square&#8217;s colour is determined by how closely the findings of the experiment it represents resemble the original study. 
The ratings are:<\/p>\n<ul>\n<li>Virtually identical<\/li>\n<li>Extremely similar<\/li>\n<li>Very similar<\/li>\n<li>Moderately similar<\/li>\n<li>Somewhat similar<\/li>\n<li>Slightly similar<\/li>\n<li>Not at all similar<\/li>\n<\/ul>\n<p>For the &#8220;No&#8221; side, the results break down as such:<\/p>\n<ul>\n<li>Virtually identical: 1<\/li>\n<li>Extremely similar: 1<\/li>\n<li>Very similar: 6<\/li>\n<li>Moderately similar: 16<\/li>\n<li>Somewhat similar: 10<\/li>\n<li>Slightly similar: 12<\/li>\n<li>Not at all similar: 15<\/li>\n<\/ul>\n<p>For the &#8220;Yes&#8221; side, the results break down as such:<\/p>\n<ul>\n<li>Virtually identical: 4<\/li>\n<li>Extremely similar: 12<\/li>\n<li>Very similar: 15<\/li>\n<li>Moderately similar: 4<\/li>\n<li>Somewhat similar: 3<\/li>\n<li>Slightly similar: 1<\/li>\n<\/ul>\n<p><a href=\"#attachment_190\">[Return to Figure 13.5]<\/a><\/p>\n<h3>Media Attribution<\/h3>\n<ul>\n<li><a href=\"http:\/\/www.nature.com\/news\/first-results-from-psychology-s-largest-reproducibility-test-1.17433\">Summary of the Results of the Reproducibility Project<\/a>. Reprinted by permission from Macmillan Publishers Ltd: Nature [Baker, M. (30 April, 2015). First results from psychology\u2019s largest reproducibility test. Nature News], copyright 2015.<\/li>\n<li>Transparency and Openness Promotion (TOP) Guidelines. Reproduced with permission<\/li>\n<\/ul>\n<hr class=\"before-footnotes clear\" \/><div class=\"footnotes\"><ol><li id=\"footnote-1006-1\">Schnall, S., Benton, J., &amp; Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. <em>Psychological Science, 19<\/em>(12), 1219-1222. doi: 10.1111\/j.1467-9280.2008.02227.x <a href=\"#return-footnote-1006-1\" class=\"return-footnote\" aria-label=\"Return to footnote 1\">&crarr;<\/a><\/li><li id=\"footnote-1006-2\">Aarts, A. A., Anderson, C. J., Anderson, J., van Assen, M. A. L. M., Attridge, P. R., Attwood, A. S., \u2026 Zuni, K. (2015, September 21). 
<em>Reproducibility Project: Psychology.<\/em> Retrieved from osf.io\/ezcuj <a href=\"#return-footnote-1006-2\" class=\"return-footnote\" aria-label=\"Return to footnote 2\">&crarr;<\/a><\/li><li id=\"footnote-1006-3\">Yong, E. (August 27, 2015). How reliable are psychology studies? <em>The Atlantic.<\/em> Retrieved from http:\/\/www.theatlantic.com\/science\/archive\/2015\/08\/psychology-studies-reliability-reproducability-nosek\/402466\/ <a href=\"#return-footnote-1006-3\" class=\"return-footnote\" aria-label=\"Return to footnote 3\">&crarr;<\/a><\/li><li id=\"footnote-1006-4\">Aschwanden, C. (2015, August 19). Science isn't broken: It's just a hell of a lot harder than we give it credit for. <em>Fivethirtyeight<\/em>. Retrieved from http:\/\/fivethirtyeight.com\/features\/science-isnt-broken\/ <a href=\"#return-footnote-1006-4\" class=\"return-footnote\" aria-label=\"Return to footnote 4\">&crarr;<\/a><\/li><li id=\"footnote-1006-5\">Frank, M. (2015, August 31). <em>The slower, harder ways to increase reproducibility<\/em>. Retrieved from http:\/\/babieslearninglanguage.blogspot.ie\/2015\/08\/the-slower-harder-ways-to-increase.html <a href=\"#return-footnote-1006-5\" class=\"return-footnote\" aria-label=\"Return to footnote 5\">&crarr;<\/a><\/li><li id=\"footnote-1006-6\">Pashler, H., &amp; Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments explained. <em>Perspectives on Psychological Science, 7<\/em>(6), 531-536. doi:10.1177\/1745691612463401 <a href=\"#return-footnote-1006-6\" class=\"return-footnote\" aria-label=\"Return to footnote 6\">&crarr;<\/a><\/li><li id=\"footnote-1006-7\">Scherer, L. (2015, September). <em>Guest post by Laura Scherer.<\/em> Retrieved from http:\/\/sometimesimwrong.typepad.com\/wrong\/2015\/09\/guest-post-by-laura-scherer.html <a href=\"#return-footnote-1006-7\" class=\"return-footnote\" aria-label=\"Return to footnote 7\">&crarr;<\/a><\/li><li id=\"footnote-1006-8\">Kerr, N. L. (1998). 
HARKing: Hypothesizing after the results are known. <em>Personality and Social Psychology Review, 2<\/em>(3), 196-217. doi:10.1207\/s15327957pspr0203_4 <a href=\"#return-footnote-1006-8\" class=\"return-footnote\" aria-label=\"Return to footnote 8\">&crarr;<\/a><\/li><li id=\"footnote-1006-9\">Head M. L., Holman, L., Lanfear, R., Kahn, A. T., &amp; Jennions, M. D. (2015). The extent and consequences of <em>p<\/em>-hacking in science. <em>PLoS Biol, 13<\/em>(3): e1002106. doi:10.1371\/journal.pbio.1002106 <a href=\"#return-footnote-1006-9\" class=\"return-footnote\" aria-label=\"Return to footnote 9\">&crarr;<\/a><\/li><li id=\"footnote-1006-10\">Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., \u2026 van\u2019t Veer, A. (2014). The replication recipe: What makes for a convincing replication? <em>Journal of Experimental Social Psychology, 50,<\/em> 217-224. doi:10.1016\/j.jesp.2013.10.005 <a href=\"#return-footnote-1006-10\" class=\"return-footnote\" aria-label=\"Return to footnote 10\">&crarr;<\/a><\/li><li id=\"footnote-1006-11\">Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., \u2026 Yarkoni, T. (2015). Promoting an open research culture. <em>Science, 348<\/em>(6242), 1422-1425. 
doi: 10.1126\/science.aab2374 <a href=\"#return-footnote-1006-11\" class=\"return-footnote\" aria-label=\"Return to footnote 11\">&crarr;<\/a><\/li><\/ol><\/div><div class=\"glossary\"><span class=\"screen-reader-text\" id=\"definition\">definition<\/span><template id=\"term_1006_1222\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_1006_1222\"><div tabindex=\"-1\"><p>The inability of researchers to replicate earlier research findings.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_1006_1074\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_1006_1074\"><div tabindex=\"-1\"><p>Hypothesizing after the results are known.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><template id=\"term_1006_1061\"><div class=\"glossary__definition\" role=\"dialog\" data-id=\"term_1006_1061\"><div tabindex=\"-1\"><p>Practices that increase the transparency and openness of the scientific enterprise. 
Examples include the&nbsp;pre-registration of hypotheses and the sharing of raw data and research materials.<\/p>\n<\/div><button><span aria-hidden=\"true\">&times;<\/span><span class=\"screen-reader-text\">Close definition<\/span><\/button><\/div><\/template><\/div>","protected":false},"author":123,"menu_order":4,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-1006","chapter","type-chapter","status-publish","hentry"],"part":989,"_links":{"self":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapters\/1006","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/wp\/v2\/users\/123"}],"version-history":[{"count":4,"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapters\/1006\/revisions"}],"predecessor-version":[{"id":1496,"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapters\/1006\/revisions\/1496"}],"part":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/parts\/989"}],"metadata":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapters\/1006\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/wp\/v2\/media?parent=1006"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/pressbooks\/v2\/chapter-type?post=1006"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/opentextbc.ca\/researchmethods\/wp-json\/wp\/v2\/contributor?post=1006"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/opentextbc.ca\/researchmet
hods\/wp-json\/wp\/v2\/license?post=1006"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}