References
1. Albanese, D, Filosi, M, Visintainer, R, Riccadonna, S, Jurman, G, and Furlanello, C. Minerva and minepy: A C engine for the MINE suite and its R, Python and MATLAB wrappers. Bioinformatics bts707, 2012.
2. Albanese, D, Filosi, M, Visintainer, R, Riccadonna, S, Jurman, G, and Furlanello, C. Minerva and minepy: A C engine for the MINE suite and its R, Python and MATLAB wrappers. Bioinformatics bts707, 2012.
3. Allaire, J, Horner, J, Xie, Y, Marti, V, and Porte, N. Markdown: Render markdown with the C library ’sundown’. 2019. Available from: https://CRAN.R-project.org/package=markdown
4. Allen, MJ and Yen, WM. Introduction to Measurement Theory. 1 edition. Long Grove, Ill: Waveland Pr Inc, 2001.
5. Allen, M, Poggiali, D, Whitaker, K, Marshall, TR, and Kievit, R. Raincloud Plots Tutorials and Codebase. 2018.
6. Allen, M, Poggiali, D, Whitaker, K, Marshall, TR, and Kievit, RA. Raincloud plots: A multi-platform tool for robust data visualization. Wellcome Open Research 4: 63, 2019.
7. Altmann, T, Bodensteiner, J, Dankers, C, Dassen, T, Fritz, N, Gruber, S, et al. Limitations of Interpretable Machine Learning Methods. 2019.
8. Amrhein, V, Trafimow, D, and Greenland, S. Inferential Statistics as Descriptive Statistics: There Is No Replication Crisis if We Don’t Expect Replication. The American Statistician 73: 262–270, 2019.
9. Angrist, JD and Pischke, J-S. Mastering ’metrics: The path from cause to effect. Princeton; Oxford: Princeton University Press, 2015.
10. Anvari, F and Lakens, D. Using Anchor-Based Methods to Determine the Smallest Effect Size of Interest. 2019.
11. Barker, RJ and Schofield, MR. Inference About Magnitudes of Effects. International Journal of Sports Physiology and Performance 3: 547–557, 2008.
12. Barron, JT. A General and Adaptive Robust Loss Function. arXiv:1701.03077 [cs, stat], 2019. Available from: http://arxiv.org/abs/1701.03077
13. Batterham, AM and Hopkins, WG. Making Meaningful Inferences About Magnitudes. International Journal of Sports Physiology and Performance 1: 50–57, 2006.
14. Beaujean, AA. Latent variable modeling using R: A step by step guide. New York: Routledge/Taylor & Francis Group, 2014.
15. Biecek, P and Burzykowski, T. Predictive Models: Explore, Explain, and Debug. 2019.
16. Binmore, K. Rational Decisions. Fourth Impression edition. Princeton, NJ: Princeton University Press, 2011.
17. Bischl, B, Lang, M, Kotthoff, L, Schiffner, J, Richter, J, Studerus, E, et al. mlr: Machine learning in R. Journal of Machine Learning Research 17: 1–5, 2016. Available from: http://jmlr.org/papers/v17/15-066.html
18. Bischl, B, Lang, M, Richter, J, Bossek, J, Horn, D, and Kerschke, P. ParamHelpers: Helpers for parameters in black-box optimization, tuning and machine learning. 2020. Available from: https://CRAN.R-project.org/package=ParamHelpers
19. Bischl, B, Richter, J, Bossek, J, Horn, D, Thomas, J, and Lang, M. MlrMBO: A modular framework for model-based optimization of expensive black-box functions. arXiv preprint arXiv:1703.03373, 2017.
20. Bland, JM and Altman, DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet (London, England) 1: 307–310, 1986.
21. Borg, DN, Minett, GM, Stewart, IB, and Drovandi, CC. Bayesian Methods Might Solve the Problems with Magnitude-based Inference. Medicine & Science in Sports & Exercise 50: 2609–2610, 2018.
22. Borsboom, D. Measuring the mind: Conceptual issues in modern psychometrics. Cambridge: Cambridge University Press, 2009.
23. Borsboom, D. Latent Variable Theory. Measurement: Interdisciplinary Research & Perspective 6: 25–53, 2008.
24. Borsboom, D, Mellenbergh, GJ, and van Heerden, J. The theoretical status of latent variables. Psychological Review 110: 203–219, 2003.
25. Botchkarev, A. A New Typology Design of Performance Metrics to Measure Errors in Machine Learning Regression Algorithms. Interdisciplinary Journal of Information, Knowledge, and Management 14: 045–076, 2019.
26. Breheny, P and Burchett, W. Visualization of regression models using visreg. The R Journal 9: 56–71, 2017.
27. Breiman, L. Statistical Modeling: The Two Cultures. Statistical Science 16: 199–215, 2001.
28. Buchheit, M and Rabbani, A. The 30-15 Intermittent Fitness Test Versus the Yo-Yo Intermittent Recovery Test Level 1: Relationship and Sensitivity to Training. International Journal of Sports Physiology and Performance 9: 522–524, 2014.
29. Caldwell, AR and Cheuvront, SN. Basic statistical considerations for physiology: The journal Temperature toolbox. Temperature 1–30, 2019.
30. Canty, A and Ripley, BD. Boot: Bootstrap R (S-Plus) Functions. 2017.
31. Carsey, T and Harden, J. Monte Carlo Simulation and Resampling Methods for Social Science. 1 edition. Los Angeles: Sage Publications, Inc, 2013.
32. Casalicchio, G, Bossek, J, Lang, M, Kirchhoff, D, Kerschke, P, Hofner, B, et al. OpenML: An R package to connect to the machine learning platform OpenML. Computational Statistics 1–15, 2017.
33. Chai, T and Draxler, RR. Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geoscientific Model Development 7: 1247–1250, 2014.
34. Clarke, DC and Skiba, PF. Rationale and resources for teaching the mathematical modeling of athletic training and performance. Advances in Physiology Education 37: 134–152, 2013.
35. Cohen, J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, N.J: L. Erlbaum Associates, 1988.
36. Cumming, G. The New Statistics: Why and How. Psychological Science 25: 7–29, 2014.
37. Curran-Everett, D. Magnitude-based Inference: Good Idea but Flawed Approach. Medicine & Science in Sports & Exercise 50: 2164–2165, 2018.
38. Dankel, SJ and Loenneke, JP. A Method to Stop Analyzing Random Error and Start Analyzing Differential Responders to Exercise. Sports Medicine, 2019.
39. Davison, AC and Hinkley, DV. Bootstrap methods and their applications. Cambridge: Cambridge University Press, 1997. Available from: http://statwww.epfl.ch/davison/BMA/
40. Davison, AC and Hinkley, DV. Bootstrap Methods and Their Applications. Cambridge: Cambridge University Press, 1997.
41. Dienes, Z. Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. 2008 edition. New York: Red Globe Press, 2008.
42. Efron, B. Bayesians, Frequentists, and Scientists. Journal of the American Statistical Association 100: 1–5, 2005.
43. Efron, B and Hastie, T. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. 1 edition. New York, NY: Cambridge University Press, 2016.
44. Estrada, E, Ferrer, E, and Pardo, A. Statistics for Evaluating Pre-post Change: Relation Between Change in the Distribution Center and Change in the Individual Scores. Frontiers in Psychology 9, 2019.
45. Everitt, B and Hothorn, T. An introduction to applied multivariate analysis with R. New York: Springer, 2011.
46. Finch, WH and French, BF. Latent variable modeling with R. New York: Routledge, Taylor & Francis Group, 2015.
47. Fisher, AJ, Medaglia, JD, and Jeronimus, BF. Lack of group-to-individual generalizability is a threat to human subjects research. Proceedings of the National Academy of Sciences 115: E6106–E6115, 2018.
48. Foreman, JW. Data smart: Using data science to transform information into insight. Hoboken, New Jersey: John Wiley & Sons, 2014.
49. Fox, J. Effect displays in R for generalised linear models. Journal of Statistical Software 8: 1–27, 2003. Available from: http://www.jstatsoft.org/v08/i15/
50. Fox, J and Hong, J. Effect displays in R for multinomial and proportional-odds logit models: Extensions to the effects package. Journal of Statistical Software 32: 1–24, 2009. Available from: http://www.jstatsoft.org/v32/i01/
51. Fox, J and Weisberg, S. Visualizing fit and lack of fit in complex regression models with predictor effect plots and partial residuals. Journal of Statistical Software 87: 1–27, 2018. Available from: https://www.jstatsoft.org/v087/i09
52. Fox, J, Weisberg, S, and Price, B. CarData: Companion to applied regression data sets. 2019. Available from: https://CRAN.R-project.org/package=carData
53. Friedman, J, Hastie, T, and Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33: 1–22, 2010.
54. Gelman, A. Causality and Statistical Learning. American Journal of Sociology 117: 955–966, 2011.
55. Gelman, A and Greenland, S. Are confidence intervals better termed “uncertainty intervals”? BMJ l5381, 2019.
56. Gelman, A and Hennig, C. Beyond subjective and objective in statistics. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180: 967–1033, 2017.
57. Giavarina, D. Understanding Bland Altman analysis. Biochemia Medica 25: 141–151, 2015.
58. Gigerenzer, G, Hertwig, R, and Pachur, T. Heuristics: The Foundations of Adaptive Behavior. Reprint edition. Oxford University Press, 2015.
59. Glazier, PS and Mehdizadeh, S. Challenging Conventional Paradigms in Applied Sports Biomechanics Research. Sports Medicine, 2018.
60. Goldstein, A, Kapelner, A, Bleich, J, and Pitkin, E. Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation. arXiv:1309.6392 [stat], 2013. Available from: http://arxiv.org/abs/1309.6392
61. Greenwell, B, Boehmke, B, and Gray, B. Vip: Variable importance plots. 2020. Available from: https://CRAN.R-project.org/package=vip
62. Greenwell, BM. Pdp: An R package for constructing partial dependence plots. The R Journal 9: 421–436, 2017. Available from: https://journal.r-project.org/archive/2017/RJ-2017-016/index.html
63. Greenwell, BM. Pdp: An R Package for Constructing Partial Dependence Plots. The R Journal 9: 421–436, 2017.
64. Hamner, B and Frasco, M. Metrics: Evaluation metrics for machine learning. 2018. Available from: https://CRAN.R-project.org/package=Metrics
65. Hastie, T, Tibshirani, R, and Friedman, JH. The elements of statistical learning: Data mining, inference, and prediction. 2nd ed. New York, NY: Springer, 2009.
66. Heckman, JJ. Rejoinder: Response to Sobel. Sociological Methodology 35: 135–150, 2005.
67. Hecksteden, A, Kraushaar, J, Scharhag-Rosenberger, F, Theisen, D, Senn, S, and Meyer, T. Individual response to exercise training - a statistical perspective. Journal of Applied Physiology 118: 1450–1459, 2015.
68. Hecksteden, A, Pitsch, W, Rosenberger, F, and Meyer, T. Repeated testing for the assessment of individual response to exercise training. Journal of Applied Physiology 124: 1567–1579, 2018.
69. Henry, L and Wickham, H. Purrr: Functional programming tools. 2020. Available from: https://CRAN.R-project.org/package=purrr
70. Henry, L, Wickham, H, and Chang, W. Ggstance: Horizontal ’ggplot2’ components. 2020. Available from: https://CRAN.R-project.org/package=ggstance
71. Hernan, MA. Causal Knowledge as a Prerequisite for Confounding Evaluation: An Application to Birth Defects Epidemiology. American Journal of Epidemiology 155: 176–184, 2002.
72. Hernan, MA and Cole, SR. Invited Commentary: Causal Diagrams and Measurement Bias. American Journal of Epidemiology 170: 959–962, 2009.
73. Hernán, MA. Does water kill? A call for less casual causal inferences. Annals of epidemiology 26: 674–680, 2016.
74. Hernán, MA. Causal Diagrams: Draw Your Assumptions Before Your Conclusions. edX course PH559x, 2017. Available from: https://courses.edx.org/courses/course-v1:HarvardX+PH559x+3T2017/course/
75. Hernán, MA. The C-Word: Scientific Euphemisms Do Not Improve Causal Inference From Observational Data. American Journal of Public Health 108: 616–619, 2018.
76. Hernán, MA, Hsu, J, and Healy, B. A Second Chance to Get Causal Inference Right: A Classification of Data Science Tasks. CHANCE 32: 42–49, 2019.
77. Hernán, MA and Robins, J. Causal Inference. Boca Raton: Chapman & Hall/CRC.
78. Hernán, MA and Taubman, SL. Does obesity shorten life? The importance of well-defined interventions to answer causal questions. International Journal of Obesity 32: S8–S14, 2008.
79. Hesterberg, TC. What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum. The American Statistician 69: 371–386, 2015.
80. Hocking, TD. Directlabels: Direct labels for multicolor plots. 2020. Available from: https://CRAN.R-project.org/package=directlabels
81. Hopkins, W. Spreadsheets for analysis of validity and reliability. Sportscience 9, 2015.
82. Hopkins, W and Batterham, A. The Vindication of Magnitude-Based Inference. Sportscience 12, 2018.
83. Hopkins, WG. Measures of Reliability in Sports Medicine and Science. Sports Medicine 30: 1–15, 2000.
84. Hopkins, WG. Bias in Bland-Altman but not Regression Validity Analyses. Sportscience.org, 2004. Available from: https://sportsci.org/jour/04/wghbias.htm
85. Hopkins, WG. Understanding Statistics by Using Spreadsheets to Generate and Analyze Samples. Sportscience.org, 2007. Available from: https://www.sportsci.org/2007/wghstats.htm
86. Hopkins, WG. A Socratic Dialogue on Comparison of Measures. Sportscience.org, 2010. Available from: http://www.sportsci.org/2010/wghmeasures.htm
87. Hopkins, WG. How to Interpret Changes in an Athletic Performance Test. Sportscience 8: 1–7, 2004.
88. Hopkins, WG. New View of Statistics: Effect Magnitudes., 2006.Available from: https://www.sportsci.org/resource/stats/effectmag.html
89. Hopkins, WG. Individual responses made easy. Journal of Applied Physiology 118: 1444–1446, 2015.
90. Hopkins, WG, Marshall, SW, Batterham, AM, and Hanin, J. Progressive Statistics for Studies in Sports Medicine and Exercise Science. Medicine & Science in Sports & Exercise 41: 3–13, 2009.
91. Twisk, J, Bosman, L, Hoekstra, T, Rijnhart, J, Welten, M, and Heymans, M. Different ways to estimate treatment effects in randomised controlled trials. Contemporary Clinical Trials Communications 10: 80–85, 2018. Available from: https://linkinghub.elsevier.com/retrieve/pii/S2451865417301849
92. James, G, Witten, D, Hastie, T, and Tibshirani, R. An Introduction to Statistical Learning: With Applications in R. 1st ed. 2013, Corr. 7th printing 2017 edition. New York: Springer, 2017.
93. Jiménez-Reyes, P, Samozino, P, Brughelli, M, and Morin, J-B. Effectiveness of an Individualized Training Based on Force-Velocity Profiling during Jumping. Frontiers in Physiology 7, 2017.
94. Jiménez-Reyes, P, Samozino, P, and Morin, J-B. Optimized training for jumping performance using the force-velocity imbalance: Individual adaptation kinetics. PLOS ONE 14: e0216681, 2019.
95. Jovanovic, M. shorts: Short sprints. 2020. Available from: https://mladenjovanovic.github.io/shorts/
96. Jovanovic, M and Hemingway, BS. dorem: Dose response modeling. 2020. Available from: https://dorem.net
97. Jovanović, M. bmbstats: Bootstrap magnitude-based statistics. Belgrade, Serbia, 2020. Available from: https://github.com/mladenjovanovic/bmbstats
98. Jovanović, M. vjsim: Vertical jump simulator. 2020. Available from: https://mladenjovanovic.github.io/vjsim/
99. Jovanović, M. Extending the Classical Test Theory with Circular Performance Model. Complementary Training, 2020.
100. Kabacoff, R. R in action: Data analysis and graphics with R. Second edition. Shelter Island: Manning, 2015.
101. Keogh, RH, Shaw, PA, Gustafson, P, Carroll, RJ, Deffner, V, Dodd, KW, et al. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 1-Basic theory and simple methods of adjustment. Statistics in Medicine, 2020.
102. King, MT. A point of minimal important difference (MID): A critique of terminology and methods. Expert Review of Pharmacoeconomics & Outcomes Research 11: 171–184, 2011.
103. Kleinberg, J, Liang, A, and Mullainathan, S. The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness. arXiv:1706.06974 [cs, stat], 2017. Available from: http://arxiv.org/abs/1706.06974
104. Kleinberg, S. Causality, probability, and time. 2018.
105. Kleinberg, S. Why: A Guide to Finding and Using Causes. 1 edition. Beijing; Boston: O’Reilly Media, 2015.
106. Kruschke, JK. Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General 142: 573–603, 2013.
107. Kruschke, JK and Liddell, TM. Bayesian data analysis for newcomers. Psychonomic Bulletin & Review 25: 155–177, 2018.
108. Kruschke, JK and Liddell, TM. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review 25: 178–206, 2018.
109. Kuhn, M. Caret: Classification and regression training. 2020. Available from: https://CRAN.R-project.org/package=caret
110. Kuhn, M and Johnson, K. Feature Engineering and Selection: A Practical Approach for Predictive Models. Milton: CRC Press LLC, 2019.
111. Kuhn, M and Johnson, K. Applied Predictive Modeling. 1st ed. 2013, Corr. 2nd printing 2016 edition. New York: Springer, 2018.
112. Kuhn, M, Wing, J, Weston, S, Williams, A, Keefer, C, Engelhardt, A, et al. Caret: Classification and Regression Training. 2018.
113. Lakens, D. Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses. Social Psychological and Personality Science 8: 355–362, 2017.
114. Lakens, D, Scheel, AM, and Isager, PM. Equivalence Testing for Psychological Research: A Tutorial. Advances in Methods and Practices in Psychological Science 1: 259–269, 2018.
115. Lang, KM, Sweet, SJ, and Grandfield, EM. Getting beyond the Null: Statistical Modeling as an Alternative Framework for Inference in Developmental Science. Research in Human Development 14: 287–304, 2017.
116. Lang, M, Binder, M, Richter, J, Schratz, P, Pfisterer, F, Coors, S, et al. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 2019. Available from: https://joss.theoj.org/papers/10.21105/joss.01903
117. Lang, M, Kotthaus, H, Marwedel, P, Weihs, C, Rahnenfuehrer, J, and Bischl, B. Automatic model selection for high-dimensional survival analysis. Journal of Statistical Computation and Simulation 85: 62–76, 2014.
118. Lantz, B. Machine learning with R: Expert techniques for predictive modeling. 2019.
119. Lederer, DJ, Bell, SC, Branson, RD, Chalmers, JD, Marshall, R, Maslove, DM, et al. Control of Confounding and Reporting of Results in Causal Inference Studies. Guidance for Authors from Editors of Respiratory, Sleep, and Critical Care Journals. Annals of the American Thoracic Society 16: 22–28, 2019.
120. Lederer, W and Küchenhoff, H. A short introduction to the SIMEX and MCSIMEX. R News 6, 2006.
121. Ludbrook, J. Comparing methods of measurement. Clinical and Experimental Pharmacology and Physiology 24: 193–203, 1997.
122. Ludbrook, J. Statistical Techniques For Comparing Measurers And Methods Of Measurement: A Critical Review. Clinical and Experimental Pharmacology and Physiology 29: 527–536, 2002.
123. Ludbrook, J. Linear regression analysis for comparing two measurers or methods of measurement: But which regression? Clinical and Experimental Pharmacology and Physiology 37: 692–699, 2010.
124. Ludbrook, J. A primer for biomedical scientists on how to execute Model II linear regression analysis. Clinical and Experimental Pharmacology and Physiology 39: 329–335, 2012.
125. Lübke, K, Gehrke, M, Horst, J, and Szepannek, G. Why We Should Teach Causal Inference: Examples in Linear Regression With Simulated Data. Journal of Statistics Education 1–7, 2020.
126. Makowski, D, Ben-Shachar, M, and Lüdecke, D. bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework. Journal of Open Source Software 4: 1541, 2019.
127. Makowski, D, Ben-Shachar, MS, Chen, SA, and Lüdecke, D. Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 10: 2767, 2019.
128. Makowski, D, Ben-Shachar, MS, and Lüdecke, D. BayestestR: Describing effects and their uncertainty, existence and significance within the Bayesian framework. Journal of Open Source Software 4: 1541, 2019. Available from: https://joss.theoj.org/papers/10.21105/joss.01541
129. Makowski, D, Ben-Shachar, MS, and Lüdecke, D. Understand and describe bayesian models and posterior distributions using bayestestR. CRAN, 2019.
130. McElreath, R. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. 1 edition. Boca Raton: Chapman and Hall/CRC, 2015.
131. McGraw, KO and Wong, SP. A common language effect size statistic. Psychological Bulletin 111: 361–365, 1992.
132. Miller, T. Explanation in Artificial Intelligence: Insights from the Social Sciences. arXiv:1706.07269 [cs], 2017. Available from: http://arxiv.org/abs/1706.07269
133. Mitchell, S. Unsimple truths: Science, complexity, and policy. Paperback ed. Chicago: The University of Chicago Press, 2012.
134. Mitchell, SD. Integrative Pluralism. Biology & Philosophy 17: 55–70, 2002.
135. Molenaar, PCM. A Manifesto on Psychology as Idiographic Science: Bringing the Person Back Into Scientific Psychology, This Time Forever. Measurement: Interdisciplinary Research & Perspective 2: 201–218, 2004.
136. Molenaar, PCM and Campbell, CG. The New Person-Specific Paradigm in Psychology. Current Directions in Psychological Science 18: 112–117, 2009.
137. Molnar, C. Interpretable Machine Learning. Leanpub, 2018.
138. Molnar, C, Bischl, B, and Casalicchio, G. Iml: An R package for Interpretable Machine Learning. JOSS 3: 786, 2018.
139. Morey, RD, Hoekstra, R, Rouder, JN, Lee, MD, and Wagenmakers, E-J. The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review 23: 103–123, 2016.
140. Mullineaux, DR, Barnes, CA, and Batterham, AM. Assessment of Bias in Comparing Measurements: A Reliability Example. Measurement in Physical Education and Exercise Science 3: 195–205, 1999.
141. Müller, K and Wickham, H. Tibble: Simple data frames. 2020. Available from: https://CRAN.R-project.org/package=tibble
142. Nevill, AM, Williams, AM, Boreham, C, Wallace, ES, Davison, GW, Abt, G, et al. Can we trust “Magnitude-based inference”? Journal of Sports Sciences 36: 2769–2770, 2018.
143. Norman, GR, Gwadry Sridhar, F, Guyatt, GH, and Walter, SD. Relation of Distribution- and Anchor-Based Approaches in Interpretation of Changes in Health-Related Quality of Life. Medical Care 39: 1039–1047, 2001.
144. Novick, MR. The axioms and principal results of classical test theory. Journal of Mathematical Psychology 3: 1–18, 1966.
145. O’Hagan, T. Dicing with the unknown. Significance 1: 132–133, 2004.
146. Page, SE. The Model Thinker: What You Need to Know to Make Data Work for You. Basic Books, 2018.
147. Pearl, J. Causal inference in statistics: An overview. Statistics Surveys 3: 96–146, 2009.
148. Pearl, J. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM 62: 54–60, 2019.
149. Pearl, J, Glymour, M, and Jewell, NP. Causal Inference in Statistics: A Primer. 1 edition. Chichester, West Sussex: Wiley, 2016.
150. Pearl, J and Mackenzie, D. The Book of Why: The New Science of Cause and Effect. 1 edition. New York: Basic Books, 2018.
151. Pinheiro, J, Bates, D, DebRoy, S, Sarkar, D, and R Core Team. nlme: Linear and nonlinear mixed effects models. 2020. Available from: https://CRAN.R-project.org/package=nlme
152. Probst, P, Au, Q, Casalicchio, G, Stachl, C, and Bischl, B. Multilabel classification with R package mlr. arXiv preprint arXiv:1703.08991, 2017.
153. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing, 2018.
154. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing, 2020. Available from: https://www.R-project.org/
155. Reshef, DN, Reshef, YA, Finucane, HK, Grossman, SR, McVean, G, Turnbaugh, PJ, et al. Detecting Novel Associations in Large Data Sets. Science 334: 1518–1524, 2011.
156. Revelle, W. Psych: Procedures for psychological, psychometric, and personality research. Evanston, Illinois: Northwestern University, 2019. Available from: https://CRAN.R-project.org/package=psych
157. Ribeiro, MT, Singh, S, and Guestrin, C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv:1602.04938 [cs, stat], 2016. Available from: http://arxiv.org/abs/1602.04938
158. Rohrer, JM. Thinking Clearly About Correlations and Causation: Graphical Causal Models for Observational Data. Advances in Methods and Practices in Psychological Science 1: 27–42, 2018.
159. Rousselet, GA, Pernet, CR, and Wilcox, RR. A practical introduction to the bootstrap: A versatile method to make inferences by using data-driven simulations. 2019.
160. Rousselet, GA, Pernet, CR, and Wilcox, RR. The percentile bootstrap: A teaser with step-by-step instructions in R. 2019.
161. Rousselet, GA, Pernet, CR, and Wilcox, RR. Beyond differences in means: Robust graphical methods to compare two groups in neuroscience. European Journal of Neuroscience 46: 1738–1748, 2017.
162. RStudio Team. RStudio: Integrated Development Environment for R. Boston, MA: RStudio, Inc., 2016.
163. Saddiki, H and Balzer, LB. A Primer on Causality in Data Science. arXiv:1809.02408 [stat], 2018. Available from: http://arxiv.org/abs/1809.02408
164. Sainani, KL. Clinical Versus Statistical Significance. PM&R 4: 442–445, 2012.
165. Sainani, KL. The Problem with "Magnitude-based Inference". Medicine and Science in Sports and Exercise 50: 2166–2176, 2018.
166. Sainani, KL, Lohse, KR, Jones, PR, and Vickers, A. Magnitude-Based Inference is Not Bayesian and is Not a Valid Method of Inference. Scandinavian Journal of Medicine & Science in Sports, 2019.
167. Samozino, P. A Simple Method for Measuring Force, Velocity and Power Capabilities and Mechanical Effectiveness During Sprint Running. In: Biomechanics of Training and Testing. Morin, J-B and Samozino, P, eds. Cham: Springer International Publishing, 2018. pp. 237–267.
168. Samozino, P. A Simple Method for Measuring Lower Limb Force, Velocity and Power Capabilities During Jumping. In: Biomechanics of Training and Testing. Morin, J-B and Samozino, P, eds. Cham: Springer International Publishing, 2018. pp. 65–96.
169. Samozino, P. Optimal Force-Velocity Profile in Ballistic Push-off: Measurement and Relationship with Performance. In: Biomechanics of Training and Testing. Morin, J-B and Samozino, P, eds. Cham: Springer International Publishing, 2018. pp. 97–119.
170. Samozino, P, Morin, J-B, Hintzy, F, and Belli, A. A simple method for measuring force, velocity and power output during squat jump. Journal of Biomechanics 41: 2940–2945, 2008.
171. Samozino, P, Morin, J-B, Hintzy, F, and Belli, A. Jumping ability: A theoretical integrative approach. Journal of Theoretical Biology 264: 11–18, 2010.
172. Samozino, P, Rejc, E, Di Prampero, PE, Belli, A, and Morin, J-B. Optimal Force-Velocity Profile in Ballistic Movements–Altius: Citius or Fortius? Medicine & Science in Sports & Exercise 44: 313–322, 2012.
173. Sarkar, D. Lattice: Multivariate data visualization with R. New York: Springer, 2008. Available from: http://lmdvr.r-forge.r-project.org
174. Savage, LJ. The Foundations of Statistics. 2nd Revised ed. edition. New York: Dover Publications, 1972.
175. Shang, Y. Measurement Error Adjustment Using the SIMEX Method: An Application to Student Growth Percentiles. Journal of Educational Measurement 49: 446–465, 2012.
176. Shaw, PA, Gustafson, P, Carroll, RJ, Deffner, V, Dodd, KW, Keogh, RH, et al. STRATOS guidance document on measurement error and misclassification of variables in observational epidemiology: Part 2-More complex methods of adjustment and advanced topics. Statistics in Medicine, 2020.
177. Shmueli, G. To Explain or to Predict? Statistical Science 25: 289–310, 2010.
178. Shrier, I and Platt, RW. Reducing bias through directed acyclic graphs. BMC Medical Research Methodology 8, 2008.
179. Swinton, PA, Hemingway, BS, Saunders, B, Gualano, B, and Dolan, E. A Statistical Framework to Interpret Individual Response to Intervention: Paving the Way for Personalized Nutrition and Exercise Prescription. Frontiers in Nutrition 5, 2018.
180. Tenan, M, Vigotsky, AD, and Caldwell, AR. On the Statistical Properties of the Dankel-Loenneke Method.
181. Textor, J, van der Zander, B, Gilthorpe, MS, Liśkiewicz, M, and Ellison, GTH. Robust causal inference using directed acyclic graphs: The R package “dagitty”. International Journal of Epidemiology dyw341, 2017.
182. Therneau, T and Atkinson, B. Rpart: Recursive partitioning and regression trees. 2019. Available from: https://CRAN.R-project.org/package=rpart
183. Turner, A, Brazier, J, Bishop, C, Chavda, S, Cree, J, and Read, P. Data Analysis for Strength and Conditioning Coaches: Using Excel to Analyze Reliability, Differences, and Relationships. Strength and Conditioning Journal 37: 76–83, 2015.
184. Vaughan, D and Kuhn, M. Hardhat: Construct modeling packages. 2020. Available from: https://CRAN.R-project.org/package=hardhat
185. Wagenmakers, E-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review 14: 779–804, 2007.
186. Wallace, M. Analysis in an imperfect world. Significance 17: 14–19, 2020.
187. Watts, DJ, Beck, ED, Bienenstock, EJ, Bowers, J, Frank, A, Grubesic, A, et al. Explanation, prediction, and causality: Three sides of the same coin? 2018.
188. Weinberg, G and McCann, L. Super thinking: The big book of mental models. New York: Portfolio/Penguin, 2019.
189. Welsh, AH and Knight, EJ. “Magnitude-based Inference”: A Statistical Review. Medicine & Science in Sports & Exercise 47: 874–884, 2015.
190. Wickham, H. Ggplot2: Elegant graphics for data analysis. Springer-Verlag New York, 2016. Available from: https://ggplot2.tidyverse.org
191. Wickham, H. Stringr: Simple, consistent wrappers for common string operations. 2019. Available from: https://CRAN.R-project.org/package=stringr
192. Wickham, H. Forcats: Tools for working with categorical variables (factors). 2020. Available from: https://CRAN.R-project.org/package=forcats
193. Wickham, H, Averick, M, Bryan, J, Chang, W, McGowan, LD, François, R, et al. Welcome to the tidyverse. Journal of Open Source Software 4: 1686, 2019.
194. Wickham, H, François, R, Henry, L, and Müller, K. Dplyr: A grammar of data manipulation. 2020. Available from: https://CRAN.R-project.org/package=dplyr
195. Wickham, H and Henry, L. Tidyr: Tidy messy data. 2020. Available from: https://CRAN.R-project.org/package=tidyr
196. Wickham, H, Hester, J, and Francois, R. Readr: Read rectangular text data. 2018. Available from: https://CRAN.R-project.org/package=readr
197. Wikipedia contributors. Causal model. 2019.
198. Wilcox, R, Peterson, TJ, and McNitt-Gray, JL. Data Analyses When Sample Sizes Are Small: Modern Advances for Dealing With Outliers, Skewed Distributions, and Heteroscedasticity. Journal of Applied Biomechanics 34: 258–261, 2018.
199. Wilcox, RR. Introduction to robust estimation and hypothesis testing. 4th edition. Waltham, MA: Elsevier, 2016.
200. Wilcox, RR and Rousselet, GA. A guide to robust statistical methods in neuroscience. bioRxiv, 2017.
201. Wilke, CO. Cowplot: Streamlined plot theme and plot annotations for ’ggplot2’. 2019. Available from: https://CRAN.R-project.org/package=cowplot
202. Wilke, CO. Ggridges: Ridgeline plots in ’ggplot2’. 2020. Available from: https://CRAN.R-project.org/package=ggridges
203. Willmott, C and Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research 30: 79–82, 2005.
204. Xie, Y. Dynamic documents with R and knitr. 2nd ed. Boca Raton, Florida: Chapman & Hall/CRC, 2015. Available from: https://yihui.org/knitr/
205. Xie, Y. Bookdown: Authoring books and technical documents with R markdown. Boca Raton, Florida: Chapman & Hall/CRC, 2016. Available from: https://github.com/rstudio/bookdown
206. Yarkoni, T and Westfall, J. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspectives on Psychological Science 12: 1100–1122, 2017.
207. Zhao, Q and Hastie, T. Causal Interpretations of Black-Box Models. Journal of Business & Economic Statistics 1–10, 2019.
208. Zhu, H. KableExtra: Construct complex table with ’kable’ and pipe syntax. 2019. Available from: https://CRAN.R-project.org/package=kableExtra