Started the day finally watching Eric Rodenbeck's 2014 Eyeo talk, which I had unfortunately missed seeing in person last year but had heard I had to see. Very glad that I did. It seems quite honest and humble, which are two of the things that I value about the Eyeo community.
Next up were a few articles on uncertainty sent my way by Aran Lunzer: The Truth Wears Off, about why scientific results fail to be replicable; Revised Standards for Statistical Evidence; and The Problem with P Values: How Significant Are They Really?. These also reminded me of the 2010 paper on Graphical Inference for InfoVis, which tested how reliably we can tell the difference between meaningful data and noise in data visualizations, and of this year's InfoVis paper on encoding uncertainty into visualizations showing the results of statistical tests.
Following up on the first article, I found Ioannidis' "Why Most Published Research Findings Are False". The abstract states that "Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias." Wow.
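The arithmetic behind that claim is surprisingly simple. Here is a minimal sketch of the standard Bayesian false-discovery calculation that drives Ioannidis' argument: if only a small fraction of tested hypotheses are actually true, and studies are underpowered, then most "significant" findings are false positives. The function name and the example numbers (10% true hypotheses, 20% power, the usual 5% alpha) are my own illustrative assumptions, not figures from the paper.

```python
def positive_predictive_value(prior, power, alpha):
    """Probability a statistically significant finding is actually true.

    prior: fraction of tested hypotheses that are really true
    power: chance a study detects a true effect (1 - beta)
    alpha: false-positive rate of the significance test
    """
    true_positives = prior * power           # real effects found
    false_positives = (1 - prior) * alpha    # null effects that pass anyway
    return true_positives / (true_positives + false_positives)

# Illustrative (assumed) numbers: 10% of hypotheses true, 20% power, alpha = 0.05
ppv = positive_predictive_value(prior=0.1, power=0.2, alpha=0.05)
print(f"{ppv:.2f}")  # about 0.31: under these assumptions, most
                     # significant findings are false
```

With those inputs, fewer than a third of significant results reflect real effects, which is the sense in which a claim can be "more likely false than true" even when every individual test is run correctly.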