Simply Statistics: A statistics blog by Rafa Irizarry, Roger Peng, and Jeff Leek

A glass half full interpretation of the replicability of psychological science

tl;dr: 77% of replication effects from the psychology replication study were in (or above) the 95% prediction interval based on the original effect size. This isn't perfect and suggests (a) there is still room for improvement, (b) the scientists who did the replication study are pretty awesome at replicating, (c) we need a better definition of replication that respects uncertainty, but (d) the scientific sky isn't falling. We wrote this up in a paper on arXiv; the code is here.

A week or two ago a paper came out in Science on Estimating the reproducibility of psychological science. The basic idea behind the study was to take a sample of studies that appeared in a particular journal in 2008 and try to replicate each of them. Here I'm using the definition that reproducibility is the ability to recalculate all results given the raw data and code from a study, and replicability is the ability to re-do the study and get a consistent result.

The paper is pretty incredible and the authors did an amazing job of going back to the original sources and trying to be faithful to the original study designs. I have to admit that when I first heard about the study design I was incredibly pessimistic about the results (I suppose grouchy is a natural default state for many statisticians, especially those with sleep deprivation). I mean, 2008 was well before the push toward reproducibility had really taken off (Biostatistics was one of the first journals to adopt a policy on reproducible research, and that didn't happen until 2009). More importantly, the student researchers from those studies had possibly moved on, study populations may have changed, and there could be any number of minor variations in the study designs, and so forth. I thought the chances of getting any effects in the same range were probably pretty low.

So when the results were published I was pleasantly surprised. I wasn't the only one.

But that was definitely not the prevailing impression that the paper left on social and mass media. A lot of the discussion around the paper focused on the idea that only 36% of the studies had a p-value less than 0.05 in both the original and replication study. But many of the sample sizes were small and the effects were modest. So the first question I asked myself was, "Well, what would we expect to happen if we replicated these studies?" The original paper measured replicability in several ways and tried hard to calibrate expected coverage of confidence intervals for the measured effects.

Roger, Prasad, and I tried a slightly different approach. We estimated the 95% prediction interval for the replication effect given the original effect size.
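To make the idea concrete, here is a minimal sketch of how such an interval can be computed, assuming the effects are summarized as correlations and using the Fisher z-transformation; the function name and numbers below are made up for illustration, and the linked code contains the actual analysis.

```r
# Sketch: 95% prediction interval for a replication effect given the original
# effect, assuming both are reported as correlations (Fisher z-transformation).
pred_interval <- function(r_orig, n_orig, n_rep, level = 0.95) {
  z_orig <- atanh(r_orig)                              # original effect on the z scale
  se     <- sqrt(1 / (n_orig - 3) + 1 / (n_rep - 3))   # variation from both studies
  mult   <- qnorm(1 - (1 - level) / 2)
  tanh(z_orig + c(-1, 1) * mult * se)                  # back to the correlation scale
}

# Illustrative numbers: original r = 0.40 with n = 30, replication with n = 60
pred_interval(0.40, n_orig = 30, n_rep = 60)
```

The interval is wide because it accounts for sampling variation in both the original study and the replication, not just the replication.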

[Figure: replication effect estimates versus 95% prediction intervals based on the original effect sizes]

72% of the replication effects were within the 95% prediction interval, and 2 were above the interval (they showed a stronger signal in the replication than predicted from the original study). This definitely shows that there is still room for improvement in the replication of these studies: we would expect 95% of the effects to fall within the 95% prediction interval. But my opinion, at least, is that 72% (or 77% if you count the 2 above the P.I.) of studies falling in the prediction interval is (a) not bad and (b) a testament to the authors of the reproducibility paper and their efforts to get the studies right.

An important point here is that replication and reproducibility aren't the same thing. When reproducing a study we expect the numbers and figures to be exactly the same. But a replication involves re-collecting the data and is subject to variation, so we don't expect the answer to be exactly the same in the replication. This is of course made more confusing by regression to the mean, publication bias, and the garden of forking paths. Our use of a prediction interval measures both the variation expected in the original study and in the replication. One thing we noticed when re-analyzing the data is how many of the studies had very low sample sizes.

[Figure: sample sizes in the original and replication studies]

Sample sizes were generally bigger in the replication, but often very low regardless. This makes it more difficult to disentangle what didn't replicate from what is just expected variation for a small-sample study. The question remains whether such small studies should be trusted in general, but for the purposes of measuring replication they make the problem more difficult.
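Plugging some hypothetical numbers into the pred_interval sketch above shows why: with small samples the prediction interval is so wide that almost any replication result is consistent with the original.

```r
# Hypothetical original effect r = 0.40; only the sample sizes change.
pred_interval(0.40, n_orig = 20,  n_rep = 20)    # roughly (-0.24, 0.80)
pred_interval(0.40, n_orig = 100, n_rep = 100)   # roughly ( 0.14, 0.61)
```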

One thing I have been thinking about a lot, and that this study drove home, is that if we are measuring replication we need a definition that incorporates uncertainty directly. Suppose that you collect a data set D0 from an original study and D1 from a replication. Then the study replicates if D0 ~ F and D1 ~ F for some distribution F. Informally, if the data are generated from the same distribution in both experiments, then the study replicates. To get an estimate you apply an analysis pipeline p() to the data, e0 = p(D0). If the study is also reproducible then p() is the same for both studies, and p(D0) ~ G and p(D1) ~ G, subject to some conditions on p().

One interesting consequence of this definition is that each complete replication data set represents only a single data point for measuring replication. To measure replication with this definition you either need to make assumptions about the data-generating distributions for D0 and D1, or you need to perform a complete replication of a study many times to determine whether it replicates. However, it does mean that we can define replication even for studies with a very small number of replicates, since the data-generating distribution may be arbitrarily variable in each case.
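A toy simulation (not part of our analysis, just an illustration of the definition) makes the point: here D0 and D1 are drawn from the same hypothetical distribution F and run through the same pipeline p(), so the study replicates by construction, yet the two estimates still differ because of sampling variation.

```r
# Two data sets from the same distribution F, analyzed with the same pipeline p().
set.seed(1)
F_draw <- function(n) rnorm(n, mean = 0.3, sd = 1)  # hypothetical data-generating distribution
p      <- function(d) mean(d)                       # hypothetical analysis pipeline

D0 <- F_draw(25)              # original study
D1 <- F_draw(25)              # replication
c(e0 = p(D0), e1 = p(D1))     # same F, same p(), different numbers
```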

Regardless of this definition, I was excited that the OSF folks did the study and pulled it off as well as they did, and I was a bit bummed about the most common reaction. There is an easy narrative that "science is broken," which I don't think is a positive thing for a number of reasons. I love the way that {reproducibility/replicability/open science/open publication} are becoming more and more common, but I often think we fall into the same trap of wanting to report these results as clear cut, just as headlines exaggerate or oversimplify scientific discoveries. I'm excited to see how these kinds of studies look in 10 years when GitHub/open science/pre-prints/etc. are all standard.

Apple Music's Moment of Truth

Today is the day when Apple, Inc. learns whether its brand-new streaming music service, Apple Music, is going to be a major contributor to the bottom line or just another streaming service (JASS?). Apple Music launched 3 months ago and all new users are offered a 3-month free trial. Today, that free trial ends and the big question is how many people will start to pay for their subscription, as opposed to simply canceling it. My guess is that most people (> 50%) will opt to pay, but that's a complete guess. For what it's worth, I'll be paying for my subscription. After adding all this music to my library, I'd hate to see it all go away.

Back on August 18, 2015, consumer market research firm MusicWatch released a study that claimed, among other things, that

Among people who had tried Apple Music, 48 percent reported they are not currently using the service.

This would suggest that almost half of people who had signed up for the free trial period of Apple Music were not interested in using it further and would likely not pay for it once the trial ended. If it were true, it would be a blow to the newly launched service.

But how did MusicWatch arrive at its number? It claimed to have surveyed 5,000 people in its study. Shortly before the survey by MusicWatch was released, Apple claimed that about 11 million people had signed up for their new Apple Music service (because the service had just launched, everyone who had signed up was in the free trial period). Clearly, 5,000 people do not make up the entire population, so we have but a small sample of users.

What question was MusicWatch trying to answer? It seems they wanted to know the percentage of all people who had signed up for Apple Music who were still using the service. Can they make inferences about the entire population from the sample of 5,000?

If the sample is representative and the individuals are independent, we could use the number 48% as an estimate of the percentage in the population who no longer use the service. The press release from MusicWatch did not indicate any measure of uncertainty, so we don’t know how reliable the number is.
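For a sense of scale, here is a back-of-the-envelope calculation, assuming (and we cannot verify this) a simple random sample of independent respondents:

```r
# Rough 95% confidence interval for the MusicWatch estimate, assuming a
# simple random sample of independent respondents.
p_hat <- 0.48                             # reported share not currently using the service
n     <- 5000                             # reported sample size
se    <- sqrt(p_hat * (1 - p_hat) / n)    # standard error of a proportion
p_hat + c(-1, 1) * 1.96 * se              # roughly 0.466 to 0.494
```

So if the sample were representative, the estimate itself would be quite precise; sampling variability alone cannot explain much.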

Interestingly, soon after the MusicWatch survey was released, Apple released a statement to the publication The Verge, stating that 79% of users who had signed up were still using the service (i.e. only 21% had stopped using it, as opposed to 48% reported by MusicWatch). In other words, Apple just came out and gave us the truth! This was unusual because Apple typically does not make public statements about newly launched products. I just found this amusing because I’ve never been in a situation where I was trying to estimate a parameter and then someone later just told me what its value was.

If we believe that Apple and MusicWatch were measuring the same thing in their analyses (and it’s not clear that they were), then it would suggest that MusicWatch’s estimate of the population percentage (48%) was quite far off from the true value (21%). What would explain this large difference?

  1. Random variation. It's true that MusicWatch's survey was a small sample relative to the full population, but the sample was still large, at 5,000 people. Furthermore, the analysis was fairly simple (just taking the proportion of users still using the service), so the uncertainty associated with that estimate is unlikely to be that large.
  2. Selection bias. Recall that it’s not clear how MusicWatch sampled its respondents, but it’s possible that the way that they did it led them to capture a set of respondents who were less inclined to use Apple Music. Beyond this, we can’t really say more without knowing the details of the survey process.
  3. Respondents are not independent. It's possible that the survey respondents are not independent of each other. This would primarily affect the uncertainty about the estimate, making it larger than we might expect if the respondents were all independent. However, since we do not know what MusicWatch's uncertainty about their estimate was in the first place, it's difficult to tell if dependence between respondents could play a role. Apple's number, of course, has no uncertainty.
  4. Measurement differences. This is the big one, in my opinion. We don't know how either MusicWatch or Apple defined "still using the service". You could imagine a variety of ways to determine whether a person was still using the service. You could ask "Have you used it in the last week?" or perhaps "Did you use it yesterday?" Responses to these questions would be quite different and would likely lead to different overall percentages of usage.

We Used Data to Improve our HarvardX Courses: New Versions Start Oct 15

You can sign up following links here

Last semester we successfully [ran the second version](http://simplystatistics.org/2014/11/25/harvardx-biomedical-data-science-open-online-training-curriculum-launches-on-january-19/) of my Data Analysis course. To create the second version, the first was split into eight courses. Over 2,000 students successfully completed the first of these, but, as expected, the numbers were lower for the more advanced courses. We wanted to remove any structural problems keeping students from maximizing what they get from our courses, so we studied data on the assessment questions, including completion rates and times, and used the findings to make improvements. We also used qualitative data from the discussion board. The major changes to version 3 are the following:

  • In the first course, we no longer use R packages that Microsoft Windows users had trouble installing.
  • All courses are now designed to be completed in 4 weeks.
  • We added new assessment questions.
  • We improved the assessment questions determined to be problematic.
  • We split the two courses that students took the longest to complete into smaller modules. Students now have twice as much time to complete these.
  • We consolidated the case studies into one course.
  • We combined the materials from the statistics courses into a book, which you can download here. The material in the book matches the material taught in class, so you can use it to follow along.

You can enroll in any of the seven courses following the links below. We will be on the discussion boards starting October 15, and we hope to see you there.

  1. Statistics and R for the Life Sciences starts October 15.
  2. Introduction to Linear Models and Matrix Algebra starts November 15.
  3. Statistical Inference and Modeling for High-throughput Experiments starts December 15.
  4. High-Dimensional Data Analysis starts January 15.
  5. Introduction to Bioconductor: Annotation and Analysis of Genomes and Genomic Assays starts February 15.
  6. High-performance Computing for Reproducible Genomics starts March 15.
  7. Case Studies in Functional Genomics starts April 15.

The landing page for the series continues to be here.

Data Analysis for the Life Sciences - a book completely written in R markdown

The book Data Analysis for the Life Sciences is now available on Leanpub.

Data analysis is now part of practically every research project in the life sciences. In this book we use data and computer code to teach the necessary statistical concepts and programming skills to become a data analyst. Following in the footsteps of Stat Labs, instead of showing theory first and then applying it to toy examples, we start with actual applications and describe the theory as it becomes necessary to solve specific challenges. We use simulations and data analysis examples to teach statistical concepts. The book includes links to computer code that readers can use to program along as they read the book.

It includes the following chapters: Inference, Exploratory Data Analysis, Robust Statistics, Matrix Algebra, Linear Models, Inference for High-Dimensional Data, Statistical Modeling, Distance and Dimension Reduction, Practical Machine Learning, and Batch Effects.

The text was completely written in R markdown and every section contains a link to the document that was used to create that section. This means that you can use knitr to reproduce any section of the book on your own computer. You can also access all these markdown documents directly from GitHub. Please send a pull request if you fix a typo or other mistake! For now we are keeping the R markdown files for the exercises private since they contain the solutions. But you can see the solutions if you take our online course quizzes. If we find that most readers want access to the solutions, we will open them up as well.
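As a quick example of what that looks like (the file name below is a placeholder for whichever section's R markdown file you grab from GitHub):

```r
# Reproduce a section of the book from its R markdown source.
library(knitr)
knit("inference.Rmd")                 # runs the code and writes a markdown file
# or render it to HTML in one step:
# rmarkdown::render("inference.Rmd")
```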

The material is based on the online courses I have been teaching with Mike Love. As we created the courses, Mike and I wrote R markdown documents for the students and put them on GitHub. We then used jekyll to create a webpage with html versions of the markdown documents. Jeff then convinced us to publish it on Leanpub. So we wrote a shell script that compiled the entire book into a Leanpub directory, and after countless hours of editing and tinkering we have a 450+ page book with over 200 exercises. The entire book compiles from scratch in about 20 minutes. We hope you like it.

The Leek group guide to writing your first paper

I have written guides on reviewing papers, sharing data, and writing R packages. One thing I haven't touched on until now has been writing papers. Certainly for me, and I think for a lot of students, the hardest transition in graduate school is between taking classes and doing research.

There are several hard parts to this transition, including trying to find a problem, trying to find an advisor, and having a ton of unstructured time. One of the hardest things I've found is knowing (a) when to start writing your first paper and (b) how to do it. So I wrote a guide for students in my group:

https://github.com/jtleek/firstpaper

It covers how to write your first paper and might be useful for other folks as well, so I put it up on GitHub. Just like the other guides I've written, this is a very opinionated (read: doesn't apply to everyone) guide. I would also appreciate any feedback/pull requests people have.