19 Aug 2015
Everyone knows now that you have to correct for multiple testing when you calculate many p-values; otherwise this can happen:
http://xkcd.com/882/
One of the most popular ways to correct for multiple testing is to estimate or control the false discovery rate. The false discovery rate attempts to quantify the fraction of made discoveries that are false. If we call all p-values less than some threshold t significant, then, borrowing notation from this great introduction to false discovery rates:

FDR(t) = E[ F(t) / S(t) ]
So F(t) is the (unknown) total number of null hypotheses called significant and S(t) is the total number of hypotheses called significant. The FDR is the expected ratio of these two quantities, which, under certain assumptions can be approximated by the ratio of the expectations.
To get an estimate of the FDR we just need estimates for E[F(t)] and E[S(t)]. The latter is easy to estimate: it is just the total number of rejections (the number of p-values with p < t). If you assume the null p-values follow their expected uniform distribution, then E[F(t)] can be approximated as π0 × m × t, where π0 is the proportion of null hypotheses and m is the total number of hypotheses. So we need an estimate of π0. There are many ways to estimate this quantity, but it is almost always estimated from the full distribution of p-values computed in an experiment. The most popular estimator compares the fraction of p-values greater than some cutoff to the fraction you would expect if every single hypothesis were null; this ratio is approximately the fraction of null hypotheses.
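As a rough sketch, this estimator can be written in a couple of lines of R (the function name and the cutoff of 0.5 are my choices for illustration, not from the post):

```r
# Estimate the proportion of null hypotheses (pi0) from a vector of p-values.
# Above a cutoff lambda, null p-values are uniform, so we expect a fraction
# pi0 * (1 - lambda) of all p-values up there; solving for pi0 gives:
estimate_pi0 <- function(p, lambda = 0.5) {
  min(1, mean(p > lambda) / (1 - lambda))
}

# With all-null (uniform) p-values the estimate should be near 1:
set.seed(1)
p_null <- runif(10000)
estimate_pi0(p_null)  # close to 1
```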
Combining the above equation with our estimates for E[F(t)] and E[S(t)] we get:

FDR(t) ≈ π0 × m × t / #{p_i < t},

with π0 replaced by its estimate.
The q-value is a multiple testing analog of the p-value and is defined as the minimum estimated FDR over all thresholds at which the p-value would be called significant:

q(p) = min over t ≥ p of FDR(t)
This is of course a very loose description and you can get a more technical treatment here. But the main thing to notice is that the q-value depends on the estimated proportion of null hypotheses, which in turn depends on the distribution of the observed p-values. The smaller the estimated fraction of null hypotheses, the smaller the FDR estimate and the smaller the q-value. This suggests a way to make any p-value significant by altering its “testing partners”. Here is a quick example. Suppose that we have done a test and have a p-value of 0.8. Not super significant. Suppose we perform this test in conjunction with a number of hypotheses that are null, generating a p-value distribution like this.
Then you get a q-value greater than 0.99 as you would expect. But if you test that exact same p-value with a ton of other non-null hypotheses that generate tiny p-values in a distribution that looks like this:
Then you get a q-value of 0.0001 for that same p-value of 0.8. The reason is that the estimate of the fraction of null hypotheses goes essentially to zero, which drives down the q-value. You can do this with any p-value: if you make its testing partners have sufficiently low p-values, then the q-value will be as small as you like.
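To make this concrete, here is a self-contained base-R sketch, not the code from the post's gist: a rough q-value approximation that scales Benjamini-Hochberg adjusted p-values by the estimated null proportion (the cutoff of 0.5 and the scenario sizes are my choices, so the exact numbers will differ from the post):

```r
# A base-R sketch of a q-value calculation (not the Bioconductor 'qvalue'
# package): estimate pi0 from the p-value distribution, then scale
# Benjamini-Hochberg adjusted p-values by it.
qvalues <- function(p, lambda = 0.5) {
  pi0 <- min(1, mean(p > lambda) / (1 - lambda))  # estimated fraction of nulls
  pi0 * p.adjust(p, method = "BH")
}

set.seed(1)
p_target <- 0.8

# Scenario 1: p = 0.8 tested alongside 1000 null (uniform) p-values
q1 <- qvalues(c(p_target, runif(1000)))[1]           # close to 1

# Scenario 2: the same p = 0.8 tested alongside 1000 tiny p-values
q2 <- qvalues(c(p_target, runif(1000, 0, 1e-6)))[1]  # tiny, since pi0-hat ~ 0
```

In the first scenario the q-value for p = 0.8 stays near 1; in the second it collapses by orders of magnitude, purely because the estimated fraction of nulls does.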
A couple of things to note:
- Obviously doing this on purpose to change the significance of a calculated p-value is cheating and shouldn’t be done.
- For correctly calculated p-values on a related set of hypotheses this is actually a sensible property to have - if you have almost all very small p-values and one very large p-value, you are doing a set of tests where almost everything appears to be alternative and you should weight that in some sensible way.
- This is the reason that sometimes a “multiple testing adjusted” p-value (or q-value) is smaller than the p-value itself.
- This doesn’t affect non-adaptive FDR procedures - but those procedures still depend on the “testing partners” of any p-value through the total number of tests performed. This is why people talk about the so-called “multiple testing burden”. But that is a subject for a future post. It is also the reason non-adaptive procedures can be severely underpowered compared to adaptive procedures when the p-values are correct.
- I’ve appended the code to generate the histograms and calculate the q-values in this post in the following gist.
12 Aug 2015
I was asked to speak at the UCLA Department of Statistics Commencement Ceremony this past June. As one of the first graduates of that department back in 2003, I was tremendously honored to be invited to speak to the graduates. When I arrived I was just shocked at how much the department had grown. When I graduated I think there were no more than 10 of us between the PhD and Master’s programs. Now they have ~90 graduates per year with undergrad, Master’s and PhD. It was just stunning.
Here’s the text of what I said, which I think I mostly stuck to in the actual speech.
UCLA Statistics Graduation: Some thoughts on a career in statistics
When I asked Rick [Schoenberg] what I should talk about, he said to “talk for 95 minutes on asymptotic properties of maximum likelihood estimators under nonstandard conditions”. I thought, this is a great opportunity! I busted out Tom Ferguson’s book and went through my old notes. Here we go. Let X be a complete normed vector space….
I want to thank the department for inviting me here today. It’s always good to be back. I entered the UCLA stat department in 1999, only the second entering class, and graduated from UCLA Stat in 2003. Things were different then. Jan was the chair and there were not many classes so we could basically do whatever we wanted. Things are different now and that’s a good thing. Since 2003, I’ve been at the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health, where I was first a postdoctoral fellow and then joined the faculty. It’s been a wonderful place for me to grow up and I’ve learned a lot there.
It’s just an incredible time to be a statistician. You guys timed it just right. I’ve been lucky enough to witness two periods like this, the first being when I graduated from college at the height of the dot-com boom. Today, it’s not computer programming skills that the world needs, but rather statistical skills. I wish I were in your shoes today, just getting ready to start out. But since I’m not, I figured the best thing I could do is share some of the things I’ve learned and talk about the role that these things have played in my own life.
Know your edge: What’s the one thing that you know that no one else seems to know? You’re not a clone—you have original ideas and skills. You might think they’re not valuable but you’re wrong. Be proud of these ideas and use them to your advantage. As an example, I’ll give you my one thing. Right now, I believe the greatest challenge facing the field of statistics today is getting the entire world to know what we in this room already know. Data are everywhere today and the biggest barrier to progress is our collective inability to process and analyze those data to produce useful information. The need for the things that we know has absolutely exploded and we simply have not caught up. That’s why I created, along with Jeff Leek and Brian Caffo, the Johns Hopkins Data Science Specialization, which is currently the most successful massive open online course program ever. Our goal is to teach the entire world statistics, which we think is an essential skill. We’re not quite there yet, but—assuming you guys don’t steal my idea—I’m hopeful that we’ll get there sometime soon.
At some point the edge you have will no longer work: That sounds like a bad thing, but it’s actually good. If what you’re doing really matters, then at some point everyone will be doing it. So you’ll need to find something else. I’ve been confronted with this problem at least 3 times in my life so far. Before college, I was pretty good at the violin, and it opened a lot of doors for me. It got me into Yale. But when I got to Yale, I quickly realized that there were a lot of really good violinists there. Suddenly, my talent didn’t have so much value. This was when I started to pick up computer programming and in 1998 I learned an obscure little language called R. When I got to UCLA I realized I was one of the only people who knew R. So I started a little brown bag lunch series where I’d talk about some feature of R to whoever would show up (which wasn’t many people usually). Picking up on R early on turned out to be really important because it was a small community back then and it was easy to have a big impact. Also, as more and more people wanted to learn R, they’d usually call on me. It’s always nice to feel needed. Over the years, the R community exploded and R’s popularity got to the point where it was being talked about in the New York Times. But now you see the problem. Saying that you know R doesn’t exactly distinguish you anymore, so it’s time to move on again. These days, I’m realizing that the one useful skill that I have is the ability to make movies. Also, my experience being a performer on the violin many years ago is coming in handy. My ability to quickly record and edit movies was one of the key factors that enabled me to create an entire online data science program in 2 months last year.
Find the right people, and stick with them forever. Being a statistician means working with other people. Choose those people wisely and develop a strong relationship. It doesn’t matter how great the project is or how famous or interesting the other person is, if you can’t get along then bad things will happen. Statistics and data analysis is a highly verbal process that requires constant and very clear communication. If you’re uncomfortable with someone in any way, everything will suffer. Data analysis is unique in this way—our success depends critically on other people. I’ve only had a few collaborators in the past 12 years, but I love them like family. When I work with these people, I don’t necessarily know what will happen, but I know it will be good. In the end, I honestly don’t think I’ll remember the details of the work that I did, but I’ll remember the people I worked with and the relationships I built.
So I hope you weren’t expecting a new asymptotic theorem today, because this is pretty much all I’ve got. As you all go on to the next phase of your life, just be confident in your own ideas, be prepared to change and learn new things, and find the right people to do them with. Thank you.
12 Aug 2015
Biologists make wide use of correlation as a measure of reproducibility. Specifically, they quantify reproducibility with the correlation between measurements obtained from replicated experiments. For example, the ENCODE data standards document states
A typical R2 (Pearson) correlation of gene expression (RPKM) between two biological replicates, for RNAs that are detected in both samples using RPKM or read counts, should be between 0.92 to 0.98. Experiments with biological correlations that fall below 0.9 should either be repeated or explained.
However, for reasons I will explain here, correlation is not necessarily informative with regard to reproducibility. The mathematical results described below are not inconsequential theoretical details, and understanding them will help you assess new technologies, experimental procedures and computational methods.
Suppose you have collected data from an experiment
x1, x2,..., xn
and want to determine if a second experiment replicates these findings. For simplicity, we represent data from the second experiment as adding unbiased (averages out to 0) and statistically independent measurement error d to the first:
y1=x1+d1, y2=x2+d2, ... yn=xn+dn.
For us to claim reproducibility we want the differences
d1=y1-x1, d2=y2-x2,... ,dn=yn-xn
to be “small”. To give this some context, imagine the x and y are log scale (base 2) gene expression measurements which implies the d represent log fold changes. If these differences have a standard deviation of 1, it implies that fold changes of 2 are typical between replicates. If our replication experiment produces measurements that are typically twice as big or twice as small as the original, I am not going to claim the measurements are reproduced. However, as it turns out, such terrible reproducibility can still result in correlations higher than 0.92.
To someone basing their definition of correlation on current common language usage this may seem surprising, but to someone basing it on the mathematics, it is not. To see this, note that because d and x are independent, the definition of correlation gives:

cor(x, y) = 1 / sqrt(1 + var(d)/var(x))
This tells us that correlation summarizes the variability of d relative to the variability of x. Because of the wide range of gene expression values we observe in practice, the standard deviation of x can easily be as large as 3 (variance is 9). This implies we expect to see correlations as high as 1/sqrt(1+1/9) = 0.95, despite the lack of reproducibility when comparing x to y.
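A quick simulation confirms this (the mean of 8 and the sample size are my own illustrative choices; the SDs of 3 and 1 match the discussion above):

```r
# Simulate log (base 2) gene expression x with SD 3, and a "replicate"
# y = x + d with independent measurement error d of SD 1, so typical
# differences between replicates correspond to two-fold changes.
set.seed(1)
n <- 10000
x <- rnorm(n, mean = 8, sd = 3)
d <- rnorm(n, sd = 1)
y <- x + d

cor(x, y)        # about 1/sqrt(1 + 1/9) = 0.95, despite poor reproducibility
sqrt(mean(d^2))  # about 1: typical two-fold differences between replicates
```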
Note that using Spearman correlation does not fix this problem. A Spearman correlation of 1 tells us that the ranks of x and y are preserved, yet it does not summarize the actual differences. The problem comes down to the fact that we care about the variability of d, and correlation, Pearson or Spearman, does not provide an optimal summary. While correlation relates to the preservation of ranks, a much more appropriate summary of reproducibility is the distance between x and y, which is related to the standard deviation of the differences d. A very simple R command you can use to generate this summary statistic is:
sqrt(mean(d^2))
or the robust version:
median(abs(d)) ##multiply by 1.4826 for unbiased estimate of true sd
The equivalent suggestion for plots is to make an MA-plot instead of a scatterplot.
But aren’t correlations and distances directly related? Sort of, and this actually brings up another problem. If the x and y are standardized to have average 0 and standard deviation 1 then, yes, correlation and distance are directly related:

mean((x - y)^2) ≈ 2(1 - r), with r the correlation.
However, if instead x and y have different average values, which would put reproducibility into question, then distance is sensitive to this problem while correlation is not. If the standard deviation is 1, the formula becomes:

mean((x - y)^2) ≈ (mean(x) - mean(y))^2 + 2(1 - r)
Once we consider units (standard deviations different from 1) then the relationship becomes even more complicated. Two advantages of distance you should be aware of are:
- it is in the same units as the data, while correlation is unitless, which makes it hard to interpret and to select thresholds, and
- distance accounts for bias (differences in average), while correlation does not.
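The second point is easy to demonstrate with a toy example (the shift of 1 unit and the simulation parameters are arbitrary choices of mine):

```r
# A constant shift between experiments (a bias) leaves correlation untouched
# but shows up immediately in the distance-based summary.
set.seed(1)
x <- rnorm(1000, mean = 8, sd = 3)
y <- x + 1                # second experiment biased upward by 1 unit

cor(x, y)                 # exactly 1: correlation is blind to the shift
sqrt(mean((y - x)^2))     # 1: the distance reflects the bias
```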
A final important point relates to the use of correlation with data that is not approximately normal. The useful interpretation of correlation as a summary statistic stems from the bivariate normal approximation: for every standard unit increase in the first variable, the second variable increases r standard units, with r the correlation. A summary of this is here. However, when data is not normal this interpretation no longer holds. Furthermore, heavy tail distributions, which are common in genomics, can lead to instability. Here is an example of uncorrelated data with a single point added that leads to correlations close to 1. This is quite common with RNAseq data.
10 Aug 2015
For the last several years I have been collecting functions I routinely use during exploratory data analysis in a private R package. Mike Love and I used some of these in our HarvardX course and now, due to popular demand, I have created man pages and added the rafalib package to CRAN. Mike has made several improvements and added some functions of his own. Here are quick descriptions of the rafalib functions I use most:
mypar - Before making a plot in R I almost always type mypar(). This basically gets around the suboptimal defaults of par. For example, it makes the margins (mar, mgp) smaller and defines RColorBrewer colors as defaults. It is optimized for the RStudio window. Another advantage is that you can type mypar(3,2) instead of par(mfrow=c(3,2)). bigpar() is optimized for R presentations or PowerPoint slides.
as.fumeric - This function turns characters into factors and then into numerics. This is useful, for example, if you want to plot values x,y with colors defined by their corresponding categories saved in a character vector labs: plot(x,y,col=as.fumeric(labs)).
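If you don't have rafalib installed, the core idea is a one-liner (my sketch of the idea, not the package's exact implementation):

```r
# A minimal sketch of the idea behind as.fumeric: convert characters to a
# factor and then to integers, usable directly as a color index in plot().
as_fumeric_sketch <- function(x) as.numeric(as.factor(x))

labs <- c("treated", "control", "treated", "other")
as_fumeric_sketch(labs)  # 3 1 3 2 (levels are sorted alphabetically)
```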
shist (smooth histogram, pronounced shitz) - I wrote this function because I have a hard time interpreting the y-axis of density. The height of the curve drawn by shist can be interpreted as the height of a histogram if you used the units shown on the plot. Also, it automatically draws a smooth histogram for each entry in a matrix on the same plot.
splot (subset plot) - The datasets I work with are typically large enough that plot(x,y) involves millions of points, which is a problem. Several solutions are available to avoid overplotting, such as alpha-blending, hexbinning and 2D kernel smoothing. For reasons I won’t explain here, I generally prefer subsampling over these solutions. splot automatically subsamples. You can also specify an index that defines the subset.
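The subsampling idea itself is simple enough to sketch in base R (splot, in rafalib, wraps this with sensible defaults; the data and subset size here are my own illustration):

```r
# The idea behind splot: plot a random subset of the points instead of all
# of them, which keeps the plot fast and readable.
set.seed(1)
n <- 1e6
x <- rnorm(n)
y <- x + rnorm(n)

idx <- sample(n, 1e4)            # keep 10,000 of the million points
plot(x[idx], y[idx], pch = 16, cex = 0.5)
```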
sboxplot (smart boxplot) - This function draws points, boxplots or outlier-less boxplots depending on sample size. Coming soon is the kaboxplot (Karl Broman box-plots) for when you have too many boxplots.
install_bioc - For Bioconductor users, this function simply runs source("http://www.bioconductor.org/biocLite.R") for you and then uses biocLite to install.
09 Aug 2015
Editor's note: This is a guest post by Ani Eloyan. She is an Assistant Professor of Biostatistics at Brown University. Dr. Eloyan’s work focuses on semi-parametric likelihood based methods for matrix decompositions, statistical analyses of brain images, and the integration of various types of complex data structures for analyzing health care data. She received her PhD in statistics from North Carolina State University and subsequently completed a postdoctoral fellowship in the Department of Biostatistics at Johns Hopkins University. Dr. Eloyan and her team won the ADHD200 Competition discussed in this article. She tweets @eloyan_ani.
Neuroscience is one of the exciting new fields for biostatisticians interested in real world applications where they can contribute novel statistical approaches. Most research in brain imaging has historically involved studies with small numbers of patients. While justified by the costs of data collection, the claims based on analyzing data from such small numbers of subjects often do not hold for the populations of interest. As discussed in this article, there is a huge demand for biostatisticians in the field of quantitative neuroscience; so-called neuroquants or neurostatisticians. However, while more statisticians are interested in the field, we are far from competing with other substantive domains. For instance, a quick search of abstract keywords in the online program of the upcoming JSM2015 conference for “brain imaging” and “neuroscience” results in 15 records, while a search for “genomics” and “genetics” generates 76 records.
Assuming you are trained in statistics and an aspiring neuroquant, how would you go about working with brain imaging data? As a graduate student in the Department of Statistics at NCSU several years ago, I was very interested in working on statistical methods that would be directly applicable to problems in neuroscience. But I had this same question: “Where do I find the data?” I soon learned that to really approach substantial relevant problems I also needed to learn about the subject matter underlying these complex data structures.
In recent years, several leading groups have uploaded their lab data with the common goal of fostering the collection of high dimensional brain imaging data to build powerful models that can give generalizable results.
The Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC), founded in 2006, is a platform for public data sharing that facilitates streamlining data processing pipelines and compiling high dimensional imaging datasets for crowdsourcing the analyses. It includes data for people with neurological diseases and for neurotypical children and adults.

If you are interested in Alzheimer’s disease, you can check out ADNI. ABIDE provides data for people with Autism Spectrum Disorder and neurotypical peers. ADHD200 was released in 2011 as part of a competition to motivate building predictive methods for disease diagnosis using functional magnetic resonance imaging (fMRI) in addition to demographic information to predict whether a child has attention deficit hyperactivity disorder (ADHD). While the competition ended in 2011, the dataset has been widely used since in studies of ADHD. According to Google Scholar, the paper introducing the ABIDE set has been cited 129 times since 2013, while the paper discussing the ADHD200 has been cited 51 times since 2012. These are only a few examples from the list of open access datasets that could be utilized by statisticians.
Anyone can download these datasets (you may need to register and complete some paperwork in some cases); however, there are several data processing and cleaning steps to perform before the final statistical analyses. These preprocessing steps can be daunting for a statistician new to the field, especially as the tools used for preprocessing may not be available in R.
This discussion makes the case for why statisticians need to be involved in every step of preprocessing the data, while this R package contains new tools linking R to FSL, a commonly used platform. However, as a newcomer, it can be easier to start with data that are already processed.
This excellent overview by Dr. Martin Lindquist provides an introduction to the different types of analyses for brain imaging data from a statistician’s point of view, while our paper provides tools in R and example datasets for implementing some of these methods. At least one course on Coursera can help you get started with functional MRI data. Talking to, and reading the papers of, biostatisticians working in quantitative neuroscience and scientists in neuroscience is key.