Simply Statistics: A statistics blog by Rafa Irizarry, Roger Peng, and Jeff Leek

On the future of personalized medicine

Jeff Leek, Reeves Anderson, and I recently wrote a correspondence to Nature (subscription required) regarding the Supreme Court decision in Mayo v. Prometheus and the recent Institute of Medicine report related to the Duke Clinical Trials Saga.

The basic gist of the correspondence is that the IOM report stresses the need for openness in the process of developing ‘omics-based tests, but the Court decision suggests that patent protection will not be available to protect those details. So how will the future of personalized medicine look? There is a much larger, more general discussion to be had about patents in this arena, and we do not get into it here (hey, we had to squeeze it into 300 words). But it seems that if biotech companies cannot make money from patented algorithms, they will have to find a new avenue.

Here are some slides from a recent lecture I gave outlining some of the ideas and providing some background.

Sunday data/statistics link roundup (4/22)

  1. Now we know who is to blame for the pie chart. I had no idea it had been around, straining our ability to compare relative areas, since 1801. However, the same guy (William Playfair) apparently also invented the bar chart. So he wouldn’t be totally shunned by statisticians. (via Leonid K.)
  2. A nice article in the Guardian about the current group of scientists that are boycotting Elsevier. I have to agree with the quote that leads the article, “All professions are conspiracies against the laity.” On the other hand, I agree with Rafa that academics are partially to blame for buying into the closed access hegemony. I think more than a boycott of a single publisher is needed; we need a change in culture. (first link also via Leonid K)
  3. A blog post on how to add a transparent image layer to a plot (a quick sketch of the idea appears after this list). For some reason, I have wanted to do this several times over the last couple of weeks, so the serendipity of seeing it on R Bloggers merited a mention. 
  4. I agree the Earth Institute needs a better graphics advisor. (via Andrew G.)
  5. A great article on why multiple-choice tests are used - they are an easy way to collect data on education. But that doesn’t mean they are the right data. This reminds me of the Tukey quote: “The data may not contain the answer. The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data”. It seems to me that if you wanted to have a major positive impact on education right now, the best way would be to develop a new experimental design that collects the kind of data that really demonstrates mastery of reading/math/critical thinking. 
  6. Finally, a bit of a bleg…what is the best way to do the SVD of a huge (think 1e6 x 1e6) sparse matrix in R? Preferably without loading the whole thing into memory…
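
On the bleg in item 6, here is a sketch of one direction I would look at, assuming the irlba and Matrix packages: a truncated SVD that computes only the leading singular vectors of a sparse matrix, so the dense matrix is never formed. The sparse matrix itself still has to fit in memory, so this only partially addresses the “without loading the whole thing” part of the question.

```r
library(Matrix)
library(irlba)

## toy stand-in for the huge sparse matrix (much smaller than 1e6 x 1e6)
set.seed(1)
m <- rsparsematrix(1e4, 1e4, density = 1e-4)

## leading 5 singular triplets via implicitly restarted Lanczos bidiagonalization
s <- irlba(m, nu = 5, nv = 5)
s$d        # approximate top singular values
dim(s$u)   # 10000 x 5 matrix of left singular vectors
```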
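
And on item 3, here is a minimal toy sketch of the general idea (my own version, not the code from the linked post): read a PNG, give it an alpha channel, and lay it over an existing plot with rasterImage().

```r
library(png)   # readPNG() returns an array of intensities in [0, 1]

## example image shipped with the png package
img <- readPNG(system.file("img", "Rlogo.png", package = "png"))

## keep the RGB channels and set a uniform 30% opacity as the alpha channel
h <- dim(img)[1]; w <- dim(img)[2]
overlay <- array(c(img[, , 1:3], rep(0.3, h * w)), dim = c(h, w, 4))

## draw the scatterplot first, then the transparent image on top
plot(rnorm(100), rnorm(100), pch = 19)
rasterImage(overlay, xleft = -1, ybottom = -1, xright = 1, ytop = 1)
```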

Replication, psychology, and big science

Reproducibility has been a hot topic for the last several years among computational scientists. A study is reproducible if there is a specific set of computational functions/analyses (usually specified in terms of code) that exactly reproduce all of the numbers in a published paper from raw data. It is now recognized that a critical component of the scientific process is that data analyses can be reproduced. This point has been driven home particularly for personalized medicine applications, where irreproducible results can lead to delays in evaluating new procedures that affect patients’ health. 
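
To make that definition concrete, here is a toy sketch (the file names and model are hypothetical) of what a reproducible analysis looks like in practice: one script that takes the raw data all the way to the reported numbers, with any randomness pinned down and the software versions recorded.

```r
## hypothetical end-to-end analysis script: raw data in, reported numbers out
set.seed(2012)                            # pin any randomness in the analysis
raw <- read.csv("raw_measurements.csv")   # the untouched raw data (hypothetical file)

fit <- lm(outcome ~ treatment + age, data = raw)
estimates <- coef(summary(fit))           # the numbers that appear in the paper

write.csv(estimates, "table1.csv")        # regenerates the published table exactly
sessionInfo()                             # record R and package versions used
```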

But just because a study is reproducible does not mean that it is replicable. Replicability is stronger than reproducibility. A study is only replicable if you perform the exact same experiment (at least) twice, collect data in the same way both times, perform the same data analysis, and arrive at the same conclusions. The difference from reproducibility is that to achieve replicability, you have to perform the experiment and collect the data again. This of course introduces all sorts of new potential sources of error in your experiment (new scientists, new materials, new lab, new thinking, different settings on the machines, etc.).

Replicability has been getting a lot of attention recently in psychology because of some high-profile studies that did not replicate. First, there was the highly cited experiment that failed to replicate, leading to a showdown between the author of the original experiment and the replicators. Now there is a psychology project that allows researchers to post the results of replications of experiments - whether they succeeded or failed. Finally, the Reproducibility Project, probably better termed the Replicability Project, seeks to replicate the results of every experiment published in 2008 in the journals _Psychological Science_, the _Journal of Personality and Social Psychology_, and the _Journal of Experimental Psychology: Learning, Memory, and Cognition_.

Replicability raises important issues for “big science” projects, ranging from genomics (the 1000 Genomes Project) to physics (the Large Hadron Collider). These experiments are too big and costly to actually replicate. So how do we know that the results of these experiments aren’t just errors that, if we could replicate them, would not show up again? Maybe smaller-scale replications of sub-projects could be used to help convince us of discoveries in these big projects?

In the meantime, I love the idea that replication is getting the credit it deserves (at least in psychology). The incentives in science often credit only the first person to reach an idea, not the long tail of folks who replicate the results. For example, replications of experiments are often not considered interesting enough to publish. Maybe these new projects will start to change some of the perverse academic incentives.

Roche: Illumina Is No Apple

Sunday data/statistics link roundup (4/15)

  1. Incredibly cool, dynamic real-time maps of wind patterns in the United States. (via Flowing Data)
  2. A d3.js coding tool that updates the output automatically as you edit the code - real-time coding. This is going to be really useful for beginners trying to learn about D3. (via Flowing Data)
  3. An interesting blog post describing why the winning algorithm in the Netflix Prize hasn’t actually been implemented! It looks like it was too much of an engineering hassle. I wonder if this will make others think twice before offering big sums for prizes like this. Unless the real value is advertising… (via Chris V.)
  4. An article about a group at USC that plans to collect all the information from apps that measure heartbeats. Their project is called everyheartbeat. I think this is a little premature, given the technology, but certainly the quantified-self field is heating up. I wonder how long it will be until the target audience for these sorts of projects isn’t just wealthy young technophiles? 
  5. A really good deconstruction of a recent paper suggesting that the mood on Twitter could be used to game the stock market. The author illustrates several major statistical flaws, including not correcting for multiple testing, an implausible statistical model, and not using a big enough training set. The scary thing is that a hedge fund is apparently teaming up with this group of academics to try to implement their approach. I wouldn’t put my money anywhere they can get their hands on it. This is just one more in a growing line of results illustrating the critical need for statistical literacy, both among scientists and in the general public.
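
To see why the multiple testing flaw alone is such a big deal, here is a small simulation (completely unrelated to the paper’s actual data): screen a couple hundred pure-noise “mood” series against a pure-noise return series and count how many look significant before and after adjusting the p-values.

```r
set.seed(42)
returns <- rnorm(250)                        # stand-in daily return series (pure noise)
candidates <- replicate(200, rnorm(250))     # 200 unrelated "mood" series (pure noise)

## p-value for the correlation of each candidate series with the returns
pvals <- apply(candidates, 2, function(x) cor.test(x, returns)$p.value)

sum(pvals < 0.05)                            # around 10 spurious "predictors" expected
sum(p.adjust(pvals, method = "BH") < 0.05)   # typically 0 after FDR adjustment
```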