The genomics field is a funny place: at times it feels like we’re not making progress quickly enough, and at other times things seem to be moving so fast we’re just holding on for dear life. We were reminded of this paradox recently by some conference discussions with our fellow genomics veterans, so we came here to think out loud about it.
On one hand, we hear about consumers who can’t get access to their own DNA data because of FDA regulations, or patients who would likely benefit from genomic testing but whose doctors are too traditional to give it a shot. On the other hand, it seems like just yesterday there were thousands of scientists around the world working hard to assemble the first human genome sequence, and now it’s almost routine for organizations to launch studies encompassing tens of thousands of whole genomes.
Some recent indications suggest that we’ll soon leave behind those lingering doubts about the speed of progress. If you didn’t see this summer’s PLoS Biology publication on the growth of genomic data, it’s well worth a read. Scientists from a number of institutions came together to write the commentary, which draws on data-generation metrics from recent years to argue that in as little as a decade, genomics may lap fields such as astronomy and social media to become the most prolific producer of big data. How’s that for a field that until recently relied on shipping hard drives back and forth?
At the ASHG conference in Baltimore this month, we saw more evidence of the rapidly increasing pace of genomics. NIH Director Francis Collins spoke about the Precision Medicine Initiative and its effort to build a national database of 1 million whole genomes. Several speakers mentioned the 100,000 Genomes Project underway in the UK, or the effort at Geisinger Health System in Pennsylvania to sequence 250,000 patients. The days of congratulating ourselves for sequencing a single genome seem to exist only in the rear-view mirror.
Our internal data supports the same trend. Last November we launched the high-throughput version of our Pippin automated DNA sizing platform — an instrument designed in response to customer demand — and already the PippinHT is galloping along. So far, customers have ordered enough HT cassettes to process more than 30,000 samples. Whew! The PippinHT is best-suited for large-scale genomic or transcriptomic studies; even a few years ago, we would never have predicted this level of demand for the instrument. It’s a sure sign that genomics is scaling faster than any of us could have anticipated.
[Chart: Samples Run on the PippinHT]
We hope this pace in genome science translates into exciting new approaches to healthcare, the most obvious beneficiary of so many sequence databases and massive-scale discovery projects. For our part, we’ll keep churning out those high-capacity cassettes to help our customers increase their throughput for ever-larger studies.