Moving to the Clinic: Interview with BC Cancer Agency’s Marco Marra
Mendelspod’s recent interview with Marco Marra of the BC Cancer Agency and the University of British Columbia is well worth a listen. In the podcast, Marra describes his team’s use of genome and transcriptome sequencing for patients whose cancer is considered incurable.
Marra first captured attention in this area in 2009 when he reported his lab’s use of whole genome sequencing to inform treatment decisions for a patient with a rare adenocarcinoma. Genome and transcriptome analysis revealed that the tumor was driven by the RET oncogene. The patient, for whom there had been no clear therapy option, was treated with a RET inhibitor that was in clinical trials at the time — and the tumor shrank significantly.
Since then, Marra has parlayed that individual project into a pilot study of how whole genome sequencing could be extended to other cancer patients. The study was broadened again in 2014, and today his team has analyzed some 400 patients who essentially have no remaining treatment options. The scientists look for all sorts of mutation types, from SNPs to structural variants and beyond. One major challenge has been off-label use of drugs: in many cases, genome analysis points to a therapy that’s not indicated for the patient’s type of cancer, and gaining access to that therapy is hit or miss. As the cancer genomics program has expanded, Marra wrestles with questions like, “What is the meaning of having whole genome analysis that points you to a particular agent that you can’t get?” As he told interviewer Theral Timpson, “These are deep conversations that are happening within our environment and probably elsewhere.”
Marra is also keeping a close eye on how clinicians apply information from the genomic analysis. Doctors who just get a report of mutations tend to be less comfortable incorporating that data into treatment decisions. But a weekly conference that allows physicians, scientists, bioinformaticians, and pathologists to walk through case studies often prompts useful interdisciplinary discussions and frequently leads to increased implementation of genomic results, he said.
If you’ve got a little time, we highly recommend listening!
Preprints Galore!
We normally wait until papers come out in scientific journals before reporting on them here, but there are so many great preprints featuring Sage Science tools we couldn’t resist pointing them out. (On a side note, the rising number of biology-focused papers posted as preprints is a terrific trend. We’re thrilled to see the peer-review process becoming more transparent and results getting out to the community faster.)
Here are quick recaps of several preprints, all available through bioRxiv.
The megabase-sized fungal genome of Rhizoctonia solani assembled from nanopore reads only
Erwin Datema, Raymond J.M. Hulzink, Lisanne Blommers, Jose Espejo Valle-Inclan, Nathalie Van Orsouw, Alexander H.J. Wittenberg, Martin De Vos
Posted: November 1, 2016
This paper from Keygene scientists used Oxford Nanopore sequencing technology to analyze the fungal pathogen Rhizoctonia solani, generating a highly contiguous 54 Mb assembly. The team focused on optimizing methods for handling high molecular weight DNA to produce the longest sequencing reads, using BluePippin’s high-pass mode to remove smaller DNA fragments. According to the paper, this approach allows the lab to assemble a eukaryotic fungal genome at low cost within a week.
Conrad P.D.T. Gillett, Andrew J Johnson, Iain Barr, Jiri Hulcr
Posted: September 12, 2016
In this preprint, scientists from the University of Florida and the University of East Anglia evaluated a sequencing-based approach to monitoring biodiversity in a region using dung beetles. Since these beetles regularly consume vertebrate dung, the contents of their intestines can reveal quite a bit about animals in the area. They sequenced samples from 10 species of dung beetles collected from a savanna region in southern Africa, and then compared the mitochondrial DNA results against public databases. Results matched animals expected in the area, such as zebra, cattle, goat, and wildebeest. DNA libraries were size-selected using the SageELF system followed by sequencing on an Illumina NextSeq.
Error rates, PCR recombination, and sampling depth in HIV-1 whole genome deep sequencing
Fabio Zanini, Johanna Brodin, Jan Albert, Richard Neher
Posted: September 25, 2016
Researchers at Stanford University, the Karolinska Institute, and the Max Planck Institute collaborated in this effort to establish more accurate and reliable methods for deep sequencing of viral genomes without the amplification biases and sequencing errors that often occur. In a study focused on sequencing populations of HIV-1, the team adjusted the standard sequencing workflow to reduce artifacts and errors. One of those changes involved replacing bead-based size selection with BluePippin sizing, which yielded a more uniform size distribution to meet the insert size needed by the MiSeq platform. With this approach, the scientists were able to detect rare mutations down to 0.2% and to avoid PCR recombination.
Two novel genes discovered in human mitochondrial DNA using PacBio full-length transcriptome data
Gao Shan, Xiaoxuan Tian, Yu Sun, Zhenfeng Wu, Zhi Cheng, Pengzhi Dong, Bingjun He, Jishou Ruan, Wenjun Bu
Posted: October 6, 2016
This work from scientists at Nankai University and Tianjin University of Traditional Chinese Medicine focuses on mitochondrial biology. They used the Iso-Seq method to generate “the first full-length human mitochondrial transcriptome from the MCF7 cell line based on the PacBio platform.” As part of the study, the team used transcriptome data publicly released by PacBio, for which size selection was performed on a SageELF to create six binned libraries.
Nanopore Sequencers Give Optimal Results with BluePippin Size Selection
The Oxford Nanopore team has been speaking recently about their use of our BluePippin automated size selection system for optimizing the read length obtained from nanopore sequencers. For anyone interested in the Oxford platforms who hasn’t seen this information, here’s a quick recap.
As we’ve seen with PacBio, the other long-read platform, single-molecule sequencers tend to produce reads as long as the fragments fed to them. Naturally, users interested in maximizing the read lengths of these systems want to feed them only the longest possible fragments. The simplest and most effective way to do that is what we call high-pass sizing, or selecting all DNA fragments longer than a certain size threshold during the sample prep process.
For the MinION and PromethION sequencers from Oxford Nanopore, the company recommends BluePippin sizing for various protocols. This library prep workflow for both sequencing systems uses BluePippin to eliminate shorter fragments; one example of outcomes shows a whopping 255 Kb read from an E. coli experiment. There’s a similar rationale for recommending BluePippin for de novo whole genome assembly with the MinION system. And this protocol demonstrates how automated sizing fits into a sequence-capture approach for library prep prior to nanopore sequencing.
We’re delighted that BluePippin is showing such utility for nanopore sequencing. If you’re an Oxford Nanopore customer who doesn’t already have access to one of these instruments, contact us to learn how BluePippin can make a difference in your pipeline.
ASHG 2016: Structural Variants, Mega Databases, and the Best-Ever Human Genome
The Sage Science team enjoyed our trip across the border to Canada for the annual meeting of the American Society of Human Genetics this month. Vancouver was practically teeming with genome scientists!
Many presentations this year featured results generated by mining the slew of publicly accessible sequence databases that have come online in recent years, with plenty of exciting new correlations between genetic conditions and markers. Cancer was a commonly studied disease, with new associations that could really pay off for patients down the line if they can be validated for clinical utility.
We also saw that these databases are growing quickly. The Broad Institute announced a new version of the ExAC database, called gnomAD, which essentially doubles the number of exomes and genomes from which it catalogs genetic variants. The conjunction of these massive genomic resources with big data analysis tools stands to revolutionize the speed at which discovery moves in human genetics.
In recent years, we’ve seen that ASHG presenters are going beyond SNPs to interrogate large variants, and this year continued the trend. Structural variants got more airtime than ever, with technologies such as PacBio, BioNano Genomics, 10x, and more making it easier for researchers to access this complex information. We’re encouraged that the community is embracing these tools to analyze extremely large fragments of DNA in order to uncover entirely new genetic mechanisms that could contribute significantly to disease.
A real highlight of the meeting was Macrogen’s presentation of the recently published Korean reference genome, which used PacBio, BioNano, and other technologies to create the most complete human assembly ever. Several whole chromosome arms were assembled into individual contigs, a stunning feat. The team also reminded ASHG attendees that two-thirds of the world’s population is of Asian descent, underscoring the need to invest in more genome resources for Asian individuals.
Many thanks to all the attendees who took time out of a busy conference to stop by our booth and chat with the Sage team. We were honored by all the attention that our new SageHLS instrument got in its first outing, and look forward to seeing how it enables better science when it hits lab benches soon.
Get a Glimpse of the New HLS Instrument at ASHG 2016
The Sage Science team is heading to Vancouver this week for the annual meeting of the American Society of Human Genetics. ASHG is the biggest genomics meeting we attend each year, and it never fails to deliver on its promise of top-notch science and compelling speakers.
We’re especially enthusiastic about this meeting because it’ll be the first time we show off our newest instrument, the HLS platform, in booth #732. Though this is just a sneak peek as development continues, we anticipate launching the instrument soon and thought ASHG attendees would enjoy getting a glimpse.
The HLS platform (short for HMW Library System) will allow scientists to purify ultra-high molecular weight DNA directly from cells for the increasing number of applications that require it, such as long-read sequencing and long-range genomics. Working with large DNA fragments became a lost art in the era of short-read sequencing. But with the rise of PacBio and Oxford Nanopore sequencers, 10x Genomics synthetic long reads, and optical maps from BioNano Genomics and other providers, it’s clear that users need a solution for handling DNA that’s hundreds of kilobases or even megabases long.
We built the HLS instrument to address this need. At launch, it will be able to purify DNA from about 50 Kb to 2 Mb in length, directly from blood samples, cell lines, or bacterial cultures. In our hands, the elutions yield more than a microgram — plenty of DNA for de novo genome sequencing, droplet digital PCR, and other long-range genomics applications. Initially, users will perform purification on the HLS instrument followed by traditional library prep, but in the future we aim to incorporate library prep directly into the system. We’re also already working on targeted genomic fragment extraction using CRISPR/Cas9 as a later-stage application for the HLS instrument.
To learn more about how the system works, check out this blog post or poster. If you’ll be at ASHG, we hope you stop by the booth to check it out!