Metagenomic samples are taken from a natural (e.g., soil and seawater) or a host-associated (e.g., human gut) environment containing micro-organisms organized into communities or microbiomes. DNA is extracted from the environmental sample, which contains a mixture of multiple genomes, and then sequenced without prior separation. The resulting dataset comprises millions of mixed sequence reads from the multiple genomes contained in the sample.

Traditionally, DNA has been sequenced using Sanger sequencing technology [2], and the reads generated are routinely 800–1,000 base pairs long. However, this technology is extremely cumbersome and costly. Recently, next-generation sequencers, e.g., Illumina/Solexa, Applied Biosystems’ SOLiD, and Roche’s 454 Life Sciences sequencing systems, have emerged as the future of genomics with an incredible ability to rapidly generate large amounts of sequence data [3,4]. These new technologies greatly facilitate high throughput while lowering the cost of metagenomic studies. However, the reads generated are much shorter, making read assembly and alignment more challenging. For example, Illumina/Solexa and SOLiD generate reads of between 35 and 100 base pairs, while Roche 454 reads are approximately 100–400 base pairs in length.

One goal of metagenomic studies is to identify which genomes are contained in the environmental sample and to estimate their relative abundance. Identification of genomes is complicated by the mixed nature of multiple genomes in the sample. A widely used approach is to assign the sequence reads to NCBI’s taxonomy tree based on homology alignment of the reads against known sequences catalogued in reference databases. The sequence reads are first aligned to the reference sequence databases using a sequence comparison program such as BLAST [5]. Reads that have hits in the database are then assigned to the taxonomy tree based on the best match or multiple high-scoring hits.
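The best-match variant of this approach can be sketched as follows. This is a minimal illustration, assuming BLAST tabular output (`-outfmt 6`) and a hypothetical `acc2taxid` lookup from subject accessions to taxonomy IDs (in practice built from NCBI's accession2taxid files); it is not the pipeline used by any particular tool.

```python
from collections import defaultdict

def assign_best_hit(blast_lines, acc2taxid, evalue_cutoff=1e-5):
    """Assign each read to the taxon of its single best BLAST hit.

    blast_lines: iterable of BLAST tabular (-outfmt 6) rows:
        qseqid sseqid pident length mismatch gapopen qstart qend
        sstart send evalue bitscore
    acc2taxid: hypothetical dict mapping subject accession -> taxonomy ID.
    """
    best = {}  # read id -> (bitscore, taxid)
    for line in blast_lines:
        f = line.rstrip("\n").split("\t")
        read, subject = f[0], f[1]
        evalue, bitscore = float(f[10]), float(f[11])
        if evalue > evalue_cutoff or subject not in acc2taxid:
            continue  # filter weak hits and unknown subjects
        if read not in best or bitscore > best[read][0]:
            best[read] = (bitscore, acc2taxid[subject])
    # Tally assigned reads per taxon to obtain raw abundance counts.
    counts = defaultdict(int)
    for _, taxid in best.values():
        counts[taxid] += 1
    return best, dict(counts)

# Toy example with two reads and two reference genomes.
rows = [
    "r1\tg1\t99\t100\t0\t0\t1\t100\t1\t100\t1e-30\t200",
    "r1\tg2\t95\t100\t2\t0\t1\t100\t1\t100\t1e-20\t150",
    "r2\tg2\t98\t100\t1\t0\t1\t100\t1\t100\t1e-25\t180",
]
best, counts = assign_best_hit(rows, {"g1": 561, "g2": 562})
```

Here read `r1` hits both genomes but is assigned to taxon 561 because that hit carries the higher bit-score.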
The challenge of this approach is that, at a given threshold of bit-score or Expect value, hits may be found in multiple genomes for a single read, owing to sequence homology and the overlaps associated with similarity among species. A strategy of weighting similarities across multiple BLAST hits has been used to estimate relative genomic abundance and average genome size [6]. Another representative stand-alone analysis tool, MEGAN [7], assigns a read with hits in multiple genomes to their lowest common ancestor (LCA) in the NCBI taxonomy tree. Assignments of reads to different ranks of the taxonomy tree therefore depend on the threshold chosen for the bit-score or Expect value. Furthermore, MEGAN assigns reads one at a time. As a consequence, the results have fewer false positives but lack specificity. Various methods have been proposed to improve the taxonomic assignment of reads by assigning more reads to the lower ranks of the taxonomy tree [8–12]. In particular, CARMA3 [10], which is BLAST-based but not LCA-based, uses a reciprocal search technique, as in SOrt-ITEMS [13], to reduce the number of hits and hence further improve the accuracy of the taxonomic classification.

In this paper, we propose a statistical approach, TAMER, for the taxonomic assignment of metagenomic sequence reads. In this approach we first identify a list of candidate genomes using homology searches. A mixture model is then employed to estimate the proportion of reads generated by each candidate genome. Finally, instead of assigning reads one at a time to the taxonomy tree as done by MEGAN, reads are assigned on the basis of the fitted mixture model.
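The LCA rule used by MEGAN can be illustrated with a small sketch. The parent table below is a hypothetical fragment of the NCBI taxonomy invented for the example, and the path-intersection method is a simple illustration, not MEGAN's actual implementation.

```python
def lowest_common_ancestor(taxids, parent):
    """Return the LCA of a set of taxonomy IDs.

    parent: dict mapping each node to its parent; the root maps to
    itself. Each node's path to the root is computed, and the LCA is
    the first node common to all paths.
    """
    def path_to_root(t):
        path = [t]
        while parent[t] != t:
            t = parent[t]
            path.append(t)
        return path

    paths = [path_to_root(t) for t in taxids]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # The first common node on a root-ward path is the deepest one.
    return next(n for n in paths[0] if n in common)

# Hypothetical mini-taxonomy: 1 = root, 2 = Bacteria,
# 561 = Escherichia, 562 = E. coli, 620 = Shigella.
parent = {1: 1, 2: 1, 561: 2, 562: 561, 620: 2}
```

A read hitting both E. coli (562) and Shigella (620) would thus be placed at node 2, losing genus- and species-level resolution, which is the specificity cost noted above.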
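The mixture-model step can be sketched as an expectation-maximization (EM) procedure over a finite mixture of candidate genomes. The match-probability matrix and convergence tolerance below are illustrative assumptions, not TAMER's exact formulation.

```python
import numpy as np

def em_mixture(match_prob, n_iter=200, tol=1e-8):
    """Estimate genome proportions from read-genome match probabilities.

    match_prob: (n_reads, n_genomes) array whose (i, j) entry is a
    score proportional to P(read i | genome j), e.g. derived from
    BLAST bit-scores (an illustrative choice).
    Returns the estimated mixing proportions pi.
    """
    n_reads, n_genomes = match_prob.shape
    pi = np.full(n_genomes, 1.0 / n_genomes)  # start from uniform
    for _ in range(n_iter):
        # E-step: posterior responsibility of genome j for read i.
        weighted = match_prob * pi
        resp = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: proportions are the average responsibilities.
        new_pi = resp.mean(axis=0)
        if np.abs(new_pi - pi).max() < tol:
            pi = new_pi
            break
        pi = new_pi
    return pi

# Three reads favor genome 0, one favors genome 1.
m = np.array([[0.9, 0.1], [0.8, 0.2], [0.9, 0.1], [0.2, 0.8]])
pi = em_mixture(m)
```

Once the proportions converge, each read can be assigned to the genome with the highest posterior responsibility, so all reads are classified jointly rather than one at a time.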

