Changes between Initial Version and Version 1 of SnpCallingPipeline
Timestamp:
Sep 12, 2010 11:20:00 PM
Author:
Morris Swertz
[[TOC()]]
= SNP calling pipeline =
Status: Alpha
Authors: Freerk van Dijk, Morris Swertz
Based on the Broad pipeline.

To perform the analysis as quickly and accurately as possible, the pipeline has been divided into several small processes. These processes are all numbered and can be found in figures 1 & 2 in the appendix. The processes, including their commands and input and output files, are described below, starting with pre-alignment.

== A. Pre-alignment ==

This process involves all steps needed to prepare the raw sequencing reads for an alignment using BWA. These steps are all conducted using PICARD. An overview of the steps can be found in Appendix A.

=== 1) Read converting ===

This step converts the output of the Illumina GAII, in FASTQ format, to the binary SAM format (BAM). To do this !FastqToSam.jar can be used. The following command was used:

java -jar !FastqToSam.jar FASTQ=../testrun/s_2_sequence041090.txt QUALITY_FORMAT=Illumina OUTPUT=../testrun/s_2_sequence.bam SAMPLE_NAME=s_2_test_sample PLATFORM_UNIT=barcode_13434HWUSIsomething PLATFORM=illumina SORT_ORDER=coordinate

It is also possible to convert raw BUSTARD data to BAM using !BustardToSam.jar. The advantage of this tool is its multi-lane input: in theory, all lanes from one flow cell can be converted in parallel. To remove possible linkers from reads, a tool named !ExtractIlluminaBarcodes.jar can be used. Since there is no data containing these barcodes yet, neither this tool nor !BustardToSam has been tested.

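A detail worth noting in the command above is QUALITY_FORMAT=Illumina: Illumina 1.3+ FASTQ files encode Phred quality scores with an ASCII offset of 64, whereas standard (Sanger) FASTQ uses offset 33. A minimal Python sketch of the decoding (the quality strings are made-up examples):

```python
# Decode a FASTQ quality string into Phred quality scores.
# Illumina 1.3+ uses ASCII offset 64; Sanger/standard FASTQ uses offset 33.
def decode_quals(qual_string, fmt="Illumina"):
    offset = 64 if fmt == "Illumina" else 33
    return [ord(c) - offset for c in qual_string]

# 'h' (ASCII 104) is Q40 in Illumina encoding; 'I' (ASCII 73) is Q40 in Sanger.
print(decode_quals("hhT", "Illumina"))  # -> [40, 40, 20]
print(decode_quals("III", "Sanger"))    # -> [40, 40, 40]
```

Telling !FastqToSam.jar the wrong encoding silently shifts every quality score by 31, which would distort the recalibration steps later in the pipeline.
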
=== 2) Sort & index reads ===

The output of !FastqToSam.jar needs to be sorted and indexed before it can be used as input for the alignment. To sort these files SAMtools was used. The following command can be executed on the BAM file:

./samtools sort ../testrun/s_2_sequence041090_aligned.bam ../testrun/s_2_aligned_sorted

The postfix .bam is automatically added to the output file name.

The corresponding BAM index file can be created using:

./samtools index ../testrun/s_2_aligned_sorted.bam ../testrun/s_2_aligned_sorted.bam.bai

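Conceptually, a coordinate sort just orders alignments by reference sequence and position, which is what the index relies on for fast region lookups. A toy Python illustration (the read tuples are invented; real SAMtools orders references by their order in the BAM header rather than alphabetically):

```python
# Order alignments by (reference, position), as a coordinate sort does.
reads = [("chr2", 500, "readA"), ("chr1", 900, "readB"), ("chr1", 100, "readC")]
reads.sort(key=lambda r: (r[0], r[1]))
print([name for _, _, name in reads])  # -> ['readC', 'readB', 'readA']
```
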
== B. Alignment ==
This consists of the alignment using BWA, converting steps, and the sorting and indexing of the output BAM file.

=== 3) Alignment ===
To align the reads to a reference genome the GATK uses BWA. The indices used for this step were already made for an earlier Galaxy project. GATK uses its own version of BWA, which doesn't support multithreading yet. To overcome this problem, alignment was performed using the standard BWA. The following command was executed:

./bwa aln -l 101 -k 4 -t 16 ../Sting/dist/resources/index_hg36.fa ../testrun/s_2_sequence041090.txt -f ../testrun/s_2_sequence041090_aligned.sai

This results in an alignment file in .sai format. To convert this to SAM format, BWA's samse step (for single-end reads) can be used:

./bwa samse -f ../testrun/s_2_sequence041090_aligned.sam ../Sting/dist/resources/index_hg36.fa ../testrun/s_2_sequence041090_aligned.sai ../testrun/s_2_sequence041090.txt

To convert this SAM file to BAM, SAMtools can be used:

./samtools view -bS -o ../testrun/s_2_sequence041090_aligned.bam ../testrun/s_2_sequence041090_aligned.sam

=== 4) Sort & index alignment ===
After this converting step the BAM file needs to be sorted & indexed again using the following command:

./samtools sort ../testrun/s_2_sequence041090_aligned.bam ../testrun/s_2_aligned_sorted

The index was created using:

./samtools index ../testrun/s_2_aligned_sorted.bam ../testrun/s_2_aligned_sorted.bam.bai

'''NOTE: Keep track of the reference build used. UCSC uses human builds hg18 and hg19, which correspond to NCBI builds 36 and 37. These builds contain the same information and sequences but have different headers etc. The dbSNP ROD files, which are needed further on in the pipeline, can be downloaded from UCSC. To avoid compatibility problems it is advised to use hg18 or hg19 as the reference during alignment and further calculations.'''

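In practice, the most visible difference between the UCSC and NCBI versions of a build is the chromosome naming: UCSC writes chr1 where NCBI writes 1, and mixing the two makes tools silently find no overlapping records. A hypothetical renaming helper, just to illustrate the mismatch (not part of the pipeline):

```python
# UCSC references (hg18/hg19) prefix chromosome names with "chr";
# the matching NCBI builds (36/37) do not. (The mitochondrion, chrM
# vs MT, differs in more than the prefix and is ignored here.)
def ucsc_to_ncbi(chrom):
    return chrom[3:] if chrom.startswith("chr") else chrom

def ncbi_to_ucsc(chrom):
    return chrom if chrom.startswith("chr") else "chr" + chrom

print(ucsc_to_ncbi("chr17"))  # -> 17
print(ncbi_to_ucsc("17"))     # -> chr17
```
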
== C. Post-alignment ==
This part involves all processes executed on the generated alignments, including quality value recalibration, SNP/indel calling, annotation, and the calculation of statistics.

=== 5) Quality value recalibration (Count covariates) ===

This step determines the covariates influencing base quality scores in the BAM file obtained after alignment. It results in a CSV file. The following command was used to execute this step:

java -jar GenomeAnalysisTK.jar -R resources/index_hg36.fa --default_platform illumina --DBSNP resources/hg36/snp130.txt -I ../../testrun/s_2_aligned_sorted.bam --max_reads_at_locus 20000 -T !CountCovariates -cov !ReadGroupCovariate -cov !QualityScoreCovariate -cov !CycleCovariate -cov !DinucCovariate -recalFile ../../testrun/recal_data.csv

The parameters used for this calculation need further testing to determine the influence of each parameter on the output results.

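The idea behind counting covariates is simple: every base is binned by (read group, reported quality, machine cycle, dinucleotide), and mismatches against the reference are counted per bin, skipping known dbSNP sites. The empirical quality of a bin is then the Phred-scaled observed error rate. A simplified sketch with invented counts (the +1 smoothing is an assumption in the spirit of what GATK does, not its exact formula):

```python
import math

# Phred-scaled empirical quality of a covariate bin,
# with +1 smoothing so bins without mismatches stay finite.
def empirical_quality(mismatches, observations):
    return -10 * math.log10((mismatches + 1) / (observations + 1))

# One bin: (read group, reported Q, cycle, dinucleotide) -> (mismatches, observations)
bins = {("s_2", 30, 12, "AC"): (99, 999)}
for key, (mm, obs) in bins.items():
    print(key, round(empirical_quality(mm, obs), 1))  # a 10% error rate gives Q10
```
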
=== 6) Quality value recalibration (Table recalibration) ===

This step performs the actual quality value recalibration, using the generated CSV file as input. The output of this process is the alignment file with recalibrated quality values, in BAM format. This step was executed using the following command:

java -jar GenomeAnalysisTK.jar -R resources/index_hg36.fa --default_platform illumina -I ../../testrun/s_2_aligned_sorted.bam -T !TableRecalibration -outputBam ../../testrun/s_2_aligned_recalibrated.bam -recalFile ../../testrun/recal_data.csv

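!TableRecalibration then walks the BAM again and replaces each base's reported quality with the empirical quality of its covariate bin from the CSV. A minimal sketch of that lookup (the table content is invented):

```python
# Recalibration table: covariate bin -> empirical quality from the CSV of step 5.
recal_table = {("s_2", 30, 12, "AC"): 23.9}

def recalibrate(read_group, reported_q, cycle, dinuc):
    # Fall back to the reported quality for bins that were never observed.
    return recal_table.get((read_group, reported_q, cycle, dinuc), reported_q)

print(recalibrate("s_2", 30, 12, "AC"))  # -> 23.9
print(recalibrate("s_2", 30, 12, "GG"))  # -> 30
```
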
=== 7) Sort & index recalibrated alignment ===

The resulting BAM file needs to be sorted and indexed again. This needs to be repeated every time an analysis or calculation has been performed on a BAM file. To sort and index this file, use the commands explained in step 4.

=== 8) Quality value recalibration (Count covariates) ===

This step is executed on the BAM file obtained after the quality value recalibration. That file is used as input; the command is otherwise exactly the same as in step 5.

=== 9) Generate graphs/stats ===
To generate several statistics and graphs on the recalibrated alignment, the !AnalyzeCovariates.jar tool can be used. This tool uses the generated CSV file as input. The output is a directory containing several statistics and graphs about the quality values etc. The following command was used:

java -jar !AnalyzeCovariates.jar -recalFile ../../testrun/recal_data.csv -outputDir ../../testrun/analyzecovariates_output -resources resources/index_hg36.fa -ignoreQ 5

=== 10) Generate graphs/stats ===
This step calculates the same as step 9, but uses the CSV file from step 5 as input, so its output can be compared to that of step 9. By comparing the output of both steps, the increase and changes in quality values etc. can be visualized.

== D. SNP & indel calling ==
This part of the pipeline involves the tools for SNP and indel calling, as well as local realignment and the filtering of potential SNPs. The following steps divide this part into sub-processes.

=== 11) Local realignment around indels ===

The first step is to determine small suspicious intervals in the alignment. These intervals will then be realigned to get an optimal result. The intervals were calculated using the following command:

java -jar GenomeAnalysisTK.jar -T !RealignerTargetCreator -I ../../testrun/s_2_aligned_recalibrated_sorted.bam -R resources/index_hg36.fa -o ../../testrun/forRealigner.intervals -D resources/hg36/snp130.txt

This process outputs a file containing the intervals on which realignment should be executed.

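An intervals file like forRealigner.intervals can be pictured as nearby suspicious positions (indels, mismatch clusters) merged into regions. A toy sketch of such merging (the positions and gap threshold are invented, not the walker's actual heuristics):

```python
# Merge suspicious positions that lie within `max_gap` bases of each other
# into candidate realignment intervals.
def merge_positions(positions, max_gap=10):
    intervals = []
    for pos in sorted(positions):
        if intervals and pos - intervals[-1][1] <= max_gap:
            intervals[-1][1] = pos
        else:
            intervals.append([pos, pos])
    return [tuple(iv) for iv in intervals]

print(merge_positions([105, 100, 300]))  # -> [(100, 105), (300, 300)]
```
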
=== 12) Cleaning BAM alignment file by realignment ===

For each detected interval a local realignment takes place. This results in a cleaned BAM file on which SNP and indel calling takes place. The local realignment was performed using the following command:

java -Djava.io.tmpdir=/tmp -jar GenomeAnalysisTK.jar -I ../../testrun/s_2_aligned_recalibrated_sorted.bam -R resources/index_hg36.fa -T !IndelRealigner -targetIntervals ../../testrun/forRealigner.intervals --output ../../testrun/s_2_aligned_cleaned.bam -D resources/hg36/snp130.txt

=== 13) Sort & index cleaned alignment ===
The cleaned BAM file is again sorted and indexed using the commands as described in step 4.

=== 14) Generate raw indel calls ===
In this step raw indel calls are made. This outputs two BED files: the first containing the raw indels, the second detailed output. This process was executed using the following command:

java -jar GenomeAnalysisTK.jar -T !IndelGenotyperV2 -R resources/index_hg36.fa -I ../../testrun/s_2_aligned_cleaned.bam -O ../../testrun/indels.raw.bed -o ../../testrun/detailed.output.bed -verbose -mnr 1000000

The generated raw indel file will later be used in step 17.

=== 15) Filter raw indels ===
To retrieve the actual list of indels this last step needs to be performed. It results in a filtered list containing all the detected indels. The following command (a Perl script) was used:

perl ../perl/filterSingleSampleCalls.pl --calls ../../testrun/detailed.output.bed --max_cons_av_mm 3.0 --max_cons_nqs_av_mm 0.5 --mode ANNOTATE > ../../testrun/indels.filtered.bed

=== 16) Making SNP calls ===
This process produces a VCF file containing raw SNP calls. This list will be filtered in step 18, using output from step 17. To generate the list of raw SNPs the following command was used:

java -jar GenomeAnalysisTK.jar -R resources/index_hg36.fa -T !UnifiedGenotyper -I ../../testrun/s_2_aligned_cleaned.bam -D resources/hg36/snp130.txt -varout ../../testrun/s2_snps.raw.vcf -stand_call_conf 30.0

=== 17) SNP filter pre-processing ===
This step uses the raw indel file created in step 14. Based on the regions described in that file, the tool masks out SNPs located near these potential indels. The output of this step is used for filtering the SNP calls in step 18. The following command was used:

python ../python/makeIndelMask.py ../../testrun/indels.raw.bed 10 ../../testrun/indels.mask.bed

The number in this command is the number of bases that will be included on either side of the indel.

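The effect of makeIndelMask.py can be sketched as padding each raw indel interval on both sides; with 10 as above, an indel spanning 1000-1001 masks 990-1011. A Python sketch (BED intervals are 0-based and end-exclusive; the example interval is invented):

```python
# Pad each raw indel interval by `pad` bases on either side,
# producing the mask used to filter nearby SNP calls.
def make_indel_mask(indels, pad=10):
    return [(chrom, max(0, start - pad), end + pad)
            for chrom, start, end in indels]

print(make_indel_mask([("chr1", 1000, 1001)]))  # -> [('chr1', 990, 1011)]
```
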
=== 18) Filter SNP calls ===
The last step is the filtering of the list of SNPs. This filtering step supports a number of additional parameters which haven't been tested yet; these are therefore not included in this command. The following command was executed:

java -jar GenomeAnalysisTK.jar -T !VariantFiltration -R resources/index_hg36.fa -O ../../testrun/s2_snps.filtered.vcf -B ../../testrun/s2_snps.raw.vcf -B ../../testrun/indels.mask.bed -maskName !InDel -clusterWindowSize 10 -filterExpression "AB > 0.75 && DP > 40 || DP > 100 || MQ0 > 40 || SB > -0.10" -filterName "!StandardFilters" -filterExpression "MQ0 >= 4 && ((MQ0 / (1.0 * DP)) > 0.1)" -filterName "HARD_TO_VALIDATE"

The output of this step is the list of SNPs found in this individual. More information on the parameters used can be found at http://www.broadinstitute.org/gsa/wiki/index.php/VariantFiltrationWalker and http://www.broadinstitute.org/gsa/wiki/index.php/Using_JEXL_expressions

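Read as predicates over the per-site VCF annotations (allele balance AB, depth DP, mapping-quality-zero count MQ0, strand bias SB), the JEXL expressions simply flag a site when a clause fires. A Python paraphrase of the two filters (the annotation values are invented; the HARD_TO_VALIDATE expression is shown in its commonly used form with the MQ0/DP ratio):

```python
# Per-site annotations as they would appear in the raw VCF INFO field.
site = {"AB": 0.80, "DP": 55, "MQ0": 2, "SB": -0.5}

def standard_filters(s):
    # "AB > 0.75 && DP > 40 || DP > 100 || MQ0 > 40 || SB > -0.10"
    return ((s["AB"] > 0.75 and s["DP"] > 40)
            or s["DP"] > 100 or s["MQ0"] > 40 or s["SB"] > -0.10)

def hard_to_validate(s):
    # "MQ0 >= 4 && ((MQ0 / (1.0 * DP)) > 0.1)"
    return s["MQ0"] >= 4 and (s["MQ0"] / (1.0 * s["DP"])) > 0.1

print(standard_filters(site))  # -> True (the AB/DP clause fires, so the site is flagged)
print(hard_to_validate(site))  # -> False
```
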
== E. Calculating quality metrics & removing duplicate reads ==

To remove duplicate reads after alignment, PICARD can be used. This tool can also calculate different metrics on alignment data. Therefore PICARD was installed and tested on an aligned and sorted BAM file. A process overview can be seen in figure 3. This process starts at step 4 of the analysis pipeline and uses the sorted alignment file as input. A complete test run was also performed with this tool and the developed pipeline. The commands, results etc. can be found in Complete_Picard_test_run.doc.

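The core of PICARD's duplicate removal can be pictured as grouping reads by alignment position and strand and keeping only the best one per group. A toy sketch (the reads and scores are invented; real !MarkDuplicates also considers library and mate position):

```python
# Keep, per (chrom, pos, strand) group, the read with the highest summed
# base quality; the others are treated as PCR/optical duplicates.
reads = [
    ("chr1", 100, "+", "readA", 1200),  # (chrom, pos, strand, name, quality sum)
    ("chr1", 100, "+", "readB", 900),
    ("chr1", 250, "-", "readC", 1100),
]
best = {}
for chrom, pos, strand, name, qual in reads:
    key = (chrom, pos, strand)
    if key not in best or qual > best[key][1]:
        best[key] = (name, qual)

print(sorted(name for name, _ in best.values()))  # -> ['readA', 'readC']
```
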
== Remarks & Suggestions ==
There are several other tools which could be implemented in the pipeline; one of these is an interface to Beagle. This tool is only usable in an experimental way at the moment. The only use cases supported by this interface are:

 * Inferring missing genotype data from call sets (e.g. for lack of coverage in low-pass data)
 * Genotype inference for unrelated individuals.

The basic workflow for this interface is as follows:

 * After variants are called and possibly filtered, the GATK walker !ProduceBeagleInputWalker takes the resulting VCF as input and produces a likelihood file in BEAGLE format.
 * The user runs BEAGLE with this likelihood file specified as input.
 * After Beagle runs, the user must unzip the resulting output files (.gprobs, .phased) containing posterior genotype probabilities and phased haplotypes.
 * The user can then run the GATK walker !BeagleOutputToVCFWalker to produce a new VCF with updated data. The new VCF will contain updated genotypes as well as updated annotations.

GATK can also run processes in parallel. This option is only supported by a limited number of tools; more information on this can be found at: http://www.broadinstitute.org/gsa/wiki/index.php/Parallelism_and_the_GATK

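Beyond GATK's built-in support, an easy scatter-gather pattern is to run the same walker once per chromosome and merge the outputs afterwards. A hedged sketch using Python's multiprocessing (the worker is a stand-in; in practice it would launch a GATK job restricted to one chromosome with -L):

```python
from multiprocessing import Pool

# Stand-in for launching one GATK job per chromosome; a real worker would
# run e.g. "java -jar GenomeAnalysisTK.jar ... -L chrom" and return its output path.
def run_on_chrom(chrom):
    return (chrom, "s2_snps.%s.vcf" % chrom)

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        results = pool.map(run_on_chrom, ["chr1", "chr2", "chr3"])
    print(results)
```
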
= Manuals, help etc =
More information about the GATK can be found at:

http://www.broadinstitute.org/gsa/wiki/index.php/The_Genome_Analysis_Toolkit

A forum containing a lot of background information can be found at:

http://getsatisfaction.com/gsa

Information on the other tools used can be found at:

SAMtools: http://samtools.sourceforge.net/

PICARD: http://picard.sourceforge.net/index.shtml

Another source of valuable information on anything NGS-related:

http://seqanswers.com/forums/index.php

[[BR]]
= Appendix A – Schematic overview of the pipeline =

''Figure 1. Overview of the analysis pipeline created using the GATK. The steps/processes are all numbered. More information on these steps can be found in the corresponding text sections.''

''Figure 2. Overview of the analysis pipeline created using the GATK. This part starts at step 7 of figure 1. The steps/processes are all numbered. More information on these steps can be found in the corresponding text sections.''

''Figure 3. Overview of the PICARD pipeline. This tool performs several quality checks after the alignment step has completed.''