
Using CINECA

Content

  1. CINECA HPC system
  2. CINECA servers
  3. PICO
  4. Batch scheduler: PBS

 

1. CINECA HPC system

The CINECA HPC system is a high-performance computing infrastructure hosted by CINECA in Bologna, Italy.

To get access to the system you need to do the following:

  1. Register on the system at this URL: https://userdb.hpc.cineca.it/user/register
  2. Once you receive your credentials, connect to the UserDB portal (https://userdb.hpc.cineca.it/user/r)
  3. Go to the “HPC Access” section (the link is listed among the Available Services in the left menu)
  4. Complete the three sub-sections:
    1. – Documents for HPC (upload a document, fill in the mandatory fields, and accept the “User Responsibilities” agreement)
    2. – Institution: already complete (you filled in the required fields)
    3. – Personal Data: already complete (you filled in the fields and accepted the agreement)

Once finished, a Submit button will appear on the HPC Access page. Click on it to request HPC access (that is, a username and password to log into the clusters). After a short verification of the data you entered, you will receive your credentials in two separate e-mails.

The credentials become active on the servers approximately two hours after you receive the e-mails.

2. CINECA servers

At the time of writing there are two main servers: GALILEO and PICO. For bioinformatics work the suggested system is PICO, which comes with a wide variety of tools already installed.

3. PICO

To access PICO, use the standard ssh command:

ssh username@login.pico.cineca.it

For a detailed description of PICO’s infrastructure see here: PICO userguide

Resource balance (saldo)

To get an overview of the currently available resources, use this command:

saldo -b

which returns both the disk-space and CPU-hours balance (saldo).
For the DRES filesystem the command is

cindata

Several projects might be active and ready to use. Please refer to Stefano Calza for an updated overview of which project to rely on for the work at hand.
All projects share a common DRES filesystem that can be used for storing data and sharing data between projects.

Data management

Detailed information on data management in CINECA can be found here: Data management

Modules

The PICO system, just like UPPMAX, uses environment modules to provide additional software. To list the available modules run

module avail

To load advanced modules run

module load profile/advanced

Listing the modules will now show a whole new set of tools. For example, to load R:

module load gnu/4.8.3
module load r/3.2.2

4. Batch scheduler: PBS

The batch scheduler installed on the CINECA machines is PBS, which is similar to UPPMAX’s scheduler (SLURM) apart from some minor differences. To create batch files and find out all the details, please refer to the PBS userguide.
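
As a quick orientation, a minimal PBS job script might look like the sketch below; the job name, account name, resource requests and payload are placeholders to adapt to your project. Submit it with qsub job.sh.

#!/bin/bash
#PBS -N example_job
#PBS -l select=1:ncpus=4:mem=8GB   # placeholder resource request
#PBS -l walltime=04:00:00
#PBS -A <account_name>             # placeholder: your CINECA project account

cd $PBS_O_WORKDIR
module load profile/advanced
module load gnu/4.8.3
module load r/3.2.2
Rscript analysis.R                 # placeholder payload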

 

Driver Genes


Computing driver-gene score using integrated genomics and transcriptomics profiles of tumor and matched-normal tissues 


Content

  1. Introduction
  2. Flowchart
  3. Example usage
  4. References

 1. Introduction

All cancers arise as a result of somatically acquired changes in the DNA of cancer cells, yet not all somatic abnormalities found in a cancer genome are involved in tumor development (carcinogenesis). Passenger mutations occur by genetic hitchhiking in an unstable environment and they have no effect on the fitness of a clone or the tumor progression. Distinguishing driver mutations actively involved in carcinogenesis from passenger mutations is a key step to understand the mechanism of tumor emergence and evolution, and to determine potential therapeutic targets.

With the progress of sequencing methods, full profiles of tumor genomes and exomes can now be obtained, which enables an unbiased search for driver mutations. Methods based on functional and/or frequency approaches have been proposed, e.g. DOTS-Finder, MuSiC and OncodriveFM.

DNA and RNA sequencing of tumor and matched-normal tissues from the same patient allows accurate detection of somatic mutations and of differential expression at the isoform level. To integrate mutation, expression and functional information from the different omics data, we are developing a pipeline to calculate a driver-gene score (DGscore). To contribute to the score, a gene has to:

  • be frequently mutated,
  • carry mutations with impact on the protein-coding sequence,
  • exhibit a large tumor-normal differential expression, and
  • be functionally linked to many differentially expressed neighbours in a functional gene network, such as a protein-protein interaction network.

 2. Flowchart

We have developed a computational approach to calculate driver-gene scores using existing bioinformatics tools, tools previously developed by our group, and a set of in-house bash, python and R scripts. For learning purposes the approach can be viewed in five steps, shown in the gray boxes of Figure 1. Briefly, these are: 1) estimating gene and isoform expression using a joint statistical model accounting for non-uniform isoform-specific read distribution, as implemented in our Sequgio pipeline; 2) preprocessing BAM files; 3) assessing the statistical significance of the association between allelic counts and tumor/normal status for somatic variant calling; 4) predicting the impact of somatic mutations on protein-coding genes; 5) integrating gene and isoform expression and predicted somatic mutations to derive driver-gene z-scores using Network Enrichment Analysis.

Below, we explain each step in more detail, providing examples of usage and links to more detailed descriptions. We also provide ready-to-download scripts, where applicable, to speed up the learning and setup process. It is recommended to run all calculations on a high-performance computing cluster, e.g. UPPMAX for users in Sweden. Although our approach is not system dependent, the majority of the scripts have been developed and tested on UPPMAX, and we will therefore assume access to a UNIX environment in the description below. To run our pipeline effectively, various bioinformatics approaches and tools have to be understood. Unless one is already familiar with GATK and snpEff and our previously developed methods, Sequgio and Network Enrichment Analysis, setting up the full run may take several days. As the approach is built upon many tools that we feel are best put together by the user, we focus on providing the essential know-how on using our approach, and assume basic bioinformatics knowledge.

The complete run for one patient, with 6GB tumor and matched-normal .BAM files (each) from whole-exome sequencing (WXS-seq) and 5GB tumor and matched-normal .BAM files (each) from RNA sequencing (RNA-seq), can be completed in 24h on UPPMAX using 4 cores.


Figure 1. Flowchart of computational identification of the driver genes based on the tumor and matched-normal RNA-seq and Exome-seq tissue data; AGS, altered gene sets; FGS, functional gene set. Step 1: estimating gene- and isoform expression using joint statistical model accounting for non-uniform isoform-specific read distribution as implemented in our Sequgio pipeline; Step 2: preprocessing BAM files; Step 3: assessing statistical significance of association between allelic counts and tumor/normal status for somatic variant calling; Step 4: predicting impact of somatic mutations on protein coding genes; Step 5: integrating gene- and isoform expressions and predicted somatic mutations to derive driver genes z-scores using Network Enrichment Analysis.


 3. Example usage

To use the Driver Genes approach, you need 4 BAM files per patient: tumor and matched-normal WXS-seq data, and tumor and matched-normal RNA-seq data. These files should be at least sorted and indexed, but we also recommend marking duplicates and removing reads with a quality score of 0. An example script showing preparation of the BAM files can be found in our SOMAC wiki post (More on BAM processing); a minimal sketch follows below.
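
For orientation, a minimal sketch of such preparation (file names are placeholders, and interpreting the quality filter as a mapping-quality filter is an assumption here; the SOMAC post contains the full script):

# Sort and index the BAM file (placeholder file names)
samtools sort -o sample.sorted.bam sample.bam
samtools index sample.sorted.bam

# Mark duplicates with Picard, then drop reads with mapping quality 0
java -jar picard.jar MarkDuplicates I=sample.sorted.bam O=sample.dedup.bam M=dup_metrics.txt
samtools view -b -q 1 sample.dedup.bam > sample.clean.bam
samtools index sample.clean.bam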

Step 1: Estimating gene- and isoform expression (Sequgio)

Sequgio uses a joint statistical model that accounts for non-uniform isoform-specific read distribution when estimating gene isoform expression. Sequgio is implemented as an R package, available from GitHub and soon to be available as part of Bioconductor. Example usage and data have been released with the Sequgio publication (Suo et al. 2013) and are described in detail in another post of our Wiki. Briefly, using RNA-seq data (both tumor and normal for each patient) we 1) create an annotation template, 2) create a design matrix, 3) import BAM files and create a count matrix, and 4) fit the model to obtain gene-isoform expression estimates; a sketch of these calls follows below.
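
As a quick orientation (txdb.ucsc and bflist are assumed to be an annotation object and a BamFileList prepared beforehand; full details and data are in the Sequgio post of this Wiki):

# Sketch of the four Sequgio steps
library(Sequgio)
txdb <- reshapeTxDb(txdb.ucsc, probelen = 72L)                    # 1) annotation template
Design <- makeXmatrix(txdb, method = "PE", mulen = 155, sd = 50)  # 2) design matrices
seqObj <- setCounts(bflist, txdb, fileName = "counts")            # 3) import BAM files...
doCounts(seqObj)
allCounts <- getCounts(seqObj)                                    #    ...and build the count matrix
fits <- lapply(names(Design), fitModels,                          # 4) fit the model
               design = Design, counts = allCounts)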

Here you can download R script containing the example usage and data as released with Sequgio publication. The script contains explanatory comments.

Download sequgio.R script: sequgio.R

Step 2: Preprocessing BAM files (GATK)

Variant calling is not free of errors; major sources include erroneous realignment in low-complexity regions and an incomplete reference genome with respect to the sample. It is therefore good practice to include post-alignment BAM processing steps, including realignment and base quality recalibration, especially when using variant callers that do not perform local realignment.

We used GATK for post-alignment realignment and base quality recalibration, and we refer you to the well-documented GATK website for details. An example bash script can be found below.

This step can also be performed using our SOMAC v01 pipeline, as described in Step 3: Statistical significance of association between allelic counts and tumor/normal status for somatic variant calling (SOMAC).

#!/bin/bash

# GATK realignment and base quality recalibration example.sh script
# To be run from the directory where example.bam and example.bam.bai are located

# Specify paths to GATK, ref.fasta and indels.vcf
path2GATK="/sw/apps/bioinfo"
refFasta="/proj/b2014296/DATA/FILES/Homo_sapiens_assembly19.fasta"
refKnown="/proj/b2014296/DATA/FILES/Mills_and_1000G_gold_standard.indels.b37.vcf"

# GATK (3.3.0) RealignerTargetCreator
java -Xmx2g -jar $path2GATK/GATK/3.3.0/GenomeAnalysisTK.jar -T RealignerTargetCreator -R $refFasta -I example.bam -o forIndelRealigner.intervals -known $refKnown

# GATK (3.3.0) IndelRealigner
java -Xmx2g -jar $path2GATK/GATK/3.3.0/GenomeAnalysisTK.jar -T IndelRealigner -R $refFasta -I example.bam -targetIntervals forIndelRealigner.intervals -o realigned.bam

# GATK (3.3.0) BaseRecalibrator
java -Xmx2g -jar $path2GATK/GATK/3.3.0/GenomeAnalysisTK.jar -T BaseRecalibrator -R $refFasta -I realigned.bam -knownSites $refKnown -o recal.grp

# GATK (3.3.0) PrintReads
java -Xmx2g -jar $path2GATK/GATK/3.3.0/GenomeAnalysisTK.jar -T PrintReads -R $refFasta -I realigned.bam -BQSR recal.grp -o recalibrated.bam

Download example post alignment BAM processing .sh script here: GATK_example.sh

Step 3: Statistical significance of association between allelic counts and tumor/normal status for somatic variant calling (SOMAC)

The next step, after processing the whole-exome sequencing data, is to call somatic variants using the paired tumor and normal samples. To this end we have developed the SOMAC pipeline, based on standard parametric statistical tests (chi-square) and a novel multidimensional false discovery rate (FDR) estimation (Ploner et al. 2006). The pipeline is described in detail here. As part of this pipeline it is possible to automatically run the BAM post-alignment processing (realignment and base quality recalibration as described in step 2).

# Running SOMAC
> bash path/to/somac config.TN inputFiles.txt partition.txt

SOMAC output is a table of potential somatic variants including chi-square, fdr2d and LRT statistics (in .txt and .RData format) and a .vcf file containing the potential somatic variants. These can be further filtered using the desired statistical criteria, e.g. chi-square < 0.001; a sketch follows below.
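
For instance, a minimal R sketch of such filtering (the file name and the column name chisq.p are placeholders for the corresponding fields of the SOMAC output table):

# Read the SOMAC output table and keep variants passing the threshold
somac <- read.table("somac_output.txt", header = TRUE, sep = "\t")
keep <- somac[somac$chisq.p < 0.001, ]   # 'chisq.p' is a placeholder column name
write.table(keep, "somac_filtered.txt", sep = "\t", row.names = FALSE, quote = FALSE)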

[Figure: example SOMAC output table]

Download SOMAC scripts: Somac_v01

Step 4: Predicting impact of somatic mutations on protein coding genes (snpEff)

To identify the driver genes, we focus on significant high- and moderate-impact mutations, that is, mutations leading, for instance, to exon deletions, frame shifts, or codon insertions and deletions. To identify these we used snpEff, a variant annotation and effect prediction tool (Cingolani et al. 2012). For example, when searching for driver genes predictive of breast cancer survival based on the 60 paired tumor and normal TCGA samples, significant isoforms (top 1000 ranked by chi-square, with chi-square < 0.001) mutated in at least ten patients and predicted to have a high or moderate impact in more than two patients were kept on the list of potential driver genes.

# Running snpEff
> java -Xmx4g -jar /path/to/snpEff.jar GRCh37.75 input.vcf > output.vcf

Download snpEff here & more on snpEff can be found here

Step 5: Integrating gene- and isoform expressions and predicted somatic mutations to derive driver genes z-scores (Network Enrichment Analysis)

Network Enrichment Analysis (NEA) extends the overlap statistics of gene-set enrichment analysis to network links between genes in the experimental set and those in the functional categories (Alexeyenko et al. 2012). We use it to uncover potential driver genes that have strong network connectivity with an altered gene set (AGS). Each mutated gene is referred to as a functional gene set (FGS), and the AGS are derived from the differentially expressed genes and isoforms (Sequgio). A quantitative enrichment z-score is calculated to assess the statistical significance of over-represented direct links between AGS and FGS; the z-score allows prioritisation of the mutated genes (Alexeyenko et al. 2012).

To identify common-driver genes we set AGS as genes with differentially expressed isoform levels between tumor and normal.

To identify patient-specific driver genes we set AGS as genes with large tumor/normal expression ratio within each patient.
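
As a purely illustrative sketch (tumorExpr, normalExpr and the cutoff are hypothetical placeholders; the actual AGS are derived from the Sequgio estimates):

# Hypothetical AGS: genes with a large tumor/normal expression ratio in one patient
lfc <- log2(tumorExpr / normalExpr)   # placeholder expression vectors, same gene order
AGS <- names(lfc)[abs(lfc) > 1]       # the cutoff of 1 is an arbitrary example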

# Running NEA in R
> require(nea)
> res.MNEA <- nea(ags=AGS, fgs=FGS, fgslib=NULL, network=mnet, pnet=NET, nperm=50)

where: 
ags: a vector of altered genes or isoforms, e.g. obtained from Sequgio
fgs: a list of functional gene sets, e.g. obtained from mutation effect prediction
fgslib: a character string naming the annotation data
network: a vector of gene pairs representing the network links
pnet: a list of randomly permuted networks
nperm: number of permutations

# For more detailed usage
> ?nea

Download NEA here and follow the fully worked-out example in Section C of Additional file 1 of Alexeyenko et al. (2012)


Questions?

yudi.pawitan@ki.se


References

Alexeyenko, Andrey, Woojoo Lee, Maria Pernemalm, Justin Guegan, Philippe Dessen, Vladimir Lazar, Janne Lehtiö, and Yudi Pawitan. 2012. “Network Enrichment Analysis: Extension of Gene-Set Enrichment Analysis to Gene Networks.” BMC Bioinformatics 13: 226.

Cingolani, Pablo, Adrian Platts, Le Lily Wang, Melissa Coon, Tung Nguyen, Luan Wang, Susan J Land, Xiangyi Lu, and Douglas M Ruden. 2012. “A Program for Annotating and Predicting the Effects of Single Nucleotide Polymorphisms, SnpEff: SNPs in the Genome of Drosophila Melanogaster Strain w1118; Iso-2; Iso-3.” Fly 6 (2): 80–92.

Ploner, Alexander, Stefano Calza, Arief Gusnanto, and Yudi Pawitan. 2006. “Multidimensional Local False Discovery Rate for Microarray Studies.” Bioinformatics (Oxford, England) 22 (5): 556–65.

Suo, Chen, Stefano Calza, Agus Salim, and Yudi Pawitan. 2013. “Joint Estimation of Isoform Expression and Isoform-Specific Read Distribution Using Multi-Sample RNA-Seq Data.” Bioinformatics, 1–8.


TCGA data

TCGA

This post collects all the information regarding accessing data from the TCGA database.

Data Manifest

The manifest stores summary information on the data available in TCGA. The tab-separated file can be downloaded as follows:


wget https://cghub.ucsc.edu/reports/SUMMARY_STATS/LATEST_MANIFEST.tsv
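
To get a quick look at the columns the manifest provides (a hedged sketch; the exact column set depends on the current manifest version):

# List the manifest's column headers, one per line
head -1 LATEST_MANIFEST.tsv | tr '\t' '\n'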

Sequgio

An Introduction to Sequgio

Contents

  1. Introduction
  2. Installing Sequgio
  3. Example data
    3.1 Step 1: creating annotation template (txdb)
    3.2 Step 2: creating design matrix
    3.3 Step 3: importing BAM files and creating the counts matrix
    3.4 Step 4: fit models
  4. References

1 Introduction

This document provides a brief guide to the Sequgio package, a package for gene isoform expression estimation using a joint statistical model that accounts for non-uniform isoform-specific read distribution (Suo et al. 2014). There are four components to this package:

  • constructing annotation template txdb
  • constructing design matrices used in expression estimation
  • importing BAM files to create counts matrix
  • estimating the expression levels, and optionally the read distribution and standard error of the expression estimates

Read intensity in RNA-sequencing data is often not uniform, in which case the current standard methods of gene and isoform expression estimation yield biased estimates. Sequgio accounts for non-uniform isoform-specific read distribution, yielding improved gene isoform expression estimates. A statistical regularization with an L1 smoothing penalty is imposed to control the estimation. Also, for estimability reasons, the method uses information across samples from the same gene (Suo et al. 2014).

2 Installing Sequgio

Sequgio is implemented as an R package and will soon be available at bioconductor.org. In the meantime, please use the development version found on GitHub.

To obtain the Sequgio package from GitHub (available as of 2015/01/20), execute the following commands from R:

install.packages("devtools")
library(devtools)
dev_mode(on=T)
install_github("Sequgio","Senbee",ref = "Stable")
dev_mode(on=F) 

3 Example data

We show the functionality of the Sequgio package using the RNA sequencing samples provided in the RNAseqData.HNRNPC.bam.chr14 package. For demonstration purposes, Sequgio provides the annotation template object (TxDb, obtained in Step 1) and the design matrix (Design, obtained in Step 2).

> library(RNAseqData.HNRNPC.bam.chr14)
> library("TxDb.Hsapiens.UCSC.hg19.knownGene")
> data("TxDb")
> data("Design")

For better performance the package supports parallel computing via the BiocParallel package, which is loaded automatically. For parallel processing, set the parameters to those suiting your platform. We will use serial computation here:
> param = SerialParam()
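
On a multi-core machine one could instead use, for example, a multicore back-end (adjust the number of workers to your platform):

> param = MulticoreParam(workers = 4)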

3.1 Step 1: creating annotation template (txdb)

The first step is to provide a TranscriptDb object storing the transcript annotation information. Here, we will use the one provided by the package TxDb.Hsapiens.UCSC.hg19.knownGene.

The TranscriptDb object is then preprocessed to generate disjoint regions using the reshapeTxDb function. To obtain accurate results, set the read-length parameter to match your experiment (72 nucleotides for the example data).

Further, since the example data contain only chromosome 14, we limit the TranscriptDb to this chromosome as well, reducing the computational burden.

# Example data
> seqs = seqnames(seqinfo(TxDb.Hsapiens.UCSC.hg19.knownGene))
> sel =  rep(FALSE,length(seqs))
> names(sel) =  seqs
> sel["chr14"] = TRUE
> isActiveSeq(TxDb.Hsapiens.UCSC.hg19.knownGene) =  sel
> txdb = reshapeTxDb(TxDb.Hsapiens.UCSC.hg19.knownGene,probelen = 72L, mcpar=param)

# For real data, obtaining txdb could look like below
> txdb.ucsc = makeTranscriptDbFromGFF(file="path/to/genes.gtf", format="gtf", dataSource="http://cufflinks.cbcd.umd.edu/igenomes.html", species="hg19.2014-06-02-13-47-56")
> seqs = seqnames(seqinfo(txdb.ucsc))
> sel = rep(TRUE, length(seqs))
> names(sel) = seqs
> isActiveSeq(txdb.ucsc) = sel
> txdb = reshapeTxDb(txdb.ucsc, probelen = 48L, mcpar=param)

This step has to be repeated only if the experimental parameters change, e.g. annotation database, read length, with/without junctions, etc.

If the user wants to focus on a subset of the genes/features in the genome, this can be done using the argument include.only, which takes a GRanges object as input. The purpose is to limit resource usage when importing BAM files: counting will be restricted to the supplied regions.



# Let's consider only two genes for example

> GENES = genes(TxDb.Hsapiens.UCSC.hg19.knownGene)[c(1,3)]
> txdb = reshapeTxDb(TxDb.Hsapiens.UCSC.hg19.knownGene,probelen = 72L, 
            include.only=GENES, mcpar=param)

 

The ranges can also be supplied later in the pipeline (see setCounts).

3.2 Step 2: creating design matrix

The second step is to create design matrices for each “transcriptional unit” (see references). These matrices will be used in the fitting procedure.

Several parameters can be tuned. The main one specifies which kind of library the experiment uses: paired-end (“PE”) or single-end (“SE”, the default). This is set with the method argument. The required mulen argument provides an estimate of the average fragment length.

Here we will use paired-end data.

> Design <- makeXmatrix(txdb, method="PE", mulen=155, sd=50, mcpar=param)

Like step 1, this step has to be performed only once.

3.3 Step 3: importing BAM files and creating the counts matrix

Models are fit based on a matrix with read counts for every region in every sample. We will now import the aligned reads in the BAM files into an R object called ’allCounts’. To do so we create a target object storing the filenames (with full path) and the sample names to be used as count matrix headings. If BAI files are available, they can be provided in the target object.

The resulting object (allCounts) will contain, for every exon, the count of overlapping reads.

Let’s first create the BamFileList object pointing to the BAM files. The data are paired-end, so we set asMates to TRUE.

> bflist <- BamFileList(RNAseqData.HNRNPC.bam.chr14_BAMFILES, asMates=TRUE)

Then we create a seqCounts object that stores some information used in counting. A file name for the file-backed matrix can be provided, or a random one will be generated. The root directory where the files are stored can also be specified (defaults to the working directory).

If the backing files (*.bin and *.desc) are later moved elsewhere, the file-location information in the seqCounts object will have to be updated.

By default, the created seqCounts object is saved in the working directory, with the same fileName as the *.bin and *.desc files. If this object is lost, it can be recreated so that it links again to the *.bin file: to do so, specify seqCounts(…, existDesc=”fileName.desc”). A check on the number of exons/samples will be performed, but not on the sample and exon names.

> seqObj <- setCounts(bflist,txdb,fileName="test")

If the user would like to reduce I/O resource usage when reading in BAM files, counting can be restricted to specific regions of interest by specifying a set of ranges using includeOnly:


 # This shows whether any subset is already set
> includeOnly(seqObj)
 # Here we set it (see reshapeTxDb for a GRanges definition example)
> includeOnly(seqObj) = GENES
 # Or, to remove it
> includeOnly(seqObj) = NULL

We then perform the actual counting with the function doCounts (no return value). This function saves the counts in the shared/file-backed big.matrix object.

> doCounts(seqObj,mcpar=param)

We can finally import the counts into RAM, creating a standard R matrix.

> allCounts <- getCounts(seqObj)

If the same big.matrix is reused for further counting, it should first be reset to zero counts.

To do so, use the resetCounts function:

> resetCounts(seqObj)

Again, for ease of computation, the counts object is already provided.

> data("Counts")

We can see how many reads we counted:

> colSums(allCounts)

The counting procedure can also be performed on a single BAM file, or on a subset of a BamFileList.

For a single file we can proceed as follows:

> bfl <- BamFile(RNAseqData.HNRNPC.bam.chr14_BAMFILES[1],asMates=TRUE)
> seqObj1 <- setCounts(bfl,txdb,fileName='test')

and then perform the following steps as described.

For a subset of the samples in a seqCounts object we can proceed as in the following code, using column indicators (integers):

> doCounts(seqObj1,mcpar=param,which.sample=c(1L,4L))

or similarly using sample names

> doCounts(seqObj1,mcpar=param,which.sample=c('ERR127306','ERR127309'))

This last procedure allows different nodes of an HPC platform to count subsets of the samples in parallel (each node gets a different which.sample vector), while each node can use multiple cores to count individual samples. All the counts will be stored in the same shared matrix.
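
As a hedged illustration of such a setup on a SLURM system like UPPMAX (the project account and the count_subset.R driver script are hypothetical placeholders), each array task could count a different subset of the samples:

#!/bin/bash
#SBATCH -A b2012036            # project account (placeholder)
#SBATCH -p core -n 4           # cores per task
#SBATCH -t 04:00:00
#SBATCH --array=1-3            # one task per sample subset

# count_subset.R (hypothetical) maps the task id to a which.sample vector
# and calls doCounts(seqObj, mcpar=param, which.sample=...)
Rscript count_subset.R $SLURM_ARRAY_TASK_ID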

It is also possible to subset a single sample (or several samples) based on genomic position. This can be achieved using the filterBam function in the Rsamtools package.

For example, to read in the whole chr22 (in a human genome) we could proceed as follows:

bfl <- BamFile(RNAseqData.HNRNPC.bam.chr14_BAMFILES[1],asMates=TRUE,
               yieldSize=100000)
lenChr22 = seqlengths(txdb)['chr22']
bam.params = ScanBamParam(simpleCigar = FALSE, reverseComplement = FALSE, 
                what = c("qname", "qwidth", "mapq"), 
                flag = scanBamFlag(isPaired = TRUE, isUnmappedQuery = FALSE, 
                isDuplicate = FALSE, isNotPassingQualityControls = FALSE, 
                hasUnmappedMate = FALSE),
                which = RangesList(chr22=IRanges(1,lenChr22)))

# Filter the BAM file to the region of interest, then count from the filtered file
tmpFile = filterBam(bfl,tempfile(),param=bam.params)
seqObj <- setCounts(BamFile(tmpFile,asMates=TRUE),txdb,fileName="test")
doCounts(seqObj,mcpar=param)

3.3.1 Fixing QNAME

Some preprocessing pipelines deliver BAM files for paired-end reads where the two mates have slightly different QNAMEs. E.g., using the SRA toolkit to convert to fastq (fastq-dump) generates two fastq files, and the QNAMEs in each file are appended with a “.1” for the first mates and a “.2” for the second mates. Similarly, we might find that mates have different prefixes.

The mate-matching procedure implemented in Rsamtools requires mates to have the same QNAME, so trimming is required. This can be done when declaring the BamFile or BamFileList object, using the arguments qnamePrefixEnd or qnameSuffixStart.

QNAMEs are not actually a problem in this sample file; we just use it to demonstrate the usage.

First no trimming

> fl <- system.file("extdata", "ex1.bam", package="Rsamtools")
> param <- ScanBamParam(what="qname") 
> bf <- BamFile(fl, asMates=TRUE)
> scanBam(bf, param=param)[[1]]$qname[1:3]

then trim prefix

> qnamePrefixEnd(bf) = "_"
> scanBam(bf, param=param)[[1]]$qname[1:3]

and now trim suffix also

> qnameSuffixStart(bf) = ":"
> scanBam(bf, param=param)[[1]]$qname[1:3]
In a real situation the easiest approach is to specify the trimming character directly in the call to BamFile or BamFileList.

For example, in TCGA data (e.g. BRCA and LUNG) we have the following QNAME pattern for a pair of reads:

UNC14-SN744_125:6:2108:11845:164113/1
UNC14-SN744_125:6:2108:11845:164113/2

so we need to trim everything after (and including) “/”. We would then define the BamFile object (or list, if more than one) as follows:

> bf <- BamFile(Filename,asMates=TRUE,qnameSuffixStart="/") ## single file
> bfl <- BamFileList(Filenames,asMates=TRUE,qnameSuffixStart="/") ## more than one BAM

## ALTERNATIVE WAY FOR STEP 3 WITH BASH-PYTHON (without fixing Qname) ##

This alternative method takes roughly 15 minutes per BAM file on one core, and each BAM file can be counted in parallel (e.g. 12 samples = 12 parallel batch jobs).

1. Save your txdb into a text file.

> write.table(as.data.frame(txdb), "mytxdb.txt", sep="\t", row.names=FALSE, col.names=FALSE, quote=FALSE)

We already provide a pre-compiled txdb for general use: /proj/b2012036/Dhany/Sequgio/txdb_hg19.txt and /proj/b2012036/Dhany/Sequgio/txdb_grch.txt

2. Write the list of annotated exon-pairs.

Do this, so that we can ignore novel exon pairs in this counting step. Make sure you still have the txdb variable in R:

> ex_list <- split(values(txdb@unlistData)$exon_name,values(txdb@unlistData)$tx_name)
> reg_vec <- sapply(split(values(txdb@unlistData)$region_id,values(txdb@unlistData)$tx_name),function(x) x[1])
> sizes_ex_list <- sapply(ex_list,length)
> n.exons <- sum((sizes_ex_list^2+sizes_ex_list)/2)
> exons.names <- unique(.Call("makeExNames",ex_list,reg_vec,as.integer(n.exons)))
> write.table(exons.names, "exons19.txt", row.names=FALSE, col.names=FALSE, quote=FALSE, sep="\t")

We already provide a pre-compiled list of annotated exon-pairs for general use: /proj/b2012036/Dhany/Sequgio/exons19.txt and /proj/b2012036/Dhany/Sequgio/exonsGrch.txt

3. Preview your merged BAM file.

Go to your bash command line: either open a new bash terminal or quit R. Find the pattern of the headers (i.e. the first column) and answer the following questions:
(i) Can we tell which one is from pair 1 and 2? (0=no, 1=yes)
(ii) What’s the delimiter that separates the header and the pair identifier?
(iii) Is the header on the left or right of the delimiter? (0=left, 1=right)


> samtools view -f 2 merged.bam | head
UNC17-SN1277:47:C0Y3YACXX:4:2205:18544:178539/1 81      chr1    10551   57      48M   ...
UNC17-SN1277:47:C0Y3YACXX:4:2208:7653:169627/2  163     chr1    11193   54      48M   ...

In the above example: (i) yes (=1), (ii) “/”, (iii) left (=0). Thus, we define the “sep” parameter (comma-separated) as: sep=1,/,0

Beware that in some cases the pair identifier is not /1 and /2. You must decide whether there is a separator or not.

4. List which chromosomes you want to count.

First, you need to know the chromosome names used in the BAM files: 1,2,…,X,Y, or chr1,chr2,…,chrX,chrY. The names can be found in the second column of the BAM view (see above). Second, you need to specify which chromosomes you want to work with, because there are other chromosomes in the BAM file, e.g. chrM and patch/putative chromosomes. To include those other chromosomes, you need to check whether your txdb recognizes them, too.
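
A quick way to check which chromosome names are actually present is to inspect the BAM header:

# List the reference sequences (SN tags) declared in the BAM header
samtools view -H merged.bam | grep '@SQ' | head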

5. Run the Sequgio counting.

Run it from a bash command line (still outside R). Specify the merged BAM file, chromosomes, txdb file, and output file. For more than one sample, it is recommended to run in parallel (see UPPMAX), not serially. Never run many samples without a batch/interactive session! Here is an example of counting for one sample:


> cd /proj/b2012036/TCGA/BRCA_BAM_files/
> bash /proj/b2012036/Dhany/Sequgio/newsequgiocount.sh file=UNCID_1120271.5969d3bb-19fb-4bda-9483-db4791257b95.sorted_genome_alignments.bam sep=1,/,0 chr=chr1-22,chrX,chrY txdb=/proj/b2012036/Dhany/Sequgio/txdb_hg19.txt output=UNCID_1120271.5969d3bb-19fb-4bda-9483-db4791257b95.sorted_genome_alignments.bam.txt

6. Join the counts of several samples

Several samples need to have been counted before you can continue to the next step. After joining the sample counts into one file, you can come back to R.

The syntax is: python makeAllcountMatrix.py [list of annotated exon-pairs (see no. 2)] [countfile_1] … [countfile_N] [samplename_1] … [samplename_N] > out.txt


> cd /proj/b2012036/TCGA/BRCA_BAM_files/
> python /proj/b2012036/Dhany/Sequgio/makeAllcountMatrix.py /proj/b2012036/Dhany/Sequgio/exons19.txt UNCID_1310380.5db1c37a-2329-4bd3-baf2-89d26a8959fa.sorted_genome_alignments.bam.txt UNCID_1311429.2efa4fd0-1c0b-49c8-a11f-606d8618ec97.sorted_genome_alignments.bam.txt UNCID_1313654.c14f7362-25c3-4e1a-acd7-871bb22ea8a3.sorted_genome_alignments.bam.txt UNCID_1314025.4543cecd-6333-473b-b5ce-96d33ab47706.sorted_genome_alignments.bam.txt UNCID_1338780.aaec0cbc-a493-48f4-9cc7-32629880ee24.sorted_genome_alignments.bam.txt UNCID_1338780.aaec0cbc-a493-48f4-9cc7-32629880ee24.sorted_genome_alignments.bam.txt UNCID_1367534.c7d2b685-5f11-4879-82d5-e983b5dbb07d.sorted_genome_alignments.bam.txt 1310380 1311429 1313654 1314025 1338780 1338780 1367534 > sample.counts
# Back in R: read the joined counts into a matrix (adjust header/row.names to the file layout)
> allCounts <- as.matrix(read.table("sample.counts", header=TRUE, row.names=1))

3.4 Step 4: fit models

Using the region(exons)-by-sample counts matrix (allCounts) and the design matrices object (Design) we can now fit models.

# model fitting
data(Counts)
data(Design)
iGenes <- names(Design)
# Fit a single transcriptional unit (one element in Design)
fit1 <- fitModels(iGenes[22], design=Design, counts=allCounts)
# More than one, using list/for loops/mclapply/etc.
fit2 <- lapply(iGenes[21:22], fitModels, design=Design, counts=allCounts)
# Fit all
fitAll <- bplapply(iGenes, function(x) fitModels(x, design=Design, counts=allCounts), BPPARAM=param)

It is possible to get gene-level expression values by adding up the isoform estimates within each gene. To do so, proceed as follows:



> sumByGene(fitAll,txdb)

where fitAll is a list containing estimates (the output of fitModels) and txdb is either a data.frame or a GRangesList (as returned by reshapeTxDb). The required columns are “gene_id” and “tx_name”.

References

Suo, Chen, Stefano Calza, Agus Salim, and Yudi Pawitan. 2014. “Joint Estimation of Isoform Expression and Isoform-Specific Read Distribution Using Multi-Sample RNA-Seq Data.” Bioinformatics 30 (4): 506–13.