From: bassam al-kindy
Date: Fri, 25 Oct 2013 11:45:33 +0000 (+0200)
Subject: updating step 3 of dogma process
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chloroplast13.git/commitdiff_plain/d72cbe1c61987671906c2cce939728c280e868cb

updating step 3 of dogma process
---

diff --git a/annotated.tex b/annotated.tex
index 55c4324..7109504 100644
--- a/annotated.tex
+++ b/annotated.tex
@@ -138,9 +138,9 @@ With NCBI, the idea is to use the existing annotations of NCBI with chloroplast
 The trivial and simple idea to construct the core genome is based on the extraction of Genes names (as gene presence or absence). For instant, in this stage neither sequence comparison nor new annotation were made, we just want to extract all gene counts stored in each chloroplast genome then find the intersection core genes based on gene names.\\
 \textbf{Step I: pre-processing}\\
-The objective from this step is to organize, solve genes duplications, and generate sets of genes for each genome. The input to the system is a list genomes from NCBI stored as a \textit{.fasta} file that include a collection of Protein coding genes\cite{parra2007cegma,RDogma}(genes that produce protein) with its coding sequences.
-As a preparation step to achieve the set of core genes, we need to translate these genomes using \textit{BioPython} package\cite{chapman2000biopython}, and extracting all information needed to find the core genes. The process starts by converting each genome in fasta format to GenVision\cite{geneVision} format from DNASTAR, and this is not an easy job. The output from this operation is a lists of genes stored in a local database for genomes, their genes names and genes counts. In this stage, we will accumulate some Gene duplications with each genome treated. In other words, duplication in gene name can comes from genes fragments as long as chloroplast DNA sequences. Identical state, which it is the state that each gene present only one time in a genome (i.e Gene has no copy) without considering the position or gene orientation can be reached by filtering the database from redundant gene name. To do this, we have two solutions: first, we made an orthography checking. Orthography checking is used to merge fragments of a gene to be one gene so that we can solve a duplication.
-Second, we convert the list of genes names for each genome (i.e. after orthography check) in the database to be a set of genes names. Mathematically speaking, if $g=\left[g_1,g_2,g_3,g_1,g_3,g_4\right]$ is a list of genes names, by using the definition of a set in mathematics, we will have $set(g)=\{g_1,g_2,g_3,g_4\}$, where each gene represented only ones. With NCBI genomes, we do not have a problem of genes fragments because they already treated it, but there are a problem of genes orthography. This can generate the problem of gene lost in our method and effect in turn the core genes.
+The objective of this step is to organize the data, resolve gene duplications, and generate a set of genes for each genome. The input to the system is a list of genomes from NCBI, stored as \textit{.fasta} files, each including a collection of protein-coding genes\cite{parra2007cegma,RDogma} (genes that produce proteins) with their coding sequences.
+As a preparation step towards the set of core genes, we need to translate these genomes using the \textit{BioPython} package\cite{chapman2000biopython} and to extract all information needed to find the core genes.
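+As a minimal sketch only (independent of the DNASTAR conversion described next), the gene-name extraction and the later set conversion could look as follows, assuming the NCBI annotations are also available in GenBank format (a plain \textit{.fasta} file carries no gene names) and that the names sit in the \textit{gene} qualifiers:
+\begin{verbatim}
+from Bio import SeqIO
+
+def extract_gene_names(genbank_file):
+    # Collect every annotated gene name, duplicates included.
+    names = []
+    for record in SeqIO.parse(genbank_file, "genbank"):
+        for feature in record.features:
+            if feature.type == "gene" and "gene" in feature.qualifiers:
+                names.append(feature.qualifiers["gene"][0])
+    return names
+
+genes = extract_gene_names("genome.gbk")  # hypothetical file name
+gene_set = set(genes)   # duplicates removed, as in set(G) below
+print(len(genes), len(gene_set))
+\end{verbatim}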
+The process starts by converting each genome from fasta format to the GenVision\cite{geneVision} format from DNASTAR, which is not an easy job. The output of this operation is a list of genes for each genome, stored in a local database together with the gene names and gene counts. At this stage, some gene duplications accumulate with each genome treated. In other words, duplicated gene names can come from gene fragments occurring along the chloroplast DNA sequences. We define the \textit{identical state} as the state in which each gene is present exactly once in a genome (i.e., the gene has no copy), regardless of its position or orientation. This state can be reached by filtering redundant gene names out of the database. To do this, we have two solutions: first, we perform an orthography check, which merges the fragments of a gene into one gene.
+Second, we convert the list of gene names of each genome in the database (i.e., after the orthography check) into a set of gene names. Mathematically speaking, if $G=\left[g_1,g_2,g_3,g_1,g_3,g_4\right]$ is a list of gene names then, by the definition of a set, we have $set(G)=\{g_1,g_2,g_3,g_4\}$ and $|set(G)|=4$, where the cardinality $|set(G)|$ is the number of distinct genes in the set. With NCBI genomes, gene fragments are not a problem because they have already been treated, but gene orthography remains an issue. In our method, this can lead to lost genes and in turn affect the core genes.
 The whole process of extracting core genome based on genes names and counts among genomes is illustrate in Figure \ref{Fig2}.
 \begin{figure}[H]
@@ -152,8 +152,9 @@ The whole process of extracting core genome based on genes names and counts amon
 \end{figure}
 \textbf{Step II: Gene Intersection}\\
-The main objective of this step is try to find best core genes from sets of genes in the database. The idea for finding core genes is to collect in each iteration the maximum number of common genes. To do this, the system build an intersection core matrix(ICM). ICM here is a two dimensional symmetric matrix where each row and column represent the list of genomes in the local database. Each position in ICM stores the \textit{intersection scores}. Intersection Score(IS), is the score by intersect in each iteration two sets of genes for two different genomes in the database. Taking maximum score from each row and then taking the maximum of them will result to draw the two genomes with their maximum core. Then, the system remove these two genomes from ICM and add the core of them under a specific name to ICM for the calculation in next iteration. The core genes generated with its set of genes will store in a database for reused in the future. this process repeat until all genomes treated. If maximum intersection core(MIC) equal to 0, the system will avoid this intersection operation and ignore the genome that smash the maximum core genes.\\
-We observe that ICM will result to be very large because of the huge amount of data that it stores. In addition, this will results to be time and memory consuming for calculating the intersection scores by using just genes names. To increase the speed of calculations, we can calculate the upper triangle scores only and exclude diagonal scores. This will reduce whole processing time and memory to half.
-The time complexity for this process after enhancement changed from $O(n^2)$ to $O((n-1)\log{n})$.\\
+The goal of this step is to find the maximum core genes from the sets of genes in the database. The idea is to collect, in each iteration, the maximum number of common genes. To do this, the system builds an \textit{Intersection Core Matrix} (ICM). The ICM is a two-dimensional symmetric matrix in which each row and each column represents the set of genes of one genome in the local database. Each position in the ICM stores an \textit{Intersection Score} (IS): the cardinality of the core genes obtained by intersecting, in each iteration, the set of genes of one genome with each of the gene sets belonging to the rest of the genomes in the database. Taking the maximum cardinality of each row, and then the maximum over these, selects the two genomes sharing the maximum core. Mathematically speaking, if we have an $m \times n$ matrix, where $m$ and $n$ are the number of genomes in the database, let us consider $Z=max_{i
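+A minimal, non-authoritative sketch of this step (the genome names, gene names, and the helper \textit{build\_core\_genome} are illustrative assumptions, and stopping when the best score is zero is one possible interpretation):
+\begin{verbatim}
+def build_core_genome(genomes):
+    # genomes: dict mapping a genome name to its set of gene names.
+    step = 0
+    while len(genomes) > 1:
+        names = sorted(genomes)
+        best_score, best_pair = -1, None
+        # Upper triangle only: the ICM is symmetric and the diagonal
+        # (a genome against itself) carries no extra information.
+        for i in range(len(names)):
+            for j in range(i + 1, len(names)):
+                score = len(genomes[names[i]] & genomes[names[j]])  # IS
+                if score > best_score:
+                    best_score, best_pair = score, (names[i], names[j])
+        if best_score == 0:
+            break  # even the best pair shares no genes; stop here
+        a, b = best_pair
+        core = genomes.pop(a) & genomes.pop(b)
+        step += 1
+        genomes["core_%d" % step] = core  # reinsert under a new name
+    return genomes
+
+cores = build_core_genome({
+    "genome_A": {"rbcL", "psbA", "matK"},
+    "genome_B": {"rbcL", "psbA", "ndhF"},
+    "genome_C": {"rbcL", "atpA"},
+})  # toy example; real sets come from the local database
+\end{verbatim}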