I have been implementing a microarray data management application using Oracle APEX, a database-centric rapid application builder. One task was to calculate a histogram of measurement data and show it in a chart. The histogram shows the distribution of values and can reveal problems in the data. The user selects one or more samples of interest, and the application shows the histogram chart. The problem with this is performance: one microarray sample contains approximately 20,000 measurements of gene expression levels, and the user may select several samples, so the total number of rows holding the measurements can run into the millions.
The first step is to create the query that calculates the data for the chart. Oracle has an analytic function called dense_rank, which computes the rank of a row within an ordered group of rows. In this case, the ordering comes from the rounded measurement value. The rank serves as the bin number: the rows are grouped by bin and a row count is calculated for each bin. The result is the histogram. To limit the number of rows returned, the query combines buckets 150 and above into one, since APEX will truncate the result if there are more rows than the chart is configured for.
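The original query is not reproduced here, but the binning idea described above can be sketched in a self-contained way. The following uses Python's sqlite3 (SQLite 3.25+ also supports DENSE_RANK) with toy data, and caps the bucket at 3 instead of 150 to suit the small sample; SQLite's two-argument MIN stands in for Oracle's LEAST.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (value REAL)")
# Toy data standing in for gene expression measurements
conn.executemany("INSERT INTO measurement VALUES (?)",
                 [(v,) for v in (0.4, 0.6, 1.2, 1.4, 2.1, 2.2, 9.9)])

# Bin = dense rank of the rounded value; buckets at or above the cap
# are combined into one. The cap is 3 here instead of 150.
histogram = conn.execute("""
    WITH binned AS (
        SELECT DENSE_RANK() OVER (ORDER BY ROUND(value)) AS bin
        FROM measurement
    )
    SELECT MIN(bin, 3) AS bucket, COUNT(*) AS cnt
    FROM binned
    GROUP BY MIN(bin, 3)
    ORDER BY bucket
""").fetchall()
print(histogram)  # each row: (bucket, row count)
```

The rounded values 0, 1, 1, 1, 2, 2, 10 receive dense ranks 1, 2, 2, 2, 3, 3, 4; capping at 3 folds the last bucket into bucket 3, yielding counts 1, 3 and 3.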
The query is executed against all measurements (about 12 million rows) to get an upper bound for the execution time:
Submitted by Jussi Volanen on Wed, 09/08/2010 - 17:01
I was reading a blog entry by Jan Aerts in which he uses MongoDB to calculate statistics of SNPs from the 1000 Genomes project in different populations: CEU (European descent), YRI (African) and JPTCHB (Asian). The question was how many SNPs the populations have in common. Jan Aerts used Ruby and MongoDB to answer that question and reported an execution time of about 55 minutes on his laptop. That is unreasonably long for such a smallish dataset; after all, there are only about 30 million SNP entries, or 800 MB of data. I will demonstrate here how the same thing can be done in about 3 minutes, including data loading.
So, I decided to try the same exercise using Oracle XE, the free (and limited) edition of the Oracle database. The most important limitation here is memory, as Oracle XE can use only 1 GB in total. I gave most of it to the PGA (Program Global Area), which is used for sorting and hash operations. Since I am the only user of this database, I switched the memory management of work areas to manual mode and allocated more memory for hash and sort operations:
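The exact statements are not shown at this point; as a sketch, session-level work-area settings in Oracle look along these lines (the sizes below are illustrative assumptions, not the values used in the post):

```sql
-- Switch work-area sizing from automatic to manual for this session
ALTER SESSION SET workarea_size_policy = MANUAL;
-- Illustrative sizes only: give sort and hash operations larger work areas
ALTER SESSION SET sort_area_size = 268435456;   -- 256 MB
ALTER SESSION SET hash_area_size = 268435456;   -- 256 MB
```

With workarea_size_policy set to MANUAL, Oracle sizes sort and hash work areas from these parameters instead of managing them automatically within the PGA target.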
Submitted by Jussi Volanen on Sun, 04/11/2010 - 14:23