Wednesday, February 26, 2014

Taking a Random Sample on Amazon Redshift

Recently, I was approached by Vicky, whom I'm working with at a client, to help with a particular problem.  She wanted to calculate page view summaries for a random sample of visitors from a table containing about a billion page views.  This is a common problem, especially as data gets larger and larger.  Note that the sample itself is based on visitors, so a simple random sample of page views is not sufficient.  We needed a sample of visitors and then all the pages for each visitor.

Along the way, I learned some interesting things about Redshift, taking random samples, and working with parallel and columnar databases.

For those not familiar with it, Redshift is an Amazon cloud data store that uses ParAccel, a parallel columnar database based on an older version of Postgres.  Postgres is a standard-enough relational database, used by several database vendors as the basis of their products.

Columnar databases have interesting performance characteristics, because the database stores each column separately from other columns.  Although generally bad performance-wise for ACID-compliant transactions (if you don't know what ACID is, then you don't need to know), columnar databases are good for analysis.

However, your intuition about how things work may not apply.  A seemingly simple query such as this:

select *
from PageViews
limit 10;

takes a relatively long time (several minutes) because all the columns have to be read independently.  On the other hand, a query such as:

select min(BrowserId), max(BrowserId)
from PageViews;

goes quite fast (a few seconds), because only one column has to be read into memory.  The more columns the query reads, the slower it is -- other things being equal.
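
To illustrate, a query that reads two columns should take noticeably longer than the min/max query above, because twice as much data has to come off disk.  Something like this sketch (ViewDateTime is a hypothetical column name, standing in for whatever timestamp column the real table has):

select min(BrowserId), max(BrowserId),
       min(ViewDateTime), max(ViewDateTime)
from PageViews;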

Back to the random sample.  A typical way of getting this type of random sample is to first find the reduced set of visitors and then join them back to the full page views.   This sounds cumbersome, but the strategy actually works well on many databases.  Applied to the query we were working with, the resulting query looks something like:

select pv.BrowserId
from (select distinct BrowserId
      from PageViews
      order by random()
      limit 100000
     ) list join
     PageViews pv
     on list.BrowserId = pv.BrowserId
group by pv.BrowserId;

This is a reasonable and standard approach to reduce the processing overhead.  The subquery list produces all the BrowserIds and then sorts them randomly (courtesy of the random() function).  The limit clause then takes a sample of one hundred thousand (out of many tens of millions).  The join would normally use an indexed key, so it should go pretty fast.  On Redshift, the subquery to get list performs relatively well.  But the entire query did not finish (our queries time out after 15-30 minutes).  We experimented with several variations, to no avail.

What finally worked?  Well, a much simpler query, and this surprised us.  The following returned in just a few minutes:

select BrowserId
from PageViews pv
group by BrowserId
order by random()
limit 100000;

In other words, doing the full aggregation on all the data and then doing the sorting is actually faster than trying to speed up the aggregation by working on a subset of the data.

I've been working with parallel databases for over twenty years.  I understand why this works better than trying to first reduce the size of the data.  Nevertheless, I am surprised.  My intuition about what works well in databases can be inverted when using parallel and columnar databases.

One of Vicky's requirements was for a repeatable random sample.  That means that we can get exactly the same sample when running the same query again.  The random() function does not provide repeatability.  In theory, setting the seed should.  In practice, this did not seem to work.  I suspect that aspects of load balancing in the parallel environment cause problems.
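
For reference, pinning the seed in Postgres looks something like the sketch below (Redshift exposes a similar SET SEED command).  In principle this should make random() deterministic within a session; as noted, it did not seem to give us repeatable samples:

select setseed(0.25);   -- fix the seed for this session's random()

select BrowserId
from PageViews pv
group by BrowserId
order by random()
limit 100000;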

Fortunately, Postgres supports the md5() function.  This is a hash function that converts a perfectly readable string into a long string of hexadecimal digits.  These digits have the property that two similar strings produce very different results, so this is a good way to randomize strings.  It is not perfect, because two BrowserIds could have the same hash value, so they would always be included or excluded together.  But we don't need perfection; we are not trying to land the little Curiosity rover in a small landing zone on a planet tens of millions of miles away.
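
As a quick illustration of the scrambling, hashing two nearly identical strings yields thoroughly different hex strings (the BrowserId values here are made up):

select md5('seed' || 'browser-0001') as hash1,
       md5('seed' || 'browser-0002') as hash2;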

The final form of the query was essentially:

select BrowserId
from PageViews pv
group by BrowserId
order by md5('seed' || BrowserId)
limit 100000;

The constant "seed" allows us to get different, repeatable sample when necessary.  And Vicky can extract her sample in just a few minutes, whenever she wants to.