This post outlines several performance improvements for sybil that allow it to scale to accommodate large tables with 100mm+ records.

Status: this post is still being written

Background

In early 2020, sybil was tested as a potential backend for jaeger tracing by an external party. The existing jaeger databases either do not support dynamic queries or run them slowly. The idea was to use sybil as an index on top of the spans and query badger DB to retrieve the binary blobs for a full trace.

With some work, sybil was set up and tested in a high volume environment, where it was able to outperform a 6 machine scylla DB cluster on the same data, both in terms of db size and querying speed.

The initial testing with sybil exposed several issues that required hacks and workarounds for sybil to perform properly, but over time sybil was upgraded and the hacks are no longer necessary. This post details some of the performance issues that were run into and the fixes for them.

Digest Path Fixes

Reduction in number of FS calls

During digestion, sybil was performing too many filesystem calls via lstat(). Reducing the lstat calls sped up the digestion path: before the fix, digestion slowed down linearly with the number of blocks in the table; after the fix, the digestion runtime no longer grows with the number of blocks.
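
As a rough sketch (not sybil's actual code), the idea is to rely on a single directory read rather than an lstat() per block. In Go, os.ReadDir already carries the type information needed to tell block directories apart, so no extra per-entry stat call is required:

```go
package main

import (
	"fmt"
	"os"
)

// listBlocks returns the block directories inside a table directory.
// Instead of stat'ing every entry individually, it relies on one ReadDir
// call and the type bits it already returns, avoiding one lstat() per block.
func listBlocks(tableDir string) ([]string, error) {
	entries, err := os.ReadDir(tableDir)
	if err != nil {
		return nil, err
	}

	blocks := make([]string, 0, len(entries))
	for _, e := range entries {
		// DirEntry.IsDir() uses the type information from readdir,
		// so no additional lstat call is needed for each block.
		if e.IsDir() {
			blocks = append(blocks, e.Name())
		}
	}
	return blocks, nil
}

func main() {
	blocks, err := listBlocks("./db/mytable")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found", len(blocks), "blocks")
}
```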

Query Path Fixes

Select subset of columns in samples query

Adding the ability to specify a subset of columns makes the samples query run faster, especially when we are looking for a small number of samples in the whole table.
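
A minimal sketch of the idea, with made-up types rather than sybil's own: when the caller names the columns it wants, the sample loop only decodes those columns and never touches the rest.

```go
package main

import "fmt"

// Block is a simplified stand-in for an on-disk block with per-column data.
type Block struct {
	Name    string
	Columns map[string][]int64 // column name -> values (illustrative only)
}

// sampleRecords pulls up to `limit` samples from a block, decoding only the
// columns the caller asked for. Skipping unrequested columns is the key win:
// their data never has to be read or decoded at all.
func sampleRecords(b *Block, wanted []string, limit int) []map[string]int64 {
	samples := make([]map[string]int64, 0, limit)
	for i := 0; i < limit; i++ {
		rec := map[string]int64{}
		for _, col := range wanted {
			vals, ok := b.Columns[col]
			if !ok || i >= len(vals) {
				continue
			}
			rec[col] = vals[i]
		}
		samples = append(samples, rec)
	}
	return samples
}

func main() {
	b := &Block{
		Name: "block0001",
		Columns: map[string][]int64{
			"time":    {1, 2, 3},
			"latency": {10, 20, 30},
			"status":  {200, 200, 500},
		},
	}
	// Only "time" and "status" are decoded; "latency" is never touched.
	fmt.Println(sampleRecords(b, []string{"time", "status"}, 2))
}
```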

Fast memory recycling

When using slab allocation, records need to be scrubbed before re-use. With fast memory recycling, we only scrub the populated flags for each record and don't worry about the stale values left in the actual fields: when a field gets re-populated, we trust the new value to overwrite the previous one.
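
Here is a small illustrative sketch (the field names are hypothetical, not sybil's): recycling clears only the populated flags, and readers consult those flags, so stale values are never observed even though they remain in memory.

```go
package main

import "fmt"

// Record holds column values plus a parallel "populated" bitmap, similar in
// spirit to per-record population flags (names here are hypothetical).
type Record struct {
	Ints      []int64
	Populated []bool
}

// recycle prepares a record for re-use. Instead of zeroing every value slot,
// it only clears the populated flags; stale values stay in place and are
// simply overwritten the next time the field is populated.
func (r *Record) recycle() {
	for i := range r.Populated {
		r.Populated[i] = false
	}
}

func (r *Record) set(col int, v int64) {
	r.Ints[col] = v
	r.Populated[col] = true
}

func (r *Record) get(col int) (int64, bool) {
	if !r.Populated[col] {
		return 0, false // stale value is ignored because the flag is unset
	}
	return r.Ints[col], true
}

func main() {
	r := &Record{Ints: make([]int64, 4), Populated: make([]bool, 4)}
	r.set(0, 42)
	r.recycle()
	_, ok := r.get(0)
	fmt.Println("value visible after recycle?", ok) // false: flag was cleared
	r.set(0, 7)
	v, _ := r.get(0)
	fmt.Println("after re-populating:", v) // 7 overwrote the stale 42
}
```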

Filter Skipping

In addition to the existing less-than and greater-than int filter skipping, int equality skipping was added: if a block does not contain a particular value, equality filters on that value can skip the whole block.
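
A sketch of the skipping check, assuming per-block min/max metadata of the kind already used for the less-than and greater-than skipping (the types here are illustrative, not sybil's):

```go
package main

import "fmt"

// BlockInfo is a stand-in for per-block column metadata, here just the
// min and max of an int column (hypothetical shape).
type BlockInfo struct {
	Name string
	Min  int64
	Max  int64
}

// canSkip reports whether a block can be skipped for an equality filter:
// if the target value falls outside the block's [Min, Max] range, no record
// in the block can match, so the block never needs to be read.
func canSkip(b BlockInfo, target int64) bool {
	return target < b.Min || target > b.Max
}

func main() {
	blocks := []BlockInfo{
		{Name: "block0001", Min: 100, Max: 199},
		{Name: "block0002", Min: 200, Max: 299},
	}
	target := int64(250) // e.g. an equality filter: status = 250
	for _, b := range blocks {
		if canSkip(b, target) {
			fmt.Println("skipping", b.Name)
			continue
		}
		fmt.Println("scanning", b.Name)
	}
}
```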

Smaller Keytable

The largest performance gain for large and wide tables came from reducing the size of the records created during a query. Previously, sybil records were allocated to hold every value in the table's schema, so if a table has 10 columns, every record would allocate space for all 10. This is not a huge problem when tables are smaller, but when they start approaching 100mm records, the wasted allocation becomes a noticeable performance problem.

The fix is to create records that are dynamically sized. Some work was required to make this possible, but once implemented, the query time of a table is strictly proportional to the number of records processed by the query, whether it's 100k records, 1m records or 100mm records.
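
A rough sketch of the idea with hypothetical names: a per-query keytable assigns compact slots only to the columns the query touches, so record allocation scales with the query rather than with the table's width.

```go
package main

import "fmt"

// queryKeyTable maps the handful of columns a query touches to compact slot
// ids, so each record only allocates space for those columns rather than for
// every column in the table (names here are illustrative, not sybil's own).
type queryKeyTable struct {
	slots map[string]int
}

func newQueryKeyTable(cols []string) *queryKeyTable {
	kt := &queryKeyTable{slots: make(map[string]int, len(cols))}
	for _, c := range cols {
		kt.slots[c] = len(kt.slots)
	}
	return kt
}

// newRecord allocates a record sized to the query, not to the table: a table
// with hundreds of columns still only costs len(kt.slots) slots per record.
func (kt *queryKeyTable) newRecord() []int64 {
	return make([]int64, len(kt.slots))
}

func (kt *queryKeyTable) set(rec []int64, col string, v int64) {
	if slot, ok := kt.slots[col]; ok {
		rec[slot] = v
	}
}

func main() {
	// The table might have 50 columns, but this query only touches two.
	kt := newQueryKeyTable([]string{"time", "latency"})
	rec := kt.newRecord()
	kt.set(rec, "latency", 123)
	fmt.Println("record size:", len(rec), "values:", rec)
}
```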

Changelog

10-10-2020