Bigtable: A Distributed Storage System for Structured Data [pdf]
Big Table is Google's distributed, scalable, semi-structured datastore from 2006. It is interesting because it is partition tolerant, consistent, scalable, and highly performant *for its time* (it scales less than linearly), and it gives clients a lot of flexibility to control data locality through schema choices and to dynamically control whether their data is served from memory or disk. It is designed for systems with high read volumes and low, batched write volumes, is well integrated with Google's GFS + MapReduce ecosystem, and is flexible enough to serve the needs of varying Google products.
Usage
Big Table offers users tables. For each table, it maps a row key, a column key, and a timestamp to an arbitrary string, and it offers row-atomic set and delete operations on cell values.
(row:string, column:string, time:int64) → string
Example: the row key is com.cnn.www; column keys are “contents:” or “anchor:cnnsi.com”. The “contents:” column stores the page contents, while the “anchor:” columns store the anchor text of links that reference the page.
Clients can also group columns that are usually accessed together by creating a column family. Columns within the same family will usually be stored in the same SSTable, which ensures locality.
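As a rough mental model (not the real API), this data model can be sketched as a nested map keyed by row, then column ("family:qualifier"), then timestamp; the class and method names below are made up for illustration.

```python
from collections import defaultdict

class Table:
    """Toy sketch of the Big Table data model:
    (row:string, column:string, time:int64) -> string."""

    def __init__(self):
        # row key -> column key ("family:qualifier") -> timestamp -> value
        self._rows = defaultdict(lambda: defaultdict(dict))

    def set(self, row, column, timestamp, value):
        # Real Big Table makes all mutations under a single row key atomic.
        self._rows[row][column][timestamp] = value

    def delete(self, row, column):
        self._rows[row].pop(column, None)

    def get(self, row, column):
        versions = self._rows[row].get(column, {})
        if not versions:
            return None
        # Return the most recent version by timestamp.
        return versions[max(versions)]

# Example mirroring the webtable row from the paper:
t = Table()
t.set("com.cnn.www", "contents:", 3, "<html>...</html>")
t.set("com.cnn.www", "anchor:cnnsi.com", 9, "CNN")
print(t.get("com.cnn.www", "anchor:cnnsi.com"))  # -> "CNN"
```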
How is data stored?
Data for each table is stored in tablets. Each tablet is assigned a row range and stores its data as SSTables, with a separate SSTable per column family. SSTables provide a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings.
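As a minimal sketch of reading from an SSTable-style structure, assume a simple layout of sorted key/value blocks plus a block index; the layout and class here are illustrative, not the actual on-disk format.

```python
import bisect

class SSTable:
    """Immutable, sorted key -> value map, split into fixed-size blocks.
    The block index (first key of each block) is small enough to keep in memory."""

    def __init__(self, sorted_items, block_size=4):
        # Pretend each "block" is a small list of (key, value) pairs on disk.
        self.blocks = [sorted_items[i:i + block_size]
                       for i in range(0, len(sorted_items), block_size)]
        # In-memory index: the first key of every block.
        self.index = [block[0][0] for block in self.blocks]

    def get(self, key):
        if not self.index or key < self.index[0]:
            return None
        # Binary search the in-memory index, then read a single block.
        i = bisect.bisect_right(self.index, key) - 1
        for k, v in self.blocks[i]:
            if k == key:
                return v
        return None

sst = SSTable(sorted([("a", "1"), ("c", "2"), ("m", "3"), ("z", "4")]))
print(sst.get("m"))  # -> "3"
```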
Indexes are usually kept in memory, and an entire SSTable can also be kept in memory. Each user table consists of one or more tablets, and each tablet consists of one or more (usually more) SSTables. Because an SSTable is created per column family, column families can exploit the idea of locality groups: columns that belong to the same SSTable are stored together in that SSTable's data file.

Tablet locations are stored in a three-level, B+-tree-like structure: a Chubby file points to a root tablet, which points to METADATA tablets, which in turn point to the actual data tablets forming user tables. The METADATA tablets make up the METADATA table, which stores tablet locations keyed by table identifier + end row. Tablet locations are usually cached by clients.
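A hedged sketch of how a client might resolve a row key to a tablet location through the three-level hierarchy, with client-side caching. The `read_chubby` and `lookup` callables are hypothetical stand-ins for the Chubby read and tablet-server RPCs, not real APIs.

```python
class LocationClient:
    """Illustrative three-level lookup: Chubby file -> root tablet ->
    METADATA tablet -> user tablet. The injected callables are stand-ins."""

    def __init__(self, read_chubby, lookup):
        self.read_chubby = read_chubby
        self.lookup = lookup
        self.cache = {}  # (table, row) -> tablet location

    def locate(self, table, row):
        key = (table, row)
        if key not in self.cache:
            # Level 1: a Chubby file names the root tablet's location.
            root = self.read_chubby("/bigtable/root-tablet-location")
            # Level 2: the root tablet maps (table, row) to a METADATA tablet.
            meta = self.lookup(root, "METADATA", (table, row))
            # Level 3: the METADATA tablet maps (table, row) to the user tablet.
            self.cache[key] = self.lookup(meta, table, row)
        return self.cache[key]

# Toy wiring: every lookup just returns a canned next hop.
hops = {"root": "metadata-tablet-7", "metadata-tablet-7": "tabletserver-42"}
client = LocationClient(
    read_chubby=lambda path: "root",
    lookup=lambda tablet, table, row: hops[tablet],
)
print(client.locate("webtable", "com.cnn.www"))  # -> "tabletserver-42"
```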
How is data served?
Tablets are served by tablet servers. One tablet server can serve multiple tablets. Each tablet is served by one tablet server only.
Writes
Writes use write-ahead logging: redo records are written to a per-tablet commit log stored in GFS. After a write hits the commit log, it is applied to an in-memory memtable (a sorted buffer). When the memtable gets too big, it is written out to GFS as a new SSTable; this operation is called a *minor compaction*.
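A minimal sketch of that write path (commit log append, then memtable insert, then a minor compaction once the memtable grows too large). The in-memory lists here are stand-ins for the GFS-backed log and SSTable files.

```python
class TabletWriter:
    """Toy write path: append a redo record to the commit log, apply the
    mutation to the memtable, and flush to a new "SSTable" when the memtable
    gets too big (a minor compaction)."""

    def __init__(self, memtable_limit=2):
        self.commit_log = []      # stand-in for the per-tablet log in GFS
        self.memtable = {}        # key -> value
        self.sstables = []        # list of frozen, sorted key/value lists
        self.memtable_limit = memtable_limit

    def write(self, key, value):
        self.commit_log.append((key, value))   # 1. write-ahead log (redo record)
        self.memtable[key] = value             # 2. apply to the memtable
        if len(self.memtable) >= self.memtable_limit:
            self.minor_compaction()            # 3. flush if it got too big

    def minor_compaction(self):
        # Freeze the memtable as a new immutable, sorted SSTable.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}
        # Log entries covered by the new SSTable can now be discarded.
        self.commit_log = []

w = TabletWriter()
w.write("com.cnn.www/contents:", "<html>...</html>")
w.write("com.cnn.www/anchor:cnnsi.com", "CNN")   # triggers a minor compaction
print(len(w.sstables), w.memtable)               # -> 1 {}
```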
Reads
Read operations are executed on a merged view of the memtable and SSTables. This is efficient because both memtables and SSTables are lexicographically ordered.
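Because every source is sorted the same way, the merged view can be produced with a standard k-way merge. A rough sketch, where sources listed earlier (newer data) win on duplicate keys:

```python
import heapq

def merged_read(memtable_items, *sstables):
    """Merged view of a sorted memtable and sorted SSTables.
    On duplicate keys, the earliest source (the newest data) wins."""
    sources = [memtable_items] + list(sstables)
    # Tag each entry with its source rank so newer sources sort first on ties.
    streams = [
        [((key, rank), value) for key, value in source]
        for rank, source in enumerate(sources)
    ]
    last_key = None
    for (key, _rank), value in heapq.merge(*streams):
        if key != last_key:          # skip older versions of the same key
            yield key, value
            last_key = key

memtable = [("b", "new-b")]
sst1 = [("a", "1"), ("b", "old-b")]
sst2 = [("c", "3")]
print(list(merged_read(memtable, sst1, sst2)))
# -> [('a', '1'), ('b', 'new-b'), ('c', '3')]
```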
Compactions
Compactions merge memtables and SSTables to form new SSTables. This coalesces writes and makes reads more efficient: the merge work is persisted to disk once and does not have to be redone for subsequent reads.
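A sketch of a merging compaction under the same toy format: k-way merge all inputs, keep only the newest version of each key, and drop deleted entries so the output is a single clean SSTable. The `DELETED` tombstone sentinel here is made up for illustration.

```python
import heapq

DELETED = object()   # illustrative deletion marker (tombstone)

def major_compaction(*sorted_sources):
    """Merge sorted sources (newest first) into one new SSTable, keeping only
    the newest version of each key and dropping deleted entries entirely."""
    streams = [
        [((key, rank), value) for key, value in source]
        for rank, source in enumerate(sorted_sources)
    ]
    output, last_key = [], None
    for (key, _rank), value in heapq.merge(*streams):
        if key == last_key:
            continue                 # an older, superseded version
        last_key = key
        if value is not DELETED:     # tombstones are not carried forward
            output.append((key, value))
    return output

memtable = [("a", DELETED), ("d", "4")]
old_sst = [("a", "1"), ("b", "2"), ("c", "3")]
print(major_compaction(memtable, old_sst))
# -> [('b', '2'), ('c', '3'), ('d', '4')]
```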
How are scalability and consistency guarantees met?
Assignment of tablets to tablet servers is managed by a master server. The master is responsible for rebalancing tablet server load, detecting tablet server membership changes, and assigning tablets to servers.
Big Table uses Chubby as a distributed lock service to coordinate tablet assignment. On startup, a tablet server attempts to acquire an exclusive lock on a uniquely named Chubby file, and it must periodically renew the lock lease. If a tablet server loses its lock, the master will reassign its tablets and delete its Chubby file.
Each tablet server knows of (and tracks) which tablets are assigned to it. The master can also get this information (as well as which tablets are unassigned) by scanning the METADATA table, which tracks the locations of all tablets. When a tablet grows too large, its tablet server will initiate a tablet split.
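A very rough sketch of the master's reassignment check, assuming a hypothetical Chubby-like client exposing an `exists()` call; the real protocol has more steps (for example, the master first tries to acquire the server's lock itself before declaring it dead).

```python
class FakeChubby:
    """Stand-in for Chubby lock files: just a set of existing paths."""
    def __init__(self, paths):
        self.paths = set(paths)
    def exists(self, path):
        return path in self.paths

def check_tablet_servers(chubby, assignments, unassigned):
    """Toy master check: if a tablet server's Chubby file is gone (it lost or
    gave up its exclusive lock), reclaim its tablets for reassignment."""
    for server, tablets in list(assignments.items()):
        if not chubby.exists(f"/bigtable/servers/{server}"):
            unassigned.extend(tablets)   # server is considered dead
            del assignments[server]

def assign_tablets(assignments, unassigned, live_servers):
    """Hand each unassigned tablet to some live server (round-robin here)."""
    for i, tablet in enumerate(unassigned):
        server = live_servers[i % len(live_servers)]
        assignments.setdefault(server, []).append(tablet)
    unassigned.clear()

assignments = {"ts1": ["tablet-a", "tablet-b"], "ts2": ["tablet-c"]}
unassigned = []
chubby = FakeChubby({"/bigtable/servers/ts2"})   # ts1's lock file is gone

check_tablet_servers(chubby, assignments, unassigned)
assign_tablets(assignments, unassigned, live_servers=["ts2"])
print(assignments)   # -> {'ts2': ['tablet-c', 'tablet-a', 'tablet-b']}
```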
Big Table scales less than linearly. It scales well for sequential reads, sequential writes, and random writes. It scales extremely poorly for random reads, since a random read from disk always causes a 64 KB block fetch from GFS. This can be mitigated by configuring the particular table / column family so that its SSTables are stored in memory.
Failure Modes
- Big Table is dependent on Chubby. If Chubby dies, Big Table will become unavailable.
- Tablet movements (tablet server reassignment) will cause the tablet to become temporarily unavailable for up to 1 second.
Key Benefits
- The semi-structured data model provides a bit more than a pure key-value store.
- Locality groups realize similar compression and disk read performance benefits observed for other systems that organize data on disk using column-based rather than row-based storage.
- Provides a lower-level read and write interface and is designed to support many thousands of such operations per second per server.
Questions
A few follow-up questions we couldn't answer during our discussion.
- Can tablet servers exploit GFS locality? The way GFS is described in the paper, it seems like a black box. Can a tablet server configure things so that the tablets it serves are geographically / network-wise closer?
- How can we cheaply access S3 or exploit S3 locality? (extending to Amazon S3)
- How is the way Chubby is used by Big Table similar or different to the way Kafka or Cassandra uses Zookeeper?
Unless otherwise specified, the source of this information is the original Big Table paper from OSDI 2006 and our discussion notes here.
Additional Reading
http://distributeddatastore.blogspot.com/2013/08/cassandra-sstable-storage-format.html