MySQL is one of the most widely deployed and relied-upon open source databases. With a small amount of instrumentation, you can reliably track down the source of any slowness or odd behavior in just a few clicks: for example, identifying the user responsible for the most lock time held.
By consuming MySQL logs and parsing them into structured data, Honeycomb makes it possible to answer deep questions about database operation instantly.
The honeytail agent captures logs, structures them as JSON, and streams the data into Honeycomb as it's written; if you're on RDS, use the RDS connector instead. For very high-traffic MySQL, you'll want to use a tcpdump sniffer, and probably also apply dynamic sampling. We also recommend running a cron job on the node (or from a utility node) to submit a process event every minute or so, capturing details like queue length, InnoDB process stats, and index counters: things you can only get out of a db shell.
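As a rough sketch of that cron job, the script below parses a few counters out of `SHOW GLOBAL STATUS` output and builds a flat event suitable for posting to Honeycomb's events API. The specific field names, dataset, and write key here are placeholders for illustration, not a prescribed schema:

```python
import json
import urllib.request

# Hypothetical helper: parse tab-separated key/value pairs (as produced by
# `mysql -B -e "SHOW GLOBAL STATUS"`) into a flat event dict, keeping only
# the counters we care about.
def build_process_event(status_output,
                        fields=("Threads_running", "Innodb_row_lock_waits", "Queries")):
    event = {}
    for line in status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] in fields:
            name, value = parts
            event[name] = int(value) if value.isdigit() else value
    return event

def send_event(event, dataset, write_key):
    # POST the event to the Honeycomb events API. Dataset name and write
    # key are placeholders you would supply from your own environment.
    req = urllib.request.Request(
        "https://api.honeycomb.io/1/events/" + dataset,
        data=json.dumps(event).encode("utf-8"),
        headers={"X-Honeycomb-Team": write_key,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

sample = "Threads_running\t4\nInnodb_row_lock_waits\t12\nUptime\t99999"
print(build_process_event(sample))
# → {'Threads_running': 4, 'Innodb_row_lock_waits': 12}
```

Run from cron every minute or so, this gives you a steady stream of per-node process events alongside the query-level data honeytail sends.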
You can also backfill old logs into Honeycomb to look at past data.
With MySQL events in Honeycomb, you can ask questions like:
- What is the sum of all lock times held, over time?
- Which locks are being held, and for how long?
- Which user is responsible for the most lock time held?
- What % of write time is being consumed by any given user, app, or shopping cart?
- What is the slowest query family, either in read time or write lock held?
- What are the 20 most commonly run normalized query families (avg, 95th, 99th, 99.9th, and 99.999th percentile, and MAX duration), ordered by the slowest raw query?
- What is the scan efficiency: how many rows are scanned by each normalized query family, relative to the number of rows in the table?
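Questions like the last two hinge on grouping raw queries into "normalized query families": structurally identical queries with literals stripped out. A minimal sketch of that normalization (real log parsers handle many more cases, such as IN-lists and comments) might look like:

```python
import re

# Minimal query normalization: replace string and numeric literals with "?"
# so structurally identical queries collapse into one family.
def normalize(query):
    q = re.sub(r"'(?:[^'\\]|\\.)*'", "?", query)   # quoted string literals
    q = re.sub(r"\b\d+(?:\.\d+)?\b", "?", q)       # numeric literals
    q = re.sub(r"\s+", " ", q).strip()             # collapse whitespace
    return q

print(normalize("SELECT * FROM carts WHERE user_id = 42 AND note = 'rush order'"))
# → SELECT * FROM carts WHERE user_id = ? AND note = ?
```

Grouping on the normalized string lets you compute per-family percentiles and rows-scanned ratios instead of staring at millions of unique raw queries.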