Cache Monitoring

Learn more about Sentry's Cache monitoring capabilities and how it can help improve your application's performance.

This feature is currently in Alpha. Alpha features are still in-progress and may have bugs. We recognize the irony.

A cache can be used to speed up data retrieval, thereby improving application performance: instead of querying a potentially slow data layer, your application retrieves the data, in the best case, directly from memory. Caching can speed up read-heavy workloads for applications like Q&A portals, gaming, media sharing, and social networking.

A well-functioning cache has a high hit rate. A cache hit means the requested data was present in the cache when fetched; a cache miss means it was not, and the data had to come from the underlying source. If you have performance monitoring enabled and your application uses caching, you can see how your caches are performing with Sentry.
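To make hits, misses, and hit rate concrete, here is a minimal sketch (not Sentry code, and not tied to any particular cache backend): a dict-backed cache that counts hits and misses so the hit rate can be computed the same way a monitoring tool would report it.

```python
# Minimal sketch: a dict-backed cache that tracks hits and misses
# so we can compute a hit rate. The class and key names are
# illustrative, not part of any real caching library.
class CountingCache:
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._store:
            self.hits += 1    # cache hit: data was present
            return self._store[key]
        self.misses += 1      # cache miss: caller falls back to the data layer
        return None

    def set(self, key, value):
        self._store[key] = value

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


cache = CountingCache()
cache.get("user:1")                   # miss: nothing cached yet
cache.set("user:1", {"name": "Ada"})  # populate on miss
cache.get("user:1")                   # hit
cache.get("user:1")                   # hit
print(f"hit rate: {cache.hit_rate:.0%}")  # hit rate: 67%
```

In a real application the miss branch is where the slow data-layer call happens, which is why a low hit rate tends to show up as higher endpoint latency.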

Sentry's cache monitoring provides insights into cache utilization and latency so that you can improve the performance of the endpoints that interact with caches.

Starting with the Cache page, you get an overview of the transactions within your application that are making at least one lookup against a cache. From there, you can dig into specific cache span operations by clicking a transaction and viewing its sample list.

Cache monitoring currently supports auto instrumentation for Django's cache framework. Other frameworks, such as the Node.js `redis` client, require opt-in or custom instrumentation.
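As a configuration sketch of what enabling this might look like in a Django app: the DSN below is a placeholder, and the `cache_spans` flag on the Django integration is an assumption that holds only for recent `sentry-sdk` releases, so check the documentation for your SDK version before relying on it.

```python
# Configuration sketch, not a drop-in: the DSN is a placeholder and
# `cache_spans=True` assumes a recent sentry-sdk release.
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # sample every transaction; lower this in production
    integrations=[
        DjangoIntegration(cache_spans=True),  # capture Django cache-framework spans
    ],
)
```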

If available, opt-in instrumentation is documented on an environment-by-environment basis as listed below:

Help improve this content
Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) or suggesting an update ("yeah, this would be better").