Memcarrot is a caching server fully compatible with the Memcached protocol, offering superior memory utilization (with memory overhead as low as 6 bytes per object), real-time data compression for keys and values, efficient handling of expired items, zero internal and external memory fragmentation, intelligent data tiering, and complete persistence support. It provides a cost-effective caching solution compared to Memcached or Redis.
## Quick Overview of Features
- SmartReal Compression - Server-side, real-time data compression (of both keys and values) with a compression algorithm that continuously adapts to the current workload. It is up to 5.4x more memory-efficient than client-side compression with Memcached. We have conducted a series of memory benchmark tests, which you can find in our membench repository.
- True Data Tiering - With only 11 bytes of in-memory metadata overhead (including expiration time), virtually any object can be efficiently stored on SSD. This makes it the most memory-efficient disk-based cache available today. Unlike Memcached extstore or Redis Enterprise, Memcarrot does not keep keys in RAM, significantly reducing RAM requirements to support the data index.
- Multitiering - Supports hybrid (RAM -> SSD) and tandem (RAM -> compressed RAM) configurations.
- Highly Configurable - Users can customize cache admission policies (important for SSD), promotion policies (from victim cache back to the parent cache), eviction policies, and throughput controllers. Additional customizable components include memory index formats, internal GC recycling selectors, data writers, and data readers.
- AI/ML Ready - Custom cache admission and eviction policies can leverage sophisticated machine learning models tailored to specific workloads.
- CacheGuard Protected - Combines a cache admission policy with a scan-resistant cache eviction algorithm, significantly reducing SSD wear and increasing longevity.
- Low SSD Device-Level Write Amplification (DLWA) and Cache-Level Write Amplification (CLWA) - With estimated DLWA of 1.1 at 75% SSD utilization and 1.8 at 100%, even a nearly full SSD does not incur significant DLWA.
- Low RAM Overhead for Cached Items - Overhead starts at 8 bytes per item, for both RAM- and SSD-resident data, including expiration support. The exact overhead depends on the index format used. Several index formats, both with and without expiration support, are provided out of the box.
- Low Meta Overhead in RAM - For example, managing 10M data items in Memcarrot requires less than 1MB of Java heap and less than 100MB of Java off-heap memory for metadata. Keeping index data for 5B objects in memory requires 55GB of RAM, roughly 11 bytes per object.
- Fragmentation-Free Storage Engine - No memory fragmentation and no slab calcification, so periodic server restarts to combat these problems (as with Redis and Memcached) are no longer required.
- Multiple Eviction Algorithms - Available out of the box, including Segmented LRU (default), LRU, and FIFO. Segmented LRU is a scan-resistant algorithm. Eviction policies are pluggable, allowing customers to implement their own.
- Scalability - Supports multiple terabytes of storage, up to 256TB, with only 11 bytes of RAM overhead per cached item for disk storage.
- Efficient Expired Item Eviction - Designed for applications requiring expiration support.
- Warm Restart - Allows cache data to survive a full server reboot. Data saving and loading are very fast, limited only by available disk I/O throughput (gigabytes per second).
- Memcached Support - Currently supports the text protocol only, including all data commands as well as cas, stats, and version. There is no support for Memcached-specific server commands (as they are not needed). We are working on improving compatibility with Memcached, so stay tuned.
- Carrot Cache Powered - See Carrot Cache for more information and additional features.
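The meta-overhead figures above are simple arithmetic: at roughly 11 bytes of in-memory metadata per object, the index for 5B objects fits in about 55GB of RAM. A minimal sketch of that estimate (the constant and helper name are illustrative, not part of Memcarrot's API):

```python
# Per-object in-memory metadata overhead, per the bullet above
# (includes expiration time). Illustrative constant, not an API.
PER_OBJECT_METADATA_BYTES = 11

def index_ram_bytes(num_objects: int) -> int:
    """Estimate RAM needed to hold the index for num_objects items."""
    return num_objects * PER_OBJECT_METADATA_BYTES

# 5 billion objects -> 55e9 bytes, i.e. 55 GB (decimal)
print(index_ram_bytes(5_000_000_000) / 1e9, "GB")
```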
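Because Memcarrot speaks the standard Memcached text protocol, any existing Memcached client works unchanged. The framing below follows the Memcached text protocol itself (`set <key> <flags> <exptime> <bytes>` and `get <key>`); the helper functions are an illustrative sketch, not Memcarrot-specific code:

```python
def build_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Frame a Memcached text-protocol 'set' command."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"  # server replies STORED\r\n

def build_get(key: str) -> bytes:
    """Frame a Memcached text-protocol 'get' command."""
    return f"get {key}\r\n".encode()

def parse_get_response(resp: bytes) -> dict:
    """Parse 'VALUE <key> <flags> <bytes>\r\n<data>\r\n...END\r\n' into {key: value}."""
    items = {}
    while resp.startswith(b"VALUE "):
        head, rest = resp.split(b"\r\n", 1)
        _, key, _flags, nbytes = head.split(b" ")[:4]
        n = int(nbytes)
        items[key.decode()] = rest[:n]
        resp = rest[n + 2:]  # skip the data block and its trailing \r\n
    return items
```

Sending `build_set(...)` over a TCP socket to the server's port and reading the reply is all a minimal client needs; the same bytes work against Memcached or Memcarrot.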