Understanding 3 Common Cache Problems in Redis | by Dwen | May, 2022

Alongside its convenience, Redis can also cause many problems, largely due to improper use


Redis is a caching solution we use heavily in our daily work. A cache can improve application performance and greatly reduce pressure on the database.

But used improperly, it can also cause many problems. The three classic ones are:

  1. Cache Penetration
  2. Cache Breakdown
  3. Cache Avalanche

Do the names sound baffling? Don’t worry, you will understand them after reading this.

Cache Penetration

Cache penetration occurs when a user queries for data that does not exist at all.

Following the usual cache-aside flow, the application first queries the Redis cache and finds nothing, then falls through to the database and finds nothing there either, so the query ends in failure.
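The cache-aside read path described above can be sketched as follows, with plain in-memory maps standing in for Redis and the database (all class and variable names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CacheAsideDemo {
    // Hypothetical in-memory stand-ins for Redis and the database
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> db = new HashMap<>();

    static Optional<String> lookup(String key) {
        // 1. Query the cache first
        String cached = cache.get(key);
        if (cached != null) return Optional.of(cached);
        // 2. On a miss, fall through to the database
        String fromDb = db.get(key);
        // Neither cache nor DB has it: the query fails
        if (fromDb == null) return Optional.empty();
        // 3. Write the value back to the cache for next time
        cache.put(key, fromDb);
        return Optional.of(fromDb);
    }
}
```

Every lookup for a non-existent key takes the full path down to the database, which is exactly what an attacker exploits.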

When there are a large number of such requests, or a malicious attacker deliberately queries non-existent data, all of these requests hit the database directly, putting it under heavy pressure or even bringing it down.

Take an e-commerce site such as Amazon as an example: products are looked up by product id. If an attacker floods the system with ids that do not exist, every one of those requests reaches the database.

Let’s take a look at the solutions:

Solution 1: Cache empty objects.

Modify the write-back logic: for keys that exist in neither the cache nor the database, cache an empty object anyway and give it an expiration time.

As shown in the figure above, when the database query returns nothing, an empty object (key, null) is still cached under the queried key. But this approach has several problems:

  • A lookup for such a key now returns a null object. The client usually does not want this placeholder, so the result must be checked for null and handled appropriately before being returned.
  • It occupies a lot of Redis memory: because empty objects are cached, Redis spends memory storing these empty keys.
  • If real data for the key is written to the database after the empty object was cached, reads keep returning null until the entry expires, so there can be a short window of data inconsistency.
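A minimal sketch of the empty-object approach, again with in-memory maps standing in for Redis and the database; the NULL_SENTINEL marker and the TTL bookkeeping are my own illustrative choices:

```java
import java.util.HashMap;
import java.util.Map;

public class NullCachingDemo {
    static final String NULL_SENTINEL = "__NULL__";            // hypothetical marker for "no such row"
    static final Map<String, String> cache = new HashMap<>();  // stand-in for Redis
    static final Map<String, Long> expireAt = new HashMap<>(); // stand-in for per-key TTL
    static final Map<String, String> db = new HashMap<>();     // stand-in for the database
    static final long NULL_TTL_MS = 60_000;                    // keep empty objects only briefly

    static String lookup(String key) {
        // Evict the entry if its TTL has passed
        Long deadline = expireAt.get(key);
        if (deadline != null && System.currentTimeMillis() > deadline) {
            cache.remove(key);
            expireAt.remove(key);
        }
        String cached = cache.get(key);
        if (cached != null) {
            // Unwrap the placeholder so the client sees a plain null
            return NULL_SENTINEL.equals(cached) ? null : cached;
        }
        String fromDb = db.get(key);
        if (fromDb == null) {
            // Key exists in neither cache nor DB: cache an empty object with a short TTL
            cache.put(key, NULL_SENTINEL);
            expireAt.put(key, System.currentTimeMillis() + NULL_TTL_MS);
            return null;
        }
        cache.put(key, fromDb);
        return fromDb;
    }
}
```

Note how the sentinel must be unwrapped on every read, which is exactly the first problem listed above.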

Solution 2: Bloom Filter

Bloom filter is a binary vector, binary array, or bit array.

Because it is a binary vector, each bit of it can only store 0 or 1.

When a piece of data is added to the Bloom filter, the original data itself is not stored. Instead, multiple hash functions each produce a hash value, and the bit at the array index given by each hash value is set to 1.

So there is no question of fetching data back out of a Bloom filter: the raw data is never stored in it at all.

For example, if the three hash functions applied to the string Hydra produce the indices 1, 3, and 6, those three bits are set to 1, and likewise for other data. What can we do with such a structure? We can use this bit vector to determine whether a piece of data exists.

The specific process:

  • Calculate the data’s multiple hash values.
  • Check the bits at those indices: if they are all 1, the data may exist.
  • If one or more of those bits is not 1, the data definitely does not exist.
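The process above can be sketched as a tiny single-machine Bloom filter over a java.util.BitSet; the double-hashing scheme used to derive the k indices is one common choice, not the only one:

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public SimpleBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive the i-th index from two base hashes (double hashing)
    private int index(String value, int i) {
        int h1 = value.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16);
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String value) {
        // Set the bit at every derived index to 1; the value itself is never stored
        for (int i = 0; i < hashCount; i++) bits.set(index(value, i));
    }

    public boolean mightContain(String value) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(index(value, i))) return false; // any 0 bit => definitely absent
        }
        return true; // all bits 1 => possibly present
    }
}
```

Usage: `add("Hydra")` sets three bits; `mightContain` answers "definitely no" or "maybe yes", never a guaranteed yes.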

It should be noted that a Bloom filter can misjudge: it produces false positives.

As more data is stored, more bits get set to 1. It can then happen that when querying data that does not exist, all of its bits have already been set to 1 by other data, i.e., hash collisions occur.

Therefore, a Bloom filter can only tell you that data may exist; it can never be 100% certain that it does.

Google’s Guava library provides a single-machine Bloom filter implementation. Let’s look at how to use it.

First, introduce the Maven dependency:
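The article’s dependency snippet is missing; the Guava coordinates look like this (the version is illustrative, use a current release):

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>31.1-jre</version>
</dependency>
```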

Simulate inserting 1,000,000 items into the Bloom filter with a given false positive rate, then query it with data that was never inserted:
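The article’s code block is missing here; a test of the kind it describes might look roughly like this with Guava’s BloomFilter API (class and variable names are my own; run it once per false positive rate):

```java
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class GuavaBloomDemo {
    public static void main(String[] args) {
        double fpp = 0.01; // also try 0.001
        BloomFilter<Integer> filter =
                BloomFilter.create(Funnels.integerFunnel(), 1_000_000, fpp);
        // Insert 1,000,000 values
        for (int i = 0; i < 1_000_000; i++) {
            filter.put(i);
        }
        // Probe with 1,000,000 values that were never inserted, counting false positives
        int errors = 0;
        for (int i = 1_000_000; i < 2_000_000; i++) {
            if (filter.mightContain(i)) {
                errors++;
            }
        }
        System.out.println("Total error count: " + errors);
    }
}
```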

Test results are:

Total error count: 10314
Total error count: 994

When the given false positive probability is 0.01 there are 10,314 false positives, and at 0.001 there are 994, which is roughly in line with expectations.

However, because Guava’s Bloom filter lives in JVM memory, it only works within a single application instance; it cannot be shared across distributed microservices.

So is there a distributed Bloom filter? This is where Redis steps in to solve a problem it helped create.

Redis’s BitMap supports bit-level operations: a single bit represents the value or state of an element.

// Set or clear the bit at the given offset in the string value stored at key
$ setbit key offset value
// Get the bit at the given offset in the string value stored at key
$ getbit key offset

Since a Bloom filter works bit by bit, we can implement one very simply with the setbit and getbit commands provided by BitMap.

And setbit automatically grows the underlying string when the offset exceeds its current length, so there is no need to worry about the bit array being too short.

In the process of implementing a distributed bloom filter based on BitMap, the number of hash functions and the length of the bit array are dynamically calculated.

The lower the desired false positive rate, the more hash functions are needed, the longer the bit array, and the greater the Redis memory overhead.
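The standard sizing formulas behind that trade-off are m = −n·ln(p)/(ln 2)² bits and k = (m/n)·ln 2 hash functions, for n items at false positive rate p; a small sketch (names are my own):

```java
public class BloomSizing {
    // Optimal bit-array length: m = ceil(-n * ln(p) / (ln 2)^2)
    static long optimalBits(long n, double p) {
        return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    // Optimal number of hash functions: k = round((m / n) * ln 2)
    static int optimalHashes(long n, long m) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }

    public static void main(String[] args) {
        long m = optimalBits(1_000_000, 0.01);
        // For 1,000,000 items at p = 0.01: about 9.6 million bits and 7 hash functions
        System.out.println(m + " bits, " + optimalHashes(1_000_000, m) + " hash functions");
    }
}
```

Halving p to 0.001 pushes both m and k up, which is exactly the memory-versus-accuracy trade-off described above.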

The maximum bit-array length of Guava’s Bloom filter is bounded by the upper limit of an int, about 2.1 billion bits, while a Redis string can be 512 MB, i.e. 2³² bits, so its bit array can reach about 4.2 billion bits, twice Guava’s capacity.

Cache Breakdown

Cache breakdown refers to data that is missing from the cache but present in the database.

When a large number of concurrent requests all miss the cache at the same moment, they all go to the database for the data simultaneously, causing an instantaneous spike in database load.

There are roughly two scenarios that cause this situation:

  • The data is queried for the first time and the cache was never preheated, so the data is not yet in the cache.
  • The cached entry has expired.

The solution is to use a distributed lock so that only one request rebuilds the cache at a time:

  • On a cache miss, acquire a Redis distributed lock before querying the database, using the queried key as the lock key.
  • The thread that acquires the lock checks the cache again before querying the database. Under high concurrency, threads queue up for the lock; the first thread through queries the database and writes the result back to the cache, so threads that acquire the lock later can read the data straight from the cache.
  • Release the distributed lock after the data has been read.
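The three steps can be sketched with in-memory stand-ins, where ConcurrentHashMap.putIfAbsent plays the role of Redis SETNX (all names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MutexRebuildDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    static final Map<String, String> db = new ConcurrentHashMap<>();    // stand-in for the database
    static final Map<String, String> locks = new ConcurrentHashMap<>(); // putIfAbsent ~ SETNX

    static String get(String key) {
        String cached = cache.get(key);
        if (cached != null) return cached;              // cache hit: done
        String lockKey = "lock:" + key;
        while (locks.putIfAbsent(lockKey, "1") != null) {
            Thread.onSpinWait();                        // someone else is rebuilding: wait
        }
        try {
            cached = cache.get(key);                    // double-check after acquiring the lock
            if (cached != null) return cached;
            String fromDb = db.get(key);                // only one thread reaches the database
            if (fromDb != null) cache.put(key, fromDb); // write back for the waiting threads
            return fromDb;
        } finally {
            locks.remove(lockKey);                      // release the lock
        }
    }
}
```

In production the lock must live in Redis itself (as below) so it is shared across instances, and it needs an expiry so a crashed holder cannot block everyone forever.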

A concrete distributed lock can be implemented with Redis’s SET command using the NX option (the setnx semantics):

// lock (SetParams is redis.clients.jedis.params.SetParams)
// nx: only set the key if it does not already exist
// ex: expiration time in seconds
String ok = jedis.set(lockKey, requestId, SetParams.setParams().nx().ex(30));
// unlock: delete the key only if we still hold the lock
// (in production the check-and-delete should be made atomic, e.g. with a Lua script)
if (requestId.equals(jedis.get(lockKey))) {
    jedis.del(lockKey);
}

Setting the expiration time at the same moment the lock is acquired also prevents a crashed thread from holding the lock forever.
