Have you ever been in a situation where your database queries are slowing down? Query caching is the technique you need: it speeds up database reads by using different caching methods while keeping costs down. Imagine that you built an e-commerce application. It started small but is growing fast. By now, you have an extensive product catalog and millions of customers.
That's good for business, but a hardship for technology. Your queries to the primary database (MongoDB/PostgreSQL) are beginning to slow down, even though you have already attempted to optimize them. You can squeeze out a little extra performance, but it isn't enough to satisfy your customers.
Redis is an in-memory datastore, best known for caching. Redis allows you to reduce the load on a primary database while speeding up database reads.
With any e-commerce application, there is one specific type of query that is most often requested. If you guessed that it’s the product search query, you’re correct!
To improve product search in an e-commerce application, you can implement one of the common caching patterns, such as cache-aside, read-through, write-through, or write-behind.
If you use Redis Cloud, cache-aside is easier due to its support for JSON and search. You also get additional features such as real-time performance, high scalability, resiliency, and fault tolerance, as well as high-availability features such as Active-Active geo-redundancy.
This tutorial focuses on the cache-aside pattern. The goal of this design pattern is to set up optimal caching (load-as-you-go) for better read operations. With caching, you might be familiar with a "cache miss," where you do not find data in the cache, and a "cache hit," where you can find data in the cache. Let's look at how the cache-aside pattern works with Redis for both a "cache miss" and a "cache hit."
This diagram illustrates the steps taken in the cache-aside pattern when there is a "cache miss":
1. The application checks Redis for the requested data.
2. Redis does not have the data (a "cache miss").
3. The application queries the primary database for the data.
4. The application stores the result in Redis so later requests can be served from the cache.
5. The application returns the data to the caller.
Now that you have seen what a "cache miss" looks like, let's cover a "cache hit." Here is the same diagram, but with the "cache hit" steps highlighted in green. In this case, the application checks Redis, finds the requested data, and returns it immediately without querying the primary database.
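Both flows can be sketched in a few lines of TypeScript. This is only a minimal illustration: the `Map` stands in for Redis, and `fetchFromDb` is a hypothetical placeholder for a primary-database query. The real application code appears later in this tutorial.

```typescript
// Minimal cache-aside sketch. The Map stands in for Redis, and
// fetchFromDb is a hypothetical stand-in for a primary-database read.
const cache = new Map<string, string>();

async function fetchFromDb(key: string): Promise<string> {
  // Pretend this is an expensive database read.
  return `value-for-${key}`;
}

async function getWithCacheAside(
  key: string,
): Promise<{ value: string; isFromCache: boolean }> {
  const cached = cache.get(key); // 1. Look in the cache first
  if (cached !== undefined) {
    return { value: cached, isFromCache: true }; // cache hit
  }
  const value = await fetchFromDb(key); // 2. Cache miss: read the database
  cache.set(key, value); // 3. Populate the cache for subsequent reads
  return { value, isFromCache: false };
}
```

The first call for a given key is a cache miss and reads the database; subsequent calls are cache hits until the entry is evicted or expires.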
The cache-aside pattern is useful when you need to:
- Reduce read load on the primary database by serving frequently requested data from the cache.
- Cache only the data that is actually requested, rather than preloading everything.
- Tolerate slightly stale data in exchange for faster reads and lower cost.
If you use Redis Cloud and a database that uses a JDBC driver, you can take advantage of Redis Smart Cache, which lets you add caching to an application without changing its code. See the Redis Smart Cache documentation to learn more.
The e-commerce microservices application discussed in the rest of this tutorial uses the following architecture:
products service
: handles querying products from the database and returning them to the frontend
orders service
: handles validating and creating orders
order history service
: handles querying a customer's order history
payments service
: handles processing orders for payment
digital identity service
: handles storing digital identity and calculating identity score
api gateway
: unifies services under a single endpoint
mongodb/postgresql
: serves as the primary database, storing orders, order history, products, etc.
redis
: serves as the stream processor and caching database

You don't need to use MongoDB/PostgreSQL as your primary database in the demo application; you can use other Prisma-supported databases as well. This is just an example.
The e-commerce microservices application consists of a frontend built using Next.js with TailwindCSS. The application backend uses Node.js. The data is stored in Redis and MongoDB/PostgreSQL using Prisma. Below you will find screenshots of the frontend of the e-commerce app:
Dashboard
: Shows the list of products with search functionality
Shopping Cart
: Add products to the cart, then check out using the "Buy Now" button
Below is the command to clone the source code for the application used in this tutorial:
git clone --branch v4.2.0 https://github.com/redis-developer/redis-microservices-ecommerce-solutions
In our sample application, the products service publishes an API for filtering products. Here's what a call to the API looks like:
// POST http://localhost:3000/products/getProductsByFilter
{
"productDisplayName": "puma"
}
Here is the response on the first request, when the data is not yet cached:
{
"data": [
{
"productId": "11000",
"price": 3995,
"productDisplayName": "Puma Men Slick 3HD Yellow Black Watches",
"variantName": "Slick 3HD Yellow",
"brandName": "Puma",
"ageGroup": "Adults-Men",
"gender": "Men",
"displayCategories": "Accessories",
"masterCategory_typeName": "Accessories",
"subCategory_typeName": "Watches",
"styleImages_default_imageURL": "http://host.docker.internal:8080/images/11000.jpg",
"productDescriptors_description_value": "<p style=\"text-align: justify;\">Stylish and comfortable, ...",
"createdOn": "2023-07-13T14:07:38.020Z",
"createdBy": "ADMIN",
"lastUpdatedOn": "2023-07-13T14:07:38.020Z",
"lastUpdatedBy": null,
"statusCode": 1
}
//...
],
"error": null,
"isFromCache": false
}
If you make the same request again while the cached entry is still valid, the data comes from Redis instead:
{
"data": [
//...same data as above
],
"error": null,
"isFromCache": true // now the data comes from the cache rather than the DB
}
The following code shows the function used to search for products in the primary database:
async function getProductsByFilter(productFilter: Product) {
const prisma = getPrismaClient();
const whereQuery: Prisma.ProductWhereInput = {
statusCode: DB_ROW_STATUS.ACTIVE,
};
if (productFilter && productFilter.productDisplayName) {
whereQuery.productDisplayName = {
contains: productFilter.productDisplayName,
mode: 'insensitive',
};
}
const products: Product[] = await prisma.product.findMany({
where: whereQuery,
});
return products;
}
You simply make a call to the primary database (MongoDB/PostgreSQL) to find products based on a filter on the product's productDisplayName
property. You can set up multiple columns for better fuzzy searching, but we simplified it for the purposes of this tutorial.
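As a sketch of what a multi-column search might look like, the helper below builds a Prisma-style `where` object that matches the search term against several columns with an `OR`. The column list and the `buildSearchQuery` helper are illustrative assumptions, not part of the demo application.

```typescript
// Illustrative only: builds a Prisma-style "where" input that searches
// several columns at once. The helper and column list are hypothetical.
type WhereInput = Record<string, unknown>;

function buildSearchQuery(term: string, columns: string[]): WhereInput {
  return {
    statusCode: 1, // DB_ROW_STATUS.ACTIVE in the demo application
    OR: columns.map((column) => ({
      [column]: { contains: term, mode: 'insensitive' },
    })),
  };
}

// Example: search display name, brand, and variant in one query.
const whereQuery = buildSearchQuery('puma', [
  'productDisplayName',
  'brandName',
  'variantName',
]);
```

The resulting object could then be passed to `prisma.product.findMany({ where: whereQuery })` exactly as in the single-column version.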
Using the primary database directly without Redis works for a while, but eventually it slows down. That's why you might use Redis: to speed things up. The cache-aside pattern helps you balance performance with cost.
The basic decision tree for cache-aside is as follows.
When the frontend requests products:
1. Build a hash key from the search filter.
2. Check whether Redis already has data for that key.
3. If it does (a cache hit), return the cached data.
4. If it does not (a cache miss), query the primary database, store the result in Redis with an expiry, and return it.
Here’s the code used to implement the decision tree:
import crypto from 'crypto'; // Node.js built-in module

const getHashKey = (_filter: Document) => {
  let retKey = '';
  if (_filter) {
    // hash the filter so equivalent searches map to the same cache key
    const text = JSON.stringify(_filter);
    retKey = crypto.createHash('sha256').update(text).digest('hex');
  }
  return 'CACHE_ASIDE_' + retKey;
};
router.post(API.GET_PRODUCTS_BY_FILTER, async (req: Request, res: Response) => {
  const body = req.body;
  const result = {
    data: [] as Product[],
    error: null as unknown,
    isFromCache: false,
  };

  // using node-redis
  const redis = getNodeRedisClient();

  // get data from redis
  const hashKey = getHashKey(req.body);
  const cachedData = await redis.get(hashKey);
  const docArr = cachedData ? JSON.parse(cachedData) : [];

  if (docArr && docArr.length) {
    result.data = docArr;
    result.isFromCache = true;
  } else {
    // get data from the primary database
    const dbData = await getProductsByFilter(body); // method shown earlier

    if (body && body.productDisplayName && dbData.length) {
      // set data in redis (no need to wait)
      redis.set(hashKey, JSON.stringify(dbData), {
        EX: 60, // cache expiration in seconds
      });
    }

    result.data = dbData;
  }

  res.send(result);
});
You need to decide what expiry or time to live (TTL) works best for your particular use case.
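To see how a TTL plays out, here is a small sketch of TTL-based expiry using timestamps and an injected clock. This is only an illustration of the trade-off (Redis handles expiry for you via the `EX` option): a short TTL keeps data fresh at the cost of more cache misses, while a long TTL does the opposite.

```typescript
// Sketch of TTL-based expiry. The clock is injectable so the
// behavior is easy to reason about; Date.now is the default.
interface Entry {
  value: string;
  expiresAt: number; // epoch milliseconds
}

class TtlCache {
  private store = new Map<string, Entry>();
  constructor(private now: () => number = Date.now) {}

  set(key: string, value: string, ttlSeconds: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlSeconds * 1000 });
  }

  get(key: string): string | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}
```

With `EX: 60` as in the route above, an entry written at time t is a hit until t + 60 seconds and a miss afterwards, at which point the next request repopulates the cache from the primary database.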
You now know how to use Redis for caching with one of the most common caching patterns (cache-aside). It's possible to incrementally adopt Redis wherever needed with different strategies/patterns. For more resources on the topic of microservices, check out the links below:
Microservices with Redis
General