Hello everyone, welcome to a new devspedia story. Today I'll talk about different ways to cache your serverless application, assuming an AWS-based architecture.
Hint: You can jump-start your serverless project by using the Serverless Framework.
As you can see in the flow above, there are four places where we can utilize a cache: the client, CloudFront, API Gateway, and the Lambda function itself.
Developers prefer to cache on the client side where possible: it's cost efficient and offers the lowest possible latency.
Let's discuss them one by one.
By caching on the client side, we achieve the lowest possible latency, since requests never even reach the next component (Route 53), which means skipping the latency and execution cost of the rest of the stack.
This is done using caching techniques such as Cache-Control headers, ETags, and Web Storage.
By caching in CloudFront, we skip actually fetching the file from S3. This makes response times much faster, because the cached version is served from edge locations distributed around the world, so the client always gets the cached version from the nearest location.
API Gateway allows caching on its side; internally, it utilizes CloudFront to cache across edge locations. There's a limit on how long you can keep the API Gateway cache: up to 1 hour.
API Gateway caching is powerful: you can vary the cache by request cache headers, URL query string parameters, path parameters, and more.
There are multiple ways to use caching once the request has finally reached your Lambda function.
The first common approach is to add an extra component such as ElastiCache, a fully managed service for Redis or Memcached, which your Lambda can query to check whether a cached version of a piece of data exists.
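The cache-aside pattern a Lambda would use with ElastiCache can be sketched as follows. To keep the sketch self-contained, `FakeRedis` is a stand-in for a real client such as ioredis (its `get`/`set` mirror the Redis GET/SET commands), and `queryDatabase` is a hypothetical expensive call:

```javascript
// Stand-in for a Redis client; a real Lambda would connect to the
// ElastiCache endpoint instead.
class FakeRedis {
  constructor() { this.store = new Map(); }
  async get(key) { return this.store.has(key) ? this.store.get(key) : null; }
  async set(key, value) { this.store.set(key, value); }
}

const redis = new FakeRedis();
let dbQueries = 0; // counts how often we actually hit the "database"

// Hypothetical expensive call (database query, external API, ...).
async function queryDatabase(id) {
  dbQueries += 1;
  return JSON.stringify({ id, name: `item-${id}` });
}

async function getItem(id) {
  const key = `item:${id}`;
  const cached = await redis.get(key);   // 1. check the cache first
  if (cached !== null) return JSON.parse(cached);

  const fresh = await queryDatabase(id); // 2. cache miss: fetch the real data
  await redis.set(key, fresh);           // 3. populate the cache for next time
  return JSON.parse(fresh);
}
```

Because ElastiCache lives outside the function, the cached data is shared by all concurrent Lambda containers, unlike the in-function technique below.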
This is a common practice, and many developers already use this in production.
However, there's also another way to utilize caching inside your Lambda: by using code shared across invocations. Check the example below:
In the above example, we've assigned a shared const cache at the top level of the Lambda function's module; this variable persists across all invocations served by the same container.
This works as long as the Lambda is being requested and stays warm. Since we mentioned the warm and cold states of a Lambda, I'd like to share this excellent article that describes them in detail:
That's all! Thanks for reading devspedia. I love you, and I'll see you next time :)