Serverless computing has revolutionised how developers build applications by eliminating the need to manage servers. Among the most popular options are AWS Lambda, Vercel Edge Functions, and the newest addition, Vercel Edge Functions with Fluid Compute. Each serves different use cases and offers distinct trade-offs in cost, latency, and efficiency.
AWS Lambda is a serverless compute service that runs code in response to events. It is deeply integrated with the AWS ecosystem, making it an excellent choice for backend tasks like data processing, API handling, and event-driven workflows.
Let’s say you’re building an image optimisation pipeline where users upload images that need to be resized and compressed before storage:

```javascript
const AWS = require("aws-sdk");
const sharp = require("sharp");

exports.handler = async (event) => {
  const s3 = new AWS.S3();
  const bucketName = "optimised-images";
  // S3 event keys arrive URL-encoded; decode before using them
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  // Get the uploaded image
  const image = await s3.getObject({ Bucket: "uploads", Key: key }).promise();

  // Optimise the image: resize to 800px wide, re-encode as 80%-quality JPEG
  const optimisedImage = await sharp(image.Body)
    .resize(800)
    .jpeg({ quality: 80 })
    .toBuffer();

  // Upload the optimised image to the destination bucket
  await s3
    .putObject({
      Bucket: bucketName,
      Key: `optimised-${key}`,
      Body: optimisedImage,
      ContentType: "image/jpeg",
    })
    .promise();

  return { statusCode: 200, body: "Image optimised successfully!" };
};
```
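Because Lambda hands the handler a plain JSON event, you can exercise the function locally before wiring up the real S3 trigger. A minimal sketch, assuming the handler above is saved as `optimise-image.js` (a hypothetical filename) and that valid AWS credentials and both buckets are available:

```javascript
// Hypothetical local test: invoke the exported handler with a stub
// event mirroring the shape Lambda delivers for s3:ObjectCreated:*.
const { handler } = require("./optimise-image");

const stubEvent = {
  Records: [
    { s3: { bucket: { name: "uploads" }, object: { key: "photo.jpg" } } },
  ],
};

handler(stubEvent)
  .then((res) => console.log(res.body))
  .catch(console.error);
```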
Vercel Edge Functions run at the edge of Vercel’s global network, ensuring low latency by executing code close to users. These functions are ideal for frontend tasks like authentication, A/B testing, or real-time personalisation.
For example, here’s a web app that personalises content based on user location:
```javascript
export const config = { runtime: "edge" };

export default function handler(req) {
  // Vercel injects the caller's country code at the edge; fall back
  // to "unknown" when the header is absent (e.g., during local dev)
  const userCountry = req.headers.get("x-vercel-ip-country") ?? "unknown";

  return new Response(
    JSON.stringify({ message: `Hello from ${userCountry}!` }),
    { headers: { "Content-Type": "application/json" } }
  );
}
```
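Calling the function from the client is an ordinary fetch; Vercel fills in the geo header automatically at the edge. A quick usage sketch, assuming the function is deployed at `/api/hello` (a hypothetical route):

```javascript
// Hypothetical client-side usage; the route depends on where the
// function file lives in your project.
const res = await fetch("/api/hello");
const { message } = await res.json();
console.log(message); // e.g. "Hello from DE!"
```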
Fluid Compute enhances Vercel Edge Functions by enabling concurrent execution within a single function instance. This reduces cold starts, improves resource efficiency, and lowers costs. It’s particularly useful for AI workloads or I/O-bound tasks that require efficient scaling.
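To make the concurrency difference concrete: with per-request isolation, every invocation gets a fresh instance, whereas Fluid Compute lets overlapping requests share one. The illustrative sketch below (not an official Vercel API; the upstream endpoint is hypothetical) shows what that enables: module-level state survives across concurrent requests, so simultaneous calls for the same data coalesce into a single upstream fetch.

```javascript
export const config = { runtime: "edge" };

// Module-level state: under per-request isolation each invocation would
// see a fresh map, but with shared-instance concurrency this cache is
// visible to every request the instance handles (illustrative sketch).
const inflight = new Map();

export default async function handler(req) {
  const { searchParams } = new URL(req.url);
  const id = searchParams.get("id") ?? "default";

  if (!inflight.has(id)) {
    // Hypothetical upstream endpoint: only the first of several
    // concurrent requests for the same id pays for the fetch.
    inflight.set(
      id,
      fetch(`https://api.example.com/items/${id}`).then((r) => r.json())
    );
  }

  const data = await inflight.get(id);
  return new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  });
}
```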
Here is an example of an app that generates movie recommendations using OpenAI’s GPT-3 API:
```javascript
export const config = { runtime: "edge" };

export default async function handler(req) {
  const { query } = await req.json();

  // Call the OpenAI Completions API for recommendations
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt: `Suggest movies based on: ${query}`,
      max_tokens: 100,
    }),
  });

  const data = await response.json();
  return new Response(
    JSON.stringify({ recommendations: data.choices[0].text }),
    { headers: { "Content-Type": "application/json" } }
  );
}
```
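Fluid Compute also pairs well with streaming responses: the instance stays free to serve other requests while bytes trickle back to the client. A hedged sketch of the same endpoint with `stream: true` (same assumed `OPENAI_API_KEY` environment variable):

```javascript
export const config = { runtime: "edge" };

export default async function handler(req) {
  const { query } = await req.json();

  // Request a streamed completion instead of waiting for the full body
  const upstream = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt: `Suggest movies based on: ${query}`,
      max_tokens: 100,
      stream: true,
    }),
  });

  // Pipe the upstream server-sent-event stream straight through;
  // the function instance remains available for other requests.
  return new Response(upstream.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```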
|  | AWS Lambda | Vercel Edge Functions | Vercel Fluid Compute |
| --- | --- | --- | --- |
| Pros | Deep integration with AWS services like S3, DynamoDB, and API Gateway.<br>Regional execution ensures high availability.<br>Event-driven architecture supports diverse triggers (e.g., S3 events). | Ultra-low latency due to edge execution.<br>Simple deployment process integrated with Git-based workflows.<br>Global reach ensures consistent performance for users worldwide. | Optimised concurrency reduces cold starts and costs.<br>Ideal for real-time AI/ML workloads or streaming responses.<br>Maintains low latency while scaling dynamically based on demand. |
| Cons | Higher latency for global users due to regional execution.<br>Cold starts can affect performance unless provisioned concurrency is enabled.<br>Costs can rise with high traffic due to isolated function instances. | Limited backend capabilities compared to AWS Lambda.<br>Not ideal for long-running or compute-intensive tasks. | Still evolving; may not yet support all use cases that traditional serverless functions handle.<br>Requires careful testing for highly complex workloads. |
| Feature | AWS Lambda | Vercel Edge Functions | Vercel Fluid Compute |
| --- | --- | --- | --- |
| Execution Location | Regional (e.g., `us-east-1`) | Global edge network | Global edge network |
| Latency | ~200ms+ for global users | ~50ms | ~50ms |
| Concurrency | Per-request isolation | Per-request isolation | Shared-instance concurrency |
| Cold Start Mitigation | Provisioned concurrency required | Bytecode caching | Bytecode caching + pre-warming |
| Cost Efficiency | Higher for spiky workloads | Moderate | Lower due to shared instances |
| Best Use Cases | Backend processing (e.g., S3 events) | Frontend personalisation | Real-time AI/ML apps |
To track the efficiency of these functions, inspect response latency in your browser’s developer tools: the Waiting (TTFB) time and the Time column in the Network tab show how long each function takes to respond.

Choosing between AWS Lambda, Vercel Edge Functions, and Fluid Compute depends on your application’s requirements:

- AWS Lambda: backend processing and event-driven workflows (e.g., S3 triggers) that live inside the AWS ecosystem.
- Vercel Edge Functions: low-latency frontend tasks such as authentication, A/B testing, and real-time personalisation.
- Vercel Fluid Compute: real-time AI/ML workloads and I/O-bound tasks that benefit from shared-instance concurrency.