umma.dev

AWS Lambda < Edge Functions < Fluid Compute (?)

Serverless computing has revolutionised how developers build applications by eliminating the need to manage servers. Among the most popular options are AWS Lambda, Vercel Edge Functions, and Vercel's newest model, Fluid Compute. Each serves different use cases and offers distinct trade-offs in cost, latency, and efficiency.

AWS Lambda

What Is AWS Lambda?

AWS Lambda is a serverless compute service that runs code in response to events. It is deeply integrated with the AWS ecosystem, making it an excellent choice for backend tasks like data processing, API handling, and event-driven workflows.

Example

Let’s say you’re building an image optimisation pipeline where users upload images that need to be resized and compressed before storage.

const { S3Client, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");
const sharp = require("sharp");

const s3 = new S3Client({});

exports.handler = async (event) => {
  // S3 event keys are URL-encoded (spaces arrive as "+")
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  // Get the uploaded image
  const { Body } = await s3.send(
    new GetObjectCommand({ Bucket: "uploads", Key: key })
  );
  const imageBuffer = Buffer.from(await Body.transformToByteArray());

  // Optimise the image
  const optimisedImage = await sharp(imageBuffer)
    .resize(800)
    .jpeg({ quality: 80 })
    .toBuffer();

  // Upload the optimised image
  await s3.send(
    new PutObjectCommand({
      Bucket: "optimised-images",
      Key: `optimised-${key}`,
      Body: optimisedImage,
      ContentType: "image/jpeg",
    })
  );

  return { statusCode: 200, body: "Image optimised successfully!" };
};

Vercel Edge Functions

What are Vercel Edge Functions?

Vercel Edge Functions run at the edge of Vercel’s global network, ensuring low latency by executing code close to users. These functions are ideal for frontend tasks like authentication, A/B testing, or real-time personalisation.

Example

A web app that personalises content based on user location.

export const config = { runtime: "edge" };

export default function handler(req) {
  // x-vercel-ip-country is set by Vercel's proxy; it may be absent in local dev
  const userCountry = req.headers.get("x-vercel-ip-country") ?? "unknown";
  return new Response(
    JSON.stringify({ message: `Hello from ${userCountry}!` }),
    { headers: { "Content-Type": "application/json" } }
  );
}
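A/B testing, mentioned above, follows the same pattern at the edge. A minimal sketch (the cookie name and 50/50 split are arbitrary illustrative choices, not a Vercel convention):

```javascript
export const config = { runtime: "edge" };

export default function handler(req) {
  // Sticky assignment: reuse an existing cookie, else bucket randomly
  const cookie = req.headers.get("cookie") ?? "";
  const existing = cookie.match(/ab-variant=(control|experiment)/);
  const variant = existing
    ? existing[1]
    : Math.random() < 0.5
      ? "control"
      : "experiment";

  const headers = { "Content-Type": "application/json" };
  if (!existing) {
    // Persist the assignment so the user keeps the same variant
    headers["Set-Cookie"] = `ab-variant=${variant}; Path=/; Max-Age=86400`;
  }
  return new Response(JSON.stringify({ variant }), { headers });
}
```

Because the assignment happens before any origin request, both variants can be served with no client-side flicker.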

Vercel Functions with Fluid Compute

What Is Fluid Compute?

Fluid Compute is Vercel’s execution model for serverless functions (the standard Node.js runtime, rather than the Edge runtime) that enables concurrent invocations to share a single function instance. This reduces cold starts, improves resource efficiency, and lowers costs. It’s particularly useful for AI workloads or I/O-bound tasks that spend most of their time waiting on external services.

Example

Here is an example of an app that generates movie recommendations using OpenAI’s API.

export default async function handler(req) {
  const { query } = await req.json();

  // Call OpenAI's Chat Completions API for recommendations
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "user", content: `Suggest movies based on: ${query}` },
      ],
      max_tokens: 100,
    }),
  });

  const data = await response.json();
  return new Response(
    JSON.stringify({ recommendations: data.choices[0].message.content }),
    {
      headers: { "Content-Type": "application/json" },
    }
  );
}

Pros and Cons Comparison Table

| Feature | AWS Lambda | Vercel Edge Functions | Vercel Fluid Compute |
| --- | --- | --- | --- |
| Pros | Deep integration with AWS services like S3, DynamoDB, and API Gateway; regional execution ensures high availability; event-driven architecture supports diverse triggers (e.g., S3 events) | Ultra-low latency due to edge execution; simple deployment process integrated with Git-based workflows; global reach ensures consistent performance for users worldwide | Optimised concurrency reduces cold starts and costs; ideal for real-time AI/ML workloads or streaming responses; maintains low latency while scaling dynamically with demand |
| Cons | Higher latency for global users due to regional execution; cold starts can affect performance unless provisioned concurrency is enabled; costs can rise with high traffic due to isolated function instances | Limited backend capabilities compared to AWS Lambda; not ideal for long-running or compute-intensive tasks | Still evolving; may not yet support all use cases that traditional serverless functions handle; requires careful testing for highly complex workloads |

Cost, Time, and Efficiency Comparison

| Feature | AWS Lambda | Vercel Edge Functions | Vercel Fluid Compute |
| --- | --- | --- | --- |
| Execution Location | Regional (e.g., us-east-1) | Global edge network | Global edge network |
| Latency | ~200ms+ for global users | ~50ms | ~50ms |
| Concurrency | Per-request isolation | Per-request isolation | Shared instance concurrency |
| Cold Start Mitigation | Provisioned concurrency required | Bytecode caching | Bytecode caching + pre-warming |
| Cost Efficiency | Higher for spiky workloads | Moderate | Lower due to shared instances |
| Best Use Cases | Backend processing (e.g., S3 events) | Frontend personalisation | Real-time AI/ML apps |
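“Shared instance concurrency” is the key difference in the table above: because concurrent invocations can share one instance, module-scope state is reusable across requests. A minimal sketch of what that enables (this is plain JavaScript, not an official Vercel API; `loader` is an illustrative placeholder for any upstream call):

```javascript
// Module-scope state: under shared-instance concurrency, concurrent
// invocations on the same instance see the same Map.
const inflight = new Map();

// Deduplicate concurrent work: requests for the same key while a load
// is in flight await the same promise instead of hitting the upstream
// service again.
async function fetchWithCache(key, loader) {
  if (!inflight.has(key)) {
    const promise = loader(key).catch((err) => {
      inflight.delete(key); // don't cache failures
      throw err;
    });
    inflight.set(key, promise);
  }
  return inflight.get(key);
}
```

With per-request isolation (classic Lambda), each concurrent request gets its own instance and its own empty `Map`, so this optimisation never fires.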

When to Use Each Solution

  • AWS Lambda:
    • Best for backend-heavy applications requiring deep integration with AWS services (e.g., S3 processing pipelines).
    • Ideal for regional workloads where latency isn’t a critical factor.
  • Vercel Edge Functions:
    • Perfect for frontend-focused tasks like authentication, A/B testing, or serving dynamic content close to users.
    • Use when ultra-low latency is essential but compute requirements are light.
  • Vercel Fluid Compute:
    • Optimal for real-time AI/ML applications or I/O-heavy workloads requiring low latency and high concurrency.
    • Use when scaling efficiently under heavy traffic is a priority (e.g., chatbots or recommendation engines).

How to Measure Performance

To track the efficiency of these functions:

  1. Open Chrome DevTools → Network tab.
  2. Trigger API calls and inspect:
    • Latency (the Waiting/TTFB value).
    • Total time (the Time column).
  3. Simulate slow networks with throttling presets like “Slow 3G” to test real-world conditions.
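The DevTools steps above can also be scripted. A small Node.js 18+ sketch (the URL is a placeholder for your deployment) that reports rough latency statistics:

```javascript
// Roughly time an endpoint from Node.js 18+ (fetch and performance are global)
async function timeRequest(url, runs = 5) {
  const timings = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url);
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  return {
    min: timings[0],
    median: timings[Math.floor(timings.length / 2)],
    max: timings[timings.length - 1],
  };
}
```

Compare medians across several runs rather than single samples: the first request often includes a cold start and will skew the maximum.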

Conclusion

Choosing between AWS Lambda, Vercel Edge Functions, and Fluid Compute depends on your application’s requirements:

  • For backend-heavy tasks requiring AWS integrations, go with AWS Lambda.
  • For low-latency frontend tasks, choose Vercel Edge Functions.
  • For cutting-edge AI/ML workloads or high-concurrency needs, leverage Vercel Fluid Compute.