🚀 Executive Summary

TL;DR: Server-side PDF generation in Next.js on serverless platforms like Vercel often fails because these environments lack the full operating system and resources required by headless browser engines like Chromium. The article offers three production-ready fixes: client-side generation for simple needs, a dedicated Dockerized microservice for reliability and scale, or a serverless-native approach using specialized Chromium builds.

🎯 Key Takeaways

  • Most powerful PDF generation libraries (e.g., Puppeteer) rely on headless browser engines (Chromium) that require a full OS, persistent writable filesystem, and generous resources, which are absent in Next.js serverless environments.
  • Client-side PDF generation using `jspdf` and `html2canvas` is a quick, zero-server-cost fix for non-critical features, but it lacks consistency, security, and automation capabilities.
  • For mission-critical PDF generation (invoicing, reporting), a dedicated Dockerized microservice (e.g., Express.js with Puppeteer on AWS Fargate or Google Cloud Run) offers the most robust, scalable, and consistent solution by isolating complex dependencies.
  • A serverless-native approach for Next.js involves using `playwright-core` with `@sparticuz/chromium-min` to leverage stripped-down Chromium builds, but it requires careful management of serverless function memory/timeout limits and can be more fragile.
  • Choosing the right solution depends on project needs: client-side for quick, non-critical tasks; microservice for reliability and scalability; and serverless-native for a fully integrated, albeit potentially fragile, serverless stack.

Anyone generating PDFs server-side in Next.js?

Struggling with server-side PDF generation in Next.js on Vercel or Netlify? Here’s a battle-tested guide explaining why it breaks and three production-ready fixes to escape dependency hell for good.

Server-Side PDFs in Next.js: A Senior Engineer’s Guide to Escaping Dependency Hell

It was 10 PM on a Thursday when a PagerDuty alert screamed from my phone, waking my dog and probably the neighbors. A critical feature—our new automated invoicing system—was failing silently in production. The app looked fine, but our finance team wasn’t getting the monthly reports they needed to close the books. I dove into the logs for our `api-prod-invoicing-01` service and saw nothing but a string of cryptic `504 Gateway Timeout` errors from our Next.js API route. No stack trace, no “file not found,” just… silence. After two hours of frantic debugging, I found the culprit: a beautiful, elegant PDF generation library trying to call a local Chromium binary that didn’t exist in our lean Vercel serverless environment. This, my friends, is a rite of passage for anyone working with modern frontend frameworks that blur the line between frontend and backend.

First, Why Does This Even Happen?

Let’s get to the root of the problem. Most powerful PDF generation libraries, like Puppeteer, are just wrappers around a headless browser engine (like Chromium). They were designed for a traditional server environment—think an EC2 instance or a VPS running Ubuntu. These environments have a few key things a serverless function doesn’t:

  • A full-blown operating system with expected system libraries (like libnss3, libgconf-2-4, etc.).
  • A persistent, writable file system where it can unpack and run a massive browser binary.
  • Generous memory and execution time limits.

When you deploy your Next.js app to Vercel, Netlify, or AWS Lambda, you’re not getting a full server. You’re getting a micro-VM, a lightweight container with a read-only filesystem (except for the /tmp directory) and a stripped-down OS. When your code calls puppeteer.launch(), it’s like asking someone to find a book in a library that was never built. The function searches for files that don’t exist, hits a wall, and eventually times out without a useful error message. Frustrating, right?
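That silent-timeout failure mode is worth defending against in code. A small wrapper (hypothetical, not part of any library) that races a promise against an explicit deadline turns the invisible 504 into an actionable error with a real message in your logs:

```javascript
// Hypothetical helper: fail fast with a clear error instead of a silent 504.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms — is the browser binary actually present?`)),
      ms
    );
  });
  // Whichever settles first wins; always clean up the timer.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch (puppeteer assumed installed):
// const browser = await withTimeout(puppeteer.launch(), 5000, 'puppeteer.launch');
```

Had our invoicing route done this, the logs would have said "puppeteer.launch timed out" instead of nothing at all.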

The Fixes: From Duct Tape to a New Engine

I’ve seen this problem crop up at least three times on different projects. Over the years, my team and I have developed a hierarchy of solutions, from the “get us through the night” fix to the “never worry about this again” architecture. Here they are.

Solution 1: The Quick & Dirty Fix (Generate it on the Client)

Sometimes you just need to ship the feature and unblock your users. The fastest way to do that is to stop trying to solve the problem on the server and push the responsibility to the client’s browser.

The How: You use a client-side JavaScript library like jspdf and html2canvas. When the user clicks “Download Report,” your React component grabs the necessary HTML from the page, uses html2canvas to render it onto a canvas element, and then feeds that canvas into jspdf to create the PDF. The user’s own browser does all the heavy lifting.

The Code:


// In your React component
import jsPDF from 'jspdf';
import html2canvas from 'html2canvas';

const downloadPdf = () => {
  const input = document.getElementById('divToPrint'); // The ID of the div you want to capture
  if (!input) return; // Bail out if the target element isn't on the page
  html2canvas(input).then((canvas) => {
    const imgData = canvas.toDataURL('image/png');
    const pdf = new jsPDF();
    // Scale the captured image to the full page width, preserving aspect ratio
    const pdfWidth = pdf.internal.pageSize.getWidth();
    const pdfHeight = (canvas.height * pdfWidth) / canvas.width;
    pdf.addImage(imgData, 'PNG', 0, 0, pdfWidth, pdfHeight);
    pdf.save('my-report.pdf');
  });
};

// ... JSX
<button onClick={downloadPdf}>Download Report</button>

The Reality: This is a great “hacky” solution for non-critical features like letting a user download a blog post or a receipt. It’s fast to implement and has zero server cost. However, it’s terrible for anything requiring consistency (different browsers render things differently), security (the data is all client-side), or automation (you can’t generate reports in a cron job).
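One sharp edge with the snippet above: it scales the entire capture onto a single page, so anything taller than one page gets squashed or clipped. jsPDF can add pages, and the slicing is just arithmetic. A hypothetical helper (names are illustrative) that computes how many pages are needed and the vertical offset to draw the image at on each page:

```javascript
// Hypothetical helper: given canvas and page dimensions, compute the
// y-offset to pass to pdf.addImage() on each page so tall content
// paginates instead of being crammed onto one page.
function paginate(canvasWidth, canvasHeight, pageWidth, pageHeight) {
  // Height of the full image once scaled to the page width
  const scaledHeight = (canvasHeight * pageWidth) / canvasWidth;
  const pageCount = Math.ceil(scaledHeight / pageHeight);
  // On page n, draw the image shifted up by n page-heights
  return Array.from({ length: pageCount }, (_, n) => ({
    page: n,
    yOffset: -n * pageHeight,
  }));
}

// Usage sketch with jsPDF (jspdf assumed installed):
// const pages = paginate(canvas.width, canvas.height, pdfWidth, pdfPageHeight);
// pages.forEach(({ page, yOffset }) => {
//   if (page > 0) pdf.addPage();
//   pdf.addImage(imgData, 'PNG', 0, yOffset, pdfWidth, scaledHeight);
// });
```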

Solution 2: The Permanent Fix (The Dedicated Microservice)

This is my preferred approach for any serious business requirement. If PDF generation is core to your product (invoicing, reporting, certificate generation), give it the respect it deserves: its own dedicated service.

The How: You create a simple, standalone service—I usually use Express.js—whose only job is to receive data (like HTML or a JSON payload) and return a PDF. The magic is that you run this service inside a Docker container. The Dockerfile is based on a Debian-based Node image (like node:18-slim) where you can properly install Chromium and all of its system dependencies. You deploy this container to a service like AWS Fargate, Google Cloud Run, or even a cheap DigitalOcean Droplet. Your Next.js API route then simply makes an HTTP request to this service.

The Code (Dockerfile example):


# Use an official Node.js runtime as a parent image
FROM node:18-slim

# Install Google Chrome dependencies
# This is the key part that serverless environments lack!
RUN apt-get update && apt-get install -y \
    wget \
    gnupg \
    ca-certificates \
    procps \
    libxss1 \
    libasound2 \
    libnss3 \
    libatk-bridge2.0-0 \
    libgtk-3-0 \
    --no-install-recommends

# Install Puppeteer which downloads a compatible Chromium revision
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install puppeteer

COPY . .

EXPOSE 3001
CMD [ "node", "server.js" ]
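The CMD above points at a server.js whose core loop is just: receive a JSON payload, render it to HTML, hand the HTML to Chromium, return the PDF. The part worth unit-testing is the templating step, since it's pure. A sketch of that step (all names and the payload shape are hypothetical):

```javascript
// Hypothetical templating step inside server.js: turn the JSON payload
// the Next.js app POSTs into the HTML that Chromium will print.
function escapeHtml(s) {
  return String(s).replace(/[&<>"]/g, (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
}

function buildInvoiceHtml({ invoiceNumber, customer, lineItems }) {
  const rows = lineItems
    .map((li) => `<tr><td>${escapeHtml(li.description)}</td><td>$${li.amount.toFixed(2)}</td></tr>`)
    .join('');
  const total = lineItems.reduce((sum, li) => sum + li.amount, 0);
  return `<!doctype html><html><body>
    <h1>Invoice ${escapeHtml(invoiceNumber)}</h1>
    <p>Billed to: ${escapeHtml(customer)}</p>
    <table>${rows}<tr><td><strong>Total</strong></td><td><strong>$${total.toFixed(2)}</strong></td></tr></table>
  </body></html>`;
}

// In the Express handler you'd then do (puppeteer assumed):
// await page.setContent(buildInvoiceHtml(req.body));
// const pdf = await page.pdf({ format: 'A4' });
```

Escaping the payload values matters here: the data comes over the network, and you don't want a customer name to inject markup into your invoices.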

The Reality: This is the most robust and scalable solution. It completely isolates the complex dependency, so your Next.js app stays light and fast. The output is perfectly consistent every single time. The downside? It’s more infrastructure to manage and introduces another network hop. But for mission-critical stuff, the reliability is worth it ten times over.

Solution 3: The Serverless-Native Way (Wrestling with Playwright)

What if you’re all-in on the serverless Vercel ecosystem and don’t want to manage a separate service? You can still make this work, but you have to be very deliberate about it. You can’t just npm install puppeteer and call it a day.

The How: The community has created special, stripped-down, and compressed builds of Chromium designed to fit within the constraints of a serverless function. A popular package for this is @sparticuz/chromium-min. You’ll pair this with a library like Playwright, which is often better suited for these environments than Puppeteer. You configure Playwright to use the executable from the special package instead of trying to download its own.

Warning: Pay close attention to your serverless function’s memory and timeout limits. Generating a PDF is resource-intensive. On Vercel’s Hobby plan, the 10-second timeout might not be enough for a complex PDF. You will likely need to be on a paid plan with extended function timeouts.
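On paid plans you can raise those limits per-route in vercel.json. A sketch—the route path and the exact values are illustrative; check the ceilings your plan actually allows:

```json
{
  "functions": {
    "pages/api/generate-pdf.ts": {
      "memory": 1024,
      "maxDuration": 60
    }
  }
}
```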

The Code (Next.js API Route):


import { NextApiRequest, NextApiResponse } from 'next';
import playwright from 'playwright-core';
import chromium from '@sparticuz/chromium-min';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  let browser = null;
  try {
    // We need to launch the browser with the path to the chromium binary
    browser = await playwright.chromium.launch({
      args: chromium.args,
      executablePath: await chromium.executablePath(
        'https://github.com/Sparticuz/chromium/releases/download/v123.0.1/chromium-v123.0.1-pack.tar'
      ),
      headless: true,
    });

    const page = await browser.newPage();
    await page.setContent('<h1>Hello from a Serverless PDF!</h1>');
    const pdfBuffer = await page.pdf({ format: 'A4' });

    res.setHeader('Content-Type', 'application/pdf');
    res.setHeader('Content-Disposition', 'attachment; filename=report.pdf');
    res.status(200).send(pdfBuffer);

  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Failed to generate PDF.' });
  } finally {
    if (browser) {
      await browser.close();
    }
  }
}

The Reality: This works, and it’s great for keeping your entire stack in one place. But it can feel fragile. You’re at the mercy of the package maintainer, and updates can sometimes break your build. It’s a trade-off between convenience and the rock-solid stability of a dedicated service.
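One mitigation that helps in practice: serverless containers are frequently reused between invocations, so caching the launched browser at module scope means warm requests skip the expensive launch entirely. A generic once-style helper (hypothetical, not from any library) makes that safe even when the first launch fails:

```javascript
// Hypothetical helper: memoize an async factory so warm invocations of the
// same serverless container reuse one browser instead of relaunching it.
function once(factory) {
  let cached = null;
  return async () => {
    if (!cached) {
      cached = factory().catch((err) => {
        cached = null; // Don't cache a failed launch; retry next invocation
        throw err;
      });
    }
    return cached;
  };
}

// Module scope in the API route (playwright assumed):
// const getBrowser = once(() => playwright.chromium.launch({ /* same options as above */ }));
// ...inside the handler: const browser = await getBrowser();
```

If you adopt this, stop calling browser.close() in the finally block—closing per request defeats the reuse.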

My Final Take: A Quick Comparison

Choosing the right path depends entirely on your project’s needs. Here’s how I break it down for my team:

| Solution | Best For | Pros | Cons |
| --- | --- | --- | --- |
| 1. Client-Side | Non-critical, user-initiated downloads (e.g., receipts) | Fastest to implement, no server cost | Inconsistent, insecure, not for automation |
| 2. Microservice | Core business functions (invoicing, reporting) | Extremely reliable, scalable, consistent | More complex setup, adds infrastructure cost |
| 3. Serverless-Native | Teams committed to a pure-serverless Vercel/Netlify stack | Single codebase, no extra infra to manage | Can be fragile, subject to platform limits |

My advice? Don’t let the elegance of an “all-in-one” solution tempt you into a corner. For that invoicing system that woke me up at 10 PM, we immediately implemented the client-side fix to stop the bleeding. The following week, we built out a proper Dockerized microservice for PDF generation. It’s been running on Fargate without a single issue for over a year. Sometimes, the “boring,” well-architected solution is the one that lets you sleep at night.

Darian Vance

Lead Cloud Architect & DevOps Strategist

With over 12 years in system architecture and automation, Darian specializes in simplifying complex cloud infrastructures. An advocate for open-source solutions, he founded TechResolve to provide engineers with actionable, battle-tested troubleshooting guides and robust software alternatives.


🤖 Frequently Asked Questions

❓ Why do server-side PDF generation libraries like Puppeteer fail in Next.js serverless environments?

They fail because serverless functions (micro-VMs on Vercel, Netlify, AWS Lambda) lack a full operating system with required system libraries (e.g., `libnss3`), a persistent writable filesystem for browser binaries, and sufficient memory/execution time limits. Puppeteer tries to call a local Chromium binary that doesn’t exist or cannot run.

❓ How do the different PDF generation solutions (client-side, microservice, serverless-native) compare in terms of use cases and trade-offs?

Client-side is best for non-critical, user-initiated downloads, offering fast implementation and no server cost but sacrificing consistency, security, and automation. A dedicated microservice is ideal for core business functions, providing extreme reliability, scalability, and consistency at the cost of more infrastructure management. The serverless-native way suits teams committed to a pure-serverless stack, offering a single codebase but can be fragile and subject to platform limits.

❓ What is a common implementation pitfall when attempting server-side PDF generation in Next.js on platforms like Vercel, and how can it be avoided?

A common pitfall is attempting to directly use powerful PDF libraries like Puppeteer that rely on a full headless browser engine (Chromium) within a constrained serverless environment. This leads to `504 Gateway Timeout` errors due to missing binaries and system dependencies. It can be avoided by either offloading generation to the client, creating a dedicated Dockerized microservice, or using specialized serverless-compatible Chromium builds (e.g., `@sparticuz/chromium-min` with Playwright) within the serverless function.
