🚀 Executive Summary

TL;DR: Manually tracking Vercel function execution times and timeouts is a time-consuming and reactive process. This guide provides a Python-based automated solution that fetches Vercel API logs, identifies slow or timed-out functions, and sends proactive Slack alerts, significantly improving monitoring efficiency and application resilience.

🎯 Key Takeaways

  • An automated Python script can effectively monitor Vercel function performance by pulling logs from the Vercel API and sending alerts to Slack.
  • The script identifies performance issues by parsing `FUNCTION_INVOCATION` events for `durationMs` against configurable `DURATION_THRESHOLD_MS` and `FUNCTION_TIMEOUT_MS`.
  • State management using a `last_run.txt` file is crucial for tracking the `since` timestamp, ensuring only new logs are processed and preventing duplicate analysis.

Track Vercel Function Execution Times and Timeouts

Alright, let’s talk about Vercel logs. For a while, part of my weekly routine involved manually combing through function logs, hunting for performance bottlenecks or, worse, silent timeouts. I was probably wasting a couple of hours a week on this reactive, tedious task. I realized I could build a simple, automated monitor to do this for me and pipe the alerts directly into Slack. It turned a manual chore into a proactive system that lets my team know about issues before they become critical.

This guide will walk you through that exact setup. We’ll write a Python script to pull logs from the Vercel API, parse them for long execution times or timeouts, and send a clean notification to a Slack channel. Let’s get you those hours back.

Prerequisites

Before we dive in, make sure you have the following ready:

  • A Vercel account with a deployed project (Team plan or higher to access logs via API).
  • Python 3 installed on the machine where you’ll run the script.
  • A Vercel Personal Access Token.
  • A Slack workspace where you can create an incoming webhook.

The Guide: Step-by-Step

Step 1: Get Your Vercel API Token

First, we need to authenticate with Vercel. Head over to your Vercel account settings, find the “Tokens” section, and create a new Personal Access Token. Give it a descriptive name like “FunctionLogMonitor” and copy it somewhere safe. We’ll need it in a moment.

Step 2: Create a Slack Incoming Webhook

Next, we need a way to post messages to Slack. Go to your Slack workspace’s App Directory, search for “Incoming WebHooks,” and add it to a channel of your choice (I usually have a dedicated #devops-alerts channel). Once configured, Slack will give you a Webhook URL. Copy that as well.
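Before wiring the webhook into the monitor, it's worth confirming it works with a one-off test message. Incoming webhooks accept a minimal JSON body of the form `{"text": ...}`; the sketch below builds that payload (the example URL in the comment is a placeholder, not a real endpoint):

```python
import json

def build_test_payload(text):
    """Slack incoming webhooks accept a minimal {"text": ...} JSON body."""
    return json.dumps({"text": text})

# Send it with requests, substituting the webhook URL Slack gave you:
#   requests.post("https://hooks.slack.com/services/T000/B000/XXXX",
#                 data=build_test_payload("Webhook test from the monitor"),
#                 headers={"Content-Type": "application/json"})
```

If the test message lands in your channel, the webhook is good to go and you can move on to the script.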

Step 3: The Python Script

Now for the fun part. Let’s build the script that does the heavy lifting. I’ll skip the standard virtualenv setup since you likely have your own workflow for that. The important part is to create a project directory and install the necessary Python libraries. You’ll need `requests` for making HTTP calls and `python-dotenv` for managing our secrets. You can install them with pip.

Create a file named `monitor_functions.py` and paste the following code into it. I’ll explain what each part does right below.


import os
import requests
import json
from datetime import datetime, timedelta, timezone
from dotenv import load_dotenv

def get_vercel_logs(api_token, team_id, project_id, since_timestamp):
    """Fetches Vercel function logs since the last check."""
    headers = {
        'Authorization': f'Bearer {api_token}'
    }
    # Vercel API uses milliseconds for timestamps
    params = {
        'limit': 100,
        'since': since_timestamp,
        'direction': 'forward',
        'type': 'lambda' # Focus only on function logs
    }
    
    # Scope the request to a team if one is configured
    if team_id:
        params['teamId'] = team_id

    url = f"https://api.vercel.com/v2/deployments/{project_id}/events"

    try:
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()  # This will raise an exception for 4xx/5xx errors
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error fetching Vercel logs: {e}")
        return None

def parse_logs_for_issues(logs, duration_threshold_ms, timeout_limit_ms):
    """Parses logs for long execution times or timeouts."""
    issues = []
    if not logs or 'events' not in logs:
        return issues, 0

    latest_timestamp = 0
    for event in logs['events']:
        # We only care about the invocation result
        if event['payload'].get('type') == 'FUNCTION_INVOCATION':
            details = event['payload']
            duration = details.get('durationMs')

            # Update the latest timestamp we've seen
            if event['created'] > latest_timestamp:
                latest_timestamp = event['created']

            # Skip events that don't report a duration, so the
            # comparisons below never run against None
            if duration is None:
                continue

            # Check if the function timed out
            # Vercel indicates a timeout by the duration landing very close to the limit
            if duration >= (timeout_limit_ms - 50):  # 50ms buffer
                issues.append(
                    f":alert: *Timeout Detected!* "
                    f"`Function: {details.get('name')}` "
                    f"`Duration: {duration}ms` "
                    f"`Request ID: {details.get('requestId')}`"
                )
            # Check if the function exceeded our custom threshold
            elif duration > duration_threshold_ms:
                issues.append(
                    f":warning: *Slow Execution Detected* "
                    f"`Function: {details.get('name')}` "
                    f"`Duration: {duration}ms` "
                    f"`Request ID: {details.get('requestId')}`"
                )
    
    return issues, latest_timestamp

def send_slack_notification(webhook_url, messages):
    """Sends a formatted message to a Slack channel."""
    if not messages:
        return

    payload = {
        "text": "Vercel Function Performance Alert",
        "blocks": [
            {
                "type": "header",
                "text": {
                    "type": "plain_text",
                    "text": ":rocket: Vercel Function Monitor"
                }
            },
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": "\\n".join(messages)
                }
            }
        ]
    }
    try:
        response = requests.post(webhook_url, data=json.dumps(payload), headers={'Content-Type': 'application/json'})
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f"Error sending Slack notification: {e}")

def main():
    """Main execution function."""
    load_dotenv('config.env')

    VERCEL_API_TOKEN = os.getenv('VERCEL_API_TOKEN')
    VERCEL_PROJECT_ID = os.getenv('VERCEL_PROJECT_ID')
    VERCEL_TEAM_ID = os.getenv('VERCEL_TEAM_ID') # Optional
    SLACK_WEBHOOK_URL = os.getenv('SLACK_WEBHOOK_URL')
    
    # Configurable thresholds (in milliseconds)
    DURATION_THRESHOLD_MS = 2500  # e.g., flag anything over 2.5 seconds
    FUNCTION_TIMEOUT_MS = 9800    # Default for Hobby plan is 10s

    # State management for 'since' timestamp
    last_run_file = 'last_run.txt'
    try:
        with open(last_run_file, 'r') as f:
            since_timestamp = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        # If file doesn't exist or is empty, go back 5 minutes for the first run
        since_timestamp = int((datetime.now(timezone.utc) - timedelta(minutes=5)).timestamp() * 1000)

    logs = get_vercel_logs(VERCEL_API_TOKEN, VERCEL_TEAM_ID, VERCEL_PROJECT_ID, since_timestamp)
    
    if logs:
        issues, latest_timestamp = parse_logs_for_issues(logs, DURATION_THRESHOLD_MS, FUNCTION_TIMEOUT_MS)
        send_slack_notification(SLACK_WEBHOOK_URL, issues)
        
        # Save the latest timestamp for the next run
        if latest_timestamp > since_timestamp:
            with open(last_run_file, 'w') as f:
                f.write(str(latest_timestamp))

if __name__ == "__main__":
    main()

Pro Tip: The script uses a simple last_run.txt file to keep track of the last log it processed. This prevents us from re-analyzing the same logs over and over. In my production setups, I might use a more robust solution like a small Redis instance or a database for state management, but for most cases, a text file is perfectly fine and keeps things simple.
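Before pointing the script at the live API, you can sanity-check the threshold logic in isolation. The helper below mirrors the comparisons in `parse_logs_for_issues` (the sample durations are made up; the thresholds match the script's defaults):

```python
def classify_invocation(duration_ms, duration_threshold_ms=2500, timeout_limit_ms=9800):
    """Mirror the script's threshold logic: timeout first, then slow, else ok."""
    if duration_ms is None:
        return "unknown"  # event carried no durationMs
    if duration_ms >= timeout_limit_ms - 50:  # same 50ms buffer as the script
        return "timeout"
    if duration_ms > duration_threshold_ms:
        return "slow"
    return "ok"

print(classify_invocation(9790))  # lands inside the 50ms buffer -> "timeout"
print(classify_invocation(4000))  # over the 2.5s threshold -> "slow"
print(classify_invocation(1200))  # -> "ok"
```

Note that the check order matters: a timed-out invocation also exceeds the slow threshold, so the timeout branch has to come first or every timeout would be reported as merely "slow".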

Step 4: Configure Your Environment

This script doesn’t hardcode secrets, which is good practice. Instead, it reads them from an environment file. In your project directory, create a file named `config.env`; the script loads this name explicitly via `load_dotenv('config.env')` rather than relying on the default `.env`, which keeps it from colliding with any `.env` file your deployment tooling already manages.

Your `config.env` file should look like this. Fill in the values you copied earlier.


VERCEL_API_TOKEN="your_vercel_api_token_here"
VERCEL_PROJECT_ID="your_vercel_project_id_here"
# VERCEL_TEAM_ID="your_team_id_if_project_is_under_a_team"
SLACK_WEBHOOK_URL="your_slack_webhook_url_here"

Step 5: Automate with a Cron Job

The final step is to run this script on a schedule. A classic cron job is perfect for this. You can edit your crontab and add a line to execute the Python script every 5 or 10 minutes.

Here’s an example of a cron entry that runs the script every 10 minutes:

*/10 * * * * cd /path/to/your/project && python3 monitor_functions.py

Cron runs jobs from your home directory by default, so the `cd` into the project directory (substitute your own path) ensures the script can find the `config.env` and `last_run.txt` files.

Common Pitfalls

Here are a couple of spots where I’ve tripped up in the past:

  • Timezones and Timestamps: The Vercel API uses Unix timestamps in milliseconds and expects them in UTC. My script handles this, but if you modify it, be very careful with timezone conversions. Getting this wrong means you’ll either miss logs or process duplicates.
  • API Rate Limits: On a very active project, running the script too frequently (e.g., every minute) could hit Vercel’s API rate limits. For most use cases, checking every 5-15 minutes is a safe and effective interval.
  • Missing `VERCEL_TEAM_ID`: If your project is owned by a team and not your personal account, you *must* include the `VERCEL_TEAM_ID` in your `config.env`. If you omit it, the API call will fail with a ‘Project not found’ error.
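On the timestamp pitfall specifically, this is the conversion the script depends on, pulled out as a stand-alone sketch (pure stdlib, no Vercel specifics):

```python
from datetime import datetime, timedelta, timezone

def ms_since_epoch(dt):
    """Convert an aware datetime to the millisecond Unix timestamp Vercel expects."""
    return int(dt.timestamp() * 1000)

# A naive datetime.now() would be interpreted in local time;
# always anchor to UTC before converting.
five_minutes_ago = datetime.now(timezone.utc) - timedelta(minutes=5)
since = ms_since_epoch(five_minutes_ago)
print(since)  # a 13-digit millisecond timestamp
```

The two classic mistakes are forgetting the `* 1000` (seconds instead of milliseconds puts your `since` back in 1970, so you re-fetch everything) and using local time instead of UTC (which shifts the window by your UTC offset).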

Conclusion

And that’s it. You now have a lightweight, automated monitoring system for your Vercel functions. This isn’t just about catching errors; it’s about understanding your application’s performance in the real world. This simple script has saved my team countless hours and helped us build more resilient applications. I hope it does the same for you.

Darian Vance

Lead Cloud Architect & DevOps Strategist

With over 12 years in system architecture and automation, Darian specializes in simplifying complex cloud infrastructures. An advocate for open-source solutions, he founded TechResolve to provide engineers with actionable, battle-tested troubleshooting guides and robust software alternatives.


🤖 Frequently Asked Questions

❓ How does the script identify Vercel function timeouts?

The script identifies timeouts by checking whether a function’s `durationMs` from the `FUNCTION_INVOCATION` payload is very close to or exceeds `FUNCTION_TIMEOUT_MS` (i.e., `duration >= (timeout_limit_ms - 50)`), indicating it has reached its execution limit.

❓ How does this compare to alternatives for Vercel monitoring?

This custom Python script offers a lightweight, highly customizable, and cost-effective solution for specific Vercel function performance alerts. It contrasts with broader, often paid, third-party monitoring services that provide more extensive observability features but may lack the direct, tailored alerting this script offers.

❓ What is a common implementation pitfall when setting up this Vercel log monitor?

A common pitfall is omitting the `VERCEL_TEAM_ID` in the `config.env` file if your Vercel project is owned by a team rather than a personal account. This omission will cause the Vercel API calls to fail with a ‘Project not found’ error.
