🚀 Executive Summary

TL;DR: Manually checking network latency is inefficient and reactive. This guide provides a simple, automated Python solution to proactively monitor and visualize network latency to Google DNS (8.8.8.8) over time, replacing guesswork with data-driven diagnosis.

🎯 Key Takeaways

  • The `pythonping` library simplifies sending ICMP requests and retrieving average round-trip times (rtt_avg_ms) across different operating systems.
  • Pandas DataFrames are used for efficient logging of timestamped latency data to a CSV file and for reading it back for visualization.
  • Matplotlib is employed to generate time-series plots from the collected data, requiring conversion of string timestamps to `datetime` objects for accurate visualization.
  • Automating the data collection script with `cron` (or Task Scheduler) ensures continuous, passive monitoring without manual intervention.
  • For production setups, logging to a time-series database like InfluxDB is suggested as an alternative to CSV for better query performance, though CSV offers simplicity and portability.

Visualizing Network Latency (Ping) to Google DNS over Time


Hey team, Darian Vance here. I want to walk you through a simple setup that has genuinely saved me hours of tedious work. For a long time, whenever a user reported “slowness,” my first step was to SSH into a server and manually run a few pings to an external host like Google’s DNS (8.8.8.8). It was a reactive, time-consuming mess. I was guessing, not diagnosing.

I realized I was wasting a good couple of hours a week on this. That’s when I built this simple, automated latency logger. Instead of manually checking, I now have a historical graph that immediately tells me if an issue is local, network-wide, or just a user’s perception. It’s the difference between flying blind and having a real dashboard. Let’s build it.

Prerequisites

Before we start, make sure you have the following ready. I’m assuming you’re comfortable with basic system administration and Python.

  • Python 3 installed on a machine with a stable internet connection.
  • Access to the command line on that machine.
  • A few Python libraries. You’ll need to install them using your standard package manager, typically with commands like pip install pythonping pandas matplotlib.

The Guide: Step-by-Step

We’re going to create two small Python scripts: one to collect the data and another to visualize it. I’ll skip the standard project setup steps like creating a directory or a virtual environment, since you likely have your own workflow for that. Let’s jump straight to the Python logic.

Step 1: The Ping and Data Collection Script

First, let’s create a script that pings Google’s DNS and records the average latency. I call mine latency_logger.py. We’ll use the pythonping library because it handles the complexities of ICMP requests cleanly across different operating systems, which saves me the hassle of dealing with raw sockets or subprocesses.

The goal is to run a ping, grab the average round-trip time, and append it to a CSV file with a timestamp.


import pandas as pd
from pythonping import ping
from datetime import datetime
import os

def log_latency(target='8.8.8.8', count=4, filename='latency_log.csv'):
    """
    Pings a target and logs the average latency to a CSV file.
    """
    print(f"Pinging {target}...")
    try:
        response_list = ping(target, count=count, timeout=2)
        avg_latency_ms = response_list.rtt_avg_ms

        # Prepare data for logging
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        data = {'timestamp': [timestamp], 'avg_latency_ms': [avg_latency_ms]}
        df_new = pd.DataFrame(data)

        # Check if file exists to decide on writing headers
        file_exists = os.path.isfile(filename)

        # Append data to CSV
        df_new.to_csv(filename, mode='a', header=not file_exists, index=False)
        print(f"Successfully logged latency: {avg_latency_ms:.2f} ms")

    except Exception as e:
        # This handles cases where the host is unreachable or there's a network error
        print(f"Error during ping: {e}")
        # We can still log the failure if we want, but for now, we'll just print
        return
        
if __name__ == "__main__":
    log_latency()

The Logic Here:

  • We import the necessary libraries: `pandas` for data handling, `pythonping` for the network call, `datetime` for timestamps, and `os` to check if our log file exists.
  • The `log_latency` function takes a target IP, a count of pings to send, and the output filename.
  • `ping(target, count=count, timeout=2)` sends the ICMP requests.
  • `response_list.rtt_avg_ms` gives us the average round-trip time in milliseconds, which is the key metric we want.
  • We then use `pandas` to create a DataFrame. This might seem like overkill for one line of data, but it makes writing to CSV incredibly simple and consistent.
  • The `os.path.isfile(filename)` check is crucial. We only want to write the CSV header (`timestamp,avg_latency_ms`) the very first time the script runs. On all subsequent runs, we just append the new data.
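If you do want failed pings in the log rather than just a printed error, one option is to record them as empty values so they read back as NaN and show up as gaps in the graph. Here is a sketch with a hypothetical `log_result` helper (not part of the script above) that takes the measured latency, or `None` on failure:

```python
import os
from datetime import datetime

import pandas as pd

def log_result(avg_latency_ms, filename='latency_log.csv'):
    """Append one row; None (a failed ping) is written as an empty
    cell, which pandas reads back as NaN."""
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    row = {'timestamp': [timestamp], 'avg_latency_ms': [avg_latency_ms]}
    pd.DataFrame(row).to_csv(filename, mode='a',
                             header=not os.path.isfile(filename), index=False)

log_result(23.4, 'demo_log.csv')   # a successful ping
log_result(None, 'demo_log.csv')   # a failed ping
print(pd.read_csv('demo_log.csv')['avg_latency_ms'].isna().sum())  # → 1
```

Matplotlib naturally breaks the plotted line at NaN points, so outages become visible as gaps rather than silently missing rows.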

Pro Tip: In my production setups, I’d probably log to a time-series database like InfluxDB or even a simple SQLite database for better query performance. However, for a quick, effective, and portable monitor, a CSV file is fantastic. It’s human-readable and requires zero database server setup.
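As a middle ground between a CSV and a full time-series database, Python’s built-in `sqlite3` needs no server at all. A minimal sketch (the `latency` table schema and `demo_latency.db` filename are my own choices, not part of the scripts above):

```python
import sqlite3
from datetime import datetime

def log_latency_sqlite(avg_latency_ms, db_path='demo_latency.db'):
    """Append one latency sample to a SQLite table, creating it if needed."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS latency (
                            timestamp TEXT NOT NULL,
                            avg_latency_ms REAL)""")
        conn.execute("INSERT INTO latency VALUES (?, ?)",
                     (datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
                      avg_latency_ms))

log_latency_sqlite(18.7)

# Time-range queries now run in SQL instead of a full CSV scan:
with sqlite3.connect('demo_latency.db') as conn:
    count, = conn.execute("SELECT COUNT(*) FROM latency").fetchone()
    print(count)  # → 1
```

The rest of the pipeline barely changes: `pd.read_sql_query()` can pull the table straight into a DataFrame for the visualization step.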

Step 2: Automating the Data Collection

Running this script manually isn’t much better than what I was doing before. We need to automate it. On a Linux or macOS system, `cron` is my go-to tool. For Windows, Task Scheduler achieves the same result.

To run this script every 15 minutes, you would set up a cron job with `crontab -e`. Cron runs with a minimal environment, so use absolute paths to both the Python interpreter and the script:

*/15 * * * * /usr/bin/python3 /path/to/latency_logger.py

This tells the system to execute our Python script every 15 minutes of every hour, every day. Now we’re passively collecting data without any manual intervention.

Step 3: The Visualization Script

Data in a CSV is useful, but a graph is worth a thousand rows. Let’s create a second script, visualize_latency.py, to read our `latency_log.csv` and generate a chart.


import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

def create_latency_graph(filename='latency_log.csv', output_image='latency_graph.png'):
    """
    Reads latency data from a CSV and generates a time-series plot.
    """
    try:
        df = pd.read_csv(filename)
    except FileNotFoundError:
        print(f"Log file not found: {filename}. Run the logger first.")
        return

    # Convert timestamp column to datetime objects for proper plotting
    df['timestamp'] = pd.to_datetime(df['timestamp'])

    # Create the plot
    plt.style.use('seaborn-v0_8-whitegrid')
    fig, ax = plt.subplots(figsize=(15, 7))

    ax.plot(df['timestamp'], df['avg_latency_ms'], marker='o', linestyle='-', markersize=4)

    # Formatting the plot for readability
    ax.set_title('Network Latency to 8.8.8.8 Over Time', fontsize=16)
    ax.set_xlabel('Timestamp', fontsize=12)
    ax.set_ylabel('Average Latency (ms)', fontsize=12)
    
    # Improve date formatting on the x-axis
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d %H:%M'))
    plt.xticks(rotation=45)
    plt.tight_layout()

    # Save the figure
    plt.savefig(output_image)
    print(f"Graph saved to {output_image}")

if __name__ == "__main__":
    create_latency_graph()

The Logic Here:

  • We use `pandas` again, this time with `pd.read_csv()` to load our entire log history into a DataFrame.
  • `pd.to_datetime()` is a critical step. It converts our string timestamps into actual datetime objects, which `matplotlib` can understand and plot correctly on a time-series axis.
  • The rest of the code is `matplotlib` boilerplate: we set up a figure (`fig`) and an axes object (`ax`), plot the timestamp vs. latency, and then add labels and a title to make it understandable.
  • Finally, `plt.savefig()` saves our chart as a PNG file. You can run this script anytime to get an up-to-date visualization of your network’s performance.
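One small extension I find useful: individual ping samples are noisy, and a rolling mean makes sustained latency shifts easier to spot than single spikes. A sketch using synthetic data (in practice you would read `latency_log.csv` and plot the extra `rolling_ms` column alongside the raw one):

```python
import pandas as pd

# Synthetic 15-minute samples with one spike at index 3.
df = pd.DataFrame({
    'timestamp': pd.date_range('2024-01-01', periods=8, freq='15min'),
    'avg_latency_ms': [20, 22, 21, 95, 23, 22, 24, 21],
})

# 4-sample rolling mean; min_periods=1 keeps the first few rows populated.
df['rolling_ms'] = df['avg_latency_ms'].rolling(window=4, min_periods=1).mean()

# In the plotting script: ax.plot(df['timestamp'], df['rolling_ms'])
print(df['rolling_ms'].round(2).tolist())
```

The single 95 ms spike barely moves the smoothed line, while a genuine sustained degradation would lift it and keep it lifted.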

Pro Tip: I often chain this visualization script to run right after my data collection in a cron job. For instance, once a day, I’ll run the visualizer to generate a daily report. I’ll have it save the `latency_graph.png` to a directory served by a simple internal web server. This gives my entire team a quick, visual status page they can check anytime.
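As a sketch, that chained crontab might look like this (the `/opt/latmon` paths and the 06:00 schedule are placeholders for your own layout):

```shell
# Log every 15 minutes; regenerate the daily graph once a day at 06:00.
# Absolute paths throughout -- cron runs with a minimal environment and PATH.
*/15 * * * * /usr/bin/python3 /opt/latmon/latency_logger.py >> /opt/latmon/cron.log 2>&1
0 6 * * *    /usr/bin/python3 /opt/latmon/visualize_latency.py >> /opt/latmon/cron.log 2>&1
```

Redirecting stdout and stderr into a log file also gives you somewhere to look when a cron job silently does nothing.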

Common Pitfalls (Where I Usually Mess Up)

This setup is straightforward, but a few things can trip you up. Here are the snags I’ve hit in the past:

  • Permissions Errors: The script, especially when run by `cron`, needs permission to write to its directory. If `latency_log.csv` or `latency_graph.png` can’t be created, it’s almost always a file permissions issue. Make sure the user running the cron job can write to that location.
  • Missing Privileges for Raw Sockets: `pythonping` sends ICMP via raw sockets, which typically requires root (or Administrator) privileges. If the script works when you run it with `sudo` but fails under cron, check which user the job runs as.
  • Firewall/ICMP Blocking: The whole thing depends on ICMP packets (what `ping` uses). Some corporate firewalls or cloud security groups block ICMP by default. If your script always fails or shows 100% packet loss, this is the first thing I’d check.
  • Timestamp Timezones: A classic DevOps headache. The timestamps are based on the server’s system clock. If you’re correlating this data with logs from other systems in different timezones, make sure you’re consistent (I standardize on UTC for all my production logging).
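If you decide to standardize on UTC, the change to the logger is a one-liner. A sketch of the swap, with `%z` appending the numeric offset so the timezone is explicit in the log:

```python
from datetime import datetime, timezone

# Server-local time: ambiguous when correlated across systems.
local_style = datetime.now().strftime('%Y-%m-%d %H:%M:%S')

# UTC with an explicit offset, e.g. '2024-01-01 12:00:00+0000'.
utc_style = datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S%z')
print(utc_style.endswith('+0000'))  # → True
```

`pd.to_datetime()` parses the `+0000` suffix without any extra arguments, so the visualization script keeps working unchanged.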

Conclusion

And there you have it. With two simple scripts and a scheduler, you’ve built a powerful, proactive monitoring tool. This moves you from asking “is the network down?” to knowing “network latency spiked by 30ms at 2:15 PM, which correlates with our backup job.” It’s a small investment of time that provides immense value by replacing guesswork with data.

Hope this helps you reclaim some of your time. Happy monitoring.

– Darian

Darian Vance

Lead Cloud Architect & DevOps Strategist

With over 12 years in system architecture and automation, Darian specializes in simplifying complex cloud infrastructures. An advocate for open-source solutions, he founded TechResolve to provide engineers with actionable, battle-tested troubleshooting guides and robust software alternatives.


🤖 Frequently Asked Questions

❓ How can I monitor network latency over time using Python?

You can monitor network latency by creating a Python script that uses the `pythonping` library to send ICMP requests to a target (e.g., Google DNS 8.8.8.8), logs the average round-trip time (rtt_avg_ms) with a timestamp to a CSV file using `pandas`, and then automates this collection with `cron`.

❓ How does this automated ping solution compare to manual checks or dedicated monitoring tools?

This automated solution provides historical data and visualization, making it superior to manual checks for diagnosing network issues. While simpler than full-fledged monitoring tools like InfluxDB or commercial solutions, it offers a quick, effective, and portable monitor without requiring a database server setup, making it ideal for immediate insights.

❓ What are common implementation pitfalls when setting up this latency monitoring system?

Common pitfalls include file permissions errors preventing the script from writing to the CSV or image files, corporate firewalls or security groups blocking ICMP packets (leading to 100% packet loss), and inconsistencies with timestamp timezones if correlating data from multiple systems, which should be standardized (e.g., UTC).
