🚀 Executive Summary
TL;DR: The article presents a Python script to automate monitoring home office WiFi by logging signal strength and internet speed test results. This solution helps users collect historical data in a CSV file, eliminating manual checks and providing actionable insights into connection performance over time.
🎯 Key Takeaways
- The solution leverages `speedtest-cli` and `wifi-info` Python libraries for cross-platform measurement of internet speed and WiFi signal strength, respectively.
- Data is logged to a simple CSV file for personal use, with a recommendation to integrate with time-series databases like InfluxDB or Prometheus and visualize with Grafana for production environments.
- Robust error handling using `try…except` blocks is crucial to prevent script crashes due to network issues or failed speed tests, ensuring continuous data logging.
Monitor WiFi Signal Strength and Speed Test Results Over Time
Hey there, Darian Vance here. As a Senior DevOps Engineer at TechResolve, I’m all about automation and data. For a long time, whenever my home office WiFi felt sluggish, I’d drop everything, run a manual speed test, check my signal strength, and jot it down. It felt productive, but it was a classic time sink. I finally realized I was wasting hours a month on a task a simple script could do for me. That’s what I want to share today—a straightforward way to automate this process so you can get back to your real work.
This guide will walk you through creating a Python script that logs your WiFi signal and internet speed to a CSV file automatically. It’s a set-it-and-forget-it solution that gives you historical data to finally prove (or disprove) that your connection drops every day at 3 PM.
Prerequisites
- Python 3 installed on your machine (Windows, macOS, or Linux).
- Basic comfort with the command line to install packages.
- A network connection you want to monitor.
The Guide: Step-by-Step
Step 1: Setting Up Your Project
First things first, you’ll want to set up a dedicated folder for this project. I’ll skip the standard virtualenv setup since you likely have your own workflow for that. The important part is to isolate your dependencies.
Once your environment is active, you’ll need two key Python libraries. You can install them from your terminal using pip:
- speedtest-cli: A popular open-source command-line tool and Python library for testing bandwidth against Speedtest.net servers (note: it's a third-party project, not Ookla's official CLI).
- wifi-info: A handy cross-platform wrapper for fetching WiFi details like signal strength.
Just run the necessary install commands for `speedtest-cli` and `wifi-info` and you’ll be ready for the next step.
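Assuming a standard pip workflow, that boils down to one command. The package names below are as given above; `wifi-info` in particular is a niche package, so check PyPI if the install fails:

```shell
# Install both dependencies into the currently active environment
pip install speedtest-cli wifi-info
```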
Step 2: The Python Monitoring Script
Now, let’s build the script. Create a file named `network_monitor.py` and we’ll build it piece by piece. The logic is simple: get the WiFi data, get the speed test data, and then write it all to a file with a timestamp.
Pro Tip: I’m using a CSV file here because it’s simple and universally compatible. In my production setups, I’d pipe this data to a time-series database like InfluxDB or Prometheus and visualize it with Grafana. But for a personal tool, a CSV is perfect.
Here is the complete script. I’ve added comments to explain each part.
```python
import csv
import datetime
import os

import speedtest
from wifi_info import get_wifi_info


def get_wifi_details():
    """
    Fetches the current WiFi signal strength.
    Returns a dictionary with the signal strength, or None on failure.
    """
    try:
        # get_wifi_info returns details for the active interface;
        # the first element is the signal strength as a percentage.
        signal_strength = get_wifi_info(interface=None)[0]
        return {'signal_strength': int(signal_strength)}
    except Exception as e:
        print(f"Error getting WiFi info: {e}")
        return {'signal_strength': None}


def run_speed_test():
    """
    Runs a full speed test and returns key metrics.
    Returns a dictionary with download/upload speeds in Mbps and ping in ms.
    """
    print("Running speed test... this can take a moment.")
    try:
        s = speedtest.Speedtest()
        s.get_best_server()
        # Use the library's default thread count; forcing a single thread
        # tends to understate real-world throughput.
        s.download()
        s.upload()
        results = s.results.dict()
        # Convert from bits per second to megabits per second for readability
        download_mbps = results.get("download", 0) / 1_000_000
        upload_mbps = results.get("upload", 0) / 1_000_000
        ping_ms = results.get("ping", 0)
        return {
            "download_mbps": round(download_mbps, 2),
            "upload_mbps": round(upload_mbps, 2),
            "ping_ms": round(ping_ms, 2)
        }
    except Exception as e:
        print(f"Speed test failed: {e}")
        return {
            "download_mbps": 0,
            "upload_mbps": 0,
            "ping_ms": 0
        }


def log_to_csv(data_dict):
    """
    Appends a dictionary of data to a CSV file.
    Creates the file and writes the header if it doesn't exist.
    """
    log_file = 'network_log.csv'
    file_exists = os.path.exists(log_file)
    try:
        with open(log_file, mode='a', newline='') as f:
            # Using DictWriter makes handling CSVs much cleaner
            writer = csv.DictWriter(f, fieldnames=data_dict.keys())
            if not file_exists:
                writer.writeheader()  # Write header only on the first run
            writer.writerow(data_dict)
        print(f"Successfully logged data: {data_dict}")
    except IOError as e:
        print(f"Error writing to file {log_file}: {e}")


def main():
    """
    Main function to orchestrate the monitoring and logging.
    """
    wifi_data = get_wifi_details()
    speed_data = run_speed_test()
    timestamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    # Combine all our data into a single, flat dictionary for the CSV
    log_entry = {
        'timestamp': timestamp,
        'signal_strength_percent': wifi_data.get('signal_strength'),
        'download_mbps': speed_data.get('download_mbps'),
        'upload_mbps': speed_data.get('upload_mbps'),
        'ping_ms': speed_data.get('ping_ms')
    }
    log_to_csv(log_entry)


if __name__ == "__main__":
    main()
```
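After a few scheduled runs, `network_log.csv` will look something like this (the values here are illustrative, not real measurements):

```
timestamp,signal_strength_percent,download_mbps,upload_mbps,ping_ms
2024-05-01 09:00:01,78,94.12,11.45,18.3
2024-05-01 10:00:01,74,88.67,11.02,19.1
```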
Step 3: Scheduling the Script
A script is only useful if it runs automatically. We need to schedule it.
- On Linux or macOS: The classic tool for this is `cron`. Edit your crontab with `crontab -e` and add a line to run the script on a schedule. To run it at the top of every hour, you’d add a line like this:

```shell
0 * * * * /usr/bin/python3 /path/to/network_monitor.py
```

Cron jobs start with a minimal environment and run from your home directory, so use absolute paths for both the Python interpreter and the script; otherwise the job may silently fail, or the CSV may land somewhere unexpected.
- On Windows: You’ll use the built-in Task Scheduler. Create a new task that triggers on a schedule (e.g., daily or hourly). The task’s action will be to start a program, where you specify the path to your Python executable and pass the full path to `network_monitor.py` as the argument.
Common Pitfalls
I’ve set this up a few times, and here is where I usually mess up, so you don’t have to:
- Error Handling: The first version of my script had no `try…except` blocks. If the WiFi was disconnected or the speed test server failed, the whole script would crash and stop logging. The version above is more resilient; it logs a `0` or `None` and keeps going.
- File Permissions: When you set this up with a scheduler, the script might run as a different user. Make sure that the user running the script has permission to write the `network_log.csv` file in its target directory. It’s an easy one to forget.
- Speedtest Duration: A speed test isn’t instant. It can take 30-60 seconds to run. If you schedule your script too frequently (e.g., every minute), you might end up with overlapping processes. I find that once an hour is a good balance for tracking general trends.
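If you do want to schedule more aggressively, one simple guard against overlapping runs is a non-blocking file lock at the top of `main()`. This is a sketch of my own, not part of the script above; it uses `fcntl.flock`, which is POSIX-only (Linux/macOS), and the lock path is an arbitrary choice:

```python
import fcntl
import sys

LOCK_PATH = "/tmp/network_monitor.lock"  # assumed location; any writable path works

def acquire_lock():
    """Return the lock file handle, or None if another run already holds the lock."""
    f = open(LOCK_PATH, "w")
    try:
        # Non-blocking exclusive lock: raises immediately if already held
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

lock = acquire_lock()
if lock is None:
    print("Previous run still in progress; skipping this run.")
    sys.exit(0)
# ... run the monitoring logic here; the lock is released when the process exits
```

Because the OS releases the lock automatically when the process exits, there is no cleanup step to forget, even if the script crashes mid-run.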
Conclusion
And that’s it. You now have a simple, robust system for collecting network performance data over time. The next time you feel a slowdown, you won’t have to guess—you’ll have a CSV full of data to prove it. From here, you can open the data in any spreadsheet program to build charts, calculate averages, and find patterns. It’s a small investment of time that pays off by turning frustration into actionable data.
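As a starting point for that analysis, here is a small stdlib-only sketch that averages logged download speed per hour of day, so a recurring 3 PM dip shows up as one low bucket. The file and column names match the script above; the helper name is my own:

```python
import csv
from collections import defaultdict
from statistics import mean

def hourly_download_averages(path="network_log.csv"):
    """Return {hour_of_day: mean download Mbps} from the monitor's CSV log."""
    buckets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # timestamp format is 'YYYY-MM-DD HH:MM:SS'; slice out the hour
            hour = int(row["timestamp"][11:13])
            buckets[hour].append(float(row["download_mbps"]))
    return {hour: round(mean(values), 2) for hour, values in sorted(buckets.items())}
```

After a week or two of logging, calling `hourly_download_averages()` gives you a quick per-hour profile without leaving the terminal.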
Happy monitoring!
– Darian Vance
🤖 Frequently Asked Questions
❓ What is the primary purpose of the `network_monitor.py` script?
The script automates the collection of WiFi signal strength and internet speed test results, logging this data with timestamps to a `network_log.csv` file for historical analysis.
❓ How does this monitoring approach compare to more advanced data solutions?
While the script uses a simple CSV for personal monitoring, the author suggests piping data to time-series databases like InfluxDB or Prometheus, visualized with Grafana, for production setups requiring more robust data management and visualization.
❓ What is a common implementation pitfall when scheduling this script, and how is it addressed?
A common pitfall is the lack of robust error handling, which can cause the script to crash if WiFi disconnects or speed tests fail. The provided script addresses this with `try…except` blocks, logging `0` or `None` values to ensure continuous operation.