🚀 Executive Summary
TL;DR: Google Ads failing after a specific time is often caused by an IPv6 routing misconfiguration, where the server’s outbound API calls to Google’s IPv6-first endpoints are black-holed. The fix is one of three approaches: temporarily disabling IPv6, permanently correcting the network’s IPv6 routing, or configuring the system to prefer IPv4 for external connections.
🎯 Key Takeaways
- Modern Linux distributions enable IPv6 by default, and Google services are IPv6-first, leading servers to attempt IPv6 connections to `googleads.googleapis.com`.
- An ‘IPv6 black hole’ occurs when a server’s default IPv6 route points to a gateway that cannot route public IPv6 traffic, causing API requests to Google to time out and fail.
- Three solutions exist: a quick fix by disabling IPv6 via `sysctl`, a permanent fix by correcting the IPv6 routing with network engineers, or an elegant workaround by forcing IPv4 preference system-wide using `/etc/gai.conf`.
Seeing your Google Ads traffic flatline at the same time every day? The culprit is likely a subtle IPv6 routing misconfiguration causing your server’s outbound API calls or health checks to get black-holed.
Your Google Ads Mysteriously Died at 10 AM? Let’s Talk About IPv6 Black Holes.
I remember it vividly. It was a Tuesday. We had a major e-commerce client, and every morning at 9:01 AM on the dot, their entire product feed to Google Merchant Center would fail. For a solid week, we chased ghosts. We blamed cron jobs, application code, database locks, and even Google’s API quotas. The junior engineers were pulling their hair out, and management was breathing down my neck. The problem? A default IPv6 route on our main application server that led to a firewall that didn’t know how to speak IPv6 to the outside world. Our server was screaming into the void, and nobody was listening.
The “Why”: What’s Really Happening Here?
So, why does this happen at a specific time like 10 AM? It’s a classic red herring. The time itself is mostly a coincidence, likely tied to when a specific set of Google’s crawlers or API endpoints become active or when their internal load balancing shifts.
Here’s the real root cause:
- Most modern Linux distributions enable IPv6 by default.
- Google, and many other major services, are IPv6-first. When your server looks up `googleads.googleapis.com`, it gets back both an IPv4 address (A record) and an IPv6 address (AAAA record).
- Your server’s networking stack says, “Great, I’ll use the shiny new IPv6 address!”
- The problem is, your server might have a default IPv6 route pointing to a router or gateway that cannot actually route public IPv6 traffic. This is common in cloud environments or on-prem networks where IPv6 was “enabled” but never fully configured.
The result? Your server sends its API requests into a network black hole. The request times out, your connection fails, and Google’s systems see your site as offline or your API endpoint as unresponsive. They stop serving ads until the system can successfully re-verify. You’ve been ghosted by your own network.
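Before reaching for a fix, confirm the diagnosis. A minimal sketch, assuming `curl` and `getent` are installed (the hostname is the same Google Ads endpoint discussed above):

```shell
# List every unique address the resolver returns for the endpoint.
# Seeing both IPv4 and IPv6 entries confirms the host is dual-stacked.
getent ahosts googleads.googleapis.com | awk '{print $1}' | sort -u

# Probe each address family separately with a short timeout.
# A timeout on -6 alongside a quick success on -4 is the black-hole signature.
curl -6 -sS -o /dev/null --connect-timeout 5 https://googleads.googleapis.com \
  && echo "IPv6 path OK" || echo "IPv6 path FAILED (possible black hole)"
curl -4 -sS -o /dev/null --connect-timeout 5 https://googleads.googleapis.com \
  && echo "IPv4 path OK" || echo "IPv4 path FAILED"
```

If only the `-6` probe fails, you’ve found your black hole, and any of the three fixes below will restore service.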
Three Ways to Slay the IPv6 Dragon
Alright, you’re in the hot seat and need to fix this. I’ve got three paths for you, ranging from a quick-and-dirty fix to the proper, long-term solution. Pick your poison.
1. The Quick & Dirty Fix: Disable IPv6 (On the Host)
This is my “the building is on fire” solution. Marketing is losing money every minute, and you need to get the ads running yesterday. This fix forces the problematic server, let’s call it ad-fetcher-prod-01, to only use IPv4, completely bypassing the broken IPv6 path.
Run these commands as root on the affected machine:
```shell
# Disable IPv6 on all existing network interfaces
sysctl -w net.ipv6.conf.all.disable_ipv6=1

# Disable IPv6 on any interfaces created later (the "default" template)
sysctl -w net.ipv6.conf.default.disable_ipv6=1
```
To make it permanent across reboots, add these lines to /etc/sysctl.conf:
```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```
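A quick way to confirm the flags actually took effect is to read them back from `/proc` (a sanity check, not part of the fix):

```shell
# Both flags should now read 1 (IPv6 disabled); if the files are missing,
# the IPv6 stack isn't loaded at all, which also means no black hole
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 \
    /proc/sys/net/ipv6/conf/default/disable_ipv6 2>/dev/null \
  || echo "IPv6 stack not present"
```

After editing `/etc/sysctl.conf`, run `sysctl -p` as root to apply the persisted values without waiting for a reboot.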
Darian’s Warning: This is a sledgehammer, not a scalpel. It solves the immediate problem on this one host, but it’s a band-aid. You haven’t fixed the underlying network issue, and you’re just kicking the can down the road. Use this to restore service, then plan to implement a proper fix.
2. The Permanent Fix: Correct Your IPv6 Routing
This is the “do it right” solution. The problem isn’t that IPv6 is bad; it’s that your configuration is lying to your server. Let’s fix the lie. First, you need to identify the bad route. SSH into your server and check the IPv6 routing table:
```shell
ip -6 route
```
You might see something like this, where fe80::... is a link-local address for a router that can’t actually route to the public internet:
```
default via fe80::dead:beef:cafe:babe dev eth0 proto ra metric 1024 expires 1787sec hoplimit 64 pref medium
```
If you’ve confirmed with your network team that this gateway is a dead end, you can remove it. Be careful here, as this can cut off other connectivity if you’re wrong.
```shell
# Example command to remove the bad default route
ip -6 route del default via fe80::dead:beef:cafe:babe dev eth0
```
The real fix involves working with your network engineers or cloud provider (e.g., in your AWS VPC or Azure VNet settings) to either provide a functional IPv6 gateway or to stop advertising a useless one via Router Advertisements (RA).
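One concrete knob worth knowing while you wait on the network team: if the dead-end route keeps reappearing after you delete it, the kernel is re-learning it from Router Advertisements (note the `proto ra` in the route output above). You can tell the kernel to stop accepting RAs on that interface. A sketch, run as root; `eth0` is the interface name from the example and may differ on your host:

```shell
# Stop accepting Router Advertisements on eth0 so the dead-end
# default route isn't silently re-added after you delete it
sysctl -w net.ipv6.conf.eth0.accept_ra=0

# Flush any RA-learned default route that is already installed
ip -6 route flush default

# Verify: this should now print nothing
ip -6 route show default
```

Treat this as a stop-gap on the affected host, not a substitute for the network-side fix.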
3. The ‘Nuclear’ Option: Force IPv4 Preference System-Wide
Let’s say you’re in a situation where you can’t disable IPv6 (maybe other apps need it for internal communication) and you have no control over the network team. This is the compromise. You can tell the OS, “Hey, IPv6 is fine, but please, please try IPv4 first for everything.”
You do this by editing glibc’s address-selection configuration file, /etc/gai.conf, which controls how `getaddrinfo()` sorts the addresses it returns. If the file doesn’t exist, create it. Add this single line:
```
precedence ::ffff:0:0/96 100
```
This raises the precedence of IPv4-mapped addresses (`::ffff:0:0/96`), telling `getaddrinfo` to strongly prefer IPv4 results. No reboot is needed; applications pick up the change as they make new connections, though long-running processes that have already read the file may need a restart.
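To check that the new preference is honored, repeat the lookup; `getent ahosts` goes through `getaddrinfo`, so its ordering reflects `gai.conf` (exact addresses will vary):

```shell
# After the precedence change, IPv4 addresses should sort to the top
# of the list for dual-stacked hosts like this one
getent ahosts googleads.googleapis.com | head -n 4
```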
Pro Tip: This is a much more elegant solution than disabling the whole IPv6 stack. It’s my go-to “hack” when I need to solve the problem without waiting for three other teams to approve a network change. It keeps IPv6 enabled for local traffic but avoids the black hole for external traffic.
Summary: Choose Your Weapon
Here’s a quick cheat sheet to help you decide.
| Solution | Pros | Cons |
| --- | --- | --- |
| 1. Disable IPv6 (`sysctl`) | Fastest to implement, guaranteed to work immediately. | Blunt instrument, technical debt, doesn’t fix the root cause. |
| 2. Fix IPv6 routing | The “correct” long-term fix, future-proofs your network. | Requires network access/coordination, slower, potential for misconfiguration. |
| 3. Prefer IPv4 (`gai.conf`) | Elegant, host-level fix, no network changes needed, easily reversible. | Still a workaround, might have unintended effects on IPv6-only internal services. |
Trust me, I’ve seen this exact issue bring down critical services more times than I can count. Don’t waste a week chasing application logs. When you see a service failing at the same time every day, start thinking about the network. Your first stop should always be checking for IPv6 black holes.
🤖 Frequently Asked Questions
❓ Why would my Google Ads stop serving at a specific time every day, even with budget and no campaign changes?
The specific time is usually a coincidence, often tied to when Google’s crawlers or API endpoints become active or its load balancing shifts. The root cause is typically an IPv6 routing misconfiguration on your server, causing outbound API calls to Google’s IPv6-first services to be black-holed and time out.
❓ How does disabling IPv6 via `sysctl` compare to preferring IPv4 via `gai.conf`?
Disabling IPv6 via `sysctl` is a ‘sledgehammer’ approach, immediately forcing the host to use only IPv4, but it’s a band-aid that doesn’t fix the underlying network issue. Preferring IPv4 via `gai.conf` is a more elegant, host-level fix that keeps IPv6 enabled for internal traffic while ensuring external connections prioritize IPv4, avoiding black holes without network changes.
❓ What is a common implementation pitfall when dealing with IPv6 black holes affecting Google Ads?
A common pitfall is chasing application logs, cron jobs, or API quotas for a week, overlooking the underlying network issue. The solution is to immediately suspect IPv6 routing misconfigurations when a service consistently fails at the same time daily, especially if the server is attempting to use an IPv6 route to a non-functional public gateway.