🚀 Executive Summary
TL;DR: Traditional acquisition due diligence often overlooks critical technical debt and hidden risks due to information asymmetry and human fatigue. This article provides three targeted LLM prompts to act as a ‘BS Detector,’ ‘Tech Stack Archeologist,’ and ‘Pre-Mortem’ analyst, enabling rapid identification of red flags and deeper technical scrutiny.
🎯 Key Takeaways
- The ‘BS Detector’ prompt helps identify contradictions in Confidential Information Memoranda (CIMs) by forcing the LLM to act as a skeptical Private Equity analyst, flagging discrepancies in AI/ML claims, churn rates, and vague technical debt mentions.
- The ‘Tech Stack Archeology’ prompt, used with API docs or schema exports, enables an LLM acting as a Principal Cloud Architect to audit for Single Points of Failure (SPOFs), Bus Factor risks, licensing landmines (e.g., AGPL), and infrastructure fragility like disguised monolithic patterns.
- The ‘Nuclear Option’ or ‘Pre-Mortem’ prompt simulates a catastrophic acquisition failure 12 months post-deal, forcing teams to confront potential weak points and hidden costs (e.g., legacy-billing-v1 migration) that could sink ROI, counteracting ‘deal fever’.
Quick Summary: Cut through the fluff of confidential information memoranda (CIMs) and uncover hidden technical debt using these three targeted LLM prompts designed for due diligence.
Engineering the Deal: My Go-To Prompts for Vetting Acquisitions
I still wake up in a cold sweat thinking about the “Project Phoenix” acquisition we looked at back in 2019. On paper, the target company was a rocket ship—waitlist functionality, 10k DAU, low churn. The suits upstairs were already popping champagne. Then I got access to the repo. It wasn’t a microservices architecture like they claimed; it was a monolith so brittle that prod-api-01 had to be manually restarted every six hours via a cron job because of a memory leak nobody could find. We almost bought a burning building because the due diligence focused on the P&L, not the pull requests. If I’d had the LLM tools then that I have now, I could have flagged that disaster in ten minutes.
The Root Cause: Information Asymmetry
The problem with evaluating an acquisition—whether it’s a small SaaS or a mid-sized competitor—is that you are drinking from a firehose of curated data. The seller hands you a “Data Room” (usually a chaotic Google Drive or a posh Box folder) filled with hundreds of PDFs, spreadsheets, and architectural diagrams that look suspiciously like marketing material.
They know where the bodies are buried; you don’t. Humans get tired reading page 400 of a technical disclosure. We start skimming. We miss the footnote that says the entire IP is actually owned by a third-party contractor in a non-extradition country. We need to treat the Data Room like a massive log file and grep it for errors. That’s where AI comes in.
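Before any LLM enters the picture, you can literally grep the data room. Here's a minimal sketch, assuming the documents have already been exported to plain text (the red-flag keyword list is illustrative, not exhaustive — tune it to your deal):

```python
import pathlib
import re

# Hypothetical red-flag phrases; adjust for the target company.
RED_FLAGS = [
    r"re-?platform", r"technical debt", r"legacy migration",
    r"contractor[- ]owned", r"end[- ]of[- ]life", r"manual(ly)? restart",
]
PATTERN = re.compile("|".join(RED_FLAGS), re.IGNORECASE)

def grep_data_room(root: str) -> list[tuple[str, int, str]]:
    """Scan every .txt export under the data-room folder and return
    (filename, line number, matching line) for each red-flag hit."""
    hits = []
    for path in pathlib.Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((path.name, lineno, line.strip()))
    return hits
```

Run `grep_data_room("data_room")` against your export folder and read every hit before the first management call. It won't catch clever omissions, but it guarantees page 400 gets the same scrutiny as page 4.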
Solution 1: The “BS Detector” (Initial Screen)
This is your smoke test. Before you even look at the code, you need to validate the business narrative. Sellers love to use buzzwords to mask stagnation. I use this prompt to rip through the CIM (Confidential Information Memorandum) and financial summaries. It forces the LLM to act like a cynical Private Equity analyst.
The Prompt:
Act as a skeptical Senior Private Equity Analyst and Technical Due Diligence Lead.
I am pasting the text from the company's CIM (Confidential Information Memorandum) and their last 12 months of meeting notes below.
Your goal is to find contradictions. Specifically, look for:
1. Discrepancies between their claimed "AI/ML capabilities" and their described headcount (e.g., claiming proprietary AI but having zero ML engineers).
2. Conflicts between "low churn" claims and any mentions of "legacy customer migration" or "support ticket volume."
3. Vague language regarding "Technical Debt" or "Re-platforming."
Output a bulleted list of "Red Flags" that require immediate clarification from the seller.
[PASTE DOCUMENTS HERE]
Pro Tip: If the model tells you “The roadmap relies heavily on ‘future hires’ to deliver core features,” run away. That means the product doesn’t exist yet.
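If you're running this screen across a dozen targets, it's worth wrapping the prompt in a small helper so oversized pastes don't silently blow the context window. A minimal sketch — the condensed prompt text and the 100k-character budget are assumptions, not recommendations; size the budget to your model:

```python
# Condensed version of the BS Detector prompt above, with a slot for
# the pasted documents.
BS_DETECTOR_TEMPLATE = (
    "Act as a skeptical Senior Private Equity Analyst and Technical "
    "Due Diligence Lead. Find contradictions in the documents below and "
    'output a bulleted list of "Red Flags" that require immediate '
    "clarification from the seller.\n\n{documents}"
)

def build_bs_detector_prompt(documents: str, max_chars: int = 100_000) -> str:
    """Assemble the screening prompt, truncating oversized pastes and
    marking the cut so the model knows the input is incomplete."""
    if len(documents) > max_chars:
        documents = documents[:max_chars] + "\n[TRUNCATED]"
    return BS_DETECTOR_TEMPLATE.format(documents=documents)
```

Feed the result to whatever model your firm has cleared for confidential material — and remember the output is a question list for the seller's CTO, not a verdict.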
Solution 2: The “Tech Stack Archeology” (Deep Dive)
This is for us, the engineers. Once I get access to their documentation (API docs, schema exports, or infrastructure diagrams), I need to know if I’m inheriting a modern stack or a museum piece. I don’t care about the “vision”; I care if payment-service is hardcoded to talk to a Stripe test key.
I use this prompt to identify Single Points of Failure (SPOFs) and scalability nightmares. It’s hacky, but pasting a schema dump or a text-based architecture description works wonders.
The Prompt:
Act as a Principal Cloud Architect (AWS/Azure focus). I am providing you with the target company's technology stack summary, database schema export (sanitized), and a list of third-party dependencies.
Conduct a "Scalability & Risk Audit." Tell me:
1. Bus Factor Risk: Based on the commit logs or team structure provided, does the knowledge seem concentrated in one person?
2. Licensing Landmines: Are there dependencies listed here (like AGPL libraries) that could infect our commercial codebase?
3. Infrastructure Fragility: Highlight any "Monolithic" patterns disguised as microservices (e.g., shared databases across services).
Be harsh. I need to know what will break when we double the user load.
[PASTE TECH SPECS HERE]
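The licensing-landmine check in point 2 doesn't even need an LLM if you can get a Python environment from the target. Here's a rough sketch using the standard library's package metadata; the copyleft marker list is an illustrative assumption, and this only sees what's installed, not vendored or transitive source copies:

```python
from importlib import metadata

# Copyleft families that can "infect" a commercial codebase; AGPL is the
# classic landmine. Illustrative list, not legal advice.
COPYLEFT_MARKERS = ("AGPL", "GPL", "SSPL")

def find_licensing_landmines() -> list[tuple[str, str]]:
    """Flag installed distributions whose declared license looks copyleft.
    Checks both the License field and the Trove classifiers, because
    packages are inconsistent about which one they fill in."""
    landmines = []
    for dist in metadata.distributions():
        declared = [dist.metadata.get("License") or ""]
        declared += [c for c in (dist.metadata.get_all("Classifier") or [])
                     if c.startswith("License ::")]
        if any(mark in field for field in declared for mark in COPYLEFT_MARKERS):
            landmines.append((dist.metadata["Name"],
                              "; ".join(filter(None, declared))))
    return landmines
```

Anything this surfaces goes straight onto the question list for the seller's CTO — alongside whatever the LLM flags from the schema dump.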
Solution 3: The “Nuclear” Option (The Pre-Mortem)
Sometimes you fall in love with a deal. That’s dangerous. You get “deal fever.” When the team at TechResolve starts getting too excited, I pull this prompt out. It’s designed to simulate the worst-case scenario. It stops us from ignoring the warning signs just because we want to close.
The Prompt:
We are considering acquiring this company. Based on all the data provided in this thread (financials, tech stack, customer sentiment), perform a "Pre-Mortem."
Imagine it is 12 months after the acquisition, and the deal has been a catastrophic failure. The company has shut down, and the key engineers have quit.
Write the "Post-Incident Review" explaining exactly WHY it failed. Base your narrative on the weak points you identified earlier. Did the tech debt crush the roadmap? Did the culture clash cause the lead dev to leave? Be specific.
This output is usually sobering. It forces you to look at the legacy-billing-v1 table and realize that migrating it isn’t a “Q3 Task,” it’s a two-year nightmare that will sink the ROI.
Realism Check
Look, LLMs hallucinate. You cannot let an AI make the final call on a $5M or $50k purchase. I use these prompts to generate questions for the interview with the seller’s CTO. When the AI points out that their “proprietary algorithm” looks a lot like a wrapper around an OpenAI API key, you don’t accuse them immediately.
You sit down, you open your notebook, and you ask: “Can you walk me through the unit economics of your AI calls in prod-inference-worker? The margins seem tight.” Watch their face. That’s where the real due diligence happens.
🤖 Frequently Asked Questions
❓ How can LLMs enhance technical due diligence in acquisitions?
LLMs can rapidly process vast amounts of curated data (CIMs, tech docs) to identify contradictions, assess technical risks like SPOFs and licensing issues, and simulate failure scenarios (pre-mortems), generating critical questions for human experts.
❓ How do these LLM-driven prompts compare to traditional human-led due diligence?
LLM-driven prompts augment human-led due diligence by efficiently sifting through data rooms to flag potential red flags and generate specific questions, overcoming human fatigue and information asymmetry, rather than replacing expert judgment.
❓ What is a common pitfall when using LLMs for acquisition evaluation?
A common pitfall is over-reliance on LLM outputs, as they can hallucinate. LLM-generated insights should be treated as ‘questions for the CTO’ to guide human-led interviews and deeper investigation, not as definitive answers.