Technical Due Diligence
What You're Actually Evaluating
In 2015, a mid-size SaaS company acquired a startup for $40M, primarily for its real-time analytics engine. The due diligence team spent three weeks reviewing architecture decks and interviewing leadership. Everything looked solid on paper. Six months after closing, they discovered the analytics engine depended on a single Postgres instance with no replication, the "microservices architecture" was actually a monolith with HTTP calls to itself, and the three engineers who understood the data pipeline had already left. The integration cost $12M more than budgeted, took 30 months instead of 12, and the analytics product was eventually rewritten from scratch. The $40M acquisition delivered maybe $10M of actual value.
Technical due diligence is not a code review. You are answering a fundamental question: can this engineering organization continue to deliver value after the acquisition, and what will it cost to integrate them?
The assessment covers four areas. Architecture and infrastructure: is the system designed to evolve, or is it held together by tribal knowledge? Team capability: do the engineers understand their own system deeply enough to operate it without the founders? Technical debt: what is the real cost of deferred maintenance, and does it block the roadmap you are acquiring them to execute? Security and compliance: are there vulnerabilities or regulatory gaps that create liability?
Architecture Assessment
Request architecture diagrams, then set them aside. They are always out of date, sometimes by years. The real assessment happens when you ask an engineer to walk you through a recent feature from idea to production. Watch for hesitation, hand-waving, and "I think it goes through..." statements. If the people building the system cannot trace a request end-to-end with confidence, the architecture has outgrown the team's understanding.
The most revealing question is not "how does it work?" but "what happens when it breaks?" Ask about the last three production incidents. How were they detected? How long did resolution take? What changed afterward? A team that can narrate their failure modes fluently has genuine operational maturity. A team that says "we haven't had any major incidents" either has exceptional engineering or poor observability. The latter is far more common.
Pay attention to the gap between stated architecture and actual architecture. A company might describe a microservices architecture, but if eight of twelve services share a single database and deploy together, that is a distributed monolith with network calls instead of function calls. This is worse than an honest monolith because it carries the complexity of distribution without the benefits of independence. Look at the deployment graph: do services deploy independently, or does deploying service A require coordinating with services B and C?
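A minimal sketch of how you might make that deployment-graph check concrete. The service names, databases, and deploy groups below are hypothetical placeholders; in practice you would populate the inventory from diligence interviews, CI configs, and connection strings, then look for the shared-database and coupled-deploy signals described above.

```python
# Illustrative sketch: flag "distributed monolith" signals from a hand-built
# inventory of services. All names and groupings here are hypothetical
# placeholders to be filled in during diligence.

from collections import defaultdict

services = {
    # service: (primary database, deploy group)
    "billing":   ("core_pg", "release-train-A"),
    "invoicing": ("core_pg", "release-train-A"),
    "reporting": ("core_pg", "release-train-A"),
    "auth":      ("auth_pg", "independent"),
}

by_database = defaultdict(list)
by_deploy_group = defaultdict(list)
for name, (database, deploy_group) in services.items():
    by_database[database].append(name)
    by_deploy_group[deploy_group].append(name)

# Multiple services on one database is the classic shared-state smell.
for database, users in by_database.items():
    if len(users) > 1:
        print(f"Shared database {database}: {', '.join(users)}")

# Services that can only ship together are coupled regardless of what the
# architecture deck calls them.
for group, members in by_deploy_group.items():
    if group != "independent" and len(members) > 1:
        print(f"Coupled deploys in {group}: {', '.join(members)}")
```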
Team Capability Evaluation
Talk to engineers at every level, not just leadership. Ask them what they'd change about the system if they had six months of free time. Their answers tell you what's painful and what they've been unable to fix. If senior engineers can't explain the system's failure modes, that's a serious concern.
Check the git history. How many people contribute regularly? Is knowledge concentrated in two or three people? If the top contributor has 60% of commits, you have a bus factor problem that becomes an acquisition risk.
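One way to put a number on that concentration is to run something like the following against a local checkout of the target's repository. It assumes git is available on the path, and it treats commit counts as a rough proxy only; squash merges, pairing, and review work all distort the picture, so use the output as a conversation starter rather than a verdict.

```python
# Rough bus-factor check: commit authorship share over the last 12 months.
# A single author near or above the 60% mark is the risk signal discussed above.

import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--since=12 months ago", "--format=%an"],
    capture_output=True, text=True, check=True,
)
authors = Counter(line for line in log.stdout.splitlines() if line)
total = sum(authors.values())

for author, count in authors.most_common(5):
    print(f"{author}: {count} commits ({count / total:.0%})")
```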
Tech Debt Quantification
Quantify tech debt in terms of developer-months, not abstract severity scores. "We have significant tech debt" means nothing. "We estimate 18 developer-months of work to modernize the data layer, which currently causes 3 production incidents per month" is actionable.
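As a rough illustration of that framing, the sketch below compares the one-time cost of the fix against the carrying cost of the incidents it would eliminate. The per-developer-month and per-incident figures are assumptions for illustration; replace them with the target's actual numbers.

```python
# Back-of-the-envelope comparison: "fix the debt" versus "keep paying for it",
# using the dev-month and incident-rate figures from the example above plus
# assumed cost values.

fix_cost_dev_months = 18             # estimated modernization effort
loaded_cost_per_dev_month = 20_000   # assumption: fully loaded engineer cost

incidents_per_month = 3              # current rate attributed to the data layer
cost_per_incident = 8_000            # assumption: response effort + customer impact

fix_cost = fix_cost_dev_months * loaded_cost_per_dev_month
annual_carrying_cost = incidents_per_month * cost_per_incident * 12

print(f"One-time fix: ${fix_cost:,}")
print(f"Annual cost of doing nothing: ${annual_carrying_cost:,}")
print(f"Payback period: {fix_cost / (incidents_per_month * cost_per_incident):.1f} months")
```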
Common debt categories to assess: outdated dependencies with known CVEs, missing observability (no structured logging, no distributed tracing), manual operational procedures, incomplete test coverage on critical paths, and hard-coded configuration that prevents multi-tenancy.
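For the dependency category, even a crude freshness scan is a useful starting point. The sketch below assumes a Python target and the ability to run commands inside its environment; it only surfaces packages that have fallen behind, so pair it with a dedicated vulnerability scanner for actual CVE coverage, since "old" and "vulnerable" are not the same thing.

```python
# Quick scan for stale Python dependencies in the target's environment.

import json
import subprocess

result = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
outdated = json.loads(result.stdout)

for pkg in sorted(outdated, key=lambda p: p["name"].lower()):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
print(f"\n{len(outdated)} packages behind their latest release")
```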
Integration Planning
Start integration planning during diligence, not after close. The two biggest decisions are whether to migrate them to your stack or maintain theirs, and how to handle authentication and data access across systems.
Timeline reality check: plan for 18-24 months for meaningful integration. The "we'll migrate them in 6 months" plan has failed at almost every company that's tried it. HP's acquisition of Autonomy, eBay's acquisition of Skype, and countless smaller deals have stumbled on unrealistic integration timelines.
Budget for attrition. Industry data shows 30-50% of acquired engineering teams leave within 18 months. If the deal thesis depends on retaining specific engineers, get retention agreements in writing with meaningful vesting schedules.
Key Points
- Technical due diligence should evaluate architecture, team capability, and tech debt in equal measure
- Code quality metrics alone are misleading. Focus on how quickly the team can ship changes safely
- Integration planning must start during due diligence, not after the deal closes
- The acqui-hire math changes completely if key engineers leave within 12 months post-acquisition
- Red flags in deployment practices (manual deploys, no CI/CD, no staging environment) predict post-acquisition pain
Common Mistakes
- ✗ Relying on management presentations instead of talking directly to senior engineers and reading actual code
- ✗ Underestimating integration costs by 3-5x because each system was assessed in isolation, ignoring the work of connecting them
- ✗ Skipping security and compliance review under time pressure. This is where the expensive surprises hide
- ✗ Assuming you can replace the acquired team's stack with yours quickly. Migrations always take longer than the integration plan promises