Due Diligence: Notes from a Strategy Lead Navigating the AI Maze
Day 1 – Monday
I was handed the AI vendor shortlist today. Five companies, each promising revolutionary outcomes—automated decisions, predictive insights, operational overhaul. As Head of Strategy, I’ve been asked to lead the evaluation. My first instinct? Caution. I’ve seen tech hype before, and AI, for all its potential, demands a different flavour of due diligence. Not just about feasibility. It’s about responsibility.
Day 3 – Wednesday
We had our first call with Vendor A. Their deck was glossy. Their demo was smooth. But when I asked about model training data, bias controls, or how their system performed under regulatory scrutiny, the answers were… vague. I’ve sat through enough investor pitches to know the difference between confidence and deflection. It’s unsettling how many executives mistake flash for depth. I reminded myself that true due diligence in AI means pulling the thread until you find what’s really under the surface—sometimes it’s gold, sometimes it’s glue and wishful thinking.
Day 5 – Friday
I met with our legal and compliance leads today. Their biggest concern? Explainability. “If the AI makes a bad decision, who’s accountable?” one of them asked. A valid question. We’re not just buying software—we’re buying a decision-making framework. The conversation veered into GDPR, consent models, audit trails. It hit me that AI isn’t just a technical leap—it’s an ethical one. I made a note to include our risk team in the next vendor roundtable. Due diligence, I’m learning, is a cross-functional sport.
Day 7 – Sunday
I spent the weekend digging into post-deployment AI failures. An airline that trusted sentiment analysis to price tickets dynamically—until sarcasm on social media skewed the model. A hospital where patient triage AI underperformed for minority groups because of underrepresented data. These weren’t coding errors; they were strategic oversights. The kind that happen when teams treat due diligence as a one-time audit rather than an ongoing, holistic mindset. I wrote myself a checklist today: not just performance and scalability, but governance, transparency, retraining protocols, and above all—human context.
Day 10 – Wednesday
Vendor B surprised me—in the best way. No sweeping promises. No AI-as-magic. Instead, they walked us through their model lifecycle: raw data handling, stakeholder sign-offs, bias detection stages, performance boundaries, and even ethical opt-outs. They brought their head of compliance to the call. It wasn’t theatre—it was rigour. For the first time, I felt we weren’t just vetting a platform; we were evaluating a philosophy. This is what real due diligence feels like: not box-ticking, but worldview-checking. I’ve come to believe that if a company treats its AI like a black box, we should treat their proposal like a red flag.
Day 12 – Friday
A junior engineer on our team raised a good question: “Who retrains the model when business goals change?” Silence. It reminded me that AI isn’t static—it evolves. And unless we plan for drift, for obsolescence, and for realignment, we’re setting ourselves up for disappointment. We looped in HR and Ops for input on how AI outputs might influence team workflows. Not sexy, but essential.
Day 13 – Saturday
I caught myself sketching AI’s role across our five-year roadmap. Strange, considering just a week ago I was suspicious of the whole exercise. But something has shifted. I’ve stopped seeing AI as a tool and started seeing it as a lens—one that reflects how we think, decide, and evolve. The challenge isn’t finding the right vendor; it’s becoming the kind of company that deserves the right partner. And that starts with better due diligence. Not only in the systems we buy, but in the questions we ask of ourselves.