Five Principles for AI Leaders
The courage to act.
During our time together, we explored how to think about, manage, and guide one of the most transformative technologies of our lifetime. With that understanding comes the responsibility—and the opportunity—to shape institutions, strengthen societies, and influence the future. But none of it matters without the courage to act.
1. Begin by asking whether AI should be used at all.
Before writing a line of code or evaluating a vendor, determine whether the task genuinely requires AI.
Clarify:
What problem does this solve?
What value will users actually gain?
What trade-offs, costs, or risks will they assume?
If AI does not clearly improve the user’s experience or outcomes, reconsider the approach.
2. Speak directly with the people your AI is meant to serve.
Never design in isolation.
Interview, observe, and validate with real users before building, selecting, or deploying anything.
If you haven’t talked to them, you don’t yet understand the problem.
3. Ensure your data is clean, relevant, and representative—and keep monitoring it.
AI quality depends on data quality.
Check that your data reflects the people and contexts your system will impact.
Maintain an ongoing process for auditing, monitoring, and updating data to avoid drift, bias, or unintended harm.
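The ongoing auditing described above can be partly automated with distribution checks that compare live data against a reference sample. A minimal sketch, assuming a single numeric feature and using the Population Stability Index with illustrative bin counts and thresholds (the names, data, and cutoffs here are hypothetical, not a prescribed standard):

```python
# Minimal data-drift check: compare a feature's current distribution
# against a reference (e.g. training-time) sample via the Population
# Stability Index (PSI). Bin count and thresholds are rules of thumb.
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two samples of one feature."""
    lo = min(min(reference), min(current))
    hi = max(max(reference), max(current))
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins so the logarithm is always defined.
        return [(c + 1) / (len(sample) + bins) for c in counts]

    ref_frac, cur_frac = fractions(reference), fractions(current)
    return sum((c - r) * math.log(c / r)
               for r, c in zip(ref_frac, cur_frac))

# Synthetic illustration: one stable sample, one with a mean shift.
random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(5000)]
stable    = [random.gauss(0.0, 1.0) for _ in range(5000)]
drifted   = [random.gauss(0.8, 1.0) for _ in range(5000)]

# Common heuristic: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted.
print(f"stable PSI:  {psi(reference, stable):.3f}")
print(f"drifted PSI: {psi(reference, drifted):.3f}")
```

In practice a check like this would run on a schedule for each monitored feature, with alerts feeding the audit-and-update process rather than replacing human review.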
4. Challenge and refine requirements before committing to a solution.
Do not accept initial requirements as final.
Test whether they make sense, align with the problem, and include clear success metrics.
Push back when needed, refine collaboratively, and repeat until the requirements truly support a responsible, effective solution.
5. Have the integrity—and the courage—to say no.
If a system risks harming users, communities, or the business, your duty is to raise the issue and, if necessary, refuse to build, deploy, or procure it.
True leadership begins with protecting and elevating the people you will directly and indirectly serve.
Carry that responsibility with clarity and pride.