AI is everywhere right now: new tools, new opinions, new predictions. And most of it sits somewhere between hype and fear.
For me, it did not start with a tool. It started with a book.
About two years ago, I kept seeing people I follow on X mentioning the same book: Co-Intelligence by Ethan Mollick. Enough times that it made me curious. So I read it.
And that was where things started to shift.
The idea that stayed with me the most was simple: AI is still evolving, and we are meant to work with it rather than just use it: co-create, co-innovate, let it amplify what we already do. That got me excited. And that excitement turned into action.
I started looking for newsletters, people to follow, tools to actually use in my day-to-day. But more importantly, I started looking at my own work differently: What am I doing repeatedly? Where do I get stuck? Where could AI actually help me? Not in theory, but in my actual work.
At the same time, I found myself questioning how we talk about AI. A lot of the conversation comes from a place of scarcity: What will I lose? Will this replace me?
I am choosing to see it differently: How can this make me better? At work, as a leader, in my personal life. And that shift in perspective is what led me to think more deeply about what an AI-first mindset actually means.
What AI-first actually means
For me, an AI-first mindset is about inviting AI to the table. Not as a shortcut, but as part of how I think, explore, and make decisions. It’s less about using AI and more about working with it: as a thinking partner, as something that helps me move faster, but also think a bit better.
This became clearer to me after reading Co-Intelligence. I didn’t treat it like a framework to follow, but more like a set of principles that stayed with me and slowly became my baseline.
These are the ones I keep coming back to:
- Always invite AI to the table
- Be the human in the loop
- Treat AI like a person (but be clear what kind of person it is)
- Assume this is the worst AI you will ever use – AI will only get better from here, so whatever you learn now compounds
I don’t apply them perfectly, but they shape how I work now.
How it started
At the beginning, I kept it simple: I used AI for emails, for structuring documents, for improving slides, mostly to save time or get unstuck. And honestly, I wasn’t always sure what I was doing. 😆
Sometimes I was just rewriting the same prompt in different ways (sometimes even in CAPS LOCK), trying to get something useful. It worked. But it also felt basic.
The real shift: working patterns
The real shift came when I started thinking in terms of skills. Instead of starting from scratch every time, I began turning parts of my work into reusable ways of working:
- a way to prepare for customer conversations: instead of going into a call with a checklist, I use a structured AI-assisted process to research the customer’s context, anticipate their likely concerns, and prepare the questions that will actually move the conversation forward.
- a way to look at recurring problems and spot patterns: when I collect feedback or observations, I use AI to surface recurring themes faster. What used to mean hours of reading and tagging now becomes a structured conversation where I can see patterns, challenge them, and decide what deserves attention (see the sketch after this list).
- a way to explore new ideas: instead of starting from a blank page, I use AI as a guided conversation partner. I bring the rough thought; we develop it together. It pushes back, adds angles I had not considered, and helps me stress-test the idea before I bring it to the team.
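To make the second of these patterns concrete, here is a minimal sketch of what such a reusable skill can look like once it is written down. The `complete` function is a hypothetical stand-in for whatever model access you have, and the prompt structure is only my own starting point, not a prescription: the skill is the structured way of asking, not the code around it.

```python
# A minimal sketch of one reusable "skill": surfacing recurring themes
# in collected feedback. `complete` is a hypothetical placeholder for
# whatever LLM call or chat tool you actually use.

def complete(prompt: str) -> str:
    """Stand-in for a real model call (API, chat window, etc.)."""
    raise NotImplementedError("wire this up to your own model access")

def surface_themes(feedback_notes: list[str]) -> str:
    """Turn raw feedback notes into a theme analysis I can challenge."""
    notes = "\n".join(f"- {note}" for note in feedback_notes)
    prompt = (
        "You are helping me analyse collected feedback.\n"
        "1. Group the notes below into recurring themes.\n"
        "2. For each theme, quote the notes that support it.\n"
        "3. Flag anything that contradicts a theme, so I can challenge it.\n"
        "4. Finish with the three themes that deserve attention, and why.\n\n"
        f"Feedback notes:\n{notes}"
    )
    return complete(prompt)
```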
The first time one of these worked consistently, it felt different. Not like a one-time good result, but something I could rely on. That’s when AI stopped being a tool I used from time to time and became something I could build with.
Patterns of patterns
One of my biggest “aha” moments was this: a good agentic skill is just a well-structured way of thinking.
It usually starts from something very real:
- something I do often
- something that takes time
- or something where I tend to get stuck
Then I turn it into something I can reuse.
Sometimes I even build skills of skills: not just how to do something, but how to choose the right approach. That’s when AI-first becomes very real. It’s no longer theory, it’s just how I work.
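A skill of skills can be as simple as a routing step that runs before any of the others. Another hedged sketch, with the same hypothetical `complete` call and made-up skill names:

```python
# Sketch of a "skill of skills": choosing the right approach before
# any work starts. The skill names here are illustrative only.

def complete(prompt: str) -> str:
    """Stand-in for a real model call, as in the earlier sketch."""
    raise NotImplementedError

def choose_approach(task: str) -> str:
    """Ask which existing skill fits the task, before solving anything."""
    prompt = (
        "Before solving anything, help me choose an approach.\n"
        f"Task: {task}\n"
        "Which of these skills fits best: customer-call preparation, "
        "theme analysis, or idea exploration? Explain why, or say that "
        "none fits and a new skill is needed."
    )
    return complete(prompt)
```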
From my work to my team’s work
And once I started seeing my own work this way, I started to look at my team the same way: not just at what we each do differently, but at what we pay attention to.
I started asking:
- Where is my team spending time on repeatable work?
- What are we each doing in our own way that could actually be shared?
Because the value is not only in doing things faster, it’s in making the way we think more visible and easier to reuse and improve over time.
And maybe the part I enjoy the most is the one I didn’t expect: research. What used to take time (going through sources, structuring information, connecting ideas) feels very different now. It’s faster, yes. But more than that, it’s more open because I can follow ideas further, ask better questions, go deeper, without constantly thinking about time.
For me, this is what AI-first really means: not replacing how I work, but reshaping it, so I can think better, move faster, and build on top of my own patterns, strengths, role and responsibilities.
Don’t let AI make you forget the problem
A faster answer to the wrong question is still the wrong answer.
AI can generate an answer in seconds. But is it the right answer to the right question? This is the most underrated risk of the AI-first moment we are living in. The speed is amazing. The outputs look impressive. And the dopamine hit of moving fast makes it genuinely hard to stop and ask: What problem are we actually solving?
The best PM discipline still applies, now more than ever: obsess over the problem, not the solution. AI amplifies good thinking, but it has the same effect on bad thinking, only faster and with more confidence.
Painkillers, not vitamins
There is a useful mental model from Tony Fadell’s book Build for evaluating whether an idea is truly worth pursuing: painkillers versus vitamins. Vitamin pills are good for you, but they are not essential. You can skip your morning vitamin for a day, a month, a lifetime, and possibly never notice the difference. A painkiller is different. When you need it, you really need it.
“You have to understand what problem you’re solving before you can possibly come up with the solution.” – Tony Fadell
Before committing to any AI-powered feature or initiative, ask honestly: is this a painkiller or a vitamin?
Every great idea that survives this test tends to share three elements:
- It solves for “why”. Long before you figure out what a product will do, you need to understand why people will want it. The “why” drives the “what” and no amount of prompt engineering changes that.
- It solves a problem that a lot of people have in their daily lives. Not a niche edge case, not a demo scenario, but a real friction that real people encounter regularly.
- It follows you around. Even after you research it, try it, and realise how hard it will be to get right, you can’t stop thinking about it. That persistence is a signal worth trusting.
Why teams skip the problem anyway
In theory, we all agree on “problem-first.” In practice, engineering teams are often solution-led, and right now, that solution is an LLM by default.
This is a combination of three forces working together:
- Bias toward AI → the tool is powerful and available, so it becomes the answer before the question is even formed
- Bias toward action → moving fast feels productive, and shipping feels better than thinking
- Dopamine → early demos are impressive. Stakeholders are excited. The feedback loop rewards speed over substance.
Being aware of this pattern is the first step to breaking it. The goal is not to slow everything down, but to make sure the speed is pointed in the right direction.
Problem discovery takes time, and that is the point
Research on innovation teams shows that debating, pivoting, and sitting with ambiguity early actually lead to better outcomes, as long as clarity arrives by the midpoint. The discomfort of not yet knowing what you are solving is not a problem to eliminate. It is part of the process. AI can shortcut exactly that friction in a way that feels like progress but is not. A well-structured answer in seconds is not the same as well-structured thinking.
This also has a very practical dimension that will matter more and more: token costs. Right now, most teams are not worried about the cost of inference. But that window will not stay open forever. When it closes, the question that will separate teams that built something real from those that built something fast is simple: Are we producing the same outcome cheaper, faster, or at higher quality than before? You want to be sure you have been spending that compute solving the right problem.
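To make the cost question concrete, here is a back-of-the-envelope sketch. Every number in it is an illustrative assumption, not a real price; check your provider’s actual rates.

```python
# Back-of-the-envelope inference cost for one recurring workflow.
# The rate below is an assumed blended price, not a real quote.

PRICE_PER_1M_TOKENS = 3.00  # assumed USD per million tokens

def monthly_cost(tokens_per_run: int, runs_per_month: int) -> float:
    """Monthly spend for one workflow at the assumed token price."""
    return tokens_per_run * runs_per_month * PRICE_PER_1M_TOKENS / 1_000_000

# Example: a research workflow using ~50k tokens per run, 200 runs a month.
print(f"~${monthly_cost(50_000, 200):.2f}/month")  # ~$30.00 at the assumed rate
```

The number itself is not the point. The habit is: if you cannot say what outcome those tokens bought, you cannot answer the question above.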
From possible questions to sustainable ones
There is also a trap in how we evaluate solutions. Too many teams are still asking questions about what is possible. The more important questions are about what is sustainable.
| Possible Questions | Sustainable Questions |
|---|---|
| Can we build it? | Can we own it? |
| Can it impress in a demo? | Can we govern it? |
| Can we show early gains? | Can we repeat it cheaply? |
|  | Can it survive scrutiny, scale, and time? |
Before you throw an LLM at a problem, it is worth asking whether you might be reinventing the wheel or over-engineering a solution that a simpler, more maintainable approach could handle just as well.
Spec-driven development as a guardrail
One concrete practice that helps teams stay problem-anchored is spec-driven development.
The idea is straightforward: a formal, written specification detailing the what and why must be finalised, reviewed, and approved before any technical planning or coding begins. The specification is the primary artifact. The code follows it, not the other way around.
A well-written spec forces the team to articulate the problem clearly before they reach for any solution, AI-powered or otherwise.
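As an illustration only, a minimal spec skeleton might look like the outline below; the headings are my own suggestion, not a fixed standard.

```
Spec: <feature name>

Problem (the why)
  Who has this problem, how often, and what does it cost them today?

Desired outcome (the what)
  What changes for the user when this works, and how will we measure it?

Non-goals
  What are we explicitly not solving here?

Open questions
  What do we still need to learn before planning begins?

Status: draft -> reviewed -> approved (no technical planning before approved)
```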
The goal is not to slow down AI adoption. It is to make sure that when you move fast, you are solving something that is genuinely worth solving.
Move fast, but make sure you know what you are running toward.
AI as an accelerator, for you and your team
The most common question I hear about AI is some version of: “Will it replace me?” The question I find more useful is: “What could I do more of, better, with this?”
That shift, from scarcity to abundance, from fear to curiosity, is not a mindset trick. It is the difference between using AI with reservation and actually discovering what it can do for you specifically, in your actual role, with your actual strengths.
Curiosity over pressure
No one can tell you exactly how AI applies to your specific role. You have to try it yourself to find out, and that takes curiosity. And the teams I have seen make the most progress did not start because they were told to. They started because someone asked “what if?” and actually tried something.
AI does not add new capabilities on top of what you do. It amplifies what you already do, and that is a very different thing.
The whole team needs to be in the room
Partial adoption does not produce partial results. It produces uneven ones, and those who are not experimenting are increasingly working from a different map of reality.
I have seen this play out directly. When only part of the team is actively using AI, two things happen:
- First, a language gap opens up: the terminology, the mental models, the assumptions about what is possible start to diverge. Conversations that should be collaborative become harder to navigate because people are not starting from the same place.
- Second, and more visibly, the quality of ideas starts to differ. The people who are experimenting consistently bring better, more relevant solutions to the table. Not because they are smarter, but because they have a better thinking partner.
Ethan Mollick’s research confirms what I observed. His field experiment found that individuals working with AI matched the performance of entire teams working without it, and the people who benefited most were the less experienced ones, who were suddenly able to contribute at levels far above what they could achieve alone. AI democratised expertise within the team.
The implication is direct: this is not something you roll out to your most technical people first. It works best when the whole team is in the room.
But only if it is safe to experiment
Getting everyone experimenting is necessary but not sufficient. What makes it actually work is whether people feel safe enough to try things that might not work and share the unhappy paths without judgment.
Without psychological safety, something specific happens: people become afraid to be honest about their experience with AI. They worry that admitting confusion, or sharing an experiment that failed, will be used to judge their capabilities. So they stay quiet. And when people stay quiet, you lose the ability to truly understand where AI is helping, where it is not, and where the team actually needs support. You are making decisions based on incomplete information, not because people do not have opinions, but because they do not feel safe enough to share them.
When that safety is present, failed experiments become collective intelligence. People say “I tried this and it did not work, here is what I learned” and that observation becomes useful for everyone.
As a leader, your job is not to mandate adoption. It is to create the conditions where curiosity is safe. A few practices that help:
- Create a shared space to log experiments: what was tried, what worked, what did not
- Celebrate honest failures as openly as the wins
- Ask in 1:1s: “What have you tried lately? What did you notice?”
- Model it yourself: share your own experiments, including the ones that felt basic
The question is not whether AI will change how your team works. It is whether that change will happen to your team, or with them.
Introducing AI is change management, and you are accountable
There is a pattern I have seen repeat itself across teams navigating AI adoption: someone senior announces a new direction, tools get introduced, and then… not much happens. Or worse, things happen unevenly, quietly, and without the buy-in that makes change actually stick.
Introducing AI is not a technical decision. It is a human one. And it follows all the same rules as every other significant change you have ever led.
It takes time, and that is not failure
The pressure from above is often significant. Leaders want to see results. Boards want AI in the strategy deck. The urgency is real. But urgency applied without care creates partial, uneven adoption.
What worked for my team was simple but deliberate: demos and honest feedback sessions on those demos. Not polished presentations, real working sessions where someone showed what they had tried, what worked, what did not, what surprised them, and what they actually learned.
That format did two things. It made experimentation visible and normal. And it created a shared language, because when you hear ten different people describe what they tried and what they noticed, the team’s collective understanding moves much faster than any individual could alone.
The teams that move most sustainably are the ones that were invited into the conversation, not the ones that had the decision handed to them. In practice, this means:
- Involving people early, before the tools or practices are chosen, not after
- Asking “what problems do you have that this could help with?” rather than “here is the tool, now use it” (again, product mindset)
- Making space for honest reactions, including the sceptical ones
- Being transparent about what you know, what you do not know, and what you are figuring out together
Your job as a leader is to hold that space, even when the pressure from above wants to move faster than the team is ready to go.
The new skills in focus
Bringing AI into your team is also a skills conversation, and one that is easy to get wrong by focusing only on the technical side.
From what I have observed, the skills that matter most are not the most obvious ones. Yes, AI literacy is the foundation: understanding what the tools can and cannot do, and being honest about where they fail in your specific context. But the skill I see people struggle with most is resilience. The pace of change right now is unlike anything we have navigated before. Tools shift, models improve, best practices evolve, sometimes within weeks. The teams that thrive are the ones comfortable with continuous recalibration, not the ones who mastered a specific tool.
And the skill I wish more people leaned into is analytical thinking, not as an abstract concept, but as a leadership practice. Be the expert in your domain. Understand the why behind what you are building. Talk to your customers. AI can generate answers fast, but the judgment about whether those answers are right still has to come from you.
What Ethan Mollick calls the jagged frontier, the idea that AI capabilities are uneven and often surprising, makes this even more important. You cannot predict where the edges are without experimenting. Curiosity and critical thinking matter more than any single tool.
You are accountable for every output
Once your team is experimenting, learning, and building new skills, the next question is ownership. And this is where many leaders underestimate what being the human in the loop actually requires day to day.
AI can generate the code, the document, the wiki page, the performance review draft, the meeting summary. It does all of this fast and often impressively well. But your name is on it. Full stop.
This is not a principle on a slide. It is a daily habit. The way I approach it has settled into three concrete practices:
- Ask AI to verify itself. Before accepting an output, I ask directly: “What could be wrong here? What assumptions are you making? What should I double check? Explain how you arrived at this.” It surfaces gaps I would otherwise miss.
- Generate in small increments. Rather than asking for the complete output in one go, I work iteratively, reviewing each piece before moving to the next. It keeps me thinking instead of just passively accepting a long, polished answer.
- Get input from other people. Especially for anything significant, I share the output with someone who can challenge it. A second pair of human eyes, with real context and real accountability, is still irreplaceable.
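To show what the first two habits can look like when wired together, here is a minimal sketch; `complete` is the same hypothetical model call as in the earlier sketches, and the wording of the verification prompt is only one option. The third habit stays deliberately human.

```python
# Sketch of "ask AI to verify itself" combined with "generate in
# small increments". `complete` is a hypothetical model call.

def complete(prompt: str) -> str:
    """Stand-in for a real model call."""
    raise NotImplementedError

VERIFY = (
    "Review your previous answer. What could be wrong here? What "
    "assumptions are you making? What should I double check? "
    "Explain how you arrived at this."
)

def draft_in_increments(sections: list[str]) -> list[str]:
    """Generate one section at a time, pausing to self-verify each."""
    accepted = []
    for section in sections:
        draft = complete(f"Write only this section, nothing more: {section}")
        critique = complete(f"{draft}\n\n{VERIFY}")
        # The human-in-the-loop step: read both before accepting anything.
        print(f"--- {section} ---\n{draft}\n\n[self-critique]\n{critique}\n")
        accepted.append(draft)
    return accepted
```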
We talked earlier about the cost of running AI and why it matters to spend that compute solving the right problems. But there is another cost worth naming: the cost of getting it wrong. Code is getting cheaper. Documents are getting cheaper. First drafts of almost everything are getting cheaper. What is not getting cheaper is trust, the trust your team has in your judgment, the trust your stakeholders and customers have in your outputs, the trust that builds over years and can erode quickly.
AI generates the answer. You are responsible for whether it was the right one.
Conclusion
AI-first is not a destination. It is something you practice every day: in how you frame a problem, how you bring your team along, and how you take ownership of what comes out.
The tools will keep changing. The models will keep improving. What stays constant is the mindset: stay curious, keep people at the centre, and never forget the problem you are actually solving.
And the product mindset mattered before AI; it matters more now than ever. The best AI-first teams are not the ones with the most tools. They are the ones who know why they are building, who they are building for, and what success actually looks like for a real person with a real problem.
This AI-first journey started with curiosity and is kept honest by accountability, but none of it means anything without the people.
