Can someone explain AI to my grandma?
Mar 21, 2026

For everyone who ever felt stupid for asking. This one’s for you.
I’ve been to a few hackathons over the past few months (not something I thought I’d ever say).
Sitting between people who started coding at twelve and growth hackers who spend their Sunday afternoons reverse-engineering TikTok algorithms, I spent a good portion of my first one nodding along to things I didn’t fully understand. The vocabulary alone was its own barrier, and tech is genuinely good at acronyms and jargon: RAG, MCP, AGI, GPU, embeddings, inference. A language that sounds like it was designed to keep non-technical people out. And I noticed something in myself that I recognise from a lot of rooms lately: that small internal calculation of whether asking what something means will make you look like you don’t belong.
We’ve all been in that room. Maybe it’s a hackathon. Maybe it’s a workshop, a dinner conversation, a standup where everyone else seems to already know. The specific room changes. The feeling doesn’t.
I think about this a lot right now because, in a way, we are all sitting in that first hackathon.
AI is moving fast. The vocabulary is dense and multiplying. I’ve spent the past year deep in AI conversations. I ask better questions now than I did at the start. But the more I learn, the more I realise how much there is to know. And the pressure to already know is everywhere. Which is strange, because this is genuinely new for almost everyone. And yet somehow the norm has become: perform confidence, figure it out later. Ask a basic question about AI in the wrong room and you’ll feel it – that raised eyebrow, that slightly too-long pause. As if somewhere along the way there was a class on all of this, and you missed it.
There wasn’t. And it’s worth asking when exactly we decided that not knowing something was cause for embarrassment. A child learning to read gets celebrated for every new letter – someone clapped for you when you first understood how an A sounds. Somewhere between then and now, that impulse got buried. Curiosity became something to perform rather than feel. Not knowing became a liability rather than the starting point of every interesting conversation. That’s a strange thing to have decided.
And I think we’re paying for it now. AI is not just a strategy decided in a board meeting or an agent system that runs itself. It’s about people actually using it. Being excited about it. Getting creative with it. And if someone can’t even ask a basic question without feeling stupid – well, I understand why they’d rather opt out entirely.
There’s a particular kind of person I’ve come to deeply admire in these rooms. The one who stops and says: sorry, I didn’t quite get that. Not because they’re unprepared. Because they’re honest. And in doing so, they bring the whole room back into the conversation.
Why is this more important than ever? Because nobody was here before. Not the person who’s been in tech for twenty years, not the founder on stage, not the one who hasn’t looked up from their laptop all morning. Things are changing fast, in ways nobody fully predicted. The difference between those who seem ahead and those who feel behind is often just a higher tolerance for sitting with not-knowing – and then doing something with it. Which, it turns out, is a learnable skill.
On explaining things well – and asking anyway
The professor who opened my AI specialisation this semester announced one goal on the first day: for us to understand the history, the algorithms, the fundamental logic behind AI well enough to explain it to our grandparents.
Bold goal, I thought. But he pulled it off. I genuinely feel like I could now sit down with Gisela and explain how a language model works, not in a hand-wavy way, but in a way that would actually make sense to her. At least for a bit.
Because complexity is not the same as difficulty. Things can be genuinely complex and still be explainable. What it takes is someone willing to find the right analogy, the right entry point, the right level for the person in front of them. That’s not dumbing down. It’s the highest form of understanding. As the line often attributed to Richard Feynman goes: if you can’t explain something simply, you don’t understand it well enough yet. My professor knew that. And it made all the difference.
We need more of that. More people willing to explain clearly. More people willing to ask. Fewer people making others feel small for trying.
What gives me genuine hope is that some people have already figured this out. Even on social media – perhaps the last place you’d expect – there are voices doing exactly what my professor did: taking something genuinely complex and finding the precise angle that makes it land. Not simplifying it into nothing, but earning the simplicity. Making understanding feel available to anyone willing to meet them halfway. That willingness, in a moment so defined by gatekeeping and performance, feels quietly radical.
And it can happen anywhere. Over dinner, I asked a colleague who’s a data scientist what she actually does all day (yes, that felt like a stupid question). She laughed, and twenty minutes later I understood something I’d been nodding at for months. Coffee chats with AI consultants and software engineers. Founders at hackathons walking me through their entire tech stack because I asked. Every single time, the same thing happened: someone lit up. And so did I. And for a moment, you’ve taught each other something – which is, when you think about it, one of the best things humans get to do.
You could ask your LLM of choice, of course. But it won’t get that excited about it. Trust me.
It’s okay to be scared. It’s also okay to be excited.
Some of what’s shifting right now is worth grieving. Work that felt meaningful, skills built over years, roles that gave people their identity. That loss is real, and pretending otherwise helps nobody. But I think a lot of the fear circulating right now is also slightly misdirected. Most people aren’t scared of AI itself – they’re scared of being rendered irrelevant by people who seem to understand it better. Of a gap forming, quietly, between those who get it and those who don’t. And that fear is being fed by a culture that treats not-knowing as a personal failure rather than a shared starting point. The conversation around AI is extraordinarily good at generating anxiety and considerably less good at offering anything to do with it.
Here’s what I’ve found actually helps: showing up. Asking the question. Being the person who admits they don’t know yet – and then actually finds out. Not through a course or a certification, but through the kind of conversation that starts with a question over dinner and ends with you understanding something you didn’t before. That path is open to everyone. It always was.
Because the questions you ask, the context you bring, the ability to sense when something feels wrong even if it looks right on paper – none of that is automated away. There’s a reason everyone keeps talking about keeping humans in the loop. Turns out the loop genuinely needs us. Who knew.
And what if this is also the most exciting opportunity our careers have seen? What if the version of your job that comes out the other side is actually closer to what you imagined it would be when you first chose it, before the admin, the legacy processes, the doing-it-this-way-because-we-always-have? The technology is moving faster than our collective imagination of what to do with it. That’s not a reason to opt out. It’s the best possible reason to show up and help figure it out.
Nobody has a map for this. We’re all navigating a moment that didn’t come with instructions – and that’s exactly why it matters that we do it together. Ask the questions you have. Share what you know. We used to celebrate every letter a child learned. That impulse was right. Someone wanting to learn something is still worth celebrating, at any age, in any room.
This might be the most exciting moment of our careers. Don’t let anyone make you feel stupid for wanting to understand it.