The AI Movies Promised Us

I’ve Seen That Future Before
How pop culture inspired startups, public policy, and maybe even your view of AI

Working in artificial intelligence requires the ability to live in three timeframes at once: the future, the present, and the past. When we understand each of these well, the opportunities, dead ends, and challenges that are actually worth pursuing become much clearer. And it might surprise you to realize that you’ve been researching this topic for years. How, you might ask? Through the pop culture you’ve consumed throughout your life.

The idea of artificial intelligence has lived in the human imagination for almost a century. In film, television, video games, music, and comics, AI usually shows up as calculating super-brains, self-aware robots, omniscient networks, and digital lovers. These fictional creations let us explore our fears and hopes around technology, autonomy, and what it means to be human.

But how close are we to the extravagant visions popular culture promised us? And what impact have they had on the development of AI as a field and on our societies? The answers might surprise you. As we’ll see, there are moments when the movies hit the bullseye, others when they missed completely, and cases where the “unknown unknowns,” everything pop culture doesn’t tell us, are perhaps the most intriguing part of all.

Now, before we get into it, you might be wondering: why would a professor, advisor, strategist, and entrepreneur in AI write about the most transformative technology in history through the lens of pop culture? Weren’t there any recent advances in AGI, language models, or the broader AI ecosystem worth bringing to these pages? There were many, and we’ll touch on some of them later. I chose this angle for a simple reason: nothing has shaped the outlook of users, researchers, investors, founders, and society at large as much as pop culture. Nothing.

Pop culture is where all of us, at some point in our lives, set our first expectations about the future and about what we think the technologies powering it ought to be able to do. Even today, the hopes, worries, fears, and expectations of clients, students, investors, officials, and even the people I end up talking with over dinner are heavily shaped by what they’ve seen represented in the media. I remember meeting with officials from a G20 government agency almost five years ago, and their questions revolved around Skynet, Star Wars, Ex Machina, WarGames, and 2001: A Space Odyssey as they tried to define what was possible, what wasn’t, what they should start doing to prepare, and what the future might hold for AI development.

Hollywood trained us without our noticing

Why did they quote movies? And why do we do it as well? Because we’re human, and when we face something new, we look for familiar points of reference. Sometimes we read them, sometimes we see them on TV or in a film. It’s how we process things quickly and build a basic idea of what we should learn, test, believe, or question. Very few people have the time (or the energy) to deeply research every new topic; not only because it would be impractical, but because it’s almost impossible when we also have to work, live, love, eat, and keep life moving.

In many ways, pop culture is one of the main tools we use to decide what we want to study, what we wear (or refuse to wear), what’s in style, what our dreams look like, and how we imagine our future. Think about what you’re doing right now: how much of it, if you took the time to analyze it, has been shaped by pop culture? Of course, you’re unique, immune to influence, and completely self-directed—but if we’re honest, you’ve been influenced by what your eyes and ears have been feeding you your entire life.

That influence also reaches how we understand artificial intelligence. Pop culture has shaped our view of AI like very few other forces. Even the most widely celebrated tech visionaries acknowledge that science fiction in books, films, and series influenced their decision to enter the tech world and some of the ideas they later brought into reality. In many ways, we use those experiences to build our expectations about the future.

Do you remember the first time you used a publicly available AI system on your computer or phone? What were you expecting? What did you hope it could do? What did you think it should be able to do? And what did it actually do? In many cases—if not all—your expectations were shaped by what you had seen in movies, series, magazines, and on social media. In other words, by pop culture.

Cinema, our dataset

Obviously, we don’t all consume the same books, magazines (Fast Company México being the honorable exception), series, or movies. There’s so much content that it’s almost impossible to overlap on everything. So, to keep a common thread between what pop culture has promised about AI and what has actually come to pass, let’s start from common ground. We’re going to borrow a useful framework from AI: when we build a model, we select the dataset that best represents most users. In this case, we’ll choose cinema. It’s more likely that “we,” as a distinguished community of readers, have overlapped on movies than on series. There’s nothing like science fiction to calibrate the eye before looking toward the future.

From here, I’d like to propose a journey you can follow from wherever you’re reading this. As in any good science-fiction film, we’ll travel to the past to see which big themes cinema got right, which ones it got wrong, and which landed somewhere in between. Then we’ll jump to the present, to the exact spot where you’re sitting, to see what today’s films might be telling us about tomorrow. Finally, we’ll make a brief stop in the future, just to look around a bit.

We’ll also review the real-world impact that films about AI have had, from the nuclear arms race and the Cold War to the founding of some of the most relevant companies of our time, along with subtler influences on today’s society. Movies about AI have shaped the world as we know it, sometimes faster than any public policy, regulation, or election could have in the same period. That’s why, when we analyze how pop culture has played with AI, it’s worth following the data. Plus, you’ll walk away with a few excellent anecdotes for the holiday dinners and parties coming up.

Pop culture as a geopolitical weapon

Cinema may have helped bring down the Soviet Union. I say “may” because, as in almost everything in history—and especially in complex topics—there are conflicting opinions. So consider yourself warned: this is quite a spicy debate, ghost-pepper level, particularly if you’re in a professional or academic setting. How spicy is a ghost pepper? Roughly a million Scoville units, several times hotter than a habanero. Even so, I’ll tell you what we do know for sure, and you’ll see why there’s some weight behind the broader theory.

In June of 1983, President Ronald Reagan watched WarGames, a film about a teenage hacker (played by Matthew Broderick) who accidentally accesses military computers through war dialing: a technique that consisted of dialing long blocks of phone numbers, one after another, to find computers willing to “talk” to other computers. That part was real; people used it to hunt for responsive systems well into the nineties. I can neither confirm nor deny whether I tried it myself as a kid after watching the film, but let’s just say it was well within reach.
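If you’re curious what that looked like in practice, here is a minimal, purely illustrative sketch of the idea in Python; the `try_modem` helper, the hit rate, and the phone-number range are hypothetical stand-ins, not a working dialer:

```python
# Conceptual sketch of 1980s-style war dialing (illustrative only).
# try_modem() is a hypothetical placeholder for "dial this number and
# listen for a modem handshake"; no real telephony happens here.
import random

def try_modem(phone_number: str) -> bool:
    """Pretend to dial a number; in real life this took a modem and a lot of patience."""
    return random.random() < 0.001  # the vast majority of numbers never answered

def war_dial(area_code: str, prefix: str) -> list[str]:
    hits = []
    for last_four in range(10_000):            # sweep an entire exchange, 0000 through 9999
        number = f"{area_code}-{prefix}-{last_four:04d}"
        if try_modem(number):                  # a carrier tone meant "a computer lives here"
            hits.append(number)
    return hits

print(war_dial("555", "867"))
```

The point is the brute simplicity of it: sweep every number in an exchange, note which ones answer with a carrier tone, and you end up with a map of the computers hiding on the phone network.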

Before we get to what Reagan did after seeing it, it’s worth understanding what he was watching.

In a scene unlike anything I ever stumbled onto as a kid, Broderick’s character finds a military computer connected to the phone network, waiting for someone—anyone—to “talk” to it. The menu included games like chess, gin rummy, and global thermonuclear war. Guess which one he picked.

The reason this AI included both everyday games and more advanced ones like “Global Thermonuclear War” is that the system was designed much like AlphaGo or AlphaZero, two systems from DeepMind (Google) that play millions of games against themselves to discover winning strategies in Go or chess. Only here, the “game” was nuclear war in a hypothetical Third World War scenario. Up to this point, the movie does a good job of describing how AI works.
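For a feel of that training loop, here is a heavily simplified, hypothetical sketch of learning from self-play; real systems like AlphaZero combine deep neural networks with tree search, while this toy version just tallies which opening moves tend to win:

```python
# Toy sketch of self-play learning (illustrative only; not AlphaZero).
# The agent plays a trivial game against itself many times and keeps
# a running score of how often each opening move leads to a win.
import random
from collections import defaultdict

MOVES = ["a", "b", "c"]                        # hypothetical opening moves
WIN_PROB = {"a": 0.3, "b": 0.5, "c": 0.7}      # hidden "true" quality of each move

def play_one_game(move: str) -> bool:
    """Simulate a full game after committing to an opening move."""
    return random.random() < WIN_PROB[move]

wins = defaultdict(int)
plays = defaultdict(int)

for _ in range(100_000):                       # real systems play millions of games
    move = random.choice(MOVES)                # explore every option
    plays[move] += 1
    if play_one_game(move):
        wins[move] += 1

best = max(MOVES, key=lambda m: wins[m] / max(plays[m], 1))
print(f"Learned best opening: {best}")         # the strategy emerges from sheer volume of play
```

Scale the loop up by orders of magnitude and swap the tally for a neural network, and you have the gist of the conceit the film borrows: the machine learns nuclear war the same way it learns tic-tac-toe, by playing it over and over.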

The problem emerges when the system starts simulating global thermonuclear war and running attack scenarios between the USSR and the United States. Over time, the AI attempts to seize control of real-world nuclear missiles by brute-forcing the launch codes in order to fire them at the USSR. To this day, nuclear weapons are not under the control of AI systems, and humans still intervene at several stages of the decision chain, unlike in the film. However, it’s worth noting that governments around the world are exploring ways to embed advanced AI and grant more autonomy to different weapons systems.

Sounds wild, doesn’t it? Reagan thought so too. After watching the movie, he convened members of Congress and the National Security Council and asked: “Could something like this really happen? Could someone get into our most sensitive computers?” One of the generals in the room offered to investigate and report back. The picture turned out to be worse than they feared.

That concern led to the first wave of U.S. cybersecurity frameworks, from National Security Decision Directive 145 in 1984 to the Computer Fraud and Abuse Act of 1986, which were revolutionary for their time. More importantly, it set a precedent: using science fiction and pop culture as an influence on national and even international public policy. And now comes the spicy anecdote.

During the Reagan administration, the White House drew inspiration from science fiction, a genre deeply embedded in popular culture, in pushing forward the Strategic Defense Initiative (SDI), quickly nicknamed Star Wars and officially announced in March 1983.

At the heart of SDI, as you’ve probably guessed, was artificial intelligence, tasked with managing other weapons such as particle-beam systems, space-based lasers capable of intercepting Soviet missiles, and other advanced technologies that, up to that point, existed only in the imagination. The United States spent tens of billions of dollars developing these systems, many of which never made it off the storyboard and into the real world. What they did achieve was to ignite a budgetary arms race with unexpected consequences: a spending competition between the United States and the USSR to develop ever more sophisticated military capabilities.

Although there is rarely a single cause behind any major event—much as we’d like to simplify things that way—there are factors that clearly tip the scales. Governments, like businesses, can only overspend for so long before things begin to fall apart. It’s often argued that the science-fiction-inspired weapons programs of the 1980s forced the Soviet Union into military spending it could not sustain, pushing it toward collapse. Based on my familiarity with monetary policy, central banking, and international finance, I see real merit in the idea that this played a meaningful role in the Soviet collapse. But, as I said, it’s a spicy debate… and I’ll let you decide.

How AI hooked me

I should also say that WarGames sparked my interest in technology and computers as a child. My parents even moved so I could attend one of the first schools in the city with computer classes. My father, a lover of all things science, had trained in computing during his time in the military. My mother, passionate about education and always on the lookout for new classes for me from a very young age, was thrilled by the idea that I would grow up with computers both at home and at school.

I genuinely believe I grew up in one of the best periods in history to get involved in technology: between the 1980s and the late 1990s. It was a stretch in which the AI field had just emerged from one of its “winters” and quickly entered the next, which lasted from 1987 to 2000. Even though AI research was going through a rough patch, movies kept projecting futuristic visions of AI—from RoboCop and The Terminator all the way to The Matrix in 1999.

Why did these AI winters happen? Because people’s expectations of what AI “should” be able to do didn’t match what it could actually do at the time. The gap between the two made genuinely important advances look insignificant. Part of the problem was overpromising, which happens in every industry, and part of it came from what people were seeing in films and the media. Reality simply didn’t live up to what pop culture had trained them to expect. That was true back then and it’s still true today.

In 2025, we’ve lived through the irrational excitement of believing AI will solve everything easily and on the first try—but that was never true. In reality, about 95% of generative AI projects never make it into production. By mid-year, there was a wave of stories asking whether we were in a bubble, whether AI would live up to expectations, or whether we were heading into another winter. Is that because the technology is bad or useless? Not at all. Once again, the problem is that what we see on screen doesn’t match the hard work required to make complex systems function.

From vibe-coding projects to startups launched around the latest AI model, to all the tasks AI was supposedly going to take off our plate, it’s easy to forget how much work is still required and how much we still need to master both the technology and the domain where we want to apply it. In the end, users get frustrated because the system doesn’t do what they expected… or possibly because they still have to do their share of the work.

Promises still unkept

Take the 2013 film Her, with Joaquin Phoenix (Theodore) and Scarlett Johansson (Samantha). I show it to my graduate students every year to analyze the ethical and technological implications of a system with those capabilities: consciousness, transfer learning, autonomy, and more. And of course, the ability to hold 8,316 conversations at once while being in 641 romantic relationships simultaneously with humans, which raises more than a few dilemmas. The interface looks strikingly similar to what we have today in the best-known AI systems: chat-based interaction and a flow of ideas in both directions, though today’s systems still lack the film’s fluidity and humor. That difference, more than a decade on, explains why so many people feel disappointed by how today’s systems behave and by the limits of their capabilities.

The reason for that gap is that, unlike the narrow AI (ANI) and semi-general systems we have today, the AI in Her is full AGI, with a kind of consciousness and sentience. If you follow my columns at FastCompany.mx, you’ll know I’ve written quite a bit about the capabilities, timelines, and likelihood of AGI becoming real. As of 2025, most estimates place its arrival somewhere between 2030 and 2035. There are also those who think AGI will never happen. But, just like the spicy debate over the fall of the USSR, this one is sharply divided, with highly respected experts on both sides.

Before you start picturing an emotionally available AGI running off with your feelings—and perhaps your shopping preferences—it’s worth clarifying that AGI does not imply consciousness, sentience, or motivation. That’s a very human instinct: we tend to anthropomorphize everything from plush toys to AI systems. We love to imagine that anything intelligent or close to us will share our motivations, desires, or ambitions to control the world. And I honestly think that’s highly unlikely. Machines simply operate differently. Could it happen? Sure—but I’d place it so far along the timeline that it goes into the “someday, with no date” folder.

There are AI systems today that, in controlled tests, have lied, cheated, and actively tried to avoid being shut down or modified by their designers. That’s real, but it should not be taken as a sign of consciousness. It’s a product of the data they were trained on and the algorithms that govern them. When we hear stories like this, it’s natural to recall all the films where an AI, seeking its own survival or disagreeing with humanity’s choices, seizes control by force—like Skynet in the Terminator franchise, HAL 9000 in 2001: A Space Odyssey, or Ava in Ex Machina.

What if it really does become smarter than us?

In films like Terminator and The Matrix, we’re no longer dealing with AGI, but with its next evolution: ASI, or artificial superintelligence. Even companies like Meta have created special divisions focused on reaching this level of AI. Still, we are far—very far—from any real form of ASI. And if it ever arrives, the odds that we end up as human batteries harvested by robots, Matrix-style, are practically zero. That alone is worth celebrating this holiday season.

Even so, there are estimates that by 2030 we’ll begin to see robots integrated into everyday society. But when we think of “robots,” it’s crucial to separate the physical shell we see from the intelligence that drives it—whether ANI, AGI, or ASI. Robots will come in all shapes and sizes, some familiar, others completely new, depending on the use case. Sometimes they’ll be humanoid; other times they’ll just be a pair of robotic arms. Literally.

There’s one step beyond ASI: the Singularity, which thankfully doesn’t come with an acronym. It’s a mix of superintelligence, intelligence explosion, and accelerated self-improvement that becomes irreversible and slips beyond human prediction. I imagine that’s how some of my students feel on the first day of class. In the movie Transcendence, Johnny Depp’s character, Will Caster, uploads his mind to an AI system after being shot. Once digitized, the system not only improves exponentially; it also starts gaining powers that go beyond the laws of physics (quantum physics included) and moves fully into the realm of fantasy, manipulating matter in ways that push AI concepts and applications into pure entertainment territory.

When the script becomes a startup

Remember when I mentioned the influence of pop culture on founders, investors, and society? In 2015, a year after Transcendence premiered, a startup called Nectome emerged in San Francisco promising to upload your mind to a system, with one small catch: the procedure is 100% fatal. Yes, you read that correctly. To be fair, they also offered the option to preserve your brain after death, in case they eventually figured out how to scan it and upload it. While researchers at MIT and Stanford were already working in this space before the film came out, there’s no doubt the movie grabbed the attention of accelerators like Y Combinator, which invested in the company. Even Sam Altman, CEO of OpenAI, put down a deposit to join the waiting list. As of 2025, Nectome has become a research-only organization.

And it doesn’t stop there. Two of the best-known visionaries in the AI world, who need no introduction, have also been deeply influenced by cinema: Sam Altman and Elon Musk. Her, which we discussed earlier, was fundamental to the culture and development of OpenAI. There are elements you no doubt recognize when you interact with each new version of the system, especially when you speak to it. In Elon Musk’s case, Terminator and its vision of the future were decisive for how he conceives and works toward what’s coming.

What’s fascinating is how these and many other entrepreneurs often read dystopian warnings as construction blueprints. AI movies have stopped being just entertainment; they’ve become conceptual frameworks guiding the technologists and technologies that are reshaping our era. Whether they treat them as warnings or as manuals, today’s founders keep drawing inspiration from cinema’s visions of AI, building the future through the lens of pop culture.

So, as we wrap up this journey and look ahead, you’ve also just had a mini-class on how to analyze pop culture and its intersections with AI’s past, present, and future capabilities. You’ve also seen why choosing the right dataset—in this case, cinema—matters so we can enjoy this collective conversation from shared reference points. I’ve left out some series I love, like Black Mirror and Westworld, which would have been excellent examples here, not to mention animated classics like The Jetsons.

In 2025, and even before, there have been people who fell in love with their AI assistants, followed terrible advice from these systems, and even lost their lives in situations linked to these technologies. We can clearly see how pop culture has influenced both the development of AI systems and our expectations of what we want them to do today—and what we dream they might do in the future. Even in the most extreme cases, like Transcendence and the Singularity, what we’re seeing is a deep expression of a shared longing: to solve the one journey we all face, that of mortality. That desire has been with us since the beginning of human storytelling. Just look at The Epic of Gilgamesh, written around 2100 BCE, whose hero’s quest was nothing less than eternal life.

For me, looking at everything movies and the wider pop-culture universe get right and wrong, one essential idea stands out—one that might serve you well the next time you watch a futuristic AI film. Pop culture is not trying to predict the future; what it really does is show how we, as a society, process the changes, twists, and transformations that might emerge as artificial intelligence advances. These are creative minds stretching their imagination to the limit to visualize what could someday be possible, even if it sounds unreal today.

The next wave of pop culture will inspire the next generation of tech founders and AI researchers, who will build on whatever marked them—and, why not, perhaps it will shape you as well. That’s one of our species’ most valuable traits: the ability to imagine, share, inspire, adapt, and build on other people’s ideas. Just as in 2001: A Space Odyssey, where the ability to see a bone on the ground and conceive it as a tool (and also as a weapon) radically changed the future of those who wielded it. Or like President Reagan, whose interest in a film may have helped shape the outcome of the Cold War and the fall of the Soviet Union, “possibly” nudged along by pop culture.

What will they say about us in 100 years?

The movies, series, books, and music created in each generation act as time capsules that reveal what we hoped would happen even more than the fears they also depict. I’m excited to imagine how this moment we’re living through will look when someone, 25, 50, or 100 years from now, examines it closely—when a writer or researcher in the future reviews what we said, what we thought would happen, what we believed was impossible, and what we simply failed to see, even when it was right in front of us.

We’re living at a time when we’re trying to discover how far intelligence can go. And I think, as with numbers, there’s no fixed limit at the end of the road. We can always add one more, one more decimal, one more idea. The real challenge is staying open to possibilities that no screenwriter and no founder has yet imagined with AI. For me, that’s where the beauty of the past, the present, and that wonderfully uncertain future really lives.


This piece was first featured in Fast Company México, Fall 2025, print edition (p. 62)

Christopher Sanchez

Professor Christopher Sanchez is an internationally recognized technologist, entrepreneur, investor, and advisor. He serves as a Senior Advisor to G20 governments, top academic institutions, institutional investors, startups, and Fortune 500 companies. He is a columnist for Fast Company Mexico, writing on AI, emerging tech, trade, and geopolitics.

He has been featured in WIRED, Forbes, the Wall Street Journal, Business Insider, MIT Sloan, and numerous other publications. In 2024, he was recognized by Forbes as one of the 35 most important people in AI in their annual AI 35 list.

https://www.christophersanchez.ai