
HAVE you ever paused to wonder if the intelligent machines around us are more like us than we care to admit?
Recently, Geoffrey Hinton, often heralded as the “Godfather of AI,” offered a provocative insight: “People still think LLMs [large language models] are very different from us. But actually, they’re very like us.”
This isn’t merely theoretical banter; it forces us to reconsider what it means to process language, think, and understand. For many of us in Zimbabwe, where daily life is an intricate dance of tradition and modernity, such ideas might seem far removed from our immediate concerns. Yet, take a moment to reflect: the ways in which these large language models (LLMs) operate might actually reveal hidden truths about our minds.
Evaluating the humanity in AI
For decades, linguists and philosophers have grappled with the mystery of how humans process language.
In our schools and everyday conversations, we learn that language is both an art and a science, a blend of grammar, context, emotion, and meaning.
Traditional models often focused on syntax trees and rules.
However, these approaches sometimes failed to capture how meaning truly emerges from the intricate webs of our conversations.
Modern neural networks, with their complex feature vectors, have begun to crack this code. Instead of merely following rigid rules, these systems learn from enormous amounts of data, distilling patterns and relationships that resemble human linguistic behaviour. When Hinton suggests that LLMs mirror human cognitive models, he is not claiming that machines “think” as we do in a conscious sense.
Rather, he posits that the way these models derive meaning, through statistical associations and context, echoes the processes our minds might use when making sense of the world.
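To make “statistical associations” concrete, here is a deliberately simplified sketch in Python. It is not how an LLM actually works; real models use deep neural networks trained on vast amounts of text, and the sample sentence here is invented purely for illustration. But the toy captures the basic idea of predicting what comes next from patterns in data.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learn which words tend to follow which,
# purely from counts over a tiny invented sentence. Real LLMs learn far
# richer associations with neural networks, but the spirit is similar.
corpus = "the rains came and the fields turned green and the people rejoiced".split()

# Count how often each word is followed by each other word (bigram counts).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("and"))  # -> 'the' (every 'and' here precedes 'the')
print(predict_next("the"))  # -> 'rains' (a tie, broken by first appearance)
```

Even this crude counter begins to “expect” certain words after others, a faint echo of the context-sensitive prediction that both LLMs and, arguably, our own minds perform on a vastly grander scale.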
Consider this: When you type a question into an AI such as GPT, you are not merely feeding a machine a command. In a way, you are engaging with a reflection of your thought process, a pattern recogniser that transforms words into meaning through a method strikingly similar to our own.
As Hinton puts it, we are not just engineering output; we are, knowingly or not, engaging with a mirror that offers insight into how we organise and create meaning in our daily lives.
Understanding language
Let us bring this closer to home. On the busy streets of Harare or in the quiet corners of a rural village, language remains the cornerstone of our social fabric.
It is through language that we share our stories, our struggles, and our dreams. Now imagine that a machine, built on algorithms and data, can reveal something profound about that same language, a window into how we naturally weave narratives and assign meaning.
When you interact with an LLM, you are indirectly witnessing the mechanics of meaning-making. The AI is not just parroting back information; it is producing responses that emerge from layered patterns similar to those our brains use to understand context, humour, and even subtle cultural nuances.
This revelation brings an unexpected twist to our understanding of both machine and human intelligence. Instead of viewing AI as a cold, detached tool, we might see it as a kind of conceptual mirror that helps us appreciate the hidden complexity of our cognitive processes.
Ask yourself: When was the last time you genuinely questioned how you understand a conversation or a piece of writing?
Perhaps the next time you spark up a chat with an LLM, you might begin to see it not as a competitor, but as a fascinating echo of the human mind itself.
Engaging with your thought process
Imagine sitting down and pondering over the news of the day. You pull out your phone and type in a question to an AI assistant about the nuances of the current economic policies.
Within seconds, an answer appears on your screen. But stop for a moment: rather than accepting the answer as a finished product, consider how it was constructed.
Does the response capture the subtle interplay of the economic and social factors you know from personal experience? Does it resonate with the lived realities of your community?
This is where the conversation becomes deeply interactive. A query you make to a language model is never a one-way transaction.
It is an opportunity for you to engage in a dialogue, a process that can challenge your perspectives, encourage you to ask follow-up questions, and invite you to explore ideas that you might otherwise overlook.
In this light, LLMs can be seen as collaborators in your journey of discovery, rather than as mere repositories of pre-packaged information.
For instance, if you ask an LLM about the historical impact of colonialism on local languages in Zimbabwe, its response might draw on data that spans decades. Instead of simply regurgitating facts, the response can invite you to think critically about how languages evolve and how they serve as vessels of both identity and resistance.
It becomes a conversation starter, a prompt for you to verify, challenge, or even expand the information by consulting local knowledge and lived experiences. In this way, AI could enrich our dialogues, making them more nuanced and reflective.
What it means for our everyday lives
Hinton’s provocations remind us that technology is not just about efficiency or convenience; it is about understanding the very fabric of human thought.
If LLMs are, indeed, modelling the way we derive meaning, then they hold up a reflective surface to our own thought processes. They reveal that, deep down, our knack for organising language and understanding context might be more algorithmic, or patterned, than we have ever imagined.
However, this perspective does not diminish our uniqueness as human beings. On the contrary, it elevates our understanding of ourselves.
Recognising that our language and cognition might share underlying principles with the operations of LLMs can prompt us to examine how we educate, communicate, and solve problems.
It nudges us to refine our critical thinking skills, to learn more about how we process the information around us, and to appreciate the sophisticated mechanisms that underlie even our simplest conversations.
For the everyday reader, this may seem like an abstract debate. Yet, its implications are very tangible. In our schools, for example, if educators acknowledge that the way AI models language is not entirely alien but rather mirrors human processes, they might integrate these insights into teaching methods, challenging students to think more deeply about the nature of understanding and meaning-making.
This could nurture a generation that is not only tech-savvy but also deeply self-aware, a generation that uses technology as a tool for personal and collective growth.
Let us embrace the mirror
So, what should you take away from all this? The next time you interact with a language model, remember that you are engaging with something that might be, in a very real sense, a reflection of yourself.
There is nothing magical in its ability to predict words; it is a tool that mirrors the patterns of your language, offering you insights into how meaning emerges from context.
We must not see AI as an alien intelligence but as a complementary partner in the quest to understand our world better. When you ask it a question, consider it a starting point, a stimulus for deeper enquiry.
Use its responses as a springboard to further conversation with friends, family, or colleagues. Discuss what these systems reveal about human thought and challenge your assumptions about knowledge and learning.
There is something profoundly empowering in recognising that our language, our very thoughts, can be modelled, analysed, and even replicated in part by machines.
It is a reminder that the human mind, with its remarkable capacity for pattern recognition, creativity, and adaptation, is far more consistent and perhaps more comprehensible than we might have believed.
In engaging with these tools, we are also engaging in a process of self-discovery, peeling back layers of our cognitive processes to see how we, too, are shaped by the patterns of our experiences and interactions.
At its core, Geoffrey Hinton’s assertion invites us to rethink the boundaries between human and machine. It poses a challenge to our preconceptions, suggesting that instead of dwelling on the differences between us and our digital counterparts, we focus on the similarities.
For those of us navigating the vibrant, ever-changing streets of Zimbabwe, this perspective offers a unique opportunity: a chance to see our humanity reflected in the technologies we use every day.
- Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, and ethics of war and peace research consultant. — Email: esagomba@gmail.com; LinkedIn: @Dr. Evans Sagomba; X: @esagomba.