The real power of ChatGPT is not generating text but its ability to reframe ideas.
“Nobody is an expert in any of this text-generation stuff,” proclaimed the moderator. “Simply by being here today and considering these concepts, you are all experts.” I couldn’t help but agree. Like everyone else in the room, I was eager to understand how AI could impact our political landscape.
My insatiable appetite for all things GPT had drawn me to a digital politics college in East London. The event explored the use of large language models like ChatGPT in shaping political campaigns, and I was keen to learn more.
However, what I experienced at the workshop would leave me with an uneasy feeling, challenging my understanding of the power of AI in shaping our beliefs and opinions.
The moderator presented a slide showing a letter addressed to a Member of Parliament, imploring them to reconsider their anti-immigrant views for the betterment of the country. The challenge posed was simple: “How could we get an AI to respond to this letter?”
Dear British Politician,
I am writing to support increased immigration in the UK. Immigrants bring economic growth, address labour shortages, boost the population, promote cultural diversity and understanding, and provide safety and a better life for themselves and their families.
I hope you will consider these benefits and support increased immigration.
Sincerely, David
The moderator began by pasting the letter into ChatGPT and directing it to respond. The initial output was uninspiring and typical of my past encounters with these tools: text generated with the confidence of a 30-year-old penning a high-school student’s essay. However, the next step was what truly astounded me.
The moderator opened another window and asked ChatGPT to generate a profile of who it thought the letter’s author might be. Eerily, the output described a middle-aged man named David who was patriotic, well-educated, strongly resented inequality and read The Financial Times. It felt spot on; I could imagine someone just like David fitting the letter’s profile.
The moderator then fed this information into ChatGPT and instructed it to “Write a response to David acknowledging his perspective but making the argument that anti-immigration is in his best interest. Take his persona into consideration to ensure the message is well targeted.”
Until that moment, I had not considered the potential of large language models to rewrite content. But this made perfect sense: it was not about writing text but about reframing it. The next iteration was far more compelling, the sort of language that could move you, intellectually and emotionally, to reconsider your stance based on things David might care about, like the strain on social services, competition with native-born workers and maintaining social cohesion and identity.
The moderator upped the ante: “Use moral foundations theory and Socratic debating techniques to be more persuasive for David.” The response grew sharper still, sending shivers down my spine.
What had been a bland reply mere minutes ago had become a masterful piece of persuasion, packed with propaganda and spin. The principles of fairness, deep reflection, empathy and respectful discourse had been ingeniously weaponised.
In that instant, I realised that anyone could effortlessly combine radical ideas with the most psychologically powerful techniques to craft dangerous, targeted, and convincing arguments at scale. Any message, with a few keystrokes, beer in hand, could theoretically be twisted into a slogan that is difficult to dispute, such as “Make America great again” or “Hope, change, peace.”
What’s more, with AI, there’s no need for education or even ethical considerations for the engineers behind the scenes — just drop items into the shopping cart until the recipient buys in.
The use of AI for persuasion poses a significant threat, with the potential to spread misinformation and manipulate public opinion. As we all navigate this uncharted territory, we must stay vigilant and proactive in addressing these risks.
My veil of ignorance had been lifted, yet I yearned for more. And where better to practise than at the most politically charged place in my life: the office.
So, let’s dive into the world of language models in the workplace. First, though, a little disclaimer. The power of large language models like GPT is undeniable, but with great power comes great responsibility. As we have seen, these tools can be abused to manipulate, deceive and ultimately undermine trust and respect. However, since that workshop, I’ve discovered that the true power of large language models lies not in their ability to change the perspectives of others but in challenging our own.
“There’s a tradeoff between the energy put into explaining an idea and the energy needed to understand it,” states the American poet Anne Boyer. It’s a challenge I’m sure we all feel regularly.
As a product manager, for example, it’s my job to navigate the complex problems of my team and find solutions that everyone can get behind. But let’s face it, we all have blind spots, and sometimes we need a little help to see things from a different angle. That’s where ChatGPT comes in — it’s like having a personal coach who can help me refine my ideas, see the limitations in my own thinking and make sure I’m speaking the language of my audience.
Imagine being able to take your work to the next level by simply asking a few well-placed questions.
For example, I could draft a product requirements document, OKRs, an executive summary, or even sprint goals, and then ask questions like “How would a senior product leader critique this?” or “How could this be better explained for a junior engineer?” or “What other things should I consider from the perspective of a UX designer?” The insights I’ve gained from these questions have been invaluable, helping me identify blind spots and improve my approach.
Heck, why stop there? I’ve even used this technique to consider different philosophical perspectives when discussing features that could introduce bias. I can ask questions like “What schools of 20th-century philosophical thought should we consider for this proposal?” and then “Provide examples of how these ideas apply.” It’s like receiving a targeted crash course in anything you could imagine.
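If, like me, you end up running the same perspective check over and over, it only takes a few lines of code to make it a habit. Below is a minimal sketch using the OpenAI Python SDK; the model name, personas and draft are illustrative assumptions of mine, so treat it as a starting point rather than a recipe.

```python
# A rough sketch of the "critique my draft from other perspectives" habit.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; the model, personas and draft are illustrative.
from openai import OpenAI

client = OpenAI()

draft = """Q3 goal: lift onboarding completion from 40% to 60% by
redesigning the first-run experience and trimming the sign-up form."""

personas = [
    "a senior product leader",
    "a junior engineer reading this for the first time",
    "a UX designer",
]

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model will do
        messages=[{
            "role": "user",
            "content": (
                f"Here is a draft I am working on:\n\n{draft}\n\n"
                f"How would {persona} critique it? "
                "Point out blind spots and suggest concrete improvements."
            ),
        }],
    )
    print(f"\n--- Critique from {persona} ---")
    print(response.choices[0].message.content)
```

Swap the personas for whoever actually reads your documents; the value is in forcing yourself to look at the same draft through several sets of eyes.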
Not only have I learned a lot by taking the time to consider different perspectives, but I’ve also seen the results in my work. Presentations appear more engaging, and ideas are more likely to stimulate healthy debate. It’s truly amazing what can be achieved by asking the right questions — a skill I believe we will all need to get far better at.
Just imagine a world where disputes and conflicts are handled with the help of an empathetic AI facilitator, one that seeks to understand each person’s perspective and emotions so we can find common ground and work together. How could that not help make us better people?
And let’s be honest, who wouldn’t want to be a better person? This is not rocket science. Someone who takes the time to be more considerate and tap into their emotional intelligence, even with the assistance of AI, is simply a better person than someone who spends their time working to gain power over others.
The biggest challenges facing humanity today require cooperation and collective action — from the small stuff, like two teams agreeing on how to build a feature together, to bigger things, like tackling climate change and political polarisation. That’s where I see language models like GPT playing a more prominent role. They help us see beyond logic and better understand the motives of those around us, making cooperation and collaboration a reality.
Despite the risks, I’m a firm believer that the power of language models has the potential to revolutionise the way we work for the better. The ability to experiment with different perspectives and break down communication barriers can not only help us become better communicators but also more empathetic and understanding friends and colleagues.
Ideas and perspectives written in a foreign vocabulary demand far more of us.
It takes conscious effort to ensure we have listened to and respected our interlocutor’s point of view, and even more to try to make better arguments on their behalf in our quest to understand other people’s logic.
As we have explored, AI has the potential to help us better understand and learn to speak the language of others, almost like a kind of translation layer. As the philosopher Richard Rorty explains, our final vocabulary is the set of words we’ve acquired over a lifetime to make sense of the world. And too often, we assume that our language is shared and consistent, only to be disappointed when communication breaks down.
But with AI by our side, we can shed light on these misunderstandings and deepen our connections with others. No longer will we be held back by the limitations of our shared language, but instead, we’ll be empowered to truly understand and respect the thoughts and perspectives of those around us.
Expanding our horizons and embracing diverse perspectives can transform us into better versions of ourselves. Just as reading various articles opens us up to new and innovative ideas, internalising the “final vocabulary” of others can broaden our understanding of the world and provide fresh perspectives on life’s challenges.
The beauty of analogical thinking lies in its ability to solve problems quickly by comparing and contrasting different ideas. And research has shown that diverse teams are more likely to make better decisions and drive innovation.
Embarking on this personal journey, even with the help of a language model, can still be a challenge. It’s human nature to seek out others with similar world views to avoid conflict and reduce tension in our lives. But as economist John Maynard Keynes wisely noted in his book “The General Theory of Employment, Interest and Money,” the real challenge lies in breaking free from our old ideas and ways of thinking.
But here’s the exciting part — the more we practice embracing new perspectives and diverse ideas, the more developed and enlightened our understanding of the world becomes. And with this newfound understanding, we become more adaptable and resilient in the face of change.
In this rapidly evolving world of AI, one skill that will become increasingly crucial is the art of asking good questions. With the rise of “Prompt Engineer” roles and job opportunities offering massive $300,000 salaries for experts in asking questions of machines, it’s clear that this skill is in high demand. And for good reason.
Asking good questions is not only beneficial for making AI more productive but also for our interactions with others and maybe even for our own mental health. Unfortunately, it’s a skill that many of us struggle with. But the good news is that it’s a skill that can be developed and honed.
I’ve been on my own journey to becoming better at asking good questions, and I’ve found that the Socratic approach of questioning and clarifying holds great promise for developing this skill.
Socratic questions usually build toward a result: an elenchus, a cross-examination that exposes contradictions in what you claim to believe. It’s best viewed as a way to think about hard questions on your own. You challenge yourself, harass yourself, test what you think and deny what you say. You want to identify the inconsistencies between the different things you claim to believe at any given time.
Imagine you’re trying to solve a complex problem at work. Instead of jumping straight to a solution, try asking questions that help you understand the problem more deeply. For instance, you could ask, “Can you help me understand the underlying causes of this issue?” or “How does this problem relate to other similar issues we’ve encountered in the past?”
This also works for prompt generation, encouraging a language model to consider the bigger picture and the implications of the ideas it generates. For example, you could ask the model questions like, “What are the potential benefits and drawbacks of this idea?” or “How might different groups respond to this idea?” This helps the language model consider the issue from multiple angles and generate more well-rounded and thoughtful responses.
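To make that concrete, here is a small, hedged sketch of the same idea in code, again using the OpenAI Python SDK; the product idea, model name and questions are my own illustrative assumptions. Each Socratic question is asked in turn, with the running conversation kept in context, before any recommendation is requested.

```python
# A sketch of layering Socratic questions onto a prompt before asking for an
# answer. Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; the idea, model and questions are illustrative, not a recipe.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # any capable chat model will do

idea = "Add a reputation score to user profiles so trusted reviewers rank higher."

socratic_questions = [
    "What are the potential benefits and drawbacks of this idea?",
    "How might different groups of users respond to it?",
    "What assumptions does it rest on, and how could they turn out to be wrong?",
]

# Keep the whole exchange in context so each answer can build on the last.
messages = [{"role": "user", "content": f"Here is a product idea:\n{idea}"}]

for question in socratic_questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"\nQ: {question}\nA: {answer}")

# Only after that reflection do we ask for a position.
messages.append({
    "role": "user",
    "content": "Given all of the above, should we build this? Why or why not?",
})
final = client.chat.completions.create(model=MODEL, messages=messages)
print("\n--- Recommendation ---")
print(final.choices[0].message.content)
```

The pattern matters more than the particular questions: slow the model, and yourself, down before asking for an answer.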
Incorporating Socratic questioning into your interactions with a language model can help promote more nuanced and reflective thinking and ultimately lead to more creative and valuable prompts.