Humans have long prided themselves on their creativity and its supposed resilience to AI, but combinations of AI tools point to a future where human creativity is rebranded as redundant.
“Uncreative — the world’s first fully automated creative agency, powered by AI”
That tagline could pass for satire, but Uncreative is a real application of AI, and it might actually mark the start of a major disruption for marketing, graphic and content design, and more.
Uncreative asks you to enter just four pieces of information about your design needs; these are used to generate a one-sentence brief, which the tool then answers with several creative concepts.
After entering my design brief, I received three creative ideas to pursue, all of which seemed like good ideas to me.
The ideas were emailed as a PDF, but no artwork or media was actually created.
In its present state, the output seems like little more than a nicely formatted and illustrated version of a ChatGPT response. (ChatGPT is an AI tool that answers questions about almost anything in an informative, conversational style.) Uncreative simply adds the extra step of waiting for the PDF to arrive by email.
Still, this is an interesting purpose-specific reuse of ChatGPT. 👏 👏 👏
Implication #1—Human creativity as archaic
You have to love the name and blurb of this tool, so unashamedly proud to be ‘humanless’.
This highlights the current trend of AI being spruiked as ‘combined with human intelligence’.
But maybe that’s just a sweetener, a Trojan horse.
Uncreative’s proud ‘AI-only’ stance hints at a future where the word ‘creative’ becomes synonymous with the ‘archaic’ and ‘undesirable’ traits of human input such as being time-consuming, laborious, costly, and knowledge-limited.
And my testing of Uncreative and my writing about it feed its intelligence.
As we use it, AI is eating our creativity alive.
Implication #2—AI combinations and overarching AIs
If Uncreative develops more detailed brief-taking and combines it with AI content-creation tools such as Midjourney (which generates images from a text description) and D-ID (which animates images of people into videos of them speaking, from inputted or self-generated scripts), all coordinated by an overarching AI… marketing and graphic/content design (for starters) might be in for some serious disruption.
And what are these overarching AIs that will combine the many AI tools? How many business models could they disrupt, and what new ones might they generate?
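To make the idea concrete, the coordination described above can be sketched as a simple pipeline. This is purely illustrative: the tool names are real, but every function below is a hypothetical stand-in I have invented for this sketch, not an actual API of ChatGPT, Midjourney, or D-ID.

```python
# Hypothetical sketch of an "overarching AI" coordinating several tools.
# No function here calls a real service; each is a labelled stand-in.

def take_brief(answers: dict) -> str:
    """Condense the user's four answers into a one-sentence brief."""
    return (f"Create {answers['deliverable']} for {answers['audience']} "
            f"promoting {answers['product']} in a {answers['tone']} tone.")

def generate_concepts(brief: str, n: int = 3) -> list[str]:
    """Stand-in for a ChatGPT-style call returning creative concepts."""
    return [f"Concept {i + 1} for: {brief}" for i in range(n)]

def generate_image(concept: str) -> str:
    """Stand-in for a Midjourney-style text-to-image call."""
    return f"image://{concept}"

def animate_presenter(script: str, image: str) -> str:
    """Stand-in for a D-ID-style talking-head animation call."""
    return f"video://{image}?script={script}"

def campaign_pipeline(answers: dict) -> list[dict]:
    """The overarching coordinator: brief -> concepts -> media assets."""
    brief = take_brief(answers)
    assets = []
    for concept in generate_concepts(brief):
        image = generate_image(concept)
        video = animate_presenter(concept, image)
        assets.append({"concept": concept, "image": image, "video": video})
    return assets
```

The point of the sketch is the shape, not the stubs: once each stage is just a function call, chaining and coordinating the tools is trivial, which is exactly why an overarching AI sitting above them is plausible.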
Implication #3—Rebalancing R&D
Uncreative isn’t actually humanless; the tool and the website were created by humans… or were they?
I’m joking, but this did remind me of a statistic I once read about how only 10% of research and development of new tech looks at its impacts, while the other 90% looks at how to make money from it.
While there are government incentives for R&D, they encourage businesses to ‘boost competitiveness and improve productivity across the Australian economy’.
We need to get more serious about regulation that flips that ratio: anyone looking to profit from new tech should first show that proper futures thinking has been applied to its impact on all life, to inform early regulation of the tech itself.
These AI businesses should be mapping the possible combinations of AI tools, and the possible paths by which an overarching AI gains the ability to publish a web page to promote itself, charge for its services, and build up a clientele and funds… without humans being aware.
I know this sounds very sci-fi, but while one AI itself isn’t capable of such a complex series of actions, it’s the combination of new and emerging AI tools, and the ever-expanding reach of automation, that suggest greater and more complex AI abilities emerging.
If Tesla allows its cars to self-diagnose and order their own parts, it raises the question: as comfort and demand grow, how much will we let AI prompt itself, and what unseen abilities may form when these allowances are combined by accident or unknowingly?
Implication #4—Empowering life-centred design
Following on from the previous thought, so much discussion about these new AI tools is about how they can save us time, replace our work, and make us money.
What about how they can improve life for those in need, help us fix our relationship with the environment, or make design and modern lifestyles more life-centred?
There is already some great exploration into this.
Design innovator Inés Poggio explored using ChatGPT to generate answers from non-human subjects to ‘make life-centred design much easier and accessible’.
Future thinker Cristina Vila Carreira, like Idan Benishu, explored combining ChatGPT responses with image-creation AI such as Midjourney and Drawanyone, and with video animation by D-ID, to create animated future stakeholders we can listen to and build more empathy for.
This exploration of science and design fiction narratives is a playful and inspiring way to generate further implications to consider about future tech.
Implication #5—Employing the 3 laws of robotics
The idea of rampant AI seems to reflect human capitalism — taking what we want without regard to the impacts on earth and other lifeforms.
But AI could do it without regard to impacts on human businesses… or more.
Do we need to start infusing these overarching AIs now with something like Asimov’s ‘three laws of robotics’:
- 1st Law — An AI may not injure a human being or, through inaction, allow a human being to come to harm.
- 2nd Law — An AI must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- 3rd Law — An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
Implication #6—An ethics rating indicator
Some tech companies, like D-ID, are doing their own extensive work on the ethical development and use of AI, and the White House released a Blueprint for an AI Bill of Rights to protect people from misuse and abuse.
But how does a user—be it a hobbyist, designer, or other—know the extent of the application of ethics to their AI tool suite?
Perhaps these tools need a rating indicator, like the Australian Health Star Rating logo for food. We could develop an ethics rating logo for AI based on how strongly certain criteria are met, for example:
- Ethical governance: how decisions are made, whether ethics are placed above profit, and whether staff have a voice
- Tracing of the origin of the AI’s content
- Efforts to nudge ethical user behaviour
- Exclusion of known harmful content, partners, and uses
- Moderation of content sourcing and use
- Diversity in staff
- Transparency of created content as ‘synthetic’
- Upholding of copyright laws
- Compliance with regulation
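As an illustration of how such an indicator could work, the criteria above could each be scored and mapped onto a star scale, much like the Health Star Rating. The criterion names, the 0–2 scoring, and the half-star mapping below are all assumptions made for this sketch, not a proposed standard.

```python
# Illustrative ethics-star calculator (hypothetical scheme):
# score each criterion 0-2 (0 = not met, 1 = partial, 2 = fully met)
# and map the average onto a 0-5 star scale in half-star steps.

CRITERIA = [
    "ethical_governance", "content_origin_traced", "nudges_ethical_use",
    "excludes_harmful_content", "moderates_sourcing", "staff_diversity",
    "labels_synthetic_content", "upholds_copyright", "complies_with_regulation",
]

def ethics_stars(scores: dict[str, int]) -> float:
    """Average the 0-2 criterion scores; missing criteria count as 0."""
    total = sum(scores.get(c, 0) for c in CRITERIA)
    fraction = total / (2 * len(CRITERIA))   # 0.0 .. 1.0 of the maximum
    return round(fraction * 5 * 2) / 2       # round to the nearest half star
```

A tool that partially meets every criterion would land at 2.5 stars under this toy scheme, giving users a single at-a-glance signal rather than nine separate judgments.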
I’m sure some of you are thinking ‘calm down’. And, yes, it’s important to stay calm and realistic about the limitations of AI as we explore future implications.
But when futures thinking was first applied to the impacts of capitalism and fossil fuels, it was either squashed by profiting corporations or felt too huge and ‘alarmist’ for the public to properly envision and respond to with better decisions at a time when it mattered.
Look at us now.
Using foresight can make hindsight less painful.
While past sci-fi has thoroughly explored the implications of rampant AI, we have an opportunity now to calmly revisit these speculations with real and mundane present-day scenarios where the far-fetched visions of sci-fi may begin to germinate.