Thoughts on possible interaction scenarios of the future
“The next generation of design will become less about screens and things, and more about scripts and cues” (User Friendly by Cliff Kuang and Robert Fabricant)
Users become protagonists, user journeys become dramaturgical plots, interfaces become transitions between two film scenes, and the dramaturgy of the personal user journey becomes the emotional narrative. Emotional Design and Narrative Design will become essential players in all further digital design scenarios of the future.
For decades we have been trying to explain to machines how they should work, along a path of machine-centric interactions. Now we are at an inflection point: the point at which digital, and in the future phygital, interactions and their feedback begin to feel human again. Apple recognized this as early as 1984:
“Since computers are so smart, wouldn’t it make sense to teach computers about people, instead of teaching people about computers?”
If designs of the future are to become their own narratives, then we first need to understand what makes a great story or a great movie stand out. Of course, epic discourses can be held about this, but one basic ingredient is probably indispensable: the moment of impact. A movie only becomes a good movie when it has a lasting effect, when it’s profound, and when it gives us something to occupy ourselves with afterward, which is also the case with good design. In his book Emotional Design, Don Norman distinguishes between three different levels. The visceral level covers the first impulse: the instinctive perception of sight, smell, or sound. The behavioral level is a kind of reaction to the first level; it describes the emotional bridge between the user and the product. The last level, the reflective level, describes the reverberation, the inner engagement with the product after the interaction: the reflection on my experience. This emotional connection, embedded in a meaningful and resonant narrative, provides a worthwhile foundation for all our future design.
It’s time to climb over the mountain ridge of Jakob’s Law: not to take what’s behind us as the rationale for tomorrow’s designs, but to be inspired by what’s waiting for us on the other side of the ridge. We need to find new metaphors for future designs to tell tomorrow’s stories.
What is the future metaphor of a push notification? The future metaphor of an autonomous driving car? Of a hospital? Of a future bank? What is the contemporary metaphor of a search?
The following list is an exploration of today’s technologies and yesterday’s interaction experiences, translated into tomorrow’s possibilistic scenarios. I call them Speculative interfaces.
01. Receptive interfaces
What if in the future we could build 3D models in virtual space just with our hands, like a sculptor? We wouldn’t use clunky, unnatural devices but shape, model, destroy, and rebuild with the most natural tool we have: our hands. Fingertip sensors give haptic feedback about materiality and texture. We feel liquid materials, distinguish stone from wood, and feel the softness of merino wool. The artificial sensors on our fingers correlate with our mechanoreceptors and simulate different materialities for us. Google filed a patent for this in 2019 called ‘Finger clip biometric virtual reality controllers’. The outlined design still seems a bit awkward; the technical details, however, are quite exciting. At its core, this is not just about motion sensors but rather about the recognition of biometric data. Researchers at the Chinese Academy of Sciences have gone a considerable step further by developing an artificial finger that can recognize certain surface materials with about 90% accuracy. Pornthep Preechayasomboon and Eric Rombokas of the University of Washington have taken another step, or rather an almost utopian leap. They developed Haplets, a finger-worn device that “provides vibrotactile feedback for hand tracking applications in virtual and augmented reality.” What’s exciting about Haplets is not only that they relay haptic feedback to the user in virtual spaces, but that they also let users keep grasping real, physical tools, which can in turn be used in augmented realities. This offers a whole new level of phygital experience and builds a seamless bridge between the real and digital worlds. A bridge across which I, as a sculptor, can take real tools into a virtual world to create virtual artworks that I can feel with my real hands.
02. Perceptive interfaces
Form follows emotion. Not functionality but emotion is at the center of interaction. Hartmut Esslinger, the founder of frog design, has always taken this perspective. People don’t just receive feedback haptically, but above all emotionally. So what if future interfaces were so empathetic that they perceived emotional situations and reacted to them situationally? What if bad news felt different on one day than on the next, because I’m more ready for it that day? If future interfaces have true emotional intelligence, then we will be able to build real bridges between humans and machines. But that, in turn, is also the biggest lever of fear. This lever is what researchers call human-level AI, sitting in the tension between utopian speculative design scenarios and dystopian end-time visions. At its core, it describes the point in time at which AI is able to replace natural human intelligence and autonomously master any human task. If we focus again on the possibilistic scenarios, those with more hope and curiosity than fear, then empathic systems offer enormous advantages. Perceptive interfaces could make a huge contribution to inclusion. They could, for example, recognize how people with autism or alexithymia feel, and support people in depressive phases without actively asking them for input.
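To make the idea of situational reaction concrete, here is a minimal sketch of how a perceptive interface might defer bad news until the user seems ready for it. The emotional-state signals, thresholds, and the `deliver_now` rule are all illustrative assumptions, not an existing system.

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    stress: float       # 0.0 (calm) .. 1.0 (highly stressed) — assumed signal
    receptivity: float  # 0.0 (not ready) .. 1.0 (ready for difficult news)

def deliver_now(severity: float, state: EmotionalState) -> bool:
    """Deliver a notification immediately only if the user seems ready.

    Routine news always passes through; the more severe the news,
    the more receptive (and less stressed) the user must be.
    """
    if severity < 0.3:  # routine notifications are not withheld
        return True
    return state.receptivity >= severity and state.stress < 0.7

# A stressed user does not receive severe news right away ...
print(deliver_now(0.9, EmotionalState(stress=0.8, receptivity=0.9)))   # False
# ... but a calm, receptive user does.
print(deliver_now(0.9, EmotionalState(stress=0.2, receptivity=0.95)))  # True
```

The point of the sketch is the decision boundary, not the numbers: a real perceptive interface would infer the state from many signals and would need to handle the ethics of withholding information.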
03. Characteristical interfaces
What if personalization applied not only to content but also to interfaces? Just as content should adapt to my personal needs, this could also apply to future interfaces. People have different mental models of the same things, mainly due to cultural diversity and different beliefs. We can therefore fundamentally assume that people in other cultures have different expectations of a button, a specific widget, or a specific icon. This is because mental models are based on beliefs, not facts. Characteristical interfaces would thus be based on the cultural and personal circumstances of the respective user and play out personalized interfaces along their mental model. For example, the formal appearance of a Like button and the feedback following the interaction could change along the different mental models of different cultures. And what if my mental models change over time?
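In design-system terms, this amounts to resolving design tokens against a user's mental model instead of a single global default. A minimal sketch, in which the model names, symbols, and feedback styles are purely hypothetical placeholders:

```python
# Hypothetical token sets for one Like button, keyed by mental model.
# Names and values are illustrative assumptions, not a real design system.
LIKE_BUTTON_TOKENS = {
    "default": {"symbol": "thumbs-up", "feedback": "counter increments"},
    "model_a": {"symbol": "heart",     "feedback": "subtle glow"},
    "model_b": {"symbol": "plus-one",  "feedback": "spoken confirmation"},
}

def resolve_like_button(mental_model: str) -> dict:
    """Return the token set for the user's mental model, with a safe fallback."""
    return LIKE_BUTTON_TOKENS.get(mental_model, LIKE_BUTTON_TOKENS["default"])

print(resolve_like_button("model_a")["symbol"])  # heart
print(resolve_like_button("unknown")["symbol"])  # thumbs-up (fallback)
```

The interesting design question is the last one in the section: if mental models drift over time, the resolver would need to be re-evaluated per session rather than fixed at onboarding.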
04. Reminiscent content
What if we visualized the past, present, and future in one content container? What if our memories and thoughts of the past visualized the content of the future? The speculative thought behind Reminiscent content is based on the paper ‘High-resolution image reconstruction with latent diffusion models from human brain activity’ by Yu Takagi and Shinji Nishimoto. In it, they describe a diffusion model (DM) that can reconstruct images from human brain activity recorded with functional magnetic resonance imaging (fMRI). Simply put, they visualize thoughts. So what if I looked at a landing page for the new Polestar and it was driving along exactly that beautiful coastal road I was dreaming about yesterday? Or if Midjourney were fed by thoughts instead of text-based prompts? Reminiscent content could be a fantastic step towards true hyper-personalization aligned with human needs. It could build a real emotional connection between brands and people by bringing products, personal experiences, and images into symbiotic harmony.
05. Environmental interfaces
“The world in harmony and symbiosis with nature are our guiding principles.”
That’s what Gorden Wagener, Chief Design Officer of Mercedes-Benz Group AG, wrote in 2020 at the launch of the Mercedes VISION AVTR.
Another essential step towards personalized and seamlessly integrated interfaces could be Environmental interfaces, which turn your environment into the interface. In the future, it will not be a matter of packing new ideas into ever more digital modules, but of embedding them gently into what surrounds us. In doing so, they follow a principle similar to Shy Tech, which BMW describes as “a hidden world of interaction and functionality at your fingertips”: a radical reduction of interfaces, which, thanks to smart materials, appear only when they are needed. Environmental interfaces not only rely on existing materials but also use Augmented Reality to turn the user’s surroundings into interfaces. Thus, the reeds by the lake become the display of my bank balance, the sand by the sea rises to form the mountain of my next hiking tour, and the sofa cushion becomes the thermostat I can adjust. In the future, it will no longer be a matter of seeing interfaces as additive integrations or an additional digital layer above the physical world, but rather as symbiotic integrations into nature and everything that surrounds us.
06. Amicable interfaces
In 1943, Abraham Maslow began to visualize his social-psychological model in a very simplified way in the well-known hierarchy of needs. It describes the idea that people, after satisfying their essential needs, always strive for the highest level: self-actualization (in the newer version, transcendence). The search for meaning, for higher creativity. Relationships and friendships can support us in this. Friends give us stability, speak our language, and understand us even when we say little or nothing. Future Conversational AI interfaces will primarily be about replacing machine characters with human characters. However, I think we need to go one step further. What if my digital banking bot were less an advisor and more a friend? When cold tech interfaces have passed the inflection point to friendly interfaces and dialogs with chatbots feel like familiar conversations among friends, then we are talking about Amicable interfaces. Amicable interfaces literally speak on my linguistic level, know me and my living conditions, and are the silent advisory companion for many of my daily decisions.
07. Inclusive interfaces
Inclusive design is unfortunately still not an integral part of design processes today. Many designers see the development of accessible interfaces as a kind of restriction of their creativity. And yet real creativity arises precisely when interfaces are conceived in an inclusive way from the start. Microsoft has made an essential contribution with its Persona spectrum, a very easy-to-understand approach that is part of its inclusive design language. It distinguishes constraints on four sensory levels: Touch, See, Hear, and Speak. Each of these levels is further subdivided into Permanent, Temporary, and Situational. Simplified, this adds up to 12 scenarios. What if Inclusive interfaces consisted of a single module that reacts in 12 different ways to the respective need? A search that works text-based or verbally, works for one-handed people, or simply works in my language. A search that shows me pictures instead of text, because I can’t read. A search that describes results aloud, because I can’t see them. One module, infinite possibilities. This means that we designers have to go one step further in the systematization of design. What would future design systems look like? How would we describe tokens? If we understand Inclusive design as an opportunity for equal interactions, it has huge potential to become an innovation. Matti Makkonen developed SMS with the intention of giving deaf people a way to communicate, and nowadays it’s used by everyone, not only by deaf people. It’s time for more Makkonen moments.
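The 4 × 3 structure of the Persona spectrum can be enumerated directly; the sketch below derives the 12 scenarios and attaches one fallback interaction mode per sensory level. The sensory levels and durations follow Microsoft's published spectrum; the fallback modes are my own illustrative assumptions.

```python
from itertools import product

# Microsoft's Persona spectrum: four sensory levels x three durations.
ABILITIES = ["touch", "see", "hear", "speak"]
DURATIONS = ["permanent", "temporary", "situational"]

# Assumed fallback interaction mode per constrained sense (illustrative only).
FALLBACK_MODES = {
    "touch": "voice input instead of typing",
    "see":   "spoken results instead of visual lists",
    "hear":  "captions and visual cues instead of audio",
    "speak": "text input instead of voice commands",
}

def persona_scenarios() -> list[dict]:
    """Enumerate all 12 (ability, duration) scenarios of one adaptive module."""
    return [
        {"ability": a, "duration": d, "mode": FALLBACK_MODES[a]}
        for a, d in product(ABILITIES, DURATIONS)
    ]

print(len(persona_scenarios()))  # 12 scenarios, one module
```

Framed as data, the "one module, infinite possibilities" idea becomes a design-system question: the 12 scenarios are simply token contexts against which the same search component is resolved.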