Metrics to evaluate collaborative human-robot interfaces and insights from user studies
Collaboration, as defined by Jeff Faneuff, is the action of working with someone to produce or create something. Robot technology was originally designed and developed to assist people with repetitive or mundane tasks, such as work on an assembly line.
During the early stages of robot technology, only highly trained technicians and programmers were allowed to interact with robots, which were expensive and built from highly sensitive technology. In recent years, new technologies have enabled robots that work closely with humans, called collaborative robots (cobots).
Mike Beaupre of KUKA Robotics defines a collaborative robot as one designed for human interaction within a designated workspace.
With the impact of COVID-19, demand for human-robot collaboration has increased across industries. In the future, it’s not machines that will take over the workplace: humans and robots will work together to accomplish the same goal.
Cobots can be used in a variety of settings, including manufacturing, logistics, food processing, restaurants, healthcare, and education. Examples of collaborative robots include Baxter and Sawyer by Rethink Robotics and ABB’s YuMi. Typical applications are machine tending, pick and place, assembly, and packaging. There are also humanoid robots, such as the educational robot NAO, that mainly interact with humans in collaborative and mediating roles.

It is difficult to design human-robot collaborative interfaces because there are no standardized metrics to measure usability and user experience for each use-case scenario before deploying the robot. To address this gap, I conducted a user study to evaluate the user experience of existing human-robot collaborative interfaces as an individual project towards a Master’s in Human-Computer Interaction. I worked under the guidance of Associate Professor Sue Cobb, Associate Professor Robert Houghton, and Professor David T Branson III of the Nottingham Advanced Robotics Laboratory (NARLy), Faculty of Engineering, University of Nottingham.
The results of my study provided valuable insights into the design of effective human-robot collaborative interfaces (HRCIs).
Depending on the level of interaction, HRI systems can be categorized as follows.
The first is the shared workspace, where the robot and user are both in the same physical space. This design is best used for tasks that require direct interaction between the robot and user. In this system, the human and the robot may perform tasks either in collaboration or individually.

The second is the remote workspace, where the robot and user are in separate physical spaces. This design is best used for tasks that require the user to monitor the robot from a distance.
The third is the virtual workspace, where the robot and user meet in a virtual space. This design is best used for tasks that require the user to interact with the robot remotely.
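To make the taxonomy concrete, here is a minimal Python sketch (with hypothetical class and method names of my own) of how a system design might record its workspace category and what that implies for the interface:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Workspace(Enum):
    """The three workspace categories described above."""
    SHARED = auto()   # robot and user in the same physical space
    REMOTE = auto()   # user monitors the robot from a distance
    VIRTUAL = auto()  # robot and user meet in a virtual space

@dataclass
class HRISystem:
    name: str
    workspace: Workspace

    def requires_proximity_safety(self) -> bool:
        # Direct physical interaction only happens in a shared workspace,
        # so that is where proximity-based safety measures matter most.
        return self.workspace is Workspace.SHARED

# Example: a Baxter cell on an assembly line is a shared-workspace system.
baxter_cell = HRISystem("Baxter assembly cell", Workspace.SHARED)
print(baxter_cell.requires_proximity_safety())  # True
```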
Weiss et al. (2011) describe assessing the user experience as one of the most important ways to evaluate human-robot collaboration. To do this, researchers use a variety of methods to measure different aspects of the user’s experience: the robot’s form, emotion towards the robot, human-oriented perception, feeling of security, and co-experience when using the interface. In addition, researchers look at the user’s satisfaction with the system. By studying all of these factors, researchers can understand how well the interface functions and how satisfied users are with it. This information can help improve the usability of human-robot collaboration systems and make them more user-friendly.
Another method is to evaluate the usability of a human-robot collaborative system by measuring whether the interface supports successful completion of the task (utility), whether the user achieves goals with accuracy and completeness (efficiency), how easily a novice user learns the system (learnability), the range of possible system-user communication (flexibility), and error prevention with responsive support (robustness).
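As an illustration of the utility and efficiency dimensions, the sketch below computes them from hypothetical task logs. The trial fields and formulas are my own illustrative choices, not a standard instrument:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One attempt at a collaborative task, logged during a session."""
    completed: bool      # did the user reach the goal? (utility)
    accuracy: float      # how completely/correctly the goal was met, 0..1
    duration_s: float    # time taken in seconds

def usability_summary(trials: list[Trial]) -> dict[str, float]:
    """Summarize utility and efficiency from a list of trials."""
    done = [t for t in trials if t.completed]
    return {
        # utility: fraction of trials where the interface supported success
        "success_rate": len(done) / len(trials),
        # efficiency: accuracy achieved per minute of work, averaged
        "accuracy_per_min": sum(t.accuracy / (t.duration_s / 60)
                                for t in done) / len(done),
    }

trials = [Trial(True, 0.9, 120), Trial(True, 1.0, 90), Trial(False, 0.3, 200)]
print(usability_summary(trials))
```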
Beyond these factors, the social acceptance of robots and their societal impact on the user also affect human-robot collaboration. User research methods such as in-depth interviews, surveys, focus groups, expert evaluation, lab-based user studies, field-based user studies, and the Wizard of Oz technique can all be used to measure the factors above.

Huang & Thomaz (2011) found that gaze, as a communication mode, is a way of initiating joint attention in collaborative tasks, and that robots with gaze cues improve task performance and are perceived as more competent and socially interactive. Similarly, Goffman (1969), in Strategic Interaction, observed that speech plays a role in creating positive social interaction between participants. Leyzberg et al. (2011) showed that robots that expressed emotions through facial expressions were better at teaching humans. Zanchettin et al. (2013) observed that human-like robot movements can evoke social acceptance.
Existing interfaces for collaborative robots combine one or more gestures and displays, depending on the design of the robot: hand and head movements, gaze, text display, facial expressions, LED lights, and speech. There are many such multimodal interfaces for conveying robot intention. Yet it is hard to standardize robot interface design from these studies, since each study is tailored to a particular task, set of environmental conditions, and so on.
So far, research on human-robot collaborative interfaces has shown great promise. Emerging interfaces such as neural interfaces and augmented reality are expected to enhance natural interaction and cognitive abilities. However, there is still much work to be done in this area.
I conducted a study to evaluate the user experience of existing collaborative robot interfaces for Baxter using an evaluation framework. I used the USUS evaluation framework, a theoretical and methodological framework designed for evaluating human-robot interaction (HRI) scenarios based on four factors: usability, social acceptance, user experience, and societal impact. It is based on user-centred approaches in HRI.
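For reference, the sketch below lays out the four USUS factors with the indicators already discussed in this article. The entries for social acceptance and societal impact are simplified placeholders of mine; Weiss et al. (2011) define the complete indicator set:

```python
# The USUS framework's four factors, mapped to the indicators discussed
# in this article (illustrative only; see Weiss et al. (2011) for the
# full set of indicators under every factor).
USUS = {
    "Usability": ["utility", "efficiency", "learnability",
                  "flexibility", "robustness"],
    "Social Acceptance": ["attitude towards robots"],  # simplified placeholder
    "User Experience": ["robot form (embodiment)", "emotion towards the robot",
                        "human-oriented perception", "feeling of security",
                        "co-experience"],
    "Societal Impact": ["impact on the user"],          # simplified placeholder
}

for factor, indicators in USUS.items():
    print(f"{factor}: {', '.join(indicators)}")
```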

I used standard questionnaires by Bartneck et al. (2008), Nomura et al. (2006), MacDorman et al. (2008), and Salem et al. (2015), together with interviews, to evaluate the user experience against the five user-experience indicators in the USUS framework. I interviewed and surveyed ten participants. The participants watched multiple videos of different Baxter robots performing different tasks and gave their feedback on them. The tasks included teaching-learning assembly tasks, object identification via speech, object fetching, inspection, pick and place, and collaborative cooking. Overall, the participants had clear preferences among the interfaces based on their perceptions.
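The Bartneck et al. (2008) instrument is the Godspeed questionnaire, whose five subscales match the perception measures reported below. A minimal scoring sketch, with hypothetical responses on its 5-point semantic differential items:

```python
from statistics import mean

# Godspeed subscales (Bartneck et al., 2008). Each item is rated 1-5 on a
# semantic differential scale, e.g. "machinelike (1) ... humanlike (5)".
SUBSCALES = [
    "anthropomorphism", "animacy", "likeability",
    "perceived intelligence", "perceived safety",
]

def score_participant(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the item ratings within each subscale for one participant."""
    return {s: mean(responses[s]) for s in SUBSCALES}

# Hypothetical responses from one participant for one Baxter interface.
p1 = {
    "anthropomorphism": [3, 2, 4, 3, 3],
    "animacy": [4, 3, 4, 3, 4, 3],
    "likeability": [5, 4, 4, 5, 4],
    "perceived intelligence": [4, 4, 3, 4, 4],
    "perceived safety": [4, 5, 4],
}
print(score_participant(p1))
```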
I observed from the study that the user’s perception of a cobot (anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety) depends on the combination of communication modes used, within the context of the task being performed and the environment.

Overall, I found that the user’s perception of cobots is heavily dependent on the mode of communication used in the interface. Verbal cues projected the robot as more life-like, intelligent, likeable, and safe. Gaze was perceived as more competent and socially interactive. Robot head movement made it appear more intelligent and likeable to users. Facial expressions projected the robot as more life-like, human-like, likeable, and safe, and this robot was attributed the most human-nature traits in comparison with the other Baxter robots. While all of these modes are effective in their own ways, it is important to consider the context of the task and environment when selecting the most appropriate mode or modes of communication.

I also observed that speech was the most successfully predicted cue, and facial expression the least successfully predicted cue, during a collaborative task. This suggests that, because attention is divided during a task, coordinating with the robot while simultaneously receiving communication cues is challenging for the user. Speech and verbal cues facilitate turn-taking, as observed by Duncan (1972), much more easily than facial expressions, which require the user to shift attention entirely from the task to the robot.
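This kind of comparison can be quantified by computing, for each cue, the fraction of trials in which participants correctly predicted the robot’s intent. A small sketch with hypothetical trial records (the cue and intent labels are invented for illustration):

```python
from collections import defaultdict

# Each record: (cue shown, participant's predicted intent, robot's actual intent)
trials = [
    ("speech", "handover", "handover"),
    ("speech", "pick", "pick"),
    ("facial expression", "wait", "handover"),
    ("facial expression", "handover", "handover"),
    ("gaze", "pick", "pick"),
]

def accuracy_by_cue(records):
    """Fraction of correct intent predictions, grouped by cue type."""
    hits, totals = defaultdict(int), defaultdict(int)
    for cue, predicted, actual in records:
        totals[cue] += 1
        hits[cue] += predicted == actual
    return {cue: hits[cue] / totals[cue] for cue in totals}

print(accuracy_by_cue(trials))
# e.g. {'speech': 1.0, 'facial expression': 0.5, 'gaze': 1.0}
```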
By measuring robot-related experiences and preconceived attitudes towards robots, I found that users have a negative attitude towards robots if their exposure to robots is either too great or too little. If users have a negative preconceived attitude towards robots, then exposure to robots may only reinforce it. However, if users have little or no exposure to robots, then introducing them to a robot may be met with apprehension or confusion. In either case, it is important to consider users’ attitudes towards robots when designing collaborative interfaces and to strike a balance between exposure and preconceived attitudes.
There are several potential limitations to using communication modes for natural interfaces in human-robot collaboration. First, a given set of communication modes may not serve all humans alike: current robot interfaces have accessibility barriers that make them unusable for people with visual, hearing, motor, or cognitive disabilities. Second, communication modes may also limit a robot’s ability to communicate with humans from different cultures or backgrounds. For example, a robot that only uses verbal communication may not be able to communicate effectively with a human who does not speak the same language.
In the future, we can explore an application- and task-focused inquiry into the appropriate interface for a robot. For example, we may want to study whether a certain type of interface works better for tasks such as cleaning or assisting an elderly person with basic needs.
One possibility is to study how people work with robots in collaborative tasks. For example, how do people share control of the robot, and how does the robot’s behavior affect the workflow?
Another direction for future research is to study how people communicate with robots. For example, do people prefer to use text or voice to communicate with robots? How do people use gestures and body language to communicate with robots?
Determining the most effective methods for human-robot interaction, considering the myriad of factors that can affect the interaction, will be important for the future development of safe, efficient, and useful robotic systems.