
Presenter App

 

The goal of this experiment was to assess how different presentational parameters affect students' perception of a robot and its performance as a presenter in a teaching environment. Given the current availability and performance limitations of socially interactive robots (speech recognition, movement, and discourse management), the most suitable role for them in the classroom has mostly been that of a teaching assistant. To fulfill that role efficiently, robots need to be informative, adaptive, and entertaining, and to possess the other qualities of a good presenter. For some of them, presenter applications have already been developed and could be used in class. However, just as human teachers differ and rely on different strengths, so do humanoid robots: they come in different sizes and with different levels of autonomy, movement capabilities, and input and output channels. It is therefore not easy to define a uniform set of requirements for a successful robotic presenter.

 

Motivation for the Experiment

The robot used in the experiment was Pepper, developed by SoftBank Robotics. Pepper is a humanoid, autonomous robot designed to make interaction with people feel natural. To achieve this, Pepper analyzes data from multiple sensors, microphones, and cameras located inside it. On the one hand, the robot uses this information to orient itself in space, avoid collisions, and move; since it moves on three wheels, it can easily turn around once it detects a human presence. On the other hand, Pepper also uses its sensors to detect the emotional state of the interlocutor by collecting information about the speaker's facial expression, posture, tone of voice, and word choice. In interaction mode, Pepper performs best in one-to-one situations. What makes interaction with this robot unique is that it is equipped with a tablet, which means it can interact with students through several channels: voice, changes in eye color, gestures, and the tablet. When a human presenter uses visual support, it is usually a screen placed behind, above, or next to the speaker; it is thus clearly separated from the speaker, and the audience decides where to focus their attention. With Pepper it is different: the tablet is placed at chest level and is seen as an inseparable part of the robot. Since the robot can use both the tablet and gestures to support its speech, it is debatable whether these two elements are complementary or obtrusive to one another. On the other hand, research on the cognitive theory of multimedia learning suggests that synchronized visual input can help reduce the cognitive load on students' working memory and thus benefit the learning process (Shams & Seitz).

Description of the Experiment

The experiment was conducted with a class of students taking the course Introduction to Linguistics at the Philipps University of Marburg. Keeping in mind Pepper's presentational features, the experiment was divided into two parts to test the impact of gestures and of the tablet on the students' perception of the robot and on the learning process. The overall topic of the presentations was morphological processes in the English language. Before listening to the robot, the students filled in a questionnaire based on the Negative Attitudes towards Robots Scale (NARS) to determine their attitude towards robots. NARS is a widely used and accepted tool for measuring users' attitudes toward robots. It consists of 14 Likert-type questions that can be divided into three groups:


1. attitude towards the interaction with robots (interactive)
2. attitude towards the social influence of robots (social)
3. attitude towards emotions in interaction with robots (emotional)
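Subscale scores for questionnaires of this kind are typically computed as the mean of the corresponding Likert items, after reverse-coding any positively worded ones. The following is a minimal sketch of that arithmetic; the item-to-subscale assignment and the set of reverse-coded items are illustrative placeholders, not the official NARS key, and a 5-point Likert scale is assumed.

```python
# Sketch of Likert subscale scoring for a NARS-style questionnaire.
# The item groupings and reverse-coded set below are hypothetical,
# chosen only to match the 14-item, three-subscale structure.

SUBSCALES = {
    "interactive": [1, 2, 3, 4, 5, 6],    # hypothetical item numbers
    "social":      [7, 8, 9, 10, 11],
    "emotional":   [12, 13, 14],
}
REVERSE_CODED = {12, 13, 14}  # hypothetical: positively worded items

def score_nars(responses, scale_max=5):
    """responses: dict mapping item number (1-14) to a Likert answer (1-5).
    Returns the mean score per subscale, with reverse-coded items flipped."""
    recoded = {
        item: (scale_max + 1 - ans) if item in REVERSE_CODED else ans
        for item, ans in responses.items()
    }
    return {
        name: sum(recoded[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

# A fully neutral respondent scores 3.0 on every subscale.
answers = {i: 3 for i in range(1, 15)}
print(score_nars(answers))
```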

 

NARS survey sheet

Presentation 1

The topic of the first presentation was inflection in the English language. In this part of the experiment, the manipulated condition was the use of gestures during the presentation. The robot delivered the same presentation on inflection to two different groups of students: with the first group (N=6) the robot moved its head, arms, and torso while talking, while with the second group (N=6) it remained mostly motionless. In both cases, the robot used the tablet to display the content. Following the presentation, the students were given a questionnaire to determine their impressions of the robot and its performance. The questionnaire was an adapted version of the Godspeed Scale, a set of pairs of opposing adjectives that can describe the robot. The questions measure the users' perception of the robot in five categories: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The distribution of questions can be seen on the link for the presentation survey. Since the students were already familiar with the robot and the experiment was conducted in a safe environment and in the presence of the professor, the last part of the questionnaire, concerning safety, was not used. Additionally, the students were asked two more questions related to the position of the logo on the screen and the color of the titles on each of the slides. The aim was to determine whether the use of gestures during the speech influences the students' perception of the content presented on the tablet.

Godspeed Scale

Presentation 2

The second presentation covered the topic of word-formation processes. It was delivered to the same two groups of students, with the manipulated condition being the use of the tablet. While presenting to the first group, the robot did not use the tablet but only its voice and gestures; with the second group, the tablet was also used, displaying accompanying slides during the presentation. Following the presentation, the students again filled in the Godspeed-based questionnaire to determine how the use of the tablet affects the perception of the robot. Additionally, they were asked to answer two content-related questions, one from the opening part of the presentation and one from the middle part. The aim was to determine whether the use of the tablet as an additional visual input helps cognitive processing of the content.
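With only six students per condition, a nonparametric test is the usual choice for comparing questionnaire scores between the two groups. As a rough sketch, the Mann–Whitney U statistic can be computed directly by counting, for each pair of observations across the groups, which one is larger; the sample values below are made up for illustration and are not the experiment's data.

```python
# Minimal Mann-Whitney U statistic for comparing two small groups,
# e.g. Godspeed likeability scores with vs. without the tablet.
# The rating values are illustrative, not measured results.

def mann_whitney_u(a, b):
    """Return the U statistic for sample a versus sample b:
    the number of pairs (x, y) with x > y, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

with_tablet    = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4]  # hypothetical group ratings
without_tablet = [3.5, 3.9, 3.2, 4.1, 3.6, 3.0]

u1 = mann_whitney_u(with_tablet, without_tablet)
u2 = mann_whitney_u(without_tablet, with_tablet)
print(u1, u2)  # the two U values always sum to len(a) * len(b)
```

The smaller of the two U values would then be compared against a critical value (or a library routine such as one from a statistics package would be used to obtain a p-value).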

Semantic Script

The semantic script of the presenter app according to Schank & Abelson
(Schank, R. C., & Abelson, R. P. (2013). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Psychology Press.)
© Linguistic Engineering Team der Philipps-Universität Marburg