I received my MS XXXXX. My current research interest is XXXXX. If you have any questions, please contact me at [at]unsw[dot]edu[dot]au.
Teaching Assistant at UNSW
Course: XXXX, Fall 2022
Research Assistant (XXXX - XXXX)
Department of XXXX, University of XXXXX.
Software Engineer Intern, XXXXX
Teaching robots can be challenging, particularly for novice human users who struggle to understand the robot’s learning process. Current research in interactive robot learning lacks effective methods for assessing a user’s interpretation of the robot’s learning state, which makes it difficult to compare different teaching approaches. To address these issues, we propose and demonstrate a method for assessing the user’s interpretation of the robot’s learning state in an interactive learning scenario with a robotic manipulator. Additionally, we draw on existing literature to categorise types of interface interventions that can enhance the human-robot teaching process for novice users, both pragmatically and hedonically. In a user study (N=30), we implement two of these interventions and show how they improve robot performance, teaching efficiency and interpretability. These findings provide preliminary insights into the design of effective human-robot teaching interfaces and can inform the development of future teaching approaches.
In interactive agent learning, the human may teach in a collaborative or adversarial manner. Past research has focused on collaborative teaching styles, which are common in human education settings, while overlooking adversarial ones despite promising recent results. Moreover, agent performance has been the main focal point, neglecting the perspective of the human teacher, who is crucial to the instructional process. In this work, we examine the impact of competitive and collaborative teaching styles on agent learning and human perception. We conducted a study (N=40) in which participants demonstrated a task to a computer agent in different interaction modes: collaboratively, competitively, or without interacting with the agent. Most participants reported preferring competing against the computer agent over the other two modes. Despite receiving fewer demonstrations from the user, the interactive modes (collaborative and competitive) yielded agent performance comparable to the non-interactive mode (solo). The agent was perceived as more competent in the competitive mode than in the collaborative mode, despite marginally worse in-task performance. These preliminary findings suggest that when agents or robots learn from humans, competitive interaction leads to better human perception of the agent’s learning than collaborative interaction, and to better user engagement than non-interactive learning from demonstrations.