U18 Symposium: Robot, AI and Human

Thanks a lot to all the amazing participants of the "U18 Symposium: Robot, AI and Human" for sharing their views and questioning robotics researchers with young eyes! Thanks to the speakers and the staff who made this event a great moment!

Venue:    Ichijyo Hall, Yayoi Auditorium, University of Tokyo
Date:      Sunday, October 2, 2016, 13:00-17:30
Entrance: Free of charge

Purpose: International and Japanese researchers talk with high school students about the coming relationship between humans, robots and AI, and about the current state of the science and technology. Questions, ranging from the basics to future prospects and challenges, are answered in plain words. The symposium is a unique opportunity for high school students to think and talk about the future of human life with leading researchers. Questions are welcome in Japanese and English. No translation is provided, but since the talks include many drawings, photos and movies, high school students with intermediate English skills will be able to follow the international speakers. Students interested in robots, AI and human life, whether in the humanities or the sciences, are welcome to participate.

Program:

13:00 Opening

13:10 Oussama Khatib (Stanford University, USA)

Ocean One: an Avatar for Human Reach

The promise of oceanic discovery has intrigued scientists and explorers, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. This quest to explore the oceans requires skilled human access, yet much of it is inaccessible to human divers as nearly nine-tenths of the ocean floor is at one kilometer or deeper. Accessing these depths is imperative since factors such as pollution and deep-sea trawling threaten ecology and archaeological sites. These needs demand a system that deploys human-level expertise at depth, but remotely operated vehicles (ROVs) are inadequate for the task; a robotic avatar could go where humans cannot, and yet embody human intelligence and intentions through immersive interfaces. To meet the challenge of accessing oceanic depths, Stanford University, working with KAUST's Red Sea Research Center and MEKA Robotics, developed Ocean One, a bimanual force-controlled humanoid robot that affords immediate and intuitive haptic interaction in oceanic environments. Teaming with the French Ministry of Culture's Underwater Archaeology Research Department, Stanford deployed Ocean One in an expedition in the Mediterranean to Louis XIV's flagship Lune, lying at ninety-one meters depth off the coast of Toulon. Following extensive testing at Stanford University, Ocean One was flown to France in the spring of 2016 for its maiden deployment, where it became the first robot avatar to embody a human's presence at the seabed.

13:40 Jean-Paul Laumond (LAAS, France)

Humanoid Robot: Anatomy Lesson

Humanoid robots are machines with an anthropomorphic shape that includes a head, a torso, two legs and two arms. As robots, they are able to perceive their surrounding environment via sensors, and to move by means of the many motors controlling their internal degrees of freedom. The first challenge for such machines is to maintain balance when walking. The second challenge is to select the best motion to perform a given task. Both challenges are the focus of many research projects. We will briefly report on how humanoid robots walk and how they grasp objects. The presentation will be illustrated by various experiments conducted at LAAS-CNRS on the humanoid robot HRP2-14.

14:20 Gentiane Venture (Tokyo University of Agriculture and Technology, Japan) 

Making Robots that We -Humans- Like

For most humans it is "natural" to interact with other humans and in particular to use non-verbal communication as a means to convey intention and emotion and to give a purpose to an interaction. This talk presents some of our latest advances in human science, in particular in modeling and understanding human behavior and non-verbal communication through the dynamics of motion. Our research results are applied to designing human-machine and human-robot interactions that create awareness: social awareness, emotional awareness, health awareness… The aim is to create a seamless intelligent environment where humans and machines can share the same space and collaborate. The presentation entwines concepts from the fields of AI, robotics, biomechanics, psychology and sociology.

14:50 Tea Break

15:00 Marcelo H. Ang, Jr. (National University of Singapore, Singapore)

Robotics in our Daily Lives!

The talk starts with an explanation of what a robotic system is and a description of its important sub-systems. Its relationship with information technology is highlighted, and the evolving trend of robotic systems in human environments is discussed. We are in the midst of a robotics revolution in which robots have their highest impact in our daily lives: no longer confined to factories, they are moving among us and helping us in every aspect of everyday life. The focus of the talk is on how to create a robotic system and what tools already exist. The challenges and opportunities are presented, with the objective of motivating you to contribute to the development of useful robotic systems. Many examples of robotics projects will be presented during the talk.

15:30 Dieter Fox (University of Washington, USA)

Toward Robots that Understand People and their Environment

To interact and collaborate with people in a natural way, robots must be able to recognize objects in their environments, accurately track the actions of humans, and estimate their goals and intentions. Recent years have seen dramatic improvements in robotic capabilities to model, detect, and track non-rigid objects such as human bodies, hands, and the robots' own manipulators. These developments can serve as the basis for providing robots with an unprecedented understanding of their environment and the people therein. I will use examples from our research on modeling, detecting, and tracking articulated objects to highlight these advances and discuss open problems that still need to be addressed.

16:00 Yoshihiko Nakamura (University of Tokyo, Japan)

From Robotics to Human Simulation

Robotics and AI algorithms are making human modeling possible. The human model includes a biomechanical model, a neuro-musculoskeletal model, and a behavioral model. With large amounts of data on human action and reaction, computers and robots will be able to understand the meaning of human movements, discover the skills of athletes, advise on training and rehabilitation exercises, and communicate with humans in very natural ways. This talk will explain in plain words the basic physics and mathematics behind this research, show examples of analyzing the skills of Olympic athletes, and introduce results of recent research using the K supercomputer, for which a high-precision model of the whole-body muscles and motor/sensory neurons has been developed.
 
16:30-17:00 Closing