changeset 10:62f384a20c2c
fix
author   | tatsuki
date     | Fri, 26 Jun 2015 09:09:58 +0900
parents  | 8a6f547b72c0
children | 7104d522d2f0
files    | slide.html slide.md
diffstat | 2 files changed, 1114 insertions(+), 337 deletions(-)
--- a/slide.html Thu Jun 25 06:07:48 2015 +0900 +++ b/slide.html Fri Jun 26 09:09:58 2015 +0900 @@ -1,377 +1,1155 @@ -<!DOCTYPE html> -<html> -<head> -<meta http-equiv="content-type" content="text/html;charset=utf-8"> -<title>知能ロボット</title> - -<!-- -Notes on CSS media types used: +<!DOCTYPE HTML> - 1) projection -> slideshow mode (display one slide at-a-time; hide all others) - 2) screen -> outline mode (display all slides-at-once on screen) -3) print -> print (and print preview) +<html lang="en"> +<head> + <title>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</title> + <meta charset="UTF-8"> + <meta name="viewport" content="width=1274, user-scalable=no"> + <meta name="generator" content="Slide Show (S9)"> + <meta name="author" content="Tatsuki KANAGAWA <br> Yasutaka HIGA"> + <link rel="stylesheet" href="themes/ribbon/styles/style.css"> +</head> +<body class="list"> + <header class="caption"> + <h1>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h1> + <p>Tatsuki KANAGAWA <br> Yasutaka HIGA</p> + </header> + <div class="slide cover" id="Cover"><div> + <section> + <header> + <h2>A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot</h2> + <h3 id="author">Tatsuki KANAGAWA <br> Yasutaka HIGA</h3> + <h3 id="profile">Concurrency Reliance Lab</h3> + </header> + </section> + </div></div> - Note: toggle between projection/screen (that is, slideshow/outline) mode using t-key +<!-- todo: add slide.classes to div --> +<!-- todo: create slide id from header? like a slug in blogs? --> - Questions, comments? 
- - send them along to the mailinglist/forum online @ http://groups.google.com/group/webslideshow +<div class="slide" id="2"><div> + <section> + <header> + <h1 id="abstract-robots-and-cultures">Abstract: Robots and cultures</h1> + </header> + <!-- === begin markdown block === + + generated by markdown/1.2.0 on Ruby 1.9.3 (2011-10-30) [x86_64-darwin10] + on 2015-06-26 09:06:41 +0900 with Markdown engine kramdown (1.7.0) + using options {} --> - <!-- styles --> - <style media="screen,projection"> +<!-- _S9SLIDE_ --> + +<ul> + <li>Robots, especially humanoids, are expected to perform human-like actions and adapt to our ways of communication in order to facilitate their acceptance in human society.</li> + <li>Among humans, rules of communication change depending on background culture.</li> + <li>Greetings are a part of communication in which cultural differences are strong.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="3"><div> + <section> + <header> + <h1 id="abstract-summary-of-this-paper">Abstract: Summary of this paper</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>In this paper, we present the modelling of social factors that influence greeting choice,</li> + <li>and the resulting novel culture-dependent greeting gesture and words selection system.</li> + <li>An experiment with German participants was run using the humanoid robot ARMAR-IIIb.</li> +</ul> - html, - body, - .presentation { margin: 0; padding: 0; } + + + </section> +</div></div> + +<div class="slide" id="4"><div> + <section> + <header> + <h1 id="introduction-acceptance-of-humanoid-robots">Introduction: Acceptance of humanoid robots</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Acceptance of humanoid robots in human societies is a critical issue.</li> + <li>One of the main factors is the relationship between the background culture of human partners and acceptance. 
+ <ul> + <li>ecologies, social structures, philosophies, educational systems.</li> + </ul> + </li> +</ul> + + + + </section> +</div></div> - .slide { display: none; -position: absolute; -top: 0; left: 0; -margin: 0; -border: none; -padding: 2% 4% 0% 4%; /* css note: order is => top right bottom left */ - -moz-box-sizing: border-box; - -webkit-box-sizing: border-box; - box-sizing: border-box; -width: 100%; height: 100%; /* css note: lets use border-box; no need to add padding+border to get to 100% */ - overflow-x: hidden; overflow-y: auto; - z-index: 2; - } +<div class="slide" id="5"><div> + <section> + <header> + <h1 id="introduction-culture-adapted-greetings">Introduction: Culture adapted greetings</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>In the work of Trovato et al., culture-dependent acceptance and discomfort relating to greeting gestures were found in a comparative study with Egyptian and Japanese participants.</li> + <li>The importance of culture-specific customization of greeting was thus confirmed.</li> + <li>Acceptance of robots can be improved if they are able to adapt to different kinds of greeting rules.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="6"><div> + <section> + <header> + <h1 id="introduction-methods-of-implementation-adaptive-behaviour">Introduction: Methods of implementing adaptive behaviour</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Adaptive behaviour in robotics can be achieved through various methods: + <ul> + <li>reinforcement learning</li> + <li>neural networks</li> + <li>genetic algorithms</li> + <li>function regression</li> + </ul> + </li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="7"><div> + <section> + <header> + <h1 id="introduction-greeting-interaction-with-robots">Introduction: Greeting interaction with robots</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Robots are expected to interact and communicate with humans of different cultural backgrounds in a natural 
way.</li> + <li>It is therefore important to study greeting interaction between robots and humans. + <ul> + <li>ARMAR-III: greeted the Chancellor of Germany with a handshake</li> + <li>ASIMO: is capable of performing a wider range of greetings</li> + <li>(a handshake, waving both hands, and bowing)</li> + </ul> + </li> +</ul> + + + + </section> +</div></div> -.slide.current { display: block; } /* only display current slide in projection mode */ +<div class="slide" id="8"><div> + <section> + <header> + <h1 id="introduction-objectives-of-this-paper">Introduction: Objectives of this paper</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The robot should be trained with sociology data related to one country, and evolve its behaviour by engaging with people of another country in a small number of interactions.</li> + <li>For the implementation of the gestures and the interaction experiment, we used the humanoid robot ARMAR-IIIb.</li> + <li>As the experiment is carried out in Germany, the interactions are with German participants, while preliminary training is done with Japanese data, which is culturally extremely different.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="9"><div> + <section> + <header> + <h1 id="introduction-armar-iiib">Introduction: ARMAR-IIIb</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/ARMAR-IIIb.png" style="width: 350px; height: 350px; margin-left: 200px;" /></p> + + -.slide .stepcurrent { color: black; } -.slide .step { color: silver; } /* or hide next steps e.g. 
.step { visibility: hidden; } */ + </section> +</div></div> + +<div class="slide" id="10"><div> + <section> + <header> + <h1 id="introduction-target-scenario">Introduction: Target scenario</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The idea behind this study is a typical scenario in which a foreigner visiting a country for the first time greets local people in an inappropriate way as long as he is unaware of the rules that define the greeting choice. + <ul> + <li>(e.g., a Westerner in Japan)</li> + </ul> + </li> + <li>For example, he might want to shake hands or hug, and will receive a bow instead.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="11"><div> + <section> + <header> + <h1 id="introduction-objectives-of-this-work">Introduction: Objectives of this work</h1> + </header> + <!-- _S9SLIDE_ --> -.slide { - /* - background-image: -webkit-linear-gradient(top, blue, aqua, blue, aqua); - background-image: -moz-linear-gradient(top, blue, aqua, blue, aqua); - */ -} -</style> +<ul> + <li>This work is an application of a study of sociology into robotics.</li> + <li>Our contribution is to synthesize the complex and sparse data related to greeting types into a model;</li> + <li>create a selection and adaptation system;</li> + <li>and implement the greetings in a way that can potentially be applied to any robot.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="12"><div> + <section> + <header> + <h1 id="greeting-selection-greetings-among-humans">Greeting Selection: Greetings among humans</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Greetings are the means of initiating and closing an interaction.</li> + <li>We desire that robots be able to greet people in a similar way to humans.</li> + <li>For this reason, understanding current research on greetings in sociological studies is necessary.</li> + <li>Moreover, depending on cultural background, there can be different rules of engagement in human-human 
interaction.</li> +</ul> + + + + </section> +</div></div> -<style media="screen"> -.slide { border-top: 1px solid #888; } -.slide:first-child { border: none; } -</style> +<div class="slide" id="13"><div> + <section> + <header> + <h1 id="greeting-selection-solution-for-selection">Greeting Selection: Solution for selection</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>A unified model of greetings does not seem to exist in the literature, but a few studies have attempted a classification of greetings.</li> + <li>Some more specific studies have been done on handshaking.</li> +</ul> + + -<style media="print"> -.slide { page-break-inside: avoid; } -.slide h1 { page-break-after: avoid; } -.slide ul { page-break-inside: avoid; } -</style> + </section> +</div></div> + +<div class="slide" id="14"><div> + <section> + <header> + <h1 id="greeting-selection-classes-for-greetings">Greeting Selection: Classes for greetings</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>A classification of greetings was first attempted by Friedman based on intimacy and commonness.</li> + <li>The following greeting types were mentioned: smile; wave; nod; kiss on mouth; kiss on cheek; hug; handshake; pat on back; rising; bow; salute; and kiss on hand.</li> + <li>Greenbaum et al. 
also performed a gender-related investigation, while [24] contained a comparative study between Germans and Japanese.</li> +</ul> + -<!-- add js lib (jquery) --> -<script src="js/jquery-1.7.min.js"></script> + </section> +</div></div> + +<div class="slide" id="15"><div> + <section> + <header> + <h1 id="greeting-selection-factors-on-classification">Greeting Selection: Factors on Classification</h1> + </header> + <!-- _S9SLIDE_ --> -<!-- S6 JS --> -<script src="js/jquery.slideshow.js"></script> -<script src="js/jquery.slideshow.counter.js"></script> -<script src="js/jquery.slideshow.controls.js"></script> -<script> -$(document).ready( function() { - Slideshow.init(); +<ul> + <li>‘terms’ : same terms with different meanings, or different terms with the same meaning.</li> + <li>‘location’ : influences intimacy and greeting words. (private or public)</li> + <li>‘intimacy’ : is influenced by physical distance, eye contact, gender, location, and culture. (Social Distance)</li> + <li>‘Time’ : time of the day is important for the choice of words.</li> + <li>‘Politeness’, ‘Power Relationship’, ‘culture’ and more.</li> +</ul> + + - // Example 2: Start Off in Outline Mode - // Slideshow.init( { mode: 'outline' } ); + </section> +</div></div> + +<div class="slide" id="16"><div> + <section> + <header> + <h1 id="greeting-selection-factors-on-classification-1">Greeting Selection: Factors on Classification</h1> + </header> + <!-- _S9SLIDE_ --> - // Example 3: Use Custom Transition - // Slideshow.transition = transitionScrollUp; - // Slideshow.init(); +<ul> + <li>the factors to be cut are greyed out.</li> +</ul> + +<p><img src="pictures/factors.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + + + + </section> +</div></div> - // Example 4: Start Off in Autoplay Mode with Custom Transition - // Slideshow.transition = transitionScrollUp; - // Slideshow.init( { mode: 'autoplay' } ); - } ); -</script> +<div class="slide" id="17"><div> + <section> + <header> + <h1 
id="model-of-greetings-assumptions-1---5">Model of Greetings: Assumptions (1 - 5)</h1> + </header> + <!-- _S9SLIDE_ --> -</head> -<body> - -<div class="presentation"> +<ul> + <li>The simplification was guided by the following ten assumptions.</li> + <li>Only two individuals (a robot and a human participant): we do not take into consideration a higher number of individuals.</li> + <li>Eye contact is taken for granted.</li> + <li>Age is considered part of ‘power relationship’.</li> + <li>Regionality is not considered.</li> + <li>Setting is not considered.</li> +</ul> + + + + </section> +</div></div> -<!-- add slides here; example --> +<div class="slide" id="18"><div> + <section> + <header> + <h1 id="model-of-greetings-assumptions-6---10">Model of Greetings: Assumptions (6 - 10)</h1> + </header> + <!-- _S9SLIDE_ --> -<div class='cover'> -<h1>Implementation on ARMAR-IIIb(ARMAR-IIIbの実装)</h1> -<font size = 5> -<p>ARMAR-III is designed for close cooperation with humans(アーマーは人間との緊密な協力の為に設計されたロボットです)</p> -<p> ARMAR-III has a humanlike appearance(アーマーは人間に似た外見を持つ)</p> -<p> sensory capabilities similar to humans(それは人間に似た感覚等を持つため)</p> -<p>ARMAR-IIIb is a slightly modified version with different shape to the head, the trunk, and the hands(アーマーⅢ bは頭、胴体、手に修正を加えた物)</p> -</font> -</div> +<ul> + <li>Physical distance is close enough to allow interaction</li> + <li>Gender is intended to be a same-sex dyad</li> + <li>Affect is considered together with ‘social distance’</li> + <li>Time since the last interaction is partially included in ‘social distance’</li> + <li>Intimacy and politeness are not necessary</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="19"><div> + <section> + <header> + <h1 id="model-of-greetings-basis-of-classification">Model of Greetings: Basis of classification</h1> + </header> + <!-- _S9SLIDE_ --> -<div> -<h1> Implementation of gestures(ジェスチャーの実装)</h1> -<font size = 5> -<p>The implementation on the robot of the set of gestures it is not strictly 
hardwired to the specific hardware.(ジェスチャのロボットへの実装はハードウェアに行っていない)</p> -<p>manually defining the patterns of the gestures(ジェスチャーのパターンを定義する)</p> -<p>Definition gesture is performed by Master Motor Map(MMM) format and is converted into robot(ジェスチャの定義はMMM形式で行い、ロボット用に変換している</p> -</font> -</div> +<ul> + <li>Input + <ul> + <li>All the other factors are then considered features of a mapping problem</li> + <li>They are categorical data, as they can assume only two or three values.</li> + </ul> + </li> + <li>Output + <ul> + <li>The outputs can also assume only a limited set of categorical values.</li> + </ul> + </li> +</ul> + + + + </section> +</div></div> -<div> -<h1>Master Motor Map</h1> -<font size=5> -<p>The MMM is a reference 3D kinematic model(MMMは3D運動学的モデル)</p> -<p>providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, visualization modules(モーションキャプチャ、動作認識、模倣、可視化システムの統合した表現を提供する) </p> -<p>This representation can be subsequently converted to other representations, such as action recognizers, 3D visualization, or implementation into different robots(この表現は、異なるロボット用の表現に変換することができる)</p> -<p> The MMM is intended to become a common standard in the robotics community(MMMはロボット工学分野における標準になるのを目指している)</p> -<p>ここに図4を入れる</p> -<p>図4はMMMを使った表現の変換図かな?</p> +<div class="slide" id="20"><div> + <section> + <header> + <h1 id="model-of-greetings-features-mapping-discriminants-classes-and-possible-status">Model of Greetings: Features, mapping discriminants, classes, and possible status</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/classes.png" style="width: 60%; margin-left: 150px;" /></p> + + + + </section> +</div></div> + +<div class="slide" id="21"><div> + <section> + <header> + <h1 id="model-of-greetings-overview-of-the-greeting-model">Model of Greetings: Overview of the greeting model</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Greeting model takes context data as input and produces the 
appropriate robot posture and speech for that input.</li> + <li>The two outputs are evaluated by the participants of the experiment through written questionnaires.</li> + <li>These training data that we get from the experiment are given as feedback to the two mappings.</li> +</ul> + -<p>MMMのモデルのデータ展開とかはいらないかな</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="22"><div> + <section> + <header> + <h1 id="model-of-greetings-overview-of-the-greeting-model-1">Model of Greetings: Overview of the greeting model</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/model_overview.png" style="width: 75%; margin-left: 120px;" /></p> + + + + </section> +</div></div> + +<div class="slide" id="23"><div> + <section> + <header> + <h1 id="greeting-selection-system-training-data">Greeting selection system training data</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Mappings can be trained to an initial state with data taken from the literature of sociology studies.</li> + <li>Training data should be classified through some machine learning method or formula.</li> + <li>We decided to use conditional probabilities: in particular the Naive Bayes formula to map data.</li> + <li>Naive Bayes only requires a small amount of training data.</li> +</ul> + + -<div> -<h1>MMM2</h1> -<font size=5> -<p>The body model of MMM model can be seen in the left-hand illustration in Figure 5(図5の左のモデルは、MMMのボディーモデルです</p> -<p>It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots(このモデルには通常のロボットには実装されていない鎖骨等の関節が含まれている)</p> -<p>A conversion module is necessary to perform a transformation between this kinematic model and ARMAR-IIIb kinematic model(ARMAR-IIIbとMMMモデル間の変換を、変換モジュールを用いて行う必要がある)</p> -<p></p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="24"><div> + <section> + <header> + <h1 id="model-of-greetings-details-of-training-data">Model of Greetings: Details of training data</h1> + </header> + <!-- _S9SLIDE_ 
--> + +<ul> + <li>While training data of gestures can be obtained from the literature, data of words can also be obtained from text corpora.</li> + <li>English: English corpora, such as the British National Corpus or the Corpus of Historical American English, are used.</li> + <li>Japanese: extracted from data sets in [24, 37, 41-43], since analyzing Japanese corpora is difficult.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="25"><div> + <section> + <header> + <h1 id="model-of-greetings-location-assumption">Model of Greetings: Location Assumption</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The location of the experiment was Germany.</li> + <li>For this reason, the only dataset needed was the Japanese one.</li> + <li>As stated in the motivations at the beginning of this paper, the robot should initially behave like a foreigner.</li> + <li>ARMAR-IIIb, trained with Japanese data, will have to interact with German people and adapt to their customs.</li> +</ul> + -<div> -<h1>converter</h1> -<font size=5> -<p>converter given joint angles would consist in a one-to-one mapping between an observed human subject and the robot(コンバーターは与えられた関節角度等から人とロボット間の1対1のマッピングを構成する)</p> -<p>differences in the kinematic structures of a human and the robot one-to-one mapping can hardly show acceptable results in terms of a human like appearance of the reproduced movement(通常は人間とロボットの構造の違いがあるため変換できない。)ここちょっと怪しい</p> -<p> this problem is addressed by applying a post-processing procedure in joint angle space(この問題は関節角度を以下のように調整することで解決する</p> -<p>the joint angles, given in the MMM format,are optimized concerning the tool centre point position(MMM形式で与えられた関節角度はtool centre point position(調べる何かのロボット用語っぽい)で最適化されている)</p> -<p>solution is estimated by using the joint configuration of the MMM model on the robot(MMMモデルを使ってロボットの関節構造を推定することでこの問題を解決する)</p> -<p>もうちょい詰めたい感ある</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="26"><div> + <section> + <header> + <h1 
id="model-of-greetings-mappings-and-questionnaires">Model of Greetings: Mappings and questionnaires</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The mapping is represented by a dataset, initially built from training data, as a table containing weights for each context vector corresponding to each greeting type.</li> + <li>We now need to update these weights.</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="27"><div> + <section> + <header> + <h1 id="feedback-from-three-questionnaires">Feedback from three questionnaires</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Whenever a new feature vector is given as an input, it is checked to see whether it is already contained in the dataset or not.</li> + <li>In the former case, the weights are directly read from the dataset;</li> + <li>in the latter case, they get assigned the values of probabilities calculated through the Naive Bayes classifier.</li> + <li>The output is the chosen greeting, after which the interaction will be evaluated through a questionnaire.</li> +</ul> + -<div> -<h1>MMMのサポートみたいなタイトル</h1> -<font size=5> -<p>The MMM framework has a high support for every kind of human-like robot(MMMはほとんどの人型ロボットをサポートしています)</p> -<p>MMM can define the transfer rules(転送のルールを定義することができる)</p> -<p>Using the conversion rules, it can be converted from the MMM Model to the movement of the robot(転送ルールを用いて、MMMモデルからロボットのモーションへ変換する</p> -<p>may not be able to convert from MMM model for a specific robot(特定のロボットに対してMMMモデルからの変換ができない可能性がある</p> -<p> the motion representation parts of the MMM can be used nevertheless(しかしMMMのモーションの表現部分は使用できる</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="28"><div> + <section> + <header> + <h1 id="model-of-greetings-three-questionnaires-for-feedback">Model of Greetings: Three questionnaires for feedback</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Answers to the questionnaires are on a five-point semantic differential scale: + <ol> + <li>How appropriate 
was the greeting chosen by the robot for the current context?</li> + <li>(If the evaluation at point 1 was <= 3) which greeting type would have been appropriate instead?</li> + <li>(If the evaluation at point 1 was <= 3) which context would have been appropriate, if any, for the greeting type of point 1?</li> + </ol> + </li> +</ul> -<div> -<h1>MMMを使った変換例みたいな感じ</h1> -<font size=5> -<p>After programming the postures directly on the MMM model they were processed by the converter(図6の説明、MMMモデルに直立姿勢をプログラミングした後変換した)</p> -<p>Conversion is not easy(変換は簡単ではない)</p> -<p>the human model contains many joints, which are not present in the robot configuration(なぜなら人体はロボットに存在しない多くの関節を持っている)</p> -<p>ARMAR is not bending the body when performing a bow(ARMARは挨拶を行う際身体を大きく曲げれない)</p> -<p>It was expressed using a portion present in the robot (e.g., the neck)(ロボットに存在する部分を使って表現した)</p> -<p>図7はARMARでの変換後の動き</p> -</font> -</div> + + </section> +</div></div> + +<div class="slide" id="29"><div> + <section> + <header> + <h1 id="model-of-greetings-feedback-and-terminate-condition">Model of Greetings: feedback and terminate condition</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Weights of the affected features are multiplied by a positive or negative reward (inspired by reinforcement learning) which is calculated proportionally to the evaluation.</li> + <li>Mappings stop evolving when the following two stopping conditions are satisfied</li> + <li>all possible values of all features have been explored</li> + <li>and the moving average of the latest 10 state transitions has decreased below a certain threshold.</li> +</ul> + -<div> -<h1>MCA</h1> -<font size=5> -<p>The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework)interface, where the greetings model was also implemented(姿勢や挨拶のモデルはMCAインターフェースを使用して実行する)</p> -<p>the list of postures is on the left together with the option(姿勢のリストはオプションと一緒に左側にある)</p> -<p>When that option is activated, it is 
possible to select the context parameters through the radio buttons on the right(このオプションが有効になると右側のラジオボタンから増強のパラメータを選択することが可能になる)</p> + </section> +</div></div> + +<div class="slide" id="30"><div> + <section> + <header> + <h1 id="model-of-greetings-summary">Model of Greetings: Summary</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Thanks to this implementation, mappings can evolve quickly, without requiring hundreds or thousands of iterations</li> + <li>but rather a number comparable to the low number of interactions humans need to understand and adapt to social rules.</li> +</ul> + + + + </section> +</div></div> -<p>ここに図8を入れる</p> -</font> -</div> +<div class="slide" id="31"><div> + <section> + <header> + <h1 id="todo-please-add-slides-over-chapter-3-implementation-of-armar-iiib">TODO: Please Add slides over chapter (3. implementation of ARMAR-IIIb)</h1> + </header> + <!-- _S9SLIDE_ --> + + + + + </section> +</div></div> + +<div class="slide" id="32"><div> + <section> + <header> + <h1 id="implementation-on-armar-iiib">Implementation on ARMAR-IIIb</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>ARMAR-III is designed for close cooperation with humans</li> + <li>ARMAR-III has a humanlike appearance</li> + <li>sensory capabilities similar to humans</li> + <li>ARMAR-IIIb is a slightly modified version with different shape to the head, the trunk, and the hands</li> +</ul> + -<div> -<h1>Implementation of words</h1> -<font size=5> -<p> Word set of greetings has been translated into both German and Japanese, as in Table 2(挨拶の単語のセットはドイツ語と日本語)で表2のようになります</p> -<p>For example,Japan it is common to use a specific greeting in the workplace 「otsukaresama desu」(そして、例えば日本の職場ではお疲れ様ですという挨拶を使うのが一般的です)</p> -<p> where a standard greeting like 「konnichi wa」 would be inappropriate(職場で「こんにちは」と言った挨拶は不適切です)</p> -<p>In German, such a greeting type does not exist(しかし、ドイツではこのような特定の挨拶のタイプは存在しない)</p> -<p>but the meaning of “thank you for your effort” at work can be directly translated into 
German(しかし、お疲れ様です)をドイツ語に翻訳することが出来ます</p> -<p> the robot knows dictionary terms, but does not understand the difference in usage of these words in different contexts(ロボットは言葉の意味は知っているが、状況に応じた使い方の違いを理解していない)</p> -<p>ここに表8を書かなきゃ!!</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="33"><div> + <section> + <header> + <h1 id="implementation-of-gestures">Implementation of gestures</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The implementation of the set of gestures on the robot is not strictly hardwired to the specific hardware</li> + <li>manually defining the patterns of the gestures</li> + <li>Gestures are defined in the Master Motor Map (MMM) format and converted for the robot</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="34"><div> + <section> + <header> + <h1 id="master-motor-map">Master Motor Map</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The MMM is a reference 3D kinematic model</li> + <li>providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, visualization modules</li> + <li>This representation can be subsequently converted to other representations, such as action recognizers, 3D visualization, or implementation into different robots</li> + <li>The MMM is intended to become a common standard in the robotics community</li> +</ul> + -<div> -<h1>Implementation of words</h1> -<font size=5> -<p>These words have been recorded through free text-to-speech software into wave files that could be played by the robot(また、これらの単語は、テキストをwaveファイルに変換するソフトウェアを介して記録されています)</p> -<p>ARMAR does not have embedded speakers in its body(アーマーは本体にスピーカーを持っていない)</p> -<p>added two small speakers behind the head and connected them to another computer(頭の後ろに小さな2つのスピーカーを追加し別のコンピュータに接続している)</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="35"><div> + <section> + <header> + <h1 id="master-motor-map-1">Master Motor Map</h1> + </header> + 
<!-- _S9SLIDE_ --> + +<p><img src="pictures/MMM.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + + + + </section> +</div></div> + +<div class="slide" id="36"><div> + <section> + <header> + <h1 id="master-motor-map-2">Master Motor Map</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The body model of MMM model can be seen in the left-hand illustration in Figure</li> + <li>It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots</li> + <li>A conversion module is necessary to perform a transformation between this kinematic model and ARMAR-IIIb kinematic model</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="37"><div> + <section> + <header> + <h1 id="master-motor-map-3">Master Motor Map</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/MMMModel.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + -<div> -<h1> Experiment description</h1> -<font size=5> -<p>Experiments were conducted at room as shown in Figure 9 , Germany.(実験は、ドイツの図9に示されているような部屋で行われました)</p> -<p>Participants were 18 German people of different ages, genders, workplaces(参加者は、異なる年齢、性別、職場の18人のドイツ人です)</p> -<p>robot could be trained with various combinations of context(ロボットは様々な状況の組み合わせでトレーニングができる)</p> -<p>It was not possible to include all combinations of feature values in the experiment(しかし、全ての状況の組み合わせで実験は行えなかった)</p> -<p>for example there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’](例えば、職場の同僚は必ず社会的立場を知っているはずなので、両方を満たすことは出来ない)</p> -<p>the [‘location’:‘private’] case was left out, because it is impossible to simulate the interaction in a private context, such as one’s home(また、実験室では、自分の家のような場所での交流をシミュレートすることは不可能であるため、このような状況でも実験も行っていない)</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="38"><div> + <section> + <header> + <h1 id="converter">converter</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Given joint angles, the converter would consist in a one-to-one mapping between an observed human subject and the robot</li> + <li>This problem is addressed by applying a post-processing procedure in joint angle space</li> + <li>the joint angles, given in the MMM format, are optimized concerning the tool centre point position</li> + <li>A solution is estimated by using the joint configuration of the MMM model on the robot</li> +</ul> + -<div> -<h1>Experiment description</h1> -<font size=5> -<p>repeated the experiment more than(また、参加者の一部は複数回実験を行いました)</p> -<p>for example experiment is repeated at different times(異なる時間に繰り返し行なったり)</p> -<p>Change the acquaintance from unknown social distance at the time of exchange(交流時の社会的立場を不明から知人に変える)</p> -<p>we could collect more data by manipulating the value of a single feature(このように、特徴の値を操作することで多くのデータを取得した)</p> -</font> -</div> + </section> +</div></div> + +<div class="slide" id="39"><div> + <section> + <header> + <h1 id="mmm-support">MMM support</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The MMM framework has high support for every kind of human-like robot</li> + <li>MMM can define the transfer rules</li> + <li>Using the conversion rules, it can be converted from the MMM Model to the movement of the robot</li> + <li>It may not be possible to convert from the MMM model for a specific robot</li> + <li>the motion representation parts of the MMM can be used nevertheless</li> +</ul> + + + + </section> +</div></div> + +<div class="slide" id="40"><div> + <section> + <header> + <h1 id="conversion-example-of-mmm">Conversion example of MMM</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>After programming the postures directly on the MMM model, they were processed by the converter</li> + <li>the human model contains many joints, which are not present in the robot configuration</li> + <li>ARMAR cannot bend its body much when performing a bow</li> + <li>It was expressed using a portion present in the robot (e.g., the neck)</li> +</ul> + + + + </section> +</div></div> + +<div 
class="slide" id="41"><div> + <section> + <header> + <h1 id="gestureexample">Gesture example</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/GestureExample.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + -<div> -<h1>Statistics of participants</h1> -<font size=5> -<p>The demographics of the 18 participants were as follows</p> -<li>gender: M: 10; F: 8</li> -<li>average age: 31.33</li> -<li>age standard deviation: 13.16</li> -<p>the number of interactions was determined by the stopping condition of the algorithm</p> -<p>The number of interactions taking repetitions into account was 30</p> -<li>gender: M: 18; F: 12</li> -<li>average age: 29.43</li> -<li>age standard deviation: 12.46</li> -</font> -</div> + </section> +</div></div> + <div class="slide" id="42"><div> + <section> + <header> + <h1 id="implementgesturearmar">Implemented gestures on ARMAR-III</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/ImplementGestureARMARⅢ.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + + + + </section> +</div></div> + <div class="slide" id="43"><div> + <section> + <header> + <h1 id="modular-controller-architecture-a-modular-software-framework">Modular Controller Architecture, a modular software framework</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greeting model was also implemented</li> + <li>the list of postures is shown on the left, together with an option</li> + <li>When that option is activated, it is possible to select the context parameters through the radio buttons on the right</li> +</ul> + + + + </section> +</div></div> -<div> -<h1>Experiment setup</h1> -<font size=5> -<p>The objective of the experiment was to adapt ARMAR-IIIb greeting behaviour from Japanese to German culture.</p> -<p>For this purpose, the algorithm running on ARMAR was trained with Japanese sociology data only, and two mappings M0J were built, one for gestures and one for words</p> -<p>After interacting with German people, the resulting mappings M1 were expected to synthesize the rules of greeting interaction in Germany</p> -<p>(M0G is not mentioned here; write it up properly if it is needed later)</p> -</font> -</div> +<div class="slide" id="44"><div> + <section> + <header> + <h1 id="modular-controller-architecture-a-modular-software-framework-1">Modular Controller Architecture, a modular software framework</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/MCA.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + + + + </section> +</div></div> + <div class="slide" id="45"><div> + <section> + <header> + <h1 id="implementation-of-words">Implementation of words</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Greeting words are used in two languages, Japanese and German</li> + <li>For example, in Japan it is common to use a specific workplace greeting, 「otsukaresama desu」</li> + <li>where a standard greeting like 「konnichi wa」 would be inappropriate</li> + <li>In German, such a greeting type does not exist</li> + <li>but the meaning of “thank you for your effort” at work can be translated directly into German</li> + <li>the robot knows dictionary terms, but does not understand how the usage of these words differs across contexts</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="46"><div> + <section> + <header> + <h1 id="table-of-greeting-words">table of greeting words</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/tableofgreetingwords.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + -<div> -<h1>The experiment protocol is as follows</h1> -<font 
size=5> -<ol> -<li>ARMAR-IIIb is trained with Japanese data</li> -<li>The context of the encounter is given as input to the algorithm and the robot is prepared</li> -<li>Participants are instructed to enter the room and to interact with the robot while taking the current situation into account</li> -<li>The participant enters the room shown in Figure 9 (is an explanation of the curtain needed?)</li> -<li>The robot’s greeting is triggered by an operator as the human participant approaches</li> -<li>After the two parties have greeted each other, the robot is turned off</li> -<li>The participant evaluates the robot’s behaviour through a questionnaire</li> -<li>The mapping is updated using the subject’s feedback</li> -<li>Repeat steps 2–8 for each participant</li> -<li>Training stops once the state changes have stabilized</li> -</ol> -</font> -</div> + </section> +</div></div> + <div class="slide" id="47"><div> + <section> + <header> + <h1 id="implementation-of-words-1">Implementation of words</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>These words were recorded with free text-to-speech software into wave files that the robot can play</li> + <li>ARMAR does not have speakers embedded in its body</li> + <li>so two small speakers were added behind the head and connected to another computer</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="48"><div> + <section> + <header> + <h1 id="experiment-description">Experiment description</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Experiments were conducted in Germany, in the room shown in the figure +<img src="pictures/room.png" style="width: 60%; margin-left: 150px; margin-top: 50px;" /></li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="49"><div> + <section> + <header> 
+ <h1 id="experiment-description2">Experiment description 2</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Participants were 18 German people of different ages, genders, and workplaces</li> + <li>so the robot could be trained with various combinations of context features</li> + <li>It was not possible to include all combinations of feature values in the experiment</li> + <li>for example, there cannot be a profile with both [‘location’: ‘workplace’] and [‘social distance’: ‘unknown’]</li> + <li>the [‘location’:‘private’] case was left out, because it is impossible to simulate an interaction in a private context, such as one’s home</li> +</ul> + -<div> -<h1>Results</h1> -<font size=5> -<p>The experiment was carried out through 30 interactions</p> -<p>all greeting gestures and word types had the chance of being selected at least once</p> -<p>The number of times each context feature was used is listed below</p> -<li>gender 34 times</li> -<li>location 50 times</li> -<li>power relationship 56 times</li> -<li>social distance 46 times</li> -<li>time of the day 39 times</li> -</font> -</div> + </section> +</div></div> + <div class="slide" id="50"><div> + <section> + <header> + <h1 id="experiment-description3">Experiment description 3</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>Some participants repeated the experiment more than once</li> + <li>for example, the experiment was repeated at different times of the day</li> + <li>or the social distance during the interaction was changed from unknown to acquaintance</li> + <li>by manipulating the value of a single feature in this way, we could collect more data</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="51"><div> + <section> + <header> + <h1 id="statistics-of-participants">Statistics of participants</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The demographics of the 18 participants were as follows + <ol> + <li>gender: M: 10; F: 8</li> + <li>average age: 31.33</li> + <li>age 
standard deviation: 13.16</li> + </ol> + </li> +</ul> + -<div> -<h1>Results</h1> -<font size=5> -<p>(Translate the parts before and after the formula later)</p> -<p>The new mapping of gestures was verified (translate the passage from “J” up to “towards M0G.”)</p> -</font> -</div> + </section> +</div></div> + <div class="slide" id="52"><div> + <section> + <header> + <h1 id="tatistics-of-participants">Statistics of participants</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>the number of interactions was determined by the stopping condition of the algorithm</li> + <li>The number of interactions taking repetitions into account was 30 + <ol> + <li>gender: M: 18; F: 12</li> + <li>average age: 29.43</li> + <li>age standard deviation: 12.46</li> + </ol> + </li> +</ul> + -<div> -<h1>Result</h1> -<font size=5> -<p>The changes in gestures over the experiment are described below</p> -<p>Deep bowing was greatly reduced and the handshake became common</p> -<p>The hug, which does not exist in the Japanese mapping, appeared</p> -<p>This is because participants gave feedback that the hug is appropriate</p> -<p>(Table 3 could go here)</p> -</font> -</div> + </section> +</div></div> + <div class="slide" id="53"><div> + <section> + <header> + <h1 id="the-experiment-protocol-is-as-follows-15">The experiment protocol is as follows 1–5</h1> + </header> + <!-- _S9SLIDE_ --> + +<ol> + <li>ARMAR-IIIb is trained with Japanese data</li> + <li>The context of the encounter is given as input to the algorithm and the robot is prepared</li> + <li>Participants are instructed to enter the room and to interact with the robot while taking the current situation into account</li> + <li>The participant enters the room</li> + <li>The robot’s greeting is triggered by an operator as the human participant approaches</li> +</ol> + + + + </section> +</div></div> -<div> -<h1>Result</h1> -<font size=5> -<p>The biggest change in the words mapping is that the workplace greeting disappeared</p> -<p>The use of the informal greeting appears as a 
small amount of change</p> -<p>(Not sure how to translate “the changes cannot be considered significant.”)</p> -<p>some other patterns can be found in the gesture mappings, judging from the columns of Table 3 for T = 0</p> -<p>In Japan there is a pattern to the gestures based on social distance</p> -<p>but in Germany this pattern does not exist</p> -<p>This is a characteristic of Japanese society</p> -<p>The two mappings draw on the Japanese sociology literature and on the feedback of the German participants</p> -<p>(Table 4 here)</p> -</font> -</div> + </section> +</div></div> + <div class="slide" id="54"><div> + <section> + <header> + <h1 id="the-experiment-protocol-is-as-follows-610">The experiment protocol is as follows 6–10</h1> + </header> + <!-- _S9SLIDE_ --> + +<ol> + <li>After the two parties have greeted each other, the robot is turned off</li> + <li>The participant evaluates the robot’s behaviour through a questionnaire</li> + <li>The mapping is updated using the subject’s feedback</li> + <li>Repeat steps 2–8 for each participant</li> + <li>Training stops once the state changes have stabilized</li> +</ol> + + + + </section> +</div></div> + <div class="slide" id="55"><div> + <section> + <header> + <h1 id="results">Results</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The changes in gestures over the experiment are described below</li> + <li>Deep bowing was greatly reduced and the handshake became common</li> + <li>The hug, which does not exist in the Japanese mapping, appeared</li> + <li>This is because participants gave feedback that the hug is appropriate</li> +</ul> + + -<div> -<h1>Discussion</h1> -<font size=5> -<p>(Sections 5.1–5.2 repeat earlier material, so they may not be needed on the slides)</p> -<p>(Maybe this part is unnecessary too?)</p> -<p>The concept of a greeting selection system is novel</p> -<p>its modelling and application can be useful in 
robotics</p> -<p>In particular, one advantage of the current implementation is that the gestures are not robot-specific, since the Master Motor Map framework can be used to convert them to any other humanoid robot</p> -</font> -</div> + </section> +</div></div> + <div class="slide" id="56"><div> + <section> + <header> + <h1 id="results-1">Results</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/GestureTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + + + + </section> +</div></div> + <div class="slide" id="57"><div> + <section> + <header> + <h1 id="results-2">Results</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The biggest change in the words mapping is that the workplace greeting disappeared</li> + <li>A smaller change is the use of the informal greeting</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="58"><div> + <section> + <header> + <h1 id="results-3">Results</h1> + </header> + <!-- _S9SLIDE_ --> + +<p><img src="pictures/GreetingWordTable.png" style="width: 60%; margin-left: 150px; margin-top: -50px;" /></p> + -<div> -<h1>Limitations and improvements</h1> -<font size=5> -<p>In the current implementation, there are also a few limitations. 
</p> -<p>The first obvious limitation is related to the manual input of context data</p> -<p>→ The integrated use of cameras would make it possible to determine features such as gender, age, and race of the human</p> -<p>A speech recognition system and cameras could also detect the human's own greeting</p> -<p>→ The robot itself could then judge whether its greeting was correct</p> -<p>This judgement could use the distance to the partner, the timing of the greeting, head orientation, and other information to check whether the response to the greeting matches what is expected</p> -<p>This information would be more accurate than information collected with a questionnaire</p> -<p>The set of context features could be extended by using several literature sources</p> -<p>(This was dropped while simplifying the greeting model)</p> -</font> -</div> + + </section> +</div></div> + <div class="slide" id="59"><div> + <section> + <header> + <h1 id="limitations-and-improvements">Limitations and improvements</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The first obvious limitation is related to the manual input of context data</li> + <li>The integrated use of cameras would make it possible to determine features such as gender, age, and race of the human</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="60"><div> + <section> + <header> + <h1 id="limitations-and-improvements-1">Limitations and improvements</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The robot itself could judge whether its greeting was correct</li> + <li>A speech recognition system and cameras could also detect the human’s own greeting</li> + <li>The decision could check the distance to the partner , the timing of the 
greeting, head orientation, or other information, to judge whether the response to the greeting is correct and matches what is expected</li> +</ul> + + + + </section> +</div></div> -<div> -<h1>Different kinds of embodiment</h1> -<font size=5> -<p>A humanoid robot has a body similar to the human one</p> -<p>But robots can vary in shape, size, and capabilities</p> -<p>The type of greeting should be selected so that it has the appropriate effect for each robot</p> -<p>By extending this, a robot could, depending on its physical characteristics, discover by itself the best method of interacting with humans</p> -<p>Communication would thus rely on sight or hearing</p> -</font> -</div> -</div> -</div> <!-- presentation --> +<div class="slide" id="61"><div> + <section> + <header> + <h1 id="limitations-and-improvements-2">Limitations and improvements</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>The set of context features could be extended by using several literature sources</li> +</ul> + + + + </section> +</div></div> + <div class="slide" id="62"><div> + <section> + <header> + <h1 id="different-kinds-of-embodiment">Different kinds of embodiment</h1> + </header> + <!-- _S9SLIDE_ --> + +<ul> + <li>A humanoid robot has a body similar to the human one</li> + <li>but robots can vary in shape, size, and capabilities</li> + <li>The type of greeting should be selected so that it has the appropriate effect for each robot</li> + <li>By extending this, a robot could, depending on its physical characteristics, discover by itself the best method of interacting with humans</li> +</ul> + +<style> + .slide.cover H2 { font-size: 60px; } +</style> + +<!-- vim: set filetype=markdown.slide: --> +<!-- === end markdown block === --> + + </section> +</div></div> + + + <script src="scripts/script.js"></script> + <!-- Copyright © 2010–2011 Vadim Makeev, http://pepelsbey.net/ --> 
</body> </html>
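The greeting selection described in the slides — a mapping from context features (location, social distance, time of day, …) to a gesture/word pair, updated from participant feedback — could be sketched roughly as below. This is a minimal illustration, not the paper's actual algorithm: the class name, the rule format, and the overwrite-on-feedback update rule are all assumptions standing in for the real training procedure.

```python
# Hypothetical sketch of a culture-dependent greeting selection system.
# Context features and greeting types follow the slides; the update rule
# is a simplified stand-in for the feedback-driven training described there.

class GreetingMapping:
    def __init__(self, rules, default=("bow", "konnichi wa")):
        # rules: {(feature, value): (gesture, words)}
        self.rules = dict(rules)
        self.default = default

    def select(self, context):
        # The first rule whose feature/value appears in the context wins;
        # otherwise fall back to a default greeting.
        for (feature, value), greeting in self.rules.items():
            if context.get(feature) == value:
                return greeting
        return self.default

    def update(self, context, feedback_greeting):
        # Overwrite the rule for each observed feature with the greeting
        # the participant judged appropriate (simplified feedback step).
        for feature, value in context.items():
            self.rules[(feature, value)] = feedback_greeting

# Initial "Japanese" mapping (illustrative values only).
m0 = GreetingMapping({
    ("location", "workplace"): ("bow", "otsukaresama desu"),
    ("social_distance", "unknown"): ("deep bow", "konnichi wa"),
})

ctx = {"location": "workplace", "time_of_day": "morning"}
print(m0.select(ctx))   # -> ('bow', 'otsukaresama desu')

# A German participant indicates a handshake is more appropriate:
m0.update(ctx, ("handshake", "guten Tag"))
print(m0.select(ctx))   # -> ('handshake', 'guten Tag')
```

The sketch mirrors the experiment protocol: the mapping is trained (here, initialized) with Japanese rules, a greeting is selected per context, and the questionnaire feedback updates the mapping for each feature of the encounter.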
--- a/slide.md	Thu Jun 25 06:07:48 2015 +0900 +++ b/slide.md	Fri Jun 26 09:09:58 2015 +0900 @@ -164,7 +164,7 @@ # Implementation on ARMAR-IIIb * ARMAR-III is designed for close cooperation with humans * ARMAR-III has a humanlike appearance -* sensory capabilities similar to humans( +* sensory capabilities similar to humans * ARMAR-IIIb is a slightly modified version with different shape to the head, the trunk, and the hands # Implementation of gestures @@ -177,18 +177,21 @@ * providing a unified representation of various human motion capture systems, action recognition systems, imitation systems, visualization modules * This representation can be subsequently converted to other representations, such as action recognizers, 3D visualization, or implementation into different robots * The MMM is intended to become a common standard in the robotics community + +# Master Motor Map <img src="pictures/MMM.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> -# Master Motor Map2 +# Master Motor Map * The body model of the MMM can be seen in the left-hand illustration in the figure * It contains some joints, such as the clavicula, which are usually not implemented in humanoid robots * A conversion module is necessary to perform a transformation between this kinematic model and the ARMAR-IIIb kinematic model + +# Master Motor Map <img src="pictures/MMMModel.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> # converter * a converter of the given joint angles would consist in a one-to-one mapping between an observed human subject and the robot -* differences in the kinematic structures of a human and the robot one-to-one mapping can hardly show acceptable results in terms of a human like appearance of the reproduced movement -* this problem is addressed by applying a post-processing procedure in joint angle space +* this conversion is addressed by applying a post-processing procedure in joint angle space +* the joint angles, given in the MMM format, are optimized concerning 
the tool centre point position * a solution is estimated by using the joint configuration of the MMM model on the robot @@ -201,7 +204,6 @@ # Conversion example of MMM * After the postures were programmed directly on the MMM model, they were processed by the converter -* Conversion is not easy * the human model contains many joints, which are not present in the robot configuration * ARMAR cannot bend its body when performing a bow * the bow was expressed using a part present in the robot (e.g., the neck) @@ -217,6 +219,8 @@ * The postures could be triggered from the MCA (Modular Controller Architecture, a modular software framework) interface, where the greeting model was also implemented * the list of postures is on the left together with an option * When that option is activated, it is possible to select the context parameters through the radio buttons on the right + +# Modular Controller Architecture, a modular software framework <img src="pictures/MCA.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> # Implementation of words @@ -237,8 +241,8 @@ * These words were recorded with free text-to-speech software into wave files that the robot can play * two small speakers were added behind the head and connected to another computer # Experiment description -* Experiments were conducted at room as shown in Figure 9 , Germany -<img src="pictures/room.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> +* Experiments were conducted in Germany, in the room shown in the figure +<img src="pictures/room.png" style='width: 60%; margin-left: 150px; margin-top: 50px;'> # Experiment description2 @@ -268,11 +272,6 @@ 2. average age: 29.43 3. 
age standard deviation: 12.46 -# The purpose of the experiment -* The objective of the experiment was to adapt ARMAR-IIIb greeting behaviour from Japanese to German culture -* the algorithm working for ARMAR was trained with only Japanese sociology data and two mappings M0J were built for gestures and words -* After interacting with German people, the resulting mappings M1 were expected to synthesize the rules of greeting interaction in Germany - # The experiment protocol is as follows 1~5 1. ARMAR-IIIb is trained with Japanese data 2. The context of the encounter is given as input to the algorithm and the robot is prepared @@ -292,28 +291,28 @@ * Deep bowing was greatly reduced and the handshake became common * The hug, which does not exist in the Japanese mapping, appeared * This is because participants gave feedback that the hug is appropriate + +# Results <img src="pictures/GestureTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> -# Results2 +# Results * The biggest change in the words mapping is that the workplace greeting disappeared * A smaller change is the use of the informal greeting -* some other patterns can be found in the gestures mappings judging from the columns in Table 3 for T = 0 -* Japan there is a pattern to the gesture by social distance -* But in Germany not the pattern -* This is characteristic of Japanese society -* The two mapping has been referring to the feedback of the Japanese sociology literature and the German participants + +# Results <img src="pictures/GreetingWordTable.png" style='width: 60%; margin-left: 150px; margin-top: -50px;'> # Limitations and improvements -* In the current implementation, there are also a few limitations * The first obvious limitation is related to the manual input of context data -* → The integrated use of cameras would make it possible to determine features such as gender, age, and race of the human +* The integrated use of cameras would make it possible to determine features such as gender, age, and 
race of the human + +# Limitations and improvements +* The robot itself could judge whether its greeting was correct * A speech recognition system and cameras could also detect the human's own greeting -* → Robot itself , to determine whether the greeting was correct * The decision could use the distance to the partner, the timing of the greeting, head orientation, and other information to check whether the response to the greeting is correct and expected -* Accurate der than this information is information collected using a questionnaire + +# Limitations and improvements * The set of context features could be extended by using several literature sources -* This in simplification of greeting model was canceled # Different kinds of embodiment * A humanoid robot has a body similar to the human one
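The MMM-to-robot conversion outlined in the converter slides (a one-to-one joint mapping, with a fallback when a human joint is missing on the robot — e.g. ARMAR expressing a bow with the neck because it cannot bend its torso) could be sketched as below. This is a toy illustration, not the real MMM converter: the joint names, the mapping tables, and the torso-to-neck fallback values are all assumptions.

```python
# Toy sketch of MMM-style retargeting: copy matching joint angles
# one-to-one, and re-express angles of joints the robot lacks on a
# substitute joint (e.g. a bow's torso pitch moved to the neck).
# Joint names and the fallback table are illustrative assumptions.

HUMAN_TO_ROBOT = {          # one-to-one joint name mapping
    "neck_pitch": "neck_pitch",
    "shoulder_r": "shoulder_r",
    "elbow_r": "elbow_r",
}
FALLBACKS = {               # missing human joint -> robot substitute
    "torso_pitch": "neck_pitch",
}

def retarget(human_angles):
    """Map MMM-model joint angles (degrees) onto the robot's joints."""
    robot_angles = {}
    for joint, angle in human_angles.items():
        target = HUMAN_TO_ROBOT.get(joint) or FALLBACKS.get(joint)
        if target is not None:
            # accumulate, so a substituted joint adds to a direct one
            robot_angles[target] = robot_angles.get(target, 0.0) + angle
        # joints with no mapping at all (e.g. clavicula) are dropped
    return robot_angles

# A bow programmed on the MMM model: torso bends 30 deg, neck 10 deg;
# the clavicula has no counterpart on the robot and is discarded.
bow = {"torso_pitch": 30.0, "neck_pitch": 10.0, "clavicula_r": 5.0}
print(retarget(bow))  # -> {'neck_pitch': 40.0}
```

Accumulating the fallback contribution into an existing robot joint is one simple way to preserve the human-like appearance of the posture; the actual MMM converter instead optimizes the joint angles with respect to the tool centre point position, as the slides note.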