{"id":1405,"date":"2011-11-10T06:59:22","date_gmt":"2011-11-10T14:59:22","guid":{"rendered":"http:\/\/www.robotic-lab.com\/blog\/?p=1405"},"modified":"2011-11-10T06:59:45","modified_gmt":"2011-11-10T14:59:45","slug":"nao-distingue-la-expresion-de-nuestro-rostro","status":"publish","type":"post","link":"http:\/\/www.robotic-lab.com\/blog\/2011\/11\/10\/nao-distingue-la-expresion-de-nuestro-rostro\/","title":{"rendered":"Nao distinguishes the expression on our face."},"content":{"rendered":"<p><a href=\"http:\/\/www.robotic-lab.com\/blog\/uploads\/2011\/11\/ereader-1320328664138.png\"><img loading=\"lazy\" decoding=\"async\" data-attachment-id=\"1406\" data-permalink=\"http:\/\/www.robotic-lab.com\/blog\/2011\/11\/10\/nao-distingue-la-expresion-de-nuestro-rostro\/ereader-1320328664138\/\" data-orig-file=\"http:\/\/www.robotic-lab.com\/blog\/uploads\/2011\/11\/ereader-1320328664138.png\" data-orig-size=\"450,279\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}\" data-image-title=\"ereader-1320328664138\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"http:\/\/www.robotic-lab.com\/blog\/uploads\/2011\/11\/ereader-1320328664138.png\" data-large-file=\"http:\/\/www.robotic-lab.com\/blog\/uploads\/2011\/11\/ereader-1320328664138.png\" class=\"size-full wp-image-1406 aligncenter\" title=\"ereader-1320328664138\" src=\"http:\/\/www.robotic-lab.com\/blog\/uploads\/2011\/11\/ereader-1320328664138.png\" alt=\"\" width=\"450\" height=\"279\" \/><\/a><\/p>\n<p>The Artificial Intelligence Laboratory at the University of Tsukuba (Japan) has presented a robot capable of acting according to the emotions of \u201cits caregiver\u201d, so that the robot learns what is right and what is wrong.<\/p>\n<p>Nao, as they have named it, interprets the <em>feedback<\/em> we give through our facial expression: if we smile, Nao will take it as a sign of approval and assume that the action it is performing is correct; but if we frown, as if we were angry or disapproved of what it is doing, Nao will assume it is doing something wrong.<\/p>\n<p>The system relies on sensors and electrodes which, based on electromyography and placed on the caregiver's head, detect the facial gesture we are making (the electrical activity produced by the muscles) and translate it into a signal that is sent to Nao as <em>feedback<\/em> on its actions.<\/p>\n<p><em>Via\/<a href=\"http:\/\/spectrum.ieee.org\">IEEE Spectrum<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Artificial Intelligence Laboratory at the University of Tsukuba (Japan) has presented a robot capable of acting according to the emotions of \u201cits caregiver\u201d, so that the robot learns what is right and what is wrong. Nao, as they have named it, interprets the feedback we give through our [&hellip;]<\/p>\n","protected":false},"author":92,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[15,39],"tags":[142,140,139,141],"class_list":["post-1405","post","type-post","status-publish","format-standard","hentry","category-noticias","category-videos","tag-electromiografia","tag-expresion-rostro","tag-nao","tag-tsukuba-japon"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p1YYAx-mF","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/posts\/1405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/users\/92"}],"replies":[{"embeddable":true,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/comments?post=1405"}],"version-history":[{"count":4,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/posts\/1405\/revisions"}],"predecessor-version":[{"id":1410,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/posts\/1405\/revisions\/1410"}],"wp:attachment":[{"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/media?parent=1405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v
2\/categories?post=1405"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.robotic-lab.com\/blog\/wp-json\/wp\/v2\/tags?post=1405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}