
Submitted to 2008 IROS/RSJ Conference, Nice, France, 22-26 Sept. 2008.

Learning Affordance for Semantic Robots Using Ontology Approach

Sidiq S. Hidayat, B.K. Kim, and Kohtaro Ohba, Member, IEEE

Abstract— Learning affordances is very important for autonomous robots: it tells them which actions should be performed to complete a given task within their abilities. However, it is difficult to identify and apply affordances in practice for robots. So that a robot can benefit from this concept, this paper proposes an ontology-based approach to the affordance concept for ubiquitous robots. The goal of this affordance-based ontology effort is to develop and begin to populate a neutral knowledge representation capturing relevant information about robot capabilities and about what the environment affords, in order to assist human life in a ubiquitous robots environment. We propose a new method to 'teach' robots to learn object and context affordances using an ontology approach for Semantic Robots in a Ubiquitous Robots space.

Sidiq S. Hidayat is with System and Information Engineering, University of Tsukuba, 305-8572, Japan (e-mail: s.hidayat@aist.go.jp). Bong Keun Kim and Kohtaro Ohba are with the Intelligent Systems Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 305-8568, Japan (e-mail: {bk.kim, k.ohba}@aist.go.jp).

I. INTRODUCTION

UNDERSTANDING the environment is a very important aspect for robots carrying out their missions with their perception, recognition, and primitive-behavior systems. Mobile robots, such as household service robots, have to understand the user's requirements and the environments in which they operate. However, today's mobile robot perception is insufficient for acting goal-directedly in unconstrained, dynamic everyday environments like a home [1]. Robust and general engineering methods for effectively and efficiently coupling perception, action, and reasoning are therefore urgently needed.

The best and most robust examples of systems coupling perception and action can be found in ecology, which studies animal behavior. The perceivable potentiality of the environment that supports an intended action, without requiring memory, inference, or interpretation [3], named an affordance, is very interesting because of its power if it can be applied to robotics. However, affordances are difficult, and can be quite challenging, to apply in robotics [23].

To avoid the difficulties of implementing the ecological approach in robotics without losing the benefits of affordances, we propose Semantic Robots, which couple perception, recognition, and action using semantic data. The term Semantic Robot refers to a real robot that is able to access Semantic Web data and perform actions based on those data and their affordances.

In this paper, we propose an affordance-based ontology as part of the main framework for Semantic Robots in our Ubiquitous Robots environment. The rest of the paper is organized as follows. Section II describes the affordance concept, discusses some controversies around its implementation in robotics, and introduces our proposed work. Connecting the physical world to the digital world in order to learn affordances is covered in Section III. Section IV describes the ontology our robots use to learn affordances and shows how it fits into the framework. Implementation and some case studies with semantic solutions using Semantic Robots are described in Section V. Finally, we conclude the paper and describe future work in Section VI.

II. OVERVIEW OF AFFORDANCE CONCEPT FOR ROBOTICS

A. Basic Concept

The concept of affordance was coined by J.J. Gibson in his seminal work on the ecological approach to visual perception and its link to action. As he wrote: "The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill." [26]

In the context of ecological perception, visual perception would enable agents to experience the opportunities for acting in a direct way. However, Gibson remained unclear both about how this concept could be implemented in a technical system and about which representation should be used. Although J.J. Gibson introduced the term to clarify his ideas in psychology, it turned out to be one of the most elusive concepts and has influenced many fields of study [1], [2], [4]-[13], [24].

B. Learning Affordances for Robots

The concept of affordances is highly relevant to autonomous robot control and has influenced many studies in this field [1]-[5]. Starting with [2], Murphy has developed and applied affordance approaches to mobile robots over the last two decades. More recently, other studies have explored how affordances relate to high-level processes such as learning [8], [9], tool use [7], and decision making [10]. The relation between the concept of affordances and robotics, and how robots can learn affordances, has started to be explicitly discussed by many roboticists.
Fig. 1. Architecture of affordance learning for service robots using knowledge-based ontology. The framework utilizes the affordance concept in the ontology to provide service robots with enhanced matching between robot actions and environment (object and context) affordances. (Diagram labels: an objects-in-space perception hierarchy of sensors, sensor aggregators, and object interpreters with sensor, object, and object-knowledge-base ontologies; and a service/action hierarchy of robot service providers and coordinators, planners, and a space coordinator with a service ontology and robot specifications.)

The correlation between the theory of affordances and reactive/behavior-based robotics has already been pointed out [5], [23]. Stoytchev [6], [7] studied robot tool behavior as an approach to autonomous tool use, in which the robot learns tool affordances to discover tool-behavior pairs that give the desired effects.

In [11], Fitzpatrick et al. also studied the learning of object affordances in a developmental framework, where a robot can learn what it can do with an object (e.g., rolling it by tapping). Fritz et al. [8] demonstrated a system that learns to predict the liftability affordance for different objects. In that study, predictions are made based upon features of object regions, such as color, center of mass, top/bottom location, and shape description, extracted from the robot's camera images. In both Stoytchev's and Fitzpatrick's studies, no association is made between the visual features of the objects and their affordances; instead, in both experiments the objects are differentiated by their colors only.

How a robot learns the traversability affordance has recently been studied by Kim et al. [12] and Ugur et al. [9]. In contrast to the researchers above, they used low-level features, extracted from stereo vision or range images, for learning and predicting the traversability affordance in unknown environments. Different again from the previous researchers, [28] used an imitation-learning algorithm so that a humanoid robot learns object affordances. They used a probabilistic graphical model known as a Bayesian network to encode the dependencies between actions, object features, and the effects of those actions.

C. Controversy of Implementing Affordances in Robotics

Even though many roboticists have successfully implemented the affordance concept in robotics, the concept is still controversial for others. [2], [9], and [13] take a different perspective from other researchers. In [2], regarding the identification and application of affordances to mobile robots, Murphy considered that if the percept for a behavior requires a label, such an object is not a good candidate for an affordance, because labeling is a recognition type of task. According to [13], an affordance that a knowledge engineer creates and simply places somewhere would, in his understanding of the concept, not be an affordance as a possibility for action; it would be a mere landmark that marks a point of interest.

Our point of view, however, differs from theirs. In our view, affordances for robots are quite different from affordances for humans; the key difference is that humans and robots couple perception and action differently. We also highlight that the merit of creating robots is to support human beings. Applying the affordance concept to robotics should therefore mean supporting human life, rather than considering the problem only in terms of a robot's ability to "see" the world with its specific sensors the way human perception does.

D. Proposed Work

To make learning affordances for robots meaningful for human beings, we are investigating semantic integration and ontology mapping, applied to ubiquitous robots, to learn what the environment (objects and context) affords. In contrast to other roboticists, we use physical landmark information as the perceptual source for robots and process it semantically in order to obtain the real affordances for a particular robot.

To prove our concept, we first classify every physical object and the affordance drivers into several classes, create an ontology, define general properties, and apply reasoning to verify the logical relations. Second, we apply a query engine to obtain the appropriate action afforded by physical objects in a given situation and condition. The last step is 'grounding' the obtained affordance from text into context (the robots and their environment). The proposed framework is depicted in Fig. 1. This work takes a similar approach to [27] in using ontology-based knowledge for robot intelligence; however, we emphasize the implementation of the affordance concept for robots using landmark tags. More details of how this method works are explained in the following sections.

III. UBIQUITOUS ROBOTS CONCEPT

In ubiquitous robots, the environment consists of a number of sensors, actuators, robots, and networked appliances. Each device manages and processes local information depending on its specific location, situation, and conditions/properties. Robots communicate with the environment via a reader device provided on each robot, through wireless communication [25].

Our lab has been working on ubiquitous robots using landmark tags, such as RFID tags and QR codes, to help robots perceive and recognize their environment. The idea is to tag every object in the space, such as tables, plates, glasses, books, bookshelves, floors, and refrigerators, and to connect this physical world to the digital world [22]. We have developed UFAM (Ubiquitous Functions Activation Module), CLUE (Coded Landmark for Ubiquitous Environment), and FUSEN (Flexible and User-intuitive Service using Electric Note) [19], [20], [21] as landmarks to help robots and humans get services from the environment.

Fig. 2. Ubiquitous robots environment. We have developed UFAM, CLUE, and FUSEN to help robots and humans get services from the environment.

In past work, we successfully implemented primitive robot control based on local sense-model-act. These robots, shown in Fig. 2, performed well on requested tasks such as table cleaning, book arranging, and the opening/closing of appliance doors such as microwave and refrigerator doors. In the future, we will implement Semantic Robots based on the affordance concept and integrate them into the existing ubiquitous robots system.

IV. AFFORDANCE-BASED ONTOLOGY

A. Motivations

The usage of an ontology in a robotics system allows robot designers to generate events within a specific domain, and allows robots to 'submit subscriptions' that can match a whole or partial event. This means that a single event can have different semantic meanings, or different affordances, across different robots.

We are investigating semantic integration and ontology mapping as applied to ubiquitous robotics, in a new domain and system, to learn what the environment affords. The overall goal of this work is to apply Semantic Robots, based on an ontology of the ubiquitous environment, to enhance the capabilities and performance of our current robotics system, particularly in the area of object perception and recognition, so as to obtain what the environment really affords to robots.

In this section we present an affordance-based system for ubiquitous robots which utilizes an ontology to classify and query physical objects in the ubiquitous robots environment. The framework, shown in Fig. 1, utilizes Semantic Web technologies to provide the expressiveness and query-resolution accuracy needed to enhance the matching of robot actions and environment affordances.

B. The Proposed Affordance-based Ontology

In our system, information is represented at three different semantic levels, according to the Semantic Web1 vision. At the lower level, information is encoded in XML.

1 http://www.w3.org/2001/sw/
Fig. 3. Example of the affordance-identifying ontology diagram (not all classes or individuals are shown). Each individual object has a 'real' affordance depending on the properties asserted and restricted in the affordance drivers.

At the intermediate model level, the state of the world is represented as a collection of individuals, linked through a set of labeled relations. Each individual represents either a real object in the conceptualization, such as a robot, a refrigerator, or an obstacle, or an instance of an abstract concept, such as a push behavior or a door-closing action. The collection of such individuals builds up the A-box (assertional box) of description logic (DL2) based frameworks [18].

The higher, ontological schema level describes the ontological concepts, interpreted in a classical Tarski extensional semantics. This level contains the definitions of the types for the model level and corresponds to the T-box (terminological box) in DL-based frameworks.

2 http://dl.kr.org/

The proposed affordance-based ontology is roughly described at two abstraction levels (omitting the XML level), as shown in Fig. 3. The mechanism to obtain real affordances can be described as follows. During initialization, each robotic agent and each individual registers with the coordinator. Then, the coordinator asks for their complete ontological models. In this phase, each robot sends its A-box to the coordinator; the coordinator reads it and imports all the T-boxes necessary for its interpretation and matching. After this, all the A-boxes and T-boxes are merged by the coordinator into an affordance ontological model. Once the ontological model has been built, the coordinator uses it in task execution. A task is an individual inside a specific A-box and is used by the appropriate robot, as an obtained affordance, to perform the relevant action.

To demonstrate the feasibility of our ontological approach to ubiquitous system development, we studied a simple role-assignment schema, described in the next subsection. When the coordinator receives information about a certain object at a certain location, it searches the set of registered robots able to handle it. If it finds such robots, the ontological model of the obtained affordance gives the information to assign specific roles to the robots and to dispatch an operative behavior (a sequence of instructions) to each robot according to the received affordances.
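As a minimal sketch of the coordinator-side merge step, the following Jena fragment reads a shared T-box and the A-boxes sent by the robots into a single RDF model (the file names are assumptions for illustration, and Jena's graph union stands in for the merge described above):

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class Coordinator {
        public static void main(String[] args) {
            // Affordance ontological model: T-box and A-boxes merged into one RDF graph
            Model merged = ModelFactory.createDefaultModel();
            merged.read("file:affordance-tbox.owl"); // shared T-box (class and property definitions)
            merged.read("file:robot1-abox.rdf");     // A-box sent by robot 1 at registration
            merged.read("file:robot2-abox.rdf");     // A-box sent by robot 2 at registration
            System.out.println("Merged model holds " + merged.size() + " statements");
        }
    }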
C. Learning Affordance of Object and Context

In the real world, objects and contexts have specific features or properties that differentiate them from one another. Object types tend to have unique properties that humans can easily distinguish. However, this is a very challenging area for most robot vision systems, in which objects are recognized through shapes and line drawings [29]. To overcome these difficulties, we utilize object and context ontologies and a knowledge base distributed through embedded landmarks, so that robots can learn affordances.

Ontologies define classes that can be related to the real world. For example, we can define an object class 'Box' and also define its properties: it has a cube-like shape, some weight, some size, and can be pushed by some robots:

Box ≡ (hasShape ∋ Cube) ⊓ (∃ hasWeight.Weight) ⊓ (∃ hasSize.Size) ⊓ (∃ isPushable.Robot)
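For illustration, a class expression of this kind could be asserted with the Jena ontology API roughly as sketched below; the namespace and the exact property names are assumptions for the sketch, not the ontology used in our system:

    import com.hp.hpl.jena.ontology.ObjectProperty;
    import com.hp.hpl.jena.ontology.OntClass;
    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class BoxOntology {
        public static void main(String[] args) {
            OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
            String ns = "http://example.org/affordance#"; // hypothetical namespace

            OntClass robot = m.createClass(ns + "Robot");
            ObjectProperty isPushable = m.createObjectProperty(ns + "isPushable");

            // Box ⊑ ∃ isPushable.Robot: a box can be pushed by some robot
            OntClass box = m.createClass(ns + "Box");
            box.addSuperClass(m.createSomeValuesFromRestriction(null, isPushable, robot));

            m.write(System.out, "RDF/XML-ABBREV"); // serialize, as the Protégé editor would
        }
    }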
Using this approach, we explore whether a robot with a specific domain ontology can successfully classify objects and perform the necessary actions based on what they afford. In this case, the box affords pushability for some robots. For service robots, however, that condition is not always true: it also depends on the situation, such as the environment/context in which the box is placed. For the context affordance, we can define the Box as:

Box ≡ (hasLocation ∋ EntranceDoor) ⊓ (hasShape ∋ Cube) ⊓ (∃ hasWeight.Weight) ⊓ (∃ hasSize.Size) ⊓ (∃ isPushable.Robot) ⊓ (∃ isAvoidable.Robot)

This means that if the box is located in front of the entrance door and blocks the way, then, for some robots to enter the room (assuming the door offers a pass-through affordance), the obtained affordances are that the box affords pushability for some 'big' robots and avoidability for some 'small' robots.

Fig. 4. What should the robots do? The box and door001 offer different affordances for different robots. Door002 offers the same affordance for all robots.
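Such object-centric affordances can then be retrieved with a SPARQL query through Jena, for instance as sketched below; the Afford2PushBy property name follows the RDF/XML excerpts shown later in Figs. 8-11, while the file name is an assumption:

    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QueryFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class WhoCanPush {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            model.read("file:Affontobibot.owl"); // hypothetical local copy of the ontology

            // Object-centric query: which robots does each object afford pushing to?
            String q =
                "PREFIX aff: <http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#> " +
                "SELECT ?obj ?robot WHERE { ?obj aff:Afford2PushBy ?robot . }";

            QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model);
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.nextSolution());
            }
            qe.close();
        }
    }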

V. IMPLEMENTATION

We investigate specific issues that are unique to robotics and this semantic concept: i) what should robots do in a situation where two robots face the same object? ii) who should perform the appropriate action when many robots are available in the space?

A. Case 1

As shown in Fig. 4, there are two robots in the space: Beego3 (symbolized by a pentagon, X), a medium-sized mobile robot, and Aibo4 (symbolized by a circle, Y), a small pet robot. Both robots face a situation in which an empty box near door001 blocks the entrance. The question is what the robots should do. Similarly for door002: if no box blocks the entrance, what should the robots do? The semantic solution can be obtained using the ontology, as illustrated in Fig. 5.

The simplified answers based on the affordance concept are: "Robot X recognized the box located at door001 and has a push behavior" → the box affords pushability for robot X. "Robot Y recognized the box located at door001 and has an avoid behavior" → the box affords avoidability for robot Y. For the opened door002: "Both robot X and robot Y recognized door002 without an obstacle (box) and have a cruise behavior" → the door affords a pass-through affordance.

Fig. 5. Example of metadata enrichment (not the real ontology). The box located at door001 affords pushability for robot_X and avoidability for robot_Y. Door002 affords pass-through (cruise) ability for all robots.

In this approach, we structure the ontology based upon the abstract goal that the action is attempting to accomplish: we organize actions by an abstract view of why the action is being performed.

B. Case 2

In case 2 (Fig. 6), the sensor device attached to the refrigerator reports that the door is open. Suppose there are many robots in the space; the question, then, is who should close the door. Using semantic reasoning we can obtain the real affordance for every agent among the ubiquitous robots. In this situation, the possible answers could be:

A1: Robot 1, because it is in the kitchen AND has push ability.
A2: Robot 2, because it has push ability and Robot 1 is busy, even though it is in another room.
A3: NOT Robot 3 but Robot 2 or 1, because even though it is in the kitchen, it has no push ability.
A4: A human, because Robot 3 tells the human that none of the robots has the ability to perform the task.

3 http://www.youtube.com/watch?v=gOgT7of5ywE, developed by University of Tsukuba, Japan.
4 http://www.sony.net/SonyInfo/News/Press_Archive/199905/99-046/
We can define a 'Refrigerator' class and also define its properties and context: it is located in the kitchen, has a hexahedron shape, and its door status is 'open' because someone forgot to close it:

Refrigerator ≡ (hasLocation ∋ Kitchen) ⊓ (hasShape ∋ Hexahedron) ⊓ (hasPart ∋ Door) ⊓ (hasSensor ∋ DoorSwitch) ⊓ (hasDoorStatus = 'open')
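In contrast to the object-centric query of Case 1, the Case 2 question ("who should close the door?") can be asked agent-centrically. A sketch of such a query follows; aff:BillKitchen appears in the RDF/XML excerpt of Fig. 11, while hasPushAbility is a hypothetical property name used only for this illustration:

    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QueryFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class WhoClosesTheDoor {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            model.read("file:Affontobibot.owl"); // hypothetical local copy of the ontology

            // Agent-centric query: robots located in the kitchen that have push ability
            String q =
                "PREFIX aff: <http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#> " +
                "SELECT ?robot WHERE { " +
                "  ?robot aff:isLocatedAt aff:BillKitchen . " + // same location as the fridge
                "  ?robot aff:hasPushAbility true . " +         // hypothetical ability property
                "}";

            QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model);
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.nextSolution().get("robot"));
            }
            qe.close();
        }
    }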

Fig. 6. Who should close the refrigerator's door? Robots 1, 2, and 3 perceive different affordances from the refrigerator. The affordance may vary depending on distance/location, the possibility of actions, and the intention toward the refrigerator.

Using this approach, we explore whether a robot with a specific domain ontology can successfully classify object affordances and perform the necessary actions based on the abilities the robots have. In this case, the Refrigerator affords door closing to Robot 1 and Robot 2. In the case of Robot 2 (located in another room), the Refrigerator, linked to the Kitchen by the hasLocation property, can serve as a landmark for localization.

In this case, we organize the ontology to emphasize the agent who is performing the activity, as opposed to the type of activity being performed (Case 1): we structure the ontology around the agent who is executing the action. This ontology works in a way similar to human thinking, where a class can have certain relationships with other classes or instances (Refrigerator and Kitchen).

The RDF/XML expressions of these cases can be seen in Fig. 8, Fig. 9, and Fig. 11.
C. Affordance Formulation

When the system specifies a delegation task to the robots, sensor devices such as RFID, StarLite GPS, camera arrays, etc., can be used to define and also to weight various features, as affordance considerations or as a ranking of priority. The ranking algorithm can be based on a simple utility function using weights, as in a vector space model [17]. These features are defined as part of the object's data schema. For example, given the 'push affordance' offered to a robot to close the refrigerator door located in the kitchen, the system could say that the refrigerator must be in the kitchen (100%) and that closing the door needs push abilities, which weighs more heavily (125%). Additionally, the sensor system could weight the availability of food information in the fridge lower (50%), and so on, as shown in Table I. For each product pi, the relevance affordance R is calculated as follows:

R(r, pi) = φlocation(r, pi)^1.25 · φfunction(r, pi)^0.5 · φintention(r, pi)    (1)

where φx is the similarity function used for a specific object feature x. For instance, φlocation can be calculated from the geographical distance between the desired robot and the object location. If R exceeds a pre-defined threshold t, the robot will be informed about object pi, and this information directs the robot with the highest affordance value to close the refrigerator's door.

TABLE I
WEIGHTING VALUES OF CERTAIN AFFORDED ROBOTS TO CLOSE THE REFRIGERATOR'S DOOR

Conditions                R1    R2    R3
Located in the kitchen    100    50   100
Has push ability          125   125   100
Has sensor value           50    50    50
Has mobility               75    75    50
Has arm                    75    75     0
Σ Affordance value (R)    425   375   300

Pre-defined threshold (t) = 250 (the estimated affordance value needed to close the refrigerator's door).
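A minimal sketch of how Eq. (1) could be evaluated in code is given below; the similarity scores and the threshold are placeholder values, not those of our system:

    public class AffordanceRanker {

        // R(r, pi) = φlocation(r, pi)^1.25 · φfunction(r, pi)^0.5 · φintention(r, pi)
        static double relevance(double phiLocation, double phiFunction, double phiIntention) {
            return Math.pow(phiLocation, 1.25) * Math.pow(phiFunction, 0.5) * phiIntention;
        }

        public static void main(String[] args) {
            double t = 0.5;                      // hypothetical pre-defined threshold
            double r = relevance(0.9, 0.8, 0.7); // hypothetical similarity scores in [0, 1]
            if (r > t) {
                System.out.println("Robot is informed about the object; R = " + r);
            }
        }
    }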
D. Implementation

In order to realize the proposed learning method, we used the W3C-standardized Web ontology markup language, OWL5. To construct our ontology, we used the open-source Protégé OWL6 editor and its OWL plugin, and Jena7 is used to manage the ontology. Jena is a Java framework for building Semantic Web applications. This framework allows us to use SPARQL8 as the subscription/query language for RDF9 data; SPARQL is a new W3C recommendation (15 January 2008). For checking the consistency of the knowledge base, Pellet10 is used as a Java-based description logic reasoner.

The usage of these technologies to obtain appropriate affordances is shown in Fig. 7. Some examples of RDF/XML expressions and of query results showing what the environment affords to robots can be seen in Fig. 8, Fig. 9, Fig. 10, Fig. 11, and Fig. 12. The OWL and RDF/XML files are generated by Protégé OWL, and the query results are produced by using Rasqal11 to perform queries in the SPARQL language over RDF data.

5 http://www.w3.org/2004/OWL/
6 http://protege.stanford.edu/
7 http://jena.sourceforge.net/
8 http://www.w3.org/TR/rdf-sparql-query/
9 http://www.w3.org/RDF/
10 http://pellet.owldl.com/
11 http://librdf.org/query
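As an illustration of how these components fit together, a Jena ontology model can be backed by the Pellet reasoner and then validated; the following sketch uses the Jena/Pellet APIs of that generation, with a hypothetical local file name:

    import org.mindswap.pellet.jena.PelletReasonerFactory;
    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.reasoner.ValidityReport;

    public class ConsistencyCheck {
        public static void main(String[] args) {
            // Ontology model backed by the Pellet description logic reasoner
            OntModel model = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
            model.read("file:Affontobibot.owl"); // hypothetical local copy of the ontology
            ValidityReport report = model.validate(); // Pellet checks the knowledge base
            System.out.println(report.isValid() ? "Ontology is consistent" : "Inconsistency found");
        }
    }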
Fig. 7. Component usage to obtain appropriate affordances for robots. (Diagram labels: instance data #Kitchen, #Robot, #Box, #MW; RDFS/OWL schemas; SPARQL; obtained affordance; robot controller; middleware.)

Fig. 8. The box blocking the robots from entering the room affords pushing to Beego and avoiding to the Aibo robot.

<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillBall -->
<PhysicalObjects rdf:about="#BillBall">
  <Afford2PushBy>Aibo</Afford2PushBy>
  <isPushedBy rdf:resource="#BillBall"/>
  <owl:sameAs rdf:resource="#BillBall1"/>
  <Afford2PushBy>Beego</Afford2PushBy>
  <isPushedBy rdf:resource="#BillRobot-Aibo"/>
  <isLocatedAt rdf:resource="#BillCoridor"/>
</PhysicalObjects>

Fig. 9. The ball in the corridor affords push ability for both the Beego and Aibo robots.

Fig. 10. The query result of who has push ability and the appropriate location to perform the push action (in BillCorridor).

<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillMicrowave -->
<Microwave rdf:about="#BillMicrowave">
  <Afford2AvoidBy>Aibo</Afford2AvoidBy>
  <hasName>Toshiba MW</hasName>
  <hasExpiryDate rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">201501</hasExpiryDate>
  <isLocatedAt rdf:resource="#BillKitchen"/>
  <Afford2PushBy>Puma</Afford2PushBy>
  <Afford2AvoidBy>Beego</Afford2AvoidBy>
</Microwave>

<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillRefrigerator -->
<Refrigerator rdf:about="#BillRefrigerator">
  <Afford2AvoidBy>Beego</Afford2AvoidBy>
  <hasName>Hitachi Rezoko</hasName>
  <Afford2AvoidBy>Aibo</Afford2AvoidBy>
  <Afford2PushBy>Puma</Afford2PushBy>
  <isLocatedAt rdf:resource="#BillKitchen"/>
</Refrigerator>

Fig. 11. Only Puma has the ability to close the refrigerator and the microwave's door.

Fig. 12. The query result of who has push ability, the location to perform the push action, and some appropriate objects that offer push ability to robots.

VI. CONCLUSION AND FUTURE WORKS

We have proposed a new formulation and method for robots to learn object and context affordances.
An affordance-based ontology has been used to address the difficulties of implementing the affordance concept in the robotics field. Unlike other researchers, we used the Semantic Web, combined with information from sensing devices, to learn environment affordances for robots.

The proposed approach has been designed and will be tested with an implementation in a ubiquitous environment. Testing will initially use simulation to validate the concept, and the approach will then be grounded in ubiquitous robots using OpenRTM-aist (RT Middleware)12 for realizing an open robot architecture and the creation of service robots.

ACKNOWLEDGMENT

During this work, Sidiq S. Hidayat was supported by a Hasyia Foundation Scholarship, which he gratefully acknowledges.

REFERENCES

[1] E. Rome, L. Paletta, G. Fritz, H. Surmann, S. May, and C. Lörken, "Multi-sensor affordance recognition," Deliverable D3.2.1, MACS Internal Technical Report, Institute for Intelligent Analysis and Information Systems (FhG-IAIS), Sankt Augustin, Germany, August 2006.
[2] R. Murphy, "Case studies of applying Gibson's ecological approach to mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 29, no. 1, pp. 105-111, 1999.
[3] H. Gardner, The Mind's New Science. New York: Basic Books, 1987.
[4] W. Warren, "Perceiving affordances: Visual guidance of stair climbing," Journal of Experimental Psychology, vol. 105, no. 5, pp. 683-703, 1984.
[5] R. Arkin, Behavior-based Robotics. Cambridge, MA, USA: MIT Press, 1998. ISBN 0262011654.
[6] A. Stoytchev, "Toward learning the binding affordances of objects: A behavior-grounded approach," in Proceedings of the AAAI Symposium on Developmental Robotics, pp. 21-23, March 2005.
[7] A. Stoytchev, "Behavior-grounded representation of tool affordances," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, pp. 18-22, April 2005.
[8] G. Fritz, L. Paletta, M. Kumar, G. Dorffner, R. Breithaupt, and E. Rome, "Visual learning of affordance based cues," in From Animals to Animats: Proceedings of the Ninth International Conference on Simulation of Adaptive Behaviour (SAB) (S. Nolfi, G. Baldassarre, R. Calabretta, J. Hallam, D. Marocco, J.-A. Meyer, and D. Parisi, eds.), LNAI vol. 4095, Roma, Italy, pp. 52-64, Berlin: Springer-Verlag, 25-29 September 2006.
[9] E. Ugur, M. R. Dogar, O. Soysal, M. Cakmak, and E. Sahin, "MACSim: Physics-based simulation of the KURT3D robot platform for studying affordances," MACS Project Deliverable 1.2.1, version 1, 2006.
[10] I. Cos-Aguilera, L. Cañamero, and G. Hayes, "Motivation-driven learning of object affordances: First experiments using a simulated Khepera robot," in Proceedings of the 9th International Conference on Cognitive Modelling (ICCM'03), Bamberg, Germany, April 2003.
[11] P. Fitzpatrick, G. Metta, L. Natale, A. Rao, and G. Sandini, "Learning about objects through action: initial steps towards artificial cognition," in Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA), pp. 3140-3145, 2003.
[12] D. Kim, J. Sun, et al., "Traversability classification using unsupervised on-line visual learning for outdoor robot navigation," in IEEE International Conference on Robotics and Automation, 2006.
[13] C. Lörken, "Introducing affordances into robot task execution," Publications of the Institute of Cognitive Science (PICS), vol. 2-2007, 2007.
[14] S. McIlraith, T. C. Son, and H. Zeng, "Semantic Web services," IEEE Intelligent Systems, vol. 16, no. 2, pp. 46-53, 2001.
[15] M. Paolucci, T. Kawamura, T. Payne, and K. Sycara, "Semantic matching of Web service capabilities," in The First International Semantic Web Conference (ISWC), 2002.
[16] A. Langegger and R. Wagner, "Product finding on next generation," in Proc. iiWAS 2006, pp. 49-58.
[17] G. Salton, A. Wong, and C. S. Yang, "A vector space model for automatic indexing," Communications of the ACM, vol. 18, no. 11, pp. 613-620, 1975.
[18] F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider, eds., The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003.
[19] K. Ohara, K. Ohba, B. K. Kim, T. Tanikawa, and S. Hirai, "System design for 'Ubiquitous Robotics' with functions," in The 2nd International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 66-70, 2006.
[20] K. Ohba, H. Onda, T. Tanikawa, B. K. Kim, T. Tomizawa, K. Ohara, X. Liang, Y. S. Kim, H. M. Do, and T. Sugawara, "Universal-design for environment and manipulation framework" (in Japanese), in 8th System Integration Division Annual Conference, pp. 926-927, 2007.
[21] T. Suzuki, K. Ohara, N. Shimoyama, N. Ando, K. Ohba, and K. Wada, "Proposal for ubiquitous robot interface 'FUSEN', 1st report: Discussion about a universal middleware for the FUSEN system" (in Japanese), in 8th System Integration Division Annual Conference, pp. 107-108, 2007.
[22] S. S. Hidayat, B. K. Kim, T. Tanikawa, and K. Ohba, "Connecting the physical world and events schedule of user calendar for symbiotic systems," in 16th IEEE RO-MAN, pp. 173-177, 2007.
[23] R. R. Murphy, Introduction to AI Robotics. Intelligent Robots and Autonomous Agents. Cambridge, MA, USA: MIT Press, 2000. ISBN 0262133830.
[24] D. A. Norman, "Affordance, conventions, and design," Interactions, vol. 6, no. 3, pp. 38-42, 1999.
[25] K. Ohara, B. K. Kim, T. Tanikawa, and K. Ohba, "Ubiquitous spot service for robot environment."
[26] J. J. Gibson, The Ecological Approach to Visual Perception. Hillsdale: Lawrence Erlbaum Associates, p. 127, 1979.
[27] I. H. Suh, G. H. Lim, W. Hwang, H. Suh, J.-H. Choi, and Y.-T. Park, "Ontology-based multi-layered robot knowledge framework (OMRKF) for robot intelligence," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 429-436, 2007.
[28] M. Lopes, F. S. Melo, and L. Montesano, "Affordance-based imitation learning in robotics," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1015-1021, 2007.
[29] T. M. Strat, Natural Object Recognition. New York, USA: Springer-Verlag, 1992.

12 http://www.is.aist.go.jp/rt/OpenRTM-aist/html-en/
