Mobile robots, such as household service robots, have to understand users' requirements and the environments in which they operate. However, today's mobile robot perception is insufficient for acting goal-directedly in unconstrained, dynamic everyday environments like a home [1]. Robust and general engineering methods for effectively and efficiently coupling perception, action and reasoning are therefore urgently needed.
A good example of a robust system coupling perception and action can be found in ecology, the study of animal behavior. The perceivable potentiality of the environment that supports an intended action, without requiring memory, inference, or interpretation [3], is called an affordance; it is very interesting because of its power if it can be applied to robotics. However, affordances are difficult and can be quite challenging to apply in robotics [23].
To avoid the difficulties of implementing the ecological approach in robotics while still obtaining the benefit of affordances, we propose Semantic Robots, which couple perception, recognition and action using semantic data. The term Semantic Robot refers to real robots that are capable of accessing Semantic Web data and performing actions based on these data and their …
Sidiq S. Hidayat is with System and Information Engineering, University of Tsukuba, 305-8572, Japan (e-mail: s.hidayat@aist.go.jp). Bong Keun Kim and Kohtaro Ohba are with the Intelligent Systems Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 305-8568, Japan (e-mail: {bk.kim, k.ohba}@aist.go.jp).
A. Basic Concept
The concept of affordance was coined by J.J. Gibson in his seminal work on the ecological approach to visual perception and its link to action. As he wrote: "The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill." [26]
In the context of ecological perception, visual perception would enable agents to experience in a direct way the opportunities for acting. However, Gibson remained unclear both about how this concept could be implemented in a technical system and about which representation to use. Although J.J. Gibson introduced the term to clarify his ideas in psychology, it turned out to be one of the most elusive concepts and has influenced many fields of study [1], [2], [4]-[13], [24].
B. Learning Affordances for Robots
The concept of affordances is highly related to autonomous robot control and has influenced many studies in this field [1]-[5]. Starting from [2], Murphy has developed and applied affordance approaches to mobile robots over the last two decades. Recently, other studies have also explored how affordances relate to high-level processes such as learning [8, 9], tool use [7], and decision-making [10].
How the concept of affordances relates to robotics, and how robots learn affordances, has started to be explicitly discussed by many roboticists. The correlation between the theory of affordances and …
[Fig. 1 diagram: an Objects-in-Space Perception Hierarchy (sensors, sensor aggregators, and object interpreters with sensor, object, and interpreter-base ontologies) coupled to a Service/Action Hierarchy (robot service providers and coordinators, planners, and a space coordinator with a service ontology and robot specifications).]
Fig. 1. Architecture of affordance learning for service robots using a knowledge-based ontology. The framework uses the affordance concept in the ontology to provide service robots with enhanced matching between robot actions and environment (object and context) affordances.
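As a rough sketch of the perception side of Fig. 1, sensor readings are fused by aggregators and then mapped by object interpreters to classes in the object ontology. The following Python sketch is only our illustrative reading of the figure; all class and feature names are invented for illustration and are not part of the authors' system:

```python
# Illustrative pipeline after Fig. 1: sensors -> aggregators -> interpreters.
# All names here are invented stand-ins, not the paper's implementation.

def aggregate(readings):
    """Sensor aggregator: fuse raw per-sensor readings into one feature dict."""
    fused = {}
    for r in readings:
        fused.update(r)
    return fused

def interpret(features, ontology):
    """Object interpreter: map fused features to the first matching class."""
    for cls, required in ontology.items():
        if required <= features.keys():  # all required features observed
            return cls
    return "UnknownObject"

ONTOLOGY = {  # toy object ontology: class -> required feature keys
    "Refrigerator": {"shape", "door_switch"},
    "Box": {"shape"},
}

features = aggregate([{"shape": "hexahedron"}, {"door_switch": "open"}])
obj_class = interpret(features, ONTOLOGY)  # -> "Refrigerator"
```

In the real framework this mapping is done against the OWL ontology rather than a dictionary; the sketch only shows the data flow from sensors to ontology-typed objects.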
1 http://www.w3.org/2001/sw/
Fig. 3. Example of the affordance-identifying ontology diagram (not all classes or individuals are shown). Each individual object has a 'real' affordance depending on the properties asserted for it and restricted in the affordance drivers.
At the intermediate model level, the state of the world is represented as a collection of individuals, linked through a set of labeled relations. Each individual represents a real object in the conceptualization, like a robot, a refrigerator or an obstacle, or an abstract concept instance, like a push behavior or a door-closing action. The collection of such individuals builds up the A-box (assertional box) of description logic (DL2) based frameworks [18].
The higher, ontological schema level describes the ontological concepts, interpreted in a classical Tarski extensional semantics. This level contains the definitions of the types for the model level and corresponds to the T-box (terminological box) in DL-based frameworks.
The proposed affordance-based ontology is roughly described in two abstraction levels (omitting the XML level), as shown in Fig. 3. The mechanism to obtain a real affordance is as follows. During initialization, each robotic agent and all individuals register with the coordinator. The coordinator then asks for their complete ontological models. In this phase, each robot sends the coordinator its A-box; the coordinator reads it and imports all the T-boxes necessary for its interpretation and matching. After this, all the A-boxes and T-boxes are merged by the coordinator into an affordance ontological model. Once the ontological model has been built, the coordinator uses it in task execution. A task is an individual inside a specific A-box and is used by the appropriate robot, as an obtained affordance, to perform the relevant action.
To demonstrate the feasibility of our ontological approach to ubiquitous system development, we studied a simple role-assignment schema, described in the next subsection. When the coordinator receives information about a certain object in a certain location, it searches the set of registered robots able to handle it. If it finds such robots, the ontological model of the obtained affordance provides the information needed to assign specific roles to the robots and to dispatch an operative behavior (a sequence of instructions) to each robot according to the received affordances.
C. Learning Affordance of Object and Context
In the real world, objects and contexts have specific features or properties that differentiate them from one another. Object types tend to have unique properties that humans can easily distinguish. However, this remains a major challenge for most robot vision systems, in which objects are recognized through shapes and line drawings [29]. To overcome these difficulties, we use object and context ontologies, together with knowledge distributed through embedded landmarks, so that robots can learn affordances.
Ontologies define classes that can be related to the real world. For example, we can define an object class 'Box' and also define its properties: it has a cube-like shape, some weight and some size, and it can be pushed by some robots:
Box ⊑ ∃hasLocation.EntranceDoor ⊓ ∃hasShape.Cube ⊓ ∃hasWeight.Weight ⊓ ∃hasSize.Size ⊓ ∃isPushable.Robot ⊓ ∃isAvoidable.Robot
2 http://dl.kr.org/
V. IMPLEMENTATION
We will investigate specific issues that are unique to robotics and this Semantic Robot concept: i) what should robots do when two robots face the same object? ii) who should perform the appropriate action when many robots are available in the space?
A. Case 1
As shown in Fig. 4, there are two robots in the space: Beego3 (symbolized as a pentagon, X), a medium-sized mobile robot, and Aibo4 (symbolized as a circle, Y), a small pet robot. Both robots face a situation in which an empty box near door001 blocks the entrance. The question is: what should the robots do? Similarly for door002: if no box blocks the entrance, what should the robots do? The semantic solution can be obtained using the ontology illustrated in Fig. 5.
The simplified answers based on the affordance concept are: "Robot X recognized the box located at door001 and has push behavior"; the box affords pushability for robot X. "Robot Y recognized the box located at door001 and has avoid behavior"; the box affords avoidability for robot Y. For the open door002: "Both robot X and robot Y recognized door002 without an obstacle (box) and have cruise behavior"; the door affords pass-through affordance.
Fig. 4. What should the robots do? The box and door001 offer different affordances for different robots. Door002 offers the same affordance for all robots.
Fig. 5. Example of metadata enrichment (not the real ontology). The box located at door001 affords pushability for robot_X and avoidability for robot_Y. Door002 affords pass-through (cruise) ability for all robots.
In this approach, we structure the ontology based upon the abstract goal that the action is attempting to accomplish. We organize actions by an abstract view of why the action is being performed.
B. Case 2
In case 2 (Fig. 6), the sensor device attached to the refrigerator reports that the door is open. Suppose there are many robots there; the question is then who should close the door? Using semantic reasoning, we can obtain the real affordance for every agent in the ubiquitous robot space. In this situation, the possible answers could be:
A1: Robot 1, because he/she is in the kitchen AND has push ability.
A2: Robot 2, because he/she has push ability and Robot 1 is busy, even though he/she is in another room.
A3: NOT Robot 3, but 2 or 1, because even though Robot 3 is in the kitchen, he/she has no push ability.
A4: A human, because Robot 3 tells the human that no robot has the ability to perform the task.
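The reasoning in the two cases can be sketched as a small matching routine: an object's asserted properties (A-box facts) are matched against each robot's abilities to derive the robot's "real" affordance. The following Python sketch is illustrative only; the property and behavior names are hypothetical stand-ins for the ontology's assertions, not the paper's actual implementation:

```python
# Illustrative sketch of Case 1 / Case 2 matching: an object affords a
# relation to a robot only if the robot has every required behavior.

OBJECTS = {  # hypothetical A-box facts: object -> relation -> required behaviors
    "box@door001": {"isPushableBy": {"push"}, "isAvoidableBy": {"avoid"}},
    "door002": {"affordsCruiseTo": {"cruise"}},
}

ROBOTS = {  # hypothetical robot abilities
    "Beego": {"push", "cruise"},   # robot X: medium mobile robot
    "Aibo": {"avoid", "cruise"},   # robot Y: small pet robot
}

def real_affordances(obj, abilities):
    """Return the affordance relations this robot can actually realize."""
    result = []
    for relation, required in OBJECTS[obj].items():
        if required <= abilities:  # the robot has every required behavior
            result.append(relation)
    return result

# The box affords pushing to Beego and avoiding to Aibo;
# door002 affords cruising (pass-through) to both robots.
```

In the actual system this matching is performed by the coordinator over the merged A-boxes and T-boxes rather than over Python dictionaries.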
3 http://www.youtube.com/watch?v=gOgT7of5ywE, developed by the University of Tsukuba, Japan.
4 http://www.sony.net/SonyInfo/News/Press_Archive/199905/99-046/
We can define a class 'Refrigerator' and also define its properties and its context: it is located in the kitchen, has a hexahedron shape, and its door status is 'open' because someone forgot to close it:
Refrigerator ⊑ ∃hasLocation.Kitchen ⊓ ∃hasShape.Hexahedron ⊓ ∃hasPart.Door ⊓ ∃hasSensor.DoorSwitch ⊓ (hasDoorStatus = 'open')
Fig. 6. Who should close the refrigerator's door? Robots 1, 2 and 3 have different affordances perceived from the refrigerator. The affordance may vary depending on distance/location, the possibility of actions, and the intention toward the refrigerator.
Using this approach, we explore whether robots with a specific domain ontology can successfully classify object affordances and perform the necessary actions based on the abilities the robots have. In this case, the Refrigerator affords door closing to Robot 1 and Robot 2. For Robot 2 (located in another room), the Refrigerator, linked to Kitchen by the hasLocation property, can also serve as a landmark for localization.
In this case, we organize the ontology to emphasize the agent who is performing the activity, as opposed to the type of activity that is being performed (Case 1). In this approach, we structure the ontology around the agent who is executing the action. This ontology works in a way similar to human thinking, where a class can have certain relationships with other classes or instances (Refrigerator and Kitchen). The RDF/XML expressions of these cases can be seen in Fig. 8, Fig. 9 and Fig. 11.
C. Affordance formulation
When the system specifies a delegation task to the robots, sensor devices such as RFID, StarLite GPS, camera arrays, etc. can be used to define and also to weight various features for affordance consideration, or as a priority ranking. The ranking algorithm can be based on a simple utility function using weights, i.e. a vector space model [17]. These features are defined as part of the object's data schema. For example, given the 'push affordance' offered to a robot to close the refrigerator's door located in the kitchen, the system could say that the refrigerator must be in the kitchen (100%), that closing the door needs push ability and that this weighs more (125%); additionally, the sensor system could weight the availability of food information in the fridge lower (50%), etc., as shown in Table 1. For each product pi, the affordance relevance R is calculated as follows:
R(r, pi) = φlocation(r, pi) + 1.25 φfunction(r, pi) + 0.5 φIntension(r, pi)   (1)
Table 1. Affordance values per robot.
Conditions | R1 | R2 | R3
Located in the kitchen | 100 | 50 | 100
Has push ability | 125 | 125 | 100
Has sensor value | 50 | 50 | 50
Has mobility | 75 | 75 | 50
Has arm | 75 | 75 | 0
∑ Affordance value (R) | 425 | 375 | 300
Pre-defined threshold (t) = 250 (shows the estimated affordance value needed to close the refrigerator's door)
D. Implementation
To realize the proposed learning method, we used the W3C-standardized web ontology language, OWL5. To construct our ontology, we used the open-source Protégé OWL6 editor and its OWL plugin, and Jena7 is used to manage the ontology. Jena is a Java framework for building Semantic Web applications; it allows us to use SPARQL8 as the subscription/query language for RDF9 data. SPARQL is a new W3C recommendation (15 January 2008). For checking the consistency of the knowledge base, Pellet10 is used as a Java-based description logic reasoner. The usage of these technologies to obtain appropriate affordances is shown in Fig. 7. Examples of RDF/XML expressions and query results of what the environment affords to the robots can be seen in Figs. 8-12. The OWL and RDF/XML files are generated by Protégé OWL, and the query results are produced by using Rasqal11 to perform SPARQL queries over the RDF data.
5 http://www.w3.org/2004/OWL/
6 http://protege.stanford.edu/
7 http://jena.sourceforge.net/
8 http://www.w3.org/TR/rdf-sparql-query/
9 http://www.w3.org/RDF/
10 http://pellet.owldl.com/
11 http://librdf.org/query
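The ranking of Eq. (1) and Table 1 can be checked with a short sketch. The per-robot condition scores below are taken directly from Table 1 (the weights are already folded into the scores, as in the table); the helper names are ours, not part of the implementation:

```python
# Sketch of the utility-function ranking: each robot's affordance
# relevance R is the sum of its weighted condition scores (Table 1),
# and robots at or above the threshold t qualify for the task.

SCORES = {  # condition scores per robot, from Table 1
    "R1": [100, 125, 50, 75, 75],
    "R2": [50, 125, 50, 75, 75],
    "R3": [100, 100, 50, 50, 0],
}
THRESHOLD = 250  # pre-defined threshold t

def relevance(robot):
    """Total affordance value R for one robot."""
    return sum(SCORES[robot])

def qualified(scores, t=THRESHOLD):
    """Robots whose affordance value reaches the threshold, best first."""
    ranked = sorted(scores, key=relevance, reverse=True)
    return [r for r in ranked if relevance(r) >= t]
```

With the Table 1 values this yields R = 425, 375 and 300 for Robots 1, 2 and 3 respectively, all above the threshold of 250, with Robot 1 ranked first.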
Fig. 7. Component usage to obtain appropriate affordances for robots. [Diagram: physical objects (#Kitchen, #Robot, #Box, #MW) are described by RDFS/OWL schemas and queried via SPARQL; the obtained affordance is passed through the middleware to the robot controller.]
Fig. 8. The box blocks robots from entering the room; it affords pushing to Beego and avoiding to the Aibo robot.
Fig. 9. The ball in the corridor affords push ability for both the Beego and Aibo robots:
<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillBall -->
<PhysicalObjects rdf:about="#BillBall">
  <Afford2PushBy>Aibo</Afford2PushBy>
  <isPushedBy rdf:resource="#BillBall"/>
  <owl:sameAs rdf:resource="#BillBall1"/>
  <Afford2PushBy>Beego</Afford2PushBy>
  <isPushedBy rdf:resource="#BillRobot-Aibo"/>
  <isLocatedAt rdf:resource="#BillCoridor"/>
</PhysicalObjects>
Fig. 10. The query result of who has push ability and the appropriate location to perform the push action (in BillCorridor).
Fig. 11. Only Puma has the ability to close the refrigerator and microwave doors:
<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillMicrowave -->
<Microwave rdf:about="#BillMicrowave">
  <Afford2AvoidBy>Aibo</Afford2AvoidBy>
  <hasName>Toshiba MW</hasName>
  <hasExpiryDate rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">201501</hasExpiryDate>
  <isLocatedAt rdf:resource="#BillKitchen"/>
  <Afford2PushBy>Puma</Afford2PushBy>
  <Afford2AvoidBy>Beego</Afford2AvoidBy>
</Microwave>
<!-- http://staff.aist.go.jp/s.hidayat/SR/Affontobibot.owl#BillRefrigerator -->
<Refrigerator rdf:about="#BillRefrigerator">
  <Afford2AvoidBy>Beego</Afford2AvoidBy>
  <hasName>Hitachi Rezoko</hasName>
  <Afford2AvoidBy>Aibo</Afford2AvoidBy>
  <Afford2PushBy>Puma</Afford2PushBy>
  <isLocatedAt rdf:resource="#BillKitchen"/>
</Refrigerator>
Fig. 12. The query result of who has push ability, the location to perform the push action, and some appropriate objects that offer push ability to robots.
VI. CONCLUSION AND FUTURE WORKS
We have proposed a new formulation and method for robots to learn object and context affordances.
An affordance-based ontology has been used to overcome the difficulties of implementing the affordance concept in robotics. Unlike other researchers, we used the Semantic Web, combined with information from sensing devices, to learn environment affordances for robots.
The proposed approach has been designed and will be tested with an implementation in a ubiquitous environment. Testing will initially use simulation to validate the concept, and the approach will then be grounded in ubiquitous robots using OpenRTM-aist (RT Middleware)12 to realize an open robot architecture and the creation of service robots.
ACKNOWLEDGMENT
During this work, Sidiq S. Hidayat was supported by a Hasyia Foundation Scholarship, which he gratefully acknowledges.
REFERENCES
[1] E. Rome, L. Paletta, G. Fritz, H. Surmann, S. May, and C. Lörken, "Multi-sensor affordance recognition," Deliverable D3.2.1, MACS Internal Technical Report, Institute for Intelligent Analysis and Information Systems (FhG-IAIS), Sankt Augustin, Germany, August 2006.
[2] R. Murphy, "Case studies of applying Gibson's ecological approach to mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 29, no. 1, pp. 105-111, 1999.
[3] H. Gardner, The Mind's New Science. New York: Basic Books, 1987.
[4] W. Warren, "Perceiving affordances: Visual guidance of stair climbing," Journal of Experimental Psychology, vol. 105, no. 5, pp. 683-703, 1984.
[5] R. Arkin, Behavior-based Robotics. Cambridge, MA, USA: MIT Press, 1998. ISBN 0262011654.
[6] A. Stoytchev, "Toward learning the binding affordances of objects: A behavior-grounded approach," in Proceedings of the AAAI Symposium on Developmental Robotics, pp. 21-23, March 2005.
[7] A. Stoytchev, "Behavior-grounded representation of tool affordances," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, pp. 18-22, April 2005.
[8] G. Fritz, L. Paletta, M. Kumar, G. Dorffner, R. Breithaupt, and E. Rome, "Visual learning of affordance based cues," in From Animals to Animats 9: Proceedings of the Ninth International Conference on Simulation of Adaptive Behaviour (SAB) (S. Nolfi, G. Baldassarre, R. Calabretta, J. Hallam, D. Marocco, J.-A. Meyer, and D. Parisi, eds.), LNAI vol. 4095, Roma, Italy, pp. 52-64, Springer-Verlag, Berlin, 25-29 September 2006.
[9] E. Ugur, M. R. Dogar, O. Soysal, M. Cakmak, and E. Sahin, "MACSim: Physics-based simulation of the KURT3D robot platform for studying affordances," MACS Project Deliverable 1.2.1, version 1, 2006.
[10] I. Cos-Aguilera, L. Cañamero, and G. Hayes, "Motivation-driven learning of object affordances: First experiments using a simulated Khepera robot," in Proceedings of the 9th International Conference on Cognitive Modelling (ICCM'03), Bamberg, Germany, April 2003.
[11] P. Fitzpatrick, G. Metta, L. Natale, A. Rao, and G. Sandini, "Learning about objects through action: initial steps towards artificial cognition," in Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA), pp. 3140-3145, 2003.
[12] D. Kim, J. Sun, et al., "Traversability classification using unsupervised on-line visual learning for outdoor robot navigation," in IEEE International Conference on Robotics and Automation, 2006.
[13] C. Lörken, "Introducing Affordances into Robot Task Execution," Publications of the Institute of Cognitive Science (PICS), vol. 2-2007, 2007.
[14] S. McIlraith, T. C. Son, and H. Zeng, "Semantic Web Services," IEEE Intelligent Systems, vol. 16, no. 2, pp. 46-53, 2001.
[15] M. Paolucci, T. Kawamura, T. Payne, and K. Sycara, "Semantic matching of Web Service capabilities," in The First International Semantic Web Conference (ISWC), 2002.
[16] A. Langegger and R. Wagner, "Product Finding on Next Generation," in Proc. iiWAS 2006, pp. 49-58.
[17] G. Salton, A. Wong, and C. S. Yang, "A vector space model for automatic indexing," Communications of the ACM, vol. 18, no. 11, pp. 613-620, 1975.
[18] F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider, eds., The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003.
[19] K. Ohara, K. Ohba, B. K. Kim, T. Tanikawa, and S. Hirai, "System design for 'Ubiquitous Robotics' with functions," in The 2nd International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 66-70, 2006.
[20] K. Ohba, H. Onda, T. Tanikawa, B. K. Kim, T. Tomizawa, K. Ohara, X. Liang, Y. S. Kim, H. M. Do, and T. Sugawara, "Universal design for environment and manipulation framework (in Japanese)," in 8th System Integration Division Annual Conference, pp. 926-927, 2007.
[21] T. Suzuki, K. Ohara, N. Shimoyama, N. Ando, K. Ohba, and K. Wada, "Proposal for ubiquitous robot interface 'FUSEN', 1st report: Discussion about a universal middleware for the FUSEN system (in Japanese)," in 8th System Integration Division Annual Conference, pp. 107-108, 2007.
[22] S. S. Hidayat, B. K. Kim, T. Tanikawa, and K. Ohba, "Connecting the physical world and events schedule of user calendar for symbiotic systems," in 16th IEEE RO-MAN, pp. 173-177, 2007.
[23] R. R. Murphy, Introduction to AI Robotics. Intelligent Robots and Autonomous Agents. Cambridge, MA, USA: MIT Press, 2000. ISBN 0262133830.
[24] D. A. Norman, "Affordance, conventions, and design," Interactions, vol. 6, no. 3, pp. 38-42, 1999.
[25] K. Ohara, B. K. Kim, T. Tanikawa, and K. Ohba, "Ubiquitous Spot Service for Robot Environment."
[26] J. J. Gibson, The Ecological Approach to Visual Perception. Hillsdale: Lawrence Erlbaum Associates, p. 127, 1979.
[27] I. H. Suh, G. H. Lim, W. Hwang, H. Suh, J.-H. Choi, and Y.-T. Park, "Ontology-based Multi-layered Robot Knowledge Framework (OMRKF) for robot intelligence," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 429-436, 2007.
[28] M. Lopes, F. S. Melo, and L. Montesano, "Affordance-based imitation learning in robotics," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1015-1021, 2007.
[29] T. M. Strat, Natural Object Recognition. New York: Springer-Verlag, 1992.
12 http://www.is.aist.go.jp/rt/OpenRTM-aist/html-en/