Publications

2010

Learning the Consequences of Actions: Representing Effects as Feature Changes [PDF]
M. Rudolph, M. Mühlig, M. Gienger, and H.-J. Böhme, “Learning the Consequences of Actions: Representing Effects as Feature Changes,” in Proc. EST2010.
Abstract: In advanced Programming by Demonstration (PbD), it is important to give a robot the ability to understand the effects of an action. This ability enables a robot not only to mimic an action but also to imitate, by determining whether an action succeeded, or to emulate, by finding another action that causes the same effects as those observed. In this paper we propose a system that uses a Bayesian network structure to store actions as a representation of their effects. The effects, in turn, are implicitly stored as representations of feature changes in the perceived environment. In its more general form the system can be used to differentiate between actions; in a more specific form it can be used to learn complex mapping functions. We present three experiments: the first shows how to learn actions as a representation of their effects, the second shows how the system can learn a complex mapping function for robot movement, and the third illustrates how to combine these independently learned systems to achieve more complex tasks.
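A concrete reading of “effects as feature changes”: the short Python sketch below (illustrative only, not the paper's code) derives an effect descriptor by differencing the perceived object features before and after an action. All feature names and values are invented.

```python
# Minimal sketch: an action's effect, represented as the change of perceived
# object features between the state before and the state after the action.
# Feature names and values are invented for illustration.

def feature_changes(before: dict, after: dict) -> dict:
    """Return per-feature differences: (after - before) for numeric
    features, (before, after) pairs for categorical ones that changed."""
    changes = {}
    for name in before.keys() & after.keys():
        b, a = before[name], after[name]
        # bool is a subclass of int in Python, so exclude it from "numeric"
        numeric = all(
            isinstance(v, (int, float)) and not isinstance(v, bool)
            for v in (b, a)
        )
        if numeric:
            changes[name] = a - b
        elif b != a:
            changes[name] = (b, a)
    return changes

# An observed "push": the object moves away and topples; its color is unchanged.
before = {"distance": 0.25, "upright": True, "color": "red"}
after = {"distance": 1.0, "upright": False, "color": "red"}

print(feature_changes(before, after))
# e.g. {'distance': 0.75, 'upright': (True, False)}
```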
Diploma Thesis [PDF]
M. Rudolph, “Representing Feature Changes Caused by Actions,” diploma thesis, Hochschule für Technik und Wirtschaft Dresden and Honda Research Institute Europe, 2010.
Abstract: In Programming by Demonstration, a robot needs the ability to split a complex task into subgoals and to find actions that fulfill these subgoals; this is called planning. To be able to plan, actions must be stored in a form that supports inference. A simple, generalizable method for storing actions and their results is therefore one step toward giving a robot the ability to solve complex tasks. This thesis proposes such a system, called Storage of Attribute Differences in Bayesian Nets (SADiNets). A SADiNet represents actions by their effects, i.e., the differences of observed object features over time. The representation is realized with Bayesian networks in which the object features are stored in the nodes and split into two groups: one representing the initial state of the environment before the action took place, and one representing the end state afterwards. SADiNets can categorize unknown actions against already learned actions. Given only one state and the action, a SADiNet can predict the outcome by inferring the possible end state, or infer the initial state in order to plan backwards. The thesis explains the configuration of SADiNets and evaluates them in real-world and simulated experiments.
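The two-group network structure lends itself to a compact illustration. Below is a minimal, self-contained Python sketch of the idea, not the thesis implementation: one action node, one before-state feature node, and one after-state feature node, with inference done by brute-force enumeration instead of a Bayesian network library. All actions, feature values, and probabilities are invented.

```python
# Minimal SADiNet-style sketch: an action node plus one "before" and one
# "after" feature node; the CPT of the after node encodes each action's
# effect as a feature change. All numbers here are invented.

# Priors over actions and over the initial feature value.
P_action = {"push": 0.5, "pull": 0.5}
P_before = {"near": 0.5, "far": 0.5}

# P(after | action, before): how each action tends to change the feature.
P_after = {
    ("push", "near"): {"near": 0.10, "far": 0.90},
    ("push", "far"):  {"near": 0.05, "far": 0.95},
    ("pull", "near"): {"near": 0.95, "far": 0.05},
    ("pull", "far"):  {"near": 0.90, "far": 0.10},
}

def joint(action, before, after):
    """P(action, before, after) under the network factorization."""
    return P_action[action] * P_before[before] * P_after[(action, before)][after]

def normalized(scores):
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

def categorize(before, after):
    """P(action | before, after): match an observed feature change to an action."""
    return normalized({a: joint(a, before, after) for a in P_action})

def predict(action, before):
    """P(after | action, before): predict an action's outcome."""
    return P_after[(action, before)]

def plan_backwards(action, after):
    """P(before | action, after): infer the state the action started from."""
    return normalized({b: joint(action, b, after) for b in P_before})

print(categorize("near", "far"))  # the change near -> far is most likely a push
print(predict("pull", "far"))     # pulling a far object likely brings it near
print(plan_backwards("pull", "near"))
```

Swapping the two toy features for the full before/after feature groups, and the enumeration for a proper inference engine, yields the categorize, predict, and plan-backwards queries described in the abstract.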