Abstract: This thesis presents a novel approach by which a high-dimensional humanoid robot with 18 degrees of freedom can learn, within a few hours, to control its body well enough to perform simple tasks such as rolling over or sitting up. The method is robust: it works equally well when an arm is removed, and in one case where the robot had been trained to use two arms and an arm was then removed, it quickly adapted to its new body. The robot is equipped with an accelerometer that measures the tilt of the torso in two dimensions. This "tilt" space is divided into a discrete set of states, and the dimensionality of the servo space is made irrelevant by allowing only one servo configuration per state. These configurations are evolved using a Self-Organizing Map, while an Artificial Curiosity-driven Reinforcement Learner chooses which state-to-state transitions to attempt. In a final experiment, an additional parameter is added to see whether the agent can also learn to stand; this experiment was, however, unsuccessful.
Keywords: Self-Organized Robotics, Developmental Robotics, Reinforcement Learning, High DoF, Physical Environment, Humanoid Robot, Bioloid, Embodiment, Artificial Curiosity, Kullback-Leibler Divergence, Self-Organizing Map.