CS365 : Artificial Intelligence : Project

Nao : Unsupervised learning of maps for common motions

Unsupervised learning :

In unsupervised learning the machine simply receives inputs x1, x2, . . ., but obtains neither supervised target outputs nor rewards from its environment. It may seem somewhat mysterious to imagine what the machine could possibly learn given that it doesn't get any feedback from its environment. However, it is possible to develop a formal framework for unsupervised learning based on the notion that the machine's goal is to build representations of the input that can be used for decision making, predicting future inputs, efficiently communicating the inputs to another machine, and so on. In a sense, unsupervised learning can be thought of as finding patterns in the data above and beyond what would be considered pure unstructured noise [1].

About Aldebaran Nao :

The Nao SDK provides built-in functions that report the motion state of each type of joint. The Nao also has two CMOS 640 x 480 cameras, which can capture up to 30 images per second.
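For illustration, joint and camera inputs could be logged with the NAOqi Python SDK roughly as in the sketch below. The robot's IP address, the subscriber name, and the resolution/color-space constants (kVGA = 2, BGR = 13) are placeholder assumptions for the example, not values from the proposal; newer NAOqi releases use subscribeCamera in place of subscribe.

    # Minimal logging sketch, assuming the NAOqi Python SDK (naoqi module).
    # NAO_IP/NAO_PORT and the subscriber name are placeholder assumptions.
    import time
    from naoqi import ALProxy

    NAO_IP, NAO_PORT = "192.168.1.10", 9559  # assumed robot address

    motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
    video = ALProxy("ALVideoDevice", NAO_IP, NAO_PORT)

    # Subscribe to the camera: 640x480 (kVGA = 2), BGR color space (13), 30 fps.
    client = video.subscribe("joint_image_logger", 2, 13, 30)

    log = []
    for _ in range(100):
        angles = motion.getAngles("Body", True)  # sensed angles of all joints
        frame = video.getImageRemote(client)     # frame[6] holds the raw bytes
        log.append((time.time(), angles, frame[6]))
        time.sleep(1.0 / 30)

    video.unsubscribe(client)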

Proposal :

We propose to take inputs from the various joints and from the images seen by the Nao, and to recognize patterns between these two different types of input. In the first step we will log the inputs. In the second step we will group the inputs according to patterns derived from the data itself, using known algorithms such as LLE (locally linear embedding, for density estimation and dimensionality reduction) [3], models for time series and other structured data, and SOMs (self-organizing maps); a minimal SOM sketch follows.
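As a rough sketch of the second step, a minimal SOM in NumPy could cluster the logged vectors. The train_som function, its grid size, and its decay schedules are our own illustrative choices, and the random data merely stands in for the real joint-angle/image-feature vectors.

    # Minimal self-organizing map (SOM) sketch in NumPy; parameters are
    # illustrative assumptions, and the data is a stand-in for logged inputs.
    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
        n_units, dim = grid[0] * grid[1], data.shape[1]
        weights = np.random.rand(n_units, dim)
        # Grid coordinates of every unit, for the neighborhood function.
        coords = np.array([(i, j) for i in range(grid[0])
                           for j in range(grid[1])], float)
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in np.random.permutation(data):
                t = step / float(n_steps)
                lr = lr0 * (1 - t)              # decaying learning rate
                sigma = sigma0 * (1 - t) + 0.5  # shrinking neighborhood radius
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian neighborhood
                weights += lr * h[:, None] * (x - weights)
                step += 1
        return weights

    # Stand-in for logged sensor vectors (e.g., joint angles + image features).
    data = np.random.rand(500, 8)
    som = train_som(data)
    print("codebook shape:", som.shape)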

Further Extension :

This idea can be extended to recognize patterns between the inputs from all of the robot's sensors, such as microphones and bumpers, as a step toward a more capable artificial mind.

Use :

The learned representations will help in decision making and in predicting future inputs. For example, the robot can discover by itself that when it moves forward, the image it sees expands; a sketch of this expansion cue follows.
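As an illustrative sketch of that cue, the divergence of dense optical flow is positive when the image expands. The flow_divergence helper is hypothetical, the frames are synthetic stand-ins for consecutive camera images, and OpenCV's Farneback method is one possible flow estimator.

    # Sketch of the "moving forward => image expands" cue: mean divergence
    # of the optical flow field is positive for an expanding image.
    import cv2
    import numpy as np

    def flow_divergence(prev, curr):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Divergence = du/dx + dv/dy, averaged over the frame.
        du_dx = np.gradient(flow[..., 0], axis=1)
        dv_dy = np.gradient(flow[..., 1], axis=0)
        return float(np.mean(du_dx + dv_dy))

    # Synthetic textured frame; blur gives the flow estimator structure.
    prev = np.random.randint(0, 255, (120, 160), np.uint8)
    prev = cv2.GaussianBlur(prev, (9, 9), 0)
    # Simulate forward motion by zooming ~5% around the image center.
    M = cv2.getRotationMatrix2D((80, 60), 0, 1.05)
    curr = cv2.warpAffine(prev, M, (160, 120))

    print("mean flow divergence:", flow_divergence(prev, curr))  # > 0: expansion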

References :

[1] Ghahramani, Z. Unsupervised Learning. Gatsby Computational Neuroscience Unit, University College London, UK. http://www.gatsby.ucl.ac.uk/~zoubin, http://learning.eng.cam.ac.uk/zoubin/

[2] Rojo, J. Artificial Neural Networks: Unsupervised Learning. Laboratorio Calcolo Matematico, Modulo Reti Neurali, INFN, Sezione di Milano.

[3] Saul, L. K. and Roweis, S. T. Think Globally, Fit Locally: Unsupervised Learning of Nonlinear Manifolds. Technical Report MS CIS-02-18, University of Pennsylvania.




Motion Tracking in Aldebaran Nao

Abstract :

This work aims to learn motion flow by observing the temporal-informational correlations between the (visual) sensors and the actuators of the Aldebaran Nao. We will also try to make the robot follow a particular moving stimulus using the learned motion flow.
Our work will be primarily based on the work of Olsson, Nehaniv, and Polani [1].

Introduction :

Many advanced sensory systems have the ability to adapt to their current environment. We are trying to achieve a sensory adaptation mechanism that enables the robot to adapt its sensors to its current environment by online estimation of the statistical structure of the robot's sensory environment.
Here we will try to learn the correlation between the sensors and the actuators of the robot by studying the optical flow caused by the actions of the actuators, and then use it to perform motion tracking.
This is loosely inspired by the way motion detection seems to work in the fly, where sensors are connected to correlators using temporal delays (Harris et al., 1999). The method is exemplified by experiments with a real robot, where the robot starts by learning the structure of its sensors, then learns to detect motion flow, and is finally able to perform simple motion tracking; a sketch of such a delay-and-correlate detector follows.
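A minimal sketch of such a delay-and-correlate (Reichardt-style) detector: each of two neighboring "photoreceptors" is correlated with a delayed copy of the other, and the sign of the opponent output gives the direction of motion. The reichardt helper, the fixed delay, and the synthetic Gaussian signals are our own illustrative assumptions.

    # Reichardt-style correlator on two synthetic photoreceptor signals.
    import numpy as np

    def reichardt(a, b, delay=3):
        # Delayed and current copies of each signal, offset by `delay` samples.
        a_d, b_d = a[:-delay], b[:-delay]
        a_c, b_c = a[delay:], b[delay:]
        # Opponent output: (delayed a)*(current b) - (delayed b)*(current a).
        return np.mean(a_d * b_c) - np.mean(b_d * a_c)

    # A bright blob passes receptor A, then receptor B three samples later.
    t = np.arange(200)
    a = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
    b = np.exp(-0.5 * ((t - 103) / 5.0) ** 2)

    print("A->B motion:", reichardt(a, b))  # positive: motion from A toward B
    print("B->A motion:", reichardt(b, a))  # negative of the above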

Approach :

In this work we plan to follow the same approach as Olsson, Nehaniv, and Polani [1], which is summarized briefly here.
The method for learning motion flow detection utilizes body babbling, cf. Meltzoff and Moore (1997) [3], whereby the robot discovers relations between its motors and temporal correlations in its sensory input, based on the method presented in Olsson et al. (2005a) [4]. Our approach to detecting motion flow is based on the sensory reconstruction method (Pierce and Kuipers, 1997; Olsson et al., 2004), extended by considering temporal correlations between sensors; a sketch of the underlying information-distance computation is given below.
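The sensory reconstruction method groups sensors by an information metric; below is a rough sketch of Crutchfield's information distance [2], d(X, Y) = H(X|Y) + H(Y|X), between two discretized sensor streams, with an optional time lag standing in for the temporal extension. The binning scheme, the lag, and the synthetic signals are our own illustrative assumptions.

    # Information distance d(X, Y) = H(X|Y) + H(Y|X) between two sensor
    # streams, discretized into `bins` levels, with an optional time lag.
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def information_distance(x, y, bins=8, lag=0):
        if lag > 0:  # correlate x(t) with y(t + lag)
            x, y = x[:-lag], y[lag:]
        # Discretize each stream into `bins` equal-width levels.
        xd = np.digitize(x, np.histogram(x, bins)[1][1:-1])
        yd = np.digitize(y, np.histogram(y, bins)[1][1:-1])
        joint = np.histogram2d(xd, yd, bins)[0] / len(xd)
        h_xy = entropy(joint.ravel())
        h_x = entropy(joint.sum(axis=1))
        h_y = entropy(joint.sum(axis=0))
        # H(X|Y) + H(Y|X) = 2*H(X,Y) - H(X) - H(Y)
        return 2 * h_xy - h_x - h_y

    t = np.linspace(0, 20, 2000)
    x = np.sin(t)
    y = np.sin(t - 0.5) + 0.1 * np.random.randn(len(t))  # delayed noisy copy
    print("d(x, y), no lag:", information_distance(x, y))
    # With a lag matching the delay (~50 samples), the distance should shrink.
    print("d(x, y), lag 50:", information_distance(x, y, lag=50))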

References :

[1] Olsson, L., Nehaniv, C. L., and Polani, D. (2005). Discovering motion flow by temporal-informational correlations in sensors. In Proceedings of the Fifth International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, pp. 117-120. Lund University Cognitive Studies.

[2] Crutchfield, J. P. (1990). Information and its metric. In Lam, L. and Morris, H. C. (Eds.), Nonlinear Structures in Physical Systems - Pattern Formation, Chaos and Waves, pages 119-130. Springer Verlag.

[3] Meltzoff, A. and Moore, M. (1997). Explaining facial imitation: a theoretical model. Early Development and Parenting, 6:179-192.

[4] Olsson, L., Nehaniv, C. L., and Polani, D. (2005a). From unknown sensors and actuators to visually guided movement. In Proceedings of the 4th IEEE International Conference on Development and Learning (ICDL-05). IEEE Computer Society Press, in press.