Behold The Artificially Intelligent BabyX
Behold the future… Imagine a machine that can laugh and cry, learn and dream, and express its inner responses to how it perceives you are feeling. It can express itself in a natural manner, but it also lets you visualise the mental imagery emerging in its mind.
The Laboratory for Animate Technologies is creating "live" computational models of the face and brain by combining Bioengineering, Computational and Theoretical Neuroscience, Artificial Intelligence and Interactive Computer Graphics Research.
We are developing multidisciplinary technologies to create interactive autonomously animated systems which will define the next generation of human computer interaction and facial animation.
"If I had my time again I'd want to spend it in this lab" – Alvy Ray Smith, Co-founder of Pixar (on his visit to the Laboratory for Animate Technologies).
We believe the best way to simulate biological behaviour is through biological models. We model the brain processes which give rise to behaviour and social learning and use these to animate lifelike models of the face that can interact with you.
BabyX is an interactive, animated virtual infant prototype: a computer-generated psychobiological simulation under development in the Laboratory for Animate Technologies, and an experimental vehicle incorporating computational models of the basic neural systems involved in interactive behaviour and learning.
These models are embodied through advanced 3D computer graphics models of the face and upper body of an infant. The system can analyse video and audio inputs in real time to react to the caregiver's or peer's behaviour using behavioural models.
BabyX embodies many of the technologies we work on in the Laboratory and is under continuous development in its neural models, its sensing systems and the realism of its real-time computer graphics.
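At a high level, BabyX is described as a closed loop: sensory input from the caregiver is analysed, passed through neural and behavioural models, and expressed back through the animated face. The sketch below illustrates that loop in miniature, with a single "affect" variable standing in for the neural models; every name in it (perceive_caregiver, AffectState, drive_face) is a hypothetical illustration, not the Laboratory's actual code.

```python
# Minimal sketch of a sense-model-animate loop in the spirit of BabyX.
# All names here are hypothetical illustrations, not the Laboratory's API.
import math
import random


def perceive_caregiver(t):
    """Stand-in for real-time video/audio analysis: returns a
    'caregiver smile' signal in roughly [0, 1] (here a toy waveform)."""
    return 0.5 + 0.5 * math.sin(t / 10.0) + random.uniform(-0.05, 0.05)


class AffectState:
    """A single leaky-integrator 'affect' variable: the infant's
    internal state drifts toward what it currently perceives."""
    def __init__(self, tau=20.0):
        self.value = 0.0
        self.tau = tau

    def update(self, stimulus):
        self.value += (stimulus - self.value) / self.tau
        return self.value


def drive_face(affect):
    """Map internal affect to a facial-muscle activation
    (e.g. the smile muscle), clamped to [0, 1]."""
    return max(0.0, min(1.0, affect))


state = AffectState()
for t in range(100):                      # one 'frame' per iteration
    stimulus = perceive_caregiver(t)      # sense the caregiver
    affect = state.update(stimulus)       # update internal state
    smile = drive_face(affect)            # animate the face model
    if t % 20 == 0:
        print(f"frame {t:3d}  stimulus={stimulus:.2f}  smile={smile:.2f}")
```

In the real system the stimulus would come from video and audio analysis and the state would span many interacting neural models, but the shape of the loop is the same.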
We create interactive models of neural systems and neuroanatomy, enabling visualisation of the internal processes of the computational simulations that give rise to behaviour.
The Auckland Face Simulator is being developed to cost-effectively create extremely realistic and precisely controllable models of the human face and its expressive dynamics for psychology research.
We are developing the technology to simulate faces both inside and out. We simulate how faces move and how they look, and even their underlying anatomic structure.
We are developing Brain Language [BL], a visual modelling methodology and novel simulation environment for the construction, visualisation and animation of neural systems.
It lets users create animations and real-time visualisations from biologically based neural network models, so simulation effects can be viewed in an interactive context. Such a visual environment is not only suitable for visualising a simulation; it is also ideal for model development.
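As a flavour of the kind of model such an environment hosts, the sketch below simulates a single biologically based unit (a leaky integrate-and-fire neuron) and renders its membrane potential as a crude live trace. This is only a generic illustration of simulating and visualising a neural model together; it is not Brain Language syntax or its API.

```python
# Leaky integrate-and-fire neuron with a crude ASCII 'live trace'.
# Generic illustration only; not Brain Language (BL) code.
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0
TAU, DT = 10.0, 1.0                            # membrane time constant, time step

v = V_REST
for step in range(60):
    drive = 0.15 if 10 <= step < 50 else 0.0   # input current pulse
    v += DT * (-(v - V_REST) + drive * TAU) / TAU
    spiked = v >= V_THRESH
    bar = "#" * int(min(v, V_THRESH) * 40)     # bar length ~ membrane potential
    print(f"{step:3d} |{bar:<40}| {'SPIKE' if spiked else ''}")
    if spiked:
        v = V_RESET                            # reset after a spike
```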
We are developing computer-vision-based systems to track and analyse facial expression, and state-of-the-art algorithms to solve for individual facial muscle activations.
Applications range from real-time expression recognition to microdynamic interaction analysis for psychology research.
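One common way to pose the muscle-activation problem is as a constrained linear inverse problem: tracked landmark displacements are approximated by a non-negative combination of per-muscle displacement bases. The sketch below shows that generic formulation with a synthetic basis; it is not the Laboratory's actual solver.

```python
# Recover muscle activations from tracked expression data via
# non-negative least squares. Generic sketch with synthetic data,
# not the Laboratory's algorithm.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_landmarks, n_muscles = 30, 5          # 30 tracked 2D points, 5 muscles
# Basis: column j = landmark displacements produced by muscle j at full activation
basis = rng.normal(size=(2 * n_landmarks, n_muscles))

true_activation = np.array([0.8, 0.0, 0.3, 0.0, 0.5])   # ground truth for the demo
observed = basis @ true_activation + 0.01 * rng.normal(size=2 * n_landmarks)

# Solve  min || basis @ a - observed ||  subject to  a >= 0
activation, residual = nnls(basis, observed)
print("recovered activations:", np.round(activation, 2))
print("fit residual:", round(float(residual), 4))
```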
http://www.abi.auckland.ac.nz/en/about/our-research/animate-technologies.html
————————————————————————————
This Freaky Baby Could Be the Future of AI. Watch It in Action
https://www.youtube.com/watch?v=yzFW4-dvFDA
It's ugly, not cute, because it's a fake human being with no identity, no soul!!
Does it age or do you just trade it in for an older one, like a later model car?
I don't want it. It's creepy. I guess if they made present people types, I could use it for a troll account
There is no way that network is able to tell apart different data types in the same network. It would be extremely difficult to hold all those different patterns in the same network, given the unlikely way these networks probe all possible configurations before they decide upon a fixed pattern. Anyway, neural nets do not really need real eyes, as all they do is interpret data based on a set of coordinates. The algorithm they use is sum = sum + input * weight, and the same goes for an array of these, called a network.
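For what it's worth, the update the commenter quotes is the standard weighted sum at the heart of an artificial neuron; in code it looks like the sketch below (a generic illustration, unrelated to BabyX's actual models).

```python
# The commenter's "sum = sum + input * weight" is the weighted sum an
# artificial neuron computes; a layer is just an array of such units.
def neuron(inputs, weights, bias=0.0):
    total = bias
    for x, w in zip(inputs, weights):
        total = total + x * w          # sum = sum + input * weight
    return max(0.0, total)             # simple ReLU activation


def layer(inputs, weight_rows, biases):
    """An 'array of these called a network': one weighted sum per unit."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]


print(layer([0.2, 0.7], [[0.5, -0.3], [1.0, 0.8]], [0.1, -0.2]))
```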
It's a cute baby robot. How long did it take to build and program it? – Science fiction goes real.
I sorta want to see an adult AI and this child one communicate. I wonder if the adult would recognize that it's a baby, and the baby recognize that it's an adult.