In our work, we are concerned with simulating a mirror for augmented reality purposes. The long-term goal is for the user’s mirror reflection to become an interactive agent, capable of moving and speaking independently of the user. At the same time, the reflection agent will maintain a close resemblance to the user without slipping into the uncanny valley (no freaking out).

In this age of VR/AR headsets, it is natural to ask: can’t this be done in the VR world? What is the big deal with using mirrors and 2D image processing for this? Short answer: well, yes. You can create a great VR interaction agent with Kinects and other capture systems, port that model into a virtual world, put on your Oculus/HTC Vive/(VR) headset, and have a great conversation with your AI-controlled self. In general, there is no strict need to simulate a mirror with our prototype (which I will describe later). However, not everyone wants to wear a headset, and not everyone wants to (or can) sit still long enough to be captured as a 3D model for an interaction agent. More reasons for building our prototype, and for the usefulness of mirrors, can be found in my (still in production) previous post.

RoboMirror Prototype

The RoboMirror was designed and built by my former lab mate, Yuqi Zhang, for her Master’s project. Initially conceived as an entertainment device, the RoboMirror is more significant for its possible medical applications (see my earlier post).