**Contrast Enhancement Model for a Two-way Mirror**
Nkiruka Uzuegbunam

Filming through a Two-way Mirror: Contrast Enhancement Model
===============================================================================

This post describes my research into enhancing images taken through a two-way mirror (TWM). In particular, I seek to derive, from first principles, the radiance hitting the camera from the transparent side of a TWM. This involves modeling the transmission of light through the mirror as well as the effects of haze in the degradation of the image.

[Click here](../index.html) to navigate back to home page.

What are TWMs and how do they work?
-------------------------------------------------------------------------------

One-way/Two-way mirrors (TWMs) are partially reflective and partially transparent constructions, typically composed of a transparent medium thinly coated on one side with a reflective medium. Often used in security, entertainment, and observation settings, they are designed to provide privacy. Unlike plane mirrors (the common everyday mirror), TWMs are generally designed to allow light to pass equally in both directions. A TWM will appear like a plane mirror when one side of the mirror is *much more* brightly lit than the other side. Most TWM vendors state that the reflective side (i.e., the metal-coated side) should face the brightly lit room, while the transparent layer faces the darker room. The reason for this can be seen in the figure below. Further information on the physics and uses of TWMs can be found in my earlier post, [here](one_way_mirror.md.html).

![Figure [model1]: Essential operation of a TWM. Picture obtained from TWM vendor site: http://www.twowaymirrors.com/](../assets/how-does-a-two-way-mirror-work.jpg)

Degradation of an Image Taken through a TWM
-------------------------------------------------------------------------------

Consider the scenarios depicted below. In the first scenario, Figure [model2], the camera views the scene without any obstruction. It is able to receive all the light from the object and from the environment. Let us call the radiance of the object, i.e., the transmitted light from the object, $L_{T}$. *For now, I ignore radiance from the environment; I will update the model if it turns out to matter.*

The radiance from the scene that hits the camera forms the image. Incoming radiance is termed irradiance, so let us call the irradiance on the camera, and hence the image formed, $I\left(x\right)$. I make it dependent on the pixel location $x$ because different points on the sensor receive different amounts of light. Thus, with no obstruction, the image formed will be:

$ I\left(x\right) \approx L_{T} $

![Figure [model2]: Light from object and environment reaches camera without obstruction](../assets/SceneWithoutMirror2.png)

In the second scenario, Figure [model3], the camera views the scene with the TWM in the way. I assume the camera faces the transparent side of the TWM, while the object faces the reflective side.

![Figure [model3]: Light from object and environment reaches camera *after* passing through the TWM obstruction](../assets/SceneWithMirror.png)

As such, the image formed depends on some function of the radiance received from the object, with some added noise:

$ I\left(x\right) = {f}\left( L_{T} \right) + noise $

The question is: *what is this function?*
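To make the question concrete before looking at data, below is a minimal sketch of one plausible family for $f$: a global attenuation, a small convolution, and additive noise. This is only a working hypothesis; every parameter value in the sketch (attenuation factor, blur width, noise level) is an illustrative assumption, not a measurement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def candidate_f(L, attenuation=0.4, blur_sigma=1.5, noise_sigma=0.02, rng=None):
    """One hypothetical degradation: I(x) = a * (k * L)(x) + n(x).

    a is a global attenuation, k a Gaussian blur kernel, and n additive
    Gaussian noise. All parameter values are guesses for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(L, sigma=blur_sigma)      # k * L
    noise = rng.normal(0.0, noise_sigma, size=L.shape)  # n(x)
    return np.clip(attenuation * blurred + noise, 0.0, 1.0)

# Degrade a synthetic scene with intensities in [0, 1].
L_T = np.full((64, 64), 0.8)
I = candidate_f(L_T)
```

Whether such a simple attenuate-blur-add-noise form survives contact with real data is the next question.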
For my scenario, the before-and-after photos of a static scene, taken without and with the TWM in place, are shown below. The first two rows are images taken with the left and right cameras of a stereo camera; the last two rows are images taken with the Microsoft Kinect v2 color camera.

![Figure [model4a]: Left Image without TWM](../assets/camera1_stereo_before1.png width="300px" border="1") ![Figure [model4b]: Left Image with TWM](../assets/camera1_16.png width="300px" border="1")

![Figure [model4c]: Right Image without TWM](../assets/camera2_stereo_before2.png width="300px" border="1") ![Figure [model4d]: Right Image with TWM](../assets/camera2_8.png width="300px" border="1")

![Figure [model4e]: Kinect Image without TWM](../assets/colorKinect_before1.png width="300px" border="1") ![Figure [model4f]: Kinect Image with TWM](../assets/colorKinect_after1.png width="300px" border="1")

![Figure [model4g]: Kinect Image without TWM](../assets/colorKinect_before2.png width="300px" border="1") ![Figure [model4h]: Kinect Image with TWM](../assets/colorKinect_after2.png width="300px" border="1")

From the right column (the *with TWM* images) of Figure 4 above, we can see three things across all images:

1. The image is consistently darker, which implies a reduction in the amount of light reaching the sensor. The darkening is stronger for the stereo camera than for the Kinect camera. In addition, the image of the torch differs between the Kinect camera (third row) and the stereo camera (top two rows): the saturated region around the torch in the third row appears *larger* in the after image, whereas in the stereo images it appears to *shrink*. This suggests that the stereo camera applies an automatic correction, perhaps a contrast-equalization function, while the Microsoft Kinect (as far as I know) applies no correction algorithms. Hence, to capture the true function affecting the irradiance at the camera, I need a camera with no automatic correction. I am therefore ignoring the data collected with the stereo camera and using only the Kinect color camera data for reconstruction.

2. The image is hazy, appearing blurred in some places (Figure [model4f], Figure [model4h]). In addition, as noted in the first point, the TWM appears to have increased the number of saturated pixels overall (see Figure [model5] below). This implies some sort of convolution plus noise. For the noise, I suspect something like salt-and-pepper noise, given the speckled look of the picture. The convolution may be a directional blur (running from bottom left to top right); this is plausible given the new light streak running across the blue folder and the shape of the reflection on the side of the red folder.

3. Finally, *this* TWM appears to have some deterministic (not random) artifacts, as evidenced by the single white pixel in Figure [model5] that appears in both Figure [model4f] and Figure [model4h]. If it were random, the bright spot would not sit in the same location in two different pictures. Either I should clean the TWM, or it has some structural feature that produces this spot each time; I will need to verify this by taking more images.

![Figure [model5]: Saturated pixels, blurring, speckle caused by TWM in Kinect images](../assets/mirrorDefects.png)

In order to decipher what is happening and what to do, I researched, and am researching, several different lines of work (short sketches of each approach follow the list):

0. Contrast Enhancement:
    a. I inherited this project from my colleague, Yuqi Zhang. She tried various options, such as adaptive histogram methods, CLAHE, and the histogram-equalization functions available in MATLAB$^{TM}$ and OpenCV. She used a model of the form $ I\left(x\right) = g (f * L) $, obtaining $g$ by a polynomial fit to points derived from imaging constant-amplitude pictures, and then the parameters of $f$ by regression. She achieved much better enhancement results than CLAHE and the other adaptive histogram methods. However, it is a long and tedious process that, I believe, can be improved upon with current research in image reconstruction. In particular, I was interested in areas that employ a prior in the image-enhancement process, i.e., where some (input, output) pairs are available.

1. Wiener Filtering:
    a. Model: I treated the problem as channel estimation in signal processing. We know the input, $X$, we know the output, $Y$, and we want the inverse of the channel that produced $Y$. That is, $ Y = H ( X ) $, where $H$ is the channel operating on $X$. Applying the inverse, $H^{-1}$, to $Y$ should, in theory, recover a clean $X$. So, in general, computing $ fft(H) = \frac{fft(Y)}{fft(X)} $ (averaged over several inputs) should yield the channel, and an $H^{-1}$ should pop out, in theory. Theory always needs to be refined by practice on the particular problem: this did not work out so well. Going to a full Wiener filter, as described in many image-processing texts, did not work out either. So I moved on to survey the current literature on various forms of image restoration. First, I looked at deblurring, since the TWM images have a hazy-looking (admittedly hand-wavy description) quality.

2. Deblurring (Blind or Non-Blind Deconvolution):
    a. Model: $ f(x) = k * I(x) + noise $, where the noise is usually Gaussian and $k$ is the convolution kernel. If $k$ is known, this is non-blind deconvolution; otherwise it is blind deconvolution. At first, this model did not seem to fit, because the TWM images did not appear blurred the way the images in many of the papers I read did. In addition, many of those papers framed deblurring as recovering an image that had undergone camera motion, and my system has no camera motion. This discouraged me and led me to look at other processes, which in turn led me to dehazing.
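First, a rough sketch of the response-curve step in Yuqi's pipeline (item 0): image several flat, constant-amplitude targets through the TWM, pair each known input level with the mean observed level, and fit a polynomial $g$ (and its inverse, for enhancement). The "measurements" below are synthesized placeholders and the polynomial degree is a guess; neither reflects her actual data or fit.

```python
import numpy as np

# Stand-in for real flat-target measurements: known input levels vs. mean
# pixel values observed through the TWM (synthetic placeholder curve).
input_levels = np.linspace(0.1, 0.9, 9)
observed_levels = 0.6 * input_levels**1.2  # NOT real data

# Fit g (input -> observed) and its inverse (observed -> input);
# the cubic degree is an assumption.
g = np.poly1d(np.polyfit(input_levels, observed_levels, deg=3))
g_inv = np.poly1d(np.polyfit(observed_levels, input_levels, deg=3))

print(g(0.5), g_inv(g(0.5)))  # g_inv(g(x)) should be close to x
```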
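Next, a sketch of the channel-estimation idea from item 1, assuming registered (clean, degraded) image pairs are available. The constant `eps` is an assumed regularizer that keeps the spectral division from exploding where $fft(X)$ is nearly zero; handling that noise amplification properly is what the full Wiener filter is for.

```python
import numpy as np

def estimate_channel(pairs, eps=1e-3):
    """Estimate fft(H) by averaging fft(Y)/fft(X) over (X, Y) image pairs."""
    H_sum, n = 0.0, 0
    for X, Y in pairs:
        Fx, Fy = np.fft.fft2(X), np.fft.fft2(Y)
        # Regularized division: equals Fy/Fx wherever |Fx| >> eps.
        H_sum = H_sum + Fy * np.conj(Fx) / (np.abs(Fx) ** 2 + eps)
        n += 1
    return H_sum / n

def invert_channel(Y, H, eps=1e-3):
    """Apply a regularized inverse of H (a crude Wiener-style step)."""
    Fy = np.fft.fft2(Y)
    return np.real(np.fft.ifft2(Fy * np.conj(H) / (np.abs(H) ** 2 + eps)))
```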
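Finally, a sketch of the non-blind case of the model in item 2, using Richardson-Lucy deconvolution from scikit-image. The Gaussian PSF below is a stand-in guess, since the true TWM kernel $k$ (if one exists) is unknown; not knowing $k$ is precisely what would make the real problem blind.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# Assumed point-spread function: a small normalized Gaussian kernel.
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=1.5)
psf /= psf.sum()

def deblur(blurred, psf, iterations=30):
    """Non-blind deconvolution of f(x) = k * I(x) + noise, given a guessed k."""
    return restoration.richardson_lucy(blurred, psf, num_iter=iterations)
```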
Summary
-------------------------------------------------------------------------------

In this work, we have explained how a TWM works. A TWM, designed for privacy, reflects most of the light hitting it while allowing the rest to pass through. It is composed of a transparent medium, often glass, acrylic, or some other transparent plastic, coated with a very thin layer of a reflective medium, often silver or aluminium. A TWM acts like a mirror when the reflective side is much more brightly lit than the transparent side. In my next post, I will further explain the robotic mirror, my intended use of the TWM.

[Click here](../index.html) to navigate back to home page.