Automatic Generation of Anaglyph Images

✔️ Generating Anaglyphs from Light Field Images




✔️ Overview


Light-field imaging systems have received considerable attention recently, especially since the release of Lytro cameras for consumer applications. Conventional cameras capture 2D images, which are projections of a 3D scene. Light-field imaging systems capture not only the projection but also the directions of the incoming light rays that strike the sensor. Specifically, Lytro cameras place an array of microlenses in front of the photosensor to separate the light rays striking each microlens and focus them onto different sensor elements according to their directions. The spatial resolution of the acquired images is significantly lower than that of conventional cameras; however, the acquired light field allows for more flexible image manipulation. Extensive research has been conducted in understanding and developing applications for light-field images [1,2,3].

One potential application of light-field images is the generation of anaglyphs. The standard method for generating an anaglyph requires a pair of images taken from slightly different viewing angles to create the desired 3D effect. It is possible to create an anaglyph from a single image taken with a conventional camera, but doing so requires extensive manual work in a graphics editor such as Photoshop and is not an automated process. Using the rich information captured in a light-field image, an anaglyph can be created with an automated image processing algorithm.
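The standard stereo-pair composition mentioned above can be sketched as follows. This is a minimal illustration in Python/NumPy (the project itself will be implemented in MATLAB) using the common red-cyan convention: the red channel comes from the left view, and the green and blue channels come from the right view.

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a stereo image pair.

    left, right: H x W x 3 uint8 RGB images taken from slightly
    shifted viewpoints.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]   # red channel from the left view
    anaglyph[..., 1] = right[..., 1]  # green channel from the right view
    anaglyph[..., 2] = right[..., 2]  # blue channel from the right view
    return anaglyph

# Toy example with two flat 2x2 "images"
left = np.full((2, 2, 3), 200, dtype=np.uint8)
right = np.full((2, 2, 3), 50, dtype=np.uint8)
out = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then sees only its intended view, producing the depth illusion.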

✔️ Objective

Develop an automated image processing algorithm that generates anaglyph images from light-field images acquired with a Lytro camera, using two different techniques.

✔️ Implementation

The image processing algorithm will involve two different techniques. The first is based on perspective views, while the second is based on depth-field information.

For the first technique, an anaglyph image will be generated using the standard stereo-pair method. The Lytro camera uses a hexagonal arrangement of microlenses to produce the resulting light-field image. By understanding how a light-field camera converts its raw image into a refocused image, one can extract two focused images from slightly different perspective viewing angles. The anaglyph can then be easily generated from the two extracted images to create the desired 3D effect.

For the second technique, which will require more extensive image processing, an anaglyph image will be generated using depth-field information extracted from the light-field image. A depth map of the scene can be computed by identifying, for each pixel, the slice of the focal stack in which that pixel is sharpest. Once the depth map is known, the scene can be segmented into regions corresponding to different depths. Displacing the segmented regions according to the depth-field information creates the desired 3D effect. A blur will be applied to the image to reduce the harshness of the edges introduced by the displacement.

To test the robustness of the image processing algorithm, several images will be acquired and processed. A comparison between the two techniques will be performed to determine the better method for generating anaglyph images from light-field images. Note that this project will be implemented in MATLAB and does not require the use of an Android device.
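The depth-from-focal-stack step can be sketched as follows. This is an illustrative Python/NumPy version (the project uses MATLAB): a simple 4-neighbour Laplacian serves as a per-pixel sharpness measure, and the depth index at each pixel is the focal-stack slice where that measure is largest.

```python
import numpy as np

def depth_from_focal_stack(stack):
    """Estimate a coarse depth map from a focal stack.

    stack: S x H x W array of grayscale images, each refocused at a
    different depth. Returns an H x W map of slice indices, where each
    pixel takes the index of the slice in which it appears sharpest.
    """
    sharpness = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack):
        img = img.astype(float)
        # 4-neighbour Laplacian magnitude as a local sharpness measure
        # (np.roll wraps at the borders; fine for a sketch)
        sharpness[i] = np.abs(
            4 * img
            - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
            - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)
        )
    return np.argmax(sharpness, axis=0)

# Toy stack: only slice 1 has local contrast (an impulse at (4, 4)),
# so the depth map assigns index 1 around that location.
stack = np.zeros((3, 8, 8))
stack[1, 4, 4] = 1.0
depth = depth_from_focal_stack(stack)
```

A real implementation would smooth or regularize the sharpness volume before the argmax, since per-pixel maxima are noisy in textureless regions; the resulting depth map then drives the segmentation and displacement described above.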


✔️ References


[1] A. Mousnier, E. Vural, and C. Guillemot, "Partial light field tomographic reconstruction from a fixed-camera focal stack," Campus Universitaire de Beaulieu, Rennes, France, 2015.
[2] M.W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, "Depth from Combining Defocus and Correspondence Using Light-Field Cameras," International Conference on Computer Vision, Sydney, Australia, 2013.
[3] W. Lu, W.K. Mok, and J. Neiman, "3D and Image Stitching With the Lytro Light-Field Camera," Dept. of Comp. Sci., City College of New York, New York, NY, 2013.


✔️ SOURCE CODE CLICK HERE
