Light Field Images for Background Removal

OUTPUT



OVERVIEW

Standard edge detection and foreground/background separation techniques, such as Otsu's method, require a color or intensity difference between the background and the regions to be separated. For example, green screens are routinely set up behind a scene so that there is a clear color difference between background and foreground.

Light field images, which capture the scene in 4D and passively contain its depth information, can be used to approximate this effect. Depth estimation alone could provide a metric for separating foreground from background, but more sophisticated methods are available. By taking the edges detected in a single image from a single viewpoint and analyzing the light field depths around those edges, occlusion edges can be distinguished from plain intensity or color edges. Combined with the rest of the depth information, this allows the foreground and background, in terms of their relative depths, to be identified, after which a variety of processing can be done, including removal of the background.
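To illustrate the depth-only separation idea, Otsu's method can be applied to a depth map instead of an intensity image: the threshold that best splits the depth histogram into two classes separates near from far. This is only a sketch of that one idea, not any of the cited light field methods; the synthetic depth map, the 256-bin histogram, and the hand-rolled `otsu_threshold` helper are all assumptions made for the example.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # weight of the "near" class per bin
    w1 = 1.0 - w0                         # weight of the "far" class
    mu0 = np.cumsum(p * centers)          # unnormalized near-class mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0  # empty-class bins score zero
    return centers[np.argmax(var_between)]

# Synthetic depth map: a near foreground disk (~1 m) on a noisy far wall (~4 m)
yy, xx = np.mgrid[0:64, 0:64]
depth = 4.0 + 0.05 * np.random.default_rng(0).standard_normal((64, 64))
depth[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0

t = otsu_threshold(depth.ravel())
foreground = depth < t                    # pixels nearer than the threshold
```

With a bimodal depth histogram like this one, the recovered threshold lands between the two depth modes, and the foreground mask picks out the disk. Real light field depth maps are far noisier near occlusions, which is exactly why the occlusion-aware edge analysis above is worth implementing.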
In this project I'd like to implement some form of occlusion edge detection and compare it to the depth map obtained from the light field images alone. With the occlusion edges detected and grouped into regions using the depth map, I could take a single image from a single view, focused on the foreground, and remove the background pixels, effectively clipping out the foreground even from a complicated background scene. For example, I'd like to get experiment photos from the lab ready for publication by removing the often cluttered and confusing background of the lab, complete with other students performing experiments.
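The final clipping step can be sketched as a simple masking operation, assuming a per-pixel depth map registered to the image is already in hand; the `remove_background` helper, the `depth_cutoff` parameter, and the flat `fill` value are hypothetical choices for this sketch, not part of the referenced methods.

```python
import numpy as np

def remove_background(image, depth, depth_cutoff, fill=255):
    """Replace every pixel deeper than depth_cutoff with a flat fill value.

    image: (H, W, 3) uint8 array; depth: (H, W) depths registered to the image.
    depth_cutoff and fill are assumed per-scene choices, not fixed constants.
    """
    mask = depth < depth_cutoff        # True on the (nearer) foreground
    out = image.copy()
    out[~mask] = fill                  # paint over the cluttered background
    return out, mask

# Toy example: a near square (depth 1 m) on a far backdrop (depth 4 m)
image = np.random.default_rng(1).integers(0, 255, (32, 32, 3), dtype=np.uint8)
depth = np.full((32, 32), 4.0)
depth[8:24, 8:24] = 1.0

clipped, mask = remove_background(image, depth, depth_cutoff=2.5)
```

In practice the binary mask would come from the depth map refined by the detected occlusion edges, and a feathered or alpha-matted boundary would replace the hard cutoff used here.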

References:

Gelman, A., Berent, J., and Dragotti, P. L. "Layer-based sparse representation of multiview images." EURASIP Journal on Advances in Signal Processing, 2012.

Tosic, I., and Berkner, K. "Light Field Scale-Depth Space Transform for Dense Depth Estimation." IEEE Conference on Computer Vision and Pattern Recognition, 2014.

Wang, T.-C., Efros, A. A., and Ramamoorthi, R. "Depth Estimation with Occlusion Modeling Using Light-field Cameras." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
