Light Field Images for Background Removal

OUTPUT
OVERVIEW
Standard edge detection or foreground/background separation techniques, such as Otsu's method, require color or intensity differences between the background and the regions that need to be separated. For example, green screens are routinely set up as the background in a scene so that there is a clear color difference between the background and the foreground. Light field images, captured in 4D and passively containing depth information for the scene, can be used to approximate this effect. Depth estimation alone could provide a metric to separate the foreground from the background, but more sophisticated methods are available. By performing edge detection on a single image from one viewpoint and analyzing the light-field depths around the edges in that image, occlusion edges can be distinguished from intensity or color edges. Along with the rest of the depth information, this can allow the foreground and background...
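As a minimal sketch of this idea, the snippet below applies Otsu's method to an estimated depth map rather than to intensity, splitting near pixels from far ones. The depth map is assumed to have already been recovered from the light field, and all names here are illustrative, not part of the proposal.

```python
# Minimal sketch: once a per-pixel depth map has been estimated from the
# light field, foreground/background separation can be approximated by
# thresholding depth instead of color. The depth values here are synthetic.
import cv2
import numpy as np

def foreground_mask_from_depth(depth: np.ndarray) -> np.ndarray:
    """Split a scene into near (foreground) and far (background) pixels.

    depth: float array of estimated scene depths, one value per pixel.
    Returns a uint8 mask with 255 for foreground pixels.
    """
    # Normalize depth to 8 bits so Otsu's method can pick the split point.
    d8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu chooses the threshold; pixels nearer than it become foreground.
    _, mask = cv2.threshold(d8, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

# Example with synthetic depths: a near square object on a far background.
depth = np.full((240, 320), 5.0)      # background roughly 5 m away
depth[80:160, 120:200] = 1.0          # foreground object roughly 1 m away
mask = foreground_mask_from_depth(depth)
```

The occlusion-edge analysis described above would then refine this coarse mask near depth discontinuities, where a single threshold is least reliable.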
Using Image Processing to Identify and Score Darts Thrown at a Dartboard

Introduction
The game of darts is a throwing sport in which participants toss projectiles at a circular target attached to a vertical surface. The target is divided into many regions that correspond to different point and multiplier values. Darts is commonly played across North America and Europe and is a popular pastime for many, so an application that allows players to keep score more conveniently would be widely beneficial. "Darts" is a general term for a targeting game following this basic premise, and many game variations exist within this archetype, all using the standard dart projectiles with a regulation dartboard. Each game variant may give players different objectives to aim for, but identifying the region of the dartboard in which a dart has landed is necessary for proper scoring. This project proposes the use of image processing to identify thrown darts...
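Once a dart tip has been located in a calibrated image, the scoring geometry reduces to polar coordinates about the board centre: the radius selects the ring (bull, single, triple, or double) and the angle selects one of twenty sectors. The sketch below assumes the board centre, orientation, and pixel-to-millimetre scale have already been calibrated; the ring radii are nominal regulation values, not something specified in the proposal.

```python
# Minimal sketch of dartboard scoring geometry, assuming the dart tip
# position is already expressed in millimetres relative to the board centre
# with y pointing up. All names and the calibration step are illustrative.
import math

# Sector values clockwise from the top (the 20 sector).
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def score_from_position(x_mm: float, y_mm: float) -> int:
    """Score a dart at (x_mm, y_mm) relative to the board centre."""
    r = math.hypot(x_mm, y_mm)
    # Nominal regulation ring radii in millimetres.
    if r <= 6.35:
        return 50                  # inner bull
    if r <= 15.9:
        return 25                  # outer bull
    if r > 170.0:
        return 0                   # off the board
    # Angle measured clockwise from straight up; each sector spans 18 degrees,
    # with the 20 sector centred on the vertical axis (hence the +9 offset).
    theta = math.degrees(math.atan2(x_mm, y_mm)) % 360.0
    base = SECTORS[int((theta + 9.0) // 18.0) % 20]
    if 99.0 <= r <= 107.0:
        return 3 * base            # triple ring
    if 162.0 <= r <= 170.0:
        return 2 * base            # double ring
    return base

print(score_from_position(0.0, 103.0))   # lands in the triple 20 -> 60
```

The image-processing work in the project then amounts to recovering those calibrated coordinates: finding the board, correcting its perspective, and locating each dart tip.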
Face Detection, Extraction, and Swapping on Mobile Devices

OUTPUT:
DOCUMENTATION:
We will use the Viola-Jones face detector in OpenCV as a starting point. Once we have found two faces, we will apply some post-processing to make sure that we have the entire face without any holes. Once we have a binary mask of the faces' fully connected components, we plan to use cvBlobsLib (similar to the regionprops function in Matlab) to do face region labelling and extraction. Choosing the faces to swap will either be random or done through a simple UI. Initially, we will translate each face so that its centroid matches the centroid of the face it replaces. Then, using the relative locations of features detected with OpenCV (such as the eyes, nose, and mouth), we will determine the orientation of the face in the plane parallel to the camera lens. The relative sizes of these features will then allow us to determine the degree of rotation away from the camera. However, if the faces we are...
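A minimal sketch of the detection step and a naive centroid-aligned swap follows, using OpenCV's Haar-cascade (Viola-Jones) detector through the Python bindings rather than the C++ API and cvBlobsLib named above; it omits the mask cleanup and feature-based orientation steps entirely.

```python
# Minimal sketch: detect the two largest faces with OpenCV's Viola-Jones
# detector, then swap the rectangular patches so each face's centroid lands
# where the other's was. Illustrative only; the proposal targets C++/mobile.
import cv2

def detect_two_faces(image_path: str):
    """Return the image and bounding boxes of the two largest faces."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # improve contrast before detection
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(40, 40))
    # Keep the two largest detections as the faces to swap.
    faces = sorted(faces, key=lambda f: f[2] * f[3], reverse=True)[:2]
    return img, faces

def swap_by_centroid(img, faces):
    """Naive swap: resize each face patch into the other's bounding box."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = faces
    a = img[y1:y1 + h1, x1:x1 + w1].copy()
    b = img[y2:y2 + h2, x2:x2 + w2].copy()
    img[y1:y1 + h1, x1:x1 + w1] = cv2.resize(b, (w1, h1))
    img[y2:y2 + h2, x2:x2 + w2] = cv2.resize(a, (w2, h2))
    return img
```

The blob labelling, hole filling, and in-plane/out-of-plane rotation estimates described above would replace the crude rectangle swap with properly extracted and oriented face regions.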