Posts

Digital Make-up Face Generation

Goal

Current make-up applications rely on Photoshop-style tools to apply makeup to a target's digital face and generate results. While these applications allow customization, a customer who wants to quickly decide which make-up kit to buy at a store will not find them useful. The customer may simply want to find out how the make-up look on a cover or billboard would look on her own face. The goal of this project is to take an existing reference image of another subject with make-up applied and transfer the reference's make-up onto the target's face. The application can be further extended to photo retouching and illumination transfer from the reference image to the target.

Methodology

To transfer the make-up from the reference onto the target on a pixel-by-pixel basis, the areas of interest must align. Facial features such as the eyes, nose, mouth, and contours of the face will be recognized using Active Shape Models…
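The alignment step can be illustrated with a toy example. Assuming the landmark-detection stage has already produced corresponding points on the reference and target faces (the coordinates below are made up), a least-squares affine fit is one simple way to bring the two landmark sets into registration before pixel-wise transfer:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding landmark coordinates.
    Returns a 2x3 matrix A such that dst ~= [src | 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

def warp_points(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    n = pts.shape[0]
    return np.hstack([pts, np.ones((n, 1))]) @ A.T

# Toy example: the reference landmarks are a scaled, shifted copy of the
# target's, so the fitted affine should align them exactly.
target = np.array([[30., 40.], [70., 42.], [50., 80.], [50., 60.]])
reference = target * 0.5 + np.array([10., 5.])
A = fit_affine(reference, target)
aligned = warp_points(A, reference)
```

A full pipeline would use this transform (or a denser piecewise warp) to resample the reference's make-up layer into the target's coordinate frame.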

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

1 Introduction

Automated facial expression recognition has proven beneficial in a variety of settings. For instance, in the Wall Lab of the Stanford Medical School, expression recognition is used in a Google Glass application that helps children and adults with autism detect the emotions of the people they interact with. Thus, research into increased classification accuracy for expression recognition can have great impact. Many studies addressing this subject use images with uniform lighting conditions [1]. This is understandable because it allows for accurate evaluation of the recognition algorithm. However, in most practical applications, the emotion recognition task is performed in real-world conditions where the lighting is diverse and far from uniform [2]. In this project, we aim to study the effects of different lighting/shadowing on the emotion recognition task and find…
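As one illustration of a lighting-normalization stage such a pipeline might include (the actual pipeline design is not specified in this excerpt), here is a minimal global histogram equalization in NumPy; real systems often prefer local variants such as CLAHE for side lighting and shadows:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps the darkest occupied intensity to 0 and the brightest to 255,
    spreading the cumulative distribution uniformly in between.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[hist > 0][0]          # cdf at the darkest occupied bin
    scale = 255.0 / (cdf[-1] - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast "dark room" crop: intensities squeezed into [40, 90].
rng = np.random.default_rng(0)
dark = rng.integers(40, 91, size=(64, 64)).astype(np.uint8)
flat = equalize_histogram(dark)
```

After equalization the crop spans the full 0–255 range, which removes a global brightness offset but not directional shadows; that gap motivates studying more targeted normalization.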

Create Pointillism Art with 3 Primary Colors from Natural Images

Description

Pointillism is a branch of impressionism that dates back to the late 19th century. It is a painting technique that uses only tiny, distinct dots to form patterns of color. It enjoys a duality in being both discrete up close (the dots) and continuous from a distance (the patterns). Combining this artistic inspiration with the techniques of digital image processing, we want to develop a special image filter that creates pointillism art from ordinary digital images.

Plan

The process of creating pointillism art will be as follows: (1) We take an image. (2) We apply various image processing techniques to the image, such as blurring it with a structuring element that is larger than, and proportional to, the size of the dots we want to use in our pointillism…
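A minimal sketch of steps (1)–(2) plus the dot rendering, under two assumptions not fixed by the excerpt: dots are laid out on a regular grid, and colors are snapped to the three subtractive primaries (cyan, magenta, yellow):

```python
import numpy as np

PRIMARIES = np.array([[0, 255, 255],     # cyan
                      [255, 0, 255],     # magenta
                      [255, 255, 0]],    # yellow
                     dtype=float)

def pointillize(img, k):
    """Turn an RGB image into k-pixel dots of the 3 subtractive primaries.

    Each k-by-k tile is averaged (a crude blur proportional to dot size),
    snapped to the nearest primary, and drawn as a filled dot on white.
    """
    h, w, _ = img.shape
    h2, w2 = h - h % k, w - w % k
    tiles = img[:h2, :w2].astype(float).reshape(h2 // k, k, w2 // k, k, 3)
    means = tiles.mean(axis=(1, 3))                      # (h2/k, w2/k, 3)
    d = np.linalg.norm(means[..., None, :] - PRIMARIES, axis=-1)
    colors = PRIMARIES[np.argmin(d, axis=-1)]            # nearest primary
    yy, xx = np.ogrid[:k, :k]                            # disk stencil
    disk = (yy - k/2 + .5) ** 2 + (xx - k/2 + .5) ** 2 <= (k / 2) ** 2
    canvas = np.full((h2, w2, 3), 255.0)                 # white paper
    for i in range(h2 // k):
        for j in range(w2 // k):
            cell = canvas[i*k:(i+1)*k, j*k:(j+1)*k]
            cell[disk] = colors[i, j]
    return canvas.astype(np.uint8)

# Toy example: a solid cyan image becomes cyan dots on a white canvas.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[..., 1:] = 255
art = pointillize(img, 8)
```

The actual project may jitter dot positions or dither between primaries; the grid layout here is only the simplest instance of the blur-then-stipple idea.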

Automatic Cell Detection of Liver Tissue Section Image

1 Introduction

The Nusse Lab of the Stanford Institute of Stem Cell Biology & Regenerative Medicine studies the regenerative properties of the liver. The goal of this project is to help graduate students in the Nusse Lab automate the tasks of cell counting and characterization of liver tissue section images, leveraging image processing and machine learning techniques. Currently, cell counting in tissue section images is done by hand, in a laborious manner, because general-purpose image processing software such as ImageJ does not adequately address the specific needs of these types of images, and there are no commercially available products solving this problem [Gri15]. While previous projects have dealt with cell counting or characterization of cell culture images, this project tackles the more difficult problems presented by tissue section images due to their non-homogeneous nature and…
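As a baseline for the automation described above, here is a hedged sketch of cell counting via global thresholding and 4-connected component labeling; real tissue sections would need adaptive thresholding and watershed-style splitting of touching cells, which is exactly where the non-homogeneity makes things hard:

```python
import numpy as np
from collections import deque

def count_cells(img, thresh):
    """Count bright blobs (candidate cells) in a grayscale image.

    Pixels above `thresh` are foreground; each 4-connected foreground
    component, found by breadth-first search, counts as one cell.
    """
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                count += 1                    # new component found
                seen[y, x] = True
                q = deque([(y, x)])
                while q:                      # flood-fill the component
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Toy image with two separated bright blobs on a dark background.
img = np.zeros((20, 20), dtype=np.uint8)
img[2:5, 2:5] = 200
img[10:14, 12:16] = 180
```

On real stained sections the threshold would come from the data (e.g. per-tile statistics) rather than a fixed constant.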

Object tracking via adaptive prediction of initial search point on mobile devices

Goal

The goal of this project is to implement the mapping portion of SLAM, i.e., to develop a mobile application that can create a planar grid from a known reference and then track an object along that grid. To constrain the object field, the camera will be fixed, and the object will be a cube (although the object location given will be strictly planar). The first step is to create the grid; the next is to detect the object and track it.

Approach

Tracking an object is difficult because a camera can easily be distracted by local maxima in similarity. Pan et al., however, developed a method for robustly tracking an object by using a Kalman filter to determine the next location of a traveling object [1]. In their test results, it could accurately detect the location of a car traveling down a highway. From here, the location of the object in the image…
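The Kalman prediction that supplies the initial search point can be sketched as follows, assuming a constant-velocity motion model with position-only measurements (the specific noise covariances below are illustrative, not from Pan et al.):

```python
import numpy as np

def make_cv_model(dt=1.0):
    """Constant-velocity model: state x = [px, py, vx, vy], observe position."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # position integrates velocity
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                # measure position only
    return F, H

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; returns the predicted search point too."""
    x_pred = F @ x                         # predict next state
    P_pred = F @ P @ F.T + Q
    search_point = H @ x_pred              # where to start the image search
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - search_point)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, search_point

F, H = make_cv_model()
Q = 1e-4 * np.eye(4)                       # small process noise
R = 1e-2 * np.eye(2)                       # small measurement noise
x, P = np.zeros(4), np.eye(4)

# Object moving at a constant (2, 1) pixels per frame.
search_points = []
for t in range(1, 30):
    z = np.array([2.0 * t, 1.0 * t])
    x, P, pred = kalman_step(x, P, z, F, H, Q, R)
    search_points.append(pred)
```

Once the filter converges, the predicted point lands on the true trajectory, so the local similarity search only has to refine a small neighborhood instead of scanning the frame.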

Light Field Images for Background Removal

OVERVIEW

Standard edge detection or foreground/background separation techniques, such as Otsu's method, require color or intensity differences between the background and the regions that need to be separated. For example, green screens are routinely set up as the background of a scene so that there is a clear difference in color between the background and the foreground. Light field images, captured in 4D and passively containing depth information for the scene, can be used to approximate this effect. Depth estimation alone could provide a metric to separate the foreground and background, but more sophisticated methods are available. By considering the edge detection from a single image at a single viewpoint and analyzing the depths from the light field around the edges in that image, occluded edges can be distinguished from intensity or color edges. Along with the rest of the depth information, this can allow the foreground and background…
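Since Otsu's method is the named baseline, here is a minimal NumPy implementation applied to a synthetic depth map; the light-field depth estimation that would produce such a map is out of scope for this sketch:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # probability mass of class 0
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 \
        / (w0[valid] * w1[valid])      # between-class variance
    return centers[np.argmax(sigma_b)]

# Synthetic depth map: a near foreground patch (~1 m) on a far
# background (~4 m); thresholding depth instead of color separates them.
rng = np.random.default_rng(1)
depth = rng.normal(4.0, 0.2, size=(64, 64))
depth[20:40, 20:40] = rng.normal(1.0, 0.1, size=(20, 20))
t = otsu_threshold(depth.ravel())
foreground = depth < t
```

The point of the example is the overview's argument in miniature: applied to depth rather than intensity, a purely global threshold already separates the layers without any green screen.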

InSAR-derived Active-Layer Thickness Distributions

Background:

Large-scale thawing of Arctic permafrost has a poorly understood feedback effect on global climate through the release of CO2 and methane. Active Layer Thickness (ALT) is the maximum annual depth of thaw of surface soils and is designated by the World Meteorological Organization (WMO) as an essential climate variable for monitoring the status of permafrost. Interferometric Synthetic Aperture Radar (InSAR) is a widely used geophysical technique for measuring surface deformation at high spatial resolution (Rosen et al. 2000). In recent years, InSAR has been successfully used to measure ground deformation due to seasonal permafrost freeze/thaw cycles and to invert this deformation signature for a spatially extensive, finely sampled map of ALT (Liu et al. 2012; Schaefer et al. 2015).

Proposal:

The ALT retrieval algorithm developed by Liu et al. (2012) generates continuous spatial solutions of ALT within an individual…