Posts

Depth from Defocus for Mobile Cameras

OVERVIEW

Depth from Defocus (DFD) is a technique in which a depth image of a scene is reconstructed from multiple images captured with varying camera parameters from a single camera [1]. The parameters that affect the defocus characteristics of an image are: the distance to the focus plane, the focal length, and the depth of field, which is controlled by the aperture size. I would like to explore and implement DFD methods on smartphones [2]. My aim is to display a captured scene with some kind of 3D technique, e.g. parallax mapping. One use for this could be simple capture of 3D photos of people or of sculpted art.

Technical Details

In the first stage I will implement a DFD method in MATLAB with image stacks taken by a stationary DSLR camera, i.e. with no translation or parallax between images. This will give me a solid understanding of the mathematics behind the optics and the algorithms. In the second stage I will extend the above me...
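As a sanity check for the focal-stack stage, a closely related depth-from-focus computation can be sketched in a few lines: per pixel, pick the stack slice with the strongest Laplacian response. This is a simplified NumPy-only stand-in on synthetic data, not the full DFD reconstruction of [1]; the toy stack and all names here are illustrative.

```python
import numpy as np

def laplacian_sharpness(img):
    """Absolute Laplacian response as a per-pixel focus measure."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def depth_from_focus(stack):
    """Index of the sharpest slice at each pixel (stack: depth x H x W)."""
    sharp = np.stack([laplacian_sharpness(s) for s in stack])
    return np.argmax(sharp, axis=0)

# Synthetic 3-slice stack: slice 1 contains a sharp step edge, others are flat,
# so pixels near the edge should be assigned depth index 1.
h = w = 8
stack = np.zeros((3, h, w))
stack[1, :, 4:] = 1.0
depth = depth_from_focus(stack)
```

A real implementation would use a larger focus measure window and interpolate between slices for sub-slice depth resolution.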

Image Enhancement using Machine Learning

OVERVIEW

Automatic image enhancement is an active field of research and is widely used in professional image-processing software. For our project, we want to create an automatic image-enhancement tool that learns a user's preferences so that subsequent images can be automatically enhanced in a personalized way. Our work will be derived mainly from [1]. The parameters learned in [1] are associated with contrast and color correction. We will attempt to learn these parameters and, if time permits, experiment with other parameters as well.

The first part of this project is to select an optimal subset of training images from a large image set, using the optimization technique described in [1] and [3]. The subset will contain around 15-20 training images. For each training image, there will be a large number of possible combinations of parameters (depending on how many we choose to learn). To reduce the subset of possible param...
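The "learn a user's preferences" idea can be illustrated, in heavily simplified form, as a regression from per-image features to the parameters the user chose. The learning method in [1] is considerably more involved; the two features, the linear model, and the synthetic "user" below are all assumptions made only to show the training/prediction loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set of 20 images: features = (mean luminance, contrast),
# params = the enhancement parameters (e.g. gamma, gain) the user picked.
features = rng.uniform(0.2, 0.8, size=(20, 2))
true_map = np.array([[1.5, -0.3],
                     [0.2,  1.1]])            # assumed "user preference" model
params = features @ true_map + 0.5

# Fit a linear predictor (with bias term) by least squares.
X = np.hstack([features, np.ones((20, 1))])
W, *_ = np.linalg.lstsq(X, params, rcond=None)

# Predict enhancement parameters for a new, unseen image.
new_feat = np.array([0.5, 0.4])
pred = np.hstack([new_feat, 1.0]) @ W
expected = new_feat @ true_map + 0.5
```

Since the toy data is exactly linear, the least-squares fit recovers the preference model; real preference data would be noisy and would motivate the training-image selection step described above.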

Panorama Based on Light Field Images

Introduction:

A light field camera captures an array of images using a microlens array near the detector, whereas a traditional camera takes a single 2D picture of a 3D scene through one lens. Though its resolution is not as good as a traditional camera's, a light field device still brings many advantages [1,2]. For example, depth detection, post-capture refocusing, 4D feature detection, and even animated 3D "camera shake" thumbnails are feasible by processing light field images. In recent years, the panorama has also been developed: a picture containing a wide viewing angle, usually produced by stitching several 2D images together [3,4]. Our project will look into rendering panoramas by stitching individual light field images, which will both contain a wide viewing angle and allow many depth-related characteristics to be computed via the MATLAB Light Field Toolbox.

Objective Ach...
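At the core of any stitching pipeline is estimating a homography between overlapping views. A minimal NumPy sketch of the Direct Linear Transform (DLT) from four point correspondences is shown below; the pure-translation example stands in for two overlapping sub-aperture views, and a full pipeline would add feature matching and RANSAC.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: homography mapping src -> dst (Nx2 each, N>=4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)       # null vector = homography entries
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to one 2D point (with perspective divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Toy case: the second view is shifted by (10, 0) pixels.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = src + np.array([10.0, 0.0])
H = homography_dlt(src, dst)
pt = apply_h(H, (2.0, 3.0))
```

With exact correspondences the recovered H maps any point by the same translation, so `pt` lands at (12, 3).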

Cell Segmentation in Slide Images

OVERVIEW

The advent of whole slide images (i.e. digitally scanned pathology slides) can help usher in a new era of quantitative analysis of tissue samples. Whereas traditional judgments are made subjectively by a pathologist, image processing and computer vision algorithms can help determine cell densities in neoplastic tissues, quantify the amount of Her2 in breast cancer, and much more. One of the largest challenges in digital pathology is accurate cell segmentation. In tissue samples, cells are closely packed and sometimes overlap, since the tissue being imaged is actually a 3D specimen of which we are taking a 2D image. We propose to develop an image processing algorithm for segmenting distinct cells in tissue samples. A number of algorithms have already been proposed for this, including the Euclidean distance transform, watershed segmentation, combinations of morphology operators, Laplacian of Gaussian filters, MSER detectors, an...

Model based markerless tracking

OVERVIEW

For our project we would like to implement a markerless augmented-reality application on a mobile device; we plan to use an Android device. The goal of the project is to implement a real-time tracking system using a model of a simple object, such as a Rubik's Cube. The principal deliverable is a model-based tracking system implemented on a mobile device. We plan to implement the tracking algorithm ourselves and will not use any built-in tracking tools available in ARToolKit.

Milestones:

Camera calibration: The first step in the project is to calibrate the camera on the mobile device. Accurate camera calibration is essential for implementing a good tracking system. This can be done using the camera calibration plugin provided by ARToolKit, or MATLAB.

Object tracking and camera pose estimation: Next we would like to estimate the camera's pose using a control object whose CAD model i...
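Model-based tracking rests on the pinhole projection of the CAD model's 3D points given the calibrated intrinsics and a candidate pose; pose estimation then minimizes the reprojection error against detected image points. A NumPy sketch of the forward projection is below; the intrinsic values (800 px focal length, 320/240 principal point) and the unit-cube model are assumptions for illustration.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3D model points into the image with a pinhole camera model."""
    cam = pts3d @ R.T + t           # world frame -> camera frame
    uv = cam @ K.T                  # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

# Assumed intrinsics from calibration (focal 800 px, principal point 320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# CAD model: the 8 corners of a unit cube (e.g. a normalized Rubik's Cube).
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                float)

R = np.eye(3)                       # camera aligned with the world axes
t = np.array([0.0, 0.0, 5.0])       # cube 5 units in front of the camera
uv = project(K, R, t, cube)
```

The corner at the origin projects to the principal point (320, 240); a tracker would compare such projections with detected corners and update (R, t) to reduce the residual.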

Creating Drawings from Digital Images

Description

There are many algorithms for non-photorealistic rendering of images to look like drawings. Jin et al. and Li et al. create drawings from images by generating directional lines that match the intensity of the image. However, the resulting images don't stylistically look like an artist drew them. Efros et al. found that image quilting can be used to transfer a specific drawing style to an image so that it looks like it has actually been drawn by an artist. I will attempt to implement their algorithm to create drawings from digital images.

Plan

I believe the following steps will be necessary to create drawings from digital images that resemble the drawings of famous artists:
1. Split the source texture (for example, Enrico Donati's drawing) into overlapping blocks of size B.
2. Split the source image into overlapping blocks of size B.
3. Choose a block f...
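The block-choosing step in the plan above amounts to scanning the source texture for the candidate block with the smallest sum-of-squared-differences (SSD) against the current image block. A minimal NumPy sketch follows; note that full image quilting also scores the overlap with already-placed blocks and cuts a minimum-error boundary, which this sketch omits.

```python
import numpy as np

def best_texture_block(texture, target_block, step=1):
    """Return the texture block with the smallest SSD against a target block."""
    b = target_block.shape[0]
    h, w = texture.shape
    best, best_err = None, np.inf
    for i in range(0, h - b + 1, step):          # scan all candidate positions
        for j in range(0, w - b + 1, step):
            cand = texture[i:i + b, j:j + b]
            err = np.sum((cand - target_block) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

# Toy texture that contains an exact copy of the target block,
# so the search should find it with zero error.
rng = np.random.default_rng(1)
texture = rng.random((16, 16))
target = texture[5:9, 7:11].copy()
block, err = best_texture_block(texture, target)
```

In practice the quadratic scan is vectorized or restricted to the k best candidates, with one picked at random to avoid repetitive texture.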

Plane Extraction on Surfaces

Description:

SLAM has many applications in drones and augmented reality for tracking the 6-DOF position of the camera. Currently there are two state-of-the-art methods: ORB-SLAM and LSD-SLAM. We would choose ORB-SLAM because of the speed benefit of constructing a sparse feature map instead of a dense one, and because of its robustness in tracking as reported in the literature. An alternative, semi-direct visual odometry, is similar to ORB-SLAM but estimates pose with direct methods, using photometric error instead of feature matching, which allows a further speed-up. For now, we plan on using ORB-SLAM because of the Android support available online, but if possible we would like to use the semi-direct visual odometry implementation for tracking, since it is faster; however, there is no loop closure or relocalization included with its open-source code at the current time...
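Given the sparse 3D map that ORB-SLAM produces, the plane-extraction step itself is commonly done with RANSAC: repeatedly fit a plane to three random map points and keep the model with the most inliers. A self-contained NumPy sketch on a synthetic cloud is below; the iteration count and inlier threshold are illustrative choices, not tuned values.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.05, rng=None):
    """Fit the dominant plane (normal . p + d = 0) in a point cloud via RANSAC."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic cloud: 200 points on the plane z = 1 plus 40 distant outliers.
rng = np.random.default_rng(42)
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.ones(200)])
outliers = rng.uniform(-1, 1, (40, 3)) + np.array([0.0, 0.0, 3.0])
cloud = np.vstack([plane_pts, outliers])
(normal, d), inliers = ransac_plane(cloud)
```

On this data the recovered normal is (0, 0, ±1) and all 200 planar points are inliers; on real map points, the extracted inliers would be removed and RANSAC re-run to find further planes.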