Computer Vision
Homomorphic and Laplace Transforms for Image Analysis
Course Project: Digital Image Processing at the University of Houston
Any image can be described as the product of the illumination falling on a scene and the reflectance of the subject. High variations in illumination often lead to poor-quality photos, because the camera cannot capture the full range of intensities and the resulting image is overexposed or underexposed in some areas. Such an image can be enhanced with the Homomorphic Transform: the illumination and reflectance components are separated and filtered in the frequency domain to achieve the desired result. As the final project for a graduate course on Digital Image Processing at the University of Houston, I developed Homomorphic and Laplace transform filters for image enhancement. The programming was done in Python. The work was completed in collaboration with five other classmates, each working on a unique set of filters, and our combined efforts led to a user-friendly GUI that incorporates all filters on a single platform. The code can be provided upon request.
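The core idea can be sketched in a few lines of Python. This is an illustrative outline rather than the course code: it assumes a single-channel (grayscale) image, and the Gaussian high-emphasis parameters gamma_l, gamma_h, and d0 are placeholder values that would be tuned per image.

import numpy as np

def homomorphic_filter(image, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    """Illustrative homomorphic filter: log -> FFT -> high-emphasis filter -> IFFT -> exp."""
    # The log turns the illumination * reflectance product into a sum
    log_img = np.log1p(image.astype(np.float64))

    # Move to the frequency domain
    F = np.fft.fftshift(np.fft.fft2(log_img))

    # Gaussian high-emphasis filter: attenuate low frequencies (slowly varying
    # illumination) and boost high frequencies (reflectance detail)
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    V, U = np.meshgrid(v, u)
    D2 = U**2 + V**2
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0**2))) + gamma_l

    # Filter, return to the spatial domain, and undo the log
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.expm1(filtered)

In practice the filtered output is rescaled to the display range (e.g. 0-255), and the filter parameters are adjusted to balance how strongly illumination is suppressed against how much reflectance detail is boosted.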
GUI showing an image filtered with the Homomorphic Transform
ChemSpecNet: Deep Learning for Hyper-Spectral Chemical Imaging
Optics Communications, Volume 507, 15 March 2022, 127691.
ChemSpecNet is a deep learning framework I developed to bring modern computer vision and machine learning techniques into the field of chemical imaging. It was designed to address a core limitation of Sum Frequency Generation (SFG) spectroscopic imaging: the need for spatial averaging (pixel binning) to overcome low signal-to-noise ratios, which traditionally comes at the cost of spatial resolution.
SFG imaging is a uniquely powerful method for probing surface chemistry, but its weak signals often demand long acquisition times or heavy post-processing. Conventional methods like spectral curve fitting break down in noisy environments and are computationally expensive. ChemSpecNet tackles this challenge by reimagining the problem as a spectral classification task. It uses a supervised neural network to directly identify chemical signatures from noisy pixel-level spectra, enabling high-resolution imaging without compromising detail or speed.
Trained on over a million spectra from Self-Assembled Monolayers (SAMs) on gold substrates, ChemSpecNet achieves:
92% classification accuracy at the single-pixel level (no binning)
Up to 99.5% accuracy using minimal 8×8 binning
Robust generalization across experimental variations
Full-resolution, real-time chemical mapping without the need for long acquisition times
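For context, pixel binning here simply means spatially averaging the spectra of neighboring pixels before classification, which raises the signal-to-noise ratio at the cost of spatial resolution. A minimal NumPy sketch of 8×8 binning of a hyperspectral cube (the array layout and function name are illustrative assumptions, not the published pipeline):

import numpy as np

def bin_spectra(cube, bin_size=8):
    """Average spectra over non-overlapping bin_size x bin_size spatial blocks.
    cube: hyperspectral image of shape (height, width, n_wavenumbers)."""
    h, w, n = cube.shape
    # Drop edge pixels that do not fill a complete block
    h_trim, w_trim = h - h % bin_size, w - w % bin_size
    blocks = cube[:h_trim, :w_trim].reshape(
        h_trim // bin_size, bin_size, w_trim // bin_size, bin_size, n)
    # Averaging over the two block axes boosts SNR but reduces spatial resolution
    return blocks.mean(axis=(1, 3))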
Technically, ChemSpecNet is built with TensorFlow as a fully connected neural network for hyperspectral imaging (a minimal sketch follows this list), with:
Input: mid-IR SFG spectra with 71 wavenumbers per pixel
Outputs: per-pixel chemical identities and a chemical map for the full image
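As a rough illustration of this kind of model (not the exact published architecture: the layer widths, the number of chemical classes, and the training settings below are placeholders), a fully connected classifier over 71-point spectra could be set up in TensorFlow/Keras like this:

import tensorflow as tf

NUM_WAVENUMBERS = 71   # spectral points per pixel
NUM_CLASSES = 4        # placeholder: number of chemical identities to distinguish

# Fully connected classifier mapping one noisy per-pixel spectrum to a chemical class
model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_WAVENUMBERS,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# spectra: (n_pixels, 71) array of SFG spectra; labels: (n_pixels,) integer class IDs
# model.fit(spectra, labels, epochs=20, batch_size=256, validation_split=0.1)

# At inference time, predicting a class for every pixel yields a chemical map:
# probs = model.predict(image_spectra.reshape(-1, NUM_WAVENUMBERS))
# class_map = probs.argmax(axis=1).reshape(height, width)

Because the network classifies each spectrum independently, it can be applied at any binning level, from single pixels up to 8×8 blocks, which is what makes the accuracy figures above comparable across resolutions.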
This project shows the power of data-driven models in domains traditionally governed by physics-based approaches. ChemSpecNet opens new possibilities for fast, high-resolution chemical imaging in materials science, nanotechnology, and biomedical sensing—setting a new standard for applying machine learning in hyperspectral and spectroscopic imaging.
Image generated from 1×1 binning using ChemSpecNet
Image generated from 8×8 binning using ChemSpecNet