Low-light image enhancement is a vital area of research in computer vision, with applications spanning surveillance, autonomous vehicles, mobile photography, and medical imaging. Conventional image processing methods frequently struggle with the challenges posed by low-light environments, including noise amplification, color distortion, and detail loss. This study investigates the application of deep learning methods to enhance images captured in low-light conditions and improve their visual quality. In particular, we focus on convolutional neural networks (CNNs) and generative adversarial networks (GANs) to learn complex mappings between poorly lit images and their well-illuminated counterparts. We employ benchmark datasets, including the LOL (Low-Light) and SID (See-in-the-Dark) datasets, for training and evaluation. Model performance is measured through both quantitative metrics, such as PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), and qualitative visual assessment. Our findings indicate that deep learning-based techniques significantly outperform traditional approaches in producing clearer, more detailed, and color-accurate enhanced images. This research underscores the promise of data-driven models for complex image enhancement challenges and lays the groundwork for future developments in real-time and resource-constrained settings.
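To make the evaluation metrics mentioned above concrete, the following is a minimal sketch of PSNR and a simplified global SSIM in NumPy. The function names (`psnr`, `ssim_global`) and the example images are illustrative assumptions, not part of this study's codebase; production evaluations typically use a windowed SSIM such as the one in scikit-image.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio in decibels, for images scaled to [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def ssim_global(ref, test, max_val=1.0):
    """Simplified SSIM computed over the whole image as a single window.
    Standard implementations slide a Gaussian window and average local scores."""
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM formula
    c2 = (0.03 * max_val) ** 2
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Illustrative usage: compare a clean image against a noisy copy of itself.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.2f} dB, SSIM: {ssim_global(clean, noisy):.4f}")
```

Higher values of both metrics indicate an enhanced output closer to the well-lit reference; PSNR is unbounded above, while SSIM lies in [-1, 1] with 1 for identical images.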
Keywords: Low-Light Image Enhancement, Deep Learning, Retinex Theory, Convolutional Neural Networks, Image Decomposition, Illumination Map, Noise Suppression, Image Quality Assessment.