Lens Flare Attenuation Accelerator Design with Deep Learning and High-Level Synthesis
Chapter, conference object (accepted version)
Permanent link: https://hdl.handle.net/11250/3121593
Publication date: 2023
Original version: 2023 IEEE Nordic Circuits and Systems Conference (NorCAS), DOI: 10.1109/NorCAS58970.2023.10305455

Abstract
Lens flare artifacts are undesired visual distortions caused by stray light, which can degrade the integrity and quality of an image. These artifacts pose a significant challenge in industrial applications such as automotive and surveillance, where the quality and reliability of camera input images are crucial. Artificial intelligence, particularly deep learning neural networks, has shown promising results in attenuating lens flare.

In this work, a synthetic flare dataset is generated, and an iterative training process that includes an evaluation of transfer learning is employed to develop FlareNet, the first compact and lightweight U-Net-based model for lens flare reduction. The FlareNet architecture, with fewer than 150,000 parameters across its convolutional layers, improves image quality by reducing flare artifacts on both synthetic test images and real-life images, indicating its potential for visually satisfactory results despite having less than 0.5% of the weights of the state-of-the-art neural architecture used for this application.

To demonstrate the viability of using a model such as FlareNet as a hardware accelerator, the neural network is implemented in C++ using Vitis HLS. Synthesis and validation are performed with the Vitis tool, and the reports are analyzed while experimenting with HLS optimization directives. Resource utilization below 20% on a Zeus Zynq UltraScale FPGA is demonstrated, but further work is needed to optimize the design for real-time operation and to deploy the solution effectively on an FPGA.
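To give a flavor of the HLS implementation style described above, the following is a minimal sketch of a single 3x3 same-padding convolution of the kind that makes up FlareNet's layers, written as synthesizable C++ with Vitis HLS optimization directives (PIPELINE and UNROLL). The image size, function name, and pragma choices are illustrative assumptions, not details taken from the paper; on a standard compiler the `#pragma HLS` lines are simply ignored, so the function can also be tested in software.

```cpp
#include <cassert>

// Assumed tile dimensions for illustration only.
#define H 8
#define W 8

// One 3x3 convolution with zero ("same") padding, HLS-style.
// In Vitis HLS, PIPELINE targets one output pixel per cycle (II=1)
// and UNROLL flattens the small kernel loops into parallel MACs.
void conv3x3(const float in[H][W], const float k[3][3], float out[H][W]) {
    for (int r = 0; r < H; ++r) {
        for (int c = 0; c < W; ++c) {
#pragma HLS PIPELINE II=1
            float acc = 0.0f;
            for (int i = -1; i <= 1; ++i) {
                for (int j = -1; j <= 1; ++j) {
#pragma HLS UNROLL
                    int rr = r + i, cc = c + j;
                    // Zero padding: skip taps that fall outside the image.
                    if (rr >= 0 && rr < H && cc >= 0 && cc < W)
                        acc += in[rr][cc] * k[i + 1][j + 1];
                }
            }
            out[r][c] = acc;
        }
    }
}
```

A C++ testbench exercising such a function with known kernels (e.g. the identity kernel, which must reproduce the input) is the kind of validation the Vitis C-simulation flow performs before synthesis.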