Spatio-temporal colour correction of strongly degraded movies
Journal article, Peer reviewed
Original version: Islam, A. B. M. T. & Farup, I. (2011). Spatio-temporal colour correction of strongly degraded movies. In: Proceedings of SPIE, the International Society for Optical Engineering, Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications, 24-27 January 2011, San Francisco, California, United States. SPIE - International Society for Optical Engineering. http://dx.doi.org/10.1117/12.872105
The archives of motion pictures represent an important part of our precious cultural heritage. Unfortunately, these cinematographic collections are vulnerable to various distortions, such as colour fading, that are beyond the capability of photochemical restoration processes. Spatial colour algorithms such as Retinex and ACE provide helpful tools for restoring strongly degraded colour films, but there are challenges associated with these algorithms. We present an automatic colour correction technique for the digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. To cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. In this way, we achieved an 80 percent reduction in computational time compared to processing every single frame individually. Two user experiments showed that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.
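The PCA-space preprocessing mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the `gain` parameter, and the assumption that the first principal component of faded footage is achromatic (so the remaining two components carry chroma) are all illustrative choices, not details given in the paper.

```python
import numpy as np

def pca_saturation_boost(img, gain=2.0):
    """Sketch of saturation enhancement in a PCA space.

    img:  float array of shape (H, W, 3), values in [0, 1].
    gain: hypothetical amplification factor for the second and third
          principal components, which are assumed here to carry the
          chromatic content of the faded frame.
    """
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    centred = pixels - mean
    # Eigen-decomposition of the 3x3 channel covariance matrix.
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns ascending eigenvalues; reorder so the first
    # component (largest variance) plays the achromatic role.
    order = np.argsort(eigvals)[::-1]
    eigvecs = eigvecs[:, order]
    # Project into the PCA basis and amplify the two chromatic axes.
    coords = centred @ eigvecs
    coords[:, 1:] *= gain
    boosted = coords @ eigvecs.T + mean
    return np.clip(boosted, 0.0, 1.0).reshape(h, w, 3)
```

Because colour fading leaves the channels highly correlated, most of the variance collapses onto a single grey-like axis; decorrelating in PCA space lets the small chromatic residual be amplified without disturbing the lightness structure, which is the rationale the abstract gives for this preprocessing step.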
This is a copy of the journal's version originally published in Proc. SPIE 7866: http://spie.org/x10.xml?WT.svl=tn7. Reprinted with permission of SPIE.