Track 2: Atmospheric Turbulence Mitigation

Register for this track

Figure: Left, new text recognition data for this year (sub-track 1). Right, sample coded target data (sub-track 2).

Imaging in environments that degrade image quality places a hard constraint on where computer vision algorithms can be effectively applied. Often, an image-domain model for these situations is used with great success: points of interest are first used to estimate the distortion, the rectified images are then fed into a deblurring framework, and finally a patch-matching scheme can be used to denoise the image. More recently, large amounts of imagery, typically input-output pairs, have been used to train complex networks to remove effects that are difficult to model in the image domain. Despite these successes, there remain real-world scenarios in which neither methodology is feasible. Imaging over long distances through atmospheric turbulence presents such a problem. Atmospheric turbulence imparts phase errors on the propagating light field, manifesting in the image as anisoplanatic, temporally varying blur and distortion. Restoring an image or video sequence degraded by atmospheric turbulence is extremely ill-posed, and it remains a challenging but practically relevant computer vision problem.
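To make the classical image-domain pipeline concrete, here is a minimal Python sketch using OpenCV, with dense optical flow standing in for the point-of-interest registration step and non-local means for the patch-matching denoiser. It is illustrative only and not part of the challenge materials.

    import numpy as np
    import cv2

    def restore_stack(frames):
        # Toy classical pipeline: estimate distortion, rectify, fuse, denoise.
        # `frames` is a list of same-size 8-bit grayscale frames of one scene.
        h, w = frames[0].shape
        gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        rectified = [frames[0].astype(np.float32)]
        for f in frames[1:]:
            # Estimate the turbulent distortion as dense optical flow
            # from a reference frame (the registration step).
            flow = cv2.calcOpticalFlowFarneback(frames[0], f, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            # Rectify: pull each pixel of f back along the estimated flow.
            rectified.append(cv2.remap(f.astype(np.float32),
                                       gx + flow[..., 0], gy + flow[..., 1],
                                       cv2.INTER_LINEAR))
        # Temporal fusion suppresses residual tilt; a dedicated deblurring
        # step (e.g., blind deconvolution) would follow in a full pipeline.
        fused = np.mean(rectified, axis=0).astype(np.uint8)
        # Patch-based (non-local means) denoising as the final stage.
        return cv2.fastNlMeansDenoising(fused, None, 10, 7, 21)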

The theories of turbulence and light propagation through random media have been studied for the better part of a century. Yet progress on the associated image reconstruction algorithms has been slow, as the turbulence mitigation problem has not been given the modern treatment of advanced image processing approaches (e.g., deep learning methods) that have positively impacted a wide variety of other imaging domains (e.g., super-resolution). This is due, in part, to the lack of access to imagery degraded by atmospheric turbulence, of standardized testing procedures, and of suitable evaluation metrics.

UG2+ Challenge 2 aims to promote the development of new image reconstruction algorithms for incoherent imaging through anisoplanatic turbulence. These approaches will likely require novel techniques for incorporating the unique image formation process. With the recent development of a scalable simulation tool and efforts to acquire a large-scale dataset, we aim to present, to our knowledge, the most comprehensive set of tools and resources for approaching this problem. The simulator offered for this challenge is differentiable, a property that facilitates integration with deep networks. Models trained using these simulation tools have been shown to achieve near state-of-the-art performance relative to classical methods on real-world sequences. Through this challenge, we aim to bring increased attention to this problem and to alleviate many of the factors limiting progress on image reconstruction for it. Participants will be asked to develop image reconstruction algorithms that target arbitrary, generic images.
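Because the simulator is differentiable, a degradation operator can sit directly inside a training graph and synthesize input-output pairs on the fly. The sketch below illustrates this pattern; toy_turbulence is a hypothetical stand-in (a random smooth warp) for the real simulator [1], and the restoration network is likewise a placeholder.

    import torch
    import torch.nn.functional as F

    def toy_turbulence(clean, strength=0.05):
        # Hypothetical stand-in for the differentiable simulator [1]: a smooth
        # random tilt field applied with grid_sample, so gradients propagate
        # through the degradation.
        n, _, h, w = clean.shape
        tilt = F.interpolate(torch.randn(n, 2, h // 8, w // 8),
                             size=(h, w), mode='bilinear', align_corners=False)
        identity = F.affine_grid(torch.eye(2, 3).expand(n, -1, -1),
                                 list(clean.shape), align_corners=False)
        grid = identity + strength * tilt.permute(0, 2, 3, 1)
        return F.grid_sample(clean, grid, align_corners=False)

    # On-the-fly degradation inside an ordinary supervised training step.
    restorer = torch.nn.Sequential(
        torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(32, 3, 3, padding=1))   # placeholder network
    opt = torch.optim.Adam(restorer.parameters(), lr=1e-4)

    clean = torch.rand(4, 3, 128, 128)          # a batch of ground-truth images
    loss = F.l1_loss(restorer(toy_turbulence(clean)), clean)
    loss.backward()
    opt.step()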

There are two sub-tracks available for this challenge:

  1. Text Recognition through Atmospheric Turbulence. Following the previous year's success, we continue running the same challenge with an upgraded dataset. We use pristine images from the COCO-Text dataset and add real-world turbulence effects using a heat chamber [2]. The participant's task is to improve the image quality so that a text recognition system can successfully recognize the text sequence in the restored images (a toy restore-then-recognize check is sketched after this list). To participate in this challenge, please go to the CodaLab page.
  2. Coded Target Restoration through Atmospheric Turbulence. In this challenge, participants are asked to develop restoration algorithms that can faithfully recover a series of “coded targets” degraded by atmospheric turbulence. The coded target was patented by the U.S. Army DEVCOM C5ISR Center, Research and Technology Integration Directorate, and is used to measure the proportion of information retained after image restoration is applied. The participant's task is to improve the image quality so that the information encoded in the target patterns can be successfully decoded. To participate in this challenge, please go to the CodaLab page.
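The success criterion in sub-track 1 is whether text recognition survives restoration. Below is a toy version of that check, with pytesseract as a stand-in recognizer; the official evaluation's recognition system and scoring may differ.

    import cv2
    import pytesseract

    def text_survives(restored_path, expected):
        # Run an off-the-shelf recognizer on a restored image and compare the
        # prediction with the ground-truth string. pytesseract is a stand-in
        # for whatever recognizer the official scoring uses.
        img = cv2.imread(restored_path, cv2.IMREAD_GRAYSCALE)
        predicted = pytesseract.image_to_string(img).strip()
        return predicted.lower() == expected.lower()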

Materials provided:

  1. Atmospheric Turbulence Simulator. As in previous years, we offer a state-of-the-art atmospheric turbulence simulator [1] at this repository.
  2. Datasets for training (New to 7th UG2+!). New to this year, we additionally offer participants training data generated from our most recent simulations, spread across four datasets. The first pair comes from our Turb-Syn dataset, divided into dynamic and static subsets, with details accessible from its project page. The second pair is the ATSyn dataset, also divided into dynamic and static subsets, with details on its respective project page. (A minimal loading sketch for such paired data follows this list.)
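A minimal PyTorch loader for such paired data might look like the following; the turb/ and gt/ directory names and matched filenames are assumptions, so consult each project page for the actual layout.

    import os
    from PIL import Image
    from torch.utils.data import Dataset

    class TurbPairs(Dataset):
        # Paired turbulent/clean image dataset (sketch). The layout
        # root/turb and root/gt with matching filenames is an assumption,
        # not the datasets' documented structure.
        def __init__(self, root, transform=None):
            self.root, self.transform = root, transform
            self.names = sorted(os.listdir(os.path.join(root, 'turb')))

        def __len__(self):
            return len(self.names)

        def __getitem__(self, i):
            turb = Image.open(os.path.join(self.root, 'turb', self.names[i]))
            gt = Image.open(os.path.join(self.root, 'gt', self.names[i]))
            if self.transform:
                turb, gt = self.transform(turb), self.transform(gt)
            return turb, gt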

If you have any questions about this challenge track, please feel free to email cvpr2024.ug2challenge@gmail.com.

References:
[1] Mao, Z., Chimitt, N., and Chan, S. H. Accelerating Atmospheric Turbulence Simulation via Learned Phase-to-Space Transform. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14759-14768, Oct. 2021.
[2] Mao, Z., Jaiswal, A., Wang, Z., and Chan, S. H. Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and a New Physics-Inspired Transformer Model. In Proceedings of the European Conference on Computer Vision (ECCV), Oct. 2022.

