Using Adversarial Attacks for Localized Generation of Super-Resolution Artifacts
Abstract
Image super-resolution with deep neural networks, and with generative adversarial models in particular, suffers from visual artifacts. These distortions degrade output quality, and their automatic detection is hampered by the lack of large-scale labeled datasets. This work develops an automated method for creating such datasets to train and evaluate artifact detection models. The proposed method uses an adversarial attack to deliberately induce artifacts in the output images of super-resolution models. Its core is a modification of the Iterative Fast Gradient Sign Method: the modified loss function maximizes distortion inside a specified image region, defined by a binary mask, while simultaneously minimizing it in the rest of the image. This enables the generation of localized artifacts that mimic natural defects. To validate the method, a dataset containing over 2000 examples was created. Experimental results confirmed the high quality of its annotations: detection methods achieved an IoU above 0.7 on it, substantially higher than results on existing datasets. The developed method thus allows for the efficient creation of scalable, high-quality labeled datasets. A neural-network detection method was also developed that outperforms the baseline method. This opens up opportunities for developing more robust super-resolution methods, their subsequent post-processing, and effective artifact detectors.
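The masked-loss attack described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the super-resolution network is replaced by the identity so the gradient is available in closed form, and the function name, step sizes, and the 8×8 test image are all illustrative assumptions.

```python
import numpy as np

def masked_ifgsm_step(x, x_orig, ref, mask, alpha=1/255, eps=8/255):
    """One step of a masked I-FGSM-style attack (toy sketch).

    Loss being ascended:
        L = ||(x - ref) * mask||^2 - ||(x - ref) * (1 - mask)||^2,
    i.e. grow the deviation from the reference inside the mask while
    shrinking it outside. The "SR model" is the identity here, so dL/dx
    is available in closed form; in the real method it would come from
    backpropagation through the super-resolution network.
    """
    diff = x - ref
    grad = 2.0 * diff * mask - 2.0 * diff * (1.0 - mask)   # dL/dx
    x_new = x + alpha * np.sign(grad)                      # gradient-sign ascent
    x_new = np.clip(x_new, x_orig - eps, x_orig + eps)     # stay in the eps-ball
    return np.clip(x_new, 0.0, 1.0)                        # valid pixel range

rng = np.random.default_rng(0)
ref = rng.random((8, 8))                                   # clean reference output
x0 = np.clip(ref + rng.normal(0.0, 0.02, ref.shape), 0, 1) # slightly noisy input
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                                       # target artifact region

x = x0.copy()
for _ in range(10):
    x = masked_ifgsm_step(x, x0, ref, mask)

# Per-pixel squared error inside vs. outside the masked region
mse_in = float((((x - ref) ** 2) * mask).sum() / mask.sum())
mse_out = float((((x - ref) ** 2) * (1 - mask)).sum() / (1 - mask).sum())
```

After the iterations, the error is concentrated inside the masked region and suppressed outside it; in the paper's setting, the binary mask would then serve directly as the ground-truth annotation for the generated artifact.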
Edition
Proceedings of the Institute for System Programming, vol. 38, issue 2, 2026, pp. 7-20
ISSN 2220-6426 (Online), ISSN 2079-8156 (Print).
DOI: 10.15514/ISPRAS-2026-38(2)-1
Full text of the paper in pdf (in Russian)