Restoration of a Lost Map Area of the Underlying Surface Image Using the Reconstruction Method

The article presents a method for recovering lost sections of a map of the underlying surface. Spatial autocorrelation is considered as an image pre-processing step for subsequent analysis when restoring a part of the map in the direction of the radar carrier's course. An algorithm for recovering lost image areas is proposed, and software implementing the algorithm has been developed. The efficiency of the developed algorithm has been evaluated on a test set of images using a statistical criterion.


Introduction
Recognizing the underlying surface and the objects on it in all weather conditions and with high resolution is becoming an increasingly important task. Here we consider three methods for modeling radar images and develop a method for recovering lost sections of a map of the underlying surface.

Method for synthesizing reflective characteristics of complex radar targets in the shortwave wavelength range
First, the basic structures of geometric primitives are formed; these have a certain spatial configuration and are endowed with a certain set of electrophysical properties: a point, a triangle, and an edge. Scene elements are created from the primitives, and each element is assigned its own index characterizing it as a unique element of the radar scene. The objects of observation under study are then assembled from these elements. The considered method makes it possible to simulate the radar characteristics of complex objects at different parts of the path of a radar carrier with a synthesized antenna aperture (SAR) [1]. A discrete representation of the trajectory of motion is used, in the form of a set of separate SAR positions in space relative to the complex object. Each SAR position is characterized by the coordinates of the phase center of the antenna system in the coordinate system of the observed scene, a vector characterizing the direction of the antenna-pattern maximum, the coordinates of the polarization unit vectors of the system, and the velocity vector of the SAR carrier. To calculate the signal reflected at a given part of the SAR trajectory, the visible part of the object's surface is determined for the given viewing angle. Using well-known shading and masking algorithms for the elements of a complex object, the indices of the triangle and edge elements visible to the SAR from this angle are selected from the previously created arrays of geometric-model structures. Accordingly, when calculating the signal reflected from the object, only the surface elements visible at the given moment are used.
A disadvantage of this method is that it requires, as initial data, a detailed geometric model of the real target and of the background formation, together with their individual electrodynamic and statistical models. Another disadvantage is that it cannot produce radar images of complex three-dimensional terrain scenes.
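For illustration, the set of parameters that characterizes a single SAR position in this scheme can be gathered into a small data structure. This is only a sketch; the field names and sample values below are our own assumptions, not taken from [1]:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SarPosition:
    """One discrete position of the SAR carrier relative to the observed scene.

    All vectors are given in the coordinate system of the scene.
    Field names are illustrative, not taken from the cited method.
    """
    phase_center: np.ndarray  # coordinates of the antenna phase center
    boresight: np.ndarray     # direction of the antenna-pattern maximum
    pol_h: np.ndarray         # horizontal polarization unit vector
    pol_v: np.ndarray         # vertical polarization unit vector
    velocity: np.ndarray      # velocity vector of the SAR carrier

# A trajectory is then a list of such positions sampled along the carrier path.
pos = SarPosition(
    phase_center=np.array([0.0, 0.0, 5000.0]),
    boresight=np.array([0.0, 0.7071, -0.7071]),
    pol_h=np.array([1.0, 0.0, 0.0]),
    pol_v=np.array([0.0, 0.7071, 0.7071]),
    velocity=np.array([200.0, 0.0, 0.0]),
)
```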

A method based on asymptotic methods and numerical methods for solving diffraction problems
The method involves solving several problems: geometric modeling, diffraction of electromagnetic waves on the surface of an object of complex shape, and modeling the formation and processing of the trajectory signal. The object's surface is approximated using bicubic patches (Coons, B-spline, Bézier, and Hermite surfaces) as well as tube modules, with subsequent discretization and representation as a set of facets and edges. The diffraction problem is solved with asymptotic methods (physical optics, equivalent currents, elementary edge waves) and with a rigorous method of integral equations based on reducing the boundary conditions to singular and hypersingular integral equations, which are solved numerically by the method of discrete singularities [2]. The disadvantage of this method is the inability to obtain a radar image of the observed three-dimensional scene as a whole.

A method of constructing a radar image of the underlying surface for radar systems with Doppler beam sharpening (DBS) based on information obtained about the surface in the optical wavelength range
In this method, radar images are constructed on the basis of optical images. The initial data is an optical image converted into digital form with a fixed resolution-element value that is invariant to its position in angle and range. This image is a rectangular frame formed in a geographic coordinate system with a specific orientation. The radar image is also formed in a geographic coordinate system, where the direction of the carrier coincides with the orientation (one of the axes) of the optical image frame. This makes it possible to obtain radar images that reproduce the initial optical sources with the greatest accuracy, all other things being equal. The disadvantage of this method is that it requires, as source data, an optical image of the very area whose radar image is to be obtained; in addition, the quality and reliability of the simulated radar image depend on the quality of the optical image [3].

Saliency map for the underlying surface
Most models of visual search, regardless of whether they involve explicit eye movements or shifts of attention, are based on the concept of visual saliency maps: explicitly defined two-dimensional maps that encode salience. Salience («noticeable position», «protruding details») will hereinafter be used as a synonym for visual attention in the context of this task. As noted by L. Itti and C. Koch [4], the resources of any computer system are limited, which leads to the bottleneck effect, and human vision (as a similar system) is no exception. The optic nerve bandwidth is about 10^8 bits/s, which far exceeds the brain's ability to completely process the incoming information and translate and interpret it into conscious experience. Our vision selects certain portions of the incoming information that are preferable for processing, shifting focus from one area to another and performing a series of computations, instead of trying to process everything at once. In other words, despite the illusion that we see everything that surrounds us, at any one moment our vision registers and processes only a small portion of the incoming information [4]. Let us illustrate the construction of a saliency map (attention map) based on this model.

Image reconstruction algorithm
To restore the blind zone in the direction of movement of the radar carrier, this work uses an image recovery method based on the search for similar blocks [5-7].

Correlation analysis
The need for correlation analysis arises when determining how the types of objects present on the mapped terrain influence the effectiveness of the methods for restoring a lost site during mapping [8].
Autocorrelation is a statistical relationship between random variables from the same series, taken with a shift, on the local map. In image analysis, during autocorrelation the template coincides with the image itself, and the shift is performed along the x and y axes.
Next, to compare image areas, we use the autocorrelation functions (1), where R_zz and R_tt are the values of the autocorrelation functions of the image and of the template, respectively.
We break the original image shown in Fig. 2 into blocks of 256 by 256 pixels and compute in each block the spatial correlation function shown in Fig. 3. If the image contains extended fragments, the spatial correlation within them is higher, i.e., it decays more slowly with shift, so the chances of restoration are better.
Fig. 3. Spatial correlation in blocks of 256 by 256 pixels.
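A minimal sketch of this per-block autocorrelation computation (the normalized correlation coefficient as a function of horizontal shift; the names, the shift direction, and the normalization are our own simplifications):

```python
import numpy as np

def block_autocorr(img, block=256, max_shift=32):
    """Normalized spatial autocorrelation along x for each block.

    Returns, for each block's top-left corner, the correlation coefficient
    as a function of shift; slowly decaying curves indicate extended
    fragments that are easier to restore.
    """
    h, w = img.shape
    curves = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            b = img[by:by + block, bx:bx + block].astype(float)
            b = b - b.mean()                     # remove the block mean
            var = (b * b).mean()
            r = [1.0]                            # zero shift: perfect correlation
            for sft in range(1, max_shift + 1):
                num = (b[:, :-sft] * b[:, sft:]).mean()
                r.append(num / (var + 1e-12))
            curves[(by, bx)] = np.array(r)
    return curves

# Toy example: a smooth horizontal ramp keeps high correlation under shift.
ramp = np.tile(np.arange(256, dtype=float), (256, 1))
curves = block_autocorr(ramp, block=256, max_shift=4)
```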

Image recovery method
The reconstruction method is based on texture synthesis. In the first step, for each boundary pixel, the shape of the region for the similarity search is determined adaptively using the inversion method; it is formed by combining two adjacent homogeneous subregions in the direction of the maximum gradient [9]. In the second step, the priority value P(δS) is calculated for each boundary pixel; it is composed of three factors. In the third step, blocks are sought in the area of accessible pixels S for which the Euclidean metric is minimal [10].
The values of the pixels in the area η adjacent to the pixel with the maximum priority p are restored by averaging the corresponding pixels from the found regions. Below is the result of applying the proposed method to a test image with lost pixels in the blind zone. Figs. 4 and 5 show the result of this processing.
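The search-and-fill steps above can be sketched as a greedy block-matching loop. This is a simplified illustration, not the authors' implementation: the adaptive region shape and the three-factor priority are replaced here by a crude count of known neighbours, averaging over several found regions is replaced by copying from the single best match, and all names are our own. The sketch assumes the lost pixels lie at least patch//2 away from the image border:

```python
import numpy as np

def restore(img, mask, patch=5):
    """Greedy block-matching restoration (simplified sketch).

    img  : 2-D float array; values at lost pixels are ignored
    mask : boolean array, True where the pixel is lost
    """
    img, mask = img.astype(float).copy(), mask.copy()
    r, (h, w) = patch // 2, img.shape
    # Candidate source patches: fully known, away from the border.
    src = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
           if not mask[y - r:y + r + 1, x - r:x + r + 1].any()]
    while mask.any():
        # Pick the lost pixel with the most known neighbours (crude priority).
        best_pix, best_known = None, -1
        for y, x in zip(*np.nonzero(mask)):
            if r <= y < h - r and r <= x < w - r:
                known = int((~mask[y - r:y + r + 1, x - r:x + r + 1]).sum())
                if known > best_known:
                    best_known, best_pix = known, (y, x)
        y, x = best_pix
        tgt = img[y - r:y + r + 1, x - r:x + r + 1]
        known = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        # Euclidean distance computed over the known pixels only.
        best, best_d = None, np.inf
        for sy, sx in src:
            cand = img[sy - r:sy + r + 1, sx - r:sx + r + 1]
            d = ((tgt - cand)[known] ** 2).sum()
            if d < best_d:
                best_d, best = d, cand
        img[y, x] = best[r, r]
        mask[y, x] = False
    return img

# Toy example: one lost pixel in a constant image is filled exactly.
demo = np.full((11, 11), 3.0)
lost = np.zeros((11, 11), dtype=bool)
lost[5, 5] = True
out = restore(demo, lost, patch=3)
```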
Analysis of the results shows that the proposed method makes it possible to recover the lost areas of the image effectively.

Calculation of RMSE
For quantitative evaluation of the method, the root mean square error (RMSE) is used [6]. This quality criterion is quite common for measuring the difference between a pair of data sets: RMSE = sqrt((1/N) Σ_i (x_i − y_i)²), where x_i and y_i are the corresponding pixels of the observed (restored) image and the original image, and N is the number of pixels.
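In code, the criterion is a one-liner (a straightforward sketch of the standard formula):

```python
import numpy as np

def rmse(restored, original):
    """Root mean square error between the restored and the original image."""
    restored = np.asarray(restored, dtype=float)
    original = np.asarray(original, dtype=float)
    return float(np.sqrt(np.mean((restored - original) ** 2)))

# Toy example: identical images give zero error.
err = rmse([0.0, 0.0], [3.0, 4.0])
```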
Analysis of the results shows that the higher the spatial correlation, the smaller the recovery error.

Conclusion
The paper presents a method for restoring a blind zone in the direction of movement of the radar carrier, based on a search for similar blocks and their combination. The presented examples demonstrate the effectiveness of the proposed method.