Radar Images

We humans like to classify things, and we are actually very good at it. It comes naturally to us. If we see a birch, a spruce, and a pine, we can easily refer to all of them as trees. If we are told about another kind of tree we have never seen, we can assume the general properties it has, and usually we would be quite close to the truth. This kind of classification of ideas and things helps us to remember, understand, and relate different things. We can also group the sensors carried by satellites in orbit in several different ways. One way is to divide them into active and passive. Active sensors use their own source of energy to illuminate objects; this can be thought of as using a flashlight to see in the dark, compared to watching the same area during the day with the help of the Sun, which is what passive sensors do. Another way is to divide them by sensor type: optical sensors, which work much like cameras, and radar-based ones. Below in Figure 1, you can see how the Sentinel-1 (active, radar) and Sentinel-2 (passive, optical) satellites operate.

Figure 1. Example of Sentinel-1 and Sentinel-2 satellites making observations. Sentinel-2 satellites rely on the Sun and (at some wavelengths) Earth-generated electromagnetic radiation for imaging. Sentinel-1 sends its own microwave pulses, which it uses for imaging. Click on the image to enlarge.

 

Radar does not take pictures the way a camera does. It measures distances from where the radar is located to something that bounces the radar beam back. The radar system knows when it sent the pulse and when the reflected pulse came back. The pulse travels at the speed of light, so with the speed and the travel time known, it is possible to calculate the distance to the object. That is how traditional radars work in principle. If this is repeated quickly, measurement after measurement, then based on how the distance changes and how much time passed between the distance observations, it is possible to see how fast the object is moving! An example of this is shown in Figure 2 below.
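The distance and speed calculation described above can be sketched in a few lines of Python (the timing numbers are purely illustrative, not from a real radar):

```python
# Minimal sketch of the classical radar ranging principle.
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_time_s: float) -> float:
    """One-way distance to the target: the pulse travels out and back,
    so the range is half of speed times travel time."""
    return C * round_trip_time_s / 2.0

def radial_speed(r1_m: float, r2_m: float, dt_s: float) -> float:
    """Speed toward or away from the radar, from two successive ranges."""
    return (r2_m - r1_m) / dt_s

r1 = range_from_echo(2.0e-3)      # echo back after 2 ms -> ~300 km away
r2 = range_from_echo(2.0001e-3)   # 0.1 s later the echo takes a bit longer
print(radial_speed(r1, r2, 0.1))  # ~150 m/s, the target is moving away
```

Dividing by two is the key detail: the measured time covers the trip out *and* back, while we want the one-way distance.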

Figure 2. Illustration of the classical radar principles discussed above. Green radar image on the right: Shutterstock. Icons: Freepik. Click on the image to enlarge.

 

In more detail, the word “radar” is short for Radio Detection and Ranging. It simply means that the pulse the radar sends is at radio frequencies: the electromagnetic radiation lies roughly between 3 MHz and 110 GHz, depending on the intended use of the radar. Different wavelengths are suited for different purposes. You can remind yourself about the frequencies and the wavelengths in our earlier article. There are different kinds of radars as well: some send a pulse as we discussed, while others measure continuously, and some send and receive the signals in different places or with different antennas. Radar altimeters are used to measure height, radar scatterometers are used to make very fine measurements of how different things reflect the radar pulse back, and finally we have the imaging radars. If we want to create an image with radar, something similar to the optical images we get from orbit, the process gets more complicated. We will not cover those details in this article; we just aim to create a general understanding. One important thing to remember is the nature of radar: even if the result looks like an image of the ground, it is really a visualization of the strength of the returned reflections at each distance. Examples of errors in distance measurement can be found in Figure 3 below.
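The link between a radar's frequency and its wavelength is the familiar relation wavelength = speed of light / frequency. As a quick sketch, using Sentinel-1's C-band frequency of about 5.405 GHz:

```python
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength of an electromagnetic wave at the given frequency."""
    return C / frequency_hz

# Sentinel-1's C-band radar transmits at about 5.405 GHz:
print(wavelength_m(5.405e9))  # ~0.055 m, a wavelength of about 5.5 cm
```

That few-centimetre wavelength is what makes the surface interactions discussed later so different from those of visible light, whose wavelengths are well under a micrometre.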

Figure 3. Distance measurement errors in radar imaging. Discussed below. Adapted from Luis Veci, Sentinel-1 Toolbox SAR Basics Tutorial, 2015. Click on the image to enlarge.

 

Figure 3 illustrates three typical problems that arise when only range is measured. On the top line are the distances as the radar sees them; on the line below are the distances as they are in reality. Between points one and two, colored green, we get an effect called foreshortening. The radar pulse reflects back from point two more quickly than it would if point two were at the same low elevation as point one. This leads the radar to see the green area as shorter in the radar observation (Figure 3, top line) than it is in reality (Figure 3, bottom line). Between points three and four, colored blue, we get something called layover. It simply means that the radar sees point four as being closer than point three: if you look at the distance from point four to the radar, it is much shorter than the distance from point three. Unfortunately, that does not depict the area correctly. Between points five and six we get a shadow. It can be thought of like the sharp shadows we see outdoors. We see just black, because we get no return reflection from this area, and so we do not know what is there. In radar imaging there are many more error sources. These are corrected with different computational steps and with information we already know about the area, for example its elevation profile.
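The foreshortening effect can be illustrated with a small geometric sketch. The altitude and positions below are made-up toy numbers, not real Sentinel-1 geometry:

```python
import math

def slant_range(radar_x_m: float, radar_h_m: float,
                point_x_m: float, point_h_m: float) -> float:
    """Straight-line distance from the radar to a point in a flat 2-D world."""
    return math.hypot(point_x_m - radar_x_m, radar_h_m - point_h_m)

radar_x, radar_h = 0.0, 700_000.0  # radar 700 km up (toy value)
foot = slant_range(radar_x, radar_h, 300_000.0, 0.0)       # foot of a slope
top = slant_range(radar_x, radar_h, 320_000.0, 2_000.0)    # top, 2 km high
top_if_flat = slant_range(radar_x, radar_h, 320_000.0, 0.0)

# The hilltop reflects the pulse back sooner than a flat point at the same
# ground position would, so the slope looks compressed in the range direction:
print(top < top_if_flat)  # True: this is foreshortening
```

If the hill were steep enough, the top's slant range could become *shorter* than the foot's, and the slope would fold over itself in the image: that is the layover case.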

 

That is how we get a one-dimensional “image” of the area. A one-dimensional image is just a line of reflection strengths ordered by distance, similar to the line at the top of Figure 3. However, images as we understand them are two-dimensional: they have both height and width. For the second dimension, in the case of Synthetic Aperture Radar (SAR) on a satellite, we simply let the satellite itself move. In Figure 4, you can see this visualized. The radar on the satellite sends out measurement pulses, receives them back, and gets a one-dimensional image of the area. It then continues to the next point on its track, sends out another measurement pulse, and gets another one-dimensional image. It repeats this process until it has a preprogrammed number of these 1D images. Then it puts all of them together, and we have a two-dimensional radar “image”. In reality it is not that simple, especially putting them together, but that is the principle.
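The stacking of one-dimensional range lines into a two-dimensional image can be sketched as below. The `measure_range_line` function is a purely hypothetical stand-in for a real radar measurement; real SAR focusing is far more involved:

```python
def measure_range_line(position: int, n_range_bins: int) -> list:
    """Hypothetical stand-in for one radar measurement: a line of
    reflection strengths, one value per distance (range bin)."""
    return [float((position * 7 + rng) % 5) for rng in range(n_range_bins)]

def build_image(n_positions: int, n_range_bins: int) -> list:
    """Stack one 1-D range line per track position into a 2-D grid."""
    image = []
    for pos in range(n_positions):  # the satellite moves along its track...
        image.append(measure_range_line(pos, n_range_bins))  # ...one line each stop
    return image  # rows = track positions, columns = range bins

img = build_image(n_positions=4, n_range_bins=6)
print(len(img), len(img[0]))  # 4 6: four track positions, six range bins each
```

One axis of the result comes from pulse timing (range) and the other from the satellite's motion (along-track), which is why the two image dimensions have quite different characters.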

Figure 4. SAR on its track. It makes one measurement at time t, moves, and makes a second one. An image of an area consists of many of these measurements put together. In reality, one row of pixels may consist of several measurements. Click on the image to enlarge.

 

Radar reflections give us information about the size and shape of materials on the surface in a different way than visible light does. This is because the wavelength of radar waves is much, much longer than the wavelengths of visible light. And because the radar wavelength can be smaller than, about the same as, or much larger than the objects it encounters on the ground, the reflected (return) signal is more difficult to interpret. Figure 5 below demonstrates some of the simpler interactions of radar waves with things on the surface. A “smooth surface” acts as a mirror and sends the incoming radar signal off in another direction, so the radar receives no return signal; lake water on a windless day is a good example of this. A “double bounce” reflection combines two smooth surfaces, and the return signal to the radar is at its maximum, for example from buildings near streets or ships in waveless water. In both of these situations, the surface objects are much, much larger than the wavelength of the incoming radar signal. For a “rough surface”, the surface's shape changes on spatial scales approximately the same as the radar wavelength. In this situation, the radar energy scatters in many directions and only a small return signal is measured. A good example is a field in Pohjanmaa with many stones of different sizes scattered about. A more complicated reflection situation is a forest: here the vegetation layer is made up of a volume of many “objects” about the same size as or smaller than the radar wavelength (trunks, limbs, and leaves). The signal scatters many times inside this volume, and only part of it finds its way back to the radar. The water content of the objects is also important in creating a strong reflected signal.
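Whether a surface counts as “smooth” or “rough” for a radar is often estimated with the Rayleigh roughness criterion: a surface acts like a mirror when its height variations stay below wavelength / (8 · cos(incidence angle)). A sketch, with illustrative numbers for the lake and the stony field:

```python
import math

def looks_smooth(height_variation_m: float, wavelength_m: float,
                 incidence_deg: float) -> bool:
    """Rayleigh criterion: the surface acts like a mirror if its height
    variations are below wavelength / (8 * cos(incidence angle))."""
    limit = wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))
    return height_variation_m < limit

wavelength = 0.055  # Sentinel-1 C band, about 5.5 cm
incidence = 35.0    # an assumed incidence angle in degrees

print(looks_smooth(0.003, wavelength, incidence))  # calm lake, mm ripples -> True
print(looks_smooth(0.10, wavelength, incidence))   # stony field, ~10 cm stones -> False
```

Note how the answer depends on the wavelength: the same stony field could look smooth to a longer-wavelength L-band radar, which is one reason different radar bands suit different applications.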

Figure 5. Different kinds of backscattering. Top left: smooth surface. Lower left: rough surface. Top right: double bounce. Lower right: vegetation layer. Image source NASA Applied Remote Sensing Training Program. Click on the image to enlarge.

 

There are many error sources related to using radar from orbit, and vast differences in how things reflect the radar pulse back. Some of these we know how to correct or interpret, and some we can only try to mitigate as well as we can. Radar imaging is also quite a complicated process. We still want to use radar images in many cases. Why? Wouldn’t camera-like sensors offer better, clearer images and more understandable results? Radar allows us to get information in several situations where optical instruments cannot. For example, darkness does not affect radar. If you recall our last article, images taken at night gave us very little information compared to images taken during the day, but with radar the images are the same, night and day. Also, we can see through the clouds! This is quite important in cloudy areas, such as Finland, where the most recent cloud-free image may be months older than the date we want to investigate. It also matters for time series analysis, that is, analysis based on many images of the same area over time. With radar, we get many times more usable images, and we get them at a steady rate. Lastly, and perhaps most importantly, we get different kinds of information out of radar images than from optical ones, and this additional data leads to a better understanding of our world.