How AI will help radar detect tiny drones 3 kilometers away

Flying High: Task Force Southwest Marines test new drone capabilities (U.S. Marine Corps/Released)

Long before the U.S. Navy fried an Iranian drone over the Strait of Hormuz, the Pentagon was highlighting the difficulties of fending off small unmanned aircraft. The farther away you can spot them, the better. But as drones get smaller, detecting them at a distance isn’t easy.

A team of researchers from South Korea and California has figured out how to detect incredibly small disturbances in radar returns that could indicate the presence of small drones, perhaps as far as three kilometers away — enough to give airports, police, and militaries a big hand in stopping them.

The researchers, from the Daegu Gyeongbuk Institute of Science and Technology and California State University at Fresno, paired an active electronically scanned array, or AESA, radar with a neural-network tool called a Generative Adversarial Network, or GAN.

AESA radars, which steer their multiple radar beams electronically instead of using physical gimbals, have been around for years. The real innovation lies in training software to detect objects, including objects as small as DJI’s popular Mavic drones, in radar imagery. But there’s very little imagery data to train a machine-learning algorithm to see something that small. What’s needed is a dataset of extremely small modulations in the echoes of radar signals.
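To make the idea concrete, here is a rough sketch of what "training software to detect objects" in radar data can look like: a small convolutional network that classifies micro-Doppler spectrograms as drone or clutter. This is not the researchers' code; the input size, class labels, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: a tiny CNN that labels radar micro-Doppler
# spectrograms as "drone" vs. "clutter". Shapes and parameters are
# assumptions for demonstration, not the paper's actual setup.
import torch
import torch.nn as nn

class MicroDopplerClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of single-channel 64x64 spectrogram "images"
        return self.classifier(self.features(x).flatten(1))

model = MicroDopplerClassifier()
dummy_batch = torch.randn(8, 1, 64, 64)   # placeholder data for a shape check
print(model(dummy_batch).shape)           # torch.Size([8, 2])
```

A network like this only works if it has seen enough labeled examples, which is exactly the scarcity problem the GAN is meant to solve.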

The researchers used a GAN to turn a small amount of available training data into an abundance. A GAN pits two conventional neural networks against one another. For instance, one network might learn how to recognize an object — say, a cat — by looking at many slightly varying examples. The second network in a GAN reverses that process: if a conventional network learns that a certain combination of white pixels against a dark background represents a cat, the GAN starts with the finished image and then learns about the combination of white or dark pixels that led the first network to its determination.
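In code, the adversarial pairing looks roughly like the sketch below: a generator learns to produce fake spectrogram-like samples while a discriminator learns to tell real from fake, and each update pushes the other to improve. This is a minimal, hypothetical example; every shape and hyperparameter is an assumption, not the authors' implementation.

```python
# Minimal GAN sketch (illustrative only): generator vs. discriminator.
import torch
import torch.nn as nn

LATENT = 64
IMG = 64 * 64  # flattened 64x64 spectrogram-like sample (assumed size)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),        # produces a fake sample
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),       # probability the input is real
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial update: discriminator first, then generator."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: score real samples high, generated samples low.
    fake_batch = generator(torch.randn(n, LATENT)).detach()
    d_loss = loss(discriminator(real_batch), real_labels) + \
             loss(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its fakes "real".
    fake_batch = generator(torch.randn(n, LATENT))
    g_loss = loss(discriminator(fake_batch), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Usage with random placeholder data standing in for real radar returns:
train_step(torch.randn(16, IMG))
```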

As the second network does its work, it creates slightly varying versions of the data — which themselves can be used for training. That’s how the researchers turned their small micro-Doppler dataset into something robust enough to be useful.
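The augmentation step itself is simple once a generator exists: sample synthetic examples and mix them with the scarce real ones before training the classifier. The sketch below reuses the hypothetical `generator`, `LATENT`, and `IMG` names from the example above and is an assumption about the workflow, not the authors' code.

```python
# Sketch of GAN-based data augmentation: pad a small real dataset with
# generator samples. Relies on the illustrative GAN sketch above.
import torch

def augment_dataset(real_data: torch.Tensor, num_fake: int) -> torch.Tensor:
    """Return real samples plus GAN-generated ones in a single tensor."""
    with torch.no_grad():
        fake_data = generator(torch.randn(num_fake, LATENT))
    return torch.cat([real_data, fake_data], dim=0)

small_real_set = torch.randn(100, IMG)        # placeholder for scarce real returns
training_set = augment_dataset(small_real_set, num_fake=900)
print(training_set.shape)                     # torch.Size([1000, 4096])
```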

“To train a deep neural network, we need to use a large training data set that contains diverse target features. If the data is lacking, then an overfitting issue occurs. GANs were used to augment the data set by producing fake data that have similar distributions with the original data. We expect that the GANs data increase the target feature diversity,” said Youngwook Kim, a professor of electrical and computer engineering at California State University at Fresno.

“It is reasonable to say that our system can ‘detect’ a drone at more than 3 km. However, it will not be a fact if we say we can identify a drone with the help of a GAN at this point. The fact is that we have constructed a platform/idea to be used for the classification, but diverse tests need to be done in the future,” said Kim.

They describe the effort in their paper in the June issue of IEEE Geoscience and Remote Sensing Letters.

___

© 2018 National Journal Group, Inc.

Distributed by Tribune Content Agency, LLC.