
Joint Multisensor Fusion and Tracking Using Distributed Radars

Topland, Morten Pedersen
Doctoral thesis
Fulltext (PDF) available (1.551Mb)
URI
http://hdl.handle.net/11250/2381357
Date
2016
Collections
  • Institutt for teknisk kybernetikk [2866]
Abstract
This thesis deals with multisensor fusion in the presence of systematic errors in the context of target tracking. A typical target tracking problem consists of multiple sensors producing position measurements of multiple targets, for instance aircraft. The goal is to establish tracks on all targets that are observed by the sensors. A track usually consists of an ID, such as a track number, a target position, and a target velocity. The systematic errors are modeled as measurement biases. If unaccounted for, these biases may lead to inaccurate estimates of the target state (position and in particular velocity), and a single target may appear as several targets (ghost tracks) if the biases are large enough. Furthermore, if all sensors are biased, it is challenging to find an unbiased estimate of the target state with respect to a coordinate system independent of the sensors.
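
As a hedged illustration of the measurement-bias problem (not taken from the thesis), the Python sketch below shows two radars reporting the same target with different constant biases; the sensor names, bias values, and noise level are hypothetical.

```python
import numpy as np

# Hypothetical illustration (not from the thesis): two radars observe the same
# target, but each report is offset by a constant measurement bias.  If the
# biases are not estimated and removed, the two reports can look like two
# distinct targets ("ghost tracks").

rng = np.random.default_rng(0)

target = np.array([10_000.0, 5_000.0, 3_000.0])   # true target position [m]

# Assumed additive biases per sensor (net effect of alignment/location/sensor errors)
bias = {
    "radar_A": np.array([ 150.0, -80.0,  40.0]),
    "radar_B": np.array([-120.0,  60.0, -30.0]),
}
noise_std = 25.0                                   # measurement noise std [m]

for name, b in bias.items():
    z = target + b + rng.normal(0.0, noise_std, size=3)   # biased, noisy measurement
    print(name, "reports", np.round(z, 1))

# The two reports differ by roughly the difference of the biases (hundreds of
# metres), far more than the noise level, so a tracker that ignores the biases
# may open a separate track for each sensor's view of the same target.
```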

In this thesis the sensors are radars producing measurements in 3D. The systematic errors (biases) are called alignment bias, location bias and sensor bias. The first two are related to sensor deployment, as they describe errors in orientation (misalignment) and sensor placement (location). The sensor bias addresses errors caused by sensor imperfections. These biases are estimated relative to a sensor-independent coordinate system and relative to a sensor of reference (master sensor). A novel distinction is made in this context, where a universal bias estimator (UBE) is used relative to sensor-independent coordinates, while an absolute bias estimator (ABE) is used relative to a master sensor. The estimability of the biases is investigated using a novel estimability index, which quantifies how accurately a bias can be estimated with the available measurements. The estimability index is based on the Cramér-Rao Lower Bound.
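
The abstract does not define the estimability index itself, so the following is only a sketch of one CRLB-based quantity of this kind: for a linearised bias model, the Fisher information gives a lower bound on the bias estimation error, and the bound can be compared with an assumed prior uncertainty. The Jacobian, noise levels and prior values are hypothetical.

```python
import numpy as np

# Illustrative CRLB-based "estimability" check (the thesis's exact index is not
# given in the abstract; the ratio used below is an assumption for this sketch).
# Linearised model: z = H b + v,  v ~ N(0, R), with bias vector b.

H = np.array([[1.0, 0.0, 0.2],      # hypothetical Jacobian of the measurements
              [0.0, 1.0, 0.1],      # with respect to the bias components
              [0.3, 0.0, 1.0],
              [0.0, 0.4, 1.0]])
R = np.diag([25.0**2] * 4)          # measurement noise covariance [m^2]
prior_std = np.array([200.0, 200.0, 200.0])   # assumed prior bias uncertainty [m]

J = H.T @ np.linalg.inv(R) @ H      # Fisher information for the bias vector
crlb = np.linalg.inv(J)             # Cramér-Rao lower bound on the bias covariance
crlb_std = np.sqrt(np.diag(crlb))

# One possible index: how much the measurements could shrink the prior uncertainty.
index = prior_std / crlb_std
for i, (s, k) in enumerate(zip(crlb_std, index)):
    print(f"bias {i}: CRLB std = {s:8.2f} m, estimability index = {k:6.1f}")
```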

The study of estimability is used to determine a multisensor-multitarget scenario in which several bias estimators are compared with respect to performance using a Monte Carlo simulation. The simulation includes alignment, location and sensor biases, and all sensors are affected. The estimators are evaluated in sensor-independent coordinates and in master sensor coordinates. Two Kalman Filter (KF) based estimators are used as references: a lower bound is represented by a KF where the bias values are known, while an upper bound is represented by a KF where the measurement noise is increased to reflect the biases present. The alignment, location and sensor biases contain three elements each, for a total of nine bias values to estimate per sensor. The UBE performs well (below the upper bound) in sensor-independent coordinates when one of the sensor bias values is removed from the simulation, estimating eight bias values per sensor. Performance is close to the lower bound when only the location bias is removed, yielding six bias values per sensor. In master sensor coordinates the ABE has the best performance. However, a simplified version, called the Relative Bias Estimator (RBE), has almost identical performance; it neglects the biases of the master sensor. This is a popular assumption in the literature, and this study confirms that this simplification should be preferred in an implementation.
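
The two KF reference cases can be sketched on a toy problem. The following is a minimal 1D constant-velocity example, not the thesis's 3D multi-radar setup, assuming a single constant position bias and hypothetical noise and motion parameters.

```python
import numpy as np

# Hedged sketch of the two Kalman-filter reference cases described above.
#  - "lower bound" KF: the bias is known and subtracted from each measurement.
#  - "upper bound" KF: the bias is unknown; the measurement noise R is inflated
#    to cover the bias, which degrades the accuracy of the estimate.

rng = np.random.default_rng(1)
dt, steps = 1.0, 200
F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity motion model
Q = 0.05 * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
H = np.array([[1.0, 0.0]])
sigma_z, bias = 25.0, 150.0                      # noise std and constant bias [m]

def run_kf(z_seq, R):
    """Run a linear KF over the measurement sequence and return position RMSE."""
    x, P = np.zeros(2), np.diag([1e4, 1e2])
    sq_err = 0.0
    for k, z in enumerate(z_seq):
        x, P = F @ x, F @ P @ F.T + Q            # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()        # update
        P = (np.eye(2) - K @ H) @ P
        sq_err += (x[0] - truth[k]) ** 2
    return np.sqrt(sq_err / len(z_seq))

truth = np.array([100.0 * k * dt for k in range(steps)])     # true positions [m]
z = truth + bias + rng.normal(0.0, sigma_z, steps)           # biased measurements

rmse_lower = run_kf(z - bias, np.array([[sigma_z**2]]))          # bias known
rmse_upper = run_kf(z, np.array([[sigma_z**2 + bias**2]]))       # noise inflated
print(f"lower-bound KF RMSE: {rmse_lower:6.1f} m")
print(f"upper-bound KF RMSE: {rmse_upper:6.1f} m")
```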

Possible extensions of this work are explored. First, curved target motion is explored by letting the target move at constant altitude above the Earth. The curvature of the trajectory results in increased bias estimability. However, observing this curvature requires observing the target for a long time with high accuracy. This is challenging in practice, and therefore this path was not explored further. Second, extending the application to Air Traffic Control (ATC) is considered. At airports, radars typically produce 2D measurements, so to extend the developed 3D bias estimators it is necessary to combine these 2D measurements with altitude measurements from the aircraft's Mode C transponders. The altitude measurements are quantized and received with a coarse resolution, which may have a negative impact on bias estimator performance since the vertical velocity estimate becomes unstable. Several estimators are developed to estimate altitude and vertical velocity, and these are tested on real measurement data for a performance comparison. The main contribution is the use of Interacting Multiple Model (IMM) and Unscented Kalman Filter (UKF) based estimators on quantized real-world measurements. The UKF produces the best performance for long-term altitude predictions, meaning that its vertical velocity estimate is the most stable.
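
A small illustration (not from the thesis) of why the quantization matters: Mode C reports altitude in 100 ft increments, so naively differencing consecutive reports gives a vertical-velocity estimate that jumps between zero and large values, which is the instability a model-based filter such as the IMM or UKF estimators mentioned above is meant to avoid. The update interval and climb rate below are hypothetical.

```python
import numpy as np

# Mode C altitude is quantized to 100 ft steps.  Differencing the quantized
# reports gives a vertical velocity that alternates between 0 and one full
# quantization step per interval, rather than the true climb rate.

ft = 0.3048                       # metres per foot
dt = 4.0                          # assumed update interval [s]
t = np.arange(0.0, 120.0, dt)
true_alt = 3000.0 + 2.0 * t       # gentle 2 m/s climb [m]

reported = np.round(true_alt / (100 * ft)) * (100 * ft)   # 100 ft quantization
naive_vdot = np.diff(reported) / dt                       # finite-difference velocity

print("true climb rate      :  2.0 m/s")
print("naive estimate range : %.1f to %.1f m/s" % (naive_vdot.min(), naive_vdot.max()))
# The naive estimate flips between 0 m/s and ~7.6 m/s (one 100 ft step per
# interval), which is why a filter with a motion model is needed for a stable
# vertical-velocity estimate from quantized reports.
```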
Publisher
NTNU
Series
Doctoral thesis at NTNU;2016:36
