If the measurement of a target’s position by a surveillance radar were perfect and precise, tracking would be much easier. Changes in position would be easily converted into speed, course and accelerations, allowing motions and manoeuvres to be followed. The problem of matching tracks with measurements (the association problem) would not be the challenge that accounts for much of the complexity in a practical tracking implementation.
In the face of uncertainty and ambiguous measurements, a target tracker can use multiple hypotheses to accommodate different interpretations of the data, effectively allowing decisions to be deferred until additional information arrives.
Complexities of target tracking
The complexities of target tracking arise from the uncertainties in the measurement process, associated with variations in the way radar energy is reflected from a target and with the effects of the environment.
The accuracy of the measurement process can be reflected in the weight given to the measurement when updating the estimate of the target. This weight is represented by the filter gains (for example the alpha and beta gains of the alpha-beta tracker, or the Kalman gains of the Kalman filter), so that a gain of 1 assumes the measurement is precise, and a gain of 0 means the measurement is ignored.
A weighted measurement like this is a way of reflecting the uncertainty in the sensor data. However, it still results in a single outcome – hopefully the best outcome, but not necessarily.
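To make the role of the gain concrete, here is a minimal one-dimensional alpha-beta update sketched in Python. It is an illustration only, not any particular tracker’s implementation, and the gain values in the example are assumptions chosen to show the effect.

```python
def alpha_beta_update(x_pred, v_pred, z, dt, alpha, beta):
    """One 1-D alpha-beta filter step: blend the predicted position x_pred
    with the measurement z according to the gains. alpha = 1 trusts the
    measurement completely; alpha = 0 ignores it and keeps the prediction."""
    residual = z - x_pred                      # innovation: measurement minus prediction
    x_est = x_pred + alpha * residual          # weighted position update
    v_est = v_pred + (beta / dt) * residual    # weighted velocity update
    return x_est, v_est

# Track predicted at 100.0 m moving at 10 m/s; measurement arrives at 103.0 m.
# With alpha = 0.5 the estimate moves half-way towards the measurement, to 101.5 m.
x_est, v_est = alpha_beta_update(100.0, 10.0, 103.0, dt=1.0, alpha=0.5, beta=0.1)
```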
The idea of multiple hypotheses in target tracking is to permit multiple outcomes to be considered. These multiple outcomes will have their own strength, so that we can still identify the best option. However, the addition of other hypotheses leaves the door open for new information to change the interpretation.
Using multiple hypotheses really just means deferring a decision until additional information makes the situation clearer.
The Association Problem
One of the key stages in a radar tracker process chain is association. This is the process of deciding how existing tracks are matched with new measurements.
In the simplest situation, an existing track may have just a single new measurement around its expected position. We can simply associate that measurement with the track and proceed to update the track’s dynamics using some form of filter.
Even in this simple case though there are complications. For small targets in cluttered environments, we may not reliably detect the target on each update. Also, there may be spurious detections from clutter. So a single measurement for a target could be a valid measurement, or it could be that there was really no observation and what we see is spurious clutter.
We need to decide whether we believe the measurement is really derived from the target. There are problems if we get that wrong, that is, ignoring the measurement if it is genuine, or using it if it is false. Even in this simple situation, there are two possible interpretations (hypotheses).
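One common way to make that decision, sketched below in Python, is a simple validation gate that accepts a measurement only if it falls close enough to the predicted position; the gate size and the error measure sigma are illustrative assumptions rather than prescribed values.

```python
import math

def in_gate(predicted, measurement, sigma, gate_size=3.0):
    """Simple circular validation gate: accept the measurement only if it lies
    within gate_size * sigma of the predicted (x, y) position, where sigma is
    an assumed combined prediction/measurement error. Measurements outside the
    gate are treated as likely clutter rather than being associated."""
    return math.dist(predicted, measurement) <= gate_size * sigma

# A measurement 40 m from the prediction, with sigma = 20 m, lies inside a 3-sigma gate.
accepted = in_gate((0.0, 0.0), (0.0, 40.0), sigma=20.0)
```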
Where tracks are close together (close being defined as a distance at which the uncertainty bounds of the tracks’ positions intersect), there is a more interesting situation. For association, close tracks need to be considered together as a group. This group is called a track cluster.
Consider two tracks and two measurements as shown in Figure 1. The tracks T1 and T2 are close enough together that they are considered as a cluster for the purposes of association. New measurements, P1 and P2, are taken in the area around T1 and T2. We have two tracks and two measurements. The association problem is to decide which track should be associated with, and then updated by, which measurement. For example, is P1 a measurement of T1 or of T2? We have two possibilities:
T1 is associated with P1 and T2 is associated with P2
or
T1 is associated with P2 and T2 is associated with P1

There might be other explanations of the data we are seeing, but for now we will focus on these two. These two possible interpretations of the data are two hypotheses. Since P1 and P2 are both close to T1 and T2, it isn’t obvious which matching is better. We can compute a figure of merit for each hypothesis based on the track-to-plot distance of each association it contains.
Compare this to the situation in Figure 2, which shows the same two tracks and two new measurements, but this time it is more obvious which associations are most likely. The proximity of P1 to T1 strongly suggests that a T1.P1 association and a T2.P2 association are much more likely than the alternative.

There are other possible interpretations of the data in these examples. Suppose that P1 or P2 are not actually valid measurements of T1 and T2, but rather spurious noise or clutter. Given the vagaries of radar-based detection, we must always be prepared for there to be no valid detection of a track on a given sweep of the radar. The possibility arises that none of the measurements we see is truly a measurement of the track we are interested in. If we knew that a measurement was clutter then we would be better off ignoring it, rather than associating it with a track. This gives rise to more ways of interpreting the data in Figures 1 and 2 by introducing the idea that a track may be unassociated with any measurement. There are now six possible interpretations of Figures 1 and 2.
T1.P1, T2.P2: T1 is associated with P1 and T2 is associated with P2
T1.P2, T2.P1: T1 is associated with P2 and T2 is associated with P1
T1, T2.P1: T1 is unassociated. T2 is associated with P1. P2 is clutter
T1, T2.P2: T1 is unassociated. T2 is associated with P2. P1 is clutter
T1.P1, T2: T1 is associated with P1. T2 is unassociated. P2 is clutter
T1.P2, T2: T1 is associated with P2. T2 is unassociated. P1 is clutter
There are yet more ways of interpreting the data, including the possibility that P1 or P2 are measurements of new targets that have not been seen before. This adds further options to the above list, but these are not considered in this paper.
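The six interpretations above can be generated mechanically. The Python sketch below uses an assumed representation (a dictionary mapping each track to a plot, or to None when the track is coasted) and omits new-track hypotheses, matching the simplification just described.

```python
from itertools import permutations

def enumerate_hypotheses(tracks, plots):
    """List the association hypotheses for a two-track cluster: every full
    pairing of tracks to plots, plus the cases where one track is left
    unassociated (coasted) and the other takes a single plot. Hypotheses in
    which a plot starts a new track are omitted."""
    hypotheses = []
    for perm in permutations(plots):                  # both tracks associated
        hypotheses.append(dict(zip(tracks, perm)))
    for coasted in tracks:                            # one track coasted
        other = next(t for t in tracks if t != coasted)
        for plot in plots:
            hypotheses.append({coasted: None, other: plot})
    return hypotheses

for h in enumerate_hypotheses(["T1", "T2"], ["P1", "P2"]):
    print(h)   # prints the six interpretations listed above
```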
Ranking Hypotheses
It is clear from the examples of Figures 1 and 2 that although different hypotheses can be created for the two-track, two-measurement situation, it may be the case that one hypothesis is strongly suggested. If P1 is very close to T1 and P2 is very close to T2 (the situation in Figure 2) then the hypothesis {T1.P1, T2.P2} is likely to be correct.
This can be represented by giving each hypothesis a confidence value or weight. The value can be derived from the distance (in some statistical sense) of the matches. A simple method would be to look at the physical distance between the track and the plot for each possible association and compute a distance function as the sum of those distances over all elements of the hypothesis (in practice this would be overly simplistic; a more realistic approach would weigh each error against a statistical measure of historical errors, in effect asking how significant this error is compared with past values).
We’d then say the best hypothesis is the one with the smallest total distance (lowest cost) between measurements and tracks. This can be normalised into a confidence value, where the hypothesis with the smallest distance is the one with a weight of 1 (call this the principal hypothesis) and all other hypotheses are scaled to have weights less than 1.
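A minimal Python sketch of this scoring step is given below. The plain Euclidean distance and the fixed penalty for a coasted track are illustrative assumptions standing in for the statistically weighted errors a practical tracker would use.

```python
import math

COAST_PENALTY = 50.0   # assumed fixed cost for leaving a track unassociated (illustration only)

def hypothesis_cost(hypothesis, track_pos, plot_pos):
    """Total cost of a hypothesis: the sum of track-to-plot distances over all
    associations it contains, plus a penalty for each coasted track."""
    cost = 0.0
    for track, plot in hypothesis.items():
        if plot is None:
            cost += COAST_PENALTY
        else:
            cost += math.dist(track_pos[track], plot_pos[plot])
    return cost

def normalise(costs):
    """Convert costs to confidences: the lowest-cost (principal) hypothesis is
    given a weight of 1 and higher-cost hypotheses are scaled down relative to it."""
    best = min(costs)
    return [1.0 if c == best else best / c for c in costs]

# Example with assumed positions: the first hypothesis is clearly principal.
track_pos = {"T1": (0.0, 0.0), "T2": (100.0, 0.0)}
plot_pos = {"P1": (5.0, 0.0), "P2": (95.0, 0.0)}
costs = [hypothesis_cost(h, track_pos, plot_pos)
         for h in ({"T1": "P1", "T2": "P2"}, {"T1": "P2", "T2": "P1"})]
confidences = normalise(costs)   # first hypothesis gets 1.0, the swapped pairing much less
```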
So given two existing tracks, T1 and T2, and given two new measurements P1 and P2, we derive 6 possible hypotheses, each with a confidence, C. These can be written as follows.
H1 = {T1.P1, T2.P2}, C1
H2 = {T1.P2, T2.P1}, C2
H3 = {T1, T2.P1} , C3
H4 = {T1, T2.P2} , C4
H5 = {T1.P1, T2} , C5
H6 = {T1.P2, T2}, C6
When we want to report a track update, for example for the purposes of displaying a track, we simply take the hypothesis that has the largest confidence (the principal hypothesis). That is the best we can say at the time. However, by maintaining multiple hypotheses we allow new information to change which interpretation is judged best.
To Update or Coast
At this point it is instructive to consider why we need to support the case where a track may deliberately be left unassociated, rather than associated with a measurement, in effect deciding to ignore the measurement we see.
Consider a simple situation where a track is moving due North (Figure 3). Given the assumed motion of the target, the expected position of the next measurement is at the location shown by the red square in Figure 3.
We call this the predicted position and it is predicted based on the estimated speed and course of the target that we have calculated from previous measurements. Now suppose that the actual measurement is observed at P, which is some distance from the expected position.

Using the principles described previously, this situation gives rise to two hypotheses:
H1 = {T.P} meaning T is associated with P
H2 = {T} meaning T is unassociated (coasted) and P is ignored.
Hypothesis 1 says that T is associated with P and the measurement should be used to update T. Hypothesis 2 says that T has no association with P and therefore P is clutter. The case of T having no association is called coasting. It means that T’s new position is predicted. Another name for this is dead reckoning.
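Coasting is simple to express. The sketch below assumes a flat x/y (east, north) coordinate frame and a course measured in degrees clockwise from North; it merely advances the track along its estimated course at its estimated speed.

```python
import math

def dead_reckon(position, speed, course_deg, dt):
    """Predict the track's next position by dead reckoning: move from the
    current (east, north) estimate along the estimated course at the estimated
    speed for dt seconds. Course is degrees clockwise from North."""
    course = math.radians(course_deg)
    east, north = position
    return (east + speed * dt * math.sin(course),
            north + speed * dt * math.cos(course))

# A track heading due North (course 0) at 10 m/s coasts from the origin to (0, 10) after 1 s.
predicted = dead_reckon((0.0, 0.0), speed=10.0, course_deg=0.0, dt=1.0)
```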
The distinction between the above hypotheses is critical in accommodating the possible explanations of what is going on in this example. If we believe H1 (P is really an observation of T) then there is clearly an indication that the target appears to be in a different position to the expected one.
This change in apparent position could be a result of a spurious measurement, or it could be because the target has really started turning, and in subsequent measurements there will be an obvious set of positional changes that suggest the target is turning to move East. If P is the start of a manoeuvre then it is important that the association between T and P is maintained to give the track filter an opportunity to follow the new target dynamics.
If P is actually a spurious measurement and the target’s true position is as predicted, then there is a benefit in ignoring the measurement and simply assuming the target is where predicted. This is what is represented by H2. Given that, initially, we do not know whether P in Figure 3 is the start of T’s manoeuvre or a spurious value, we cannot know which of H1 or H2 is correct.
If we were forced to choose one (i.e. where multiple hypotheses are not being supported in the track processing) then we may make the wrong decision. If P is actually a correct measurement from a manoeuvring track then adopting H2 (ignoring P) would likely mean that the next update would miss the measurement, because it would be too far from the expected position.
If, on the other hand, P is a false measurement derived from clutter and there is no true measurement of T, then considering P in hypothesis 1 will cause the track estimate to be shifted towards P, thereby reducing the track accuracy.
In a practical implementation, the use of hypotheses in this simple example can be expanded to accommodate uncertainty over whether the association of T with P involves a noisy measurement or a reliable measurement that indicates a change in target dynamics.
This is taking H1 and splitting it into two hypotheses, where one filters the measurement (low gain in the update) and one strongly believes the measurement (high gain in the update). It’s only when the next measurement comes along that it will be clearer which hypothesis is most likely. But it is the ability to consider those multiple hypotheses and avoid a single decision point that is the key value in the multi-hypothesis approach.
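A sketch of that split, using the same style of one-dimensional alpha-beta update as earlier and purely illustrative gain values, might look like this:

```python
def split_association(x_pred, v_pred, z, dt, beta=0.1):
    """Split a single track-plot association into two hypotheses: a low-gain
    update that largely discounts the measurement (treating it as noisy) and a
    high-gain update that follows it closely (treating it as a manoeuvre).
    The alpha values here are assumptions chosen for illustration."""
    def update(alpha):
        residual = z - x_pred
        return (x_pred + alpha * residual, v_pred + (beta / dt) * residual)
    return {"low_gain": update(0.2), "high_gain": update(0.9)}
```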
Multiple Track Versions
Return to the example of Figures 1 and 2, where there are two tracks and two plots, giving rise to six possible hypotheses. These hypotheses define the association options between the tracks and the measurements. Where a track-plot association is defined in a hypothesis, the track is updated by that measurement as part of the filter process.
Although there is a single “track” being maintained (say T1), there will exist many different versions of that track as part of each hypothesis. In effect, each hypothesis maintains a version of the track, all sharing the same identifier for the purpose of reporting, but all having different values for position, dynamics and filter gains. Across the six hypotheses created by the example, there are six instances of T1, each different according to how the track is updated in the associations made in that hypothesis.
When a track is reported, for example to display on a radar picture, it is always the strongest hypothesis (the principal hypothesis) that is reported. However, which of the many hypotheses for a track is principal can change across time. That doesn’t matter as far as the reporting is concerned, as it is still the same track identifier.
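One possible way to hold these per-hypothesis track versions while still reporting a single track identifier is sketched below; the data structures are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrackVersion:
    """The state of one track as carried by a single hypothesis."""
    position: Tuple[float, float]
    velocity: Tuple[float, float]

@dataclass
class Hypothesis:
    confidence: float
    tracks: Dict[str, TrackVersion] = field(default_factory=dict)  # track id -> version

def report_track(hypotheses: List[Hypothesis], track_id: str) -> TrackVersion:
    """Report the version of the track carried by the principal (highest-confidence)
    hypothesis; which hypothesis is principal may change from update to update,
    but the reported track identifier stays the same."""
    principal = max(hypotheses, key=lambda h: h.confidence)
    return principal.tracks[track_id]
```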
Growing and Pruning Hypotheses
The next consideration is what happens on the next radar update, when new measurements are taken. Suppose the measurement process again observes two measurements, call them P3 and P4. The processing repeats as before, but with an important difference. There are still two tracks, but this time there are six different hypotheses to consider. Each hypothesis will generate another six possible outcomes for how T1 and T2 can associate with P3 and P4, and each of those outcomes will have a confidence associated with it.
So at the end of the second update there will be 36 hypotheses. After the third update this becomes 216, and so the process continues: after just 3 updates with just 2 tracks there are 216 hypotheses. For a slightly more complex situation of a group of 4 tracks with 4 measurements there would be 648 hypotheses per update, or over 270,000,000 hypotheses after 3 updates!
Clearly, some form of control is essential to manage this rapid growth in hypothesis count. This is achieved by pruning hypotheses that have sufficiently low confidence.
If confidences are normalised so that the maximum value is 1, then a threshold may be used to eliminate hypotheses whose confidence is less than that threshold. Setting this value too low means that too many hypotheses are carried forward, with a computational overhead, whereas a value that is too high prunes aggressively and risks discarding a hypothesis that would later have proved to be the correct interpretation. Dynamic adjustment of this threshold is a practical tool for automatically adjusting the processing to maintain real-time performance of the tracker.
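A minimal sketch of threshold pruning, together with a crude dynamic adjustment of the threshold, is shown below; the step size and target hypothesis count are illustrative assumptions rather than recommended values.

```python
def prune(hypotheses, threshold):
    """Keep only hypotheses whose normalised confidence meets the threshold.
    Each hypothesis is represented as a (confidence, associations) pair, with
    confidences normalised so the principal hypothesis has a weight of 1."""
    return [(c, h) for c, h in hypotheses if c >= threshold]

def adjust_threshold(threshold, surviving_count, target_count, step=0.05):
    """Crude dynamic adjustment: raise the threshold when too many hypotheses
    survive (to protect real-time performance) and lower it when far fewer than
    the target survive (to keep alternative interpretations alive)."""
    if surviving_count > target_count:
        return min(1.0, threshold + step)
    if surviving_count < target_count // 2:
        return max(0.0, threshold - step)
    return threshold
```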
Author: Dr David Johnson
Director, Cambridge Pixel