%% Cell type:markdown id:7b4f552e-2590-44c2-ad81-13866058fb1e tags:
## <center> Target Tracking -- Exercise 1 </center>
### <center> Anton Kullberg </center>
This report contains solutions for the first set of exercises in the Target Tracking course given at Linköping University during fall 2021.
%% Cell type:code id:9e44486f tags:
``` python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
rng = np.random.default_rng(13)
import jax
import jax.numpy as jnp
import tqdm
import scipy.stats as stats
from src.sim import generate_data
from src.trajectories import get_ex1_trajectories
ex1_trajectories = get_ex1_trajectories()
import src.gaters as gaters
import src.filters as filters
import src.associators as associators
import src.trackers as trackers
import src.plotters as plotters
import src.models as models
```
%% Cell type:code id:2a60bcd0 tags:
``` python
def evaluate_tracker(T, tracker, MC):
    # Initialize correctly
    xhat = np.zeros((4, T.shape[1], MC))
    xhat[:2, 0, :] = np.repeat(T[:, [0]], MC, axis=1)
    xhat[2:, 0, :] = np.repeat(T[:, [1]]-T[:, [0]], MC, axis=1)
    # Initial uncertainty quite high
    Phat = np.zeros((4, 4, T.shape[1], MC))
    Phat[:, :, 0, :] = np.repeat(np.diag([10, 10, 1, 1])[:, :, None], MC, axis=2)
    Y = []
    for m in tqdm.tqdm(range(MC), desc="MC iteration: "):
        Y.append(generate_data({'T': T}, tracker.filt.sensor_model, tracker.clutter_model, rng))
        xhat[:, :, m], Phat[:, :, :, m] = tracker.evaluate(Y[-1], xhat[:, :, m], Phat[:, :, :, m])
    # Evaluate RMSE
    rmse = np.sqrt(np.mean((T[:, :, None] - xhat[:2, :, :])**2, axis=1))
    result = dict(Y=Y, xhat=xhat, Phat=Phat, rmse=rmse)
    return result
```
%% Cell type:code id:ad8aeebb tags:
``` python
# Setup sensor and clutter model
def h(x): # Assumes positional coordinates first
    if len(x.shape) < 2:
        xt = x.reshape(-1, 1)
    else:
        xt = x
    target_range = jnp.linalg.norm(xt[:2, :], axis=0) # Use JAX numpy to be able to auto-differentiate
    target_bearing = jnp.arctan2(xt[1, :], xt[0, :])
    return jnp.vstack([target_range, target_bearing]).squeeze()
R = np.diag([10, 0.001])**2
PD = 0.9
lam = 2
volume = dict(xmin=0, xmax=2500, ymin=0, ymax=1000)
sensor_model = models.radar_model(R=R, PD=PD)
clutter_model = dict(volume=volume, lam=lam)
# CV model
Q = np.identity(2)
motion_model = models.cv_model(Q=Q, D=2, T=1)
# Setup Gater
gamma = 4.7
gater = gaters.MahalanobisGater(sensor_model, gamma)
# Setup associator
associator = associators.NNAssociator()
```
%% Cell type:markdown id:15aca3c7 tags:
### Task 2.1 - EKF for a low maneuvering target
A constant velocity (CV) model was implemented with process noise tuning $\mathbf{Q}=\mathbf{I}$.
This gave good performance for trajectories T1 and T3, but very poor results for the concatenated trajectory T4, where the filter completely fails to follow the observations (due to gating, the associations are lost entirely and the filter falls back on the CV prediction, visible as the completely straight trajectory).
10 simulations were run and the RMSE was computed for $p_x$ and $p_y$ independently. The top figure illustrates the filtered trajectories in dashed colored lines, the ground truth trajectory in black, and measurements from **one** of the simulations as black dots. The bottom figure illustrates the RMSE over the 10 different simulations.
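Here, for each Monte Carlo run, the RMSE is computed per coordinate over the $N$ time steps of the trajectory,
$$\mathrm{RMSE}_i = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(p_{i,k}-\hat{p}_{i,k}\right)^2},\qquad i\in\{x,y\}.$$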
As illustrated in the bottom figure, the RMSE explodes because of the complete failure to follow the circular part of the trajectory.
%% Cell type:code id:28db93ef tags:
``` python
motion_model = models.cv_model(Q=np.identity(2), D=2, T=1)
# Setup filter
filt = filters.EKF(motion_model, sensor_model)
# Setup tracker
tracker = trackers.BasicTracker(filt, clutter_model, associator, gater)
result = evaluate_tracker(ex1_trajectories['T4'], tracker, 10)
```
%% Cell type:code id:a676730b tags:
``` python
plotters.plot_result_ex1_2(result, ex1_trajectories['T4'])
plt.show()
```
%% Cell type:markdown id:d7db13f8 tags:
### Task 2.2 - EKF for a high maneuvering target
A constant velocity model was implemented with process noise tuning $\mathbf{Q}=100\mathbf{I}$.
This gave good performance for trajectory T2 and somewhat reasonable results for the concatenated trajectory T4. However, the estimates along the parts given by T1 and T3 are still better with the low-maneuvering EKF.
10 simulations were run and the RMSE was computed for $p_x$ and $p_y$ independently. The top figure illustrates the filtered trajectories in dashed colored lines, the ground truth trajectory in black, and measurements from **one** of the simulations as black dots. The bottom figure illustrates the RMSE over the 10 different simulations.
As illustrated in the bottom figure, the RMSE is quite reasonable, but can still be improved.
%% Cell type:code id:5612b2a3 tags:
``` python
motion_model = models.cv_model(Q=np.identity(2)*100, D=2, T=1)
# Setup filter
filt = filters.EKF(motion_model, sensor_model)
# Setup tracker
tracker = trackers.BasicTracker(filt, clutter_model, associator, gater)
result = evaluate_tracker(ex1_trajectories['T4'], tracker, 10)
```
%% Cell type:code id:b0aa75c0 tags:
``` python
plotters.plot_result_ex1_2(result, ex1_trajectories['T4'])
plt.show()
```
%% Cell type:markdown id:50c3a848-cf00-4243-bb9d-559f1a203296 tags:
### Task 2.3 - IMM for Maneuvering Targets
A two-model IMM was implemented, consisting of two CV models with process noise tuning $\mathbf{Q}_1=\mathbf{I}$ and $\mathbf{Q}_2=100\mathbf{I}$, respectively. Mahalanobis gating was used with $\gamma=12.43$ (corresponding to a gate probability of $0.998$). Further, a nearest neighbour association step was used for associating the gated measurements.
The transition probability matrix was set to
$$\Pi = \begin{bmatrix}p & 1-p\\1-p & p\end{bmatrix},$$
with $p=0.9$, which yielded good performance.
10 simulations were run and the RMSE computed for $p_x$ and $p_y$ independently. Three figures are produced. The top figure illustrates the filtered trajectories in dashed colored lines, the ground truth trajectory in black and measurements from **one** of the simulations as black dots. The middle figure illustrates the RMSE over the 10 different simulations.
The bottom figure illustrates the model (mode) probabilities plotted against the normalized trajectory position (time normalized by the trajectory length).
As illustrated in the middle figure, the RMSE is better than for the two individual models from before. The mode clearly switches in the circular part of the trajectory as can be seen in the bottom figure.
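The gate threshold follows from the $\chi^2$ distribution of the squared Mahalanobis distance of a 2D (range, bearing) measurement. As a sanity check (this snippet is illustrative and not part of the original exercise code), the threshold corresponding to a gate probability of $0.998$ can be recovered from the chi-square quantile:
%% Cell type:code tags:
``` python
# Illustrative only: gate threshold from the chi-square quantile for a 2D measurement
alpha = 0.998                         # desired gate probability
gamma = stats.chi2.ppf(alpha, df=2)   # ~12.43, matching the value used below
print(gamma)
```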
%% Cell type:code id:c01d745a tags:
``` python
motion_model = models.cv_model(Q=np.identity(2), D=2, T=1) # Use Q=I
motion_model_two = models.cv_model(Q=100*np.identity(2), D=2, T=1) # Use Q=100I
filtone = filters.EKF(motion_model, sensor_model)
filttwo = filters.EKF(motion_model_two, sensor_model)
p = 0.9
trans_prob = np.array([[p, 1-p], [1-p, p]])
imm = filters.IMM([filtone, filttwo], sensor_model, trans_prob)
gater.gamma = 12.43 # Corresponds to alpha=0.998. Tuned for performance.
```
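%% Cell type:markdown tags:
For reference, the interaction (mixing) step that the IMM performs before the per-mode EKF updates can be sketched as below. This is a minimal illustration of the standard IMM mixing equations and not the implementation used in `filters.IMM`.
%% Cell type:code tags:
``` python
# Minimal sketch of the IMM mixing (interaction) step -- illustrative, not filters.IMM
def imm_mixing(x, P, mu, Pi):
    # x, P: lists of per-mode estimates/covariances, mu: mode probabilities, Pi: transition matrix
    c = Pi.T @ mu                          # predicted mode probabilities
    mix = (Pi * mu[:, None]) / c[None, :]  # mix[i, j] = P(previous mode i | next mode j)
    x_mix, P_mix = [], []
    for j in range(len(mu)):
        xj = sum(mix[i, j]*x[i] for i in range(len(mu)))
        Pj = sum(mix[i, j]*(P[i] + np.outer(x[i]-xj, x[i]-xj)) for i in range(len(mu)))
        x_mix.append(xj)
        P_mix.append(Pj)
    return x_mix, P_mix, c
```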
%% Cell type:code id:c74df784 tags:
``` python
# Setup tracker
tracker = trackers.IMMTracker(imm, clutter_model, associator, gater)
T = ex1_trajectories['T4']
MC = 10
# Initialize correctly
xhat = np.zeros((4, T.shape[1], MC))
xhat[:2, 0, :] = np.repeat(T[:, [0]], MC, axis=1)
xhat[2:, 0, :] = np.repeat(T[:, [1]]-T[:, [0]], MC, axis=1)
# Initial uncertainty quite high
Phat = np.zeros((4, 4, T.shape[1], MC))
Phat[:, :, 0, :] = np.repeat(np.diag([100, 100, 100, 100])[:, :, None], MC, axis=2)
mode_probabilities = []
Y = []
for m in tqdm.tqdm(range(MC), desc="MC iteration: "):
    Y.append(generate_data({'T': T}, tracker.filt.sensor_model, tracker.clutter_model, rng))
    xhat[:, :, m], Phat[:, :, :, m] = tracker.evaluate(Y[-1], xhat[:, :, m], Phat[:, :, :, m])
    mode_probabilities.append(imm.mode_probabilities)
    imm.mode_probabilities = [imm.mode_probabilities[0]]
# Evaluate RMSE
rmse = np.sqrt(np.mean((T[:, :, None] - xhat[:2, :, :])**2, axis=1))
result = dict(Y=Y, xhat=xhat, Phat=Phat, rmse=rmse)
```
%% Cell type:code id:490aff21 tags:
``` python
fig = plotters.plot_result_ex1_2(result, ex1_trajectories['T4'])
fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(111)
for j in range(MC):
    dat = np.vstack(mode_probabilities[j])
    xdat = np.linspace(0, 1, dat.shape[0])
    lo = ax.plot(xdat, dat[:, 0], color='tab:blue', linestyle='--')[0]
    hi = ax.plot(xdat, dat[:, 1], color='tab:orange', linestyle='--')[0]
ax.set_xlabel('Normalized trajectory position')
plt.tight_layout()
ax.legend([lo, hi], ['Low-Maneuvering', 'High-Maneuvering'])
plt.show()
```
%% Cell type:markdown id:64838b70 tags:
### Task 2.4 - Mysterious Data
An IMM was used with two CV models with tuning $\mathbf{Q}_1=\mathbf{I}$ and $\mathbf{Q}_2=10^4\mathbf{I}$. Mahalanobis gating was used with $\gamma=4.7$. The transition probability matrix is as in 2.3, with $p=0.8$.
Inspecting the measurements, the first two appear to belong to a target, and they were therefore used to initialize the target state (position from the first measurement, velocity from the difference between the first two). The initial uncertainty is set quite high, $\mathbf{P}_0=100\mathbf{I}$.
%% Cell type:code id:cb6416c6 tags:
``` python
from scipy.io import loadmat
dat24 = loadmat('data/ex1data.mat')
```
%% Cell type:code id:1f990399-a854-49ec-98cf-396190b6258f tags:
``` python
T = np.round(np.diff(dat24['T1'][0])[0], 2)
mm = models.cv_model(Q=np.identity(2), D=2, T=T)
mm2 = models.cv_model(Q=1e4*np.identity(2), D=2, T=T)
sensor_model['R'] = np.diag([135, 1.5*np.pi/180])**2
clutter_model['volume'] = dict(xmin=20000, xmax=32000, ymin=-10000, ymax=35000)
gater.gamma = 4.7
filtone = filters.EKF(mm, sensor_model)
filttwo = filters.EKF(mm2, sensor_model)
p = 0.8
trans_prob = np.array([[p, 1-p], [1-p, p]])
imm = filters.IMM([filtone, filttwo], sensor_model, trans_prob)
# Setup tracker
tracker = trackers.IMMTracker(imm, clutter_model, associator, gater)
Y = dat24['Y1'].flatten()
Y = [yk.reshape(2, -1) for yk in Y]
N = len(Y)
# Initialize
xhat = np.zeros((4, N))
x0 = np.concatenate(plotters.radar_to_pos(Y[0]))
x1 = np.array(plotters.radar_to_pos(Y[1]))[:, 0]
v0 = (x1-x0)/T
xhat[:, 0] = np.concatenate([x0, v0])
# Initial uncertainty quite high
Phat = np.zeros((4, 4, N))
Phat[:, :, 0] = np.diag([100, 100, 100, 100])
# Evaluate data
xhat, Phat = tracker.evaluate(Y, xhat, Phat)
mode_probabilities = imm.mode_probabilities
result = dict(Y=list(Y), xhat=xhat, Phat=Phat)
```
%% Cell type:code id:6eb29483 tags:
``` python
plotters.plot_result_ex1_24(result, mode_probabilities)
plt.show()
```
%% Cell type:markdown id:83069a4e-36d9-4e50-b226-2e0858ff4b71 tags:
The top figure shows the estimated trajectory overlaid on the measurements. The method seems to capture the trajectory well. In the bottom figure, the mode probabilities are visualized. Clearly, the high-maneuvering mode is active in the segments of the trajectory where the target steers quickly.
%% Cell type:markdown id:cde84f60 tags:
### Task 3.1 - N/M Logic
An N/M logic was implemented with N1/M1 & N2/M2 for confirmation and N3/M3 for termination. N1, M1, N2, M2, and N3 are tuneable parameters, and M3 is assumed equal to N3. Mahalanobis gating and nearest neighbour association are used. Further, an EKF is used with the model provided in the exercise. A track is initiated at the current measurement with velocity 0 and initial uncertainty $\mathbf{P}_0=\mathrm{diag}[R,~0.1]$. According to the exercise, the logic parameters were set to
$$N1=2,~M1=2,~N2=2,~M2=3,~N3=3$$
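For reference, the confirmation/termination counting used by such a logic can be sketched as follows. This is a minimal illustration under the above parameter interpretation and is not the implementation in `src.logic.nm_logic`.
%% Cell type:code tags:
``` python
# Minimal sketch of an N/M logic -- illustrative, not src.logic.nm_logic
def nm_logic_sketch(hit, track, params):
    # hit: True if a measurement was associated to the track at this time step
    hist = track.setdefault('history', [])
    hist.append(bool(hit))
    N1, M1, N2, M2, N3 = (params[k] for k in ('N1', 'M1', 'N2', 'M2', 'N3'))
    if track['stage'] == 'tentative':
        if len(hist) <= M1:
            # First test: at least N1 detections in the first M1 scans
            if len(hist) == M1 and sum(hist) < N1:
                track['stage'] = 'deleted'
        else:
            # Second test: at least N2 detections in the following M2 scans
            window = hist[M1:M1+M2]
            if sum(window) >= N2:
                track['stage'] = 'confirmed'
            elif len(window) == M2:
                track['stage'] = 'deleted'
    elif track['stage'] == 'confirmed' and len(hist) >= N3 and not any(hist[-N3:]):
        # Termination: N3 consecutive misses (M3 = N3)
        track['stage'] = 'deleted'
    return track
```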
%% Cell type:markdown id:0b81e293-471d-4eb5-87b3-c24b108b6e65 tags:
#### Convenience functionality
%% Cell type:code id:1f61c04b tags:
``` python
def update_track(meas_k, unused_meas, filt, track, associator, gater, logic, logic_params):
    # Calculate prediction error of each unused measurement
    yhat = filt.sensor_model['h'](track['x'][-1])
    eps = meas_k[:, unused_meas]-yhat
    # No unused measurements -> update track with false measurement
    if eps.size == 0:
        track = logic(np.array([]), filt, track, logic_params)
        return
    # Necessary for broadcasting later on
    if eps.ndim < 2:
        eps = np.expand_dims(eps, 0)
    # Gating step
    accepted_meas = gater.gate(track['x'][-1], track['P'][-1], meas_k[:, unused_meas])
    # If any measurements are accepted, select the nearest one
    if accepted_meas.any():
        # Association step
        yind = associator.associate(eps[:, accepted_meas])
        # Update track information
        track = logic(meas_k[:, unused_meas][:, accepted_meas][:, yind], filt, track, logic_params)
        # Remove this measurement from further consideration
        unused_meas[meas_k.flatten()==meas_k[:, unused_meas][:, accepted_meas][:, yind]] = 0
        # Update
        track['x'][-1], track['P'][-1] = filt.update(track['x'][-1], track['P'][-1], eps[:, accepted_meas][:, yind])
    else:
        track = logic(np.array([]), filt, track, logic_params)
def logic_evaluater(filt, Y, associator, gater, logic, logic_params, init_track):
    confirmed_tracks = []
    tracks = []
    ids = 0
    for k, meas_k in tqdm.tqdm(enumerate(Y), desc="Evaluating observations: "):
        unused_meas = np.ones((meas_k.size,), dtype=bool)
        for track in confirmed_tracks:
            if track['stage'] == 'deleted':
                continue
            tmp = unused_meas.copy()
            # Update confirmed tracks
            update_track(meas_k, unused_meas, filt, track, associator, gater, logic, logic_params)
            if tmp.sum() != unused_meas.sum(): # If we've associated something, add the time here (for plotting purposes)
                track['associations'].append(k)
        tentative_tracks = [track for track in tracks if track['stage'] == 'tentative']
        for track in tentative_tracks:
            if track['stage'] == 'deleted':
                continue
            tmp = unused_meas.copy()
            # Update tentative tracks
            update_track(meas_k, unused_meas, filt, track, associator, gater, logic, logic_params)
            if tmp.sum() != unused_meas.sum(): # If we've associated something, add the time here (for plotting purposes)
                track['associations'].append(k)
            if track['stage'] == 'confirmed':
                confirmed_tracks.append(track) # If a track has been confirmed, add it to confirmed tracks
        # Use the unused measurements to initiate new tracks
        if unused_meas.any():
            for meas in meas_k[:, unused_meas].T:
                tracks.append(init_track(meas, k, ids))
                ids += 1
        # Propagate the surviving tracks to the next time step
        for track in tracks:
            if track['stage'] != 'deleted':
                x, P = filt.propagate(track['x'][-1], track['P'][-1])
                track['x'].append(x)
                track['P'].append(P)
                track['t'].append(k+1)
    return tracks, confirmed_tracks
def plot_logic_result(tracks, confirmed_tracks):
    fig, ax = plt.subplots(2, 1, figsize=(16, 16))
    confirmed_ids = [track['identity'] for track in confirmed_tracks]
    for track in tracks:
        x = np.vstack(track['x'])
        t = np.hstack(track['t']).flatten()
        assoc = np.hstack(track['associations']).flatten()
        if track['identity'] in confirmed_ids:
            ls = '-'
        else:
            ls = '--'
        l = ax[0].plot(t, track['identity']*np.ones(t.shape), ls=ls, markersize=3)[0]
        ax[0].plot(assoc, track['identity']*np.ones(assoc.shape), ls='', color=l.get_color(), marker='x', markersize=7)
        ax[1].plot(t, x[:, 0], ls=ls, color=l.get_color())
    T = np.hstack([k*np.ones(yk.shape) for k, yk in enumerate(Y)])
    ax[0].set_ylabel('Track identity')
    ax[0].set_title('Tracks over time')
    ax[1].plot(T.flatten(), np.hstack(Y).flatten(), '.', color='k')
    ax[1].set_xlabel('Time index, k')
    ax[1].set_ylabel(r'$y/\hat{y}$')
    ax[1].set_title('Measurements and measurement predictions + tracks')
    return fig
```
%% Cell type:code id:3854e479 tags:
``` python
import src.logic as logic
F = np.array([[1, 1], [0, 1]])
Q1 = 0.001*np.array([[1/4, 1/2], [1/2, 1]])
R = np.array([[0.01]])
PD = 0.9
clutter_model = dict(volume=dict(xmin=-10, xmax=10, ymin=0, ymax=0), lam=0.05*20)
H = np.array([[1, 0]])
f = lambda x: F@x
h = lambda x: H@x
motion_model = dict(f=f, Q=Q1)
sensor_model = dict(h=h, R=R, PD=PD)
gater = gaters.MahalanobisGater(sensor_model, 9.2)
Y = list(dat24['Y2'].flatten())
N = len(Y)
P0 = np.diag([R[0, 0], 0.1])
logic_params = dict(N1=2, M1=2, N2=2, M2=3, N3=3)
filt = filters.EKF(motion_model, sensor_model)
init_track = lambda y, k, identity: dict(stage='tentative',
                                         nmeas=1,
                                         nass=1,
                                         x=[np.append(y, 0)],
                                         P=[P0],
                                         t=[k],
                                         identity=identity,
                                         associations=[k])
```
%% Cell type:code id:51560ab1 tags:
``` python
tracks, confirmed_tracks = logic_evaluater(filt, Y, associator, gater, logic.nm_logic, logic_params, init_track)
plot_logic_result(tracks, confirmed_tracks)
plt.show()
```
%% Cell type:markdown id:925927c7-701d-4a3d-bed0-b705d2eb2908 tags:
The top figure shows tracks over time. Confirmed tracks are plotted with solid lines and tentative tracks with dashed lines. Times when measurements are associated to a certain track are visualized as crosses. The bottom figure shows the actual tracks over time; the y-axis shows the measurements as well as the predicted measurements. Again, solid lines are confirmed tracks and dashed lines are tentative. There seem to be around 6 confirmed tracks over time and quite a few tentative ones.
%% Cell type:markdown id:ba171671-ad40-444e-ad9f-9d0256b8c2c4 tags:
### Task 3.2 - Score Logic
A score logic was implemented with an exponential forgetting factor to avoid integrator wind-up. The parameters were set to
$$P_{TM}=0.01,~P_{FC}=0.001,~\lambda=0.9,~P_D=0.9,~P_G=1,~\beta_{FA}=0.05,~L_{hi}=\log\frac{1-P_{TM}}{P_{FC}},~L_{lo}=\log\frac{P_{TM}}{1-P_{FC}},~L_{del}=L_{lo}$$
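The score update itself can be sketched as follows, assuming a Gaussian predicted measurement with innovation $\epsilon$ and innovation covariance $S$. This is a minimal illustration of a log-likelihood-ratio score with exponential forgetting, not the implementation in `src.logic.score_logic`.
%% Cell type:code tags:
``` python
# Minimal sketch of a score logic with exponential forgetting -- illustrative, not src.logic.score_logic
def score_logic_sketch(eps, S, track, params):
    if eps.size > 0:  # a measurement was associated to the track
        l = stats.multivariate_normal.pdf(eps.flatten(), cov=S)
        dL = np.log(params['PD']*l/params['Bfa'])
    else:             # no measurement was associated (missed detection)
        dL = np.log(1 - params['PD']*params['PG'])
    # Exponential forgetting keeps the accumulated score from winding up
    track['Lt'] = params['lam']*track['Lt'] + dL
    Lhi = np.log((1 - params['Ptm'])/params['Pfc'])
    Llo = np.log(params['Ptm']/(1 - params['Pfc']))
    if track['stage'] == 'tentative':
        if track['Lt'] > Lhi:
            track['stage'] = 'confirmed'
        elif track['Lt'] < Llo:
            track['stage'] = 'deleted'
    elif track['stage'] == 'confirmed' and track['Lt'] < params['Ldel']:
        track['stage'] = 'deleted'
    return track
```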
%% Cell type:code id:fbd47a7a-6bf1-41a2-808f-bef156e31937 tags:
``` python
Ptm = 0.01
Pfc = 0.001
lam = 0.9
logic_params = dict(PD=sensor_model['PD'], PG=1, lam=lam, Ptm=Ptm, Pfc=Pfc, Bfa=0.05, Ldel=np.log(Ptm/(1-Pfc)))
init_track = lambda y, k, identity: dict(stage='tentative',
                                         Lt=0,
                                         x=[np.append(y, 0)],
                                         P=[P0],
                                         t=[k],
                                         identity=identity,
                                         associations=[k])
```
%% Cell type:code id:eff42613 tags:
``` python
tracks, confirmed_tracks = logic_evaluater(filt, Y, associator, gater, logic.score_logic, logic_params, init_track)
plot_logic_result(tracks, confirmed_tracks)
plt.show()
```
%% Cell type:markdown id:d07c0cec-1a4f-4226-9dab-5d68f83756e8 tags:
The top figure shows tracks over time. Confirmed tracks are plotted with solid lines and tentative tracks with dashed lines. Times when measurements are associated to a certain track are visualized as crosses. The bottom figure shows the actual tracks over time; the y-axis shows the measurements as well as the predicted measurements. Again, solid lines are confirmed tracks and dashed lines are tentative. There seem to be around 5 confirmed tracks over time and quite a few tentative ones. This score-based logic does not kill off possible tracks as quickly as the N/M logic, but that comes down to the tuning of the deletion criterion as well as the exponential forgetting factor.