2.2 Linear orbit identification

By the orbit identification problem we mean the problem of finding an algorithm to determine which pairs of orbits, among the many included in some catalog, might belong to the same object. We assume that both orbits, for which the possibility of identification is being investigated, have been obtained as solutions of a least squares problem; note that this is not always the case for orbit catalogs containing asteroids observed only over a short arc. There are therefore two uniquely defined vectors of elements, $X_1$ and $X_2$, and the normal and covariance matrices $C_1, C_2, \Gamma_1, \Gamma_2$, computed after convergence of the iterative differential correction procedure, that is at $X_1$ and $X_2$. The two target functions of the two separate orbit determination processes are:

\begin{eqnarray*}
Q_i(X) &=& \frac{1}{m_i}\,\xi_i\cdot \xi_i = Q_i(X_i) + \frac{2}{m_i}\,(X-X_i)\cdot C_i\,(X-X_i) + \ldots \\
 &=& Q_i^* + \Delta Q_i \, ,\qquad i=1,2\ ,
\end{eqnarray*}

where $\xi_i$ are the two residual vectors, of dimensions $m_i$, of the separate orbit determination processes.

For the two orbits to represent the same object, observed at different times, we need to find a sufficiently low minimum of the joint target function, formed from the sum of squares of all $m=m_1+m_2$ residuals:

\begin{eqnarray*}
Q &=& \frac{1}{m}\,(\xi_1\cdot \xi_1 + \xi_2\cdot\xi_2)= \frac{1}{m}\,(m_1Q_1+ m_2 Q_2) \\
 &=& \frac{1}{m}\,( m_1 Q_1^*+ m_2Q_2^*) + \frac{1}{m}\,(m_1\Delta Q_1 + m_2 \Delta Q_2) = Q^* + \Delta Q\ ,
\end{eqnarray*}


where $Q^*$ is the value corresponding to the sum (with suitable weighting) of the two separate minima, and the penalty $\Delta Q$ measures the increase in the target function resulting from the need to use the same orbit for both sets of observations.
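This decomposition is straightforward to check numerically. The following is a minimal sketch in Python with NumPy; the residual vectors \texttt{xi1} and \texttt{xi2} are arbitrary placeholders, not taken from a real orbit fit.

\begin{verbatim}
import numpy as np

# Hypothetical residual vectors of the two separate fits
# (placeholders; real ones come from observed-minus-computed values).
xi1 = np.array([0.10, -0.20, 0.05])          # m1 = 3 residuals
xi2 = np.array([0.00, 0.15, -0.10, 0.20])    # m2 = 4 residuals
m1, m2 = xi1.size, xi2.size
m = m1 + m2

Q1 = xi1 @ xi1 / m1                 # Q_1 = (1/m1) xi_1 . xi_1
Q2 = xi2 @ xi2 / m2                 # Q_2 = (1/m2) xi_2 . xi_2
Q  = (xi1 @ xi1 + xi2 @ xi2) / m    # joint target function

# Q is the weighted mean of the two separate target functions.
assert np.isclose(Q, (m1 * Q1 + m2 * Q2) / m)
\end{verbatim}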

The linear algorithm to solve the problem is obtained when the quasi-linear approximation can be used not only locally, in the neighborhoods of the two separate solutions $X_1$ and $X_2$, but also globally for the joint solution. This is a very strong assumption, because in general we cannot assume that the two separate solutions are close to each other; but if the assumption holds, we can use the quadratic approximation for both penalties $\Delta Q_i$ and obtain an explicit formula for the solution of the identification problem:

\begin{eqnarray*}
\frac{m}{2}\, \Delta Q(X) &\simeq& (X-X_1)\cdot C_1\,(X-X_1) +(X-X_2)\cdot C_2\,(X-X_2) \\
 &=& X\cdot (C_1+C_2)\, X -2\,X\cdot(C_1\,X_1+C_2\,X_2)+X_1\cdot C_1\,X_1 + X_2\cdot C_2\,X_2\ .
\end{eqnarray*}

Neglecting higher order terms, the minimum of the penalty $\Delta Q$ can be found by minimizing the nonhomogeneous quadratic form in the formula above. If the new joint minimum is $X_0$, then by expanding around $X_0$ we have

\begin{displaymath}
\frac{m}{2}\, \Delta Q \simeq (X-X_0)\cdot C_0\, (X-X_0) + K
\end{displaymath}

and by comparing the last two formulas we find:

\begin{eqnarray*}
C_0 &=& C_1+C_2 \\
C_0\, X_0 &=& C_1\, X_1 + C_2\, X_2 \\
K &=& X_1\cdot C_1\,X_1 + X_2\cdot C_2\,X_2 -X_0\cdot C_0\, X_0\ .
\end{eqnarray*}

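These relations follow from completing the square, and can be verified numerically: with $C_0$, $X_0$ and $K$ computed as above, the two expressions for $\frac{m}{2}\,\Delta Q(X)$ must agree at any trial point. A minimal sketch in Python with NumPy, using small illustrative matrices in place of real normal matrices:

\begin{verbatim}
import numpy as np

# Illustrative 2x2 normal matrices and solutions (not real orbit data).
C1 = np.array([[4.0, 1.0], [1.0, 3.0]])
C2 = np.array([[2.0, 0.5], [0.5, 2.0]])
X1 = np.array([1.0, 2.0])
X2 = np.array([1.2, 1.8])

C0 = C1 + C2                                  # joint normal matrix
X0 = np.linalg.solve(C0, C1 @ X1 + C2 @ X2)   # joint minimum point
K  = X1 @ C1 @ X1 + X2 @ C2 @ X2 - X0 @ C0 @ X0

# Completing the square: both expressions for (m/2) Delta Q(X)
# must agree at an arbitrary trial point X.
X = np.array([0.7, 2.1])
lhs = (X - X1) @ C1 @ (X - X1) + (X - X2) @ C2 @ (X - X2)
rhs = (X - X0) @ C0 @ (X - X0) + K
assert np.isclose(lhs, rhs)
\end{verbatim}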
If the matrix $C_0$, which is the sum of the two separate normal matrices $C_1$ and $C_2$, is positive-definite, then it is invertible and we can solve for the new minimum point:

\begin{displaymath}X_0=C_0^{-1}\, (C_1\, X_1 + C_2\, X_2)\ .\end{displaymath}

This equation has a very simple interpretation in terms of the differential correction process: at convergence of each of the two separate pseudo-Newton iterations, $X\longrightarrow X_i$ with $C_i=C_i(X_i)$ and $D_i=D_i(X_i)=C_i\,\Delta X_i=\underline 0$; therefore

\begin{displaymath}
C_1\,(X-X_1)=D_1=\underline 0 \;\;\mbox{and}\;\; C_2\,(X-X_2)=D_2=\underline 0 \;\Longrightarrow\; (C_1+C_2)\,X = C_1\, X_1 + C_2\, X_2\ .
\end{displaymath}

The assumption that the quasi-linear approach is applicable to the identification means that $C_1$ and $C_2$ can be kept constant, so that they have the same value at $X_1$, at $X_2$ and at $X_0$; under these conditions $X_0$ can be interpreted as the result of the first iteration of the differential corrections for the joint problem.
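This interpretation can be checked directly: with $C_1$, $C_2$ held constant, a single pseudo-Newton step of the joint problem lands exactly at $X_0$, whatever the starting guess. A sketch with the same illustrative data as in the previous example:

\begin{verbatim}
import numpy as np

C1 = np.array([[4.0, 1.0], [1.0, 3.0]])       # same illustrative data
C2 = np.array([[2.0, 0.5], [0.5, 2.0]])       # as in the previous sketch
X1 = np.array([1.0, 2.0])
X2 = np.array([1.2, 1.8])
C0 = C1 + C2
X0 = np.linalg.solve(C0, C1 @ X1 + C2 @ X2)

# One differential correction step of the joint problem, in the linear
# regime: the right-hand sides are D_i(X) = C_i (X_i - X).
X = X1                                         # starting guess
D = C1 @ (X1 - X) + C2 @ (X2 - X)
X_next = X + np.linalg.solve(C0, D)
assert np.allclose(X_next, X0)                 # converges in one step
\end{verbatim}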

The computation of the minimum identification penalty $\Delta Q(X_0)=2K/m$ can be simplified by taking into account that $K$ is invariant under translations:

\begin{displaymath}
X_0\to X_0+V\, ,\qquad X_1\to X_1 +V\, ,\qquad X_2\to X_2+V
\end{displaymath}

\begin{displaymath}
K\to K + 2V\cdot (C_1\, X_1 + C_2\, X_2 -C_0\, X_0)=K\ .
\end{displaymath}

Then we can compute $K$ after a translation by $-X_1$, that is assuming $X_1\to \underline 0$, $X_2\to X_2-X_1=\Delta X$, and $X_0\to C_0^{-1}\,C_2\, \Delta X$:

\begin{displaymath}K= \Delta X\cdot C_2\,\Delta X- X_0\cdot C_0\, X_0=\Delta X \cdot ( C_2-C_2\, C_0^{-1}\, C_2)\, \Delta X\end{displaymath}

and by defining

\begin{displaymath}C= C_2-C_2\, C_0^{-1}\, C_2\end{displaymath}

we have a simple expression of $K$ as a quadratic form:

\begin{displaymath}K= \Delta X \cdot C\, \Delta X\ .\end{displaymath}
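Numerically, this means $K$ can be obtained from the difference $\Delta X = X_2 - X_1$ alone, without reference to the absolute elements. A sketch, again with the illustrative data used above, checking the quadratic form against the direct definition of $K$:

\begin{verbatim}
import numpy as np

C1 = np.array([[4.0, 1.0], [1.0, 3.0]])       # same illustrative data
C2 = np.array([[2.0, 0.5], [0.5, 2.0]])
X1 = np.array([1.0, 2.0])
X2 = np.array([1.2, 1.8])
C0 = C1 + C2
X0 = np.linalg.solve(C0, C1 @ X1 + C2 @ X2)

# K computed from the definition ...
K_direct = X1 @ C1 @ X1 + X2 @ C2 @ X2 - X0 @ C0 @ X0

# ... and from the quadratic form in Delta X alone.
dX = X2 - X1
C = C2 - C2 @ np.linalg.solve(C0, C2)          # C = C2 - C2 C0^{-1} C2
K_quad = dX @ C @ dX
assert np.isclose(K_direct, K_quad)
\end{verbatim}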

Alternatively, translating by $-X_2\,$, that is with $\; X_2\to \underline 0\,$, $\; X_1\to -\Delta X$ and $\; X_0\to C_0^{-1}\,C_1\,(-\Delta X)\,$:

\begin{displaymath}K= \Delta X\cdot C_1\, \Delta X - X_0\cdot C_0\, X_0=\Delta X \cdot ( C_1-C_1\, C_0^{-1}\, C_1)\, \Delta X\end{displaymath}

and the same matrix $C$ can be defined by the alternative expression:

\begin{displaymath}C= C_1-C_1\, C_0^{-1}\, C_1 \, ,\qquad K= \Delta X \cdot C\, \Delta X\end{displaymath}

Note that both these formulas only assume that $C_0^{-1}$ exists. Under this hypothesis

\begin{displaymath}
C=C_2-C_2\, C_0^{-1}\, C_2 = C_1-C_1\, C_0^{-1}\, C_1\ . \qquad\qquad (2)
\end{displaymath}


This identity holds in exact arithmetic, but may fail to be reproduced in a numerical computation if the matrix $C_0$ is badly conditioned.
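A sketch of this comparison; note that it is preferable to solve linear systems with $C_0$ rather than form $C_0^{-1}$ explicitly, which helps precisely when $C_0$ is poorly conditioned:

\begin{verbatim}
import numpy as np

C1 = np.array([[4.0, 1.0], [1.0, 3.0]])       # same illustrative data
C2 = np.array([[2.0, 0.5], [0.5, 2.0]])
C0 = C1 + C2

# Avoid forming C0^{-1} explicitly: solve C0 Y = C_i instead.
C_from_C2 = C2 - C2 @ np.linalg.solve(C0, C2)
C_from_C1 = C1 - C1 @ np.linalg.solve(C0, C1)

# Equal in exact arithmetic; in floating point the agreement degrades
# as the condition number of C0 grows.
assert np.allclose(C_from_C2, C_from_C1)
print(np.linalg.cond(C0))                      # monitor conditioning
\end{verbatim}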

We can summarize the conclusions by the formula

\begin{displaymath}
Q(X)\simeq Q^* + \frac{2}{m}\, K + \frac{2}{m}\,(X-X_0)\cdot C_0\, (X-X_0)
\end{displaymath}

which gives the minimum identification penalty $\Delta Q(X_0)=2K/m$ and also allows one to assess the uncertainty of the identified solution, by defining a confidence ellipsoid with matrix $C_0$.
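An end-to-end sketch of the linear identification test follows; the acceptance threshold \texttt{penalty\_max} is a hypothetical control value, not one prescribed in this section:

\begin{verbatim}
import numpy as np

C1 = np.array([[4.0, 1.0], [1.0, 3.0]])       # same illustrative data
C2 = np.array([[2.0, 0.5], [0.5, 2.0]])
X1 = np.array([1.0, 2.0])
X2 = np.array([1.2, 1.8])
m1, m2 = 3, 4                                  # numbers of residuals
m = m1 + m2

C0 = C1 + C2
X0 = np.linalg.solve(C0, C1 @ X1 + C2 @ X2)    # proposed joint solution
dX = X2 - X1
C  = C2 - C2 @ np.linalg.solve(C0, C2)
K  = dX @ C @ dX
penalty = 2.0 * K / m                          # minimum Delta Q(X_0)

# Accept the identification only if the penalty is small enough;
# the threshold below is a hypothetical control, not from the paper.
penalty_max = 1.0
if penalty < penalty_max:
    print("candidate identification, joint solution:", X0)
\end{verbatim}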

