Sep 23

Navigation Error Predictions – Part 3

I know, it's been a long time coming. Last February, I wrote a Nog on predicting GPS navigation errors in the long term - over days and weeks. In this Nog, I'll cover predicting short-term navigation errors, which, believe it or not, is a little trickier. For long-term errors, we can use statistics to predict the general behavior of GPS clocks and ephemeris, distilling that down into a statistical position error prediction. That type of prediction results in an error covariance, an error ellipsoid around the true position. For the short term (several hours), we have access to the latest clock and ephemeris errors, and by using them we can create a predicted error vector, which is a better thing to have. The difference between an error ellipsoid and an error vector can be explained by example. Suppose you lose your car keys. An error ellipsoid may tell you that they are in your house somewhere - not too bad of a search, but you have to search the entire house. An error vector would tell you that they are under last week's mail in the kitchen junk drawer - much better information! A lot less searching. In the navigation world, an error ellipsoid tells you the treasure is in the general area, but an error vector points to the giant X on the map.
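
To put some completely made-up numbers on that difference, here's a quick Python sketch (illustrative only - nothing below comes from real GPS data): the covariance defines the size of the region you have to search, while the error vector is a single correction you could apply directly.

```python
import numpy as np

# Made-up numbers for illustration only - not real GPS data.
# Long-term style prediction: a position error covariance (meters^2).
P = np.array([[4.0, 0.5, 0.0],
              [0.5, 9.0, 0.0],
              [0.0, 0.0, 16.0]])

# The error ellipsoid's semi-axes come from the eigenvalues of the
# covariance - it tells you how big the "house" is that you have to search.
semi_axes = np.sqrt(np.linalg.eigvalsh(P))
print("1-sigma ellipsoid semi-axes (m):", semi_axes)

# Short-term style prediction: a specific error vector you could subtract
# right back out of the navigation solution - the X on the map.
err_vec = np.array([1.2, -3.4, 0.8])  # meters
print("Error vector magnitude (m):", np.linalg.norm(err_vec))
```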

So, now that we have a basic understanding of the types of errors, let's look at how we might use the data we already have (in a PAF file) to predict error vectors for several hours.  If you're not sure how a PAF file leads to a navigation error assessment, be sure to catch up with these Nogs.

An initial thought would lead us to perform a linear extrapolation of the data in a PAF file. This will definitely produce navigation accuracy predictions, but not ones you'd want to use to search for your keys. The plot below shows how the navigation error prediction grows dramatically as time goes on. This is just not good at all; in fact, I'd call this prediction: FAIL. In the plot, the data prior to 12:00 is actual data, and the data after 12:00 is predicted, based on the data before 12:00. The linear extrapolation routine uses the information in the last data points of a PAF file and extrapolates them based on their values and their rate of change (first derivative).

[Plot: ExtrapPositionError - position error prediction from linear extrapolation]
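
For the curious, here's a rough sketch of that kind of extrapolation in Python - toy data and made-up numbers, not the actual PAF processing code:

```python
import numpy as np

def linear_extrapolate(t, y, t_future, fit_points=4):
    """Project a time series forward using the value and rate of change
    (first derivative) of its last few samples."""
    slope, intercept = np.polyfit(t[-fit_points:], y[-fit_points:], 1)
    return slope * t_future + intercept

# Toy example: 12 hours of pretend clock error samples every 15 minutes.
t = np.arange(0.0, 12.0, 0.25)
y = 2.0 * np.sin(t / 3.0) + 0.1 * t

# Predict the next 12 hours from the trend of the last hour of data.
t_pred = np.arange(12.0, 24.0, 0.25)
y_pred = linear_extrapolate(t, y, t_pred)
# Any curvature in the real data makes this straight-line prediction
# diverge quickly - the runaway error growth in the plot above.
```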

So, besides linear extrapolation, what else can we do? I spent a lot of time thinking about this (which is why there have been so few Nogs lately) and I've come up with an algorithm that better mimics the PAF data and leads to much better predictions. There are further refinements I plan to make to this algorithm, but I wanted to share the results I have so far.

The plot below shows the clock error for two different satellites - a major contributor to the navigation position error. This plot was taken directly from the GPS Satellite Performance page here.

[Plot: PRN27PRN12TruthDay200 - PRN 27 and PRN 12 clock error truth data, day 200]
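
A quick back-of-the-envelope calculation shows why clock error matters so much: a clock error is a timing error, and at the speed of light even a few nanoseconds becomes meters of ranging error (illustrative numbers only):

```python
# Each nanosecond of satellite clock error becomes range error at the
# speed of light, which is why clock error drives the position error.
C = 299_792_458.0        # speed of light, m/s

clock_error_ns = 5.0     # an illustrative 5 ns clock error ...
range_error_m = clock_error_ns * 1e-9 * C
print(f"{clock_error_ns} ns of clock error -> {range_error_m:.2f} m of range error")
# ... is about 1.5 m of ranging error before DOP scales it into the
# position solution.
```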

With such different behaviors among satellites, I needed a prediction method that was just as varied. The first cut at the prediction algorithm produced the results below:

[Plot: TwoPredictedPRNs - predicted clock error for the two PRNs]

These results are in the ballpark! We're now getting closer to our navigation error prediction. Using the same prediction algorithm on all of the data in the PAF file - same scheme as before, data before 12:00 is actual, data afterwards is predicted - I generated 30 samples of predicted PAF data. The prediction algorithm is based on random numbers, so the 30 samples all look different. I plotted these samples along with the truth navigation error to see how well we fared. This plot shows the results:

[Plot: SampledPositionError - sampled position error predictions vs. truth]

The thick red line shows the true position error, and the thick blue line shows the mean of the 30 prediction samples. The other lines each represent one of the 30 predictions of the navigation position error. These results are much better than the linear extrapolation method above. We can see the effect of Dilution of Precision (DOP) on the accuracy - where the truth data rises, we see similar rises in the predicted accuracy - but because half of the navigation accuracy calculation is based on data that is inherently random, we'll never match exactly. The idea here is to get as close as we can. I'd much rather use this new algorithm to find my lost keys - at least I'd know they are still somewhere near the kitchen!
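
I'm not ready to spell out the actual algorithm yet, but to give a flavor of the sampling idea, here's a rough Python sketch using a simple first-order Gauss-Markov process as a stand-in for the real error model - all the parameters below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng()

def sample_prediction(last_value, last_rate, t_pred, sigma=0.5, tau=4.0):
    """One random realization of a predicted error history. A simple
    first-order Gauss-Markov process stands in for the real model here:
    it follows the recent trend but is allowed to wander randomly rather
    than running off along a straight line forever."""
    dt = np.diff(t_pred, prepend=t_pred[0])
    y = np.empty_like(t_pred)
    y[0] = last_value
    rate = last_rate
    for i in range(1, len(t_pred)):
        # Exponentially forget the old rate and add process noise.
        rate = rate * np.exp(-dt[i] / tau) + sigma * np.sqrt(dt[i]) * rng.standard_normal()
        y[i] = y[i - 1] + rate * dt[i]
    return y

# 30 random samples over a 12-hour prediction span, plus the ensemble mean
# (the thick blue line in the plot above is the analogous quantity).
t_pred = np.arange(0.0, 12.0, 0.25)
samples = np.array([sample_prediction(2.0, 0.1, t_pred) for _ in range(30)])
mean_prediction = samples.mean(axis=0)
```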

Prediction is a tricky game, but the better we understand the problem, the better our predictions are likely to be.

I'll keep working on improvements to this algorithm; in the meantime, let me know your thoughts! My brain hurts, it's time for a Nog.

Smooth seas...
