Archive for the 'Navigation Accuracy' Category

Number of GPS satellites – does it matter?

August 07th, 2010 | Category: Navigation Accuracy,Navigation Accuracy Library

In a previous Nog I wrote, an interesting result showed up. The 2nd Space Operations Squadron (2SOPs) is currently in the midst of rephasing several GPS satellites to optimize the coverage the entire constellation provides. I analyzed the coverage before and after the optimization and showed plots of the coverage in both instances. In one of the plots, the sheer number of GPS satellites available to your GPS receiver goes down AFTER the optimization. This is a little non-intuitive, especially for those of us who have been in the GPS business a while. We tend to equate Dilution of Precision (DOP), that value associated with the GPS satellites' orientation, with navigation accuracy. This is somewhat true, but not always.

I decided to see how I could use AGI’s navigation library to prove the point - so I wrote a small application that runs over a single day, at 60 second intervals.  You can choose to calculate over one site or more, randomly picked around the globe.  So, over one site, I’ll get 1440 points of data.  The sites use a 5 degree mask angle above the horizon and the tool uses a SEM almanac and PAF file for July 1, 2010.
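For readers who want a feel for the structure before opening the project, here's a bare-bones C# sketch of that sampling loop. It only shows the bookkeeping (random sites and one-minute time steps); the EvaluateSite method is a placeholder I made up to stand in for the AGI library calls, and the dummy values it returns are not real results.

using System;
using System.Collections.Generic;

class SamplingSketch
{
    // Stand-in for the real work done with the AGI components: at one site and
    // time, return the number of visible SVs (5 degree mask), the PDOP and the
    // position error.  Dummy values here - replace with the library calls.
    static void EvaluateSite(double latDeg, double lonDeg, DateTime time,
                             out int svCount, out double pdop, out double posError)
    {
        svCount = 0;
        pdop = 0.0;
        posError = 0.0;
    }

    static void Main()
    {
        Random random = new Random();
        int numberOfSites = 30;                    // chosen in the tool's UI
        DateTime start = new DateTime(2010, 7, 1); // matches the July 1, 2010 almanac/PAF

        List<int> svCounts = new List<int>();
        List<double> pdops = new List<double>();
        List<double> posErrors = new List<double>();

        for (int site = 0; site < numberOfSites; site++)
        {
            // Random site, uniform in latitude/longitude for simplicity.
            double latDeg = random.NextDouble() * 180.0 - 90.0;
            double lonDeg = random.NextDouble() * 360.0 - 180.0;

            // One day at 60 second steps -> 1440 samples per site.
            for (int minute = 0; minute < 1440; minute++)
            {
                int svs;
                double pdop, err;
                EvaluateSite(latDeg, lonDeg, start.AddMinutes(minute),
                             out svs, out pdop, out err);
                svCounts.Add(svs);
                pdops.Add(pdop);
                posErrors.Add(err);
            }
        }

        Console.WriteLine("Collected " + posErrors.Count + " samples for plotting.");
    }
}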

I want to show graphs of the following:

  • The PDOP value against the number of GPS satellites
  • The position navigation accuracy against the number of GPS satellites
  • The position navigation accuracy against the PDOP value
  • Oh, and a histogram of the number of available GPS satellites

So this tool serves two purposes. It lets you play with the generated data, using as many sites as you like, to determine how much or how little the number of GPS satellites available affects your navigation error. It also shows you how to create a simple program with our AGI components (and how easy they are to program with!). The components are free for development and personal use and can be downloaded here: http://adn.agi.com/detailedView.cfm?resourceId=240.

The Gizmo

So let's look at the tool I created. I built it using the C# .NET language (my favorite). It's a standard Windows Forms application, built using MS Visual Studio 2008. If you want to build and run this tool yourself (HIGHLY recommended), you'll need a few things. See the Appendix at the end of this Nog for details. The main tool looks like this:

[Screenshot: the gizmo's main form]

As the well-spelled-out instructions state, you just pick the number of sites you want the tool to use for the analysis, then press the Calculate button. Once the calculations are finished, you can select any of the four buttons to plot the results.

Let's look at some typical results. I'm going to pick 30 sites to use – that gives a better average than a single site would.

Number of GPS SVs v. PDOP

Here's the plot of the Position DOP against the number of GPS satellites available for the solution.

[Plot: PDOP vs. number of GPS SVs in view]

As you may have expected, the PDOP value does decrease as we get more satellites visible above the mask angle – there is a clear decreasing trend in the data.

Number of GPS SVs v. Navigation Error

Let’s see how the navigation error looks against the number of GPS SVs.

[Plot: position navigation error vs. number of GPS SVs in view]

There is no clear trend here; we get roughly the same spread of errors with 13 satellites in view as we do with 8. There is a slight decreasing trend beyond 13 satellites, though. Build the tool yourself and play with the number of sites to see whether this is an artifact of the random sites used for my run, or whether these results are repeatable.

PDOP v. Navigation Error

So it doesn’t look like the number of satellites affects my navigation error – but does PDOP affect my navigation error?  Mathematically, we know it does:

Δx = (GᵀG)⁻¹ Gᵀ Δρ

Here Δx is the positioning error vector, G is the geometry matrix and Δρ is the vector of corrected pseudorange errors. The relationship is linear, though in a matrix framework. Let's see how this looks graphically:

[Plot: position navigation error vs. PDOP]

Not as linear as you might see in a textbook example. In fact, some areas of relatively high PDOP (4-5) have very low navigation error, meaning the pseudorange errors are very small there. Conversely, some low PDOP data points have a comparatively high navigation error, meaning the pseudorange errors are large at those points.
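To put the point in one line: the usual scalar summary of that matrix relationship (my notation, assuming zero-mean pseudorange errors with a common standard deviation σ_ρ) is

\[ \sigma_{pos} \approx \mathrm{PDOP}\cdot\sigma_{\rho}, \qquad \mathrm{PDOP} = \sqrt{\operatorname{tr}\big[(G^{T}G)^{-1}\big]_{3\times 3}} \]

where the 3x3 subscript picks out the position (non-clock) block. PDOP is only a multiplier; the scatter above isn't a straight line because the other factor, the pseudorange error itself, changes from epoch to epoch and site to site.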

Number of GPS SVs Histogram

Just for fun, here’s the histogram of the number of GPS SVs at all 30 locations over the entire day.

[Histogram: number of GPS SVs in view over the day]

There are roughly 10-11 SVs in view on average with the current constellation, above a 5 degree mask angle.

Conclusions

Based on this single run (and the 100 or so I've already done and seen), it is evident that the more SVs available to you, the better your DOP is. This does not mean that your accuracy will be better though, as evidenced by the other graphs. So, don't worry if you have fewer satellites after the optimization; it doesn't really matter with the current level of performance 2SOPs provides us.

Appendix:  How to get and build the gizmo

You’ll need the following:

Once you have all of these installed and unzipped, do the following:

  1. Open the project in Visual Studio
  2. Be sure the Solution Explorer is visible (View|Solution Explorer)
  3. Expand the References area and right-click, then select Add Reference…
  4. Browse to the AGI Components install \ Assemblies folder and select the following assembly files:
    1. AGI.Foundation.Navigation.dll
    2. AGI.Foundation.Core.dll
    3. AGI.Foundation.Platforms.dll
    4. AGI.Foundation.Models.dll
  5. Once those are added, right-click on the Project name and select Add | Existing Item…
  6. Browse to the AGI Components install \ Assemblies folder again.
  7. This time, add the licenses.licx file.  You may have to use the “All Files (*.*)” filter to see it.  Be sure you are adding the .licx file and not the .lic file that is also in that directory. (You should have placed the .lic file in that folder as part of the AGI Components install.)  Note that the .licx file tells the compiler to compile in the .lic file and thus license your application for use.  Without this, the application will throw a license exception.
  8. Build and run the tool.  I've tested on Win 7 and Win XP.

Feel free to e-mail me with questions about running the tool, analysis results you see or any other general comments: navigation@agi.com.

Smooth sailing,

Ted


GPS constellation optimization analysis

February 26th, 2010 | Category: Navigation Accuracy

The 2nd Space Operations Squadron last month put into play an optimization scheme for the GPS satellite constellation that will bring better accuracy to GPS users worldwide. Prior to now, GPS was only required to have 24 satellites operating in 24 specific orbital slots. Of course, there have been many more than 24 satellites on orbit for over 12 years. Because of the 24 satellite requirement though, their orbital positions were determined by optimizing constellation performance on only those 24 slots. That has now changed. There is a new 24+3 constellation slot definition that allows GPS satellite orbital positions to be optimized based on a 27 satellite constellation rather than 24. This means that your GPS receiver's positioning accuracy will get better – for free! I'm not covering the specifics of the satellite moves in this Nog; those are covered in the articles linked above. Here, I'm focusing on the end user's gain from this change.

Remember that your position accuracy depends generally on two things: the orientation of the GPS satellites at a given time and the accuracy of the GPS signals themselves. I know, I know, there are other factors involved in accuracy, but let's just look at the big picture here : ).

The metric defining the orientation of the satellites is Dilution of Precision (DOP); the metric defining the accuracy of the satellite signals is User Range Error (URE). Because the GPS orbital slot positions are changing based on this new optimization, we expect to see a change in DOP, not URE. The new positions presumably have been chosen to optimize DOP – let's see how much better the DOP will be when the satellites have completed their moves. There are several types of DOP, representing the effect of satellite orientation on different coordinates. For example, Horizontal DOP (HDOP) represents how much satellite orientation affects your navigation positioning error on the surface of the Earth (2 dimensions). Vertical DOP (VDOP) represents how the satellite orientation affects your altitude positioning accuracy.
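For the DOP-inclined, the split I'm describing can be written out explicitly (my notation; D is the usual (GᵀG)⁻¹ cofactor matrix expressed in an East-North-Up frame):

\[ \mathrm{HDOP} = \sqrt{D_{EE} + D_{NN}}, \qquad \mathrm{VDOP} = \sqrt{D_{UU}} \]

so HDOP captures the horizontal (2-D) part of the geometry and VDOP the vertical part.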

For my analysis, I'll look at HDOP and VDOP values for the entire globe for today's constellation, and for the constellation once all the satellite moves have been completed for the optimization. I'll call this the optimized constellation. I'm also going to constrain the DOP calculations by requiring that each point on Earth ignore any GPS satellite below 10 degrees above the horizon. This is a typical mask angle for this type of analysis. Note that clicking on each picture will bring up a full size version, for closer examination.

Horizontal DOP

Today’s average HDOP values are shown in Figure 1.  The HDOP average is taken over a 24 hour period from Jan 1, 2010 to Jan 2, 2010.  The color at each point on the grid represents the daily average HDOP for that grid point.


Figure 1 – Average HDOP, January 1, 2010

Now we’ll look at how HDOP changes with the new optimized constellation.  Figure 2 shows the average HDOP with the newly optimized constellation.


Figure 2 – Average HDOP, Optimized Constellation

The changes here aren't too striking, but you'll notice that the bands where the average HDOP is greater than 1.0 (the red bands) have shrunk a bit. Let's see if Vertical DOP fares any better.

Vertical DOP

Today’s average VDOP values are shown in Figure 3.  Again, the VDOP average is taken over a 24 hour period from Jan 1, 2010 to Jan 2, 2010.  The color at each point on the grid represents the daily average VDOP for that grid point.


Figure 3 – Average VDOP, January 1, 2010

Next is the average VDOP for the new, optimized constellation, shown in Figure 4.


Figure 4 – Average VDOP, Optimized Constellation

It appears that VDOP is getting improved more than HDOP – but by how much?

Let’s look at some numbers.

I’ll take a global average of all the HDOP and VDOP values and put them into a table:

                 Optimized   January 1, 2010   Percent change
Average HDOP     0.9491      0.9759            -2.743%
Average VDOP     1.651       1.699             -2.855%

Table 1 – DOP Percentage Changes

So the VDOP change is slightly better than the HDOP change, and both reflect roughly a 3% improvement in DOP performance.

How do these changes in DOP affect what we really care about though – our GPS positioning error?  Using the same methodology as above, I’ll now look at navigation accuracy for the January 2010 constellation and the optimized constellation.  To complete a navigation accuracy analysis, I need User Range Error (URE) information for each satellite. I’ll use the GPS User Range Errors from January 1, 2010 for both the current and the optimized constellations.  The green bars in Figure 5 show the values I’m using for each satellite.  Because PRN 1 (SVN 49) is a critical piece of this optimization, I’m including it in the analysis, with a URE of two (2) meters.  I’ve also included a receiver error value of two (2) meters in the following accuracy analysis.


Figure 5 – GPS User Range Error values: Dec 30, 2009 – Jan 01, 2010
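For reference, the way a per-satellite URE and the receiver error enter the accuracy calculation is the usual root-sum-square error budget. I'm showing the textbook combination here, not the library's internal bookkeeping; with the 2 meter URE assumed for PRN 1 and the 2 meter receiver error, the combined range error for that satellite works out to

\[ \sigma_{range} = \sqrt{\mathrm{URE}^{2} + \sigma_{receiver}^{2}} = \sqrt{2^{2} + 2^{2}} \approx 2.8\ \text{meters} \]

and the DOP then scales those per-satellite range errors into the position error shown in the figures that follow.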

Horizontal Accuracy

Figure 6 shows the average horizontal accuracy for January 1, 2010.   The color at each grid point is the average horizontal navigation error over 24 hours, for the constellation on January 1, 2010.  I tend to use navigation accuracy and navigation error interchangeably – to me, they mean the same thing.


Figure 6 – Average Horizontal Navigation Accuracy, January 1, 2010

With the new constellation, we do see some improvements.  Figure 7 shows the horizontal accuracy using the optimized constellation.


Figure 7 – Average Horizontal Navigation Accuracy, Optimized Constellation

Vertical Accuracy

The vertical navigation accuracy plots show improvement as well.  Figure 8 shows the average vertical navigation error for January 1, 2010.


Figure 8 – Average Vertical Navigation Accuracy, January 1, 2010

Figure 9 shows the improved average vertical accuracy with the optimized constellation.


Figure 9 – Average Vertical Navigation Accuracy, Optimized Constellation

Table 2 shows the percent improvement in average accuracy with this change in constellation orientation.

                                       Optimized   January 1, 2010   Percent change
Average Horizontal Accuracy (meters)   2.244       2.291             -2.05%
Average Vertical Accuracy (meters)     3.949       4.027             -1.93%

Table 2 – Average Accuracy Percent Change

The changes aren't eye-popping – you do get an overall accuracy improvement, but you may not notice it. But hey, it's free, right?

Number of Satellites Visible

One last piece of analysis.  Typically you might equate the number of GPS satellites available to your receiver with better accuracy.  This does make some sense, but it’s important to remember that quality trumps quantity – fewer satellites oriented optimally are better than more satellites oriented sub-optimally.  This is shown in the following figures.  Figure 10 shows the minimum number of GPS satellites you’d see over 24 hours at a given location with the current constellation.


Figure 10 – Minimum number of GPS satellites visible over 1 day, January 1, 2010

Figure 11 below shows the same plot but with the optimized constellation.


Figure 11 – Minimum number of GPS satellites visible over 1 day, Optimized constellation

Note that the scale starts at four (4) satellites visible and goes up to nine (9). So far so good – it looks like we have larger minimums over the globe, except at the poles. Also note that the two areas that currently have a minimum of 4 satellites available (in red in Figure 10) have been eliminated in the optimized constellation.

What about the maximum number of satellites available?


Figure 12 – Maximum number of GPS satellites visible over 1 day, January 1, 2010

Figure 12 shows the maximum number of satellites visible over one day for the current constellation.  Note the scale on this graph has changed – it starts at ten (10) and goes up to fifteen (15).  Figure 13 shows the same plot, but for the optimized constellation.


Figure 13 – Maximum number of GPS satellites visible over 1 day, Optimized constellation

The maximum number of visible satellites has gone down in most cases. This is counterintuitive, but it echoes the idea that the orientation is what's important, not the sheer number of satellites visible.

All in all, the new optimized GPS constellation will improve average DOP and navigation accuracy, but not substantially so. Another way to approach this analysis is to look at the maximum errors of today's constellation versus the optimized constellation. This may (or may not!) show more improvement, but those maximum errors occur only rarely in any case. What we usually have is the average case as we cruise about with our GPS receivers. Thanks for the optimization, 2SOPs – we'll take it!

Remember fresh batteries.


Navigation Error Predictions – Part 3

September 23rd, 2009 | Category: Navigation Accuracy

I know, it's been a long time coming. Last February, I wrote a Nog on predicting GPS navigation errors in the long term - over days and weeks. In this Nog, I'll cover predicting short-term navigation errors, which is a little trickier, believe it or not. This is because for long-term errors, we can use statistics to predict the general behavior of GPS clocks and ephemerides, distilling that down into a statistical position error prediction. That type of prediction results in an error covariance - an error ellipsoid around the true position. For the short term (several hours), we have access to the latest clock and ephemeris errors, and by using them we can create a predicted error vector, which is a better thing to have. The difference between an error ellipsoid and an error vector can be explained by example. Suppose you lose your car keys. An error ellipsoid may tell you that they are in your house somewhere - not too bad a search area, but you have to search the entire house. An error vector would tell you that they are under last week's mail in the kitchen junk drawer - much better information! A lot less searching. In the navigation world, an error ellipsoid tells you the treasure is in the general area, but an error vector points to the giant X on the map.

So, now that we have a basic understanding of the types of errors, let's look at how we might use the data we already have (in a PAF file) to predict error vectors for several hours.  If you're not sure how a PAF file leads to a navigation error assessment, be sure to catch up with these Nogs.

Read more


SVN 49 Navigation Data parameter changes

June 24th, 2009 | Category: General Navigation,Navigation Accuracy

At a telecon held by Air Force Space Command (AFSPC) last Friday, I asked a question regarding which GPS navigation data parameters were being modified to fix the elevation-based, excessive ranging errors that SVN 49 (PRN 01) is producing.  The answer on the telecon was not as detailed as I would have liked, so I followed up with an e-mail asking for the definitive terms that are being adjusted.  I received their reply today; here is the answer:

"Two methods are being evaluated for mitigating the effects of the SVN-49 problem.  This first method involves adjusting the AF0 and Tgd terms in the broadcast NAV message from SVN-49.  The second method involves adjusting the AF0 and Tgd terms as well as the square root of the semi-major axis and the mean motion difference terms in the broadcast NAV message.  The pros and cons of each method are still being assessed.  The satellite is currently being operated using the second mitigation method without the Tgd adjustment."

So, now we know what's being considered and tested.  I don't doubt that they would consider other parameters as well, if they think they can model the fix better, so this may not be the final answer.  I'll keep the Nog updated if I hear anything further.

More detailed analysis can be found on another blog by Tim Springer.

All comments and questions are appreciated, thanks for following!

Happy Nogging...


AFSPC Media Telecon for IIR-20M (svn 49, prn 01) problem

June 19th, 2009 | Category: General Navigation,Navigation Accuracy

Today I attended the Air Force Space Command (AFSPC) media telecon specifically addressing the high User Range Error (URE) problems on the newest GPS satellite. PRN 1 was launched on March 24, 2009, carrying the new L5 payload. The L5 payload was turned on and successfully guaranteed its spot in the spectrum for future L5 payloads on GPS. But, while L5 worked, L1 and L2 were having problems - problems no one on the ground had seen before. The URE from PRN 1 was inconsistent with the other IIR vehicles in that family, causing quite a stir. Notes from today's telecon describe the situation: what happened, who's affected and what the resolution is.

Telecon started at 12:00 PM PDT, June 19, 2009

Col. Dave Madden and Col. Dave Buckman answering questions.

Several media representatives asked questions.

Note that I've paraphrased the questions and answers for brevity and clarity

Question: Does the problem affecting this satellite extend to GPS III?

Answer: (Madden) No, this problem is specific to this satellite only. It turns out that the L5 payload was added to an existing IIR vehicle using the Auxiliary port [I'm assuming it's the RAP functionality on the satellite - the Reserve Auxiliary Payload]. All ground tests were normal and everything seemed fine. This Auxiliary port is not the same architecture intended for implementing L5 on the GPS III vehicles. It turns out that by connecting the L5 payload to the Auxiliary port, L1 and L2 energy is reflected and not compensated for. To correct this, we've effectively moved the antenna phase center and adjusted the navigation message.

Question: Is there any risk to the military's use of GPS?

Answer: (Madden) No. This satellite, even without a fix, is still well within specification. The [SIS]URE is between 2-4 meters depending on where you are on Earth, and it's elevation dependent.

Question: Will L5 on SVN49 be turned off when the next L5 payload is turned on?

Answer: (Madden) We'll probably wait until the 2nd L5 payload is on orbit and active before considering turning the SVN 49 L5 payload off.  The problem on SVN 49 is with L1 and L2, not L5.

Question: Is this problem something that needs to be addressed for the upcoming IIR launch in August 2009?

Answer: (Madden) Initially we were concerned, so we performed a root cause analysis to determine the issues. This analysis led to the finding about the Auxiliary port. We then recreated this situation on the ground in Denver and, with more extensive testing, found the same issue that we have on orbit. This verifies to us that we've found the problem, clearing the next GPS satellite for launch.

Question: How is the fix for this problem modeled?  Is it a constant bias or something else?

Answer: (Madden) The fix effectively moved the antenna phase center for the satellite to 150 meters behind the satellite.

Question: What navigation parameters are being changed to implement this fix?

Answer: (Thomas Powell, Aerospace) The ephemeris phase center value (later determined to be the Tgd value) and the clock offset values are being modified to allow a user's receiver to get a correct URE for this satellite.

Closing remarks from Col. Madden covered the Air Force's concern about the tone of the GAO's analysis of the future of GPS. Col. Madden reiterated that the Air Force has always met GPS performance commitments and that they have a robust plan for the continued health of the constellation. Another issue the GAO neglected, he continued, was that the Air Force uses power management to increase the lifetime of satellites in certain cases.

See my Nog on the GAO report issue.

Ok, so now we know the scoop - this is a one vehicle hiccup and one that can be corrected, not too bad!

In the next Nog, I get more technical, I promise. The faithful among you have been waiting for the third installment on predicting GPS accuracy - it's next! I promise! The first two Nogs on that topic are here and here.

Until then, smooth sailing.


GPS Accuracy Failing – Seriously?

May 23rd, 2009 | Category: Dynamic Geometry Library,Navigation Accuracy

The scare level regarding the Government Accountability Office (GAO) report on the risk of future GPS failures is rising precipitously. Let me be one of the first to say - hold on, there's no reason to panic, or to sell your Garmin (GRMN) stock. Many consumers have purchased the now ubiquitous GPS handhelds that tell you where to go. Providing accurate positioning, maps and voice response, they are a tempting buy (but not for me yet, somehow...). Most folks even regard their device as the GPS, not the system that provides the location signals to their device. So, what's the truth behind the failure cry and how bad is it really?

The GAO report, available here: GAO GPS Report, states there is increased risk of future GPS coverage failures because of acquisition problems - basically, the next generation of GPS satellites, the Block IIF satellites, is behind schedule. Also, there are several GPS satellites that are "single-string," meaning they have lost redundancy on one or more components. This means that if the remaining component fails, the satellite may not be able to perform its navigation mission. The GAO report is reporting on increased risk; it is not reporting on GPS failure. The conclusion in their report is essentially "let's keep a close eye on it" - by recommending the appointment of a single GPS oversight authority.

Let's talk specifics - what if the risks the GAO reports were actually to occur?  What if 6 or more satellites were to fail, with no additional satellites being launched and no GPS satellites being moved in orbit to counter poor coverage?  How bad would it get - really?

With that problem statement, I made the following conservative assumptions in order to analyze the problem:

  • A GPS user has a 12 channel receiver (able to track 12 GPS satellites at once)
  • A GPS receiver won't use any GPS satellites below 5 degrees elevation above the horizon.
  • The GPS receiver will have a combined error of 2 meters (Signal-In-Space plus receiver noise, multipath, etc.)

Let's look at today's GPS coverage:

[Figure: Baseline – maximum daily navigation error, current GPS constellation]

This picture shows the maximum navigation error, over 1 day, for the world. For the color scale, I've used 10 meters as the maximum because it's about the width of a typical neighborhood street.

So, everywhere in the US for example, the maximum error you'll see during the day is under 6 meters - roughly half the width of the street.

What about the dark areas?  How bad is the accuracy in those areas, and more importantly, how long is it bad?

The plot below shows that in the dark area in Canada, over the entire day, only a small amount of time is spent with the larger navigation error - roughly 10 minutes.  Even then, the error is only about a street width and a half.

[Figure: navigation error over one day in the dark region in Canada, current constellation]

Ok, now on to the fretful stuff.  The GAO is reporting that, because of acquisition issues, GPS accuracy may begin to suffer starting as early as 2010.  Let's look at the situation where GPS starts to lose 1, 2, 3 and more satellites and see how bad our accuracy suffers as a result.

This video shows, in each frame, one additional GPS satellite removed.  There are a total of 9 frames, corresponding to 9 satellites removed.  To decide which satellites to remove, I used data that shows which satellites are most likely to fail based on their loss of redundancy.  I did not use any reliability numbers for these satellites, simply the state of their on-orbit hardware as of March 2009.  The most likely to fail satellites are taken out first, and so on.
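If you're curious what the frame-by-frame bookkeeping looks like, here's a bare-bones C# sketch. The removal order and the ComputeMaxDailyError helper are placeholders I made up for illustration - the real run used the March 2009 loss-of-redundancy data and the AGI components to grid the errors.

using System;
using System.Collections.Generic;

class ConstellationDegradationSketch
{
    // Stand-in for the real coverage run: given the PRNs still operating,
    // return the worst daily navigation error over a world grid.
    // Hypothetical helper with a dummy body.
    static double ComputeMaxDailyError(List<int> activePrns)
    {
        return 0.0;
    }

    static void Main()
    {
        // Start with a full constellation (PRNs 1-32 here for simplicity).
        List<int> activePrns = new List<int>();
        for (int prn = 1; prn <= 32; prn++)
        {
            activePrns.Add(prn);
        }

        // PRNs ordered most-likely-to-fail first.  These values are made up;
        // the real list came from the on-orbit hardware status data.
        int[] removalOrder = { 25, 27, 30, 8, 9, 3, 6, 10, 26 };

        // One video frame per removed satellite, nine frames total.
        for (int frame = 0; frame < removalOrder.Length; frame++)
        {
            activePrns.Remove(removalOrder[frame]);
            double worstError = ComputeMaxDailyError(activePrns);
            Console.WriteLine("Frame " + (frame + 1) + ": " + activePrns.Count +
                              " SVs left, max daily error " + worstError + " m");
        }
    }
}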

The video starts to show some scary colors as we begin to remove large numbers of satellites, but remember - this is the maximum error you will see over the day.  The video points out that instead of localized larger navigation errors like we have today, many more people experience these large errors - but again, for only a short time.  Here's a plot of the worst case scenario along the Eastern seaboard, where 9 GPS satellites have failed, none have been launched, and no movement of GPS satellites has taken place to optimize the coverage.

[Figure: worst-case navigation error over one day with 9 GPS satellites removed]

Throughout the entire day, the accuracy never exceeds 22 meters (about two street widths) and averages roughly 4 meters (less than half a street width).

To counter the scary picture the video paints, I created the plot below to show the average navigation error for the world over one day, with 9 GPS satellites missing.

[Figure: average navigation error over one day with 9 GPS satellites removed]

This result shows that we will still have sufficient GPS coverage for most navigation needs even if the worst was to happen.  For those users in more constrained environments (like canyons, urban or natural) or that have more stringent navigation requirements than knowing which road they are on, there will be additional effects.  It is unlikely that any of this will happen however, given the Air Force's track record for management of the GPS constellation.

So, keep your GPS unit, whichever kind you have, and don't overreact when you hear more stories about how GPS will fail - we're nowhere near that result.

Smooth sailing, with an eye toward the sky...


GPS Daily Accuracy on Twitter

I was a little reluctant to open a Twitter account - not because I didn't think the tech was cool, but could I possibly have that much to say each day? In such short sentences? Well, I figured out that on a daily basis I may not have much to say, but GPS does. I wanted to provide some useful information to GPS followers, something that could be said in a few words.

To that end, I created an account on Twitter with the user name GPSToday. This account, I figured, could send 'tweets' to followers about GPS events, like accuracy statistics, satellite outages, etc. But this type of information would take a lot of my time to create and update on a regular basis. Ahhh, but wait - the AGI Navigation component can be coded in any way, shape or form. I could use it to create a program that did what I needed and produced the results automatically.

The first application: GPS Accuracy Stats over the globe each day.  Whether you're aware or not, GPS accuracy varies each day - due to satellite outages, GPS signal quality and many other factors.  Getting a quick glance of GPS accuracy and status on Twitter can keep you informed with no work on your part.  So what's available?

Here's a picture of a sample GPSToday Daily Accuracy tweet:

[Image: sample GPSToday daily accuracy tweet]

I use the AGI Navigation Accuracy Library, Dynamic Geometry Library and the Spatial Analysis Library to calculate the global position error, at 5 degree grid increments and 60 second time steps. I then find the Maximum, Mean and Minimum statistics over the globe for the day. Once I have this information, I construct a string that states what you see in the picture above and use Twitterizer to post the tweet. I can't believe how easy this was to do.
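Here's a stripped-down sketch of that last step. PostTweet is a placeholder I'm using in place of the real Twitterizer call, and the statistics are made-up example numbers - the point is just the string construction and respecting the 140 character limit.

using System;
using System.Globalization;

class DailyAccuracyTweetSketch
{
    // Placeholder for the Twitter library call (Twitterizer in my case).
    // Hypothetical signature - not the library's real API.
    static void PostTweet(string message)
    {
        Console.WriteLine("Would tweet: " + message);
    }

    static void Main()
    {
        // Global statistics computed earlier from the 5 degree grid, 60 second run.
        // Example numbers only - not real results.
        DateTime day = DateTime.UtcNow.Date.AddDays(-1);
        double maxErrorMeters = 7.3;
        double meanErrorMeters = 2.1;
        double minErrorMeters = 0.9;

        string message = string.Format(CultureInfo.InvariantCulture,
            "GPS accuracy for {0:yyyy-MM-dd}: max {1:F1} m, mean {2:F1} m, min {3:F1} m",
            day, maxErrorMeters, meanErrorMeters, minErrorMeters);

        // Twitter limits a tweet to 140 characters - trim if we ever exceed it.
        if (message.Length > 140)
        {
            message = message.Substring(0, 140);
        }

        PostTweet(message);
    }
}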

On the machine I use to calculate the global accuracy, I used Windows Scheduler to set up the run every day at midnight. When it completes, the code sends me an e-mail that it finished and updates the GPSToday status with the message above. Also, if there were any satellite outages, a tweet with that info is posted as well.

Computing global accuracy is easy using the Spatial Library component. A peek at the documentation here, then heading to the Programmer's Guide, Overview, Coverage section, shows lots of examples of how to compute coverage. Down towards the bottom are some navigation examples as well. The coverage algorithm first calculates access over the grid (not at specific times, but based on the assets and constraints you assigned to the grid). Once access is calculated, you can evaluate a Figure of Merit (FOM), such as Navigation Accuracy, on that calculated access at given time steps. Also built in are statistical functions that allow statistical calculations over the entire grid and time, or just across time at a specified grid point. Nice.

The best part of all this is that the access and FOM calculations are multi-threaded and core aware - the library will take advantage of all the cores on your machine simply by setting the following:

// m_CoverageDefinition is the coverage definition built earlier (grid, assets and constraints)
CoverageDefinitionOnCentralBody m_CoverageDefinition;

// Tell the library to use every core on the machine when computing coverage
m_CoverageDefinition.MultithreadCoverage = true;

So, with the components, a little time and the help of a couple of tools, getting a new requirement coded and out the door happened in very little time.

If you don't have a Twitter account, consider getting one, if only to follow how well GPS is doing every day. Follow this Twitter user: GPSToday. Oh, you can find me at TedDriver too.

Happy tweeting!


Navigation Error Predictions – Part 2

February 03rd, 2009 | Category: Navigation Accuracy,Navigation Accuracy Library

In the last Nog, we left off trying to figure out how to predict GPS behavior from the data I showed you. Our GPS error prediction problem involves predicting the Signal-In-Space User Range Error (SISURE), to the extent possible. From this picture, we came to the conclusion that trying to fit some type of periodic function to this data was going to be difficult. So, where do we go from here? In situations like these, I'll always recommend that more data analysis can help, and this case is a perfect example. The picture linked above shows only one day's worth of SISURE values - the next question we should ask ourselves is: is there a long-term behavior to this data? Let's find out.

GPS Satellite Error Trends

I gathered over 800 days of SISURE data, and looked at the maximum clock error, ephemeris error and the combined user range error for that period. The following plots show what the maximum errors look like. To keep the plots readable, I've only plotted two satellites' worth of data in each.

[Figure: Maximum Clock Error By Day]

[Figure: Maximum Ephemeris Error By Day]

[Figure: Maximum URE Error By Day]

These plots show something good. The errors in both clock and ephemeris (and hence the SISURE) are clamped. This means that they do not grow past a certain value - a value we can estimate and use to our advantage. Even when the errors oscillate over the day, we can say that on average, the errors will not go above some value. This clamping behavior is not a result of GPS system mathematics or design stability. It's the direct result of active participation and monitoring by the 2nd Space Operations Squadron (2SOPs) - the Air Force squadron that runs the GPS Control Segment.

This may be old information to some, but I want to be clear on why these errors do not grow. The GPS system continually broadcasts its position and clock state information to users worldwide. The information the satellites broadcast was predicted by 2SOPs and uploaded to the satellite roughly 24 hours earlier. When this predicted data is sent to a GPS satellite by 2SOPs it's called a nav upload. Nav uploads only occur when they are necessary - that is, when a satellite's predicted position differs from its actual position (for the ringers in the audience - that's the Kalman filter's estimated position). So the maximum error a satellite will broadcast is determined by 2SOPs - they do the clamping. Without this clamping, we would see errors that increase roughly quadratically over time. Thanks 2SOPs!

Using the Clamped Errors

Looking at the above graphs, we can see that using an average of the errors will give us a good number to use in our predictions.  There are long term trending issues, especially with PRN 1's ephemeris error in this case, so we'll have to take our averages over shorter periods.  These average values will help us predict our GPS accuracy statistically, over longer periods of time.  Obviously, we can't use these numbers to predict the short term behavior of the SISUREs, but we can identify how each satellite performs and get statistical estimates of GPS accuracy for longer periods.  This is exactly how the Prediction Support Files (PSF) are used.  If you've used AGI's Navigation ToolKit or the AGI Navigation Accuracy Library Component at all, you'll be familiar with PSF files.  A PSF file contains the root mean square values (RMS) of the ephemeris components and the clock for each satellite over the last seven days.  A graph of this data is available here: http://adn.agi.com/GNSSWeb/PAFPSFViewer.aspx (second graph on the page).  Here's the graph from today:

[Figure: PSF graph for today]

You can see that some satellites perform much better than others, and it's this type of differentiation we want to take into account when predicting GPS accuracy.

Predicting Long Term GPS Accuracy

Warning: Statistics Ahead

Using this PSF data, we can predict GPS accuracy. We cannot predict specific errors in a given direction (East, North, etc.), but we can predict a statistical GPS error for any location, given a confidence level we want to use. Recall the Assessed Navigation Accuracy Nogs from several months ago. In those, I outlined how to generate GPS errors from a previous time using PAF data. Using that same method, we can use PSF data to generate future GPS errors - but only the RMS value of the error, not the actual error.

The RMS values produced by the GPS navigation accuracy algorithms have probability distributions associated with them, depending on what type of prediction we are using. One-dimensional predictions, like east error, vertical error or time error, have the standard one-dimensional Gaussian probability distribution. This means that the RMS prediction of these values carries a 68% likelihood - 1 sigma. Multi-dimensional statistics are required for predicted values of horizontal error and position error. For the two-dimensional horizontal error, the predicted RMS value has a 39.4% likelihood at 1 sigma; three-dimensional position errors have a 19.9% likelihood at 1 sigma. These 1 sigma likelihoods are not the same across dimensions, making comparisons difficult.

The predicted values can all be scaled to a specific confidence level using scaling factors derived from past GPS error data. For example, to compare the East, Vertical and Position errors, we would use different scale factors to convert the predicted RMS values for each of those metrics to a 95% confidence level. Theoretical scale factors are listed on the internet, but the theoretical values don't accurately model the behavior of GPS. The AGI Component Navigation Accuracy Library provides a scaling interface using scaling factors derived from empirical data, more accurately representing the GPS constellation behavior.
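To make the scaling concrete, here's the arithmetic in C# using the 3-D position multipliers from the tables below. The class and method names are mine, not the library's interface, and the example RMS value is invented:

using System;

class ConfidenceScalingSketch
{
    // Empirical 3-D position multipliers, taken from the tables below.
    const double PositionMultiplier50 = 0.7551;
    const double PositionMultiplier95 = 1.8433;

    // Scale a predicted RMS position error (on its own, roughly a 19.9%
    // likelihood value) to a chosen confidence level.
    static double ScalePositionError(double rmsPositionErrorMeters, double multiplier)
    {
        return rmsPositionErrorMeters * multiplier;
    }

    static void Main()
    {
        double predictedRms = 3.0; // example RMS position error prediction, in meters

        Console.WriteLine("50% confidence: " +
            ScalePositionError(predictedRms, PositionMultiplier50).ToString("F2") + " m");
        Console.WriteLine("95% confidence: " +
            ScalePositionError(predictedRms, PositionMultiplier95).ToString("F2") + " m");
    }
}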

The graph below shows the empirically derived scale multipliers, using over 600 days' worth of data.

[Figure: confidence interval multiplier analysis]

The tables below show the actual scale factors to use for the different metrics, with their associated errors.

50% Confidence Level multipliers

Dimensions       Empirical Value / Standard Deviation   Theoretical Value
1 - Vertical     0.6323 / 0.0223                        0.6745
1 - Time         0.6084 / 0.0220                        0.6745
2 - Horizontal   0.7824 / 0.0236                        0.8326
3 - Position     0.7551 / 0.0236                        0.8880

95% Confidence Level multipliers

Dimensions       Empirical Value / Standard Deviation   Theoretical Value
1 - Vertical     2.0096 / 0.0316                        1.960
1 - Time         2.0230 / 0.0281                        1.960
2 - Horizontal   1.8109 / 0.0431                        1.731
3 - Position     1.8433 / 0.0380                        1.614

So, using this scale data and the PSF data, what do my predictions look like? The graph below has the actual error in red. The 95% confidence predicted GPS accuracy is in blue and the 50% confidence predicted GPS accuracy is in green. Notice that roughly only 5% of the actual errors are above the blue line, and roughly 50% of the actual errors are above the green line. Notice also that the shapes of the 50% line and the 95% line are identical. This is because they are the same prediction - just scaled differently.

[Figure: actual and predicted position errors]

There's one more thing you should be aware of when predicting navigation accuracy: the confidence levels you pick won't always be adhered to. Because of the day-to-day variability of the GPS system, the multiplier values are not constant for a given confidence level. This is evident from the Confidence Interval Multiplier Analysis graph above. In the Actual and Predicted Position Errors graph, the true percentage of actual errors above the 95% prediction line is 6.8%, not 5%. This makes me wonder: how long can I use a PSF file to predict my GPS accuracy before the PSF data, or the multipliers, become too old to use?

How long can a PSF file be used?

To see if I could find out, I plotted the excursions (the percent of actual GPS errors greater than the predicted GPS errors) for 155 days, using the same PSF file.  The PSF is brand new for day 1, but as we head towards day 155, the PSF file becomes increasingly older.  If there is any correlation between older PSF data and GPS accuracy prediction, we'll be able to see it.

[Figure: 95% confidence excursions vs. PSF file age]

The graph says it all - there is no difference in the number of excursions based on PSF age. If there were, we'd see an increasing trend from left to right, meaning more actual errors were breaking the 95% confidence threshold. This implies that a PSF file is good to use for longer periods of time, but in using one, you must expect that sometimes the GPS errors will be worse than you expect.

If you've made it this far, congratulations!  The topic is not an easy one and you have to be a die-hard stats fan to keep at it.  Enjoy your Nog and tell everyone at your next party that you know GPS prediction excursions aren't constant, but can they tell you why?

Next time, I'll cover the art of short-term GPS error prediction.  We'll move away from stats for awhile, but we may ask Taylor for a little help...

Until then, smooth sailing.


Predictions for the New Year

January 14th, 2009 | Category: Navigation Accuracy

It's the beginning of the year and that's when the predictions typically come out. Well, in our case, they're not about movie-star miseries or political hot topics, but navigation error predictions - what else!? In three previous Nogs, I outlined what assessed navigation accuracy was and how it was determined; now I want to focus on how to predict navigation accuracy.

Navigation error prediction is an on-going science that so far has produced mixed results. The algorithms are generally the same for GPS accuracy prediction as they are for GPS accuracy assessment as outlined in the previous posts.  If you have not read those Nogs linked above yet, I highly recommend it. I should bound your expectations early, so you don't think that we can predict GPS errors a year in advance - typical prediction timelines for GPS errors work similarly to prediction timelines for the stock market and other volatile processes - the longer you want to predict for, the larger the error.  For purposes of our discussion, we'll be looking at days and possibly weeks of GPS error prediction - nothing longer.

Since GPS navigation accuracy is a function of both dilution of precision (DOP) and each satellite's individual user range error (URE), we need to look at how well each can be predicted.  DOP predictions were covered at length in this article I wrote for the December 2008 issue of InsideGNSS.  It turns out that DOP can be predicted quite well for weeks into the future, especially if there are no intervening satellite outages.  That being the case, I'll focus this Nog more towards predicting the URE portion of the navigation error.

Satellite User Range Errors

Each GPS satellite broadcasts its position in space, along with other pertinent information, to your GPS receiver. That position, combined with the timing information embedded in the signal itself, allows your receiver to calculate the distance to the satellite. The error in that distance calculation is called the user range error and stems from inaccuracies in the satellite position and clock information, as well as errors introduced by ionospheric and tropospheric refraction. The satellite position and clock errors combine to create what's termed the Signal-In-Space URE (SISURE), while adding in the atmospheric errors and the other ranging errors the receiver itself introduces creates the commonly named User Equipment Range Error (UERE). In this post, I'll focus on the SISUREs, leaving the full URE for a further topic of investigation.

I've broken the navigation error prediction problem into constituent pieces, and am now focusing on a small (but feisty!) piece of the puzzle.  Implicit in this process is the fact that I can recreate the full navigation error once I determine the predicted error in the URE.  What the heck, let's assume that for now!

Because we're determining a range to the satellite, any errors we want to predict must lie along the line between the receiver and the satellite. By definition, timing (clock) errors are always along this line, but only the radial portion of the satellite position error lies on it. Typically, the SISURE is determined by differencing the satellite's radial position error and its clock error:

SISURE = Radial error - Clock error
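As a quick worked example (numbers invented for illustration): if the radial position error is 1.5 meters and the clock error is 0.5 meters at some epoch, the SISURE is 1.5 - 0.5 = 1.0 meter. When the two errors share a sign they partially cancel, so the SISURE can be smaller than either error on its own.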

What does the SISURE look like?  The best way I know of to start predicting something is to look at it and see what patterns can be seen and possibly repeated.  Here's a plot of the SISURE values for the entire GPS constellation of satellites on January 11th (generated in about 5 minutes with NavTK):

[Figure: SISURE values for the entire GPS constellation, January 11]

Ok, one look at this picture shows us that trying to fit some kind of periodic function to predict the behavior is not going to be easy.  So what options are available?  How could we use this data to predict future GPS behavior?

I'll let you mull over that while you sip your Nog and continue with my thoughts on the subject in the next installment.  Until then, good travels.


How long can you use an almanac? (Part two)

December 05th, 2008 | Category: Navigation Accuracy


As the saying goes, you don't need to beat a dead horse.  My covering almanacs again in The Nog seems like a long dead horse, but I'll make this one final entry - I promise.  I recently wrote a Nog entitled: How long can you use an almanac?  There, I outlined the timelines for almanac longevity and some general guidelines too.  Well, as luck would have it, the GNSS trade magazine InsideGNSS  liked that Nog and asked if I could do a little more research and publish the results in their GNSS Solutions column.  The column is finished and can be found here: http://www.insidegnss.com/node/923, as well as in the November/December issue of the magazine.

In this Nog, I'll summarize the results of the analysis, since it differs a bit from the analysis in my previous post.  The following sections are covered:

Mission Planning

Almanacs are used in mission planning to predict dilution of precision (DOP) — a key component in navigation accuracy. DOP is not the only element of navigation accuracy; the other is the measurement accuracy, but DOP is a key indicator of mission success.

Receiver Operations

A critical receiver operation is signal acquisition, where the receiver scans both frequency and code phase to lock onto a GPS signal. When scanning the frequency, the amount to scan is determined by the predicted Doppler shift of the desired signal.
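To put a rough number on that scan window (my back-of-the-envelope figure, not from the column): the line-of-sight velocity of a GPS satellite relative to a stationary user stays under roughly 800 m/s, so the L1 Doppler shift stays within about

\[ f_{D,\max} \approx \frac{v_{LOS}}{c}\, f_{L1} \approx \frac{800\ \text{m/s}}{3\times 10^{8}\ \text{m/s}} \times 1575.42\ \text{MHz} \approx 4\ \text{kHz} \]

and the better the almanac predicts the satellite's velocity, the narrower the frequency bins the receiver has to search to acquire the signal.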

Satellite Maintenance

From the foregoing discussion, one may be tempted to use the almanac for long periods of time based on these results. However, the three PRNs that we examined did not undergo any maintenance during the 22-week period shown.

In my previous post, I covered the Mission Planning piece, but not Receiver Operations.  The reason behind the longevity numbers for an almanac, satellite maintenance, was not discussed previously either.  It turns out that required satellite maneuvers are the biggest reason that almanac usage time isn't longer.  (Not really surprising - right?)

Read more

