Archive for the 'Navigation Accuracy Library' Category

Number of GPS satellites – does it matter?

August 07th, 2010 | Category: Navigation Accuracy,Navigation Accuracy Library

In a previous Nog I wrote, an interesting result showed up.  The Second Space Operations Squadron (2SOPs) is currently in the midst of rephasing several GPS satellites to optimize the coverage the entire constellation provides.  I analyzed the coverage before and after the optimization and showed plots of the coverage in both instances.  In one of the plots, the sheer number of GPS satellites available to your GPS receiver goes down AFTER the optimization.  This is a little non-intuitive, especially for those of us who have been in the GPS business awhile.  We tend to equate Dilution of Precision (DOP), the value driven by the GPS satellites' geometry relative to your receiver, with navigation accuracy.  This is somewhat true, but not always.

I decided to see how I could use AGI’s navigation library to prove the point - so I wrote a small application that runs over a single day, at 60 second intervals.  You can choose to calculate over one site or more, randomly picked around the globe.  So, over one site, I’ll get 1440 points of data.  The sites use a 5 degree mask angle above the horizon and the tool uses a SEM almanac and PAF file for July 1, 2010.
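Before we get to the graphs, here's a minimal sketch of what that calculation loop looks like.  The SemAlmanac calls are the same ones I show in the almanac article further down this page; the namespace names, the almanac filename and the EvaluateSite helper are placeholders for what the real tool does (including reading the PAF), so treat this as an outline rather than the actual source.

using System;
using System.IO;
// Namespace names are assumptions - check the AGI Components documentation for the exact ones.
using AGI.Foundation.Navigation;
using AGI.Foundation.Platforms;

class NavAccuracySketch
{
    static void Main()
    {
        // Build the GPS constellation from the July 1, 2010 SEM almanac.
        PlatformCollection gpsSvs;
        using (StreamReader almReader = new StreamReader(@"C:\Sem_2010_07_01.al3"))
        {
            SemAlmanac almanac = SemAlmanac.ReadFrom(almReader, 1);
            gpsSvs = almanac.CreateSatelliteCollection();
        }

        // Pick sites at random around the globe.
        int numberOfSites = 30;
        Random random = new Random();
        double[] latitudes = new double[numberOfSites];
        double[] longitudes = new double[numberOfSites];
        for (int i = 0; i < numberOfSites; i++)
        {
            latitudes[i] = random.NextDouble() * 180.0 - 90.0;
            longitudes[i] = random.NextDouble() * 360.0 - 180.0;
        }

        // One day at 60-second steps (1440 samples per site), 5-degree mask angle.
        DateTime start = new DateTime(2010, 7, 1);
        for (int step = 0; step < 1440; step++)
        {
            DateTime time = start.AddSeconds(step * 60.0);
            for (int i = 0; i < numberOfSites; i++)
            {
                // Placeholder for the real work: count the SVs above the mask,
                // compute PDOP and the assessed navigation error from the PAF.
                // EvaluateSite(gpsSvs, latitudes[i], longitudes[i], time, 5.0);
            }
        }
    }
}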

I want to show graphs of the following:

  • The PDOP value against the number of GPS satellites
  • The position navigation accuracy against the number of GPS satellites
  • The position navigation accuracy against the PDOP value
  • Oh, and a histogram of the number of available GPS satellites

So this tool serves two purposes.  It lets you play with the generated data, using as many sites as you like, to determine how much or how little the number of available GPS satellites affects your navigation error.  It also shows you how to create a simple program with our AGI components (and how easy they are to program with!).  The components are free for development and personal use and can be downloaded here: http://adn.agi.com/detailedView.cfm?resourceId=240.

The Gizmo

So let’s look at the tool I created.  I built it using the C# .Net language (my favorite).  It’s a standard Windows form application, built using MS Visual Studio 2008.  If you want to build and run this tool yourself (HIGHLY recommended) you’ll need some things.  See the Appendix at the end of this Nog for details.  The main tool looks like this:

As the well-spelled-out instructions state, you just pick the number of sites you want the tool to use for the analysis, then press the calculate button.  Once the calculations are finished, you can select any of the four buttons to plot the results.

Let's look at some typical results.  I'm going to pick 30 sites to use - that gives a better average than a single site would.

Number of GPS SVs v. PDOP

Here's the plot of the Position DOP against the number of GPS satellites available for the solution.

As you may have expected, the PDOP value does decrease as we get more satellites visible above the mask angle – there is a clear decreasing trend in the data.

Number of GPS SVs v. Navigation Error

Let’s see how the navigation error looks against the number of GPS SVs.

There is no clear trend here: we get roughly the same spread of errors with 13 satellites in view as we do with 8, though there is a slight decreasing trend beyond 13 satellites.  Build the tool yourself and play with the number of sites to see whether this is an artifact of the random sites used for my run, or whether these results are repeatable.

PDOP v. Navigation Error

So it doesn’t look like the number of satellites affects my navigation error – but does PDOP affect my navigation error?  Mathematically, we know it does:

Δx = (GᵀG)⁻¹ Gᵀ Δρ

Here Δx is the positioning error vector, G is the geometry matrix and Δρ is the vector of corrected pseudorange errors.  The relationship is linear, though in a matrix framework.  Let's see how this looks graphically:

Not as linear as you might see in a textbook example.  In fact, some areas of relatively high PDOP (4-5) have very low navigation error, meaning the pseudorange errors are very small there.  Conversely, some low PDOP data points have a comparatively high navigation error, meaning the pseudorange errors are large at those points.
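If you want to poke at the relationship yourself, PDOP comes from the same geometry matrix as the error equation above: form GᵀG, invert it, and take the root sum of the three position terms on its diagonal.  Here's a minimal sketch in plain C# - no AGI types - assuming each row of G is a unit line-of-sight vector to one satellite followed by a 1 for the clock column, and that at least four satellites are in view:

using System;

static class DopSketch
{
    // PDOP from a geometry matrix G (n x 4): each row is the unit line-of-sight
    // vector from the receiver to one satellite, followed by 1.0 for the clock term.
    public static double Pdop(double[,] g)
    {
        int n = g.GetLength(0);

        // Form the 4x4 normal matrix G'G.
        double[,] gtg = new double[4, 4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < n; k++)
                    gtg[r, c] += g[k, r] * g[k, c];

        // Invert G'G - the result is the DOP matrix.
        double[,] dop = Invert4x4(gtg);

        // PDOP is the root sum of the three position terms on the diagonal.
        return Math.Sqrt(dop[0, 0] + dop[1, 1] + dop[2, 2]);
    }

    // Gauss-Jordan inversion; fine without pivoting because G'G is positive definite
    // whenever four or more satellites with distinct geometry are in view.
    static double[,] Invert4x4(double[,] a)
    {
        double[,] m = (double[,])a.Clone();
        double[,] inv = new double[4, 4];
        for (int i = 0; i < 4; i++) inv[i, i] = 1.0;

        for (int col = 0; col < 4; col++)
        {
            double pivot = m[col, col];
            for (int c = 0; c < 4; c++) { m[col, c] /= pivot; inv[col, c] /= pivot; }

            for (int row = 0; row < 4; row++)
            {
                if (row == col) continue;
                double factor = m[row, col];
                for (int c = 0; c < 4; c++)
                {
                    m[row, c] -= factor * m[col, c];
                    inv[row, c] -= factor * inv[col, c];
                }
            }
        }
        return inv;
    }
}

Add more well-spread satellites and the diagonal of (GᵀG)⁻¹ shrinks - that's the PDOP trend in the first plot - but nothing in the matrix says anything about the size of the pseudorange errors themselves, which is why the error plots don't follow it.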

Number of GPS SVs Histogram

Just for fun, here’s the histogram of the number of GPS SVs at all 30 locations over the entire day.

There are roughly 10-11 SVs in view on average with the current constellation, above a 5 degree mask angle.

Conclusions

Based on this single run (and the 100 or so I've already done and seen), it is evident that the more SVs available to you, the better your DOP is.  This does not mean that your accuracy will be better though, as evidenced by the other graphs.  So, not to worry if you have fewer satellites after the optimization; it doesn't really matter at the current level of performance 2SOPs provides us.

Appendix:  How to get and build the gizmo

You’ll need the following:

Once you have all of these installed and unzipped, do the following:

  1. Open the project in Visual Studio
  2. Be sure the Solution Explorer is visible (View|Solution Explorer)
  3. Expand the References area and right-click, then select Add Reference…
  4. Browse to the AGI Components install \ Assemblies folder and select the following assembly files:
    1. AGI.Foundation.Navigation.dll
    2. AGI.Foundation.Core.dll
    3. AGI.Foundation.Platforms.dll
    4. AGI.Foundation.Models.dll
  5. Once those are added, right-click on the Project name and select Add | Existing Item…
  6. Browse to the AGI Components install \ Assemblies folder again.
  7. This time, add the licenses.licx file.  You may have to use the “All Files (*.*)” filter to see it.  Be sure you are adding the .licx file and not the .lic file that is also in that directory (you should have placed the .lic file in that folder as part of the AGI Components install).  Note that the .licx file tells the compiler to compile in the .lic file and thus license your application for use.  Without this, the application will throw a license exception.
  8. Build and run the tool.  I've tested on Win 7 and Win XP.

Feel free to e-mail me with questions about running the tool, analysis results you see or any other general comments: navigation@agi.com.

Smooth sailing,

Ted

No comments

GPS Daily Accuracy on Twitter

I was a little reluctant to open a Twitter account - not because I didn't think the tech was cool, but because I wondered whether I could possibly have that much to say each day.  In such short sentences?  Well, I figured out that on a daily basis I may not have much to say, but GPS does.  I wanted to provide some useful information to GPS followers, something that could be said in a few words.

To that end, I created an account on Twitter with the user name GPSToday.  This account, I figured, could send 'tweets' to followers about GPS events, like accuracy statistics, satellite outages, etc.  But this type of information would take a lot of my time to create and update on a regular basis.  Ahhh, but wait - the AGI Navigation component can be coded in any way, shape or form.  I could use it to create a program that did what I needed and produced the results automatically.

The first application: GPS Accuracy Stats over the globe each day.  Whether you're aware or not, GPS accuracy varies each day - due to satellite outages, GPS signal quality and many other factors.  Getting a quick glance of GPS accuracy and status on Twitter can keep you informed with no work on your part.  So what's available?

Here's a picture of a sample GPSToday Daily Accuracy tweet:

GPSTodayTweet

I use the AGI Navigation Accuracy Library, Dynamic Geometry Library and the Spatial Analysis Library to calculate the global position error, at 5-degree grid increments and 60-second time steps.  I then find the Maximum, Mean and Minimum statistics over the globe for the day.  Once I have this information, I construct a string that states what you see in the picture above and use Twitterizer to post the tweet.  I can't believe how easy this was to do.
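For the curious, the posting side is only a few lines.  Here's a sketch, not the production code: the statistics values are hard-coded stand-ins, and the OAuthTokens/TwitterStatus.Update usage is my recollection of the Twitterizer 2 API, so check it against the version you download.

using System;
using Twitterizer; // Twitterizer 2.x; class and method names here are assumptions

class GpsTodayTweetSketch
{
    static void Main()
    {
        // Global daily accuracy statistics (stand-in values; the real numbers
        // come from the coverage/figure-of-merit run described above).
        double maxErrorMeters = 12.3;
        double meanErrorMeters = 2.4;
        double minErrorMeters = 0.9;

        string tweet = string.Format(
            "GPS daily position accuracy for {0:yyyy-MM-dd}: max {1:F1} m, mean {2:F1} m, min {3:F1} m",
            DateTime.UtcNow.Date, maxErrorMeters, meanErrorMeters, minErrorMeters);

        // OAuth credentials for the GPSToday account.
        OAuthTokens tokens = new OAuthTokens
        {
            ConsumerKey = "yourConsumerKey",
            ConsumerSecret = "yourConsumerSecret",
            AccessToken = "yourAccessToken",
            AccessTokenSecret = "yourAccessTokenSecret"
        };

        // Post the status update.
        TwitterStatus.Update(tokens, tweet);
    }
}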

On the machine I use to calculate the global accuracy, I used the Windows Task Scheduler to set up the run every day at midnight.  When it completes, the code sends me an e-mail that it finished and updates the GPSToday status with the message above.  Also, if there were any satellite outages, a tweet with that info is posted as well.

Computing global accuracy is easy using the Spatial Analysis Library component.  A peek at the documentation here, then heading to the Programmer's Guide, Overview, Coverage section, shows lots of examples of how to compute coverage.  Down towards the bottom are some navigation examples as well.  The coverage algorithm first calculates access over the grid (not at specific times, but based on the assets and constraints you assigned to the grid).   Once access is calculated, you can evaluate a Figure of Merit (FOM), such as Navigation Accuracy, on that calculated access at given time steps.  Also built in are statistical functions that allow calculations over the entire grid and time, or just across time at a specified grid point.  Nice.

The best part of all this is that the access and FOM calculations are multi-threaded and core aware - the library will take advantage of all the cores on your machine simply by setting the following:

// m_CoverageDefinition is the coverage definition for your grid, created elsewhere in the program.
CoverageDefinitionOnCentralBody m_CoverageDefinition;

// Let the library spread the access and FOM calculations across all available cores.
m_CoverageDefinition.MultithreadCoverage = true;

So, with the components and the help of a couple of tools, getting a new requirement coded and out the door took very little time.

If you don't have a Twitter account, consider getting one, if only to follow how well GPS is doing every day.  Follow this Twitter user: GPSToday.  Oh, you can find me at TedDriver too.

Happy tweeting!

5 comments

Navigation Error Predictions – Part 2

February 03rd, 2009 | Category: Navigation Accuracy,Navigation Accuracy Library

In the last Nog, we left off trying to figure out how to predict GPS behavior from the data I showed you.  Our GPS error prediction problem involves predicting the Signal-In-Space User Range Error (SISURE), to the extent possible.  From this picture, we came to the conclusion that trying to fit some type of periodic function to this data was going to be difficult.  So, where do we go from here?  In situations like these, I'll always recommend more data analysis, and this case is a perfect example.  The picture linked above shows only one day's worth of SISURE values - the next question we should ask ourselves is: is there a long-term behavior to this data?  Let's find out.

GPS Satellite Error Trends

I gathered over 800 days of SISURE data, and looked at the maximum clock error, ephemeris error and the combined user range error for that period.  The following plots show what the maximum errors look like.  To keep the plots readable, I've only plotted two satellites' worth of data in each.

Maximum Clock Error By Day

 

Maximum Ephemeris Error By Day

Maximum URE Error By Day

These plots show something good.  The errors in both clock and ephemeris (and hence the SISURE) are clamped.  This means that they do not grow past a certain value - a value we can estimate and use to our advantage.  Even when the errors oscillate over the day, we can say that on average, the errors will not go above some value.  This clamping behavior is not a result of GPS system mathematics or design stability.  It's the direct result of active participation and monitoring by the 2nd Space Operations Squadron (2SOPs) - the Air Force squadron that runs the GPS Control Segment.

This may be old information to some, but I want to be clear on why these errors do not grow.  The GPS system continually broadcasts its position and clock state information to users worldwide.  The information the satellites broadcast was predicted by 2SOPs and uploaded to the satellites roughly 24 hours earlier.  When this predicted data is sent to a GPS satellite by 2SOPs, it's called a nav upload.  Nav uploads only occur when they are necessary - that is, when a satellite's predicted position differs from its actual position (for the ringers in the audience - that's the Kalman filter's estimated position).  So the maximum error a satellite will broadcast is determined by 2SOPs - they do the clamping.  Without this clamping, we would see errors that increase roughly quadratically over time.  Thanks 2SOPs!

Using the Clamped Errors

Looking at the above graphs, we can see that using an average of the errors will give us a good number to use in our predictions.  There are long-term trending issues, especially with PRN 1's ephemeris error in this case, so we'll have to take our averages over shorter periods.  These average values will help us predict our GPS accuracy statistically, over longer periods of time.  Obviously, we can't use these numbers to predict the short-term behavior of the SISUREs, but we can identify how each satellite performs and get statistical estimates of GPS accuracy for longer periods.  This is exactly how the Prediction Support Files (PSFs) are used.  If you've used AGI's Navigation ToolKit or the AGI Navigation Accuracy Library Component at all, you'll be familiar with PSF files.  A PSF file contains the root mean square (RMS) values of the ephemeris components and the clock for each satellite over the last seven days.  A graph of this data is available here: http://adn.agi.com/GNSSWeb/PAFPSFViewer.aspx (second graph on the page).  Here's the graph from today:

PSF Graph

You can see that some satellites perform much better than others, and it's this type of differentiation we want to take into account when predicting GPS accuracy.

Predicting Long Term GPS Accuracy

Warning: Statistics Ahead

Using this PSF data, we can predict GPS accuracy.  We cannot predict specific errors in a given direction (East, North, etc.), but we can predict a statistical GPS error for any location, given a confidence level we want to use.  Recall the Assessed Navigation Accuracy Nogs from several months ago.  In these, I outlined how to generate GPS errors from a previous time using PAF data.  Using that same method, we can use PSF data to generate future GPS errors - but only the RMS value of the error, not the actual error.

The RMS values produced by the GPS navigation accuracy algorithms have probability distributions associated with them, depending on what type of prediction we are using.  One-dimensional predictions, like east error, vertical error or time error, follow the standard one-dimensional Gaussian distribution, so the RMS prediction of these values carries a 68% likelihood at 1 sigma.  Multi-dimensional statistics are required for predicted values of horizontal error and position error: the predicted RMS value of the two-dimensional horizontal error carries a 39.4% likelihood at 1 sigma, and the three-dimensional position error only a 19.9% likelihood at 1 sigma.  Because these 1-sigma likelihoods differ across dimensions, direct comparisons are difficult.  The predicted values can all be scaled to a specific confidence level using scaling factors derived from past GPS error data.  For example, to compare the East, Vertical and Position errors, we would use different scale factors to convert the predicted RMS values for each of those metrics to a 95% confidence level.  Theoretical scale factors are listed on the internet, but the theoretical values don't accurately model the behavior of GPS.  The AGI Components Navigation Accuracy Library provides a scaling interface using scaling factors derived from empirical data, more accurately representing the GPS constellation's behavior.

The graph below shows the empirically derived scale multipliers, using over 600 days' worth of data.

Confidence Multipliers

The tables below show the actual scale factors to use for the different metrics, with their associated standard deviations.

50% Confidence Level multipliers

Dimensions        Empirical Value / Standard Deviation    Theoretical Value
1 - Vertical      0.6323 / 0.0223                         0.6745
1 - Time          0.6084 / 0.0220                         0.6745
2 - Horizontal    0.7824 / 0.0236                         0.8326
3 - Position      0.7551 / 0.0236                         0.8880

95% Confidence Level multipliers

Dimensions        Empirical Value / Standard Deviation    Theoretical Value
1 - Vertical      2.0096 / 0.0316                         1.960
1 - Time          2.0230 / 0.0281                         1.960
2 - Horizontal    1.8109 / 0.0431                         1.731
3 - Position      1.8433 / 0.0380                         1.614
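To make the scaling concrete, here's a small sketch using the empirical multipliers from the tables above.  The RMS values are hypothetical inputs; the multipliers are the table's 95% position and horizontal factors.

using System;

class ConfidenceScalingSketch
{
    static void Main()
    {
        // Hypothetical predicted RMS errors from a PSF-based run (meters).
        double predictedPositionRms = 3.2;   // 3-D position, ~19.9% likelihood as-is
        double predictedHorizontalRms = 2.1; // 2-D horizontal, ~39.4% likelihood as-is

        // Empirical 95% confidence multipliers from the table above.
        const double Position95Multiplier = 1.8433;
        const double Horizontal95Multiplier = 1.8109;

        double position95 = predictedPositionRms * Position95Multiplier;       // ~5.9 m
        double horizontal95 = predictedHorizontalRms * Horizontal95Multiplier; // ~3.8 m

        Console.WriteLine("95% position error bound:   {0:F1} m", position95);
        Console.WriteLine("95% horizontal error bound: {0:F1} m", horizontal95);
    }
}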

So, using this scale data and the PSF data, what do my predictions look like?  The graph below has the actual error in red.  The 95% confidence predicted GPS accuracy is in blue and the 50% confidence predicted GPS accuracy is in green.  Notice that only roughly 5% of the actual errors are above the blue line, and roughly 50% of the actual errors are above the green line.  Notice also that the shapes of the 50% line and the 95% line are identical.  This is because they are the same prediction - just scaled differently.

ErrorPrediction 

There's one more thing you should be aware of when predicting navigation accuracy: the confidence levels you pick won't always be adhered to.  Because of the day-to-day variability of the GPS system, the multiplier values are not constant for a given confidence level.  This is evident from the Confidence Interval Multiplier Analysis graph above.  In the Actual and Predicted Position Errors graph, the true percentage of actual errors above the 95% prediction line is 6.8%, not 5%.  This makes me wonder: how long can I use a PSF file to predict my GPS accuracy before the PSF data, or the multipliers, become too old to use?

How long can a PSF file be used?

To find out, I plotted the excursions (the percent of actual GPS errors greater than the predicted GPS errors) for 155 days, using the same PSF file.  The PSF is brand new for day 1, but as we head toward day 155, the PSF file becomes increasingly stale.  If there is any correlation between older PSF data and GPS accuracy prediction quality, we'll be able to see it.

95PercentConfidenceExcursions

The graph says it all - there is no difference in the number of excursions based on PSF age.  If there were, we'd see an increasing trend from left to right, meaning more actual errors were breaking the 95% confidence threshold.  This implies that a PSF file is good to use for longer periods of time, but in using one, you must expect that sometimes the GPS errors will be worse than you expected.
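By the way, counting excursions is simple once you have the two series.  Here's a sketch, with hypothetical arrays holding one value per time step for a given day:

// Percent of actual errors that exceed the predicted 95% confidence bound.
// Both arrays are hypothetical inputs - one value per time step for the day.
static double ExcursionPercent(double[] actualErrors, double[] predicted95Bounds)
{
    int exceeded = 0;
    for (int i = 0; i < actualErrors.Length; i++)
    {
        if (actualErrors[i] > predicted95Bounds[i])
        {
            exceeded++;
        }
    }
    return 100.0 * exceeded / actualErrors.Length;
}

A well-behaved prediction hovers near 5%; the 6.8% day mentioned above is just the day-to-day variability at work.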

If you've made it this far, congratulations!  The topic is not an easy one and you have to be a die-hard stats fan to keep at it.  Enjoy your Nog and tell everyone at your next party that you know GPS prediction excursions aren't constant, but can they tell you why?

Next time, I'll cover the art of short-term GPS error prediction.  We'll move away from stats for a while, but we may ask Taylor for a little help...

Until then, smooth sailing.

No comments

What’s the difference between SEM and YUMA almanacs?

November 11th, 2008 | Category: Navigation Accuracy,Navigation Accuracy Library


You need a GPS almanac, but there are two types generally available: SEM and YUMA.  Which one should you choose?  In this article, I'll outline their similarities and differences and explain the data contained in them.  Both almanacs contain ephemeris data that you can propagate to give a rough position of each GPS satellite, and the orbits generated are very close to one another.  For example, in the picture below, the two orbits for PRN 5 differ by at most 12.1 meters over a 24-hour period on the first of July of this year.

PRN 5 SEM YUMA distance

Both SEM and YUMA almanacs contain orbital elements - data that is used to propagate orbits.  The orbital elements are defined in the Interface Standard document IS-GPS-200D, on page 107.  A more detailed explanation of the elements is listed on T.S. Kelso's Celestrak site for both SEM and YUMA almanacs.  There are a couple of differences in these two almanacs.  I'll highlight those next.

An excerpt of the current YUMA almanac below shows the parameters used for propagation, along with other information:

******** Week 481 almanac for PRN-02 ********
ID:                         02
Health:                     000
Eccentricity:               0.8813381195E-002
Time of Applicability(s):  405504.0000
Orbital Inclination(rad):   0.9428253174
Rate of Right Ascen(r/s):  -0.7723429007E-008
SQRT(A)  (m 1/2):           5153.531250
Right Ascen at Week(rad):   0.3100656748E+001
Argument of Perigee(rad):   2.673376799
Mean Anom(rad):             0.2047160983E+001
Af0(s):                     0.1735687256E-003
Af1(s/s):                  -0.3637978807E-011
week:                        481

******** Week 481 almanac for PRN-03 ********
ID:                         03
Health:                     000
...

Specifically, the ID, Health, Af0 and Af1 values are not needed for orbital propagation.  The ID value specifies the Pseudo-Random Noise (PRN) number of the GPS satellite, defining which week-long segment of the 37-week P code (and which C/A Gold code) this particular satellite is designated to transmit.  The Health value is 000 if the satellite is usable - other health values are defined in Table 20-VII of IS-GPS-200D on page 109.  Anything other than zeros is generally not a good thing.  The two remaining parameters, Af0 and Af1, are clock correction parameters for the GPS satellite's on-board atomic clock.  Page 119 of IS-GPS-200D, section 20.3.3.5.2.3, tells how to construct the correct time for this satellite using these parameters.  One benefit of the Yuma almanac is that it is quite readable.
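As an aside, the clock correction those two parameters feed is just a first-order polynomial: the SV clock offset at GPS time t is Af0 + Af1 * (t - toa), where toa is the time of applicability.  A quick sketch (the week-crossover handling follows the usual convention in the interface spec):

// SV clock offset from the almanac clock terms: deltaTsv = Af0 + Af1 * (t - toa),
// with t and toa in seconds of the GPS week.
static double AlmanacClockOffsetSeconds(double af0, double af1, double t, double toa)
{
    double tk = t - toa;

    // Account for beginning/end-of-week crossovers.
    if (tk > 302400.0)
    {
        tk -= 604800.0;
    }
    else if (tk < -302400.0)
    {
        tk += 604800.0;
    }

    return af0 + af1 * tk;
}

// PRN-02, from the excerpt above: Af0 = 0.1735687256E-03 s, Af1 = -0.3637978807E-11 s/s, toa = 405504 s.
// double offsetSeconds = AlmanacClockOffsetSeconds(1.735687256e-4, -3.637978807e-12, t, 405504.0);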

A portion of the current SEM almanac shows a much less readable format (at least for humans):

31  CURRENT.ALM
 481 405504

2
61
0
 8.81338119506836E-03  1.10626220703125E-04 -2.45927367359400E-09
 5.15353125000000E+03  9.86969709396362E-01  8.50962281227112E-01
 6.51631593704224E-01  1.73568725585938E-04 -3.63797880709171E-12
0
9

3
33
0
 1.14517211914062E-02 -5.35774230957031E-03 -2.65572452917695E-09
...

Notice, however, that there is more information here.  The SEM almanac lists how many satellites are contained in the almanac file at the top - 31 in this case.  This file type also contains the Space Vehicle Number (SVN) that is associated with the PRN.  The SVN is assigned prior to launch of the satellite and can't be changed.  The PRN a satellite transmits, on the other hand, can be changed, depending on the needs of the GPS Control Segment.  So, a SEM almanac is a good way to map which SVN is transmitting which PRN over time.  In this case, PRN 2 is being transmitted by SVN 61.  Below the value 61 is a '0'.  This is the average URA (User Range Accuracy) value as defined by IS-GPS-200D on page 84, section 20.3.3.3.1.3, and it indicates how accurate this satellite generally is: the higher the number, the less accurate the satellite's navigation signal.

The orbital elements are the same, with two exceptions.  First, each element carries more precision in the SEM file.  Second, the Orbital Inclination in the Yuma almanac is represented in the SEM almanac as an offset from the nominal 54 degrees (0.30 semicircles).  This allows the inclination to be specified with greater precision as well.
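You can check the two excerpts against each other.  The SEM file stores the offset in semicircles relative to that 0.30-semicircle (54 degree) reference, so converting the PRN-02 value back to an absolute inclination is one line:

using System;

class SemInclinationCheck
{
    static void Main()
    {
        // PRN-02 inclination offset from the SEM excerpt above, in semicircles.
        double offsetSemicircles = 1.10626220703125e-4;

        // SEM stores an offset from the nominal 0.30 semicircles (54 degrees).
        double inclinationDegrees = (0.30 + offsetSemicircles) * 180.0;
        double inclinationRadians = inclinationDegrees * Math.PI / 180.0;

        // Prints roughly 54.0199 deg (0.9428253 rad), matching the Yuma value
        // of 0.9428253174 rad for the same satellite.
        Console.WriteLine("{0:F4} deg = {1:F7} rad", inclinationDegrees, inclinationRadians);
    }
}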

The final two numbers in the first record, '0' and '9', are the satellite health and the satellite configuration, respectively.  The health value is the same as in the Yuma almanac, but the satellite configuration is new.  This value represents the anti-spoof status of the satellite as well as the type, or Block, of satellite.  Page 111, section 20.3.3.5.1.4 of IS-GPS-200D explains the configurations.

So, the SEM almanac has a bit more data in it and a little more precision for the orbital elements.  Deciding which almanac to use is more a choice of which one you prefer, or have access to.  There's something to be said for readability, but a little more precision is always good too!  One other point to note is that with the transition to the new ground system computers, Yuma almanacs are now even slightly less precise.  I commented on this here, but didn't know at the time that only Yuma almanacs were affected.  Given all this information, if I had to choose (and I always do!), I'd choose a SEM almanac - but that's just me.

One last point: when using the Navigation Accuracy Library, which almanac you choose is immaterial - the methods to create the GPS constellation are identical.  This makes comparisons between SEM, Yuma, precise ephemeris, or broadcast ephemeris trivial.

// Use a StreamReader to read the almanac from a file on disk.
using (StreamReader almReader = new StreamReader(@"C:\MyAlmanacOrEphemeris.txt"))
{
    SemAlmanac almanac = SemAlmanac.ReadFrom(almReader, 1);
    PlatformCollection gpsSvs = almanac.CreateSatelliteCollection();
}

To use a Yuma almanac, simply change the SemAlmanac statement to:

YumaAlmanac almanac = YumaAlmanac.ReadFrom(almReader, 1);

This same technique works for Precise GPS ephemeris (in SP3a or SP3c format) or broadcast ephemeris in RINEX 2.11 or lower format.

I'd like to hear from you - let me know what you would like to see an article on, and I'll see what I can come up with.

Until next time, Smooth sailing!

4 comments

ION GNSS – I’m hearing a lot about RAIM these days

September 19th, 2008 | Category: Dynamic Geometry Library,Navigation Accuracy Library,RAIM

I'm at the ION GNSS meeting here in balmy Savannah, Georgia, and I've now heard from several people regarding the RAIM issue.  Here it is.

Pilots flying with GPS as their main navigation system, or as a secondary system on selected routes, must submit a RAIM (Receiver Autonomous Integrity Monitoring) report for their destination before flying.  This report defines whether GPS will meet some fairly stringent requirements for satellite and signal availability.  The Federal Aviation Administration (FAA) is mandating this requirement.  The pilots, concerned with how they would meet it, asked the FAA: "How do we meet this requirement?"  The FAA responded by tasking the VOLPE center to develop a RAIM assessment tool.  This tool technically calculates RAIM but, it turns out, is not very useful to any pilot.

One pilot I talked with at the ION GNSS show said that it was too difficult to plan a route with the VOLPE tool.  Look at the tool and you'll see why.  Well, it's a good thing he was talking to us!  AGI has long been in the business of calculating access and coverage along routes - for virtually any kind of metric you can think of (no jokes here, Kevin).  Here's an example: the blue line represents a route in a mountainous area (Breckenridge, I think) where there's a threat (the large, colorful communications jammer).  The white potato-chip-shaped surface is the azimuth-elevation mask for the jammer.  Any place below the potato chip, the jammer is unable to see; above the chip - watch out.

Route1

This route can be edited in the 3-D window to reroute around threats (or, say, RAIM outages?), and reports can be run easily to determine RAIM outage times, etc.

I bring this point up because AGI has the technology to solve this pilot's problem today.  Using our STK Engine and our components, an application can be built that makes the RAIM reporting requirement a breeze, with the added benefit of giving the pilot some additional valuable information!  Here's how.

Our Navigation Accuracy Library calculates RAIM (Fault Detection) and can report three types of RAIM outages (En Route, Terminal and Non-Precision Approach) for any location and time.  Our Dynamic Geometry Library comes with a waypoint propagator, useful for propagating routes of aircraft, land vehicles, whatever.  Our STK Engine provides 3-D graphics capabilities that can display routes (colored if you like) as well as coverage (a RAIM outage contour map, for example).  Put these together in a single application and add some features like the following:

  • Upload your own route file
  • Database of predefined routes the Airline flies
  • Recalculate the 'wheels up' time based on RAIM outages
  • Create a handheld mobile application the pilot can use to initially assess RAIM for standard flight routes - from anywhere prior to flight.

There are many different versions of this scheme that could be put together depending on how the particular pilot or airline wanted to use the system.

There's another sticking point I haven't touched on yet.  The VOLPE center calculates RAIM in a specific way, using predefined vertical and horizontal alert thresholds.  I say predefined because there is nowhere in their tool to change these.  There is no standard RAIM algorithm to use when calculating RAIM outages.  There are accepted algorithms in the literature, but nothing mandated by an interface control document (ICD).  Each GPS receiver manufacturer is free to calculate RAIM any way they choose.  AGI uses the accepted algorithms in the literature, but we allow the user/developer to define any alert threshold they like.  This provides more flexibility in calculating different RAIM outage thresholds.  AGI components are also flexible enough to incorporate receiver manufacturer algorithms - with or without source code.
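To make "alert threshold" concrete, here's a tiny sketch of the availability check that sits on top of whichever RAIM algorithm you use.  The protection-level computation is the algorithm-specific part I'm not showing; the alert limits are the standard phase-of-flight values of 2.0, 1.0 and 0.3 nautical miles.

static class RaimCheckSketch
{
    // Standard horizontal alert limits in meters (2.0, 1.0 and 0.3 nautical miles
    // for en route, terminal and non-precision approach, respectively).
    public const double EnRouteAlertLimitMeters = 2.0 * 1852.0;
    public const double TerminalAlertLimitMeters = 1.0 * 1852.0;
    public const double NonPrecisionApproachAlertLimitMeters = 0.3 * 1852.0;

    // RAIM is available for a phase of flight when the computed horizontal
    // protection level (HPL) stays below that phase's alert limit.  How HPL is
    // computed is the algorithm- and receiver-specific part; this check is not.
    public static bool RaimAvailable(double hplMeters, double alertLimitMeters)
    {
        return hplMeters < alertLimitMeters;
    }
}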

So, there is a great opportunity here for pilots to have a better system and a way to grow when future GPS availability requirements are enacted.  For more information on our technologies in this area, please contact me.

3 comments

GPS Tool Collection


Recently, I posted a Nog on some utilities and tools I created to help with the day-to-day engineering tasks faced when working with GPS.  Well, I've added to the mix (with help!) and put the collection in a single, easy-to-access place.  The tools can be accessed from the Web Apps/Services tab on the AGI Developer Network: http://adn.agi.com/webServices/.

Here's a list of the GPS related tools there.  Some are repeats from the previous Nog, but I wanted to give you a complete list:

The GPS Date Calendar displays a standard calendar with dates specific to the GPS community. Dates included are: GPS Week (current and full), day of week, day of year and time of week.

This web application calculates GPS Navigation Accuracy for the specified time interval and location of the GPS receiver. Specifically, it calculates Total System Error and Signal In Space Error and generates the appropriate reports and graphs. This application has been built using the Navigation Accuracy Library, which is part of AGI Components.

This Calendar displays historical, current and predicted GPS satellite outages based on the latest satellite outage file (SOF) produced by the United States Air Force. Clicking on a specific calendar date provides more information about the outage. You can also view all historical outages in a table format.

This web service allows you to connect and retrieve outage information for specific GPS PRNs, or specific time intervals, or both. Using just the PRN, a list of strings will be returned of all times that PRN has been unhealthy, based on the latest satellite outage file (SOF). Providing a start and stop time will return a list of strings denoting all PRNs that were unhealthy during that interval, with their outage times. Another service allows you to specify both the PRN and the time interval to further thin the results. The dates returned can be either Gregorian (standard date format) or Julian date format.

This example application shows how this service could be used.

GPS satellite performance is at the heart of navigation accuracy. AGI Performance Assessment Files (PAFs) and Prediction Support Files (PSFs) contain GPS satellite errors, which AGI uses to calculate receiver positioning errors. This utility helps to visualize and understand those GPS satellite errors.

The following tools aren't specifically for GPS, but are just as handy in their own right. 

This KML network link visualizes all Earth-orbiting objects tracked by the United States Strategic Command (USSTRATCOM), using the satellite database processed by Analytical Graphics, Inc. with the Dynamic Geometry Library in AGI Components. All satellites are tracked in real time and updated every 30 seconds. Please open this link using Google Earth.

This web application generates ground tracks on an embedded Google map, as well as ephemeris tables, for a user-selected satellite viewed from a user-defined location during a user-specified time period.

The Space Data Reporter is an example implementation of AGI’s new Dynamic Geometry Library (DGL). This suite of Web Services powered by the DGL enables users to:
- Publish ground tracks
- Derive current locations of satellites
- Calculate inter-visibility between satellites and locations on the ground
- Compute GPS dilution of precision (DOP)
- Analyze STK generated conjunction analysis (CAT)

No comments

Ionospheric Error Analyst – a nav mashup Part 3


This is the last installment of the Ionospheric Error Analyst blog.  I've written two previous posts on this mashup application, describing how you can calculate near-real-time navigation accuracy, including ionospheric effects, and display it on a globe.  In this installment, I'll complete the project and provide a download of the entire application so you can build it on your own and use it.

When we left off, the last pieces we needed were the time and positions to calculate over and then, finally, the actual navigation accuracy calculation itself.  To provide the time to calculate, we need to look at the filename of the TEC file.  Remember, the TEC file contains the total electron count data we need to calculate the navigation error due to the ionosphere.  The filename contains the date the TEC data is valid for - and it's valid for 15 minutes after that date.  Parsing the name and generating a date from the name elements is straightforward:

// sample TEC filename:
// 200706102130_TEC.txt
// 01234567890123456789
int year = Int32.Parse(filename.Substring(0, 4));
int month = Int32.Parse(filename.Substring(4, 2));
int day = Int32.Parse(filename.Substring(6, 2));
int hour = Int32.Parse(filename.Substring(8, 2));
int minute = Int32.Parse(filename.Substring(10, 2));
// Add 15 minutes (900 seconds) to the file timestamp - the data is valid
// through then and we want the latest accuracy.
return new JulianDate(new DateTime(year, month, day, hour, minute, 0)).AddSeconds(900);

What about the grid we need to calculate over?  That's easy too.  We start with the bottom-left grid point defined by the user and simply increment it by 1 degree across the entire area specified in the GUI.  I use a 1-degree step because the TEC data is defined with this increment.  Each time a grid point location is calculated, it's stored in a List of ContourCells.  The ContourCell class simply stores the four vertices of the cell, the cell ID, and a definition of which of the four vertices the accuracy calculation will occur on.  You could change this implementation to calculate for the middle of the cell - or somewhere in between if you like.  In the interest of saving space, I'll let you browse that code on your own.  Look for the CalculateCONUSGrid method.  Note that the TEC data from NOAA is defined only over the continental United States (CONUS).  There's no error checking in the application to ensure the user has entered the correct latitude/longitude boundaries, however - you'll even get results for areas outside of CONUS.  The point is, be sure to understand the limitations of the data you use; otherwise your analysis will be flawed.
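In outline, that method is just a pair of nested loops.  Here's a sketch - the ContourCell shape below is a simplified stand-in for the class in the download, and the method name mirrors CalculateCONUSGrid without being its actual source:

using System.Collections.Generic;

// Simplified stand-in for the ContourCell class in the download: four corner
// vertices (lat/lon in degrees), a cell ID and the vertex used for the calculation.
class ContourCell
{
    public int Id;
    public double[] VertexLatitudes = new double[4];
    public double[] VertexLongitudes = new double[4];
    public int CalculationVertex;
}

class GridSketch
{
    // Build 1-degree cells from the user's bottom-left corner up to the top-right
    // corner - 1-degree steps to match the NOAA TEC grid spacing.
    static List<ContourCell> CalculateGrid(double minLatDeg, double minLonDeg,
                                           double maxLatDeg, double maxLonDeg)
    {
        List<ContourCell> cells = new List<ContourCell>();
        int id = 0;
        for (double lat = minLatDeg; lat < maxLatDeg; lat += 1.0)
        {
            for (double lon = minLonDeg; lon < maxLonDeg; lon += 1.0)
            {
                ContourCell cell = new ContourCell { Id = id++, CalculationVertex = 0 };
                cell.VertexLatitudes = new[] { lat, lat, lat + 1.0, lat + 1.0 };
                cell.VertexLongitudes = new[] { lon, lon + 1.0, lon + 1.0, lon };
                cells.Add(cell);
            }
        }
        return cells;
    }
}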

Read more

No comments

Ionospheric Error Analyst – a nav mashup Part 2

May 28th, 2008 | Category: Navigation Accuracy Library


I've started creating a nav mashup that takes several technologies and data from different sources to determine the near-real-time navigation error across CONUS. We left off just before I showed you how to create a custom error model to represent the receiver noise. Let's get busy.

First, we need to derive a new noise model class from GpsReceiverNoiseModel. Here's the code; I'll explain what it does below.

public class IonosphericNoiseModel : GpsReceiverNoiseModel
{
    public IonosphericNoiseModel(StreamReader ionoTecFile)
    {
        // Read the NOAA Total Electron Count data for this time period.
        m_tecFile = NoaaVerticalSlantTecFile.ReadFrom(ionoTecFile);

        // GPS L1 carrier frequency, in Hz (1575.42 MHz).
        m_GpsL1Frequency = 1575.42e6;
    }

    NoaaVerticalSlantTecFile m_tecFile;
    double m_GpsL1Frequency;
}

The IonosphericNoiseModel constructor takes a StreamReader that reads in a text file containing the Total Electron Count (TEC) data for a given time, and sets the GPS L1 frequency.

Once I have a TEC file, I need to override the GetTotalNoiseScalar method defined in the GpsReceiverNoiseModel class to provide my own noise values. This is the meat of the functionality.
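The full override is in the download (and continued below), but the core of it is the standard first-order conversion from slant TEC to a range delay: roughly 40.3 * TEC / f^2 meters, with TEC in electrons per square meter. Here's that conversion on its own - I won't guess at the override's exact signature or at how NoaaVerticalSlantTecFile hands back the slant TEC for a particular satellite:

static class IonoDelaySketch
{
    // First-order ionospheric group delay in meters: 40.3 * TEC / f^2, with TEC in
    // electrons/m^2 and f in Hz. The TEC files report TEC units (1 TECU = 1e16 el/m^2).
    public static double SlantDelayMeters(double slantTecu, double frequencyHz)
    {
        double tecElectronsPerM2 = slantTecu * 1.0e16;
        return 40.3 * tecElectronsPerM2 / (frequencyHz * frequencyHz);
    }
}

// Example: 25 TECU of slant TEC at L1 (1575.42 MHz) is roughly 4.1 meters of range error.
// double delayMeters = IonoDelaySketch.SlantDelayMeters(25.0, 1575.42e6);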

Read more

1 comment

AGI Components release 3 – Evaluators on Fire!

May 06th, 2008 | Category: Navigation Accuracy Library

So, it's the beginning of the month, and that means a new AGI Components release is here! You can get the release from the ADN here: http://adn.agi.com/detailedView.cfm?resourceId=210. I've profiled the performance of navigation accuracy calculations over a grid - usually the entire Earth - at different levels of granularity. I've started plotting the run times for these tests and wanted to show off the improvements in calculation speed. Kevin's DGL blog shows the how of evaluators and the changes you need to be aware of; I'm going to cover the results from a nav point of view.

Here's the performance chart:

Release Performance R1-R3

The first two releases of AGI Components are shown in red and blue. Release 3 is shown as the magenta line - the one with the really small slope. Release 3 shows about a 4400% improvement in calculation speed over R1 and R2.

Read more

No comments

Ionospheric Error Analyst – a nav mashup Part 1

April 30th, 2008 | Category: Navigation Accuracy Library

Mashups are great - even if that word is getting kind of old now.  It's the idea that's really cool.  Like the tinkerers who grab bits and pieces from junkyards and make something new, today's programmers search the Internet and find gold nuggets that can be pieced together and brought to life in a new way.  So, I wanted to tinker a little bit.  I decided to combine the navigation accuracy component library (that I'm so fond of) with the STK 4DX embedded technology and some near-real-time data I found at the National Oceanic and Atmospheric Administration (NOAA) web site.  This NOAA data consists of slant-range total electron count (TEC) data for GPS satellites - and this is important if you're a civilian GPS user.  The largest error for civilian users is caused by the ionosphere, so it pays to watch what it's doing.  I also found a graphing package called ZedGraph that will help with plotting any necessary data.

OK, so what can you do with slant-range TEC data, NavAccLib, DGL, 4DX and ZedGraph?  Well, grab a Nog and let's take a look.

First, I created a Windows Application from Visual Studio and started designing the GUI - nothing too difficult, just enough to get the job done.

IEAGUI

OK, maybe I added a few extra things that aren't absolutely necessary, but I get carried away.

Read more

No comments

Next Page »