The Calibration and Validation Program for the National Polar-Orbiting Operational Environmental Satellite System Preparatory Project (NPP)
- Published on Monday, 08 March 2010 00:01
By Karen M. St. Germain, Senior Member, IEEE
Reprinted from IEEE Geoscience and Remote Sensing Society Newsletter (September 2009)
The NPOESS program will launch its second risk reduction mission, the NPOESS Preparatory Project (NPP), in 2011. NPP is a collaboration between the NPOESS program (for risk reduction) and the NASA Earth Science program (for continuity of earth science measurements). The NPP platform will carry five remote sensing instruments, covering the electromagnetic spectrum from the microwave to the visible. Each of these instruments will be flying for the first time on NPP, although some have substantially more legacy than others.
The Cross-track Infrared Sounder (CrIS) is a hyperspectral instrument that will provide measurements in the infrared over the long to short wave range, from 650 to 2550 cm⁻¹ (15.4 to 3.92 μm). In the US, the legacy experience for CrIS comes from the Atmospheric Infrared Sounder (AIRS), currently flying on the NASA EOS Aqua mission. In sensor operation, CrIS bears greater resemblance to the Infrared Atmospheric Sounding Interferometer (IASI), flying aboard the EUMETSAT METOP series. The CrIS instrument will work with its microwave counterpart, the Advanced Technology Microwave Sounder (ATMS), to produce atmospheric temperature, moisture, and pressure profiles under most weather conditions. The ATMS traces its legacy to the successful series of Advanced Microwave Sounding Units (AMSUs) currently flying as part of the National Oceanic and Atmospheric Administration (NOAA) Polar Operational Environmental Satellite (POES) system.
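The correspondence between the wavenumber and wavelength ranges quoted above follows from the standard conversion λ[μm] = 10⁴/ν[cm⁻¹]. A minimal sketch (function name illustrative, not from any sensor software):

```python
def wavenumber_to_wavelength_um(nu_inv_cm: float) -> float:
    """Convert a spectral wavenumber in cm^-1 to a wavelength in micrometers.

    1 cm = 1e4 micrometers, so lambda[um] = 1e4 / nu[cm^-1].
    """
    return 1e4 / nu_inv_cm

# The CrIS band edges quoted in the text:
long_wave_um = wavenumber_to_wavelength_um(650)    # ~15.4 um
short_wave_um = wavenumber_to_wavelength_um(2550)  # ~3.92 um
```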
The Ozone Mapping and Profiler Suite (OMPS) will monitor atmospheric ozone in three ways: total column ozone, vertical ozone profile, and limb ozone profile. The nadir instruments trace their heritage to the Solar Backscatter Ultraviolet radiometer (SBUV)/2 and the Total Ozone Mapping Spectrometer (TOMS). The Limb profiler is being flown as an experimental sensor aboard NPP, and will provide a higher spatial resolution vertical profile than the nadir instrument. The OMPS sensor measurements are made between 250 and 380 nm.
The Visible/Infrared Imager/Radiometer Suite (VIIRS) collects visible and infrared imagery and radiometric data over the wavelength range 412 nm to 12.01 μm. Although there are differences in sensor operation, the closest VIIRS predecessor is the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) instrument (with enhanced capability for imagery across the terminator). Data products from VIIRS range from ocean surface products to cloud properties and land surface characterization. In planning the Calibration and Validation (Cal/Val) campaign for this first launch, we first consider lessons learned from previous Cal/Val campaigns, from both operational and science missions.
The highest objective of any Cal/Val program must be the accomplishment of the mission for which the program was chartered. In the case of NPOESS, the required National Mission Capabilities are captured in a requirements document (the Integrated Operational Requirements Document, IORD II), which outlines the performance attributes needed for each environmental data product. Fully accomplishing this goal means establishing that the data products meet required performance and are operationally viable. The term “operational viability” means that the products are suitable for inclusion in civilian and defense mission support, with robust performance, minimum down time, and low data latency. Elements of this include a full understanding of data product performance (e.g. error statistics), and rapid resolution of performance issues. For the NPOESS program, the Cal/Val program also plays a role in establishing contractual compliance of the work of the prime contractor.
Lessons Learned from Heritage Programs
System View
An earth remote sensing system is a physical system, composed of the phenomena to be sensed, the space borne system making the measurements, and the processing system that packages, transmits, and processes the data. A simple depiction of such a system is shown in Figure 1. The ground processing system executes a series of operations that essentially “walk backward” through the physical system (black arrows in Figure 1), eventually yielding a representation of the environmental phenomena of the earth and atmosphere. These operations fall into three major categories. The first stage involves unpacking and organizing the data, creating the Level 0 products, or Raw Data Records (RDRs) in NPOESS parlance. Then, the raw data are geolocated and calibrated using information from the spacecraft, the internal calibration targets, and knowledge of sensor performance attained during the prelaunch testing. This process produces radiance measurements and creates the Level 1 products (Sensor Data Records, or SDRs). Finally, the radiances are processed through algorithms to infer properties of the environment from which the emission originated. The outputs of these processes are the Environmental Data Records (EDRs), which are commonly known as Level 2 products. For a microwave sensor there is often one additional intermediate step between the SDR and EDR, where additional antenna pattern corrections are applied. The output of this step is called a Temperature Data Record, or TDR. The algorithms to produce the RDRs, SDRs, and EDRs require input data from the spacecraft (e.g. timing, navigation and pointing information) and the sensors (e.g. temperatures, voltages, sensor state and position). They may also require definable databases such as sensor characterization tables, environmental models and field of view models.
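The RDR-to-SDR-to-EDR chain described above can be sketched as a simple processing pipeline. The record fields, calibration model, and retrieval are deliberately minimal stand-ins, not the actual IDPS interfaces:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RDR:  # Level 0: raw, unpacked sensor counts plus housekeeping data
    counts: List[float]
    telemetry: Dict[str, float]  # e.g. temperatures, voltages, sensor state

@dataclass
class SDR:  # Level 1: calibrated, geolocated radiances
    radiances: List[float]
    lat: float
    lon: float

def calibrate(rdr: RDR, gain: float, offset: float,
              lat: float, lon: float) -> SDR:
    """RDR -> SDR: apply a linear counts-to-radiance map and attach geolocation.

    Real sensors use internal calibration targets and prelaunch
    characterization tables; a simple linear model stands in here.
    """
    radiances = [gain * c + offset for c in rdr.counts]
    return SDR(radiances=radiances, lat=lat, lon=lon)

def retrieve(sdr: SDR) -> Dict[str, float]:
    """SDR -> EDR: infer an environmental property from the radiances.

    Placeholder retrieval: a channel-mean radiance standing in for a
    physical inversion (e.g. a temperature profile algorithm).
    """
    mean_radiance = sum(sdr.radiances) / len(sdr.radiances)
    return {"brightness_proxy": mean_radiance, "lat": sdr.lat, "lon": sdr.lon}

# "Walk backward" through the measurement: RDR -> SDR -> EDR.
rdr = RDR(counts=[1020.0, 1015.0, 1030.0], telemetry={"bench_temp_K": 285.2})
sdr = calibrate(rdr, gain=0.01, offset=-2.0, lat=38.9, lon=-77.0)
edr = retrieve(sdr)
```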
Ultimately, the success of the algorithms in accurately reversing the measurement process depends upon a correct interface between the algorithm and each component of the system. A quick survey of past programs gives us insight on what drives the pace and success of the post-launch Cal/Val effort.
Over three generations of first-launch microwave sensors, the length of the Cal/Val program has been dominated by sensor performance and sensor interface issues. Some examples are: 1) Timing and position (dominated by spacecraft flight software, hardware, and spacecraft-to-sensor alignment), 2) Channel Polarization (inaccurately determined prior to launch), 3) Calibration target errors (dominated by calibration target materials and uniformity of target temperature), 4) Antenna Properties or Field of View Intrusions (dominated by completeness of pre-launch pattern measurements and knowledge of the complete system geometry). System Engineering and management challenges pre-launch have also caused considerable delay in post-launch Cal/Val, particularly with issues of format and documentation errors or inconsistencies and unavailability of pre-launch data or analyses.
A similar analysis for MODIS on Terra (the first VIIRS-like instrument) yields similar lessons. Sensor performance and sensor interface issues once again dominated the Cal/Val program. For example: 1) Electronic and optical cross-talk (driven by focal plane and filter performance), 2) Optical path performance (dominated by A/B side mirror differences and polarization geometry), and 3) Calibration errors due to reflected solar energy contamination of the cold space calibration view. Most first-flight systems also suffer from incomplete sensor models, ultimately limited by the completeness of the pre-launch test program. The time required for the validation of the EDRs is dominated by the maturity of the science from a space platform for that product. When well-understood heritage algorithms are simply “tuned” for the new instrument characteristics and the sensor changes are minimal, the Cal/Val period is minimized. However, for cases where no heritage product exists and new science understanding must be developed post launch, the EDR validation is rarely complete in less than two years.
From these experiences we take the following lessons. First, prelaunch test and analysis focus is critical for building the foundation for eventual high-quality data products. This requires a strong pre-launch sensor data analysis team. Second, even with a strong pre-launch program, sensor engineering and expertise will be needed after launch, so team continuity from pre- to post-launch must be a key consideration. Third, during the initial stages of a post-launch Cal/Val, sensor performance “features” will require compensation in the ground processing algorithm. In many cases large errors will have to be corrected before moderate or smaller errors can even be identified. This means that a rapid and affordable algorithm update process is needed to keep the Cal/Val team moving at top speed. Finally, extensive involvement from the user community in the early stages is of great benefit in assessing the operational viability of the products and prioritizing the implementation of corrections. This last point always carries some programmatic risk, but it is a risk well worth taking for the long-term health of the program.
NPP Cal/Val Guiding Philosophy
As an outcome of studying the successes and challenges of heritage Cal/Val programs, we established the guiding philosophy for the NPP Cal/Val program. There are seven key points: 1) Sensor performance and characterization are the cornerstone of all data products, 2) Experience and resources from past operational and science missions should be fully exploited and incorporated into the NPP and NPOESS Cal/Val plans, 3) Customer and User satisfaction is achieved through their participation in the Cal/Val process, 4) Customer and User proficiency with the operational algorithms is essential to efficient Cal/Val and community buy-in of the data, 5) A quick, cost-effective, global view of performance can be achieved through early comparisons with data from other space-based sensors, global surface models, surface networks, and direct radiance assimilation comparisons, 6) Targeted campaigns and special studies should be planned and executed with knowledge of the global performance, and 7) Corrective actions must be handled with customer involvement and in accordance with established program priorities. These concepts form the foundation of the NPP Cal/Val program.
The NPP Cal/Val Program
Phases of the Cal/Val Program
There are four primary phases of the NPP Cal/Val program. The pre-launch phase covers the period during which the sensors are in development, test, and integration, and the ground system is being built. The Early Orbit Check-out (EOC) covers the period of post-launch sensor activation, and typically lasts for 30 to 100 days. The Intensive Cal/Val (ICV) covers the period between activation and the declaration of operational readiness for each product. The duration of the ICV varies, but for a first-launch sensor it is typically an 18-month process, even in the absence of the need for new science development. Finally, the Long-Term Monitoring (LTM)
phase extends through the life of the sensors to ensure that data products continue to meet their performance requirements, anomalies are appropriately handled, and upgrades are implemented as needed. The specific activities during each of these phases are different for each data product type (RDR, SDR, and EDR). In the next section we present the NPP Cal/Val program overview for each phase of the program and for each product chain. The product chain threads may be understood as representing the basic sensor functionality (RDR), the calibratability of the sensor (SDR), and the functionality and performance of the retrieval algorithms (EDR).
The Pre-Launch Phase
For the RDR product chain, the pre-launch Cal/Val effort seeks to answer the question “What are the criteria that establish the sensor as a stable, configurable, and functioning instrument capable of meeting its performance requirements?” The activities include verifying operational modes and data formats, analyzing the ambient and thermal/vacuum performance measurements, tuning parameters such as gain and offsets, establishing air-to-vacuum and temperature sensitivities, and developing look-up-tables and sensor constants. Another important component during this phase is looking ahead and developing the post-launch sensor team. At the same time the SDR product chain team, in a closely related activity, seeks to answer the question “Do the SDR algorithms (in their operational implementation) capture how the sensor actually works as built? And is the product compliant with requirements?” The primary activities during this period focus on making pre-launch measurements to established standards (e.g. NIST), establishing the completeness of the sensor test program, developing sensor error budgets and populating them with as-built numbers, analyzing test data, developing look-up-tables and sensor constants and their documentation, participating in “fix-or-fly” decisions, and identifying any liens (due to as-built performance) that may alter the on-orbit operations concept of the sensor. Finally, there are important activities prior to launch for the EDR product chains. The EDR team works to establish the answer to the question “Are the algorithms (as implemented in the operational processing system) stable, tunable, well understood, and working with realistic sensor and system performance characteristics?” For the NPP system, proxy data are available from heritage instruments. These data are adjusted to reflect sensor differences and are used for assessing algorithm performance under both normal and stressing conditions. 
In addition, we run these data through the operational processing system to establish the robustness of the system. Synthetic data (data generated through modeling) are used to establish algorithm sensitivities. We also make, at this stage, an initial assessment of areas where more research, added on-orbit resources, post-launch campaigns, or other mitigation may be needed.
The Early Orbit Check-out Phase
The most fundamental question, answered in the early post-launch RDR verification, is “Is the sensor operating as it was tested on the ground?” This question is answered by analysis of engineering data (e.g. voltages, currents, and temperatures), telemetry data, and calibration data. Bringing about a positive response to this question may require instrument tuning or adjustment. This is also the activity that establishes instrument baseline performance and represents the beginning of on-orbit instrument trending. If the RDR verification does not verify that the sensor is operating as expected, then a sensor anomaly resolution activity is activated, drawing on the sensor development, systems engineering, and Cal/Val teams. RDR verification lays the foundation for a closely related activity: SDR verification and tuning.
The SDR verification during EOC answers the question “Taken together, are the RDR and SDR algorithms producing radiances that are reasonable (spectrally and radiometrically) and correctly geolocated?” This initial assessment is intended to find large errors and systematic performance issues. The primary tools for this analysis are radiance comparisons with other space-borne sensors and with model and analysis fields processed through radiative transfer models. This is also the prime opportunity for executing spacecraft maneuvers to position the sensors to observe more “pure” scenes such as deep space or well-understood scenes such as the moon. After launch, radiance errors are most typically handled through modification of the SDR algorithm. In such a circumstance the SDR team will work very closely with the sensor anomaly team to establish a correction approach that is as faithful as possible to the established root cause of the unexpected behavior. Often the SDR verification activity is informed by the EDR verification activity, especially where the previously mentioned forms of radiance comparison are technically difficult or expensive.
The EDR verification activities in the EOC phase are designed to answer the question “Are the EDR algorithms functioning and valid over a subset of nominal conditions?” The first element of this activity is establishing that all inputs from the sensor are available and reasonable. In many cases the EDR algorithm must be activated or tuned using correlative analysis with independent data sets. The EDRs are compared with similar products from other space borne sensors or model/analysis fields to establish that the large scale patterns are reasonable. The Cal/Val team also looks at performance comparisons under selected conditions such as the sensor operating range (e.g. is the sensor performance varying with its orbital position/temperature?). Such an outcome will immediately be fed back to the SDR and RDR activities for investigation. The EDR verification activity also assesses the performance over a range of stressing environmental conditions (e.g. extreme surface temperature, temperature inversions, absorbing aerosols). This phase also marks the beginning of the generation of matchup data sets with other sources of correlative measurements such as ocean surface buoys, operational radiosondes, etc. For these matchup data sets, the associated RDR, SDR, EDR, calibration and engineering data will be captured so that the matchup dataset may be efficiently regenerated upon implementation of an SDR or EDR algorithm correction.
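The matchup idea above can be sketched as follows: pair a satellite retrieval with a correlative ground measurement, and keep the full provenance so the pair can be regenerated after an algorithm update. All field names, the granule identifiers, and the time window are illustrative assumptions:

```python
from typing import Optional

def build_matchup(satellite_obs: dict, ground_obs: dict,
                  max_time_s: float = 3600.0) -> Optional[dict]:
    """Pair one satellite EDR value with a correlative ground measurement.

    Retains the RDR/SDR granule identifiers and calibration-table version
    (hypothetical names) so the matchup can be regenerated after an SDR or
    EDR algorithm correction, as the text describes.
    """
    dt = abs(satellite_obs["time_s"] - ground_obs["time_s"])
    if dt > max_time_s:
        return None  # too far apart in time to be a valid matchup
    return {
        "edr_value": satellite_obs["edr_value"],
        "ground_value": ground_obs["value"],
        "time_diff_s": dt,
        # Provenance needed to regenerate this matchup later:
        "rdr_granule": satellite_obs["rdr_granule"],
        "sdr_granule": satellite_obs["sdr_granule"],
        "cal_table_version": satellite_obs["cal_table_version"],
    }

# Illustrative sea surface temperature matchup against an ocean buoy:
sat = {"edr_value": 288.4, "time_s": 1000.0, "rdr_granule": "RDR_0001",
       "sdr_granule": "SDR_0001", "cal_table_version": "v1.2"}
buoy = {"value": 288.9, "time_s": 2500.0}
matchup = build_matchup(sat, buoy)
```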
The Intensive Cal/Val Phase
Just as the EOC is intended to identify and correct or mitigate major sensor or system anomalies, the Intensive Cal/Val (ICV) phase is intended to identify and correct or mitigate moderate to minor sensor or system anomalies.
The primary focus of the RDR product chain is to establish the sensor stability by answering the question “Are the sensor and its calibration stable over the sensor’s range of operating conditions?” The answer to this question is established primarily through correlative analysis involving a host of system variables. These variables include position in orbit, seasonal variations, sensor operating state (and the operating states of neighboring sensors and transmitters), and the like. Performance is established through detailed analysis of telemetry and calibration data, correlation analysis, and early trend analysis. Unexpected findings during this activity may result in a modification of the sensor operations concept (e.g. table uploads, calibration frequencies, etc.).
Again, in a closely related activity, the SDR validation establishes the foundation for all future EDR work. The question to be answered by this activity is “Are the SDRs precisely geolocated, stable, and valid to expected levels (accuracy and precision) over conditions seen to date?” There are a number of activities that support SDR validation, and only the primary ones are discussed here. First, the analyses that were begun during the EOC are continued and expanded. For example, analyses of accumulated comparisons with radiances and environmental products from other space based sensors and model fields will be continued. With the increasing comparison statistics, performance will be stratified, for example, in a zonal average global sense. Other statistical analysis techniques such as vicarious calibration approaches are viable at this stage, and will be used to provide a very reliable performance point for trending over the life of the sensor. Spacecraft maneuvers may continue into this phase as needed, although they may become difficult to schedule as some of the data begin to see operational use. Aircraft under-flights, with calibration targets independently calibrated to national standards, may take place during this phase. Sensor error budgets established prior to launch are key to the success of the SDR validation, particularly as they may provide insight into unexpected sensor behaviors.
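Zonal stratification of comparison statistics, as mentioned above, amounts to binning observed-minus-background differences by latitude band. A minimal sketch (the field layout and 30-degree zone width are illustrative choices):

```python
from collections import defaultdict

def zonal_bias(observations, band_deg=30.0):
    """Stratify observed-minus-background radiance differences by latitude zone.

    Each observation is a (lat_deg, observed, background) tuple; the zone key
    is the lower edge of the latitude band containing the observation.
    """
    zones = defaultdict(list)
    for lat, observed, background in observations:
        zone = int(lat // band_deg) * band_deg  # lower edge of the zone
        zones[zone].append(observed - background)
    # Mean difference (bias) per zone, sorted south to north:
    return {z: sum(d) / len(d) for z, d in sorted(zones.items())}

# Illustrative comparison triplets (latitude, sensor radiance, model radiance):
obs = [(-5.0, 100.2, 100.0), (5.0, 100.4, 100.0), (45.0, 99.5, 100.0)]
biases = zonal_bias(obs)
```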
Finally, in recent years, a new approach to SDR validation has emerged through collaboration with the operational user community. Radiance assimilation into off-line instantiations of operational numerical weather prediction or analysis systems will be used to provide very sensitive indications of areas or conditions under which the sensor provided radiances deviate from expected values. An additional benefit of this interaction is that the operational users gain early familiarity with the new sensor data sets, their formats and performance attributes, allowing for earlier and more efficient operational use of the validated radiance data products.
Long-Term Monitoring Phase
At the conclusion of the ICV, all data products should be meeting performance expectations and should be viable for operational use. Product lines will likely reach this state at different times, depending on instrument performance and algorithm maturity at launch. We then will enter the Long-Term Monitoring (LTM) phase where the instrument and products are scrutinized for trends, finer adjustments may be made to the processing algorithms, and handling of sensor degradation becomes the primary focus.
During LTM, trending and analysis of the RDRs (including telemetry, engineering, and calibration data) continue. The question of interest is “Is the sensor stable over seasons and is degradation as expected?” Mitigation approaches include tuning of the warning thresholds and recognizing changes in sensor operating state. As the sensors age, modification of their operations concepts may be required to maintain performance. These may include more frequent table uploads, and adjustments to operating set points.
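A minimal stand-in for the telemetry trending and warning thresholds described above might look like the following; real systems fit seasonal models and retune thresholds as the sensor ages, and all names and numbers here are illustrative:

```python
def check_telemetry_trend(values, warn_limit, n_recent=5):
    """Flag a telemetry point whose recent mean drifts past a warning threshold.

    Returns (alarm, recent_mean). Averaging over the last n_recent samples
    suppresses single-sample noise while still catching a slow drift.
    """
    recent = values[-n_recent:]
    mean = sum(recent) / len(recent)
    return mean > warn_limit, mean

# Slowly rising optical-bench temperature (illustrative numbers, in kelvin):
temps = [285.0, 285.1, 285.3, 285.6, 286.0, 286.5, 287.1]
alarm, recent_mean = check_telemetry_trend(temps, warn_limit=286.0)
```

Tuning the warning threshold, as the text notes, is itself part of the LTM activity: a limit set for a new sensor may need to move as expected degradation accumulates.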
For the SDR product chain, the question during LTM is “Are the SDR algorithms and supporting look-up tables and sensor constants optimized as the sensor ages?” Continuous tracking of radiance performance (in addition to the RDR trending) is central to answering this question. During this phase mitigation approaches will have to be developed to handle changes to redundant side (A/B side) subsystems as necessary. Typical issues also include degradation of sensor electronics, calibration targets, and optical surfaces. In some cases complete loss of a channel has occurred. Mitigation of these performance changes ranges from simple updates to Look-Up Tables (LUTs) to a reformulation of the SDR algorithm. These adjustments are most critical for the long term utility of the NPP data. Issues uncovered and mitigated in the SDR production are almost always accompanied by adjustments to EDR product chains as well.
The question “Are the EDR products valid over the full range of conditions and operationally viable?” dominates the EDR product chain team during the long term monitoring phase. Continued analysis of accumulated comparisons with both space borne sensor and correlative data sets will be used to validate the data products under stressing and important conditions, even if such conditions are uncommon. In some cases special campaigns for poorly understood conditions may be needed, in accordance with program priorities. Of course the most fundamental activity will revolve around adapting EDR products to accommodate sensor channel loss or performance degradation.
The specifics of the activities during each Cal/Val phase, and for each product chain are captured in individual Calibration and Validation plans, but all follow the general structure captured here.
The NPP Cal/Val Teams
An examination of the activities described in the previous sections will reveal that the expertise required to execute a successful Cal/Val varies with product chain and phase of the program. Figure 2 is a graphical depiction of this concept.
In the upper left-hand corner of this matrix the required expertise is focused on sensor performance and engineering considerations. Activities in the lower right quadrant are more focused on the environmental data side, requiring expertise in the physics of the earth environment and an understanding of how the EDRs are to be used to support operational missions. In other words, the needed expertise shifts from sensor engineering (typically with a strong sensor developer presence) to Government customers over time and product chain. The NPP Cal/Val teams have been constructed to accommodate these varying expertise requirements.
The NPOESS program has two components: the Integrated Program Office (IPO) and the Prime Contractor. From the Cal/Val perspective there are several important considerations that flow from this structure. First, the prime contractor holds most of the sensor development contracts and the systems engineering responsibility. In addition, the prime contractor is responsible for the development of the ground processing system. The prime contractor’s performance requirements are captured in the contract they have signed with the Integrated Program Office.
The IPO, on the other hand, is the primary interface with the Government customer/user community, and as such is well positioned to work with the users to ensure program priorities are achieved. The program commitments to the customers are captured in the governing requirements document, the IORD. The IPO also has the ability to draw upon technical expertise from within the Government and academia far more readily than the prime contractor. With this construct in mind, Cal/Val teams were created to best draw upon the resources available to the IPO and the Prime Contractor.
The RDR & SDR Cal/Val Teams
The RDR and SDR Cal/Val efforts are led by the prime contractor. Their responsibilities include development of the RDR and SDR algorithms and LUTs, performance verification of the sensors, and post-launch calibration and validation of the SDR products. To support this activity, they will carry a core team for each sensor, consisting of members of the sensor development team, the algorithm developers and the calibration specialists. The IPO will augment this contractor team with experts from government and academia, especially when such experts bring strong heritage expertise. In the case of the VIIRS instrument, the IPO has worked closely with NASA to bring the lessons learned from the MODIS calibration team to the VIIRS SDR team. For each sensor, the Government team is led by an identified sensor science lead. The sensor science lead is responsible for leading the Government SDR team and coordinating their activities with the prime contractor. The RDR and SDR validation programs are captured in sensor specific calibration and validation plans, with the detailed task descriptions and responsibilities further enumerated in an integrated task network. This task network is the management tool that the leads will use to coordinate the work of the team, adapting as necessary as understanding of sensor performance and issues evolves.
The EDR Cal/Val Teams
The EDR Cal/Val activities are led by the IPO team, which is organized by discipline area. The IPO has six environmental product teams: imagery and cloud mask products, ocean surface products, land surface products, atmosphere products (cloud and aerosol properties), ozone, and sounder products. For each product team, the IPO sought leadership from a center of expertise. Each of these team leads has put together a plan and a supporting team (from across the stakeholder agencies) to execute their Cal/Val program. The IPO provides the resources for these efforts and coordinates across the discipline area teams. Examples of coordination activities include optimizing any field campaigns for maximum benefit across teams and developing an infrastructure that supports all of the discipline teams.
The imagery and cloud mask team is led by the Air Force Weather Agency and The Aerospace Corporation because they are the most involved users of the imagery data products.
The ocean surface product (sea surface temperature and ocean color/chlorophyll) team is led by the Naval Research Laboratory and the Naval Oceanographic Office, both located at Stennis Space Center. They were asked to lead the oceans effort because of their resident technical expertise and because their operational missions are most sensitive to the quality of these ocean products.
The land surface products team leadership is provided by the NOAA National Climatic Data Center (NCDC) in Asheville, NC. NCDC was selected because of their in-house technical expertise, their working partnerships with other stakeholders, and their wealth of independent data sets.
The sounder product team is led by NOAA/National Environmental Satellite, Data, and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) because of the close connection between the NOAA operational weather mission and the sounder product quality.
The ozone products from the nadir instrument will be led by NOAA/NESDIS STAR, in close cooperation with the NASA team leading the validation of the ozone limb sensor products.
Finally, the atmospheric products, which include cloud properties and aerosol properties, will be led by NASA Goddard Space Flight Center. This is the only product team that does not have direct ties to operational missions, because this subset of the EDRs has no operational heritage but does have strong heritage from within the NASA EOS program, and in particular, the MODIS science team.
The Clouds and the Earth’s Radiative Energy System (CERES) sensor is also flying on NPP, but the NASA Langley Research Center Science Directorate owns the Cal/Val responsibilities under the terms of the sensor manifest agreement. Every effort will be made to coordinate with the Langley team and avail them of the infrastructure that supports the rest of the Cal/Val teams.
Correlative Data Sets
The SDR and EDR Cal/Val teams have identified an extensive preliminary list of correlative data sets that will be available to the post-launch Cal/Val effort. These data sets are generally of four types: space-based sensors, global fields and models, ground-based networks, and airborne/deployable sensors. The list currently includes over 32 space-based sensors, 6 individual global fields/models, 12 separate ground-based networks, and 13 separate deployable/airborne data sources.
The NPP Cal/Val Support Infrastructure
An important benefit of embracing a community-based Cal/Val program is in bringing not just heritage experience, but also heritage tools to benefit the NPP program. This approach provides savings in both development cost and tool verification. However, there are some functions that are best done with centralized resources, to establish a common infrastructure for the benefit of all of the Cal/Val teams. That common infrastructure is described here. The infrastructure is called the Government Resource for Algorithm Verification, Integration, Test and Evaluation (GRAVITE). GRAVITE has four main components: the technical library, the central processing and distribution capability, the software repository, and a whole-system triage tool.
The Technical Library
The NPP system produces 24 EDRs, which are supported by 70 algorithms implemented in the ground system (not including CERES products, which are developed and maintained by NASA Langley). The documentation for this complex system is extensive, and development/update cycles for the documentation are not, in most cases, synchronized. The technical reference library is intended to be the primary resource for accurate and timely information to support rapid post-launch Cal/Val activities. The goal is to use graphical representations of the system to allow the user to rapidly identify the detailed information needed – whether that is format information, algorithm flow diagrams, or sensor descriptions.
Tied into the technical library is a repository of all pre-launch instrument test data and telemetry. This repository also includes instrument test procedures, test logs, and analysis reports. These items are the basis for the pre-launch instrument performance assessments made by the SDR Cal/Val team that inform the sensor requirements sell-off process. For convenience in searching the data, the repository also includes a set of tools for querying the test data by telemetry parameters (e.g., instrument state or optical bench temperature).
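As a sketch, a query of such a repository by telemetry parameters might look like the following. The record fields, state names, and temperature values here are hypothetical illustrations, not actual NPP telemetry mnemonics:

```python
# Hypothetical sketch of filtering pre-launch instrument test records
# by telemetry parameters. Field names and values are illustrative,
# not actual NPP telemetry mnemonics.

def query_test_data(records, instrument_state=None, bench_temp_range=None):
    """Return records matching an instrument state and/or an
    optical bench temperature range (Kelvin)."""
    results = []
    for rec in records:
        if instrument_state is not None and rec["state"] != instrument_state:
            continue
        if bench_temp_range is not None:
            lo, hi = bench_temp_range
            if not (lo <= rec["bench_temp_k"] <= hi):
                continue
        results.append(rec)
    return results

# Made-up thermal-vacuum test records for the usage example
records = [
    {"test_id": "TVAC-01", "state": "operational", "bench_temp_k": 285.1},
    {"test_id": "TVAC-02", "state": "safe", "bench_temp_k": 290.4},
    {"test_id": "TVAC-03", "state": "operational", "bench_temp_k": 292.0},
]
hits = query_test_data(records, instrument_state="operational",
                       bench_temp_range=(284.0, 286.0))
```

In practice such queries would run against the repository's database rather than in-memory lists, but the filtering logic is the same idea.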
The Central Processing and Data Distribution
The central processing and data distribution capability is located within the NOAA facility that will process NPP data operationally. These assets support the collection, storage, distribution, and reformatting of mission, ancillary, auxiliary, and correlative data for the geographically distributed Cal/Val teams. The central processing capability is intended to perform functions that either benefit multiple Cal/Val teams (e.g., SDR work) or reduce the data flow (e.g., matchup generation).
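Matchup generation pairs satellite observations with correlative measurements that fall within chosen space and time windows, so that only the matched pairs need to flow to the distributed teams. A minimal brute-force sketch follows; the window sizes, record layout, and station names are assumptions for illustration, not program values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def generate_matchups(obs, truth, max_km=25.0, max_minutes=30.0):
    """Pair satellite observations with correlative records that fall
    inside both the distance and time windows. Brute force for clarity;
    an operational system would spatially index the data."""
    pairs = []
    for o in obs:
        for t in truth:
            near = haversine_km(o["lat"], o["lon"], t["lat"], t["lon"]) <= max_km
            timely = abs(o["t_min"] - t["t_min"]) <= max_minutes
            if near and timely:
                pairs.append((o["id"], t["id"]))
    return pairs

# Illustrative observations and correlative (e.g., radiosonde) records
obs = [{"id": "scan1", "lat": 38.0, "lon": -77.0, "t_min": 0.0},
       {"id": "scan2", "lat": 45.0, "lon": -93.0, "t_min": 5.0}]
truth = [{"id": "sonde_A", "lat": 38.1, "lon": -77.1, "t_min": 10.0},
         {"id": "sonde_B", "lat": 10.0, "lon": 20.0, "t_min": 5.0}]
matchups = generate_matchups(obs, truth)
```

The data-flow reduction comes from the fact that only matched pairs, rather than full orbits of sensor data, need to be shipped to the analysts.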
A shared-access software repository is provided, containing algorithm processing modules from the IDPS operational code, tools to run these modules on a “scientist-friendly” platform such as Linux, and a platform for sharing analysis software tools, all with configuration management and change tracking. This capability is especially important for managing, vetting, testing, and verifying any changes to the operational code that may be proposed by the Cal/Val team.
System Visualization Tools
Heritage Cal/Val programs have demonstrated that observed anomalies are often traceable to sensor geometry relative to the satellite, earth, sun, or other satellites. The ability of the Cal/Val team to visualize these relationships and correlate them with the mission data and telemetry is key to rapid issue resolution. The infrastructure team is developing such a capability and expects to demonstrate the tool to the Cal/Val team 6–8 months prior to launch. The software will be freely distributed to the Cal/Val team and will run in a desktop environment. In addition, these tools will be available to Cal/Val scientists visiting the IPO.
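One common geometry check of this kind correlates anomalies with the angle between the sensor boresight and the sun vector, flagging times when the sun falls inside an exclusion cone around the field of view. A vector-math sketch, with an illustrative cone half-angle and made-up vectors (not values from any NPP sensor):

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point round-off
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def sun_intrusion(boresight, sun_vec, exclusion_half_angle_deg=30.0):
    """True when the sun direction lies inside the sensor's
    exclusion cone (hypothetical 30-degree half-angle)."""
    return angle_deg(boresight, sun_vec) < exclusion_half_angle_deg

# Illustrative unit-ish vectors in a common spacecraft body frame
boresight = (0.0, 0.0, -1.0)   # nadir-pointing sensor
sun_near = (0.2, 0.0, -0.98)   # sun close to the boresight
sun_far = (1.0, 0.0, 0.0)      # sun near the horizon plane
```

A visualization tool would evaluate a check like this along the orbit and overlay the flagged intervals on the mission data and telemetry, making geometry-driven anomalies stand out.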
The NPP program is the pathfinder for the NPOESS program in many ways, including the development and maturation of the Calibration and Validation Program. The Integrated Program Office has put together discipline teams, led by internationally recognized experts, to plan and execute the Cal/Val. This planning takes the most effective heritage approaches and tools as its basis, but updates them in light of the availability of data sources, known sensor performance and issues, and recent scientific developments. The phases of the Cal/Val program have been identified and are designed to optimize the impact of the available resources. The details of these plans are captured in 11 volumes, which will be released to the broader community in 2010.
Karen M. St. Germain (SM‘02) received the BS degree in electrical engineering from Union College, Schenectady, NY in 1987, and the Ph.D. degree from the University of Massachusetts, Amherst, in 1993. She joined NOAA in 2004 and is currently Chief of the Data Products Division at the National Polar-Orbiting Operational Environmental Satellite System (NPOESS) Integrated Program Office. NPOESS is the next generation operational weather and environment satellite system supporting U.S. National civilian and defense weather prediction and environmental observation. Dr. St. Germain is responsible for demonstrating the scientific integrity of the data processing algorithms, pre- and post-launch sensor calibration and data product validation for the nine NPOESS sensors and the 38 operational earth, atmosphere and space environmental data products. Dr. St. Germain has been a member of the IEEE Geoscience and Remote Sensing Society since 1988. She served as an associate editor of the IEEE GRSS Newsletter from 1994 to 1996, and was elected to the AdCom in 1997. She served as the Membership Chairman from 1997 to 1998, as the Vice President for Meetings and Symposia from 1998 to 2001, and currently serves as the Vice President for Operations and Finance. Dr. St. Germain was Co-Chairman of the Technical Program for IGARSS 2000. She served on the U.S. National Academy of Sciences National Research Council Committee on Radio Frequencies (CORF) from 2000–2007 and served as the chairman from 2005–2007. Dr. St. Germain is the general co-chair of IGARSS 2010 and hopes to see you all in Hawaii.