- API
NEW HORIZONS SDC PLUTO CRUISE RAW V2.0
data.nasa.gov | Last Updated 2023-01-26T20:54:05.000ZThis data set contains Raw data taken by the New Horizons Student Dust Counter (SDC) instrument during the Pluto Cruise mission phase. This is VERSION 2.0 of this data set. SDC collected science data intermittently during the hibernation years following the Jupiter encounter, designated as the PLUTOCRUISE phase. There were also Annual Checkouts (ACOs), STIM calibrations, Noise calibrations, and an anomaly in November 2007. SDC's main science data collection periods were during hibernation. Science data are also taken intermittently during ACOs, but users must analyze these data with care, since there is usually more activity on the spacecraft than during hibernation. STIM and Noise refer to scheduled calibrations; they occurred sporadically in the early years of the mission and settled into a regular cadence of one per year after the Jupiter encounter. Note that some SDC data files have the same start and stop time and a zero exposure time. This is because the start and stop times for SDC data files are the event times of the first and last events in the file, so for files that contain a single event these two values are the same. The changes in Version 2.0 include re-generated ancillary data in the data product, updated geometry from newer SPICE kernels, minor edits to the documentation, catalogs, etc., and resolution of liens from the December 2014 review, plus those from the May 2016 review of the Pluto Encounter data sets. New observations added with this version (V2.0) include ongoing cruise observations from August 2014 through January 2015.
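The start/stop convention above can be illustrated with a minimal sketch; the event-time lists are hypothetical, and this is not the actual SDC file format:

```python
def exposure_time(event_times):
    """Exposure time as (last event time) - (first event time).

    For a file containing a single event, start and stop coincide
    and the exposure time is zero, as noted in the data set description.
    """
    if not event_times:
        raise ValueError("file contains no events")
    return max(event_times) - min(event_times)

# Multi-event file: nonzero exposure.
print(exposure_time([100.0, 250.0, 600.0]))  # 500.0
# Single-event file: start == stop, so exposure is zero.
print(exposure_time([4200.5]))  # 0.0
```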
- API
Classification of Aeronautics System Health and Safety Documents
data.nasa.gov | Last Updated 2020-01-29T01:57:57.000ZMost complex aerospace systems have many text reports on safety, maintenance, and associated issues. The Aviation Safety Reporting System (ASRS) spans several decades and contains over 700,000 reports. The Aviation Safety Action Plan (ASAP) contains over 12,000 reports from various airlines. Problem categorizations have been developed for both ASRS and ASAP to enable identification of system problems. However, repository volume and complexity make human analysis difficult. Multiple experts are needed, and they often disagree on classifications; even the same person has classified the same document differently at different times as their experience evolved. Consistent classification is necessary to support tracking trends in problem categories over time, so a decision support system that performs consistent document classification quickly and over large repositories would be useful. We discuss the results of two algorithms we have developed to classify ASRS and ASAP documents. The first is Mariana, a support vector machine (SVM) that uses simulated annealing to optimize the model's hyperparameters. The second is a classifier built on top of nonnegative matrix factorization (NMF), which seeks a model of document features that add up in various combinations to form documents. We tested both methods on ASRS and ASAP documents, with the latter categorized in two different ways. We illustrate the potential of NMF to provide document features that are interpretable and indicative of topics. We also briefly discuss the tool into which we have incorporated Mariana to allow human experts to provide feedback on the document categorizations.
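As a rough illustration of the NMF idea described above (this is not the authors' Mariana or their actual pipeline, and the term-document counts are invented), a tiny sketch using the classic Lee-Seung multiplicative updates:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Factor a nonnegative docs-x-terms matrix V as W @ H (Lee-Seung updates)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1   # document-factor loadings
    H = rng.random((k, m)) + 0.1   # factor-term loadings
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy term-document counts: docs 0-1 use one vocabulary block, docs 2-3 another.
V = np.array([[3, 2, 0, 0],
              [2, 3, 0, 0],
              [0, 0, 3, 2],
              [0, 0, 2, 3]], dtype=float)
W, H = nmf(V, k=2)

# Each document's dominant factor can serve as an interpretable topic label.
topics = W.argmax(axis=1)
print(topics)  # e.g. [0 0 1 1] or [1 1 0 0]: the two document groups separate
```

The factor-term matrix `H` plays the role of the interpretable document features the abstract mentions: each row concentrates weight on the terms that define one topic.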
- API
Metrics for Evaluating Performance of Prognostic Techniques
data.nasa.gov | Last Updated 2020-01-29T03:23:28.000ZPrognostics is an emerging concept in condition-based maintenance (CBM) of critical systems. Along with developing the fundamentals of confidently predicting Remaining Useful Life (RUL), the technology calls for fielded applications as it inches towards maturation. This requires a stringent performance evaluation so that the significance of the concept can be fully exploited. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, and domain dynamics. Instead, the research community has used a variety of metrics based largely on convenience with respect to their respective requirements, and very little attention has been focused on establishing a common ground to compare different efforts. This paper surveys the metrics already used for prognostics in a variety of domains including medicine, nuclear, automotive, aerospace, and electronics. It also considers other domains that involve prediction-related tasks, such as weather and finance. Differences and similarities between these domains and health maintenance have been analyzed to help understand which performance evaluation methods may or may not be borrowed. Further, these metrics have been categorized in several ways that may be useful in deciding upon a suitable subset for a specific application. Some important prognostic concepts have been defined using a notational framework that enables coherent interpretation of different metrics. Last but not least, a list of metrics has been suggested to assess critical aspects of RUL predictions before they are fielded in real applications.
- API
Metrics for Offline Evaluation of Prognostic Performance
data.nasa.gov | Last Updated 2020-01-29T01:57:42.000ZPrognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, and domain dynamics. The research community has used a variety of metrics based largely on convenience and their respective requirements, and very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several recently introduced evaluation metrics tailored for prognostics that were shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms; in addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
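One widely used metric in this family is the α-λ test: at a checkpoint time λ, a prediction passes if it falls within ±α of the true RUL at that time. A minimal sketch of one common form of the check (the numbers are hypothetical):

```python
def alpha_lambda_pass(rul_true, rul_pred, alpha=0.2):
    """True if the prediction lies within +/- alpha * true RUL of the truth.

    This is one common form of the alpha-lambda test: the accuracy cone
    shrinks in absolute terms as the true RUL shrinks toward end of life.
    """
    lo, hi = (1 - alpha) * rul_true, (1 + alpha) * rul_true
    return lo <= rul_pred <= hi

# True RUL of 100 hours at checkpoint lambda, alpha = 20%:
print(alpha_lambda_pass(100.0, 110.0))  # True: inside the [80, 120] cone
print(alpha_lambda_pass(100.0, 130.0))  # False: outside the cone
```

Evaluating this pass/fail result at a sequence of λ checkpoints gives the kind of visual perspective on algorithm performance that the abstract describes.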
- API
NEW HORIZONS SDC PLUTO CRUISE CALIBRATED V2.0
data.nasa.gov | Last Updated 2023-01-26T20:25:34.000ZThis data set contains Calibrated data taken by the New Horizons Student Dust Counter (SDC) instrument during the Pluto Cruise mission phase. This is VERSION 2.0 of this data set. SDC collected science data intermittently during the hibernation years following the Jupiter encounter, designated as the PLUTOCRUISE phase. There were also Annual Checkouts (ACOs), STIM calibrations, Noise calibrations, and an anomaly in November 2007. SDC's main science data collection periods were during hibernation. Science data are also taken intermittently during ACOs, but users must analyze these data with care, since there is usually more activity on the spacecraft than during hibernation. STIM and Noise refer to scheduled calibrations; they occurred sporadically in the early years of the mission and settled into a regular cadence of one per year after the Jupiter encounter. Note that some SDC data files have the same start and stop time and a zero exposure time. This is because the start and stop times for SDC data files are the event times of the first and last events in the file, so for files that contain a single event these two values are the same. The changes in Version 2.0 include re-generated ancillary data in the data product, updated geometry from newer SPICE kernels, minor edits to the documentation, catalogs, etc., and resolution of liens from the December 2014 review, plus those from the May 2016 review of the Pluto Encounter data sets. New observations added with this version (V2.0) include ongoing cruise observations from August 2014 through January 2015.
- API
SBIR/STTR Programs
data.nasa.gov | Last Updated 2020-01-29T04:18:05.000Z<p>The NASA SBIR and STTR programs fund the research, development, and demonstration of innovative technologies that fulfill NASA needs as described in the annual Solicitations and have significant potential for successful commercialization. If you are a small business concern (SBC) with 500 or fewer employees or a non-profit research institution (RI) such as a university or a research laboratory with ties to an SBC, then NASA encourages you to learn more about the SBIR and STTR programs as a potential source of seed funding for the development of your innovations.</p><p><strong>The SBIR and STTR programs have 3 phases</strong>:</p><ul><li><strong>Phase I</strong> is the opportunity to establish the scientific, technical, and commercial feasibility of the proposed innovation in fulfillment of NASA needs.</li><li><strong>Phase II</strong> is focused on the development, demonstration, and delivery of the proposed innovation.</li><li><strong>Phase III</strong> is the commercialization of innovative technologies, products, and services resulting from either a Phase I or Phase II contract. Phase III contracts are funded from sources other than the SBIR and STTR programs and may be awarded without further competition.</li></ul><p>SBIR and STTR Phase I contracts last for 6 months with a maximum funding of $125,000, and Phase II contracts last for 24 months with a maximum funding of $750,000 - $1.5 million.</p><p><strong>Opportunity for Continued Technology Development Post-Phase II</strong>:</p><p>The NASA SBIR/STTR Program currently has in place two initiatives for supporting its small business partners past the basic Phase I and Phase II elements of the program that emphasize opportunities for commercialization. 
Specifically, the NASA SBIR/STTR Program has the Phase II Enhancement (Phase II-E) and Phase II eXpanded (Phase II-X) contract options.&nbsp;</p><p><strong>Please review the links below to obtain more information on the SBIR/STTR programs.</strong></p><ul><li><strong><a target="_blank" href="http://sbir.gsfc.nasa.gov/sites/default/files/ParticipationGuide.pdf">Participation Guide</a></strong></li></ul><p>Provides an overview of the SBIR and STTR programs as implemented by NASA</p><ul><li><strong><a href="http://sbir.gsfc.nasa.gov/solicitations">Program Solicitations</a></strong></li></ul><p>Provides access to the annual SBIR/STTR Solicitations containing detailed information on the program eligibility requirements, proposal instructions and research topics and subtopics</p><ul><li><strong><a href="http://sbir.gsfc.nasa.gov/prg_sched_anncmnt">Schedule and Awards</a></strong></li></ul><p>Schedule and links for the SBIR/STTR solicitations and selection announcements</p><ul><li><strong><a href="http://sbir.gsfc.nasa.gov/content/additional-sources-assistance">Sources of Assistance</a></strong></li></ul><p>Federal and non-Federal sources of assistance for small business</p><ul><li><strong><a href="http://sbir.gsfc.nasa.gov/abstract_archives">Awarded Abstracts</a></strong></li></ul><p>Search our complete archive of awarded project abstracts to learn about what NASA has funded</p><ul><li><strong><a href="http://sbir.gsfc.nasa.gov/content/frequently-asked-questions">Frequently Asked Questions</a></strong></li></ul><p>&nbsp;Still have questions? Visit the program FAQs</p>
- API
SIAM 2007 Text Mining Competition dataset
data.nasa.gov | Last Updated 2020-01-29T04:25:03.000Z**Subject Area:** Text Mining **Description:** This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining algorithms for document classification. The documents in question were aviation safety reports that documented one or more problems that occurred during certain flights. The goal was to label the documents with respect to the types of problems that were described. This is a subset of the Aviation Safety Reporting System (ASRS) dataset, which is publicly available. **How Data Was Acquired:** The data for this competition came from human-generated reports on incidents that occurred during a flight. **Sample Rates, Parameter Description, and Format:** There is one document per incident. The datasets are in raw text format. All documents for each set are contained in a single file, and each row in this file corresponds to a single document. The first characters on each line of the file are the document number; a tilde separates the document number from the text itself. **Anomalies/Faults:** This is a document category classification problem.
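The record layout described above (document number, a tilde, then the report text) can be split with a short sketch; the sample line below is invented, not a real record from the dataset:

```python
def parse_record(line):
    """Split one 'docnum~text' record into (doc_number, text)."""
    doc_num, _, text = line.rstrip("\n").partition("~")
    return doc_num.strip(), text.strip()

sample = "1042~ACFT EXPERIENCED ENG VIBRATION DURING CLB."
num, text = parse_record(sample)
print(num, "->", text)  # 1042 -> ACFT EXPERIENCED ENG VIBRATION DURING CLB.
```

`str.partition` splits only on the first tilde, so any tildes that happen to appear inside the report text are preserved.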
- API
IceCube Level 1 Radiance Data and Codes
data.nasa.gov | Last Updated 2023-01-31T14:55:30.000ZThis zipped metadata file can be expanded into two folders. One folder contains the daily calibrated Level 1 radiance and geolocation data in HDF5 format, and the other folder contains the main IDL codes that process the data and make plots (mainly the plots for the paper Gong et al. 2021, which is under review for the Earth System Science Data journal). Each folder contains a README file to guide readers through the file names, variable namelist, quality flags, code usage, etc.
- API
OMPS/NPP PCA SO2 Total Column 1-Orbit L2 Swath 50x50km V2 (OMPS_NPP_NMSO2_PCA_L2) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:47:20.000ZThe OMPS_NPP_NMSO2_PCA_L2 product is part of the MEaSUREs (Making Earth Science Data Records for Use in Research Environments) suite of products. It is retrieved from the NASA/NOAA Suomi National Polar-orbiting Partnership (SNPP) Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) spectrometer and provides contiguous daily global monitoring of anthropogenic and volcanic sulfur dioxide (SO2), an important pollutant and aerosol precursor that affects both air quality and the climate. The product is based on the NASA Goddard Space Flight Center principal component analysis (PCA) spectral fitting algorithm (Li et al., 2013, 2017), and continues (Zhang et al., 2017) NASA's Earth Observing System (EOS) standard Aura/Ozone Monitoring Instrument SO2 product (OMSO2). The latest OMPS_NPP_NMSO2_PCA_L2 V2 product uses new Jacobian lookup tables and more realistic model-based a priori profiles in anthropogenic SO2 retrievals. This helps to more accurately account for the pixel-to-pixel variation in SO2 sensitivity due to different factors such as the vertical distribution of SO2, solar and viewing angles, surface reflectivity, and cloudiness. Compared with the previous OMPS_NPP_NMSO2_PCA_L2 V1.2 product, which assumes the same SO2 sensitivity for all OMPS pixels, the new V2 anthropogenic SO2 retrievals have reduced retrieval biases, especially over background regions (see Figure 1 for an example). The same updated PCA SO2 retrieval algorithm (Li et al., 2020) is also used to produce the recently released OMSO2 V2 product (doi:10.5067/Aura/OMI/DATA2022). The new OMPS_NPP_NMSO2_PCA_L2 V2 product thus offers enhanced consistency between the NASA EOS standard (OMI) and continuity (OMPS) SO2 data records. Sulfur dioxide (SO2) is a short-lived gas primarily produced by volcanoes, power plants, refineries, metal smelting, and burning of fossil fuels. 
Where SO2 remains near the Earth's surface, it is toxic, causes acid rain, and degrades air quality. Where SO2 is lofted into the free troposphere, it forms aerosols that can alter cloud reflectivity and precipitation. In the stratosphere, volcanic SO2 forms sulfate aerosols that can result in climate change.
- API
Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates
data.nasa.gov | Last Updated 2020-01-29T01:48:43.000ZUncertainty Representation and Management (URM) is an integral part of prognostic system development. As capabilities of prediction algorithms evolve, research in developing newer and more competent methods for URM is gaining momentum. Beyond initial concepts, more sophisticated prediction distributions are obtained that are not limited to assumptions of Normality and unimodal characteristics. Most prediction algorithms yield non-parametric distributions that are then approximated as known ones for analytical simplicity, especially for performance assessment methods. Although applying the prognostic metrics introduced earlier with their simple definitions has proven useful, a lot of information about the distributions gets thrown away. In this paper, several techniques have been suggested for incorporating information available from Remaining Useful Life (RUL) distributions while applying the prognostic performance metrics. These approaches offer a convenient and intuitive visualization of algorithm performance with respect to metrics like prediction horizon and α-λ performance, and also quantify the corresponding performance while incorporating the uncertainty information. A variety of options have been shortlisted that could be employed depending on whether the distributions can be approximated to some known form or cannot be parameterized. This paper presents a qualitative analysis of how and when these techniques should be used, along with a quantitative comparison on a real application scenario. A particle filter based prognostic framework has been chosen as the candidate algorithm on which to evaluate the performance metrics due to its unique advantages in uncertainty management and flexibility in accommodating non-linear models and non-Gaussian noise. We investigate how performance estimates are affected by choosing different options for integrating the uncertainty estimates. 
This allows us to identify the advantages and limitations of these techniques and their applicability towards a standardized performance evaluation method.
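One simple option for using the full RUL distribution rather than a point estimate, in the spirit of the techniques discussed above, is to score the probability mass that falls inside the α-bounds. A minimal sketch assuming a sample-based (e.g. particle-filter) RUL distribution; the sample values are hypothetical:

```python
import numpy as np

def mass_within_alpha(rul_samples, rul_true, alpha=0.2):
    """Fraction of a sampled RUL distribution inside +/- alpha * true RUL.

    For a particle filter, rul_samples would be the (equally weighted)
    RUL values implied by the particles at the prediction time.
    """
    s = np.asarray(rul_samples, dtype=float)
    lo, hi = (1 - alpha) * rul_true, (1 + alpha) * rul_true
    return float(np.mean((s >= lo) & (s <= hi)))

# Hypothetical RUL samples around a true RUL of 100 (alpha bounds: [80, 120]):
samples = [85, 95, 100, 105, 118, 130, 70, 102, 99, 111]
print(mass_within_alpha(samples, 100.0))  # 0.8: 8 of 10 samples inside
```

Thresholding this mass (e.g. requiring at least half of it inside the bounds) turns the distribution-aware score back into the pass/fail form of the point-estimate metrics, which is one way such options can be compared.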