India Annual Winter Cropped Area, 2001-2016
data.nasa.gov | Last Updated 2022-01-17T05:29:43.000Z
The India Annual Winter Cropped Area, 2001-2016 data set consists of annual winter cropped areas for most of India (excluding the Northeastern states) from 2000-2001 to 2015-2016. The data set utilizes the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI; spatial resolution: 250 m) for the winter growing season (October-March). An automated algorithm identifies the EVI peak in each pixel for each year and linearly scales the EVI value to a cropped-area fraction between 0% and 100% within that pixel. The maps were then resampled to 1 km and validated against high-resolution QuickBird, RapidEye, SkySat, and WorldView-2 images spanning 2008 to 2016 across 11 agricultural regions of India. The spatial resolution of the data set is therefore 1 km, resampled from 250 m. The data are distributed as GeoTIFF and NetCDF files in the WGS 84 projection.
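The peak-EVI scaling step described above can be sketched as follows. This is an illustrative reimplementation, not the data set's actual algorithm, and the scaling endpoints `evi_min`/`evi_max` are assumed values for the sake of the example:

```python
import numpy as np

def winter_cropped_fraction(evi_stack, evi_min=0.1, evi_max=0.6):
    """Scale per-pixel peak EVI to a 0-100% cropped-area estimate.

    evi_stack: array of shape (time, rows, cols) holding EVI composites
    for the October-March winter season. The endpoints evi_min/evi_max
    are illustrative assumptions, not the data set's calibrated values.
    """
    peak = evi_stack.max(axis=0)                   # per-pixel seasonal peak
    frac = (peak - evi_min) / (evi_max - evi_min)  # linear scaling
    return np.clip(frac, 0.0, 1.0) * 100.0         # percent cropped area

# Toy season of three composites over a 2x2 scene.
evi = np.array([[[0.1, 0.3], [0.6, 0.2]],
                [[0.1, 0.5], [0.7, 0.2]],
                [[0.1, 0.4], [0.5, 0.1]]])
pct = winter_cropped_fraction(evi)
print(pct)  # 0% for the bare pixel, 100% where peak EVI >= evi_max
```

In the real product this per-pixel map at 250 m is then aggregated to the distributed 1 km grid.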
NEW HORIZONS SDC PLUTO CRUISE RAW V2.0
data.nasa.gov | Last Updated 2023-01-26T20:54:05.000Z
This data set contains raw data taken by the New Horizons Student Dust Counter (SDC) instrument during the Pluto Cruise mission phase. This is VERSION 2.0 of this data set. SDC collected science data intermittently during the hibernation years following the Jupiter encounter, designated as the PLUTOCRUISE phase. There were also Annual Checkouts (ACOs), STIM calibrations, Noise calibrations, and an anomaly in November 2007. SDC's main science data collection periods were during hibernation. During ACOs, science data are taken intermittently, but the user must be careful in analyzing these data since there is usually more activity on the spacecraft than during hibernation. STIM and Noise refer to scheduled calibrations; after the Jupiter encounter they were done at a regular cadence of one per year, while in the early years of the mission they occurred sporadically. Note that some SDC data files have the same start and stop time and a zero exposure time. This is because the start and stop times for SDC data files are the event times of the first and last events in the file, so for files that contain a single event these two values are the same. The changes in Version 2.0 include regeneration of the ancillary data in the data product, updated geometry from newer SPICE kernels, minor edits to the documentation and catalogs, and resolution of liens from the December 2014 review as well as those from the May 2016 review of the Pluto Encounter data sets. New observations added with this version (V2.0) include ongoing cruise observations from August 2014 through January 2015.
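The start/stop convention above implies a simple way to spot single-event files; a minimal sketch, with field names that are illustrative rather than the actual PDS label keywords:

```python
from dataclasses import dataclass

@dataclass
class SdcFile:
    # Hypothetical fields: event times of the first and last events in
    # the file, in seconds. Real SDC products carry these in their labels.
    start: float
    stop: float

    @property
    def exposure(self) -> float:
        # Zero exposure does not mean "no data": a file whose first and
        # last event times coincide simply contains a single event.
        return self.stop - self.start

files = [SdcFile(start=100.0, stop=250.0), SdcFile(start=300.0, stop=300.0)]
single_event = [f for f in files if f.exposure == 0.0]
print(len(single_event))  # 1 -- the second file holds exactly one event
```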
Classification of Aeronautics System Health and Safety Documents
data.nasa.gov | Last Updated 2020-01-29T01:57:57.000Z
Most complex aerospace systems have many text reports on safety, maintenance, and associated issues. The Aviation Safety Reporting System (ASRS) spans several decades and contains over 700,000 reports. The Aviation Safety Action Plan (ASAP) contains over 12,000 reports from various airlines. Problem categorizations have been developed for both ASRS and ASAP to enable identification of system problems. However, repository volume and complexity make human analysis difficult. Multiple experts are needed, and they often disagree on classifications; even the same person has classified the same document differently at different times as their experience evolved. Consistent classification is necessary to support tracking trends in problem categories over time, so a decision support system that performs consistent document classification quickly and over large repositories would be useful. We discuss the results of two algorithms we have developed to classify ASRS and ASAP documents. The first is Mariana, a support vector machine (SVM) whose hyperparameters are optimized with simulated annealing. The second is a classifier built on top of nonnegative matrix factorization (NMF), which seeks a model of document features that add up in various combinations to form documents. We tested both methods on ASRS and ASAP documents, with the latter categorized in two different ways. We illustrate the potential of NMF to provide document features that are interpretable and indicative of topics. We also briefly discuss the tool into which we have incorporated Mariana to let human experts provide feedback on the document categorizations.
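The additive-features idea behind NMF can be shown in a few lines. This is a generic multiplicative-update sketch (Lee-Seung style), not the authors' implementation, and the toy matrix is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=500):
    """Factor a nonnegative docs-x-terms matrix V into W @ H.

    Rows of H act as topic-like features; W gives each document's
    nonnegative mixture of those features.
    """
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # multiplicative updates
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # keep both factors nonnegative
    return W, H

# Toy corpus: documents 0-1 use "safety" terms, documents 2-3 use
# "maintenance" terms; V is exactly rank 2, so two features suffice.
V = np.array([[3, 2, 0, 0],
              [6, 4, 0, 0],
              [0, 0, 2, 3],
              [0, 0, 4, 6]], dtype=float)
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
print(round(err, 3))  # small reconstruction error
```

A classifier can then be trained on the rows of W instead of raw term counts, which is one way to build classification on top of NMF.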
2008 Environmental Performance Index (EPI)
data.nasa.gov | Last Updated 2022-01-17T05:02:20.000Z
The 2008 Environmental Performance Index (EPI) centers on two broad environmental protection objectives: (1) reducing environmental stresses on human health, and (2) promoting ecosystem vitality and sound natural resource management. Derived from a careful review of the environmental literature, these twin goals mirror the priorities expressed by policymakers. Environmental health and ecosystem vitality are gauged using 25 indicators tracked in six well-established policy categories: Environmental Health (Environmental Burden of Disease, Water, and Air Pollution), Air Pollution (effects on ecosystems), Water (effects on ecosystems), Biodiversity and Habitat, Productive Natural Resources (Forestry, Fisheries, and Agriculture), and Climate Change. The 2008 EPI uses a proximity-to-target methodology in which performance on each indicator is rated on a 0 to 100 scale (100 represents "at target"). By identifying specific targets and measuring how close each country comes to them, the EPI provides a foundation for policy analysis and a context for evaluating performance. Issue-by-issue and aggregate rankings facilitate cross-country comparisons both globally and within relevant peer groups. The 2008 EPI is the result of collaboration among the Yale Center for Environmental Law and Policy (YCELP), the Columbia University Center for International Earth Science Information Network (CIESIN), the World Economic Forum (WEF), and the Joint Research Centre (JRC) of the European Commission.
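The proximity-to-target scoring can be sketched as follows; the baseline ("worst") and target values below are invented for illustration, not the 2008 EPI's actual cutoffs:

```python
def proximity_to_target(value, worst, target):
    """Score an indicator on the EPI-style 0-100 scale.

    100 means the country is at (or beyond) the policy target; 0 means
    at (or beyond) the worst-case baseline. Works in both directions:
    if lower values are better, pass worst > target.
    """
    score = 100.0 * (value - worst) / (target - worst)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 scale

# Hypothetical access-to-safe-water indicator: baseline 20%, target 100%.
print(proximity_to_target(80.0, worst=20.0, target=100.0))  # 75.0
```

Aggregate EPI scores are then built by combining such indicator scores across the policy categories.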
NEW HORIZONS SDC PLUTO CRUISE CALIBRATED V2.0
data.nasa.gov | Last Updated 2023-01-26T20:25:34.000Z
This data set contains calibrated data taken by the New Horizons Student Dust Counter (SDC) instrument during the Pluto Cruise mission phase. This is VERSION 2.0 of this data set. SDC collected science data intermittently during the hibernation years following the Jupiter encounter, designated as the PLUTOCRUISE phase. There were also Annual Checkouts (ACOs), STIM calibrations, Noise calibrations, and an anomaly in November 2007. SDC's main science data collection periods were during hibernation. During ACOs, science data are taken intermittently, but the user must be careful in analyzing these data since there is usually more activity on the spacecraft than during hibernation. STIM and Noise refer to scheduled calibrations; after the Jupiter encounter they were done at a regular cadence of one per year, while in the early years of the mission they occurred sporadically. Note that some SDC data files have the same start and stop time and a zero exposure time. This is because the start and stop times for SDC data files are the event times of the first and last events in the file, so for files that contain a single event these two values are the same. The changes in Version 2.0 include regeneration of the ancillary data in the data product, updated geometry from newer SPICE kernels, minor edits to the documentation and catalogs, and resolution of liens from the December 2014 review as well as those from the May 2016 review of the Pluto Encounter data sets. New observations added with this version (V2.0) include ongoing cruise observations from August 2014 through January 2015.
Metrics for Evaluating Performance of Prognostic Techniques
data.nasa.gov | Last Updated 2020-01-29T03:23:28.000Z
Prognostics is an emerging concept in condition based maintenance (CBM) of critical systems. Along with developing the fundamentals of being able to confidently predict Remaining Useful Life (RUL), the technology calls for fielded applications as it inches towards maturation. This requires a stringent performance evaluation so that the significance of the concept can be fully exploited. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to varied end-user requirements across applications, time scales, available information, and domain dynamics, among other issues. Instead, the research community has used a variety of metrics chosen largely for convenience with respect to their respective requirements. Very little attention has been paid to establishing a common ground for comparing different efforts. This paper surveys the metrics already used for prognostics in a variety of domains including medicine, nuclear, automotive, aerospace, and electronics. It also considers other domains that involve prediction-related tasks, such as weather and finance. Differences and similarities between these domains and health maintenance have been analyzed to help understand which performance evaluation methods may or may not be borrowed. Further, these metrics have been categorized in several ways that may be useful in deciding upon a suitable subset for a specific application. Some important prognostic concepts have been defined using a notational framework that enables coherent interpretation of different metrics. Last but not least, a list of metrics is suggested to assess critical aspects of RUL predictions before they are fielded in real applications.
Metrics for Offline Evaluation of Prognostic Performance
data.nasa.gov | Last Updated 2020-01-29T01:57:42.000Z
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to varied end-user requirements across applications, time scales, available information, and domain dynamics, among other issues. The research community has used a variety of metrics chosen largely for convenience and their respective requirements; very little attention has been focused on establishing a standardized approach to comparing different efforts. This paper presents several recently introduced evaluation metrics tailored for prognostics that have been shown to evaluate various algorithms more effectively than conventional metrics. Specifically, it offers a detailed discussion of how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms, and in addition to quantitative assessment they offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
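One family of such metrics checks whether an RUL estimate stays within an accuracy band around the true RUL as end of life approaches. The sketch below is a simplified point-estimate version of that idea (the metrics discussed above also handle probabilistic predictions), with all numbers invented:

```python
def within_alpha_bounds(times, rul_estimates, eol, alpha=0.2):
    """Flag, at each prediction time t, whether the estimated RUL lies
    within +/- alpha (20% here) of the true RUL.

    eol: assumed end-of-life time; true RUL = eol - t.
    """
    hits = []
    for t, est in zip(times, rul_estimates):
        true_rul = eol - t
        lo, hi = (1 - alpha) * true_rul, (1 + alpha) * true_rul
        hits.append(lo <= est <= hi)
    return hits

# A predictor that is poor early, accurate mid-life, and drifts late.
hits = within_alpha_bounds(times=[0, 25, 50, 75],
                           rul_estimates=[130, 80, 48, 18],
                           eol=100)
print(hits)  # [False, True, True, False]
```

Because the allowed band shrinks along with the true RUL, the check demands tighter predictions as failure nears.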
GPM Ground Validation SEA FLUX ICE POP V1
data.nasa.gov | Last Updated 2022-06-07T06:12:15.000Z
The GPM Ground Validation SEA FLUX ICE POP dataset includes estimates of ocean surface latent and sensible heat fluxes, 10 m wind speed, 10 m air temperature, 10 m air humidity, and skin sea surface temperature in support of the International Collaborative Experiments for Pyeongchang 2018 Olympic and Paralympic Winter Games (ICE-POP) field campaign in South Korea. The two major objectives of ICE-POP were to study severe winter weather events in regions of complex terrain and to improve short-term forecasting of such events. These data contributed to the Global Precipitation Measurement mission Ground Validation (GPM GV) campaign efforts to improve satellite estimates of orographic winter precipitation. The data are available in netCDF-4 format and cover the period from September 1, 2017 through April 30, 2018.
Global gene expression analysis highlights microgravity sensitive key genes in soleus and EDL of 30 days space flown mice
data.nasa.gov | Last Updated 2023-01-26T18:49:58.000Z
Microgravity exposure and chronic muscle disuse are two of the main causes of adaptive skeletal muscle atrophy in humans and murine animals under physiological conditions. The aim of this study was to investigate skeletal muscle adaptation to microgravity in mouse soleus and extensor digitorum longus (EDL) at both the morphological and global gene expression levels. Adult male C57BL/6N mice were flown aboard the BION-M1 biosatellite for 30 days on orbit (BF) or housed in a replicate flight habitat on Earth (BG) as the reference flight control. In this study we investigated for the first time gene expression adaptation to 30 days of microgravity exposure in mouse soleus and EDL, highlighting potential new targets for improving countermeasures able to ameliorate or even prevent microgravity-induced atrophy in future spaceflights. Overall design: C57BL/6N mice were randomly divided into 3 groups: Bion Flown (BF) mice flown aboard the BION-M1 biosatellite in a microgravity environment for 30 days; Bion Ground (BG) mice housed in the same habitat as the flown animals but exposed to Earth gravity; and Flight Control (FC) mice housed in a standard animal facility.
Transcriptional analysis of liver from mice flown on the RR-6 mission
data.nasa.gov | Last Updated 2023-01-26T18:45:47.000Z
The objective of the Rodent Research-6 (RR-6) study was to evaluate muscle atrophy in mice during spaceflight and to test the efficacy of a novel therapeutic to mitigate muscle wasting. The experiment involved an implantable subcutaneous nanochannel delivery system (nDS; between the scapulae) which delivered the drug formoterol (FMT; a selective beta-2 adrenoceptor agonist) over time. To this end, a cohort of forty 32-week-old female C57BL/6NTac mice were either sham operated or implanted with vehicle- or treatment-filled nDS and launched in two Transporters (20 mice per Transporter) on SpaceX-13 on December 15, 2017. They were transferred to Rodent Habitats onboard the International Space Station (ISS) and maintained in microgravity for 29 days (N=20, Live Animal Return [LAR]) or >50 days (N=20, ISS Terminal). After 29 days, the 20 LAR animals were returned live to Earth on January 13, 2018. After splashdown, the animals were ambulatory on the ground for ~4 days until all subjects were processed during one day of dissections. Two Baseline groups of animals (LAR Baseline and FLT Baseline; N=20 each, 40 animals total; ~36 weeks old) were sacrificed at Kennedy Space Center (KSC; 12/9/17). A Ground Control group mimicking the Flight LAR group was housed at KSC, then shipped alive to Novartis facilities, where both the LAR and LAR Ground Control groups were processed (~41 weeks old; 1/16/18). All were anesthetized with isoflurane; blood samples were obtained by closed-chest cardiac puncture, and the animals were euthanized by exsanguination and thoracotomy. The 20 ISS Terminal mice were anesthetized via intraperitoneal injection of ketamine/xylazine/acepromazine over the course of four days of dissections (2/6/18 through 2/9/18; 53-56 days after launch; 44 weeks old at the time of on-orbit dissections). Blood sampling and euthanasia were conducted the same way as for the LAR and Baseline groups.
Following blood draw and hind limb dissection, the ISS Terminal animal carcasses were wrapped in aluminum foil, placed in a Ziploc bag, and stored at -80°C or colder until return. The ISS Terminal Ground Controls (at KSC) followed the same euthanasia timeline, methods, and preservation. Final processing of the frozen ISS Terminal, frozen ISS Terminal Ground Control, and frozen 0-day FLT Baseline animals was completed at Houston Methodist Research Institute in Houston, TX (5/21/18 through 5/24/18). GeneLab received liver samples from only sham-treated animals (no drug-treated animals) from the following groups. Flight: LAR (n=10), ISS Terminal (n=10); Ground Controls: LAR GC (n=9), ISS Terminal GC (n=10); LAR Baseline (n=10), ISS Terminal Baseline (n=10). Total RNA was extracted and sequenced at a target depth of 60 M clusters per sample (ribodepleted, paired-end, 150 bp reads).