- API
Nano Dust Analyzer Project
data.nasa.gov | Last Updated 2020-01-29T04:54:41.000Z
We propose to develop a new, highly sensitive instrument to confirm the existence of the so-called nano-dust particles, characterize their impact parameters, and measure their chemical composition. Simultaneous theoretical studies will be used to derive the expected mass and velocity ranges of these putative particles and to formulate science and measurement requirements for the future deployment of the proposed Nano-Dust Analyzer (NDA).
Early dust instruments onboard the Pioneer 8 and 9 and Helios spacecraft detected a flow of submicron-sized dust particles coming from the direction of the Sun. These particles originate in the inner solar system from mutual collisions among meteoroids and move on hyperbolic orbits that leave the Solar System under the prevailing radiation pressure force. Later dust instruments with higher sensitivity had to avoid looking toward the Sun because of interference from the solar wind and UV radiation, and thus contributed little to the characterization of the dust stream. The one exception is the Ulysses dust detector, which observed escaping dust particles high above the solar poles and confirmed the suspicion that charged nanometer-sized dust grains are carried to high heliographic latitudes by electromagnetic interactions with the Interplanetary Magnetic Field (IMF). Recently, the STEREO WAVES instruments recorded a large number of intense electric field signals, which were interpreted as impacts from nanometer-sized particles striking the spacecraft at velocities of about the solar wind speed. This high flux, and the strong spatial and/or temporal variations of nanometer-sized dust grains at low latitude, appears to be uncorrelated with the solar wind properties. This is a mystery, as it would require that the total collisional meteoroid debris inside 1 AU be cast into nanometer-sized fragments. The observed fluxes of inner-source pickup ions also point to the existence of a much enhanced dust population in the nanometer size range.
This new heliospheric phenomenon of nano-dust streams may have consequences throughout the planetary system, but as of yet no dust instrument exists that could shed light on their properties. We propose to develop a dust analyzer capable of detecting and analyzing these mysterious dust particles coming from the solar direction, and to embark upon complementary theoretical studies to understand their characteristics. The instrument is based on the Cassini Dust Analyzer (CDA), which has analyzed the composition of nanometer-sized dust particles emanating from the Jovian and Saturnian systems but could not be pointed toward the Sun. By applying technologies implemented in solar wind instruments and coronagraphs, a highly sensitive dust analyzer will be developed and tested in the laboratory. The dust analyzer shall be able to characterize the impact properties (impact charge and energy distribution of ions, from which the mass and speed of the impacting grains may be derived) and the chemical composition of individual nanometer-sized particles while exposed to solar wind and UV radiation. The measurements will enable us to identify the source of the dust by comparing its elemental composition with that of larger micrometeoroid particles of cometary and asteroidal origin, and will reveal the interaction of nano-dust with the interplanetary medium by investigating the relation of the dust flux to solar wind and IMF properties.
Complementary theoretical studies will be performed to understand the characteristics of nano-dust particles at 1 AU and to answer the following questions:
- What is the speed range at which nanometer sized particles impact
- API
TRMM (TMPA-RT) Near Real-Time Precipitation L3 1 day 0.25 degree x 0.25 degree V7 (TRMM_3B42RT_Daily) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:59:46.000Z
The TMPA (3B42RT_Daily) dataset has been discontinued as of Dec. 31, 2019, and users are strongly encouraged to shift to the successor IMERG dataset (doi: 10.5067/GPM/IMERGDE/DAY/06; 10.5067/GPM/IMERGDL/DAY/06). This daily accumulated precipitation product is generated from the Near Real-Time 3-hourly TRMM Multi-Satellite Precipitation Analysis TMPA (3B42RT). It is produced at the NASA GES DISC as a value-added product. Simple summation of valid retrievals in a grid cell is applied for the data day, and the result is given in mm. Although the grid extends from 60S to 60N, near real-time retrievals at high latitudes (beyond 50S/N) are considered very unreliable and thus are screened out of the daily accumulations. The beginning and ending times for every daily granule are listed in the file global attributes and are taken from the first and last 3-hourly granules participating in the aggregation; thus the time period covered by one daily granule amounts to 24 hours, which can be inspected in the file global attributes. Counts of valid retrievals for the day are provided for every variable, making it possible to compute conditional and unconditional mean precipitation for grid cells where fewer than 8 retrievals are available for the day. Efforts have been made to make the format of this derived product as similar as possible to the new Global Precipitation Measurement CF-compliant file format. The latency of this derived daily product is about 7 hours after the UTC day is closed. Users should be mindful that the price for the short latency of these data is reduced quality compared to the research-quality product. The information provided here on the TRMM mission, and on the original 3-hr 3B42 product, remains relevant for this derived product. Note, however, that this product is in netCDF-4 format. The following describes the derivation in more detail.
The daily accumulation is derived by summing *valid* retrievals in a grid cell for the data day. Since the 3-hourly source data are in mm/hr, a factor of 3 is applied to the sum. Thus, for every grid cell we have:

Pdaily = 3 * SUM{ Pi * 1[Pi valid] }, i = [1, Nf]
Pdaily_cnt = SUM{ 1[Pi valid] }

where:
Pdaily - Daily accumulation (mm)
Pi - 3-hourly input, in mm/hr
Nf - Number of 3-hourly files per day, Nf = 8
1[.] - Indicator function; 1 when Pi is valid, 0 otherwise
Pdaily_cnt - Number of valid retrievals in a grid cell per day

Grid cells for which Pdaily_cnt = 0 are set to the fill value in the daily files. Note that Pi = 0 is a valid value. On occasion, the 3-hourly source data have fill values for Pi in a very few grid cells. The total accumulation for such grid cells is still issued, in spite of the likelihood that the resulting accumulation has a larger uncertainty in representing the "true" daily total. These events are easily detectable using the "counts" variables that contain Pdaily_cnt, whereby users can screen out any grid cells for which Pdaily_cnt is less than Nf. There are various ways the accumulated daily error could be estimated from the source 3-hourly error. In this release, the daily error provided in the data files is calculated as follows: first, the squared 3-hourly errors are summed, and then the square root of the sum is taken. As with the precipitation, a factor of 3 is finally applied:

Perr_daily = 3 * { SUM[ (Perr_i * 1[Perr_i valid])^2 ] }^0.5, i = [1, Nf]
Ncnt_err = SUM( 1[Perr_i valid] )

where:
Perr_daily - Magnitude of the daily accumulated error power (mm)
Ncnt_err - The counts for the error variable

Thus computed, Perr_daily represents the worst-case scenario, which assumes that the error in the 3-hourly source data, given in mm/hr, accumulates within the 3-hourly period of the source data and then during the day. These values, however, can easily be converted to a root mean square error estimate of the rainfall rate:

rms_err = { (Perr_daily/3)^2 / Ncnt_err }^0.5
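The per-grid-cell arithmetic above can be sketched as follows. This is an illustrative reimplementation, not the operational GES DISC code; the fill value is a placeholder, and function names are hypothetical:

```python
# Illustrative sketch of the TMPA daily accumulation and worst-case
# error formulas for one grid cell. FILL marks an invalid retrieval.
FILL = -9999.0

def daily_accumulation(p3h):
    """p3h: up to Nf=8 3-hourly rates (mm/hr) for one grid cell.
    Returns (Pdaily in mm, Pdaily_cnt); Pdaily is FILL when no input is valid."""
    valid = [p for p in p3h if p != FILL]
    if not valid:
        return FILL, 0
    # Factor of 3 converts the sum of mm/hr rates to a mm accumulation.
    return 3.0 * sum(valid), len(valid)

def daily_error(perr3h):
    """Worst-case daily error: 3 * sqrt(sum of squared valid 3-hourly errors).
    Returns (Perr_daily in mm, Ncnt_err)."""
    valid = [e for e in perr3h if e != FILL]
    if not valid:
        return FILL, 0
    return 3.0 * sum(e * e for e in valid) ** 0.5, len(valid)
```

A cell with all eight retrievals valid at 1.0 mm/hr yields Pdaily = 24.0 mm with Pdaily_cnt = 8; cells where the count is below Nf can then be screened out exactly as the description suggests.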
- API
Metrics for Offline Evaluation of Prognostic Performance
data.nasa.gov | Last Updated 2020-01-29T01:57:42.000Z
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc., to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements, and very little attention has been focused on establishing a standardized approach to comparing different efforts. This paper presents several recently introduced evaluation metrics tailored for prognostics that were shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms, and in addition to quantitative assessment they offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
- API
CORONA Satellite Photographs from the U.S. Geological Survey
data.nasa.gov | Last Updated 2022-01-17T05:16:00.000Z
The first generation of U.S. photo intelligence satellites collected more than 860,000 images of the Earth's surface between 1960 and 1972. The classified military satellite systems code-named CORONA, ARGON, and LANYARD acquired photographic images from space and returned the film to Earth for processing and analysis. The images were originally used for reconnaissance and to produce maps for U.S. intelligence agencies. In 1992, an Environmental Task Force evaluated the application of early satellite data for environmental studies. Since the CORONA, ARGON, and LANYARD data were no longer critical to national security and could be of historical value for global change research, the images were declassified by Executive Order 12951 in 1995. The first successful CORONA mission was launched from Vandenberg Air Force Base in 1960. The satellite acquired photographs with a telescopic camera system and loaded the exposed film into recovery capsules. The capsules, or buckets, were de-orbited and retrieved by aircraft as they parachuted to Earth. The exposed film was developed and the images were analyzed for a range of military applications. The intelligence community used Keyhole (KH) designators to describe system characteristics and accomplishments. The CORONA systems were designated KH-1, KH-2, KH-3, KH-4, KH-4A, and KH-4B; the ARGON systems used the designator KH-5, and the LANYARD systems used KH-6. Mission numbers were a means of indexing the imagery and associated collateral data. A variety of camera systems were used with the satellites. Early systems carried a single panoramic camera (KH-1, KH-2, KH-3, and KH-6) or a single frame camera (KH-5). The later systems (KH-4, KH-4A, and KH-4B) carried two panoramic cameras with a separation angle of 30°, one camera looking forward and the other looking aft.
The original film and technical mission-related documents are maintained by the National Archives and Records Administration (NARA). Duplicate film sources held in the USGS EROS Center archive are used to produce digital copies of the imagery. Mathematical calculations based on camera operation and satellite path were used to approximate image coordinates. Since the accuracy of the coordinates varies according to the precision of information used for the derivation, users should inspect the preview image to verify that the area of interest is contained in the selected frame. Users should also note that the images have not been georeferenced.
- API
Metrics for Evaluating Performance of Prognostic Techniques
data.nasa.gov | Last Updated 2020-01-29T03:23:28.000Z
Prognostics is an emerging concept in condition based maintenance (CBM) of critical systems. Along with developing the fundamentals of being able to confidently predict Remaining Useful Life (RUL), the technology calls for fielded applications as it inches towards maturation. This requires a stringent performance evaluation so that the significance of the concept can be fully exploited. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc., to name a few issues. Instead, the research community has used a variety of metrics based largely on convenience with respect to their respective requirements. Very little attention has been focused on establishing a common ground to compare different efforts. This paper surveys the metrics that are already used for prognostics in a variety of domains including medicine, nuclear, automotive, aerospace, and electronics. It also considers other domains that involve prediction-related tasks, such as weather and finance. Differences and similarities between these domains and health maintenance have been analyzed to help understand what performance evaluation methods may or may not be borrowed. Further, these metrics have been categorized in several ways that may be useful in deciding upon a suitable subset for a specific application. Some important prognostic concepts have been defined using a notational framework that enables coherent interpretation of different metrics. Last but not least, a list of metrics has been suggested to assess critical aspects of RUL predictions before they are fielded in real applications.
- API
Optimal Alarm Systems
data.nasa.gov | Last Updated 2020-01-29T03:25:13.000Z
An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It currently uses Kalman filtering for dynamic systems to provide a layer of predictive capability for the forecasting of adverse events. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. Because the alarm regions for an optimal level-crossing predictor cannot be expressed in closed form, one of our aims has been to investigate approximations for the design of an optimal alarm system. Approximations to this sort of alarm region are required for the most computationally efficient generation of a ROC curve or other similar alarm system design metrics. Algorithms based upon the optimal alarm system concept also require models that appeal to a variety of data mining and machine learning techniques. As such, we have investigated a serial architecture that preprocesses a full feature space using SVR (Support Vector Regression), implicitly reducing it to a univariate signal while retaining salient dynamic characteristics (see AIAA attachment below). This step was required due to current technical constraints, and is performed by using the residual generated by SVR (or potentially any regression algorithm), which has properties that are favorable for use as training data to learn the parameters of a linear dynamical system. Future development will lift these restrictions so as to allow for exposure to a broader class of models, such as a switched multi-input/output linear dynamical system in isolation based upon heterogeneous (both discrete and continuous) data, obviating the need for a preprocessing regression algorithm in serial.
However, the use of a preprocessing multi-input/output nonlinear regression algorithm in serial with a multi-input/output linear dynamical system will also allow underlying static nonlinearities to be characterized. We will even investigate the use of non-parametric methods such as Gaussian process regression and particle filtering in isolation, to lift the linear and Gaussian assumptions, which may be invalid for many applications. Future work will also involve improving the approximations inherent in the use of the optimal alarm system, or optimal level-crossing predictor. We will also perform more rigorous testing and validation of the alarm systems discussed by using standard machine learning techniques, and consider more complex yet practically meaningful critical level-crossing events. Finally, a more detailed investigation of model fidelity with respect to available data and metrics has been conducted (see attachment below). As such, future work on modeling will involve investigating necessary improvements in initialization techniques and data transformations for a more feasible fit to the assumed model structure. Additionally, we will explore the integration of physics-based and data-driven methods in a Bayesian context, by using a more informative prior.
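As a rough illustration of the level-crossing idea described in this entry — not the project's actual algorithm — the sketch below propagates a filtered scalar state through an assumed linear-Gaussian model over a prediction window and alarms when any per-step probability of exceeding a critical threshold passes a design value. Since the true optimal alarm regions have no closed form, this per-step marginal check is only a crude stand-in for them; all parameter names are hypothetical:

```python
import math

def norm_sf(z):
    """P(Z > z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def crossing_probs(x_hat, p_hat, a, q, level, d):
    """Marginal P(x_{k+i} > level) for i = 1..d, starting from the
    filtered mean x_hat and variance p_hat of x_{k+1} = a*x_k + w, w ~ N(0, q)."""
    probs = []
    m, v = x_hat, p_hat
    for _ in range(d):
        m = a * m                # predicted mean one step ahead
        v = a * a * v + q        # predicted variance one step ahead
        probs.append(norm_sf((level - m) / math.sqrt(v)))
    return probs

def alarm(x_hat, p_hat, a, q, level, d, p_border):
    """Raise an alarm if any step in the d-step window crosses the
    critical level with probability above the design border p_border."""
    return max(crossing_probs(x_hat, p_hat, a, q, level, d)) > p_border
```

Sweeping `p_border` from 0 to 1 and tallying detections against false alarms is one simple way to trace the kind of ROC curve the entry mentions as an alarm-system design metric.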
- API
ORACLES Navigational and Meteorological Data
data.nasa.gov | Last Updated 2022-08-22T13:04:33.000Z
ORACLES_MetNav_AircraftInSitu_Data are in situ meteorological and navigational measurements collected onboard the P-3 Orion or ER-2 aircraft during the ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) campaign. These measurements were collected during three intensive observation periods: August 19, 2016 – October 27, 2016; August 1, 2017 – September 4, 2017; and September 21, 2018 – October 27, 2018. Data collection for this product is complete. Southern Africa produces almost one-third of the Earth's biomass burning aerosol particles, and ORACLES was a five-year investigation designed to study key processes that determine the climate impacts of African biomass burning aerosols. ORACLES provides multi-year airborne observations over the complete vertical column of the key parameters that drive aerosol-cloud interactions in the southeast Atlantic, an area with some of the largest inter-model differences in aerosol forcing assessments on the planet. These inter-model differences in aerosol and cloud distributions, as well as their combined climatic effects in the SE Atlantic, are partly due to the persistence of aerosols above clouds.
The varying separation of cloud and aerosol layers sampled during ORACLES allows for a process-oriented understanding of how variations in radiative heating profiles impact cloud properties, which is expected to improve model simulations for other remote regions experiencing long-range aerosol transport above clouds. ORACLES utilized two NASA aircraft, the P-3 and ER-2. The P-3 was used as a low-flying platform for simultaneous in situ and remote sensing measurements of aerosols and clouds in all three campaigns, supplemented by ER-2 remote sensing in 2016. ER-2 observations will be used to enhance satellite-based remote sensing by resolving variability within a particular scene and by guiding the development of new and improved remote sensing techniques.
- API
MISR Level 3 FIRSTLOOK Global Land product in netCDF format covering a day V002
data.nasa.gov | Last Updated 2023-01-19T22:32:40.000Z
This file contains the MISR Level 3 FIRSTLOOK Component Global Land product in netCDF format covering a day.
- API
Metagenomic analysis of feces from mice flown on the RR-6 mission
data.nasa.gov | Last Updated 2023-01-26T18:46:14.000Z
The objective of the Rodent Research-6 (RR-6) study was to evaluate muscle atrophy in mice during spaceflight and to test the efficacy of a novel therapeutic to mitigate muscle wasting. The experiment involved an implantable subcutaneous nanochannel delivery system (nDS; between the scapulae) which delivered the drug formoterol (FMT; a selective β2 adrenoceptor agonist) over the course of time. To this end, a cohort of forty 32-week-old female C57BL/6NTac mice were either sham operated or implanted with vehicle- or treatment-filled nDS, and launched in two Transporters (20 mice per Transporter) on SpaceX-13 on December 15, 2017. They were transferred to Rodent Habitats onboard the International Space Station (ISS) and maintained in microgravity for 29 days (N=20, Live Animal Return Spaceflight [LAR FLT]) or >50 days (N=20, ISS Terminal Spaceflight [ISS-T FLT]). After 29 days, the 20 LAR FLT animals were returned live to Earth on January 13, 2018. After splashdown, the animals were ambulatory on-ground for ~4 days until all subjects were processed during one day of dissections. There were two Basal (BSL) groups of animals sacrificed (LAR BSL & ISS-T BSL; N=20 per group; 40 animals; ~36 weeks old) at Kennedy Space Center (KSC; 12/9/17). LAR BSL animals were dissected and samples were collected upon euthanasia. A Ground Control (GC) group, LAR GC, mimicked the LAR FLT group; it was housed at KSC, then shipped alive to Novartis's facilities, where both the LAR FLT and LAR GC groups were processed (~41 weeks old; 1/16/18). All were anesthetized with isoflurane, blood samples were obtained by closed-chest cardiac puncture, and the animals were euthanized by exsanguination and thoracotomy.
The 20 ISS-T FLT mice were anesthetized via intraperitoneal injection of ketamine/xylazine/acepromazine over the course of four days of dissections (2/6/18 until 2/9/18; 53–56 days after launch; 44 weeks old at the time of on-orbit dissections). Blood sampling and euthanasia were conducted the same as for the LAR groups. Following blood draw and hind limb dissection, the ISS-T FLT animal carcasses were wrapped in aluminum foil, placed in a ziploc bag, and placed in storage at -80 °C or colder until return. The ISS-T Ground Control (ISS-T GC) group (at KSC) followed the same euthanasia timeline, methods, and preservation. The final processing of the frozen ISS-T FLT, frozen ISS-T GC, and frozen 0-day ISS-T BSL animals was completed at Houston Methodist Research Institute in Houston, TX (5/21/18 until 5/24/18). GeneLab received feces from only sham-treated animals (no drug-treated animals) from the following groups. FLT: LAR (n=9), ISS-T (n=7); GC: LAR (n=7), ISS-T (n=9); BSL: LAR (n=7), ISS-T (n=9). DNA was extracted and analyzed by sequencing using a variety of different targeted and un-targeted metagenome profiling assays.
- API
Transcriptional analysis of dorsal skin from mice flown on the RR-6 mission
data.nasa.gov | Last Updated 2023-01-26T18:45:51.000Z
The objective of the Rodent Research-6 (RR-6) study was to evaluate muscle atrophy in mice during spaceflight and to test the efficacy of a novel therapeutic to mitigate muscle wasting. The experiment involved an implantable subcutaneous nanochannel delivery system (nDS; between the scapulae) which delivered the drug formoterol (FMT; a selective Beta-2 adrenoceptor agonist) over the course of time. To this end, a cohort of forty 32-week-old female C57BL/6NTac mice were either sham operated or implanted with vehicle- or treatment-filled nDS, and launched in two Transporters (20 mice per Transporter) on SpaceX-13 on December 15, 2017. They were transferred to Rodent Habitats onboard the International Space Station (ISS) and maintained in microgravity for 29 days (N=20, Live Animal Return [LAR]) or >50 days (N=20, ISS Terminal). After 29 days, the 20 LAR animals were returned live to Earth on January 13, 2018. After splashdown, the animals were ambulatory on-ground for ~4 days until all subjects were processed during one day of dissections. There were two Baseline groups of animals sacrificed (LAR Baseline & FLT Baseline; N=20 per group; 40 animals; ~36 weeks old) at Kennedy Space Center (KSC; 12/9/17). A Ground Control group mimicked the Flight LAR group; it was housed at KSC, then shipped alive to Novartis facilities, where both the LAR and LAR Ground Control groups were processed (~41 weeks old; 1/16/18). All were anesthetized with isoflurane, blood samples were obtained by closed-chest cardiac puncture, and the animals were euthanized by exsanguination and thoracotomy. The 20 ISS Terminal mice were anesthetized via intraperitoneal injection of ketamine/xylazine/acepromazine over the course of four days of dissections (2/6/18 until 2/9/18; 53–56 days after launch; 44 weeks old at the time of on-orbit dissections). Blood sampling and euthanasia were conducted the same as for the LAR and Baseline groups.
Following blood draw and hind limb dissection, the ISS-terminal animal carcasses were wrapped in aluminum foil, placed in a ziploc bag, and placed in storage at -80 °C or colder until return. The ISS-terminal Ground Controls (at KSC) followed the same euthanasia timeline, methods, and preservation. The final processing of the frozen ISS-terminal, frozen ISS-terminal Ground Control, and frozen 0-day FLT Baseline animals was completed at Houston Methodist Research Institute in Houston, TX (5/21/18 until 5/24/18). GeneLab received samples of dorsal skin from only sham-treated animals (no drug-treated animals) from the following groups. Flight: LAR (n=9), ISS Terminal (n=9); Ground Controls: LAR GC (n=9), ISS Terminal GC (n=10); LAR Baseline (n=10), ISS Terminal Baseline (n=6). Total RNA was extracted and sequenced at a target depth of 60 M clusters per sample (ribodepleted, paired-end 150).