- API
Nano Dust Analyzer Project
data.nasa.gov | Last Updated 2020-01-29T04:54:41.000Z
We propose to develop a new, highly sensitive instrument to confirm the existence of the so-called nano-dust particles, characterize their impact parameters, and measure their chemical composition. Simultaneous theoretical studies will be used to derive the expected mass and velocity ranges of these putative particles and to formulate science and measurement requirements for the future deployment of the proposed Nano-Dust Analyzer (NDA).

Early dust instruments onboard Pioneer 8 and 9 and the Helios spacecraft detected a flow of submicron-sized dust particles coming from the direction of the Sun. These particles originate in the inner solar system from mutual collisions among meteoroids and move on hyperbolic orbits that leave the Solar System under the prevailing radiation pressure force. Later dust instruments with higher sensitivity had to avoid looking toward the Sun because of interference from the solar wind and UV radiation, and thus contributed little to the characterization of the dust stream. The one exception is the Ulysses dust detector, which observed escaping dust particles high above the solar poles, confirming the suspicion that charged nanometer-sized dust grains are carried to high heliographic latitudes by electromagnetic interactions with the Interplanetary Magnetic Field (IMF). Recently, the STEREO WAVES instruments recorded a large number of intense electric field signals, which were interpreted as impacts from nanometer-sized particles striking the spacecraft at velocities of about the solar wind speed. This high flux and the strong spatial and/or temporal variations of nanometer-sized dust grains at low latitude appear to be uncorrelated with the solar wind properties. This is a mystery, as it would require that the total collisional meteoroid debris inside 1 AU be cast into nanometer-sized fragments.
The observed fluxes of inner-source pickup ions also point to the existence of a much enhanced dust population in the nanometer size range.

This new heliospheric phenomenon of nano-dust streams may have consequences throughout the planetary system, but as of yet no dust instrument exists that could shed light on their properties. We propose to develop a dust analyzer capable of detecting and analyzing these mysterious dust particles coming from the solar direction, and to embark on complementary theoretical studies to understand their characteristics. The instrument is based on the Cassini Dust Analyzer (CDA), which has analyzed the composition of nanometer-sized dust particles emanating from the Jovian and Saturnian systems but could not be pointed towards the Sun. By applying technologies implemented in solar wind instruments and coronagraphs, a highly sensitive dust analyzer will be developed and tested in the laboratory. The dust analyzer shall be able to characterize the impact properties (impact charge and the energy distribution of ions, from which the mass and speed of the impacting grains may be derived) and the chemical composition of individual nanometer-sized particles while exposed to solar wind and UV radiation. The measurements will enable us to identify the source of the dust by comparing its elemental composition with that of larger micrometeoroid particles of cometary and asteroidal origin, and will reveal the interaction of nano-dust with the interplanetary medium by relating the dust flux to solar wind and IMF properties.

Complementary theoretical studies will be performed to understand the characteristics of nano-dust particles at 1 AU and to answer the following questions:
- What is the speed range at which nanometer sized particles impact
- API
India Annual Winter Cropped Area, 2001-2016
data.nasa.gov | Last Updated 2022-01-17T05:29:43.000Z
The India Annual Winter Cropped Area, 2001-2016 consists of annual winter cropped areas for most of India (except the Northeastern states) from 2000-2001 to 2015-2016. This data set utilizes the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI; spatial resolution: 250 m) for the winter growing season (October-March). The methodology uses an automated algorithm identifying the EVI peak in each pixel for each year and linearly scales the EVI value between 0% and 100% cropped area within that particular pixel. Maps were then resampled to 1 km and were validated using high-resolution QuickBird, RapidEye, SkySat, and WorldView-2 images spanning 2008 to 2016 across 11 different agricultural regions of India. The spatial resolution of the data set is 1 km, resampled from 250 m. The data are distributed as GeoTIFF and NetCDF files and are in WGS 84 projection.
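The per-pixel scaling step described above can be sketched as follows. This is an illustrative reconstruction, not the production algorithm: the function name and the `evi_min`/`evi_max` calibration endpoints are hypothetical stand-ins for the values the automated algorithm derives.

```python
import numpy as np

def evi_peak_to_cropped_fraction(evi_peak, evi_min=0.1, evi_max=0.6):
    """Linearly scale a winter-season EVI peak to a 0-100% cropped-area
    estimate for a pixel, clamping values outside the calibration range."""
    frac = (np.asarray(evi_peak, dtype=float) - evi_min) / (evi_max - evi_min)
    return np.clip(frac, 0.0, 1.0) * 100.0
```

A pixel whose seasonal EVI peak is at or above the upper endpoint maps to 100% cropped area; one at or below the lower endpoint maps to 0%.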
- API
TRMM (TMPA-RT) Near Real-Time Precipitation L3 1 day 0.25 degree x 0.25 degree V7 (TRMM_3B42RT_Daily) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:59:46.000Z
The TMPA (3B42RT_Daily) dataset has been discontinued as of Dec. 31, 2019, and users are strongly encouraged to shift to the successor IMERG dataset (doi: 10.5067/GPM/IMERGDE/DAY/06; 10.5067/GPM/IMERGDL/DAY/06). This daily accumulated precipitation product is generated from the Near Real-Time 3-hourly TRMM Multi-Satellite Precipitation Analysis TMPA (3B42RT). It is produced at the NASA GES DISC as a value-added product. A simple summation of valid retrievals in a grid cell is applied for the data day, and the result is given in mm. Although the grid extends from 60S to 60N, near-real-time retrievals at high latitudes (beyond 50S/N) are considered very unreliable and are therefore screened out of the daily accumulations. The beginning and ending times for every daily granule are listed in the file global attributes and are taken from the first and last 3-hourly granules participating in the aggregation; thus the time period covered by one daily granule amounts to 24 hours, which can be inspected in the file global attributes. Counts of valid retrievals for the day are provided for every variable, making it possible to compute conditional and unconditional mean precipitation for grid cells where fewer than 8 retrievals for the day are available. Efforts have been made to make the format of this derived product as similar as possible to the new Global Precipitation Measurement CF-compliant file format. The latency of this derived daily product is about 7 hours after the UTC day is closed. Users should be mindful that the price for the short latency of these data is reduced quality as compared to the research-quality product. The information provided here on the TRMM mission and on the original 3-hr 3B42 product remains relevant for this derived product. Note, however, that this product is in netCDF-4 format. The following describes the derivation in more detail.
The daily accumulation is derived by summing *valid* retrievals in a grid cell for the data day. Since the 3-hourly source data are in mm/hr, a factor of 3 is applied to the sum. Thus, for every grid cell we have

Pdaily = 3 * SUM{ Pi * 1[Pi valid] }, i = [1, Nf]
Pdaily_cnt = SUM{ 1[Pi valid] }

where:
Pdaily - daily accumulation (mm)
Pi - 3-hourly input, in mm/hr
Nf - number of 3-hourly files per day, Nf = 8
1[.] - indicator function; 1 when Pi is valid, 0 otherwise
Pdaily_cnt - number of valid retrievals in a grid cell per day

Grid cells for which Pdaily_cnt = 0 are set to the fill value in the daily files. Note that Pi = 0 is a valid value. On occasion, the 3-hourly source data have fill values for Pi in a very few grid cells. The total accumulation for such grid cells is still issued, in spite of the likelihood that the resulting accumulation has a larger uncertainty in representing the "true" daily total. These events are easily detectable using the "counts" variables that contain Pdaily_cnt, whereby users can screen out any grid cells for which Pdaily_cnt is less than Nf. There are various ways the accumulated daily error could be estimated from the source 3-hourly error. In this release, the daily error provided in the data files is calculated as follows. First, the squared 3-hourly errors are summed, and then the square root of the sum is taken. As with the precipitation, a factor of 3 is finally applied:

Perr_daily = 3 * { SUM[ (Perr_i * 1[Perr_i valid])^2 ] }^0.5, i = [1, Nf]
Ncnt_err = SUM( 1[Perr_i valid] )

where:
Perr_daily - magnitude of the daily accumulated error (mm)
Ncnt_err - counts for the error variable

Perr_daily computed this way represents the worst-case scenario, which assumes that the error in the 3-hourly source data (given in mm/hr) accumulates within the 3-hourly period of the source data and then over the day. These values, however, can easily be converted to a root-mean-square error estimate of the rainfall rate: rms_err = { (Perr_daily/3) ^2 / Ncnt
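As a concrete illustration, the aggregation above can be written in a few lines of NumPy. This is a sketch under the assumption that the eight 3-hourly rain-rate fields (mm/hr) for one day are stacked along the first axis of an array, with NaN marking invalid retrievals; the function name and array layout are not part of the product specification.

```python
import numpy as np

def tmpa_daily(p_3hr, perr_3hr):
    """Aggregate Nf=8 3-hourly TMPA fields (mm/hr) into a daily total (mm),
    following Pdaily = 3 * SUM{Pi * 1[Pi valid]} and the worst-case error
    Perr_daily = 3 * sqrt(SUM[(Perr_i * 1[Perr_i valid])^2])."""
    valid = ~np.isnan(p_3hr)                               # 1[Pi valid]
    pdaily = 3.0 * np.where(valid, p_3hr, 0.0).sum(axis=0)
    pdaily_cnt = valid.sum(axis=0)                         # valid retrievals per cell
    pdaily = np.where(pdaily_cnt > 0, pdaily, np.nan)      # fill value where no data
    err_valid = ~np.isnan(perr_3hr)                        # 1[Perr_i valid]
    perr_daily = 3.0 * np.sqrt((np.where(err_valid, perr_3hr, 0.0) ** 2).sum(axis=0))
    ncnt_err = err_valid.sum(axis=0)
    return pdaily, pdaily_cnt, perr_daily, ncnt_err
```

Grid cells where `pdaily_cnt` is below 8 can then be screened out exactly as the text recommends.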
- API
Metrics for Offline Evaluation of Prognostic Performance
data.nasa.gov | Last Updated 2020-01-29T01:57:42.000Z
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements of different applications: time scales, available information, domain dynamics, and so on. The research community has used a variety of metrics chosen largely for convenience and their respective requirements, and very little attention has been focused on establishing a standardized approach for comparing different efforts. This paper presents several recently introduced evaluation metrics tailored for prognostics that were shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications, and guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
- API
Metrics for Evaluating Performance of Prognostic Techniques
data.nasa.gov | Last Updated 2020-01-29T03:23:28.000Z
Prognostics is an emerging concept in condition-based maintenance (CBM) of critical systems. Along with developing the fundamentals of being able to confidently predict Remaining Useful Life (RUL), the technology calls for fielded applications as it inches towards maturation. This requires stringent performance evaluation so that the significance of the concept can be fully exploited. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements of different applications: time scales, available information, domain dynamics, and so on. Instead, the research community has used a variety of metrics chosen largely for convenience with respect to their respective requirements, and very little attention has been focused on establishing a common ground for comparing different efforts. This paper surveys the metrics that are already used for prognostics in a variety of domains, including medicine, nuclear, automotive, aerospace, and electronics. It also considers other domains that involve prediction-related tasks, such as weather and finance. Differences and similarities between these domains and health maintenance have been analyzed to help understand which performance evaluation methods may or may not be borrowed. Further, these metrics have been categorized in several ways that may be useful in deciding upon a suitable subset for a specific application. Some important prognostic concepts have been defined using a notational framework that enables coherent interpretation of different metrics. Last but not least, a list of metrics has been suggested to assess critical aspects of RUL predictions before they are fielded in real applications.
- API
GPM Ground Validation SEA FLUX ICE POP V1
data.nasa.gov | Last Updated 2022-06-07T06:12:15.000Z
The GPM Ground Validation SEA FLUX ICE POP dataset includes estimates of ocean surface latent and sensible heat fluxes, 10 m wind speed, 10 m air temperature, 10 m air humidity, and skin sea surface temperature in support of the International Collaborative Experiments for Pyeongchang 2018 Olympic and Paralympic Winter Games (ICE-POP) field campaign in South Korea. The two major objectives of ICE-POP were to study severe winter weather events in regions of complex terrain and to improve the short-term forecasting of such events. These data contributed to the Global Precipitation Measurement mission Ground Validation (GPM GV) campaign efforts to improve satellite estimates of orographic winter precipitation. The data cover September 1, 2017 through April 30, 2018 and are available in netCDF-4 format.
- API
OWLETS-1 Ozonesonde Data
data.nasa.gov | Last Updated 2022-07-18T13:04:47.000Z
OWLETS1_Sondes_Data_1 is the Ozone Water-Land Environmental Transition Study (OWLETS-1) ozone data collected via synchronous ozonesonde launches at the NASA Langley Research Center ground site and the Chesapeake Bay Bridge Tunnel site during the OWLETS field campaign. OWLETS was supported by the NASA Science Innovation Fund (SIF). Data collection is complete. Coastal regions have typically posed a challenge for air quality researchers due to a lack of measurements available over water and across water-land boundary transitions. Supported by the SIF, the OWLETS field campaign examined ozone concentrations and gradients over the Chesapeake Bay from July 5, 2017 to August 3, 2017, with twelve intensive measurement days occurring during this period. OWLETS utilized a unique combination of instrumentation, including aircraft, TOLNet ozone lidars (the NASA Goddard Space Flight Center Tropospheric Ozone Differential Absorption Lidar and the NASA Langley Research Center Mobile Ozone Lidar), UAVs/drones, ozonesondes, AERONET sun photometers, and mobile and ship-based measurements, to characterize the land-water differences in ozone and other pollutants. Two main research sites were established as part of the campaign: an over-land site at NASA LaRC and an over-water site at the Chesapeake Bay Bridge Tunnel. These two sites provided synchronous vertical measurements of meteorology and pollutants over water and over land. In combination with mobile observations between the two sites, pollutant gradients could be observed and used to better understand the fundamental processes occurring at the land-water interface. OWLETS-2 was completed from June 6, 2018 to July 6, 2018 in the upper Chesapeake Bay region.
Research sites were established at the University of Maryland, Baltimore County (UMBC), Hart Miller Island (HMI), and Howard University Beltsville (HUBV), with HMI serving as the over-water location and UMBC and HUBV as the over-land sites. Similar measurements were carried out to further characterize water-land gradients in the upper Chesapeake Bay. The measurements completed during OWLETS are important for enhancing air quality models and improving future satellite retrievals, particularly NASA's Tropospheric Emissions: Monitoring of Pollution, which is scheduled to launch in 2022.
- API
ORACLES Navigational and Meteorological Data
data.nasa.gov | Last Updated 2022-08-22T13:04:33.000Z
ORACLES_MetNav_AircraftInSitu_Data are in situ meteorological and navigational measurements collected onboard the P-3 Orion or ER-2 aircraft during the ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) campaign. These measurements were collected from August 19, 2016 – October 27, 2016, August 1, 2017 – September 4, 2017, and September 21, 2018 – October 27, 2018. ORACLES provides multi-year airborne observations over the complete vertical column of key parameters that drive aerosol-cloud interactions in the southeast Atlantic, an area with some of the largest inter-model differences in aerosol forcing assessments on the planet. The P-3 Orion aircraft was utilized as a low-flying platform for simultaneous in situ and remote sensing measurements of aerosols and clouds and was supplemented by ER-2 remote sensing during the 2016 campaign. Data collection for this product is complete. Southern Africa produces almost one-third of the Earth's biomass burning aerosol particles. The ORACLES experiment was a five-year investigation with three intensive observation periods (August 19, 2016 – October 27, 2016; August 1, 2017 – September 4, 2017; September 21, 2018 – October 27, 2018) and was designed to study key processes that determine the climate impacts of African biomass burning aerosols. ORACLES provided multi-year airborne observations over the complete vertical column of the key parameters that drive aerosol-cloud interactions in the southeast Atlantic. The inter-model differences in aerosol and cloud distributions, as well as their combined climatic effects in the SE Atlantic, are partly due to the persistence of aerosols above clouds.
The varying separation of cloud and aerosol layers sampled during ORACLES allows for a process-oriented understanding of how variations in radiative heating profiles impact cloud properties, which is expected to improve model simulations for other remote regions that experience long-range aerosol transport above clouds. ORACLES utilized two NASA aircraft, the P-3 and ER-2. The P-3 was used as a low-flying platform for simultaneous in situ and remote sensing measurements of aerosols and clouds in all three campaigns, supplemented by ER-2 remote sensing in 2016. ER-2 observations will be used to enhance satellite-based remote sensing by resolving variability within a particular scene and by guiding the development of new and improved remote sensing techniques.
- API
2008 Environmental Performance Index (EPI)
data.nasa.gov | Last Updated 2022-01-17T05:02:20.000Z
The 2008 Environmental Performance Index (EPI) centers on two broad environmental protection objectives: (1) reducing environmental stresses on human health, and (2) promoting ecosystem vitality and sound natural resource management. Derived from a careful review of the environmental literature, these twin goals mirror the priorities expressed by policymakers. Environmental health and ecosystem vitality are gauged using 25 indicators tracked in six well-established policy categories: Environmental Health (Environmental Burden of Disease, Water, and Air Pollution), Air Pollution (effects on ecosystems), Water (effects on ecosystems), Biodiversity and Habitat, Productive Natural Resources (Forestry, Fisheries, and Agriculture), and Climate Change. The 2008 EPI utilizes a proximity-to-target methodology in which performance on each indicator is rated on a 0 to 100 scale (100 represents "at target"). By identifying specific targets and measuring how close each country comes to them, the EPI provides a foundation for policy analysis and a context for evaluating performance. Issue-by-issue and aggregate rankings facilitate cross-country comparisons both globally and within relevant peer groups. The 2008 EPI is the result of collaboration among the Yale Center for Environmental Law and Policy (YCELP), Columbia University Center for International Earth Science Information Network (CIESIN), World Economic Forum (WEF), and the Joint Research Centre (JRC), European Commission.
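The proximity-to-target idea can be illustrated with a deliberately simplified score. This is a hypothetical sketch of the general approach, not the EPI's exact indicator formula; the argument names and benchmark choice are invented for the example.

```python
def proximity_to_target(value, worst, target):
    """Place an indicator value on a 0-100 scale, where 100 means the policy
    target is met and 0 means performance at (or below) a worst-case benchmark.
    Assumes higher raw values are better and worst < target."""
    score = 100.0 * (value - worst) / (target - worst)
    return max(0.0, min(100.0, score))
```

Aggregate EPI-style rankings then combine such per-indicator scores across the policy categories.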
- API
CORONA Satellite Photographs from the U.S. Geological Survey
data.nasa.gov | Last Updated 2022-01-17T05:16:00.000Z
The first generation of U.S. photo intelligence satellites collected more than 860,000 images of the Earth's surface between 1960 and 1972. The classified military satellite systems code-named CORONA, ARGON, and LANYARD acquired photographic images from space and returned the film to Earth for processing and analysis. The images were originally used for reconnaissance and to produce maps for U.S. intelligence agencies. In 1992, an Environmental Task Force evaluated the application of early satellite data for environmental studies. Since the CORONA, ARGON, and LANYARD data were no longer critical to national security and could be of historical value for global change research, the images were declassified by Executive Order 12951 in 1995. The first successful CORONA mission was launched from Vandenberg Air Force Base in 1960. The satellite acquired photographs with a telescopic camera system and loaded the exposed film into recovery capsules. The capsules, or "buckets," were de-orbited and retrieved in mid-air by aircraft as they parachuted toward Earth. The exposed film was developed and the images were analyzed for a range of military applications. The intelligence community used Keyhole (KH) designators to describe system characteristics and accomplishments. The CORONA systems were designated KH-1, KH-2, KH-3, KH-4, KH-4A, and KH-4B. The ARGON systems used the designator KH-5, and the LANYARD systems used KH-6. Mission numbers were a means of indexing the imagery and associated collateral data. A variety of camera systems were used with the satellites. The early systems carried a single panoramic camera (KH-1, KH-2, KH-3, and KH-6) or a single frame camera (KH-5). The later systems (KH-4, KH-4A, and KH-4B) carried two panoramic cameras separated by a 30° angle, one looking forward and the other looking aft.
The original film and technical mission-related documents are maintained by the National Archives and Records Administration (NARA). Duplicate film sources held in the USGS EROS Center archive are used to produce digital copies of the imagery. Mathematical calculations based on camera operation and satellite path were used to approximate image coordinates. Since the accuracy of the coordinates varies according to the precision of information used for the derivation, users should inspect the preview image to verify that the area of interest is contained in the selected frame. Users should also note that the images have not been georeferenced.