- API
Earth Radiation Budget Experiment (ERBE) S-10 Wide Field of View (WFOV) Numerical Filter (NF) Earth Flux and Albedo
data.nasa.gov | Last Updated 2022-01-17T05:17:10.000Z
ERBE_S10_WFOV_NF_NAT_1 is the Earth Radiation Budget Experiment (ERBE) S-10 Wide Field of View (WFOV) Numerical Filter (NF) Earth Flux and Albedo data product. Data collection for this product is complete. It is available in the Native (NAT) format. ERBE was a multi-satellite system designed to measure the Earth's radiation budget. The ERBE instruments flew on a mid-inclination National Aeronautics and Space Administration (NASA) Earth Radiation Budget Satellite (ERBS) and two sun-synchronous National Oceanic and Atmospheric Administration (NOAA) satellites (NOAA-9 and NOAA-10). Each satellite carried both a scanner and a non-scanner instrument package. The non-scanner instrument package contained four Earth-viewing channels and a solar monitor. The Earth-viewing channels had two spatial resolutions: a horizon-to-horizon view of the Earth, and a field of view limited to about 1000 km in diameter. The former were called the WFOV channels and the latter the medium-field-of-view (MFOV) channels. The solar monitor was a direct descendant of the Solar Maximum Mission's Active Cavity Radiometer Irradiance Monitor detector. To ensure spectral flatness and high accuracy, all five channels were active cavity radiometers. The MFOV (medium-field-of-view) SF (shape factor) S-10 contained inverted daily, monthly hourly, and monthly averages of shortwave and longwave radiant fluxes at the top of the atmosphere for one month. This data set was produced for each of the satellites (ERBS and NOAA-9) and for the combination of satellites operational during the data month. The values for this data set were derived using the shape factor technique (Smith et al. 1986).
As described in the Earth Radiant Fluxes and Albedo, Scanner S-9, Non-scanner S-10/S-10N User's Guide, the data contain a 30-byte header, 67 scale factors used to scale the data in the first record, and 26 scale factors used to scale the data in the second record. The data set also contained two records for each processed region. The first record was of fixed length (990 words) and contained averaged data. The second record was of variable length and contained individual hour-box estimates. The length of the second record, in words, was calculated by multiplying the number of hour boxes (the 978th word of record one) by the number of values stored for each hour box (38 for the non-scanner).
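Under the layout described above, the length of the variable second record follows directly from word 978 of the first record. A minimal sketch (the function name and the use of plain Python integers are illustrative assumptions; the authoritative layout is in the S-9/S-10 User's Guide):

```python
# Sketch of the ERBE S-10 non-scanner record-length calculation.
WORDS_PER_HOUR_BOX = 38   # values stored per hour box (non-scanner)
RECORD1_WORDS = 990       # fixed length of the first (averaged-data) record

def record2_length_words(record1):
    """Length of the variable second record, in words.

    `record1` is a sequence of exactly 990 integer words; word 978
    (1-based) holds the number of hour boxes for the region.
    """
    if len(record1) != RECORD1_WORDS:
        raise ValueError("record 1 must be exactly 990 words")
    n_hour_boxes = record1[977]          # word 978, 0-based index 977
    return n_hour_boxes * WORDS_PER_HOUR_BOX

# Example: a region with 12 hour boxes yields a 456-word second record.
rec1 = [0] * RECORD1_WORDS
rec1[977] = 12
print(record2_length_words(rec1))  # 456
```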
- API
West Africa Coastal Vulnerability Mapping: GPW Version 4 Population Growth, Preliminary Release 1, 2000-2010
data.nasa.gov | Last Updated 2022-01-17T06:03:26.000Z
The West Africa Coastal Vulnerability Mapping: GPW Version 4 Population Growth, Preliminary Release 1, 2000-2010, represents positive or negative growth in the number of persons per grid cell. It was calculated by subtracting an unreleased working version of the Gridded Population of the World (GPW), Version 4, year 2000 population count raster for the West Africa region from an unreleased working version of the GPWv4 year 2010 population count raster, and cropping the result to within 200 kilometers of the coast. GPW provides globally consistent and spatially explicit human population information and data for use in research, policy making, and communications. This is a gridded (raster) data product that renders global population data at the scale and extent needed to demonstrate the spatial relationship of human populations and the environment globally. The gridded data set is constructed from national or subnational input units (usually administrative units) of varying resolutions. The native grid cell resolution of GPWv4 is 30 arc-seconds, or ~1 km at the equator.
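The growth raster described above is a cell-by-cell difference of two population-count grids, cropped to the coastal zone. A minimal NumPy sketch (the tiny arrays and the coastal mask are made-up illustrations, not the real rasters):

```python
import numpy as np

# Illustrative population-count grids for the same extent and resolution.
pop_2000 = np.array([[100., 50.], [20., 0.]])
pop_2010 = np.array([[130., 45.], [25., 0.]])

# Boolean mask: True where the cell lies within 200 km of the coast.
coastal = np.array([[True, True], [False, True]])

# Cell-by-cell growth (persons per grid cell); cells outside the
# coastal zone get NaN as a no-data value.
growth = np.where(coastal, pop_2010 - pop_2000, np.nan)
print(growth)
```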
- API
LIS 0.1 DEGREE VERY HIGH RESOLUTION GRIDDED LIGHTNING MONTHLY CLIMATOLOGY (VHRMC) V1
data.nasa.gov | Last Updated 2022-05-03T14:29:10.000Z
The LIS 0.1 Degree Very High Resolution Gridded Lightning Monthly Climatology (VHRMC) dataset consists of gridded monthly climatologies of total lightning flash rates seen by the Lightning Imaging Sensor (LIS) from January 1, 1998 through December 31, 2013. LIS is an instrument on the Tropical Rainfall Measuring Mission (TRMM) satellite used to detect the distribution and variability of total lightning occurring in the Earth's tropical and subtropical regions. This information can be used for severe storm detection and analysis, and also for lightning-atmosphere interaction studies. The gridded climatologies include annual mean flash rate, mean diurnal cycle of flash rate at 24-hour resolution, and mean annual cycle of flash rate at daily, monthly, or seasonal resolution. All datasets are at 0.1 degree spatial resolution. The mean annual cycle of flash rate datasets (i.e., daily, monthly, or seasonal) are smoothed with both 49-day and 1 degree boxcar moving averages to remove the diurnal cycle and to smooth regions with low flash rates, making the results more robust.
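A boxcar moving average of the kind described (49 days in time, 1 degree in space) is simply a uniform-window mean. A minimal sketch over a made-up daily flash-rate series for one grid cell (the 5-day window is chosen only to keep the example small):

```python
import numpy as np

def boxcar(series, window):
    """Centered boxcar (uniform) moving average with edge truncation."""
    n = len(series)
    half = window // 2
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = series[lo:hi].mean()
    return out

# Daily mean flash rate (flashes / km^2 / day) for a single grid cell.
daily = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0])
smoothed = boxcar(daily, window=5)
print(smoothed)  # the isolated spike is spread over the 5-day window
```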
- API
SBUV2/NOAA-17 Ozone (O3) Profile and Total Column Ozone 1 Month Zonal Mean L3 Global 5.0 degree Latitude Zones V1 (SBUV2N17L3zm) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:51:02.000Z
The Solar Backscattered Ultraviolet (SBUV) from NOAA-17 Level-3 monthly zonal mean (MZM) product (SBUV2N17L3zm) is derived from the Level-2 retrieved ozone profiles. Ozone retrievals are generated from the v8.6 SBUV algorithm. A Level-3 MZM file contains zonal means covering 5 degree latitude bands for each calendar month. For this product there are 126 months of data from August 2002 through January 2013. There are a total of 36 latitudinal bands, 18 in each hemisphere. Profile data are provided at 21 layers from 1013.25, 639.318, 403.382, 254.517, 160.589, 101.325, 63.9317, 40.3382, 25.4517, 16.0589, 10.1325, 6.39317, 4.03382, 2.54517, 1.60589, 1.01325, 0.639317, 0.403382, 0.254517, 0.160589 and 0.101325 hPa (measured at the bottom of the layer). NOTE: Some profiles have 20 layers and do not report the topmost layer. Mixing ratios are reported at 15 layers from 0.5, 0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0, 40.0 and 50.0 hPa (measured at the middle of the layer). The MZM product averages retrievals that meet the criteria for a good retrieval as determined by error flags in the Level-2 data. A good retrieval is defined as satisfying the following conditions: 1) Profile Error Flag = 0 or 1 (0 = good retrieval; 1 = solar zenith angle > 84 degrees). 2) Total Error Flag = 0, 1, 2 or 5 (0 = good retrieval; 1 = not used; 2 = solar zenith angle > 84 degrees; 5 = large discrepancy between profile total and best total ozone). NOTE: Total error flag = 5 is anomalously applied at high latitudes and high solar zenith angles, where the B-Pair total ozone estimate is not as reliable as the ozone profile. This error flag may be removed in a future version of the algorithm. The zonal means computed for each month are screened according to the following statistical criteria: 1) The number of good retrievals for the month is greater than or equal to 2/3 of the samples for a nominal month.
2) The mean latitude of good retrievals is within 1 degree of the center of the latitude band. 3) The mean time of good retrievals is within 4 days of the center of the month (i.e., day = 15).
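The per-retrieval screening above amounts to set-membership tests on the two Level-2 error flags. A minimal sketch (the argument names are illustrative assumptions, not the actual Level-2 variable names):

```python
# Hedged sketch of the SBUV "good retrieval" test described above.
GOOD_PROFILE_FLAGS = {0, 1}      # 0 = good; 1 = solar zenith angle > 84 deg
GOOD_TOTAL_FLAGS = {0, 1, 2, 5}  # see flag meanings in the text

def is_good_retrieval(profile_error_flag, total_error_flag):
    """True when a Level-2 retrieval passes both error-flag checks."""
    return (profile_error_flag in GOOD_PROFILE_FLAGS
            and total_error_flag in GOOD_TOTAL_FLAGS)

print(is_good_retrieval(0, 0))  # True
print(is_good_retrieval(2, 0))  # False: profile flag 2 is rejected
print(is_good_retrieval(1, 3))  # False: total flag 3 is rejected
```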
- API
TRMM (TMPA-RT) Near Real-Time Precipitation L3 1 day 0.25 degree x 0.25 degree V7 (TRMM_3B42RT_Daily) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:59:46.000Z
The TMPA (3B42RT_Daily) dataset has been discontinued as of Dec. 31, 2019, and users are strongly encouraged to shift to the successor IMERG datasets (doi: 10.5067/GPM/IMERGDE/DAY/06; 10.5067/GPM/IMERGDL/DAY/06). This daily accumulated precipitation product is generated from the Near Real-Time 3-hourly TRMM Multi-Satellite Precipitation Analysis TMPA (3B42RT). It is produced at the NASA GES DISC as a value-added product. A simple summation of valid retrievals in a grid cell is applied for the data day, and the result is given in mm. Although the grid spans 60S to 60N, near real-time retrievals at high latitudes (beyond 50S/N) are considered very unreliable and are therefore screened out of the daily accumulations. The beginning and ending times for every daily granule are listed in the file global attributes and are taken from the first and last 3-hourly granules participating in the aggregation; thus the time period covered by one daily granule amounts to 24 hours, which can be inspected in the file global attributes. Counts of valid retrievals for the day are provided for every variable, making it possible to compute conditional and unconditional mean precipitation for grid cells where fewer than 8 retrievals are available for the day. Efforts have been made to make the format of this derived product as similar as possible to the new Global Precipitation Measurement CF-compliant file format. The latency of this derived daily product is about 7 hours after the UTC day closes. Users should be mindful that the price for the short latency of these data is reduced quality compared to the research-quality product. The information provided here on the TRMM mission, and on the original 3-hourly 3B42 product, remains relevant for this derived product. Note, however, that this product is in netCDF-4 format. The following describes the derivation in more detail.
The daily accumulation is derived by summing *valid* retrievals in a grid cell for the data day. Since the 3-hourly source data are in mm/hr, a factor of 3 is applied to the sum. Thus, for every grid cell:

Pdaily = 3 * SUM{ Pi * 1[Pi valid] }, i = [1, Nf]
Pdaily_cnt = SUM{ 1[Pi valid] }

where:
Pdaily - daily accumulation (mm)
Pi - 3-hourly input (mm/hr)
Nf - number of 3-hourly files per day, Nf = 8
1[.] - indicator function; 1 when Pi is valid, 0 otherwise
Pdaily_cnt - number of valid retrievals in a grid cell per day

Grid cells for which Pdaily_cnt = 0 are set to the fill value in the daily files. Note that Pi = 0 is a valid value. On occasion, the 3-hourly source data have fill values for Pi in a very few grid cells. The total accumulation for such grid cells is still issued, in spite of the likelihood that the resulting accumulation has a larger uncertainty in representing the "true" daily total. These events are easily detectable using the "counts" variables that contain Pdaily_cnt, whereby users can screen out any grid cells for which Pdaily_cnt is less than Nf. There are various ways the accumulated daily error could be estimated from the source 3-hourly error. In this release, the daily error provided in the data files is calculated as follows: first the squared 3-hourly errors are summed, then the square root of the sum is taken, and, as for the precipitation, a factor of 3 is finally applied:

Perr_daily = 3 * { SUM[ (Perr_i * 1[Perr_i valid])^2 ] }^0.5, i = [1, Nf]
Ncnt_err = SUM( 1[Perr_i valid] )

where:
Perr_daily - magnitude of the daily accumulated error (mm)
Ncnt_err - the counts for the error variable

Perr_daily thus computed represents the worst-case scenario, which assumes that the error in the 3-hourly source data (given in mm/hr) accumulates within the 3-hourly period of the source data and then over the day. These values, however, can easily be converted to a root-mean-square error estimate of the rainfall rate:

rms_err = { (Perr_daily/3)^2 / Ncnt_err }^0.5
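The accumulation and error formulas above can be sketched directly in NumPy for a single grid cell (the rain rates and errors are synthetic values; real 3B42RT granules would be read from the 3-hourly files):

```python
import numpy as np

# Eight synthetic 3-hourly rain rates (mm/hr) for one grid cell,
# with NaN standing in for fill values (invalid retrievals).
p3h = np.array([0.0, 1.2, np.nan, 0.4, 0.0, 0.0, 2.0, 0.1])
perr3h = np.array([0.1, 0.3, np.nan, 0.2, 0.1, 0.1, 0.5, 0.1])

# Pdaily = 3 * sum of valid rates; Pdaily_cnt counts valid retrievals.
pdaily = 3.0 * np.nansum(p3h)                 # daily accumulation (mm)
pdaily_cnt = int((~np.isnan(p3h)).sum())      # 7 valid retrievals here

# Perr_daily = 3 * sqrt(sum of squared valid errors).
perr_daily = 3.0 * np.sqrt(np.nansum(perr3h ** 2))
ncnt_err = int((~np.isnan(perr3h)).sum())

# Root-mean-square error estimate of the rain rate (mm/hr).
rms_err = np.sqrt((perr_daily / 3.0) ** 2 / ncnt_err)

print(pdaily, pdaily_cnt)  # daily total ~= 11.1 mm from 7 valid retrievals
```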
- API
Gridded Population of the World, Version 4 (GPWv4): Population Density, Revision 11
data.nasa.gov | Last Updated 2022-01-17T05:27:37.000Z
The Gridded Population of the World, Version 4 (GPWv4): Population Density, Revision 11 consists of estimates of human population density (number of persons per square kilometer) based on counts consistent with national censuses and population registers, for the years 2000, 2005, 2010, 2015, and 2020. A proportional allocation gridding algorithm, utilizing approximately 13.5 million national and sub-national administrative units, was used to assign population counts to 30 arc-second grid cells. The population density rasters were created by dividing the population count raster for a given target year by the land area raster. The data files were produced as global rasters at 30 arc-second (~1 km at the equator) resolution. To enable faster global processing, and in support of research communities, the 30 arc-second count data were aggregated to 2.5 arc-minute, 15 arc-minute, 30 arc-minute, and 1 degree resolutions to produce density rasters at these resolutions.
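The density calculation described above is a per-cell division of population count by land area. A minimal NumPy sketch (tiny synthetic grids stand in for the real global 30 arc-second rasters; the NaN no-data convention is an assumption for the example):

```python
import numpy as np

# Population count (persons per cell) and land area (km^2 per cell)
# for a toy 2x2 grid; zero land area marks water cells.
count = np.array([[1000., 250.], [0., 40.]])
land_area = np.array([[0.8, 0.5], [0.0, 0.8]])

# Persons per square kilometer; water cells get NaN as no-data.
with np.errstate(divide="ignore", invalid="ignore"):
    density = np.where(land_area > 0, count / land_area, np.nan)
print(density)
```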
- API
Global Navigation Satellite System (GNSS) Final Clock Product (30 second resolution, daily files, generated weekly) from NASA CDDIS
data.nasa.gov | Last Updated 2023-02-28T19:25:26.000Z
This derived product set consists of the Global Navigation Satellite System (GNSS) Final Satellite and Receiver Clock Product (30-second granularity, daily files, generated weekly) from the NASA Crustal Dynamics Data Information System (CDDIS). GNSS provides autonomous geospatial positioning with global coverage. GNSS data sets from ground receivers at the CDDIS consist primarily of data from the U.S. Global Positioning System (GPS) and the Russian GLObal NAvigation Satellite System (GLONASS). Since 2011, the CDDIS GNSS archive has included data from other GNSS constellations (Europe's Galileo, China's BeiDou, Japan's Quasi-Zenith Satellite System/QZSS, the Indian Regional Navigation Satellite System/IRNSS, and worldwide Satellite-Based Augmentation Systems/SBASs), which are similar to the U.S. GPS in terms of satellite constellation, orbits, and signal structure. Analysis Centers (ACs) of the International GNSS Service (IGS) retrieve GNSS data on regular schedules to produce GNSS satellite and ground receiver clock values. The IGS Analysis Center Coordinator (ACC) uses these individual AC solutions to generate the official IGS final combined satellite and receiver clock products. The final products are considered the most consistent and highest-quality IGS solutions; they consist of daily files, generated on a weekly basis with a delay of 13 days (for the last day of the week) to 20 days (for the first day of the week). All satellite and receiver clock solution files use the clock RINEX format and span 24 hours from 00:00 to 23:45 UTC.
- API
SBUV2/NOAA-16 Ozone (O3) Profile and Total Column Ozone 1 Month Zonal Mean L3 Global 5.0 degree Latitude Zones V1 (SBUV2N16L3zm) at GES DISC
data.nasa.gov | Last Updated 2022-01-17T05:51:01.000Z
The Solar Backscattered Ultraviolet (SBUV) from NOAA-16 Level-3 monthly zonal mean (MZM) product (SBUV2N16L3zm) is derived from the Level-2 retrieved ozone profiles. Ozone retrievals are generated from the v8.6 SBUV algorithm. A Level-3 MZM file contains zonal means covering 5 degree latitude bands for each calendar month. For this product there are 154 months of data from October 2000 through July 2013. There are a total of 36 latitudinal bands, 18 in each hemisphere. Profile data are provided at 21 layers from 1013.25, 639.318, 403.382, 254.517, 160.589, 101.325, 63.9317, 40.3382, 25.4517, 16.0589, 10.1325, 6.39317, 4.03382, 2.54517, 1.60589, 1.01325, 0.639317, 0.403382, 0.254517, 0.160589 and 0.101325 hPa (measured at the bottom of the layer). NOTE: Some profiles have 20 layers and do not report the topmost layer. Mixing ratios are reported at 15 layers from 0.5, 0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0, 40.0 and 50.0 hPa (measured at the middle of the layer). The MZM product averages retrievals that meet the criteria for a good retrieval as determined by error flags in the Level-2 data. A good retrieval is defined as satisfying the following conditions: 1) Profile Error Flag = 0 or 1 (0 = good retrieval; 1 = solar zenith angle > 84 degrees). 2) Total Error Flag = 0, 1, 2 or 5 (0 = good retrieval; 1 = not used; 2 = solar zenith angle > 84 degrees; 5 = large discrepancy between profile total and best total ozone). NOTE: Total error flag = 5 is anomalously applied at high latitudes and high solar zenith angles, where the B-Pair total ozone estimate is not as reliable as the ozone profile. This error flag may be removed in a future version of the algorithm. The zonal means computed for each month are screened according to the following statistical criteria: 1) The number of good retrievals for the month is greater than or equal to 2/3 of the samples for a nominal month.
2) The mean latitude of good retrievals is within 1 degree of the center of the latitude band. 3) The mean time of good retrievals is within 4 days of the center of the month (i.e., day = 15).
- API
Bio-optical properties of the different water masses in the Gulf of St. Lawrence
data.nasa.gov | Last Updated 2023-04-17T13:03:21.000Z
The St. Lawrence ecosystem is a complex environment influenced by a variety of physical forces (runoff, winds, tides, bathymetry) that sustains a diverse food web ranging from phytoplankton to whales. Chlorophyll concentration is thus an important variable to measure at the scale of the ecosystem. Because of the ecosystem's large size, remote sensing with ocean color imagery is the only available tool for measuring chlorophyll distribution across the St. Lawrence. To fully utilize this type of data, however, it is important to have a sound knowledge of the bio-optical properties of the different water masses in the system. A St. Lawrence SeaWiFS program was thus built to gather this knowledge, beginning in 1997.
- API
Optimal Alarm Systems
data.nasa.gov | Last Updated 2020-01-29T03:25:13.000Z
An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It currently uses Kalman filtering for dynamic systems to provide a layer of predictive capability for forecasting adverse events. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. Because the alarm regions for an optimal level-crossing predictor cannot be expressed in closed form, one of our aims has been to investigate approximations for the design of an optimal alarm system. Approximations to this sort of alarm region are required for the most computationally efficient generation of a ROC curve or other similar alarm system design metrics. Algorithms based upon the optimal alarm system concept also require models that appeal to a variety of data mining and machine learning techniques. As such, we have investigated a serial architecture used to preprocess a full feature space with SVR (Support Vector Regression), implicitly reducing it to a univariate signal while retaining salient dynamic characteristics (see AIAA attachment below). This step was required by current technical constraints, and is performed by using the residual generated by SVR (or potentially any regression algorithm), which has properties that are favorable for use as training data to learn the parameters of a linear dynamical system. Future development will lift these restrictions so as to allow for exposure to a broader class of models, such as a switched multi-input/output linear dynamical system in isolation based upon heterogeneous (both discrete and continuous) data, obviating the need for a preprocessing regression algorithm in serial.
However, the use of a preprocessing multi-input/output nonlinear regression algorithm in serial with a multi-input/output linear dynamical system will allow underlying static nonlinearities to be characterized and investigated as well. We will also investigate the use of non-parametric methods such as Gaussian process regression and particle filtering in isolation, to lift the linear and Gaussian assumptions that may be invalid for many applications. Future work will also involve improving the approximations inherent in the use of the optimal alarm system or optimal level-crossing predictor. We will also perform more rigorous testing and validation of the alarm systems discussed, using standard machine learning techniques, and consider more complex, yet practically meaningful, critical level-crossing events. Finally, a more detailed investigation of model fidelity with respect to available data and metrics has been conducted (see attachment below). As such, future work on modeling will involve investigating necessary improvements in initialization techniques and data transformations for a more feasible fit to the assumed model structure. Additionally, we will explore the integration of physics-based and data-driven methods in a Bayesian context, using a more informative prior.
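The core idea above, predicting the process forward with a Kalman filter and alarming when a future value is likely to cross a critical threshold, can be sketched as follows. This is an illustrative toy (a scalar random-walk model with made-up noise parameters), not the project's actual algorithm, and the per-step Gaussian tail probability is a simplification of the full level-crossing event over the prediction window:

```python
import math

def kalman_update(x, P, y, q, r):
    """One predict+update step of a scalar Kalman filter (random walk)."""
    P = P + q                          # time update: variance grows
    k = P / (P + r)                    # Kalman gain
    return x + k * (y - x), (1 - k) * P

def crossing_probs(x, P, q, threshold, horizon):
    """Per-step probability that the predicted state exceeds `threshold`
    over the prediction window, under the same random-walk model."""
    probs = []
    for _ in range(horizon):
        P = P + q                      # mean stays fixed, variance grows
        z = (threshold - x) / math.sqrt(P)
        # Gaussian upper-tail probability via the complementary error function
        probs.append(0.5 * math.erfc(z / math.sqrt(2.0)))
    return probs

# Run the filter on a few observations, then raise an alarm if the
# crossing probability anywhere in the window exceeds a design value.
x, P = 0.0, 1.0
for y in [0.1, 0.4, 0.8, 1.1]:
    x, P = kalman_update(x, P, y, q=0.05, r=0.2)

probs = crossing_probs(x, P, q=0.05, threshold=2.0, horizon=5)
alarm = max(probs) > 0.1
print(alarm, [round(p, 4) for p in probs])
```

The design trade-off described in the text lives in the alarm threshold (here 0.1): lowering it raises the detection probability at the cost of more false alarms, which is exactly the ROC curve being generated.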