- API
Nano Dust Analyzer Project
data.nasa.gov | Last Updated 2020-01-29T04:54:41.000Z

We propose to develop a new, highly sensitive instrument to confirm the existence of the so-called nano-dust particles, characterize their impact parameters, and measure their chemical composition. Simultaneous theoretical studies will be used to derive the expected mass and velocity ranges of these putative particles and to formulate science and measurement requirements for the future deployment of the proposed Nano-Dust Analyzer (NDA).

Early dust instruments onboard the Pioneer 8 and 9 and Helios spacecraft detected a flow of submicron-sized dust particles coming from the direction of the Sun. These particles originate in the inner solar system from mutual collisions among meteoroids and move on hyperbolic orbits that leave the Solar System under the prevailing radiation-pressure force. Later dust instruments with higher sensitivity had to avoid looking toward the Sun because of interference from the solar wind and UV radiation, and thus contributed little to the characterization of the dust stream. The one exception is the Ulysses dust detector, which observed escaping dust particles high above the solar poles, confirming the suspicion that charged nanometer-sized dust grains are carried to high heliographic latitudes by electromagnetic interactions with the Interplanetary Magnetic Field (IMF). Recently, the STEREO WAVES instruments recorded a large number of intense electric-field signals, which were interpreted as impacts from nanometer-sized particles striking the spacecraft at velocities of about the solar wind speed. This high flux, and the strong spatial and/or temporal variation of nanometer-sized dust grains at low latitude, appears to be uncorrelated with the solar wind properties. This is a mystery, as it would require that the total collisional meteoroid debris inside 1 AU be cast in nanometer-sized fragments.
The observed fluxes of inner-source pickup ions also point to the existence of a much enhanced dust population in the nanometer size range.

This new heliospheric phenomenon of nano-dust streams may have consequences throughout the planetary system, but as of yet no dust instrument exists that could shed light on their properties. We propose to develop a dust analyzer capable of detecting and analyzing these mysterious dust particles coming from the solar direction, and to embark upon complementary theoretical studies to understand their characteristics. The instrument is based on the Cassini Dust Analyzer (CDA), which has analyzed the composition of nanometer-sized dust particles emanating from the Jovian and Saturnian systems but could not be pointed toward the Sun. By applying technologies implemented in solar wind instruments and coronagraphs, a highly sensitive dust analyzer will be developed and tested in the laboratory. The dust analyzer shall be able to characterize the impact properties (impact charge and energy distribution of ions, from which the mass and speed of the impacting grains may be derived) and the chemical composition of individual nanometer-sized particles while exposed to solar wind and UV radiation. The measurements will enable us to identify the source of the dust by comparing its elemental composition with that of larger micrometeoroid particles of cometary and asteroidal origin, and will reveal the interaction of nano-dust with the interplanetary medium by investigating the relation of the dust flux to solar wind and IMF properties.

Complementary theoretical studies will be performed to understand the characteristics of nano-dust particles at 1 AU, answering the following questions:
- What is the speed range at which nanometer sized particles impact
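The mass-and-speed retrieval mentioned above is conventionally based on an empirical impact-ionization charge yield of the form Q = k · m · v^alpha. The sketch below simply inverts that relation; the constants k and alpha are hypothetical placeholders, not NDA or CDA calibration values, which would come from laboratory calibration:

```python
def grain_mass_from_charge(charge_c, speed_km_s, k=0.7, alpha=3.5):
    """Estimate grain mass (kg) from a measured impact charge (C),
    assuming the empirical impact-ionization yield Q = k * m * v**alpha
    with speed v in km/s. k and alpha here are illustrative placeholder
    values, not instrument calibration constants."""
    return charge_c / (k * speed_km_s ** alpha)

# A hypothetical attogram-scale (1e-18 kg) grain impacting at 300 km/s:
q = 0.7 * 1e-18 * 300.0 ** 3.5   # charge such a grain would yield
mass = grain_mass_from_charge(q, 300.0)
```

In practice the impact speed itself must be estimated (e.g. from the rise time of the charge signal), so the retrieved mass inherits a strong sensitivity to the speed estimate through the v^alpha term.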
- API
Vital Signs: Time in Congestion - Corridor (Updated October 2018)
data.bayareametro.gov | Last Updated 2018-10-24T00:31:33.000Z

VITAL SIGNS INDICATOR: Time Spent in Congestion (T7)
FULL MEASURE NAME: Time Spent in Congestion
LAST UPDATED: October 2018
DATA SOURCE: MTC/Iteris Congestion Analysis (no link available); CA Department of Finance Forms E-8 and E-5, http://www.dof.ca.gov/Forecasting/Demographics/Estimates/E-8/ and http://www.dof.ca.gov/Forecasting/Demographics/Estimates/E-5/; CA Employment Development Department: Labor Market Information, http://www.labormarketinfo.edd.ca.gov/
CONTACT INFORMATION: vitalsigns.info@bayareametro.gov

METHODOLOGY NOTES (across all datasets for this indicator): Time spent in congestion measures the hours drivers spend in congestion on freeway facilities, based on traffic data. In recent years, data for the Bay Area come from INRIX, a company that collects real-time traffic information from a variety of sources, including mobile-phone data and other GPS locator devices. The data provide traffic speeds on the region's highways. Using historical INRIX data (and similar internal datasets for some of the earlier years), MTC calculates an annual time series of vehicle hours spent in congestion in the Bay Area. Time spent in congestion is defined as the average daily hours spent in congestion on Tuesdays, Wednesdays, and Thursdays during peak traffic months on freeway facilities. This indicator focuses on weekdays, given that traffic congestion is generally greater on those days; it does not capture congestion on local streets due to data unavailability. The indicator emphasizes recurring delay (as opposed to also including non-recurring delay), capturing the extent of delay caused by routine traffic volumes rather than by unusual circumstances. Recurring delay is identified by setting a threshold of consistent delay greater than 15 minutes on a specific freeway segment at vehicle speeds below 35 mph.
This definition is consistent with longstanding practice by MTC, Caltrans, and the U.S. Department of Transportation, as speeds below 35 mph result in significantly less efficient traffic operations. 35 mph is the threshold at which vehicle throughput is greatest; speeds either greater or less than 35 mph reduce throughput. The methodology therefore measures the extra travel time experienced relative to 35 mph rather than relative to the posted speed limit. As a mathematical example of how the indicator is calculated on a segment basis: 1,000 vehicles each traveling on a congested segment for a quarter hour (15 minutes) [1,000 vehicles x 1/4 hour congestion per vehicle = 250 hours of congestion] is equivalent to 100 vehicles each traveling on a congested segment for 2.5 hours [100 vehicles x 2.5 hours congestion per vehicle = 250 hours of congestion]. In this way, the measure captures the impacts of both slow speeds and heavy traffic volumes. MTC calculates two measures of delay: congested delay, which occurs when speeds are below 35 miles per hour, and total delay, which occurs when speeds are below the posted speed limit. To illustrate, if 1,000 vehicles travel at 30 miles per hour on a one-mile segment, this represents 4.76 vehicle hours of congested delay [(1,000 vehicles x 1 mile / 30 miles per hour) - (1,000 vehicles x 1 mile / 35 miles per hour) = 33.33 vehicle hours - 28.57 vehicle hours = 4.76 vehicle hours]. If the posted speed limit on the segment is 60 miles per hour, total delay is 16.67 vehicle hours [(1,000 vehicles x 1 mile / 30 miles per hour) - (1,000 vehicles x 1 mile / 60 miles per hour) = 33.33 vehicle hours - 16.67 vehicle hours = 16.67 vehicle hours]. The data sources listed above were used to calculate per-capita and per-worker statistics.
Top congested corridors are ranked by total vehicle hours of delay, meaning that the highlighted corridors reflect a combination of slow speeds and heavy traffic volumes.
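The two delay formulas above can be captured in a single function; this is an illustrative sketch (the function name and signature are mine, not MTC's), reproducing the worked example:

```python
def vehicle_hours_delay(vehicles, miles, speed_mph, reference_mph):
    """Vehicle-hours of delay on a segment: travel time at the observed
    speed minus travel time at a reference speed, summed over vehicles.
    Use reference_mph=35 for congested delay, or the posted speed limit
    for total delay."""
    return vehicles * miles / speed_mph - vehicles * miles / reference_mph

# The worked example above: 1,000 vehicles at 30 mph on a one-mile segment.
congested = vehicle_hours_delay(1000, 1.0, 30, 35)  # ~4.76 vehicle hours
total = vehicle_hours_delay(1000, 1.0, 30, 60)      # ~16.67 vehicle hours
```

Note that congested delay is always a subset of total delay whenever the posted limit exceeds 35 mph, since the 35 mph reference subtracts a larger free-flow travel time.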
- API
TRMM Microwave Imager (TMI) Gridded Oceanic Rainfall Product (TRMM Product 3A11) V7
nasa-test-0.demo.socrata.com | Last Updated 2015-07-20T04:52:56.000Z

The Tropical Rainfall Measuring Mission (TRMM) is a joint U.S.-Japan satellite mission to monitor tropical and subtropical precipitation and to estimate its associated latent heating. TRMM was successfully launched on November 27, 1997, at 4:27 PM (EST) from the Tanegashima Space Center in Japan. The TRMM Microwave Imager (TMI) is a nine-channel passive microwave radiometer that builds on the heritage of the Special Sensor Microwave/Imager (SSM/I) instrument flown aboard the Defense Meteorological Satellite Program (DMSP) platforms. Microwave radiation is emitted by the Earth's surface and by water droplets within clouds. However, when layers of large ice particles are present in upper cloud regions - a condition highly correlated with heavy rainfall - microwave radiation tends to scatter at frequencies above 19 GHz. The TMI detects radiation at five frequencies chosen to discriminate among these processes, thus revealing the likelihood of rainfall. The key to accurate retrieval of rainfall rates by this method is the deduction of cloud precipitation consistent with the radiation measurement at each frequency. The TMI frequencies are 10.65, 19.35, 37, and 85.5 GHz (dual polarization) and 21 GHz (vertical polarization only). The TMI Gridded Oceanic Rainfall Product, also known as TMI Emission, consists of 5 degree by 5 degree monthly oceanic rainfall maps using TMI Level 1 data as input. Statistics of the monthly rainfall - including the number of samples, standard deviation, goodness of fit (of the brightness temperature histogram to the lognormal rainfall distribution function), and rainfall probability - are also included in the output for each grid box. Spatial coverage is between 40 degrees North and 40 degrees South owing to the 35 degree inclination of the TRMM satellite.
TMI brightness temperature histograms at 1 degree intervals are generated based on the 19, 21 and 19-21 GHz combination channels obtained from the Level 1B (calibrated brightness temperature) TMI product. Monthly rainfall indices over the ocean are derived by statistically matching monthly histograms of brightness temperatures with model calculated rainfall Probability Distribution Functions (PDF) using the 19-21 GHz combination data. Retrieved monthly rainfall data must pass a quality test based on the quality of the PDF fit. The data are stored in the Hierarchical Data Format (HDF), which includes both core and product specific metadata applicable to the TMI measurements. A file contains 12 arrays of rainfall data and supporting information each of dimension 72 x 16, with a file size of about 40 KB (uncompressed). The HDF-EOS "grid" structure is used to accommodate the actual geophysical data arrays. There is 1 file of TMI 3A11 data produced per month.
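The 72 x 16 array shape and the 40N-40S coverage described above imply a simple mapping from a point to its 5-degree grid box. The sketch below assumes rows run south to north starting at 40S and columns run eastward starting at 180W; that ordering is my assumption for illustration, not something stated in the product description:

```python
def grid_index(lat, lon):
    """Map a (lat, lon) point to a (row, col) box in the 72 x 16
    5-degree x 5-degree 3A11 grid. Assumes row 0 starts at 40S and
    column 0 starts at 180W (an illustrative convention; the actual
    product ordering may differ)."""
    if not -40.0 <= lat < 40.0:
        raise ValueError("3A11 spatial coverage is 40S to 40N")
    row = int((lat + 40.0) // 5)            # 0..15, south to north
    col = int((lon + 180.0) % 360.0 // 5)   # 0..71, eastward from 180W
    return row, col
```

With this convention, a monthly rainfall array `rain[16][72]` would be indexed as `rain[row][col]` for any ocean point inside the coverage band.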
- API
NEW HORIZONS SDC PLUTO CRUISE RAW V2.0
data.nasa.gov | Last Updated 2023-01-26T20:54:05.000Z

This data set contains raw data taken by the New Horizons Student Dust Counter (SDC) instrument during the Pluto cruise mission phase. This is VERSION 2.0 of this data set. SDC collected science data intermittently during the hibernation years following the Jupiter encounter, designated as the PLUTOCRUISE phase. There were also Annual Checkouts (ACOs), STIM calibrations, Noise calibrations, and an anomaly in November 2007. SDC's main science data collection periods were during hibernation. During ACOs, science data are taken intermittently, but the user must be careful in analyzing these data, since there is usually more activity on the spacecraft than during hibernation. STIM and Noise refer to scheduled calibrations; they were done with a regular cadence of one per year after the Jupiter encounter but occurred sporadically in the early years of the mission. Note that some SDC data files have the same start and stop time and a zero exposure time. The reason is that the start and stop times for SDC data files are the event times of the first and last events in the file, so for files that contain a single event these two values are the same. The changes in Version 2.0 were a re-run of the ancillary data in the data product, updated geometry from newer SPICE kernels, minor editing of the documentation, catalogs, etc., and resolution of liens from the December 2014 review, plus those from the May 2016 review of the Pluto Encounter data sets. New observations added with this version (V2.0) include ongoing cruise observations from August 2014 through January 2015.
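The zero-exposure caveat above suggests a trivial screening step when batch-processing SDC file records; the function name and the (start, stop) record shape here are illustrative, not PDS label fields:

```python
def usable_files(file_records):
    """Keep only SDC file records with nonzero exposure time. Since a
    file's start/stop times are the event times of its first and last
    events, single-event files have start == stop and zero exposure,
    per the caveat above. Records are (start_seconds, stop_seconds)."""
    return [(start, stop) for start, stop in file_records if stop > start]

# Example: a single-event file (equal times) is screened out.
records = [(1000.0, 1000.0), (1000.0, 4600.0)]
kept = usable_files(records)  # [(1000.0, 4600.0)]
```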
- API
GPM, DPR, GMI Level 3 Combined Precipitation V03
nasa-test-0.demo.socrata.com | Last Updated 2015-07-20T05:03:54.000Z

There are uncertainties in the interpretation of data from any one of the instruments (KuPR, KaPR, and GMI). By using data from multiple instruments, further constraints on the solution of the precipitation structure improve the final product. The purpose of 3CMB is to give a daily and monthly accumulation of the 2BCMB precipitation product. The 3CMB product is a daily and monthly accumulation of the 2BCMB orbital combined product at two grid sizes, 5 x 5 degrees (G1) and 0.25 x 0.25 degrees (G2). Grid G1 contains the following physical measurements of general interest, among others. Grid G2 contains the same groups, but it is on the ltH x lnH grid and does not have the surface type (st) dimension or the histograms (see the dimension definitions below). Below, conditional products represent means based upon precipitating areas only; unconditional products represent means for raining and non-raining areas combined. Probabilities represent the number of raining observations divided by the total number of raining and non-raining observations.
- precipTotRate (Group in G1) - Conditional mean rate for all precipitation phases (ice, liquid, mixed-phase).
  * count (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Mean, mm/h.
  * stdev (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Standard deviation for the monthly product; mean of squares for the daily product, mm/h.
  * hist (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st x bin): Histogram.
- precipLiqRate (Group in G1) - Conditional mean rate for liquid precipitation.
  * count (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Mean, mm/h.
  * stdev (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Standard deviation for the monthly product; mean of squares for the daily product, mm/h.
  * hist (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st x bin): Histogram.
- precipTotWaterContent (Group in G1) - Conditional mean water content for all precipitation phases.
  * count (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Mean, g/m3.
  * stdev (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Standard deviation for the monthly product; mean of squares for the daily product, g/m3.
  * hist (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st x bin): Histogram.
- precipLiqWaterContent (Group in G1) - Conditional mean liquid water content.
  * count (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Mean, g/m3.
  * stdev (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Standard deviation for the monthly product; mean of squares for the daily product, g/m3.
  * hist (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st x bin): Histogram.
- precipTotDm (Group in G1) - Conditional mass-weighted mean particle diameter.
  * count (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Mean, mm.
  * stdev (4-byte float, array size: ltL x lnL x ns x hgt x rt x st): Standard deviation for the monthly product; mean of squares for the daily product, mm.
  * hist (4-byte integer, array size: ltL x lnL x ns x hgt x rt x st x bin): Histogram.
- precipTotRateDiurnal (Group in G1) - Conditional mean total surface precipitation rate indexed by local time.
  * count (4-byte integer, array size: ltL x lnL x ns x st x tim): Count.
  * mean (4-byte float, array size: ltL x lnL x ns x st x tim): Mean, mm/h.
  * stdev (4-byte float, array size: ltL x lnL x ns x st x tim): Standard deviation for the monthly product; mean of squares for the daily product, mm/h.
- surfPrecipTotRateDiurnalAllObs (4-byte integer, array size: ltL x lnL x ns x st x tim): Number of total observa...
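Because the daily product's `stdev` field actually stores the mean of squares (per the field descriptions above), a daily standard deviation has to be derived via Var[x] = E[x^2] - E[x]^2. A minimal sketch, assuming the two fields are read for the same grid cell:

```python
import math

def stdev_from_daily_fields(mean, mean_of_squares):
    """Recover the (population) standard deviation for a daily-product
    grid cell whose 'stdev' slot holds the mean of squares, using
    Var[x] = E[x^2] - E[x]^2. Clamps tiny negative variances that can
    arise from floating-point rounding."""
    variance = mean_of_squares - mean * mean
    return math.sqrt(max(variance, 0.0))

# Example: rates [1, 2, 3] mm/h -> mean 2, mean of squares 14/3.
sd = stdev_from_daily_fields(2.0, 14.0 / 3.0)  # population stdev of [1, 2, 3]
```

This also explains the convention: storing the mean of squares (rather than a finished standard deviation) lets daily files be aggregated into monthly statistics without revisiting the orbital data.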
- API
Classification of Aeronautics System Health and Safety Documents
data.nasa.gov | Last Updated 2020-01-29T01:57:57.000Z

Most complex aerospace systems have many text reports on safety, maintenance, and associated issues. The Aviation Safety Reporting System (ASRS) spans several decades and contains over 700,000 reports. The Aviation Safety Action Plan (ASAP) contains over 12,000 reports from various airlines. Problem categorizations have been developed for both ASRS and ASAP to enable identification of system problems. However, the volume and complexity of these repositories make human analysis difficult. Multiple experts are needed, and they often disagree on classifications; even the same person has classified the same document differently at different times as their experience evolved. Consistent classification is necessary to support tracking trends in problem categories over time, so a decision support system that performs consistent document classification quickly over large repositories would be useful. We discuss the results of two algorithms we have developed to classify ASRS and ASAP documents. The first is Mariana, a support vector machine (SVM) that uses simulated annealing to optimize the model's hyperparameters. The second is a classifier built on top of nonnegative matrix factorization (NMF), which attempts to find a model of document features that add up in various combinations to form documents. We tested both methods on ASRS and ASAP documents, with the latter categorized two different ways. We illustrate the potential of NMF to provide document features that are interpretable and indicative of topics. We also briefly discuss the tool into which we have incorporated Mariana in order to let human experts provide feedback on the document categorizations.
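As a hedged sketch of the NMF idea described above - not the authors' pipeline, and using a generic Lee-Seung multiplicative-update solver on a toy document-term matrix rather than real ASRS/ASAP text - the factorization yields per-topic term features (rows of H) and per-document topic weights (rows of W) that can feed a downstream classifier:

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor a nonnegative document-term matrix V (docs x terms) as
    V ~ W @ H, where W (docs x k) holds per-document topic weights and
    H (k x terms) holds per-topic term features, using Lee-Seung
    multiplicative updates to minimize squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update term features
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update document weights
    return W, H

# Toy corpus: two documents using "engine" terms, two using "runway" terms.
V = np.array([[3.0, 2.0, 0.0, 0.0],
              [2.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 3.0],
              [0.0, 0.0, 1.0, 2.0]])
W, H = nmf(V, k=2)
# Rows of W are low-dimensional, nonnegative document features suitable
# as classifier inputs; rows of H are inspectable term-weight "topics".
```

The interpretability claim in the text comes from the nonnegativity constraint: topics can only add term weight, never cancel it, so each row of H reads as a list of characteristic terms.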
- API
Average Trend Percent by WRIA
data.wa.gov | Last Updated 2022-09-07T23:23:38.000Z

Summer Low Flow Trend Indicator results, statewide, updated through Oct 2013. This information is updated annually with an additional year of flow data. These results are provided to the Puget Sound Partnership for their Vital Signs (http://www.psp.wa.gov/vitalsigns/summer_stream_flows.php) and to the Governor's Salmon Recovery Office for the "State of Salmon in Watersheds" report (http://stateofsalmon.wa.gov/statewide/indicators/water-quantity). The attached document "WR Indicator Outcomes Memo - 10-24-10.pdf" describes the methodology for developing these indicators. The attached document "Low Flow Indicator Metadata.pdf" describes the contents of each column. Dept. of Ecology home page: http://www.ecy.wa.gov/ Disclaimer: Information provided by Ecology on this Web site is accurate to the best of Ecology's knowledge and is subject to change on a regular basis, without notice. Ecology cannot and does not warrant that the information on this Web site is absolutely current, although every effort is made to keep it as current as possible. Ecology cannot and does not warrant the accuracy of these documents beyond the source documents, although every attempt is made to work from authoritative sources. Links to related sites are provided as a courtesy, but Ecology is not responsible for their availability, content, or policies.
- API
Summer Low Flow Indicator 1975-2017
data.wa.gov | Last Updated 2024-04-15T17:06:22.000Z

Summer Low Flow Trend Indicator results, statewide, updated through Oct 2017. This information is updated annually with an additional year of flow data. These results are provided to the Puget Sound Partnership for their Vital Signs (http://www.psp.wa.gov/vitalsigns/summer_stream_flows.php) and to the Governor's Salmon Recovery Office for the "State of Salmon in Watersheds" report (http://stateofsalmon.wa.gov/statewide/indicators/water-quantity). The attached document "WR Indicator Outcomes Memo - 10-24-10.pdf" describes the methodology for developing these indicators. The attached document "Low Flow Indicator Metadata.pdf" describes the contents of each column. Dept. of Ecology home page: http://www.ecy.wa.gov/ Disclaimer: Information provided by Ecology on this Web site is accurate to the best of Ecology's knowledge and is subject to change on a regular basis, without notice. Ecology cannot and does not warrant that the information on this Web site is absolutely current, although every effort is made to ensure that it is kept as current as possible. Ecology cannot and does not warrant the accuracy of these documents beyond the source documents, although every attempt is made to work from authoritative sources. Links to related sites are provided as a courtesy, but Ecology is not responsible for their availability, content or policies.
- API
Queens Libraries (Map)
data.cityofnewyork.us | Last Updated 2023-12-13T02:09:57.000Z

Map of Queens Public Libraries with Hours and Locations
- API
2018 Kansas City Energy and Water Consumption Benchmarking for Community-Wide Buildings v1.0
data.kcmo.org | Last Updated 2019-07-26T15:14:25.000Z

The first version of the 2018 energy and water consumption data sent to the City by owners of buildings 50,000 sq ft or greater, using the Energy Star Portfolio Manager tool. The data are required by the Energy Empowerment Ordinance in Kansas City, Missouri. The data were collected in 2019 and may be appended as new submissions come in.