There are a number of advantages in modeling such scenarios within a formal Bayesian framework. First, expert opinion and literature results can be included in the analysis through the definition of a prior. Second, probabilities can be obtained from the posterior distribution. Lastly, it is relatively easy to specify a hierarchical structure on data and parameters, which is useful for making predictions and handling missing data. However, the authors only model a single pollutant at a time; this is a common limitation of many epidemiological studies.
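The way a prior encodes expert opinion and yields posterior probabilities can be illustrated with a minimal conjugate example. This is a Python sketch with made-up numbers (the models discussed in the paper are far richer): a Beta prior on the probability that a day exceeds a pollution threshold, updated with binomial count data.

```python
# Illustrative Beta-Binomial update (hypothetical numbers).
# A Beta(a, b) prior is conjugate to the Binomial likelihood, so the
# posterior is simply Beta(a + successes, b + failures).

def posterior_params(a_prior, b_prior, exceedances, days):
    """Update a Beta prior with binomial exceedance counts."""
    return a_prior + exceedances, b_prior + (days - exceedances)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Expert opinion encoded as a prior: exceedances are rare (~10%).
a, b = posterior_params(1, 9, exceedances=30, days=100)
print(beta_mean(a, b))  # posterior mean shifts from the prior toward the data
```

The posterior mean lands between the prior belief (10%) and the observed frequency (30%), weighted by the relative strength of prior and data.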
In recent years, however, there has been increasing interest in the combined effect of multiple species.
Therefore, the modeling work presented here incorporates data from multiple species in an effort to capture the more holistic nature and impact of exposure. In addition to pollution exposure, it is also important to consider environmental factors, some of which may have an exacerbating effect and others a mitigating one. Extreme temperatures and ultraviolet radiation, for instance, can exacerbate preexisting health conditions such as cardiovascular and pulmonary diseases, and can trigger new ones by aggravating exposure levels.
Precipitation and wind, instead, can facilitate the deposition and dispersion of pollutants. Basu and Samet reviewed a number of US studies on the relationship between elevated ambient temperature and mortality and concluded that the risk is higher for people with preexisting cardiovascular and respiratory diseases, but that age and socioeconomic status are also factors worth considering. Bell et al. examined the interaction between temperature and ozone; estimating this interaction in light of observed health outcomes is not trivial, as these are highly correlated variables. Jhun et al. found a statistically significant increase in ozone mortality risk during high-temperature days, and also that air conditioning seemed to have a mitigating effect.
As such, the present study includes certain environmental variables beyond air pollution, in an effort to capture more of the factors that combine to influence mortality. Zheng et al. observed that, in general, air quality is good in two cases: high temperature and low humidity, and high pressure and low temperature. De Sario et al. noted that these factors increase allergic reactions, reduce lung function, and can cause lung cancer and even premature death.
Computational challenges are also apparent, since analyzing large databases requires long processing times and can only be handled with an adequate computing infrastructure in place. In this context, graphical models such as Bayesian Networks (BNs) seem to be a perfect fit. These models are increasingly being used in computer science (Hu et al.). BNs can be inferred from data, their construction involves identifying the conditional independence structures among variables (or their joint probability distribution), and they are schematized as a Directed Acyclic Graph (DAG) in which features are represented as nodes and dependencies as edges.
By definition, a DAG cannot contain cycles, and an edge can only have one direction. The advantage of using such models is threefold: (1) they limit the number of possible dependencies to analyze during the structure learning task, making the network easier to inspect visually; (2) they speed up computations (the fewer the dependencies, the less computational time is required); and (3) they allow the introduction of expert knowledge. The whitelist declares edges that are forced to be present in the DAG, while the blacklist declares edges that are excluded from the DAG.
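These properties can be illustrated with a small sketch (Python with illustrative variable names, not the authors' code, which is in R): a DAG stored as a set of directed edges, a cycle check, and whitelist/blacklist constraints applied while candidate edges are considered.

```python
# Illustrative sketch: DAG construction under whitelist/blacklist
# constraints, rejecting any edge that would create a directed cycle.

def creates_cycle(edges, new_edge):
    """Return True if adding new_edge (u, v) would create a directed cycle."""
    u, v = new_edge
    # A cycle appears iff u is already reachable from v.
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(w for (x, w) in edges if x == node)
    return False

whitelist = {("pm25", "mortality")}   # edges forced to be present
blacklist = {("mortality", "pm25")}   # edges excluded a priori

dag = set(whitelist)                  # start from the forced edges
for candidate in [("temp", "pm25"), ("mortality", "pm25"), ("pm25", "temp")]:
    if candidate in blacklist or creates_cycle(dag, candidate):
        continue                      # skip forbidden or cycle-creating edges
    dag.add(candidate)

print(sorted(dag))
```

Here the blacklisted edge is skipped outright, and the reversed edge ("pm25", "temp") is rejected because it would close a cycle with the already-accepted ("temp", "pm25").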
The main goal and novelty of this work consists of investigating whether BNs can be used with large volumes of data that are heterogeneous in terms of spatiotemporal scale and data types, and still be able to identify, interpret, and predict the dependence structure between these predictors and health outcomes (mortality). As described in section 1, this has significant value in understanding the complex links between environmental factors and health outcomes, as well as in building the evidence base that informs policy interventions.
To the best knowledge of the authors, the variety and volume of information taken into account has not previously been analyzed for the English regions, which constitutes an additional novelty of this work. The remainder of the paper is organized as follows: in section 2 we describe the case study and data availability, as well as how we propose to assemble the database of available information, handle missing values, and build the BN for both continuous and categorical variables. The results of the structure learning process, inference, and predictions are presented in section 3, the overall results are discussed in section 4, and the main conclusions and future work are summarized in section 5.
We follow the guidelines suggested by Marcot et al. We take into account the air quality monitoring stations in England (UK) as the locations of interest and extract the weather, geography, and health data at these locations over the study period. The recorded features are summarized in Table 1. The table also contains the names of the variables as used by the model, which is helpful for reading the network and interpreting the results. More details are given in the following subsections.
We identified relevant features for air pollution modeling by reviewing the literature in the field.
The most reliable measurements in England are available from a network of stations which record hourly measurements. These stations were identified using the rdefra R package (Vitolo et al.). The geographical distribution of data points is shown in Figure 1. Spatially, the stations are evenly distributed across regions but concentrated in urban areas within each region (see Figure 2).
In particular, complete observations are only recorded for the period — in the urban area of the Greater London Authority (Environment Type: Background Urban Traffic). Individual stations usually monitor only a subset of pollutants. From the same source, we also obtained the exact location (latitude, longitude, and altitude) and environment type of each station. Health outcomes are described in terms of mortality rates.
These are obtained from mortality counts (per thousand individuals) split by region, age, and day of occurrence. Data are provided by the Office for National Statistics and aggregated within each region of England (Office for National Statistics, a,b). The data on mortality were filtered for the over-60 age demographic in order to focus on a vulnerable population group. The choice of this age bracket further necessitated aggregating the data at the larger scale of the English regions, because the use of any smaller geographical scale could have resulted in the identification of individuals, which is not permitted with this data set.
Lastly, mortality rates are calculated by dividing the mortality counts by yearly regional population estimates obtained from the MYEDE data set (Office for National Statistics). The population estimates are considered constant over the year, and the mortality rates constant over each region. A BN is built to identify the dependence structure among exposure and outcome variables.
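As a minimal illustration of this calculation (Python, with hypothetical figures not taken from the data set):

```python
# Hypothetical example: mortality rate per thousand individuals for the
# over-60 population of one region on one day, with the yearly population
# estimate held constant over the year as described above.

def mortality_rate_per_thousand(daily_deaths, regional_population):
    """Deaths per thousand individuals in the region for one day."""
    return 1000.0 * daily_deaths / regional_population

rate = mortality_rate_per_thousand(daily_deaths=120,
                                   regional_population=1_500_000)
print(rate)  # daily rate per thousand for this illustrative region
```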
We estimate goodness of fit using the Bayesian Information Criterion (Schwarz), which approximates the posterior probability of the DAG. In particular, we assume that discrete features are categorical. The distribution of continuous features conditional on the respective parents is assumed to take the form of a set of classic linear regression models (one for each combination of the possible values of the discrete parents) in which the continuous parents take the role of explanatory variables. In order to avoid inadvertently introducing bias in the BN, we decided not to declare any whitelist, but only a blacklist marking some edges as unrealistic.
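This conditional linear Gaussian assumption can be sketched as follows (a Python sketch on synthetic data with illustrative variable names; the paper's pipeline is in R): one ordinary least-squares regression of the continuous child on its continuous parents is fitted for each configuration of the discrete parent.

```python
import numpy as np

# Sketch of a conditional linear Gaussian node: for each value of the
# discrete parent (here, a hypothetical station TYPE), fit a separate
# linear regression of the continuous child (pm25) on its continuous
# parent (temp). Data are synthetic.

rng = np.random.default_rng(0)
data = []
for site_type, slope in [("urban", 2.0), ("rural", 0.5)]:
    temp = rng.uniform(0, 30, size=200)                # continuous parent
    pm25 = 5.0 + slope * temp + rng.normal(0, 1, 200)  # continuous child
    data.append((site_type, temp, pm25))

models = {}
for site_type, temp, pm25 in data:
    X = np.column_stack([np.ones_like(temp), temp])    # intercept + parent
    coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    models[site_type] = coef                           # one fit per configuration

print({k: np.round(v, 1) for k, v in models.items()})
```

Each discrete-parent configuration gets its own intercept and slope, which is exactly what allows the relationship between continuous variables to differ across, say, urban and rural stations.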
In particular, topographic variables (latitude, longitude, and altitude) can influence weather and pollutants but not vice versa; pollutants can influence weather and mortality but not vice versa; and mortality rates cannot influence any of the other variables. The assembled data set consists of almost 50 million records with 24 features. Training and testing data sets are publicly available (Vitolo et al.). The gaps between each pair of consecutive weather observations in the training set are filled using linear interpolation.
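The blacklist implied by these rules can be sketched as follows (Python; the variable names are illustrative and the actual feature set differs):

```python
# Illustrative blacklist construction from the domain rules above:
# nothing may point into topography, weather and mortality may not point
# into pollutants, and mortality may not point anywhere.

topography = ["latitude", "longitude", "altitude"]
weather    = ["temperature", "wind", "rain"]
pollutants = ["pm25", "no2", "o3"]
outcome    = ["cvd60"]

blacklist = set()
# No variable may point into a topographic variable.
blacklist |= {(v, t) for t in topography
              for v in weather + pollutants + outcome}
# Weather and mortality may not point into pollutants.
blacklist |= {(v, p) for p in pollutants for v in weather + outcome}
# Mortality may not influence any other variable.
blacklist |= {(outcome[0], v) for v in weather + pollutants + topography}

print(len(blacklist))  # number of forbidden directed edges
```

Structure learning then searches only over edges not in this set, which both encodes domain knowledge and shrinks the search space.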
The Structural EM algorithm is initialized with the empty graph, the blacklist, and the complete observations, and the parameters of the empty structure are fitted using their maximum likelihood estimates. Two steps are then repeated until convergence. In the expectation step, missing values in each observation are imputed with their maximum a posteriori estimates conditional on the variables that are observed (using the predict function); in the maximization step, a new graph is learned from the now-complete data. Imputation and learning are repeated until the learned DAG and the imputed data are stable, i.e., they no longer change between iterations. The training data set is used to generate the graph model. After learning the structure and parameters of the BN, we assess its accuracy through the analysis of residuals and validate it by comparing predicted variables under unobserved conditions provided by the testing data set.
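A highly simplified sketch of this impute/refit loop follows (Python; per-column means stand in for the full Bayesian network model, so this shows only the shape of the algorithm, not the Structural EM itself):

```python
import numpy as np

# Simplified impute/refit loop: missing entries are imputed from the
# current model (here, just per-column means), the model is refitted,
# and the two steps repeat until the imputed data stop changing.

rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, size=(100, 3))
mask = rng.random(data.shape) < 0.1            # ~10% missing at random
data[mask] = np.nan

filled = np.where(mask, 0.0, data)             # crude initialization
for _ in range(50):
    means = filled.mean(axis=0)                # "learning" step: refit model
    new_filled = np.where(mask, means, data)   # "imputation" step
    if np.allclose(new_filled, filled):        # stable -> converged
        break
    filled = new_filled

print(np.round(filled.mean(axis=0), 2))        # column means after convergence
```

In the real algorithm the imputation is conditional on all observed variables through the current DAG, and the refit step relearns the graph structure itself rather than simple column means.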
The method described above produces data sets whose size depends on the number of monitoring stations and the temporal coverage of the network. This has a strong impact on the performance of the analysis, which can take a long time (if it is feasible at all) for a data set made of several million records and tens of features on an average desktop machine. Determining the most appropriate technologies to employ, both in terms of hardware and software, is crucial in this respect. Without going into much detail on the parallelization, which is beyond the scope of this paper, horizontal scaling was essential to build the database and run the Structural EM algorithm, while vertical scaling was used for both exploratory analysis and verification of results.
We developed this analysis pipeline in the R programming language because of the availability of libraries implementing most of the required algorithms. These libraries have been thoroughly tested and, in most cases, are considered the reference implementations of the methods they provide.
The calculation did not fully converge; however, across successive iterations the structures differed by a maximum of only three arcs, while at least 68 arcs were in common (see Table 2). The DAG obtained from the last iteration (Figure 3) was used for the subsequent analysis. This structure clearly detects the hierarchical structure of the different scales of observation, with mortality over the age of 60 (CVD60), observed at the regional scale, related to the geographical location (Region) and varying with time (year, season, and month).
Within each region, the proximity to urban areas (encoded in the TYPE and ZONE variables) and the time of the year affect the concentration of pollutants. These, in turn, influence the weather. The accuracy of the model is assessed by analyzing the residuals between the observed variables and those predicted by the model. In Table 3, the root mean square error (RMSE) is used to summarize the average deviation of the estimates from the actual values.
The RMSE here is normalized to make it comparable across variables. As the training and testing data sets were split based on the Year, we disregarded this information when predicting on the testing data set. The table shows that the RMSE for pollution and weather variables in the training and testing sets is generally very similar. This suggests that the model fits well in sample but also has good predictive power when tested out of sample.
Clear exceptions are the geographical variables (longitude, latitude, and altitude), for which errors increase noticeably from the training set to the testing set. The error associated with the mortality rates also increases considerably.
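For reference, a normalized RMSE of the kind used here can be computed as follows (a Python sketch; normalization by the observed range is one common choice, and the paper's exact normalization may differ):

```python
import numpy as np

# Normalized RMSE: dividing by the range of the observations makes the
# error comparable across variables measured on different scales.

def nrmse(observed, predicted):
    """RMSE divided by the range of the observed values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

print(round(nrmse([0, 5, 10], [1, 5, 9]), 3))
```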