Non-rigid image registration using…
Updated:
September 4, 2017
We propose a novel non-rigid image registration algorithm built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered. Unlike most existing deep learning based registration methods, which learn spatial transformations from training data with known corresponding transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between the fixed and deformed moving images, similar to conventional registration algorithms. At the same time, our method learns FCNs that encode the spatial transformations at the same spatial resolution as the images to be registered, rather than learning coarse-grained transformation information. The registration is implemented in a multi-resolution framework to jointly optimize and learn spatial transformations and FCNs at different resolutions with deep self-supervision through standard feedforward and backpropagation computation. Since our method simultaneously optimizes and learns the spatial transformations, it can be used directly to register a pair of images, and registering a set of images also serves as a training procedure for the FCNs, so that the trained FCNs can then register new images by feedforward computation alone, without any optimization. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.
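A minimal sketch of the core idea, assuming PyTorch; `RegFCN`, `warp`, and the single-resolution 2D setup are illustrative, not the paper's architecture. A small FCN predicts a dense displacement field, and the field is optimized by minimizing an image-wise dissimilarity (here MSE) between the fixed and warped moving images, so registering a pair of images is itself a training loop.

```python
# Illustrative sketch only: FCN-predicted dense displacement optimized by an
# image-wise similarity objective (single resolution, 2D, MSE dissimilarity).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))  # 2-channel displacement (dx, dy)

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, disp):
    # Build a normalized sampling grid and add the predicted displacement.
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + disp.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
model = RegFCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                      # registration doubles as training
    disp = model(fixed, moving)
    loss = F.mse_loss(warp(moving, disp), fixed)  # image-wise dissimilarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```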
Estimation of interventional effec…
Updated:
September 3, 2017
The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear. While several studies have focused on developing prediction models with meaningful parameters, the causal relationships between the predictors and the actual prediction have not been considered. Here, we connect the underlying causal structure of a data generation process with the causal structure of a prediction mechanism. To achieve this, we propose a framework that identifies the feature with the greatest causal influence on the prediction and estimates the causal intervention on a feature that is necessary to obtain a desired prediction. The general concept of the framework places no restrictions on data linearity; however, we focus here on an implementation for linear data. The framework's applicability is evaluated using artificial data and demonstrated using real-world data.
An Ensemble Classifier for Predict…
Updated:
August 24, 2017
Prediction of disease onset from patient survey and lifestyle data is quickly becoming an important tool for diagnosing a disease before it progresses. In this study, data from the National Health and Nutrition Examination Survey (NHANES) questionnaire are used to predict the onset of type II diabetes. An ensemble model combining the output of five classification algorithms was developed to predict the onset of diabetes based on 16 features. The ensemble model had an AUC of 0.834, indicating high performance.
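As an illustration of the ensemble idea only (not the study's actual pipeline), the sketch below builds a soft-voting ensemble of five common classifiers and evaluates it by AUC; the feature matrix `X` and labels `y` are random placeholders standing in for the 16 NHANES-derived features.

```python
# Hedged sketch: soft-voting ensemble of five classifiers scored by AUC.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = np.random.rand(500, 16), np.random.randint(0, 2, 500)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier()),
                ("gb", GradientBoostingClassifier()),
                ("knn", KNeighborsClassifier()),
                ("nb", GaussianNB())],
    voting="soft")  # average predicted probabilities across the five models
ensemble.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```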
An Improved Neural Segmentation Me…
Updated:
August 16, 2017
Neural segmentation has a great impact on the smooth implementation of local anesthesia surgery. The networks currently used for this segmentation include U-NET [1] and SegNet [2]. The U-NET network has a short training time and few training parameters, but its depth is limited. The SegNet network has a deeper structure, but it requires a longer training time and more training samples. In this paper, we propose an improved U-NET neural network for the segmentation that deepens the original structure by introducing a residual network structure. Compared with U-NET and SegNet, the improved U-NET network has fewer training parameters, a shorter training time, and a marked improvement in segmentation quality, making it well suited to neural segmentation applications.
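A minimal residual block sketch, assuming PyTorch and illustrative rather than the paper's exact architecture; it shows how an identity shortcut lets a U-Net stage be deepened without many extra parameters.

```python
# Sketch of a residual block that could deepen a U-Net encoder/decoder stage.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)  # identity shortcut around two convs

x = torch.rand(1, 64, 128, 128)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```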
Document Image Binarization with F…
Updated:
August 10, 2017
Binarization of degraded historical manuscript images is an important pre-processing step for many document processing tasks. We formulate binarization as a pixel classification learning task and apply a novel Fully Convolutional Network (FCN) architecture that operates at multiple image scales, including full resolution. The FCN is trained to optimize a continuous version of the Pseudo F-measure metric, and an ensemble of FCNs outperforms the competition winners on 4 of 7 DIBCO competitions. The same binarization technique can also be applied to other domains, such as Palm Leaf Manuscripts, with good performance. We analyze the performance of the proposed model with respect to the architectural hyperparameters, the size and diversity of the training data, and the choice of input features.
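A hedged sketch of a differentiable F-measure-style loss over per-pixel probabilities; the actual continuous Pseudo F-measure additionally applies per-pixel recall weights, which are omitted here.

```python
# Sketch: soft (differentiable) F-measure loss over per-pixel probabilities.
import torch

def soft_fmeasure_loss(probs, target, eps=1e-7):
    # probs, target: tensors in [0, 1] of shape (N, 1, H, W); 1 = ink pixel.
    tp = (probs * target).sum()
    precision = tp / (probs.sum() + eps)
    recall = tp / (target.sum() + eps)
    f = 2 * precision * recall / (precision + recall + eps)
    return 1.0 - f  # minimize 1 - F to maximize the F-measure

probs = torch.rand(2, 1, 32, 32, requires_grad=True)
target = (torch.rand(2, 1, 32, 32) > 0.5).float()
soft_fmeasure_loss(probs, target).backward()
```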
Land Cover Classification from Mul…
Updated:
August 2, 2017
Sustainability of the global environment depends on accurate land cover information over large areas. Even with the increased number of satellite systems and sensors acquiring data with improved spectral, spatial, radiometric and temporal characteristics, and the new data distribution policy, most existing land cover datasets were derived from pixel-based, single-date multi-spectral remotely sensed images with low accuracy. To improve accuracy, the bottleneck is the development of an accurate and effective image classification technique. By incorporating and utilizing the complete multi-spectral, multi-temporal and spatial information in remote sensing images and considering their inherent spatial and sequential interdependence, we propose a new patch-based RNN (PB-RNN) system tailored for multi-temporal remote sensing data. The system is designed around the distinctive characteristics of multi-temporal remote sensing data. In particular, it uses multi-temporal-spectral-spatial samples and handles pixels contaminated by clouds or shadow in the multi-temporal data series. On a Florida Everglades ecosystem study site covering an area of 771 square kilometers, the proposed PB-RNN system achieves a significant improvement in classification accuracy over a pixel-based RNN system, a pixel-based single-imagery NN system, a pixel-based multi-images NN system, a patch-based single-imagery NN system and a patch-based multi-images NN system. For example, the proposed system achieves 97.21% classification accuracy while a pixel-based single-imagery NN system achieves 64.74%. By using methods like the proposed PB-RNN, we believe that much more accurate land cover datasets can be produced efficiently over large areas.
Application of machine learning fo…
Updated:
August 1, 2017
Quick and accurate medical diagnosis is crucial for the successful treatment of a disease. Using machine learning algorithms, we have built two models to predict a hematologic disease based on laboratory blood test results. In one predictive model we used all available blood test parameters, and in the other a reduced set that is usually measured upon patient admission. Both models produced good results, with prediction accuracies of 0.88 and 0.86 when considering the list of five most probable diseases, and 0.59 and 0.57 when considering only the most probable disease. The models did not differ significantly from each other, which indicates that the reduced set of parameters contains a relevant fingerprint of a disease, expanding the utility of the model for general practitioners' use and indicating that there is more information in blood test results than physicians recognize. In a clinical test we showed that the accuracy of our predictive models was on a par with that of hematology specialists. Our study is the first to show that a machine learning predictive model based on blood tests alone can be successfully applied to predict hematologic diseases, and it could open up unprecedented possibilities in medical diagnosis.
FCN-rLSTM: Deep Spatio-Temporal Ne…
Updated:
August 1, 2017
In this paper, we develop deep spatio-temporal neural networks to sequentially count vehicles from low-quality videos captured by city cameras (citycams). Citycam videos have low resolution, low frame rate, high occlusion and large perspective distortion, causing most existing methods to lose their efficacy. To overcome these limitations and incorporate the temporal information of traffic video, we design a novel FCN-rLSTM network that jointly estimates vehicle density and vehicle count by connecting fully convolutional neural networks (FCN) with long short-term memory networks (LSTM) in a residual learning fashion. This design leverages the strengths of FCN for pixel-level prediction and the strengths of LSTM for learning complex temporal dynamics. The residual learning connection reformulates the vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates network training. To preserve feature map resolution, we propose a Hyper-Atrous combination to integrate atrous convolution in the FCN and combine feature maps of different convolution layers. FCN-rLSTM enables a refined feature representation and a novel end-to-end trainable mapping from pixels to vehicle count. We extensively evaluated the proposed method on different counting tasks with three datasets, and the experimental results demonstrate its effectiveness and robustness. In particular, FCN-rLSTM reduces the mean absolute error (MAE) from 5.31 to 4.21 on TRANCOS and from 2.74 to 1.53 on WebCamT. The training process is accelerated by a factor of five on average.
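A rough sketch of the residual-learning idea, assuming PyTorch and not the authors' code: the LSTM regresses a correction to the per-frame sum of the predicted density map, so the final count is the density sum plus a learned residual.

```python
# Sketch: per-frame count = sum of density map + LSTM-predicted residual.
import torch
import torch.nn as nn

class CountHead(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, density_maps, frame_feats):
        # density_maps: (N, T, H, W); frame_feats: (N, T, feat_dim)
        base_count = density_maps.sum(dim=(2, 3))           # (N, T)
        residual, _ = self.lstm(frame_feats)
        return base_count + self.fc(residual).squeeze(-1)   # residual learning

head = CountHead()
counts = head(torch.rand(2, 5, 60, 80), torch.rand(2, 5, 128))
print(counts.shape)  # torch.Size([2, 5])
```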
Building Detection from Satellite …
Updated:
July 27, 2017
In the last several years, remote sensing technology has opened up the possibility of performing large-scale building detection from satellite imagery. Our work is among the first to create population density maps from building detection at this scale. The scale of our work on population density estimation via high-resolution satellite images raises many issues that we address in this paper. The first is data acquisition. Labeling buildings from satellite images is a hard problem, one at which we found our labelers to be only about 85% accurate. Because there is a tradeoff between quantity and quality of labels, we designed two separate labeling policies, one for training sets and one for test sets, since our requirements for the two set types are quite different. We also trained weakly supervised footprint detection models with the classification labels, and semi-supervised approaches with a small number of pixel-level labels, which are very expensive to procure.
Graphical posterior predictive cla…
Updated:
June 6, 2018
In this study, we present a multi-class graphical Bayesian predictive classifier that incorporates the uncertainty in the model selection into the standard Bayesian formalism. For each class, the dependence structure underlying the observed features is represented by a set of decomposable Gaussian graphical models. Emphasis is then placed on Bayesian model averaging, which takes full account of the class-specific model uncertainty by averaging over the posterior graph model probabilities. An explicit evaluation of the model probabilities is well known to be infeasible. To address this issue, we consider the particle Gibbs strategy of Olsson et al. (2018b) for posterior sampling from decomposable graphical models, which utilizes the Christmas tree algorithm of Olsson et al. (2018a) as proposal kernel. We also derive a strong hyper Markov law, which we call the hyper normal Wishart law, that allows the resultant Bayesian calculations to be performed locally. The proposed predictive graphical classifier shows superior performance compared to the ordinary Bayesian predictive rule that does not account for model uncertainty, as well as to a number of out-of-the-box classifiers.
Marchenko-based target replacement…
Updated:
June 7, 2020
In seismic monitoring, one is usually interested in the response of a changing target zone, embedded in a static inhomogeneous medium. We introduce an efficient method which predicts reflection responses at the earth's surface for different target-zone scenarios, from a single reflection response at the surface and a model of the changing target zone. The proposed process consists of two main steps. In the first step, the response of the original target zone is removed from the reflection response, using the Marchenko method. In the second step, the modelled response of a new target zone is inserted between the overburden and underburden responses. The method fully accounts for all orders of multiple scattering and, in the elastodynamic case, for wave conversion. For monitoring purposes, only the second step needs to be repeated for each target-zone model. Since the target zone covers only a small part of the entire medium, the proposed method is much more efficient than repeated modelling of the entire reflection response.
Investigation of the Relationship …
Updated:
June 20, 2017
How does the Sun affect air pollution on the Earth? Few papers address this question. This work investigates the relationship between air pollution and solar activity using geophysical and environmental data from the period 2000-2016. Solar activity may well have an impact on air pollution, but the relationship is very weak and indirect. The Pearson correlation, Spearman rank correlation, Kendall's rank correlation, and conditional probability were used to analyze the air pollution index (API), air quality index (AQI), sunspot number (SSN), radio flux at a wavelength of 10.7 cm (F10.7), and total solar irradiance (TSI). The analysis shows that the correlation coefficient between API and SSN is weak ($-0.17<r<0.32$) with complex variation. The main results are: (1) for cities with higher air pollution, the probability of high API first increases with SSN, reaches a maximum, and then decreases; (2) for cities with lower air pollution, the API has a lower correlation with SSN; (3) the relationships between API and F10.7, and between API and TSI, are similar to that between API and SSN. Solar activity has a direct effect on TSI and the energetic particle flux, and an indirect, long-term effect on the lower atmosphere and weather near the Earth. All of these factors contribute to air pollution on the Earth.
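For illustration only, the three correlation measures named above computed with SciPy on placeholder series standing in for aligned monthly API and SSN values.

```python
# Sketch: Pearson, Spearman and Kendall correlations between two series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
api = rng.random(200)           # placeholder for monthly air pollution index
ssn = rng.random(200)           # placeholder for monthly sunspot number

print("Pearson :", stats.pearsonr(api, ssn)[0])
print("Spearman:", stats.spearmanr(api, ssn)[0])
print("Kendall :", stats.kendalltau(api, ssn)[0])
```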
Concurrent risks of dam failure du…
Updated:
May 31, 2017
The chance (or probability) of a dam failure can change for various reasons, such as structural degradation, the impacts of climate change and land-use change. Similarly, the consequences of dam failure (flooding) can change for many reasons, such as growth in the population in areas below a dam. Consequently, both the chance that a dam might fail and the likely consequences of that failure can change over time. It is therefore crucial that reservoir safety risk analysis methods and decision-making processes are able to support (as a minimum) what-if testing (or sensitivity testing) to take these changes over time into account and gauge their effect on the estimated risk of dam failure. The consequences of a dam failure relate to the vulnerability and exposure of the receptors (for example, people, property and environment) to floodwater. The probability of dam failure also varies with the age, design and construction of the dam. Spillway failure may be caused by the dissipation of energy from water flowing down the spillway, and embankment erosion (scour) may be caused by a dam overtopping. The occurrence of these events depends on the dam design and the likelihood of extreme rainfall, and, in the case of overtopping, on wind-driven waves on the reservoir surface. In this study the meteorological situations of notable recent events, i.e. the Boltby (North Yorkshire) incident of 19 June 2005, in which the dam almost overtopped, and the spillway failure of the Ulley Dam near Rotherham at the end of June 2007, are studied. The WRF numerical model will be used to indicate how these meteorological situations might be maximized, and be coupled with the occurrence of other failure modes, such as the likelihood of internal dam failure assessed from previous work by government panel engineers.
The impact of energetic electron p…
Updated:
June 1, 2017
In 2008 a sequence of geomagnetic storms occurred, triggered by high-speed solar wind streams from coronal holes. Improved estimates of precipitating fluxes of energetic electrons are derived from measurements on board the NOAA/POES 18 satellite using a new analysis technique. These fluxes are used to quantify the direct impact of energetic electron precipitation (EEP) during solar minimum on middle-atmospheric hydroxyl (OH) measured from the Aura satellite. During winter, localized longitudinal density enhancements in OH are observed over northern Russia and North America at corrected geomagnetic latitudes poleward of 55$^{\circ}$. Although the northern Russia OH enhancement is closely associated with increased EEP at these longitudes, the strength and location of the North America enhancement appear to be unrelated to EEP. This OH density enhancement is likely due to vertical motion induced by atmospheric wave dynamics that transports air rich in atomic oxygen and atomic hydrogen downward into the middle atmosphere, where it plays a role in the formation of OH. In the Southern Hemisphere, localized enhancements of the OH density over West Antarctica can be explained by a combination of enhanced EEP, due to the local minimum in Earth's magnetic field strength, and atmospheric dynamics. Our findings suggest that even during solar minimum there is substantial EEP-driven OH production. However, to quantify this effect, detailed knowledge of where and when the precipitation occurs is required in the context of the background atmospheric dynamics.
Neuron Segmentation Using Deep Com…
Updated:
May 31, 2017
In this paper, we consider the problem of automatically segmenting neuronal cells in dual-color confocal microscopy images. This problem is a key task in various quantitative analysis applications in neuroscience, such as tracing cell genesis in Danio rerio (zebrafish) brains. Deep learning, especially using fully convolutional networks (FCN), has profoundly changed segmentation research in biomedical imaging. We face two major challenges in this problem. First, neuronal cells may form dense clusters, making it difficult to correctly identify all individual cells (even for human experts). Consequently, segmentation results of the known FCN-type models are not accurate enough. Second, pixel-wise ground truth is difficult to obtain; only a limited amount of approximate instance-wise annotation can be collected, which makes the training of FCN models quite cumbersome. We propose a new FCN-type deep learning model, called deep complete bipartite networks (CB-Net), and a new scheme for leveraging approximate instance-wise annotation to train our pixel-wise prediction model. Evaluated on seven real datasets, our proposed CB-Net model outperforms state-of-the-art FCN models and produces neuron segmentation results of remarkable quality.
Enhancement of SSD by concatenatin…
Updated:
May 26, 2017
We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in terms of both accuracy and speed. The performance of a deep network is known to improve as the number of feature maps increases. However, it is difficult to improve performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing the layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing weights across the classifier networks, a property that makes training faster and improves generalization. On the Pascal VOC 2007 test set, trained with the VOC 2007 and VOC 2012 training sets, the proposed network with an input size of 300 x 300 achieved 78.5% mAP (mean average precision) at a speed of 35.0 FPS (frames per second), while the network with a 512 x 512 input achieved 80.8% mAP at 16.6 FPS on an Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN, and it is faster than Faster-RCNN and RFCN.
Assessing the role of the spatial …
Updated:
May 24, 2017
The analysis of benthic assemblages is a valuable tool for describing the ecological status of transitional water ecosystems, but species are extremely sensitive and respond to both microhabitat and seasonal differences. The identification of changes in the composition of the macrobenthic community in specific microhabitats can then be used as an "early warning" of environmental changes that may affect the economic and ecological importance of lagoons through their provision of Ecosystem Services. From a conservation point of view, the appropriate definition of the spatial aggregation level of microhabitats or local communities is of crucial importance. The main objective of this work is to assess the role of the spatial scale in the analysis of lagoon biodiversity. First, we analyze the variation in sample coverage for alternative aggregations of the monitoring stations in three lagoons of the Po River Delta. Then, we analyze the variation of a class of entropy indices using mixed effects models, properly accounting for the fixed effects of biotic and abiotic factors and random effects governed by nested sources of variability corresponding to alternative definitions of local communities. Finally, we address biodiversity partitioning with a generalized diversity measure, namely the Tsallis entropy, for alternative definitions of the local communities. The main results obtained with the proposed statistical protocol are presented, discussed and framed in the ecological context.
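A small sketch of the generalized diversity measure mentioned above: the Tsallis entropy of order q for a vector of relative abundances, which reduces to the Shannon entropy as q approaches 1; the abundance counts below are hypothetical.

```python
# Sketch: Tsallis entropy H_q = (1 - sum_i p_i^q) / (q - 1) for abundances p.
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()                 # normalize abundances
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))      # Shannon limit as q -> 1
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

community = [12, 7, 3, 1, 1]               # hypothetical abundance counts
print(tsallis_entropy(community, q=2.0))   # Simpson-type index for q = 2
```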
Towards seamless multi-view scene …
Updated:
May 23, 2017
In this paper, we discuss and review how combined multi-view imagery, from satellite to street level, can benefit scene analysis. Numerous works exist that merge information from remote sensing and images acquired from the ground for tasks like land cover mapping, object detection, or scene understanding. What makes the combination of overhead and street-level images challenging is the strongly varying viewpoint, scale, illumination, sensor modality and time of acquisition. Direct (dense) matching of images on a per-pixel basis is thus often impossible, and one has to resort to alternative strategies that are discussed in this paper. We review recent works that attempt to combine images taken from the ground and overhead views for purposes like scene registration, reconstruction, or classification. Three methods that represent the wide range of potential approaches and applications (change detection, image orientation, and tree cataloging) are described in detail. We show that cross-fertilization between remote sensing, computer vision and machine learning is very valuable to make the best of geographic data available from Earth Observation sensors and ground imagery. Despite its challenges, we believe that integrating these complementary data sources will lead to major breakthroughs in Big GeoData.
Modeling the impact of rain on pop…
Updated:
April 28, 2017
Environmental pollution, comprising air, water and soil pollution, has emerged as a serious problem in the past two decades. Air pollution is caused by contamination of the air due to various natural and anthropogenic activities. Growing air pollution has diverse adverse effects on human health and other living species. However, a significant reduction in the concentration of air pollutants has been observed during the rainy season. Recently, a number of studies have been performed to understand the mechanism by which rain removes air pollutants. These studies have found that rain helps remove many air pollutants from the environment. In this paper, we propose a mathematical model to investigate the role of rain in the removal of air pollutants and its subsequent impacts on the human population.
Satellite altimetry reveals spatia…
Updated:
May 3, 2017
The main properties of the wave climate in the seasonally ice-covered Baltic Sea and its decadal changes since 1990 are estimated from satellite altimetry data. The data set of significant wave heights (SWH) from all nine existing satellites, cleaned and cross-validated against in situ measurements, shows an overall very consistent picture. A comparison with visual observations shows good correspondence, with correlation coefficients of 0.6-0.8. The annual mean SWH reveals a tentative increase of 0.005 m yr$^{-1}$, but higher quantiles behave in a cyclic manner with a timescale of 10-15 yr. Changes in the basin-wide average SWH have a strong meridional pattern: an increase in the central and western parts of the sea and a decrease in the east. This pattern is likely caused by a rotation of wind directions rather than by an increase in wind speed.
Anthropogenic influences on ground…
Updated:
May 1, 2017
The groundwater flow system in the Culebra Dolomite Member (Culebra) of the Permian Rustler Formation is a potential radionuclide release pathway from the Waste Isolation Pilot Plant (WIPP), the only deep geological repository for transuranic waste in the United States. In early conceptual models of the Culebra, groundwater levels were not expected to fluctuate markedly, except in response to long-term climatic changes, with response times on the order of hundreds to thousands of years. Recent groundwater pressures measured in monitoring wells record more than 25 m of drawdown. The fluctuations are attributed to pumping activities at a privately owned well that may be associated with the Permian Basin hydrocarbon industry's demand for water. The unprecedented magnitude of drawdown provides an opportunity to quantitatively assess the influence of unplanned anthropogenic forcings near the WIPP. Spatially variable realizations of Culebra saturated hydraulic conductivity and storativity were used to develop groundwater flow models to estimate a pumping rate for the private well and investigate its effect on advective transport. Simulated drawdown shows reasonable agreement with observations (average Model Efficiency coefficient = 0.7). Steepened hydraulic gradients associated with the pumping reduce estimates of conservative particle travel times across the domain by one-half and shift the intersection of the average particle track with the compliance boundary by more than two kilometers. The value of the transient simulations conducted for this study lies in their ability to (i) improve understanding of the Culebra groundwater flow system and (ii) challenge the notion of time-invariant land use in the vicinity of the WIPP.
Does forest replacement increase w…
Updated:
April 28, 2017
The forest plays an important role in watershed hydrology by regulating the transfer of water within the system, but its role in maintaining the watershed's hydrological regime is still a controversial issue. We therefore use the Soil and Water Assessment Tool (SWAT) model to simulate land use scenarios in a watershed. In one of these scenarios we identified, through GIS techniques, the Environmentally Sensitive Areas (ESAs) of the watershed that have been degraded, and we considered these areas as protected by forest cover. This scenario was then compared with the current land use scenario with respect to watershed sediment yield and hydrological regime. The results showed a 54% reduction in sediment yield between the scenarios, while the watershed water yield was reduced by 19.3%.
Prediction of Daytime Hypoglycemic…
Updated:
April 27, 2017
Daytime hypoglycemia should be accurately predicted to achieve normoglycemia and to avoid disastrous situations. Hypoglycemia, an abnormally low blood glucose (BG) level, is divided into daytime hypoglycemia and nocturnal hypoglycemia, and most studies of hypoglycemia prevention deal with the nocturnal case. In this paper, we propose new predictor variables to predict daytime hypoglycemia using continuous glucose monitoring (CGM) data, and we apply classification and regression trees (CART) as the prediction method. The independent variables of our prediction model are the rate of decrease from a peak and the absolute BG level at the decision point. The evaluation results show that our model was able to detect almost 80% of hypoglycemic events 15 min in advance, which is higher than existing methods under similar conditions. The proposed method might achieve real-time prediction and could be embedded into a BG monitoring device.
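A minimal sketch, not the study's code, of a CART classifier on the two predictor variables described above; the data and the labeling rule are synthetic placeholders.

```python
# Sketch: CART on (rate of decrease from peak, absolute BG level) features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
rate_of_decrease = rng.normal(0.5, 0.3, 300)   # e.g. mg/dL per min from last peak
bg_level = rng.normal(110, 25, 300)            # absolute BG at decision point
X = np.column_stack([rate_of_decrease, bg_level])
y = (bg_level - 15 * rate_of_decrease < 80).astype(int)  # toy label rule

cart = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(cart.predict([[0.8, 95.0]]))  # predict an event 15 min ahead (toy input)
```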
Hierarchical 3D fully convolutiona…
Updated:
April 21, 2017
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of full volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of seven abdominal structures (artery, vein, liver, spleen, stomach, gallbladder, and pancreas) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training organ-specific models. To this end, we propose a two-stage, coarse-to-fine approach that trains an FCN model to roughly delineate the organs of interest in the first stage (seeing $\sim$40% of the voxels within a simple, automatically generated binary mask of the patient's body). We then use the predictions of the first-stage FCN to define a candidate region that is used to train a second FCN. This step reduces the number of voxels the FCN has to classify to $\sim$10% while maintaining a high recall of $>$99%. The second-stage FCN can thus focus on a more detailed segmentation of the organs. We utilize training and validation sets consisting of 281 and 50 clinical CT images, respectively. Our hierarchical approach provides an improvement in the Dice score of 7.5 percentage points per organ on average on our validation set. We furthermore test our models on a completely unseen data collection, acquired at a different hospital, that includes 150 CT scans with three anatomical labels (liver, spleen, and pancreas). In such challenging organs as the pancreas, our hierarchical approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset.
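A conceptual sketch of the candidate-region step, assuming NumPy/SciPy and not the authors' pipeline: the stage-1 prediction is thresholded and dilated into a mask that restricts the voxels presented to the second-stage FCN.

```python
# Sketch: build a coarse-to-fine candidate region from stage-1 probabilities.
import numpy as np
from scipy import ndimage

stage1_prob = np.random.rand(64, 64, 64)        # placeholder coarse FCN output
candidate = stage1_prob > 0.5                    # rough organ delineation
candidate = ndimage.binary_dilation(candidate, iterations=3)  # keep a margin

ct_volume = np.random.rand(64, 64, 64)
stage2_input = np.where(candidate, ct_volume, 0.0)  # stage 2 sees masked voxels only
print(candidate.mean())                          # fraction of voxels retained
```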
Land Cover Classification via Mult…
Updated:
April 13, 2017
Modern earth observation programs produce huge volumes of satellite image time series (SITS) that can be used to monitor geographical areas through time. How to efficiently analyze this kind of information is still an open question in the remote sensing field. Recently, deep learning methods have proved suitable for remote sensing data, mainly for scene classification (i.e., Convolutional Neural Networks, CNNs, on single images), while only very few studies involve temporal deep learning approaches (i.e., Recurrent Neural Networks, RNNs) for remote sensing time series. In this letter we evaluate the ability of Recurrent Neural Networks, in particular the Long Short-Term Memory (LSTM) model, to perform land cover classification considering multi-temporal spatial data derived from a time series of satellite images. We carried out experiments on two different datasets considering both pixel-based and object-based classification. The results show that Recurrent Neural Networks are competitive with state-of-the-art classifiers and may outperform classical approaches in the presence of low-represented and/or highly mixed classes. We also show that the alternative feature representation generated by the LSTM can improve the performance of standard classifiers.
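A sketch, assuming PyTorch, of a per-pixel LSTM classifier over a multi-temporal sequence of spectral band values; the band count, sequence length and class count are illustrative, not those of the letter's datasets.

```python
# Sketch: classify a pixel from its time series of spectral band vectors.
import torch
import torch.nn as nn

class LSTMLandCover(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, timesteps, bands)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])             # classify from the last hidden state

logits = LSTMLandCover()(torch.rand(4, 12, 10))  # e.g. 12 acquisition dates
print(logits.shape)  # torch.Size([4, 8])
```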
Using convolutional networks and s…
Updated:
September 13, 2017
Urban planning applications (energy audits, investment, etc.) require an understanding of built infrastructure and its environment, i.e., both low-level, physical features (amount of vegetation, building area and geometry etc.), as well as higher-level concepts such as land use classes (which encode expert understanding of socio-economic end uses). This kind of data is expensive and labor-intensive to obtain, which limits its availability (particularly in developing countries). We analyze patterns in land use in urban neighborhoods using large-scale satellite imagery data (which is available worldwide from third-party providers) and state-of-the-art computer vision techniques based on deep convolutional neural networks. For supervision, given the limited availability of standard benchmarks for remote-sensing data, we obtain ground truth land use class labels carefully sampled from open-source surveys, in particular the Urban Atlas land classification dataset of 20 land use classes across $\sim$300 European cities. We use this data to train and compare deep architectures which have recently shown good performance on standard computer vision tasks (image classification and segmentation), including on geospatial data. Furthermore, we show that the deep representations extracted from satellite imagery of urban environments can be used to compare neighborhoods across several cities. We make our dataset available for other machine learning researchers to use for remote-sensing applications.
Prediction of infectious disease e…
Updated:
March 31, 2017
Accurate and reliable predictions of infectious disease dynamics can be valuable to public health organizations that plan interventions to decrease or prevent disease transmission. A great variety of models have been developed for this task, using different model structures, covariates, and targets for prediction. Experience has shown that the performance of these models varies; some tend to do better or worse in different seasons or at different points within a season. Ensemble methods combine multiple models to obtain a single prediction that leverages the strengths of each model. We considered a range of ensemble methods that each form a predictive density for a target of interest as a weighted sum of the predictive densities from component models. In the simplest case, equal weight is assigned to each component model; in the most complex case, the weights vary with the region, prediction target, week of the season when the predictions are made, a measure of component model uncertainty, and recent observations of disease incidence. We applied these methods to predict measures of influenza season timing and severity in the United States, both at the national and regional levels, using three component models. We trained the models on retrospective predictions from 14 seasons (1997/1998 - 2010/2011) and evaluated each model's prospective, out-of-sample performance in the five subsequent influenza seasons. In this test phase, the ensemble methods showed overall performance that was similar to the best of the component models, but offered more consistent performance across seasons than the component models. Ensemble methods offer the potential to deliver more reliable predictions to public health decision makers.
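A minimal sketch of the simplest ensemble described above: a single predictive density formed as an equally weighted sum of component-model densities; the Gaussian components below are placeholders standing in for the actual component models.

```python
# Sketch: equal-weight mixture of component predictive densities.
import numpy as np
from scipy import stats

x = np.linspace(0, 20, 500)                 # e.g. week of peak incidence
components = [stats.norm(8, 2), stats.norm(10, 3), stats.norm(12, 1.5)]
weights = np.ones(len(components)) / len(components)   # equal-weight case

ensemble_density = sum(w * c.pdf(x) for w, c in zip(weights, components))
print(ensemble_density.sum() * (x[1] - x[0]))  # integrates to ~1
```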
Survey techniques, detection proba…
Updated:
January 31, 2018
Carnivores are important components of ecosystems, with wide-ranging effects on ecological communities. We studied the carnivore community in the Apostle Islands National Lakeshore (APIS), where the presence, distribution, and populations of carnivores were largely unknown. We developed a systematic method to deploy camera traps across a grid while targeting fine-scale features to maximize carnivore detection (Appendix 1), including systematic methods for organizing and tagging the photo data (Appendix 2). We deployed 88 cameras on 13 islands from 2014-2016. We collected 92,694 photographs across 18,721 trap nights, including 3,591 wildlife events and 1,070 carnivore events. We had a mean of 6.6 cameras per island (range 2-30), and our camera density averaged 1.23 (range 0.74-3.08) cameras/km$^2$. We detected 27 species and 10 terrestrial carnivores, including surprising detections of American martens (Martes americana) and gray wolves (Canis lupus). The mean richness of carnivores on an island was 3.23 (range 0-10). The best single variable to explain carnivore richness on the Apostle Islands was island size, while the best model combined island size (positive correlation) and distance from the mainland (negative correlation) (R$^2$ = 0.92). Relative abundances for carnivores ranged from a low of 0.01 for weasels (Mustela spp.) to a high of 2.64 for black bears (Ursus americanus), and the relative abundance of a species was significantly correlated with the number of islands on which it was found. Carnivore occupancy ranged from lows of 0.09 for gray wolves and 0.11 for weasels to a high of 0.82 for black bears. A fuller understanding of APIS ecology will require ongoing monitoring of carnivores to evaluate temporal dynamics, as well as related ecological evaluations (e.g. small mammal dynamics, plant community dynamics) to understand trophic effects.
Algebraic Variety Models for High-…
Updated:
March 28, 2017
We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e. each data point is a solution to a system of polynomial equations. In this case the original matrix is possibly high-rank, but it becomes low-rank after mapping each column to a higher dimensional space of monomial features. Many well-studied extensions of linear models, including affine subspaces and their union, can be described by a variety model. In addition, varieties can be used to model a richer class of nonlinear quadratic and higher degree curves and surfaces. We study the sampling requirements for matrix completion under a variety model with a focus on a union of affine subspaces. We also propose an efficient matrix completion algorithm that minimizes a convex or non-convex surrogate of the rank of the matrix of monomial features. Our algorithm uses the well-known "kernel trick" to avoid working directly with the high-dimensional monomial matrix. We show the proposed algorithm is able to recover synthetically generated data up to the predicted sampling complexity bounds. The proposed algorithm also outperforms standard low rank matrix completion and subspace clustering techniques in experiments with real data.
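A hedged illustration of the kernel-trick point, not the paper's completion algorithm: the Gram matrix of inner products between lifted monomial features of the data columns can be evaluated with a polynomial kernel on the original columns (up to fixed scalings of the cross terms), without ever forming the high-dimensional monomial matrix.

```python
# Sketch: polynomial-kernel Gram matrix in place of explicit monomial features.
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

X = np.random.rand(5, 200)        # data matrix: 200 columns in 5 dimensions
d = 3                             # degree of the monomial lifting
# 200 x 200 Gram matrix over the columns, instead of explicitly building the
# much higher-dimensional degree-3 monomial feature matrix.
G = polynomial_kernel(X.T, degree=d, gamma=1.0, coef0=1.0)
print(G.shape)                    # (200, 200)
```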
Forecasting the magnitude and onse…
Updated:
April 13, 2018
El Nino is probably the most influential climate phenomenon on interannual time scales. It affects the global climate system and is associated with natural disasters and serious consequences in many aspects of human life. However, forecasts of the onset, and in particular the magnitude, of El Nino are still not accurate at lead times of more than half a year. Here, we introduce a new forecasting index based on network links representing the similarity of low-frequency temporal temperature anomaly variations between different sites in the El Nino 3.4 region. We find that significant upward trends and peaks in this index forecast with high accuracy both the onset and the magnitude of El Nino approximately 1 year ahead. The forecasting procedure we developed improves in particular the prediction of the magnitude of El Nino and is validated on several datasets, some spanning more than a century.
Learning to Predict: A Fast Re-con…
Updated:
March 25, 2017
Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple method to build multimodal representations by learning a language-to-vision mapping and using its output to build multimodal embeddings. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently re-constructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped vectors not only implicitly encode multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more "human-like" judgments---particularly in zero-shot settings.
Response and Feedback of Cloud Diu…
Updated:
March 25, 2017
By reflecting solar radiation and reducing longwave emissions, clouds regulate the earth's radiation budget, impacting atmospheric circulation and cloud dynamics. Given the diurnal fluctuation of shortwave and longwave radiation, a shift in the cloud cycle phase (CCP) may lead to substantial feedbacks to the climate system. While most efforts have focused on the overall cloud feedback, the response of CCP to climate change has received much less attention. Here we analyze the variations of CCP using both long-term global satellite records and general circulation models (GCM) simulations to evaluate their impacts on the earth's energy budget. Satellite records show that in warm periods the CCP shifts earlier in the morning over the oceans and later in the afternoon over the land. Although less marked and with large inter-model spread, similar shifting patterns also occur in GCMs over the oceans with non-negligible CCP feedbacks. Over the land, where the GCM results are not conclusive, our findings are supported by atmospheric boundary layer models. A simplified radiative model further suggests that such shifts may in turn cause reduced reflection of solar radiation, thus inducing a positive feedback on climate. The crucial role of the cloud cycle calls for increased attention to the temporal evolution of the cloud diurnal cycle in climate models.
Changing measurements or changing …
Updated:
August 1, 2017
1. Animal movement patterns contribute to our understanding of variation in breeding success and survival of individuals, and the implications for population dynamics. 2. Over time, sensor technology for measuring movement patterns has improved. Although older technologies may be rendered obsolete, the existing data are still valuable, especially if new and old data can be compared to test whether a behaviour has changed over time. 3. We used simulated data to assess the ability to quantify and correctly identify patterns of seabird flight lengths under observational regimes used in successive generations of tracking technology. 4. Care must be taken when comparing data collected at differing time-scales, even when using inference procedures that incorporate the observational process, as model selection and parameter estimation may be biased. In practice, comparisons may only be valid when degrading all data to match the lowest resolution in a set. 5. Changes in tracking technology that lead to aggregation of measurements at different temporal scales make comparisons challenging. We therefore urge ecologists to use synthetic data to assess whether accurate parameter estimation is possible for models comparing disparate data sets before conducting analyses such as responses to environmental changes or the assessment of management actions.
Biodiversity, extinctions and evol…
Updated:
March 16, 2017
We investigate the formation of stable ecological networks where many species share the same resource. We show that such a stable ecosystem naturally occurs as a result of extinctions. We obtain an analytical relation for the number of coexisting species and a relation describing how many species may go extinct as a result of a sharp environmental change. We introduce a special parameter, a combination of the species traits and resource characteristics used in the model formulation, that describes the pressure on the system to converge through extinctions. When this stress parameter is large, the species traits concentrate around certain values, so the parameter determines the final level of biodiversity of the system. Moreover, we show that the dynamics of this limit system can be described by simple differential equations.
Comparison of the Deep-Learning-Ba…
Updated:
March 15, 2017
This paper presents an end-to-end pixelwise fully automated segmentation of the head sectioned images of the Visible Korean Human (VKH) project based on Deep Convolutional Neural Networks (DCNNs). By converting classification networks into Fully Convolutional Networks (FCNs), a coarse prediction map, with smaller size than the original input image, can be created for segmentation purposes. To refine this map and to obtain a dense pixel-wise output, standard FCNs use deconvolution layers to upsample the coarse map. However, upsampling based on deconvolution increases the number of network parameters and causes loss of detail because of interpolation. On the other hand, dilated convolution is a new technique introduced recently that attempts to capture multi-scale contextual information without increasing the network parameters while keeping the resolution of the prediction maps high. We used both a standard FCN and a dilated convolution based FCN for semantic segmentation of the head sectioned images of the VKH dataset. Quantitative results showed approximately 20% improvement in the segmentation accuracy when using FCNs with dilated convolutions.
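A small sketch, assuming PyTorch, of the property exploited above: a dilated 3x3 convolution keeps the feature-map resolution and parameter count of a standard 3x3 convolution while enlarging its receptive field.

```python
# Sketch: standard vs. dilated 3x3 convolution (same output size, same #params).
import torch
import torch.nn as nn

x = torch.rand(1, 16, 128, 128)
standard = nn.Conv2d(16, 16, kernel_size=3, padding=1)             # 3x3 field
dilated = nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2)  # 5x5 field
print(standard(x).shape, dilated(x).shape)  # both torch.Size([1, 16, 128, 128])
```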
Global and Brazilian carbon respon…
Updated:
March 10, 2017
The El Ni\~{n}o Modoki in 2010 led to historic droughts in Brazil. We quantify the global and Brazilian carbon response to this event using the NASA Carbon Monitoring System Flux (CMS-Flux) framework. Satellite observations of CO$_2$, CO, and solar induced fluorescence (SIF) are ingested into a 4D-variational assimilation system driven by carbon cycle models to infer spatially resolved carbon fluxes including net ecosystem exchange, biomass burning, and gross primary productivity (GPP). The global net carbon flux tendency, defined as the 2011 minus 2010 flux difference and positive for net fluxes into the atmosphere, was estimated to be -1.60 PgC, while the Brazilian tendency was -0.24 $\pm$ 0.11 PgC. This estimate is broadly within the uncertainty of previous aircraft-based estimates restricted to the Amazonian basin. The biomass burning tendency in Brazil was -0.24 $\pm$ 0.036 PgC, which implies a near-zero change of the net ecosystem production (NEP). This near-zero change of the NEP is the result of quantitatively comparable increases in GPP (0.34 $\pm$ 0.20) and respiration in Brazil. Comparisons of the component fluxes in Brazil to the global fluxes show a complex balance between regional contributions to individual carbon fluxes, such as biomass burning, and their net contribution to the global carbon balance: the Brazilian biomass burning tendency is a significant contributor to the global biomass burning tendency, but the Brazilian net flux tendency is not a dominant contributor to the global tendency. These results show the potential of multiple satellite observations to help quantify the spatially resolved response of productivity and respiration fluxes to climate variability.
Approach to spatial estimation of …
Updated:
March 1, 2017
The spatio-temporal evaluation to characterize meteorological droughts was based on monthly accumulated precipitation data for 1996-2005 from 20 meteorological stations distributed across the Coello River basin. The precipitation data were preprocessed with consistency tests to correct or remove over- and underestimated values. To estimate missing precipitation data, three geostatistical interpolation methods derived from kriging were compared: Ordinary Kriging, and Ordinary CoKriging associated with secondary variables from a Digital Elevation Model and from TRMM satellite data. To select the statistical method, the fit of each interpolation was compared against three reference stations through three quality tests: Root Mean Square Error (RMSE), Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). Two of the three tests favored Ordinary CoKriging with altitude as the secondary variable (COK+DEM). Droughts were then evaluated and characterized from the interpolated precipitation series using the Standardized Precipitation Index (SPI) at monthly and quarterly scales, calculating the severity, duration, intensity and frequency of droughts. Mapping delimits the regions where the most negative SPI values occur. In the spatio-temporal analysis, January, February, July and August are the driest months of the year. In 1997 the most damaging meteorological drought occurred in the Coello River basin, concentrated mainly in the middle and lower parts of the basin, with a maximum SPI intensity of -2.57.
Probabilistic Reduced-Order Modeli…
Updated:
March 6, 2017
We discuss a Bayesian formulation for coarse-graining (CG) of PDEs whose coefficients (e.g. material parameters) exhibit random, fine-scale variability. The direct solution of such problems requires grids small enough to resolve this fine-scale variability, which unavoidably requires the repeated solution of very large systems of algebraic equations. We establish a physically inspired, data-driven coarse-grained model which learns a low-dimensional set of microstructural features that are predictive of the fine-grained (FG) model response. Once learned, those features provide a sharp distribution over the coarse-scale effective coefficients of the PDE that are most suitable for prediction of the fine-scale model output. This ultimately allows us to replace the computationally expensive FG model by a generative probabilistic model based on evaluating the much cheaper CG model several times. Sparsity-enforcing priors further increase predictive efficiency and reveal the microstructural features that are important in predicting the FG response. Moreover, the model yields probabilistic rather than single-point predictions, which enables quantification of the unavoidable epistemic uncertainty that is present due to the information loss occurring during the coarse-graining process.
A clustering approach to heterogen…
Updated:
February 10, 2017
Change detection in heterogeneous multitemporal satellite images is a challenging and still relatively unexplored topic in remote sensing and earth observation. This paper focuses on the comparison of image pairs covering the same geographical area and acquired by two different sensors, one optical radiometer and one synthetic aperture radar, at two different times. We propose a clustering-based technique to detect changes, identified as clusters that split or merge in the different images. To evaluate the potentials and limitations of our method, we perform experiments on real data. Preliminary results confirm the relationship between splits and merges of clusters and the occurrence of changes. However, it becomes evident that prior, ancillary, or application-specific information must be incorporated to improve the interpretation of the clustering results and to identify the areas of change unambiguously.
In-flight calibration of NOAA POES…
Updated:
January 17, 2017
The MEPED instruments on board the NOAA POES and MetOp satellites have been continuously measuring energetic particles in the magnetosphere since 1978. However, degradation of the proton detectors over time leads to an increase in the energy thresholds of the instrument and poses great challenges to studies of long-term variability in the near-Earth space environment, as well as to a general quantification of the proton fluxes. By comparing monthly mean accumulated integral flux from a new and an old satellite at the same magnetic local time (MLT) and time period, we estimate the change in energy thresholds. The first 12 monthly energy spectra of the new satellite are used as a reference, and the derived monthly correction factors over a year for an old satellite show a small spread, indicating a robust calibration procedure. The method enables us to determine, for the first time, the correction factors also for the highest-energy channels of the proton detector. In addition, we make use of the newest satellite in orbit (MetOp-01) to find correction factors for 2013 for the NOAA 17 and MetOp-02 satellites. Without taking the level of degradation into account, the proton data from one satellite cannot be used quantitatively for more than 2 to 3 years after launch. As the electron detectors are vulnerable to contamination from energetic protons, the corrected proton measurements will be of value for electron flux measurements too. Thus, the correction factors ensure the correctness of both the proton and electron measurements.
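A conceptual sketch with placeholder numbers, not actual MEPED processing: monthly correction factors for a degraded detector are estimated as the ratio of the reference (new) satellite's monthly mean integral flux to the old satellite's flux at the same MLT and period, with a small spread across months indicating a robust estimate.

```python
# Sketch: monthly flux-ratio correction factors for a degraded detector.
import numpy as np

rng = np.random.default_rng(1)
flux_new = rng.lognormal(mean=3.0, sigma=0.2, size=12)   # 12 reference months
flux_old = flux_new / rng.uniform(1.2, 1.4, size=12)     # degraded detector

monthly_factors = flux_new / flux_old        # one factor per month
correction = monthly_factors.mean()          # small spread -> robust estimate
print(correction, monthly_factors.std())
```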
Methods for Mapping Forest Disturb…
Updated:
March 22, 2017
Purpose of review: This paper presents a review of the current state of the art in remote sensing based monitoring of forest disturbances and forest degradation from optical Earth Observation data. Part one comprises an overview of currently available optical remote sensing sensors, which can be used for forest disturbance and degradation mapping. Part two reviews the two main categories of existing approaches: classical image-to-image change detection and time series analysis. Recent findings: With the launch of the Sentinel-2a satellite and available Landsat imagery, time series analysis has become the most promising but also most demanding category of degradation mapping approaches. Four time series classification methods are distinguished. The methods are explained and their benefits and drawbacks are discussed. A separate chapter presents a number of recent forest degradation mapping studies for two different ecosystems: temperate forests with a geographical focus on Europe and tropical forests with a geographical focus on Africa. Summary: The review revealed that a wide variety of methods for the detection of forest degradation is already available. Today, the main challenge is to transfer these approaches to high resolution time series data from multiple sensors. Future research should also focus on the classification of disturbance types and the development of robust up-scalable methods to enable near real time disturbance mapping in support of operational reactive measures.
Conditioning of stochastic 3-D fra…
Updated:
January 7, 2017
The geometry and connectivity of fractures exert a strong influence on the flow and transport properties of fracture networks. We present a novel approach to stochastically generate three-dimensional discrete networks of connected fractures that are conditioned to hydrological and geophysical data. A hierarchical rejection sampling algorithm is used to draw realizations from the posterior probability density function at different conditioning levels. The method is applied to a well-studied granitic formation using data acquired within two boreholes located 6 m apart. The prior models include 27 fractures with their geometry (position and orientation) bounded by information derived from single-hole ground-penetrating radar (GPR) data acquired during saline tracer tests and optical televiewer logs. Eleven cross-hole hydraulic connections between fractures in neighboring boreholes and the order in which the tracer arrives at different fractures are used for conditioning. Furthermore, the networks are conditioned to the observed relative hydraulic importance of the different hydraulic connections by numerically simulating the flow response. Among the conditioning data considered, constraints on the relative flow contributions were the most effective in determining the variability among the network realizations. Nevertheless, we find that the posterior model space is strongly determined by the imposed prior bounds. Strong prior bounds were derived from GPR measurements and helped to make the approach computationally feasible. We analyze a set of 230 posterior realizations that reproduce all data given their uncertainties assuming the same uniform transmissivity in all fractures. The posterior models provide valuable statistics on length scales and density of connected fractures, as well as their connectivity.
Inferring transport characteristic…
Updated:
January 7, 2017
Investigations of solute transport in fractured rock aquifers often rely on tracer test data acquired at a limited number of observation points. Such data do not, by themselves, allow detailed assessments of the spreading of the injected tracer plume. To better understand the transport behavior in a granitic aquifer, we combine tracer test data with single-hole ground-penetrating radar (GPR) reflection monitoring data. Five successful tracer tests were performed under various experimental conditions between two boreholes 6 m apart. For each experiment, saline tracer was injected into a previously identified packed-off transmissive fracture while repeatedly acquiring single-hole GPR reflection profiles together with electrical conductivity logs in the pumping borehole. By analyzing depth-migrated GPR difference images together with tracer breakthrough curves and associated simplified flow and transport modeling, we estimate (1) the number, the connectivity, and the geometry of fractures that contribute to tracer transport, (2) the velocity and the mass of tracer that was carried along each flow path, and (3) the effective transport parameters of the identified flow paths. We find a qualitative agreement when comparing the time evolution of GPR reflectivity strengths at strategic locations in the formation with those arising from simulated transport. The discrepancies are on the same order as those between observed and simulated breakthrough curves at the outflow locations. The rather subtle and repeatable GPR signals provide useful and complementary information to tracer test data acquired at the outflow locations and may help us to characterize transport phenomena in fractured rock aquifers.
Distributed soil moisture from cro…
Updated:
January 6, 2017
Geophysical methods offer several key advantages over conventional subsurface measurement approaches, yet their use for hydrologic interpretation is often problematic. Here, we introduce the theory and concepts of a novel Bayesian approach for high-resolution soil moisture estimation using traveltime observations from crosshole Ground Penetrating Radar (GPR) experiments. The recently developed Multi-try DiffeRential Evolution Adaptive Metropolis algorithm with sampling from past states, MT-DREAM(ZS), is used to infer, as closely and consistently as possible, the posterior distribution of spatially distributed vadose zone soil moisture and/or porosity under saturated conditions. Two contrasting model parameterization schemes are considered, one involving a classical uniform grid discretization and the other based on a discrete cosine transform (DCT). We illustrate our approach using two case studies involving geophysical data from a synthetic water tracer infiltration study and a real-world field study under saturated conditions. Our results demonstrate that the DCT parameterization yields the most accurate estimates of distributed soil moisture over a large range of spatial resolutions, along with superior MCMC convergence rates. In addition, DCT is well suited to investigating and quantifying the effects of model truncation errors on the MT-DREAM(ZS) inversion results. For the field example, lateral anisotropy needs to be enforced to derive reliable soil moisture variability. Our results also demonstrate that the posterior soil moisture uncertainty derived with the proposed Bayesian procedure is significantly larger than its counterpart estimated from classical smoothness-constrained deterministic inversions.
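A minimal sketch of the DCT model reduction underlying the inversion: only a small block of low-frequency DCT coefficients is treated as unknown, and the full-resolution soil moisture field is recovered by an inverse DCT. The grid size, number of retained coefficients, and moisture bounds below are assumptions made for illustration only.

```python
import numpy as np
from scipy.fft import idctn

def dct_to_field(coeffs, shape=(40, 60), n_keep=(6, 8),
                 theta_min=0.05, theta_max=0.45):
    """Map a truncated set of DCT coefficients to a gridded soil moisture field.

    Only the n_keep low-frequency coefficients act as unknowns in the MCMC
    inversion; all higher-order coefficients are fixed to zero, which is the
    model-reduction idea behind the DCT parameterization."""
    full = np.zeros(shape)
    full[:n_keep[0], :n_keep[1]] = np.asarray(coeffs).reshape(n_keep)
    field = idctn(full, norm="ortho")
    # Clip to physically plausible volumetric water contents (illustrative bounds).
    return np.clip(field, theta_min, theta_max)

# 48 coefficients parameterize a 40 x 60 grid (2400 cells): a large reduction in
# dimensionality, which is what keeps the MT-DREAM(ZS) sampling tractable.
rng = np.random.default_rng(1)
coeffs = rng.normal(0.0, 0.02, size=48)
coeffs[0] = 0.25 * np.sqrt(40 * 60)   # DC term sets the mean water content (~0.25)
theta = dct_to_field(coeffs)
```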
3D density structure and geologica…
Updated:
January 6, 2017
We present the first density model of Stromboli volcano (Aeolian Islands, Italy) obtained by simultaneously inverting land-based (543) and sea-surface (327) relative gravity data. Modern positioning technology, a 1 × 1 m digital elevation model, and a 15 × 15 m bathymetric model made it possible to obtain a detailed 3-D density model through an iteratively reweighted smoothness-constrained least-squares inversion that explained the land-based gravity data to 0.09 mGal and the sea-surface data to 5 mGal. Our inverse formulation avoids introducing any assumptions about density magnitudes. At 125 m depth below the land surface, the inferred mean density of the island is 2380 kg m⁻³, with corresponding 2.5 and 97.5 percentiles of 2200 and 2530 kg m⁻³. This density range covers the rock densities of new and previously published samples of the Paleostromboli I, Vancori, Neostromboli and San Bartolo lava flows. High-density anomalies in the central and southern parts of the island can be related to two main degassing faults crossing the island (N41 and N64) that are interpreted as preferential regions of dyke intrusions. In addition, two low-density anomalies are found in the northeastern part and in the summit area of the island. These anomalies appear to be geographically related to past paroxysmal explosive phreato-magmatic events that have played important roles in the evolution of Stromboli Island by forming the Scari caldera and the Neostromboli crater, respectively.
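The following is a generic sketch of an iteratively reweighted smoothness-constrained least-squares inversion for a linear problem d ≈ Gm. The actual Stromboli inversion additionally involves 3-D gravity sensitivities, separate weighting of land and sea-surface data, and topography/bathymetry-aware meshing, none of which are reproduced here.

```python
import numpy as np

def irls_smooth_inversion(G, d, L, lam=1.0, n_iter=10, eps=1e-4):
    """Iteratively reweighted smoothness-constrained least squares.

    Solves d ≈ G m with a reweighted l2 penalty on the model roughness L m.
    G: (n_data, n_cells) sensitivity matrix, d: data vector, L: discrete
    gradient (roughness) operator. Illustrative only."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        r = L @ m
        # Reweighting drives the solution toward piecewise-smooth models.
        W = np.diag(1.0 / np.sqrt(r**2 + eps**2))
        A = G.T @ G + lam * L.T @ W @ L
        m = np.linalg.solve(A, G.T @ d)
    return m

# Tiny synthetic example: recover a blocky 1-D model from noisy linear data.
rng = np.random.default_rng(2)
n_cells, n_data = 50, 30
G = rng.normal(size=(n_data, n_cells))
L = np.eye(n_cells, k=1)[:-1] - np.eye(n_cells)[:-1]   # first-difference operator
m_true = np.zeros(n_cells); m_true[20:35] = 1.0         # blocky anomaly
d = G @ m_true + rng.normal(scale=0.01, size=n_data)
m_est = irls_smooth_inversion(G, d, L, lam=0.1)
```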
Commentary on 'Modified MMF (Morga…
Updated:
January 16, 2017
The Morgan-Morgan-Finney (MMF) model is a widely used semi-physically based soil erosion model that has been tested and validated across various land use types and climatic regions. The latest version of the model, the modified MMF (MMMF) model, improved its conceptual physical representations through several modifications of the original model. However, the MMMF model has three problematic components that need to be corrected: 1) the effective rainfall equation, 2) the interflow equation, and 3) the improperly normalized C-factor of the transport capacity equation. In this commentary, we identify and correct these problematic components of the MMMF model, which should result in more accurate estimates of runoff and soil erosion rates.
YOLO9000: Better, Faster, Stronger
Updated:
December 25, 2016
We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes; it predicts detections for more than 9000 different object categories. And it still runs in real-time.
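A toy sketch of the joint-training idea (not the YOLOv2/WordTree implementation): a shared backbone feeds a detection head and a classification head, and batches that carry only class labels contribute only to the classification term of the loss. The architecture, tensor shapes, and losses below are placeholders.

```python
import torch
import torch.nn as nn

# Shared backbone with two heads; sizes are toy values, not YOLOv2.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
cls_head = nn.Linear(16, 9000)   # fine-grained classification output
det_head = nn.Linear(16, 5)      # stand-in for box/objectness predictions
params = (list(backbone.parameters()) + list(cls_head.parameters())
          + list(det_head.parameters()))
opt = torch.optim.SGD(params, lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

for step in range(4):
    images = torch.randn(2, 3, 32, 32)            # dummy batch
    feats = backbone(images)
    # Every image has a class label, so the classification loss is always applied.
    loss = ce(cls_head(feats), torch.randint(0, 9000, (2,)))
    if step % 2 == 0:                             # pretend even batches have boxes
        # Detection-style loss only for images with box annotations.
        loss = loss + mse(det_head(feats), torch.randn(2, 5))
    opt.zero_grad(); loss.backward(); opt.step()
```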
Isotopic profiles imply strong con…
Updated:
December 6, 2016
The influence of deep convection on water vapor in the Tropical Tropopause Layer (TTL), the region just below the high ($\sim$18 km), cold tropical tropopause, remains an outstanding question in atmospheric science. Moisture transport to this region is important for climate projections because it drives the formation of local cirrus (ice) clouds, which have a disproportionate impact on the Earth's radiative balance. Deep cumulus towers carrying large volumes of ice are known to reach the TTL, but their importance to the water budget has been debated for several decades. We show here that profiles of the isotopic composition of water vapor can provide a quantitative estimate of the convective contribution to TTL moistening. Isotopic measurements from the ACE satellite instrument, in conjunction with ice loads inferred from CALIOP satellite measurements and simple mass-balance modeling, suggest that convection is the dominant source of water vapor in the TTL up to near-tropopause altitudes. The relatively large ice loads inferred from CALIOP satellite measurements can be produced only with significant water sources, and isotopic profiles imply that these sources are predominantly convective ice. Sublimating ice from deep convection appears to increase TTL cirrus by a factor of several over that expected if cirrus production were driven only by large-scale uplift; sensitivity analysis implies that these conclusions are robust for most physically reasonable assumptions. Changes in tropical deep convection in future warmer conditions may thus provide an important climate feedback.
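A minimal two-component mass balance illustrates why isotopic profiles constrain the convective contribution: vapor lofted by slow large-scale ascent is strongly depleted in HDO, whereas sublimated convective ice is much less depleted, so the mixed deltaD rises with the convective fraction of the water budget. The delta values below are illustrative assumptions, not the ACE retrievals.

```python
def mixed_deltaD(q_background, dD_background, q_convective, dD_convective):
    """Mass-weighted deltaD (permil) of TTL vapor after adding sublimated ice."""
    return (q_background * dD_background + q_convective * dD_convective) / (
        q_background + q_convective)

# Illustrative end members: strongly depleted background vapor vs. convective ice.
for f_conv in (0.0, 0.3, 0.6, 0.9):
    dD = mixed_deltaD(q_background=1.0 - f_conv, dD_background=-650.0,
                      q_convective=f_conv, dD_convective=-200.0)
    print(f"convective fraction {f_conv:.1f} -> deltaD ~ {dD:6.1f} permil")
```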
Classification With an Edge: Impro…
Updated:
December 21, 2017
We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large windows (receptive fields). However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SegNet encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs. Our high-end ensemble achieves > 90% overall accuracy on the ISPRS Vaihingen benchmark.
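A toy sketch of the boundary-aware design: an encoder-decoder with an auxiliary class-boundary head trained jointly with the segmentation head. This is not the SegNet/FCN ensemble evaluated in the paper; layer sizes, the class count, and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class BoundaryAwareSegNet(nn.Module):
    """Toy encoder-decoder with an auxiliary class-boundary head."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                     nn.MaxPool2d(2))
        self.decoder = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear",
                                                 align_corners=False),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, n_classes, 1)   # per-pixel class scores
        self.edge_head = nn.Conv2d(32, 1, 1)          # per-pixel boundary score

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        return self.seg_head(feats), self.edge_head(feats)

model = BoundaryAwareSegNet()
x = torch.randn(2, 3, 64, 64)                          # dummy image tiles
seg_logits, edge_logits = model(x)
seg_loss = nn.CrossEntropyLoss()(seg_logits, torch.randint(0, 6, (2, 64, 64)))
edge_loss = nn.BCEWithLogitsLoss()(edge_logits, torch.rand(2, 1, 64, 64))
loss = seg_loss + 0.5 * edge_loss   # joint objective keeps class boundaries sharp
```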
Interaction prediction between gro…
Updated:
November 28, 2016
Groundwater and rock are intensively exploited around the world. When a quarry is deepened, the water table of the exploited geological formation may be reached, and a dewatering system must then be installed so that quarrying can continue, possibly impacting nearby water catchments. To recommend an adequate feasibility study before deepening a quarry, we propose two interaction indices between extractive activity and groundwater resources based on hazard and vulnerability parameters used in the assessment of natural hazards. The levels of each index (low, medium, high, very high) correspond to the potential impact of the quarry on the regional hydrogeology. The first index is based on a discrete choice modelling methodology, while the second relies on an artificial neural network. We show that these two complementary approaches (the former probabilistic, the latter fully deterministic) accurately predict the level of interaction. Their use is illustrated by applying them to the Boverie quarry and the Tridaine gallery in Belgium. The indices determine the current interaction level as well as the level resulting from future quarry extensions, and the results highlight the very high interaction level of the quarry with the gallery.
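As a sketch of how the two kinds of indices could be realized, the snippet below fits a multinomial (softmax) logit, the usual discrete-choice analogue, and a small feed-forward neural network to synthetic hazard/vulnerability descriptors with four ordered interaction levels. The features, data, and model settings are hypothetical, not those of the Boverie/Tridaine study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic training set: hazard/vulnerability descriptors of quarry-aquifer
# configurations (e.g. depth of the quarry floor below the water table, distance
# to the catchment, aquifer transmissivity), with interaction levels 0..3
# standing for low, medium, high, and very high.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
score = X @ np.array([1.5, -1.0, 0.8]) + rng.normal(scale=0.5, size=200)
y = np.digitize(score, bins=[-1.0, 0.0, 1.0])   # four ordered classes

# Multinomial (softmax) logit, the probabilistic discrete-choice-style index...
dcm = LogisticRegression(max_iter=1000).fit(X, y)
# ...and a small feed-forward neural network (deterministic once trained).
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

new_case = rng.normal(size=(1, 3))
print("DCM class probabilities:", dcm.predict_proba(new_case).round(2))
print("ANN predicted interaction level:", ann.predict(new_case))
```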