Effective Use of Transformer Netwo…
Updated:
September 5, 2019
Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state.
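As an illustration of the input-restructuring idea, here is a minimal Python sketch of how a process text could be reshaped so that the model is conditioned on one entity at a time; the marker format and function name are assumptions for illustration, not the paper's exact scheme.

```python
# Hypothetical sketch: restructure a recipe so a BERT-style model is
# conditioned on one entity at a time (format is an assumption).

def entity_conditioned_input(entity, steps, upto):
    """Build a '[CLS] entity [SEP] process-so-far [SEP]' input that asks
    about `entity` after step `upto` (0-indexed)."""
    context = " ".join(steps[: upto + 1])
    return f"[CLS] {entity} [SEP] {context} [SEP]"

steps = [
    "Preheat the oven to 180 C.",
    "Mix the flour and butter.",
    "Fold in the sugar and bake for 30 minutes.",
]
# One forward pass per (entity, step) pair instead of one pass per step:
print(entity_conditioned_input("butter", steps, upto=1))
```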
Feedbackward Decoding for Semantic…
Updated:
August 22, 2019
We propose a novel approach for semantic segmentation that uses an encoder in the reverse direction to decode. Many semantic segmentation networks adopt a feedforward encoder-decoder architecture. Typically, an input is first downsampled by the encoder to extract high-level semantic features and continues to be fed forward through the decoder module to recover low-level spatial clues. Our method works in the opposite direction, letting information flow backward from the last layer of the encoder towards the first. The encoder performs encoding in the forward pass, and the same network performs decoding in the backward pass; the encoder itself is thus also the decoder. Compared to conventional encoder-decoder architectures, ours does not require additional layers for decoding and further reuses the encoder weights, thereby reducing the total number of parameters required for processing. We show that by using only the 13 convolutional layers from VGG-16 plus one tiny classification layer, our model significantly outperforms other frequently cited models that are also adapted from VGG-16. On the Cityscapes semantic segmentation benchmark, our model uses 50.0% fewer parameters than SegNet and achieves an 18.1% higher "IoU class" score; it uses 28.3% fewer parameters than DeepLab LargeFOV with a 3.9% higher "IoU class" score; and it uses 89.1% fewer parameters than FCN-8s with a 3.1% higher "IoU class" score. Our code will be publicly available on GitHub later.
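To make the weight-reuse idea concrete, the following is a hedged PyTorch sketch of an encoder whose own convolution weights are reapplied transposed in a backward pass; a two-layer stack stands in for the paper's 13 VGG-16 layers.

```python
# Sketch of the "encoder run backwards as decoder" idea (assumption: a
# plain two-layer conv stack standing in for the full VGG-16 encoder).
import torch
import torch.nn.functional as F

class FeedbackwardNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(64, 128, 3, padding=1)

    def forward(self, x):
        # Forward pass: ordinary encoding.
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        # Backward pass: the SAME weights, applied transposed, decode the
        # features back toward input space -- no extra decoder layers.
        d = F.relu(F.conv_transpose2d(h, self.conv2.weight, padding=1))
        d = F.conv_transpose2d(d, self.conv1.weight, padding=1)
        return d

net = FeedbackwardNet()
print(net(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 32, 32])
```

Because `conv_transpose2d` accepts the forward kernels directly, the decoding direction introduces no new parameters.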
Denoising based Sequence-to-Sequen…
Updated:
August 22, 2019
This paper presents a new sequence-to-sequence (seq2seq) pre-training method PoDA (Pre-training of Denoising Autoencoders), which learns representations suitable for text generation tasks. Unlike encoder-only (e.g., BERT) or decoder-only (e.g., OpenAI GPT) pre-training approaches, PoDA jointly pre-trains both the encoder and decoder by denoising the noise-corrupted text, and it also has the advantage of keeping the network architecture unchanged in the subsequent fine-tuning stage. Meanwhile, we design a hybrid model of Transformer and pointer-generator networks as the backbone architecture for PoDA. We conduct experiments on two text generation tasks: abstractive summarization, and grammatical error correction. Results on four datasets show that PoDA can improve model performance over strong baselines without using any task-specific techniques and significantly speed up convergence.
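A minimal sketch of the kind of noise-corruption a denoising seq2seq objective relies on is shown below; the model would be trained to map `corrupt(tokens)` back to `tokens`. The specific noise mix and rates are assumptions, not PoDA's published settings.

```python
# Hedged sketch of denoising-style corruption for seq2seq pre-training.
import random

def corrupt(tokens, p_del=0.1, p_mask=0.1, p_swap=0.1, mask="<MASK>"):
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_del:
            continue                      # token deletion
        elif r < p_del + p_mask:
            out.append(mask)              # token masking
        else:
            out.append(tok)
    for i in range(len(out) - 1):         # light local shuffling via swaps
        if random.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

sentence = "the quick brown fox jumps over the lazy dog".split()
print(corrupt(sentence))   # noisy source; the original sentence is the target
```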
SG-Net: Syntax-Guided Machine Read…
Updated:
November 20, 2019
For machine reading comprehension, the capacity to effectively model the linguistic knowledge in detail-riddled and lengthy passages, and to get rid of the noise, is essential for improving performance. Traditional attentive models attend to all words without explicit constraint, which results in inaccurate concentration on some dispensable words. In this work, we propose using syntax to guide text modeling by incorporating explicit syntactic constraints into the attention mechanism for better linguistically motivated word representations. In detail, for the self-attention network (SAN)-based Transformer encoder, we introduce a syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention. The syntax-guided network (SG-Net) is then composed of this extra SDOI-SAN and the SAN from the original Transformer encoder through a dual contextual architecture for better linguistically inspired representations. To verify its effectiveness, the proposed SG-Net is applied to the typical pre-trained language model BERT, which is itself based on a Transformer encoder. Extensive experiments on popular benchmarks including SQuAD 2.0 and RACE show that the proposed SG-Net design helps achieve substantial performance improvements over strong baselines.
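The following sketch illustrates one plausible reading of a syntax-guided attention constraint: each token attends only to itself and its dependency ancestors. The paper's exact SDOI definition may differ, and the toy head indices are invented for illustration.

```python
# Sketch of a syntax-guided attention mask: token i may attend to itself
# and its ancestors in the dependency tree (one reading of SDOI; the
# paper's exact definition may differ).
import numpy as np

def sdoi_mask(heads):
    """heads[i] is the dependency head of token i, -1 for the root."""
    n = len(heads)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:           # walk from the token up to the root
            mask[i, j] = True
            j = heads[j]
    return mask

# "the cat sat" with heads: the->cat, cat->sat, sat = root
print(sdoi_mask([1, 2, -1]).astype(int))
# The mask zeroes (or -inf-masks) self-attention scores so that
# dispensable words receive no probability mass.
```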
Apache Spark Accelerated Deep Lear…
Updated:
August 8, 2019
The sheer volumes of data generated from earth observation and remote sensing technologies continue to make a major impact, propelling key geospatial applications into an era that is both data- and compute-intensive. As a consequence, this rapid advancement poses new computational and data processing challenges. We implement a novel remote sensing data flow (RESFlow) for advanced machine learning and computing with massive amounts of remotely sensed imagery. The core contribution is partitioning massive amounts of data based on their spectral and semantic characteristics for distributed imagery analysis. RESFlow takes advantage of both a unified analytics engine for large-scale data processing and the availability of modern computing hardware to harness the acceleration of deep learning inference on expansive remote sensing imagery. The framework incorporates a strategy to optimize resource utilization across multiple executors assigned to a single worker. We showcase its deployment on computationally and data-intensive pixel-level labeling workloads. The pipeline invokes deep learning inference at three stages: during deep feature extraction, deep metric mapping, and deep semantic segmentation. The tasks impose compute-intensive and GPU resource-sharing challenges, motivating a parallelized pipeline for all execution steps. By taking advantage of Apache Spark and the Nvidia DGX1 and DGX2 computing platforms, we demonstrate unprecedented compute speed-ups for deep learning inference on pixel labeling workloads; processing 21,028 Terabytes of imagery data and delivering output maps at an area rate of 5.245 sq.km/sec, amounting to 453,168 sq.km/day - reducing a 28-day workload to 21 hours.
A Mechanistic Pore-Scale Analysis …
Updated:
August 7, 2019
The enhanced oil recovery technique of low-salinity (LS) water flooding is a topic of substantial interest in the petroleum industry. Studies have shown that LS brine injection can increase oil production relative to conventional high-salinity (HS) brine injection, but contradictory results have also been reported and an understanding of the underlying mechanisms remains elusive. We have recently developed a steady-state pore network model to simulate oil recovery by LS brine injection in uniformly wetted pore structures (Watson et al., Transp. Porous Med. 118, 201-223, 2017). We extend this approach here to investigate the low-salinity effect (LSE) in heterogeneously wetted media. We couple a model of capillary force-driven fluid displacement to a novel tracer algorithm and track the salinity front in the pore network as oil and HS brine are displaced by injected LS brine. The wettability of the pore structure is modified in regions where water salinity falls below a critical threshold, and simulations show that this can have significant consequences for oil recovery. For networks that contain spanning clusters of both water-wet and oil-wet (OW) pores prior to flooding, our results demonstrate that the OW pores contain the only viable source of incremental oil recovery by LS brine injection. Moreover, we show that a LS-induced increase in microscopic sweep efficiency in the OW pore fraction is a necessary, but not sufficient, condition to guarantee additional oil production. Simulations suggest that the fraction of OW pores in the network, the average network connectivity and the initial HS brine saturation are key factors that can determine the extent of any improvement in oil recovery in heterogeneously wetted networks following LS brine injection. This study highlights that the mechanisms of the LSE can be markedly different in uniformly wetted and non-uniformly wetted porous media.
ScarfNet: Multi-scale Features wit…
Updated:
January 18, 2020
Convolutional neural networks (CNNs) have led to significant progress in object detection. In order to detect objects of various sizes, object detectors often exploit the hierarchy of multi-scale feature maps called a feature pyramid, which is readily obtained from the CNN architecture. However, the performance of these object detectors is limited because the bottom-level feature maps, which pass through fewer convolutional layers, lack the semantic information needed to capture the characteristics of small objects. To address this problem, various methods have been proposed to increase the depth of the bottom-level features used for object detection. While most approaches are based on generating additional features through a top-down pathway with lateral connections, our approach directly fuses multi-scale feature maps using a bidirectional long short-term memory (biLSTM) network in an effort to generate deeply fused semantics. The resulting semantic information is then redistributed to the individual pyramidal features at each scale through a channel-wise attention model. We integrate our semantic combining and attentive redistribution feature network (ScarfNet) with baseline object detectors, i.e., Faster R-CNN, the single-shot multibox detector (SSD), and RetinaNet. Our experiments show that our method outperforms existing feature pyramid methods as well as the baseline detectors and achieves state-of-the-art performance on the PASCAL VOC and COCO detection benchmarks.
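Below is a heavily hedged PyTorch sketch of the general pattern the abstract describes: pyramid levels are resized to a common resolution, fused along the scale axis by a biLSTM, and redistributed through channel-wise attention gates. The layer sizes and the single-resolution simplification are assumptions, not ScarfNet's actual design.

```python
# Illustrative scale-fusion sketch (NOT ScarfNet's published architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleFuse(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.lstm = nn.LSTM(c, c // 2, bidirectional=True, batch_first=True)
        self.att = nn.Linear(c, c)        # channel-wise attention per level

    def forward(self, feats):
        # feats: list of (B, C, Hi, Wi); resize all to the finest resolution
        # (a simplification -- the real model keeps per-level resolutions).
        H, W = feats[0].shape[-2:]
        x = torch.stack([F.interpolate(f, (H, W)) for f in feats], dim=1)
        B, L, C = x.shape[:3]
        seq = x.permute(0, 3, 4, 1, 2).reshape(-1, L, C)  # scale as sequence
        fused, _ = self.lstm(seq)                         # deeply fused semantics
        fused = fused.reshape(B, H, W, L, C).permute(0, 3, 4, 1, 2)
        # Redistribute via per-level channel gates on pooled fused features.
        gate = torch.sigmoid(self.att(fused.mean(dim=(-2, -1))))  # (B, L, C)
        return [x[:, i] + gate[:, i, :, None, None] * fused[:, i]
                for i in range(L)]

fuse = ScaleFuse(64)
pyr = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16),
       torch.randn(1, 64, 8, 8)]
print([f.shape for f in fuse(pyr)])   # all levels fused and re-gated
```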
Grape detection, segmentation and …
Updated:
February 7, 2020
Agricultural applications such as yield prediction, precision agriculture, and automated harvesting need systems able to infer the crop state from low-cost sensing devices. Proximal sensing using affordable cameras combined with computer vision has emerged as a promising alternative, strengthened by the advent of convolutional neural networks (CNNs) as an alternative for challenging pattern recognition problems in natural images. Considering fruit-growing monitoring and automation, a fundamental problem is the detection, segmentation, and counting of individual fruits in orchards. Here we show that for wine grapes, a crop presenting large variability in shape, color, size, and compactness, grape clusters can be successfully detected, segmented, and tracked using state-of-the-art CNNs. In a test set containing 408 grape clusters from images taken in a trellis-system-based vineyard, we reached an F1-score of up to 0.91 for instance segmentation, a fine separation of each cluster from other structures in the image that allows a more accurate assessment of fruit size and shape. We have also shown how clusters can be identified and tracked along video sequences recording orchard rows. We also present a public dataset containing grape clusters properly annotated in 300 images and a novel annotation methodology for the segmentation of complex objects in natural images. The presented pipeline for annotation, training, evaluation, and tracking of agricultural patterns in images can be replicated for different crops and production systems, and can be employed in the development of sensing components for several agricultural and environmental applications.
A Regularized Convolutional Neural…
Updated:
June 28, 2019
Convolutional neural networks (CNNs) show outstanding performance in many image processing problems, such as image recognition, object detection, and image segmentation. Semantic segmentation is a very challenging task that requires recognizing and understanding what is in an image at the pixel level. Though the state of the art has been greatly improved by CNNs, there are no explicit connections between the predictions of neighbouring pixels; that is, the spatial regularity of the segmented objects remains a problem for CNNs. In this paper, we propose a method to add spatial regularization to the segmented objects. In our method, spatial regularization such as total variation (TV) can be easily integrated into the CNN. It can help the CNN find a better local optimum and make the segmentation results more robust to noise. We apply our proposed method to U-Net and SegNet, which are well-established CNNs for image segmentation, and test them on the WBC, CamVid, and SUN-RGBD datasets, respectively. The results show that the regularized networks not only provide better segmentation results, thanks to the regularization effect, but also gain a degree of robustness to noise.
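As a sketch of the general idea, the snippet below adds a total-variation penalty on the softmax output to an ordinary segmentation loss; the paper integrates the regularization more tightly into the network, so treat this as a loose approximation with an assumed weight.

```python
# Minimal sketch: TV regularizer on softmax probabilities (B, C, H, W).
import torch

def tv_loss(probs: torch.Tensor) -> torch.Tensor:
    # Penalize differences between neighbouring pixels' predictions.
    dh = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean()
    dw = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
    return dh + dw

probs = torch.softmax(torch.randn(2, 5, 64, 64), dim=1)
ce = torch.tensor(0.0)          # stand-in for the usual cross-entropy term
lam = 0.1                       # regularization weight (assumed value)
loss = ce + lam * tv_loss(probs)
print(loss.item())
```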
A generalized theory for full micr…
Updated:
July 10, 2019
Advances in the field of seismic interferometry have provided a basic theoretical interpretation of the full spectrum of the microtremor horizontal-to-vertical spectral ratio [H/V(f)]. The interpretation has been applied to ambient seismic noise data recorded both at the surface and at depth. The new algorithm, based on the diffuse wavefield assumption, has been used in inversion schemes to estimate seismic wave velocity profiles, which are useful input information for engineering and exploration seismology, both for earthquake hazard estimation and for characterizing surficial sediments. However, until now, the developed algorithms have only been suitable for on-land environments, with no offshore consideration. Here, the microtremor H/V(z, f) modeling is extended for application to marine sedimentary environments for a 1D layered medium. The layer propagator matrix formulation is used for the computation of the required Green's functions. Therefore, in the presence of a water layer on top, the propagator matrix for the uppermost layer is defined to account for the properties of the water column. As an application example we analyze eight simple canonical layered earth models. Frequencies ranging from 0.2 to 50 Hz are considered, as they cover a broad wavelength interval and in practice aid the investigation of subsurface structures in the depth range from a few meters to a few hundreds of meters. Results show a marginal variation of at most 8 percent in the fundamental frequency when a water layer is present. The water layer leads to variations in H/V peak amplitude of up to 50 percent atop the solid layers.
Effects of Short Scale Roughness a…
Updated:
June 19, 2019
Simultaneous measurements of sea spray aerosol (SSA), wind, waves, underwater acoustic noise, and microwave brightness temperature were obtained in the open ocean. These data are analyzed to clarify the ocean surface processes important to SSA production. Parameters are formulated to represent surface processes with characteristic length scales over a broad range, from tens of meters to a few centimeters. The results show that the correlation coefficients between SSA properties (number, volume, and flux) and surface process parameters improve toward the shortest length scale. This suggests that whereas surface wave breaking is a necessary initial and boundary condition, the final state of the atmospheric SSA properties is controlled primarily by turbulent processes characterized by the ocean surface roughness. The investigation also reveals distinct differences in the SSA properties between rising and falling winds, with a higher efficiency of breaking production in low or falling winds. Previous studies show that the length scale of breaking waves is shorter in mixed seas than in wind seas. Combining these observations, it appears that larger air cavities are entrained in rising winds (with wind seas more likely); the larger air cavities escape before they can be fully broken down into the small bubbles needed for subsequent SSA production. In contrast, the shorter breakers in low or falling winds (with mixed seas more likely) trap smaller air cavities that stay underwater longer, allowing more efficient bubble breakup by turbulence.
SEN12MS -- A Curated Dataset of Ge…
Updated:
June 18, 2019
The availability of curated large-scale training data is a crucial factor for the development of well-generalizing deep learning methods for the extraction of geoinformation from multi-sensor remote sensing imagery. While quite a few datasets have already been published by the community, most of them suffer from rather strong limitations, e.g. regarding spatial coverage, diversity, or simply the number of available samples. Exploiting the freely available data acquired by the Sentinel satellites of the Copernicus program implemented by the European Space Agency, as well as the cloud computing facilities of Google Earth Engine, we provide a dataset consisting of 180,662 triplets of dual-pol synthetic aperture radar (SAR) image patches, multi-spectral Sentinel-2 image patches, and MODIS land cover maps. With all patches fully georeferenced at a 10 m ground sampling distance and covering all inhabited continents during all meteorological seasons, we expect the dataset to support the community in developing sophisticated deep learning-based approaches for common tasks such as scene classification or semantic segmentation for land cover mapping.
A Joint Planning and Learning Fram…
Updated:
December 24, 2019
Conventional reinforcement learning (RL) allows an agent to learn policies via environmental rewards only, with a long and slow learning curve, especially at the beginning stage. In contrast, human learning is usually much faster because prior and general knowledge and multiple information resources are utilized. In this paper, we propose a Planner-Actor-Critic architecture for huMAN-centered planning and learning (PACMAN), where an agent uses prior, high-level, deterministic symbolic knowledge to plan for goal-directed actions. PACMAN integrates the Actor-Critic algorithm of RL to fine-tune its behavior towards both environmental rewards and human feedback. To the best of our knowledge, this is the first unified framework where knowledge-based planning, RL, and human teaching jointly contribute to the policy learning of an agent. Our experiments demonstrate that PACMAN leads to a significant jump-start at the early stage of learning, converges rapidly and with small variance, and is robust to inconsistent, infrequent, and misleading feedback.
A comparison of remotely-sensed an…
Updated:
June 14, 2019
A quantitative estimate of observational uncertainty is an essential ingredient for correctly interpreting changes in climatic and environmental variables such as wildfires. In this work we compare four state-of-the-art satellite fire products with the gridded, ground-based EFFIS dataset for Mediterranean Europe and analyse their statistical differences. The data are compared for spatial and temporal similarities at different aggregations to identify a spatial scale at which most of the observations provide equivalent results. The results of the analysis indicate that the datasets show high temporal correlation with each other (0.5-0.6) when the data are aggregated at a resolution of at least 1.0° or at NUTS3 level. However, burned area estimates vary widely between datasets. Filtering out satellite fires located on urban and crop land cover classes greatly improves the agreement with EFFIS data. Finally, in spite of the differences found in the area estimates, the spatial pattern is similar for all the datasets, with spatial correlation increasing as the resolution decreases. The generally reasonable agreement between satellite products also builds confidence in using these datasets; in particular, the most recently developed dataset, FireCCI51, shows the best agreement with EFFIS overall. The main conclusions of the study are that users should carefully consider the limitations of the currently available satellite fire estimates, as their uncertainties cannot be neglected in the overall uncertainty estimate/cascade that should accompany global or regional change studies, and that removing fires on human-dominated land areas is key to analyzing forest fire estimates from satellite products.
Transpiration- and precipitation-i…
Updated:
June 14, 2019
Movement of soil moisture associated with tree root-water uptake is ecologically important, but technically challenging to measure. Here, the self-potential (SP) method, a passive electrical geophysical method, is used to characterize water flow in situ. Unlike tensiometers, which use a measurement of state (i.e. matric pressure) at two locations to infer fluid flow, the SP method directly measures signals generated by water movement. We collected SP measurements in a two-dimensional array at the base of a Douglas-fir tree (Pseudotsuga menziesii) in the H.J. Andrews Experimental Forest in western Oregon over five months to provide insight into the propagation of transpiration signals into the subsurface under variable soil moisture. During dry conditions, SP data appear to show downward unsaturated flow, while nearby tensiometer data appear to suggest upward flow during this period. After the trees enter dormancy in the fall, precipitation-induced vertical flow dominates in the SP and tensiometer data. Diel variations in SP data correspond to periods of tree transpiration. Changes in volumetric water content occurring from soil moisture movement during transpiration are not large enough to appear in volumetric water content data. Fluid flow and electrokinetic coupling (i.e. electrical potential distribution) were simulated using COMSOL Multiphysics to explore the system controls on the field data. The coupled model, which included a root-water uptake term, reproduced components of both the long-term and diel variations in the SP measurements, indicating that SP has the potential to provide spatially and temporally dense measurements of transpiration-induced changes in water flow. This manuscript presents the first SP measurements focusing on the movement of soil moisture in response to tree transpiration.
Stand-Alone Self-Attention in Visi…
Updated:
June 13, 2019
Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to a ResNet model produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a pure self-attention model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox.
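A compact sketch of what a stand-alone local self-attention layer can look like as a drop-in for a 3x3 spatial convolution is given below. The published model also uses relative position embeddings and multiple heads, omitted here; this is illustrative rather than the paper's implementation.

```python
# Single-head local self-attention over k x k neighbourhoods (sketch).
import torch
import torch.nn.functional as F

class LocalSelfAttention2d(torch.nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.k = k
        self.q = torch.nn.Conv2d(dim, dim, 1)        # 1x1 query projection
        self.kv = torch.nn.Conv2d(dim, 2 * dim, 1)   # 1x1 key/value projection

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)              # (B, HW, C)
        # Gather the k*k neighbourhood of every pixel for keys/values.
        kv = F.unfold(self.kv(x), self.k, padding=self.k // 2)
        kv = kv.reshape(B, 2 * C, self.k ** 2, H * W).permute(0, 3, 2, 1)
        k_, v = kv[..., :C], kv[..., C:]                      # (B, HW, k^2, C)
        attn = torch.softmax(
            torch.einsum("bnc,bnkc->bnk", q, k_) / C ** 0.5, dim=-1)
        out = torch.einsum("bnk,bnkc->bnc", attn, v)
        return out.transpose(1, 2).reshape(B, C, H, W)

layer = LocalSelfAttention2d(32)
print(layer(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```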
Tackling Climate Change with Machi…
Updated:
November 5, 2019
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.
Scene and Environment Monitoring U…
Updated:
June 6, 2019
Unmanned aerial vehicles (UAVs) are a promising technology for smart-farming-related applications. Aerial monitoring of agricultural farms with UAVs enables key decision-making pertaining to crop monitoring. Advancements in deep learning techniques have further enhanced the precision and reliability of aerial-imagery-based analysis. The capability to mount various kinds of sensors (RGB, spectral cameras) on UAVs allows remote crop analysis applications such as vegetation classification and segmentation, crop counting, yield monitoring and prediction, crop mapping, weed detection, disease and nutrient deficiency detection, and others. A significant number of studies in the literature explore UAVs for smart farming applications. In this paper, a review of studies applying deep learning to UAV imagery for smart farming is presented. Based on the application, we have classified these studies into five major groups: vegetation identification, classification and segmentation; crop counting and yield predictions; crop mapping; weed detection; and crop disease and nutrient deficiency detection. An in-depth critical analysis of each study is provided.
Kinetic energy of eddy-like featur…
Updated:
August 24, 2019
The mesoscale eddy field plays a key role in the mixing and transport of physical and biological properties and redistributes energy budgets in the ocean. Eddy kinetic energy is commonly defined as the kinetic energy of the time-varying component of the velocity field. However, this definition captures all processes that vary in time, including coherent mesoscale eddies, jets, waves, and large-scale motions. The focus of this paper is on the eddy kinetic energy contained in coherent mesoscale eddies. We present a new method to decompose eddy kinetic energy into oceanic processes. The proposed method uses a new eddy-identification algorithm (TrackEddy), which is based on the premise that the sea level signature of a coherent eddy can be approximated as a Gaussian feature. The eddy's Gaussian signature then allows for the calculation of the kinetic energy of the eddy field through the geostrophic approximation. TrackEddy has been validated using synthetic sea surface height data and then used to investigate trends in eddy kinetic energy in the Southern Ocean using satellite sea surface height anomalies (AVISO+). We detect an increasing trend in the eddy kinetic energy associated with mesoscale eddies in the Southern Ocean. This trend is correlated with an increase in coherent eddy amplitude and the strengthening of wind stress over the last two decades.
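The premise that a Gaussian sea surface height (SSH) signature plus geostrophy yields the eddy's kinetic energy can be sketched in a few lines of NumPy; the amplitude, radius, and grid below are illustrative assumptions.

```python
# Sketch: Gaussian SSH anomaly -> geostrophic velocities -> EKE.
import numpy as np

g, f = 9.81, -1e-4                       # gravity, Coriolis (S. Hemisphere)
L = 50e3                                 # assumed eddy e-folding radius [m]
x = y = np.linspace(-200e3, 200e3, 201)
X, Y = np.meshgrid(x, y)
eta = 0.2 * np.exp(-(X**2 + Y**2) / (2 * L**2))   # Gaussian SSH anomaly [m]

dedy, dedx = np.gradient(eta, y, x)      # axis 0 is y, axis 1 is x
u = -(g / f) * dedy                      # geostrophic velocity components
v = (g / f) * dedx
eke = 0.5 * (u**2 + v**2)
print(f"peak EKE: {eke.max():.4f} m^2/s^2")
```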
Light-Weight RetinaNet for Object …
Updated:
May 24, 2019
Object detection has made great progress, driven by the development of deep learning. Compared with the widely studied classification task, object detection generally needs one or two orders of magnitude more FLOPs (floating point operations) at inference time. To enable practical applications, it is essential to explore effective runtime-accuracy trade-off schemes. Recently, a growing number of studies have targeted object detection on resource-constrained devices, such as YOLOv1, YOLOv2, SSD, and MobileNetv2-SSDLite, whose accuracy on COCO test-dev is around 22-25% mAP (the mAP-20 tier). In contrast, very few studies discuss the computation-accuracy trade-off for mAP-30-tier detection networks. In this paper, we illustrate the insights of why RetinaNet gives an effective computation-accuracy trade-off for object detection and how to build a light-weight RetinaNet. We propose reducing FLOPs only in computationally intensive layers while keeping the other layers the same. Compared with the most common approach - input image scaling for a FLOPs-accuracy trade-off - the proposed solution shows a consistently better FLOPs-mAP trade-off line. Quantitatively, the proposed method results in a 0.1% mAP improvement at a 1.15x FLOPs reduction and a 0.3% mAP improvement at a 1.8x FLOPs reduction.
FORESAIL-1 cubesat mission to meas…
Updated:
May 23, 2019
Today, near-Earth space is facing a paradigm change as the number of new spacecraft is literally sky-rocketing. Increasing numbers of small satellites threaten the sustainable use of space, as without removal, space debris will eventually make certain critical orbits unusable. A central factor affecting small spacecraft health and leading to debris is the radiation environment, which is unpredictable due to an incomplete understanding of the near-Earth radiation environment itself and its variability driven by the solar wind and outer magnetosphere. This paper presents the FORESAIL-1 nanosatellite mission, which has two scientific objectives and one technological objective. The first scientific objective is to measure the energy and flux of energetic particle loss to the atmosphere with representative energy and pitch angle resolution over a wide range of magnetic local times. To pave the way for novel model-in situ data comparisons, we also show preliminary results on precipitating electron fluxes obtained with the new global hybrid-Vlasov simulation Vlasiator. The second scientific objective of the FORESAIL-1 mission is to measure energetic neutral atoms (ENAs) of solar origin. The solar ENA flux has the potential to contribute importantly to the knowledge of solar eruption energy budget estimations. The technological objective is to demonstrate a satellite de-orbiting technology and, for the first time, to make an orbit manoeuvre with a propellantless nanosatellite. FORESAIL-1 will demonstrate the potential for nanosatellites to make important scientific contributions as well as promote the sustainable utilisation of space by using a cost-efficient de-orbiting technology.
Artificial Intelligence Based Clou…
Updated:
May 21, 2019
Here we introduce the artificial-intelligence-based cloud distributor (AI-CD) approach to generate two-dimensional (2D) marine low cloud reflectance fields. AI-CD uses a conditional generative adversarial network (cGAN) framework to model the distribution of 2D cloud reflectance in nature as observed by the Moderate Resolution Imaging Spectroradiometer (MODIS). Specifically, AI-CD models the conditional distribution of cloud reflectance fields given a set of large-scale environmental conditions, such as instantaneous sea surface temperature, estimated inversion strength, surface wind speed, relative humidity, and large-scale subsidence rate, together with random noise. We show that AI-CD can not only generate realistic cloudy scenes but also capture the known physical dependence of cloud properties on large-scale variables. AI-CD is stochastic in nature because the generated cloud fields are influenced by random noise. Therefore, given a fixed set of large-scale variables, an ensemble of cloud reflectance fields can be generated using AI-CD. We suggest that the AI-CD approach can be used as a data-driven framework for stochastic cloud parameterization, because it can realistically model sub-grid cloud distributions and their sensitivity to meteorological variables.
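A hedged sketch of the generator half of such a cGAN is shown below: a vector of large-scale predictors concatenated with noise is upsampled into a 2D reflectance field, and repeated draws of the noise give the ensemble behaviour described. All layer sizes, the output resolution, and the five predictor names are assumptions.

```python
# Illustrative cGAN generator in the AI-CD spirit (layer sizes assumed).
import torch
import torch.nn as nn

class CloudGenerator(nn.Module):
    def __init__(self, n_env=5, n_noise=64):
        super().__init__()
        self.fc = nn.Linear(n_env + n_noise, 128 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, env, z):
        # Condition on large-scale variables, randomize with noise z.
        h = self.fc(torch.cat([env, z], dim=1)).reshape(-1, 128, 8, 8)
        return self.up(h)                # (B, 1, 64, 64) reflectance field

gen = CloudGenerator()
env = torch.randn(4, 5)                  # e.g. SST, EIS, wind, RH, subsidence
ensemble = [gen(env, torch.randn(4, 64)) for _ in range(3)]  # stochastic draws
print(ensemble[0].shape)
```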
Wind, wave and current interaction…
Updated:
May 21, 2019
The highly heterogeneous and biologically active continental shelf seas are important components of the oceanic carbon sink. Carbon-rich water from shelf seas is exported at depth to the open ocean, a process known as the continental shelf pump, with open-ocean surface water moving (being transported) onto the shelf to drive the export at depth. Existing methods to study shelf-wide exchange focus on the wind or geostrophic currents, often ignoring their combined effect, spatial heterogeneity, or any other ageostrophic components. Here we investigate the influence that wind, wave, and current interactions can have on surface transport and carbon export across continental shelves. Using a 21-year global re-analysis dataset, we confirm that geostrophic and wind-driven Ekman processes are important for the transport of water onto shelf seas, but the dominance of each is location- and season-dependent. A global wave model re-analysis shows that one type of ageostrophic flow, Stokes drift due to waves, can also be significant. A regional case study using two submesoscale model simulations identifies that up to 100% of the cross-shelf surface flow in European seas can be due to ageostrophic components. Using these results and grouping shelf seas based on their observed carbon accumulation rates shows that differences in rates are consistent with imbalances between the processes driving atmosphere-ocean exchange at the surface and those driving carbon export at depth. Expected future changes in wind and wave climate therefore support the need to monitor cross-shelf transport and the size of the continental shelf-sea carbon pump. The results presented show that the Sea Surface Kinematics Multiscale monitoring satellite mission (SKIM) will be capable of providing measurements of the total cross-shelf current, which are now needed to enable routine monitoring of the global continental shelf-sea carbon pump.
Accelerated Discovery of Sustainab…
Updated:
May 20, 2019
Concrete is the most widely used engineered material in the world, with more than 10 billion tons produced annually. Unfortunately, with that scale comes a significant burden in terms of energy, water, and release of greenhouse gases and other pollutants. As such, there is interest in creating concrete formulas that minimize this environmental burden while satisfying engineering performance requirements. Recent advances in artificial intelligence have enabled machines to generate highly plausible artifacts, such as images of realistic-looking faces. Semi-supervised generative models allow the generation of artifacts with specific, desired characteristics. In this work, we use Conditional Variational Autoencoders (CVAE), a type of semi-supervised generative model, to discover concrete formulas with desired properties. Our model is trained using open data from the UCI Machine Learning Repository joined with environmental impact data computed using a web-based tool. We demonstrate that CVAEs can design concrete formulas with lower emissions and natural resource usage while meeting design requirements. To ensure a fair comparison between extant and generated formulas, we also train regression models to predict the environmental impacts and strength of discovered formulas. With these results, a construction engineer may create a formula that meets structural needs and best addresses local environmental concerns.
Surface Water Formation on the Nat…
Updated:
May 19, 2019
Heterogeneous nucleation and subsequent growth of surface water occur on natural substrates when the water vapor concentration reaches the point of super-saturation. This study focuses on the parameterization of super-saturation at the canopy-air interface through field observations monitoring surface water formation (SWF), such as dew and frost, in an evergreen shrub at an urban site during the autumns and winters of 2015-2017. Here we show that the interfacial and vertical temperature differences ranged from 1 to 3 K and were necessary, but not sufficient, for super-saturated condensation on the natural surface. Excessive supplies of moisture must also exist to continuously feed the growth of the condensed water embryos; this moisture originates from both local and external sources, such as evapotranspiration and atmospheric advection driven by reduced air pressure, and causes SWF not only on the ground soil but also on the vegetation canopy at 1-2 m height. The super-saturation ratio is mainly determined by the coefficient of thermophoresis deposition, which approaches 1. SWF on the natural surface is not only an indicator but also a weak cleaner of air pollution: the downward thermophoresis deposition of fine particles and droplets favors SWF and the scavenging of air pollutants. The removal efficiency of the deposition flux during an SWF event for [SO42-+NO3-] is estimated at ~0.3 mmol (per [Ca2+] meq)/m2 (per leaf area).
Star-Convex structures as prototyp…
Updated:
May 15, 2019
Oceanic surface flows are dominated by finite-time Lagrangian coherent structures that separate regions of qualitatively different dynamical behavior. Among these, eddy boundaries are of particular interest. Their exact identification is crucial for the study of oceanic transport processes and the investigation of impacts on marine life and the climate. Here, we present a novel method purely based on convexity, a condition that is intuitive and well-established, yet not fully explored. We discuss the underlying theory, derive an algorithm that yields comprehensible results and illustrate the presented method by identifying coherent structures and filaments in simulations and real oceanic velocity fields.
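One simple way to operationalize star-convexity for a closed boundary is sketched below: a region is star-convex about a point if the boundary's polar angle around that point sweeps monotonically (i.e., the radius function is single-valued). This is an illustrative test, not the paper's algorithm.

```python
# Star-convexity test for a closed boundary about a candidate centre.
import numpy as np

def is_star_convex(boundary: np.ndarray, p: np.ndarray) -> bool:
    """boundary: (N, 2) closed-curve vertices in order; p: (2,) centre."""
    ang = np.arctan2(boundary[:, 1] - p[1], boundary[:, 0] - p[0])
    ang = np.unwrap(ang)                 # remove 2*pi jumps
    d = np.diff(ang)
    return bool(np.all(d > 0) or np.all(d < 0))   # strictly monotone sweep

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
print(is_star_convex(circle, np.zeros(2)))        # True
# A strongly non-convex filament would fail the monotonicity test.
```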
Random and Coherence Noise Attenua…
Updated:
April 22, 2019
Noise is a common occurrence in seismic reflection data, with very striking features in seismograms that affect seismic data processing and interpretation. Noise attenuation is an essential phase of seismic data processing and usually results in improved seismic interpretation by enhancing the signal-to-noise ratio. Ground roll is the major form of significant noise in land seismic surveys. It is a type of coherent noise that appears in seismograms as linear events, in most cases overlapping the reflections and potentially making them challenging to recognize. Several domains are used in noise attenuation: domain transformations are a standard tool commonly used in seismic data processing and image processing, and a large number of methods have been developed to attenuate these types of noise. In the time-offset domain, noise such as ground roll and random noise overlaps the signal over time; a different domain makes it easier to successfully isolate coherent noise, random noise, and reflection events. Five steps are introduced to attenuate coherent and random noise: FDNAT, AGORA, RADMX, SCFIL, and DDMED, together with a time-variant band-pass filter. The results indicate that the different domains can reveal features and geological structures that were masked by the noise present in the data, and they yield significant improvements in the final image quality of the 2-D seismic section. These filtering techniques can therefore give the interpreter an advantage, particularly in structural and stratigraphic interpretation, especially when exploring and characterizing possible traps.
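As a small, hedged illustration of one ingredient of such a workflow, the snippet below applies a Butterworth band-pass (a time-invariant stand-in for the time-variant filter mentioned) to a synthetic trace in which a low-frequency ground-roll-like component masks a reflection-band signal; the sampling rate and corner frequencies are assumptions.

```python
# Band-pass sketch: suppress low-frequency ground-roll-like energy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                   # sampling rate [Hz] (assumed)
b, a = butter(4, [15.0, 60.0], btype="band", fs=fs)

t = np.arange(0, 2, 1 / fs)
trace = (np.sin(2 * np.pi * 8 * t)           # ground-roll-like low frequency
         + 0.5 * np.sin(2 * np.pi * 35 * t)) # reflection-band signal
filtered = filtfilt(b, a, trace)             # zero-phase filtering
print(np.abs(filtered).max())                # 8 Hz component is attenuated
```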
Robust Building-based Registration…
Updated:
April 7, 2019
The motivation of this paper is to address the problem of registering airborne LiDAR data and optical aerial or satellite imagery acquired from different platforms, at different times, with different points of view and levels of detail. We present a robust registration method based on building regions, which are extracted from optical images using mean shift segmentation and from LiDAR data using a 3D point cloud filtering process. The matching of the extracted building segments is then carried out using Graph Transformation Matching (GTM), which allows us to determine a common pattern of relative positions of segment centers. Thanks to this registration, the relative shifts between the datasets are significantly reduced, which enables a subsequent fine registration and a resulting high-quality data fusion.
Libra R-CNN: Towards Balanced Lear…
Updated:
April 4, 2019
Compared with model architectures, the training process, which is also crucial to the success of detectors, has received relatively little attention in object detection. In this work, we carefully revisit the standard training practice of detectors and find that detection performance is often limited by imbalance during the training process, which generally occurs at three levels - the sample level, the feature level, and the objective level. To mitigate the resulting adverse effects, we propose Libra R-CNN, a simple but effective framework towards balanced learning for object detection. It integrates three novel components: IoU-balanced sampling, a balanced feature pyramid, and a balanced L1 loss, for reducing the imbalance at the sample, feature, and objective level, respectively. Benefiting from the overall balanced design, Libra R-CNN significantly improves detection performance. Without bells and whistles, it achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN Faster R-CNN and RetinaNet, respectively, on MSCOCO.
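For the objective-level component, the balanced L1 loss can be sketched as follows (using the commonly cited defaults alpha=0.5, gamma=1.5, with b chosen so the gradient is continuous at |x| = 1); consult the paper for the authoritative formulation.

```python
# Balanced L1 loss sketch (defaults assumed: alpha=0.5, gamma=1.5).
import torch

def balanced_l1(x: torch.Tensor, alpha=0.5, gamma=1.5) -> torch.Tensor:
    b = torch.exp(torch.tensor(gamma / alpha)) - 1   # gradient continuity at |x|=1
    ax = x.abs()
    small = (alpha / b) * (b * ax + 1) * torch.log(b * ax + 1) - alpha * ax
    large = gamma * ax + gamma / b - alpha           # constant chosen for continuity
    return torch.where(ax < 1, small, large).mean()

print(balanced_l1(torch.tensor([0.2, 0.8, 2.0])))
```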
No evidence of fish biodiversity e…
Updated:
April 7, 2019
We demonstrate that the conclusions drawn by Lefcheck et al. (2019) regarding the positive effects of fish diversity on coral reef ecosystem functioning across scales are flawed because of a series of conceptual and statistical issues that include spurious correlations, the conflation of population size and species diversity effects and a failure to recognize that observing a biodiversity effect at multiple sites is not equivalent to observing it at multiple scales.
FCOS: Fully Convolutional One-Stag…
Updated:
August 20, 2019
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogous to semantic segmentation. Almost all state-of-the-art object detectors, such as RetinaNet, SSD, YOLOv3, and Faster R-CNN, rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor-box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes, such as calculating overlaps during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With non-maximum suppression (NMS) as the only post-processing step, FCOS with ResNeXt-64x4d-101 achieves 44.7% AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and more flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: https://tinyurl.com/FCOSv1
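The per-pixel prediction targets can be sketched compactly: every location inside a ground-truth box regresses its four distances to the box sides, and a centerness score down-weights locations far from the center. The snippet below is an illustrative reconstruction of those targets, not the released code.

```python
# FCOS-style per-pixel targets: (l, t, r, b) distances plus centerness.
import torch

def fcos_targets(points, box):
    """points: (N, 2) as (x, y); box: (x1, y1, x2, y2)."""
    l = points[:, 0] - box[0]
    t = points[:, 1] - box[1]
    r = box[2] - points[:, 0]
    b = box[3] - points[:, 1]
    reg = torch.stack([l, t, r, b], dim=1)
    inside = reg.min(dim=1).values > 0        # positives lie inside the box
    centerness = torch.sqrt(
        (torch.minimum(l, r) / torch.maximum(l, r)).clamp(min=0)
        * (torch.minimum(t, b) / torch.maximum(t, b)).clamp(min=0))
    return reg, inside, centerness

pts = torch.tensor([[50.0, 50.0], [10.0, 90.0]])
reg, inside, ctr = fcos_targets(pts, torch.tensor([0.0, 0.0, 100.0, 100.0]))
print(inside, ctr)   # the centre point scores 1.0; off-centre scores lower
```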
Modelling the impact of deep-water…
Updated:
March 27, 2019
The concentration of population in coastal regions, in addition to direct human use, is leading to an accelerated process of change and deterioration in marine ecosystems. Human activities such as fishing, together with environmental drivers (e.g., climate change), are triggering major threats to marine biodiversity and directly impact the services it provides. On the south and southwest coasts of Portugal, the deep-water crustacean trawl fishery is no exception. This fishery is recognized to have large effects on a number of species while generating high rates of unwanted catches. However, from an ecosystem-based perspective, the fishing impacts along the food web, accounting for biological interactions between and among the species caught, remain poorly understood. These impacts are particularly troubling and a cause for concern given the cascading effects that might arise. Given the main policies and legislative instruments for the restoration and conservation of the marine environment, it is time to implement ecosystem-based approaches to fisheries management. To this end, we use a food web modelling approach (Ecopath with Ecosim) to assess the impacts of this particular fishery on the marine ecosystem of southern and southwestern Portugal. In particular, we describe the food web structure and functioning, identify the main keystone species and/or groups, quantify the major trophic and energy flows, and ultimately assess the impact of fishing on the target species and on the ecosystem by means of ecological and ecosystem-based indicators. Finally, we examine limitations and weaknesses of the model for potential improvements and future research directions.
Aggregated Deep Local Features for…
Updated:
March 22, 2019
Remote Sensing Image Retrieval remains a challenging topic due to the special nature of Remote Sensing Imagery. Such images contain various different semantic objects, which clearly complicates the retrieval task. In this paper, we present an image retrieval pipeline that uses attentive, local convolutional features and aggregates them using the Vector of Locally Aggregated Descriptors (VLAD) to produce a global descriptor. We study various system parameters such as the multiplicative and additive attention mechanisms and descriptor dimensionality. We propose a query expansion method that requires no external inputs. Experiments demonstrate that even without training, the local convolutional features and global representation outperform other systems. After system tuning, we can achieve state-of-the-art or competitive results. Furthermore, we observe that our query expansion method increases overall system performance by about 3%, using only the top-three retrieved images. Finally, we show how dimensionality reduction produces compact descriptors with increased retrieval performance and fast retrieval computation times, e.g. 50% faster than the current systems.
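A minimal NumPy sketch of the VLAD aggregation step is given below: local descriptors are assigned to their nearest visual words, residuals are accumulated per word, and the result is power- and L2-normalized into one global descriptor. The codebook size and normalization details here are common defaults, not necessarily the paper's.

```python
# VLAD aggregation of local descriptors into a single global descriptor.
import numpy as np

def vlad(descriptors: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """descriptors: (N, D) local features; centroids: (K, D) codebook."""
    assign = np.argmin(
        ((descriptors[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    K, D = centroids.shape
    v = np.zeros((K, D))
    for k in range(K):
        if np.any(assign == k):
            v[k] = (descriptors[assign == k] - centroids[k]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))            # power normalization
    flat = v.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)   # global L2 normalization

rng = np.random.default_rng(0)
desc, cb = rng.normal(size=(200, 40)), rng.normal(size=(16, 40))
print(vlad(desc, cb).shape)                        # (640,) global descriptor
```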
Estimating the density of resident…
Updated:
March 20, 2019
Technological advances in underwater video recording are opening novel opportunities for monitoring wild fish. However, extracting data from videos is often challenging. Nevertheless, it has recently been demonstrated that accurate and precise density estimates for animals whose normal activities are restricted to a bounded area or home range can be obtained from counts averaged across a relatively low number of video frames. The method, however, requires that individual detectability (PID, the probability of detecting a given animal provided that it is actually within the area surveyed by a camera) be known. Here we propose a Bayesian implementation for estimating PID by combining counts from cameras with counts from any reference method. The proposed framework is demonstrated using Serranus scriba, a widely distributed and resident coastal fish, as a case study. Density and PID were calculated after combining fish counts from unbaited remote underwater video (RUV) and underwater visual censuses (UVC) as the reference method. The relevance of the proposed framework is that, after estimating PID, fish density can be estimated accurately and precisely at the UVC scale (or at the scale of the preferred reference method) using RUV only. This key statement has been extensively demonstrated using computer simulations informed by real empirical data. Finally, we provide a simulation tool-kit for comparing the expected precision attainable for different sampling efforts and for species with different levels of PID. Overall, the proposed method may contribute to substantially enlarging the spatio-temporal scope of density monitoring programs for many resident fish.
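The conjugate core of such an estimator can be sketched in a few lines: if the reference method says N fish are present and the camera detects k of them over repeated frames, PID gets a Beta posterior. The toy counts and the flat prior below are assumptions; the paper's hierarchical model is richer.

```python
# Beta-Binomial sketch for detectability PID from paired RUV/UVC counts.
import numpy as np
from scipy import stats

N = 12                          # fish present per UVC (reference method)
frame_counts = [4, 6, 5, 7, 3]  # RUV counts across frames (toy data)
k, n_trials = sum(frame_counts), N * len(frame_counts)

a, b = 1 + k, 1 + n_trials - k            # uniform Beta(1, 1) prior
posterior = stats.beta(a, b)
print(f"PID ~ {posterior.mean():.2f} "
      f"(95% CI {posterior.ppf(0.025):.2f}-{posterior.ppf(0.975):.2f})")
# With PID in hand, density from RUV alone scales as mean_count / PID.
```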
Roof fall hazard due to blasting a…
Updated:
March 13, 2019
One of the major problems associated with exploitation of the copper ore deposit in underground mines in Poland is a local disturbance of the state of stable equilibrium, manifested in a sudden release of the strain energy stored in the deformed rock mass. It occurs mainly in the form of dynamic events which may result in rockbursts and roof falls. In order to face these threats, a number of organisational and technical prevention methods are applied in mines. It should also be noted that the greatest difficulties with roof control are observed in the vicinity of active mining fronts, where the highest deformations occur. The detonation of explosives generates a propagating shock wave which may cause serious damage to any material body encountered on its way. Thus, doubts emerged during mining operations as to whether the simultaneous firing of a group of mining faces may have a negative impact on the condition of the applied roof support and of the roof strata. The article discusses the geomechanical influence of multi-face blasting on the condition of the immediate roof strata through a comparison of instrumented-bolt monitoring data and computer simulation results. The numerically assessed stress/strain field in the near vicinity of the blasting works proved to be in close agreement with the field-measured data. In the considered mining conditions, both the numerical approach and the field strain/stress monitoring indicated a low effect of production blasting on the immediate roof fall potential.
Significant Impact of Rossby Waves…
Updated:
March 6, 2019
Air pollution is associated with human diseases and has been found to be related to premature mortality. In response, environmental policies have been adopted in many countries to decrease anthropogenic air pollution and improve long-term air quality, since most air pollutant sources are anthropogenic. However, air pollution fluctuations have been found to depend strongly on weather dynamics. This raises a fundamental question: what are the significant atmospheric processes that affect the local daily variability of air pollution? To address this question, we develop a multi-layered network analysis to detect the interlinks between the geopotential height of the upper air (~5 km) and surface air pollution in both China and the USA. We find that Rossby waves significantly affect air pollution fluctuations through the development of cyclone and anticyclone systems, which further affect the local stability of the air and the winds. The significant impacts of Rossby waves on air pollution are found to underlie most of the daily fluctuations in air pollution. Thus, the impact of Rossby waves on human life is greater than previously assumed. The rapid warming of the Arctic could slow down Rossby waves, thus increasing human health risks. Our method can help in the risk assessment of such extreme events and can improve potential predictability.
Pancreas segmentation with probabi…
Updated:
August 11, 2022
Pancreas segmentation in medical imaging data is of great significance for clinical pancreas diagnostics and treatment. However, the large population variations in pancreas shape and volume cause enormous segmentation difficulties, even for state-of-the-art algorithms utilizing fully convolutional neural networks (FCNs). Specifically, pancreas segmentation suffers from the loss of spatial information in 2D methods and from the high computational cost of 3D methods. To alleviate these problems, we propose a probabilistic-map-guided bi-directional recurrent UNet (PBR-UNet) architecture, which fuses intra-slice information and inter-slice probabilistic maps into a local 3D hybrid regularization scheme, followed by bi-directional recurrent network optimization. The PBR-UNet method consists of an initial estimation module for efficiently extracting pixel-level probabilistic maps and a primary segmentation module for propagating hybrid information through a 2.5D U-Net architecture. Specifically, local 3D information is inferred by combining an input image with the probabilistic maps of the adjacent slices into multichannel hybrid data, and then hierarchically aggregating the hybrid information through the entire segmentation network. In addition, a bi-directional recurrent optimization mechanism is developed to update the hybrid information in both the forward and the backward directions. This allows the proposed network to make full and optimal use of the local context information. Quantitative and qualitative evaluations were performed on the NIH Pancreas-CT dataset, and our proposed PBR-UNet method achieved better segmentation results with less computational cost compared to other state-of-the-art methods.
Trade-offs between carbon stocks a…
Updated:
February 28, 2019
Policies to mitigate climate change and biodiversity loss often assume that protecting carbon-rich forests provides co-benefits in terms of biodiversity, due to the spatial congruence of carbon stocks and biodiversity at biogeographic scales. However, it remains unclear whether this holds at the scales relevant for management, with particularly large knowledge gaps for temperate forests and for taxa other than trees. We built a comprehensive dataset of Central European temperate forest structure and multi-taxonomic diversity (beetles, birds, bryophytes, fungi, lichens, and plants) across 352 plots. We used Boosted Regression Trees to assess the relationship between above-ground live carbon stocks and (a) taxon-specific richness and (b) a unified multidiversity index. We used Threshold Indicator Taxa ANalysis to explore individual species' responses to changing above-ground carbon stocks and to detect change-points in species composition along the carbon-stock gradient. Our results reveal an overall weak and highly variable relationship between richness and carbon stock at the stand scale, both for individual taxonomic groups and for multidiversity. Similarly, the proportion of win-win and trade-off species (i.e. species favored or disadvantaged by increasing carbon stock, respectively) varied substantially across taxa. Win-win species gradually replaced trade-off species with increasing carbon, without clear thresholds along the above-ground carbon gradient, suggesting that community-level surrogates (e.g. richness) might fail to detect critical changes in biodiversity. Collectively, our analyses highlight that leveraging co-benefits between carbon and biodiversity in temperate forests may require stand-scale management that prioritizes either biodiversity or carbon in order to maximize co-benefits at broader scales. Importantly, this contrasts with tropical forests, where climate [...]
Learning Factored Markov Decision …
Updated:
February 27, 2019
Methods for learning and planning in sequential decision problems often assume the learner is aware of all possible states and actions in advance. This assumption is sometimes untenable. In this paper, we give a method to learn factored Markov decision problems from both domain exploration and expert assistance, which guarantees convergence to near-optimal behaviour, even when the agent begins unaware of factors critical to success. Our experiments show our agent learns optimal behaviour on small and large problems, and that conserving information on discovering new possibilities results in faster convergence.
Spatial And Temporal Changes Of Th…
Updated:
February 21, 2019
Observational constraints on geomagnetic field changes from interannual to millennial periods are reviewed, and the current resolution of field models (covering the archeological to satellite eras) is discussed. With the perspective of data assimilation, emphasis is put on the uncertainties attached to Gauss coefficients and on the statistical properties of ground-based records. The latter potentially call for leaving behind the notion of geomagnetic jerks. The accuracy at which we recover interannual changes also requires considering with caution the apparent periodicity seen in the secular acceleration from satellite data. I then address the interpretation of recorded magnetic fluctuations in terms of core dynamics, highlighting the need for models that allow (or pre-suppose) a magnetic energy orders of magnitude larger than the kinetic energy at large length-scales, a target for future numerical simulations of the geodynamo. I finally recall the first attempts at implementing geomagnetic data assimilation algorithms.
Quantitative analysis of timing in…
Updated:
February 20, 2019
Timing features such as the silence gaps between vocal units -- inter-call intervals (ICIs) -- often correlate with biological information such as context or genetic information. Such correlations between ICIs and biological information have been reported for a diversity of animals, yet few quantitative approaches for investigating timing exist to date. Here, we propose a novel approach for quantitatively comparing timing in animal vocalisations in terms of the typical ICIs. As features, we use the distribution of silence gaps, parametrised with a kernel density estimate (KDE), and compare distributions using the symmetric Kullback-Leibler divergence (sKL-divergence). We use this technique to compare timing in the vocalisations of two frog species, a group of zebra finches, and calls from parrots of the same species. As a main finding, we demonstrate that in our dataset, closely related species have more similar distributions than species that are genetically more distant, with across-species sKL-divergences larger than within-species distances. Compared with more standard methods such as Fourier analysis, the proposed method is more robust to the different durations present in the data samples, flexibly applicable to different species, and easy to interpret. Investigating timing in animal vocalisations may thus contribute to taxonomy, support conservation efforts by helping to monitor animals in the wild, and may shed light on the origins of timing structures in animal vocal communication.
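The KDE-plus-sKL pipeline is easy to sketch with SciPy: estimate a density over the ICIs of each recording, evaluate both densities on a shared grid, and integrate the symmetric KL divergence. The log-normal toy samples below are assumptions standing in for real call data.

```python
# KDE over inter-call intervals, then symmetric KL divergence on a grid.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
icis_a = rng.lognormal(mean=-1.0, sigma=0.4, size=300)  # species A gaps [s]
icis_b = rng.lognormal(mean=-0.6, sigma=0.5, size=300)  # species B gaps [s]

grid = np.linspace(0.01, 3.0, 500)
p = gaussian_kde(icis_a)(grid) + 1e-12
q = gaussian_kde(icis_b)(grid) + 1e-12
dx = grid[1] - grid[0]
p, q = p / (p.sum() * dx), q / (q.sum() * dx)   # renormalize on the grid

skl = float(((p * np.log(p / q)).sum() + (q * np.log(q / p)).sum()) * dx)
print(f"sKL divergence: {skl:.3f}")   # larger = more dissimilar timing
```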
Verifiably Safe Off-Model Reinforc…
Updated:
February 14, 2019
The desire to use reinforcement learning in safety-critical settings has inspired a recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily consider the problem of constrained learning or constrained optimization. Given a single correct model and associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple environmental models must be taken into account. Through a combination of design-time model updates and runtime model falsification, we take a first step toward formal safety proofs for autonomous systems acting in heterogeneous environments.
Spectral Analysis of the September…
Updated:
February 11, 2019
An interval of exceptional solar activity was registered in early September 2017, late in the decay phase of solar cycle 24, involving the complex Active Region 12673 as it rotated across the western hemisphere with respect to Earth. A large number of eruptions occurred between 4 and 10 September, including four associated with X-class flares. The X9.3 flare on 6 September and the X8.2 flare on 10 September are currently the two largest of cycle 24. Both were accompanied by fast coronal mass ejections and gave rise to solar energetic particle (SEP) events measured by near-Earth spacecraft. In particular, the partially-occulted solar event on 10 September triggered a ground level enhancement (GLE), the second GLE of cycle 24. A further, much less energetic SEP event was recorded on 4 September. In this work we analyze observations by the Advanced Composition Explorer (ACE) and the Geostationary Operational Environmental Satellites (GOES), estimating the SEP event-integrated spectra above 300 keV and carrying out a detailed study of the temporal evolution of the spectral shape. The derived spectra are characterized by a low-energy break at a few to tens of MeV; the 10 September event spectrum, extending up to ~1 GeV, exhibits an additional rollover at several hundred MeV. We discuss the spectral interpretation in the scenario of shock acceleration and in terms of other important external influences related to interplanetary transport and magnetic connectivity, taking advantage of multi-point observations from the Solar Terrestrial Relations Observatory (STEREO). Spectral results are also compared with those obtained for the 17 May 2012 GLE event.
A real-time hourly ozone predictio…
Updated:
January 30, 2019
This study uses a deep learning approach to forecast ozone concentrations over Seoul, South Korea for 2017. We employ a deep convolutional neural network (CNN). We apply this method to predict the hourly ozone concentration on each day for the entire year using several predictors from the previous day, including the wind fields, temperature, relative humidity, pressure, and precipitation, along with in-situ ozone and NO2 concentrations. We use the history of all observed parameters between 2014 and 2016 to train the predictive models. Model-measurement comparisons for the 25 monitoring sites for the year 2017 yield average indices of agreement (IOA) of 0.84-0.89 and Pearson correlation coefficients of 0.74-0.81, indicating reasonable performance for the CNN forecasting model. Although the CNN model successfully captures daily trends as well as yearly high and low variations of the ozone concentrations, it notably underpredicts high ozone peaks during the summer. The forecasting results are generally more accurate for the stations located in the southern regions of the Han River, owing to more stable topographical and meteorological conditions there. Furthermore, through two separate daytime and nighttime forecasts, we find that the monthly IOA of the CNN model is 0.05-0.30 higher during the daytime, a consequence of the unavailability of some of the input parameters during the nighttime. While the CNN model can predict the next 24 hours of ozone concentrations in under a minute, we identify several limitations of deep learning models for real-time air quality forecasting that point to further improvements.
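A minimal sketch, in PyTorch, of a forecaster in the spirit of this abstract: previous-day predictor fields enter as input channels and the network outputs 24 hourly ozone values. The channel count, grid size, and layer sizes are assumptions for illustration; the paper's actual architecture may differ.

```python
# Hypothetical CNN: 7 previous-day predictor channels (winds, temperature,
# humidity, pressure, precipitation, O3, NO2) on a 32x32 grid in, 24 hourly
# ozone concentrations out. Sizes are illustrative only.
import torch
import torch.nn as nn

class OzoneCNN(nn.Module):
    def __init__(self, in_channels=7, grid=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * (grid // 4) ** 2, 24)  # 24 hourly values

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One batch: 8 samples, 7 predictor channels on a 32x32 grid.
model = OzoneCNN()
pred = model(torch.randn(8, 7, 32, 32))
print(pred.shape)  # torch.Size([8, 24])
```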
Modelling and optimisation of wate…
Updated:
January 28, 2019
We consider the management of sloping, long and thin, coastal aquifers. We first develop a simple mathematical model, based on Darcy flow in porous media, which gives the water table height and the flow velocity as functions of the underground seepage rate, the recharge rates, and the extraction rates, neglecting seawater intrusion. We then validate the model with recent data from the Germasogeia aquifer, which caters for most of the water demand in the area of Limassol, Cyprus. The data are provided over a three-year period by the Cyprus Water Development Department (WDD), the governmental department managing the aquifer. Furthermore, based on our model, we develop an optimised recharge strategy and identify the optimal recharge rates for a desired extracted water volume while the water table height is maintained at an acceptable level. We study several scenarios of practical interest and find that we can achieve considerable water savings compared to the current empirical strategy followed by the WDD. Additionally, we model the transport of pollutants in the aquifer in the case of accidental leakage, using an advection-diffusion equation. We find that in the case of an undetected and unhindered contamination (the worst-case scenario) the aquifer would be polluted in about three years. We also find that doubling the recharge rates flushes the pollutant out of the aquifer faster. Finally, to incorporate the possibility of seawater intrusion, which can render aquifers unusable, we develop a new, transient, two-dimensional model of groundwater flow based on the Darcy-Brinkman equations, and determine the position of the water table and the seawater-freshwater interface under conditions of drought, moderate rainfall, and flooding. The new seawater intrusion modelling approach has been validated via comparison with a widely accepted code.
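The abstract does not reproduce the model equations, but a Darcy-flow model of a long, thin, unconfined aquifer of this kind typically reduces to a standard Dupuit-Boussinesq water-table balance; the sketch below gives that textbook form, not necessarily the paper's exact formulation.

```latex
% Dupuit-Boussinesq balance for an essentially one-dimensional,
% unconfined aquifer (textbook form; the paper's model may differ):
\[
  S_y \,\frac{\partial h}{\partial t}
    = \frac{\partial}{\partial x}\!\left( K\, h \,\frac{\partial h}{\partial x} \right)
      + N(x, t)
\]
% h: water table height, K: hydraulic conductivity, S_y: specific yield,
% N: net source term (recharge + underground seepage - extraction),
% lumping the rates the abstract mentions into a single forcing.
```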
End-to-End Learned Early Classific…
Updated:
December 21, 2022
Remote sensing satellites capture the cyclic dynamics of our planet at regular time intervals, recorded as satellite time series data. End-to-end trained deep learning models use this time series data to make predictions at a large scale, for instance, to produce up-to-date crop cover maps. Most time series classification approaches focus on the accuracy of predictions. However, the earliness of the prediction is also of great importance, since coming to an early decision can make a crucial difference in time-sensitive applications. In this work, we present an End-to-End Learned Early Classification of Time Series (ELECTS) model that estimates a classification score and a probability of whether sufficient data has been observed to come to an early and still accurate decision. ELECTS is modular: any deep time series classification model can adopt the ELECTS conceptual idea by adding a second prediction head that outputs a probability of stopping the classification. The ELECTS loss function then optimizes the overall model on a balanced objective of earliness and accuracy. Our experiments on four crop classification datasets from Europe and Africa show that ELECTS reaches state-of-the-art accuracy while massively reducing the quantity of data to be downloaded, stored, and processed. The source code is available at https://github.com/marccoru/elects.
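A minimal sketch of the two-head idea as described here: a sequence encoder gains a second head that outputs a per-step stopping probability, and the loss balances accuracy against earliness. This is a simplified reading for illustration; the exact ELECTS loss is defined in the paper and the linked repository.

```python
# Simplified early-classification sketch in the spirit of ELECTS:
# one head scores classes per time step, the other outputs P(stop).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyClassifier(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.class_head = nn.Linear(hidden, n_classes)  # classification score
        self.stop_head = nn.Linear(hidden, 1)           # P(stop) per step

    def forward(self, x):                    # x: (batch, time, features)
        h, _ = self.encoder(x)
        return self.class_head(h), torch.sigmoid(self.stop_head(h)).squeeze(-1)

def earliness_accuracy_loss(logits, p_stop, labels, alpha=0.5):
    B, T, C = logits.shape
    # Probability of stopping exactly at step t (first "success" at t).
    not_stopped = torch.cumprod(1 - p_stop, dim=1)
    p_t = p_stop * torch.cat([torch.ones(B, 1), not_stopped[:, :-1]], dim=1)
    p_t = p_t / p_t.sum(dim=1, keepdim=True)            # normalise over time
    ce = F.cross_entropy(logits.reshape(B * T, C),
                         labels.repeat_interleave(T),
                         reduction="none").view(B, T)
    earliness = torch.arange(T, dtype=torch.float) / T  # later = costlier
    return (p_t * (alpha * ce + (1 - alpha) * earliness)).sum(dim=1).mean()

model = EarlyClassifier(n_features=13, n_classes=9)
x, y = torch.randn(4, 30, 13), torch.randint(0, 9, (4,))
logits, p_stop = model(x)
print(earliness_accuracy_loss(logits, p_stop, y))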
Rapid and cost-effective evaluatio…
Updated:
January 22, 2019
The fluorescence spectra of bacterial samples stained with SYTO 9 and propidium iodide (PI) were used to monitor bacterial viability. Stained mixtures of live and dead Escherichia coli, with proportions of live:dead cells varying from 0 to 100%, were measured using the optrode, a cost-effective and convenient fibre-based spectroscopic device. We demonstrated several approaches to obtaining the proportions of live:dead E. coli in mixed samples from analyses of the fluorescence spectra collected by the optrode. To find a suitable technique for predicting the percentage of live bacteria in a sample, four analysis methods were assessed and compared: the SYTO 9:PI fluorescence intensity ratio, an adjusted fluorescence intensity ratio, single-spectrum support vector regression (SVR), and multi-spectra SVR. Of the four analysis methods, multi-spectra SVR obtained the most reliable results and was able to predict the percentage of live bacteria in 10^8 bacteria/mL samples between c. 7% and 100% live, and in 10^7 bacteria/mL samples between c. 7% and 73% live. By demonstrating the use of multi-spectra SVR and the optrode to monitor E. coli viability, we raise points of consideration for spectroscopic analysis of SYTO 9 and PI and aim to lay the foundation for future work that uses similar methods for different bacterial species.
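A minimal sketch of the single-spectrum SVR approach named above, assuming simulated two-band spectra whose band ratio tracks viability; the real analysis uses optrode measurements and, in its best-performing variant, multiple spectra per sample.

```python
# Predict the percentage of live bacteria from a fluorescence spectrum
# with support vector regression. Spectra here are simulated stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 120, 256
live_pct = rng.uniform(0, 100, n_samples)
# Toy spectra: two overlapping emission bands whose ratio tracks viability.
wl = np.linspace(0, 1, n_wavelengths)
spectra = (np.outer(live_pct / 100, np.exp(-((wl - 0.3) ** 2) / 0.01)) +
           np.outer(1 - live_pct / 100, np.exp(-((wl - 0.6) ** 2) / 0.01)) +
           rng.normal(0, 0.02, (n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, live_pct, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```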
An Open Source, Versatile, Afforda…
Updated:
August 28, 2019
Sea ice is a major feature of the polar environments. Recent changes in the climate and extent of the sea ice, together with increased economic activity and research interest in these regions, are driving factors for new measurements of sea ice dynamics. Waves in ice are important as they participate in the coupling between the open ocean and the ice-covered regions. Measurements are challenging to perform due to remoteness and harsh environmental conditions. While progress has been made in observing wave propagation in sea ice using remote methods, these are still relatively new measurements and would benefit from more in situ data for validation. In this article, we present an open source instrument that was developed for performing such measurements. The versatile design includes an ultra-low power unit, a microcontroller-based logger, a small microcomputer for on-board data processing, and an Iridium modem for satellite communications. Virtually any sensor can be used with this design. In the present case, we use an inertial motion unit (IMU) to record wave motion. High-quality results were obtained, which opens new possibilities for in situ measurements in the polar regions. Our instrument can be easily customized to fit many in situ measurement tasks, and we hope that our work will provide a framework for future developments of a variety of such open source instruments.
Quasi-separatrix Layers Induced by…
Updated:
January 2, 2019
Magnetic reconnection processes in the near-Earth magnetotail can be highly three-dimensional (3D) in geometry and dynamics, even though the magnetotail configuration itself is nearly two-dimensional due to the symmetry in the dusk-dawn direction. Such reconnection processes can be induced by the 3D dynamics of nonlinear ballooning instability. In this work, we explore the global 3D geometry of the reconnection process induced by ballooning instability in the near-Earth magnetotail by examining the distribution of quasi-separatrix layers (QSLs) associated with plasmoid formation in the entire 3D domain of the magnetotail configuration, using an algorithm previously developed in the context of solar physics. The 3D distribution of QSLs, as well as their evolution, directly follows the plasmoid formation during the nonlinear development of ballooning instability in both time and space. Such a close correlation demonstrates a strong coupling between the ballooning and the corresponding reconnection processes. It further confirms the intrinsic 3D nature of the ballooning-induced plasmoid formation and reconnection processes, in both geometry and dynamics. In addition, the reconstruction of the 3D QSL geometry may provide an alternative means of identifying the location and timing of 3D reconnection sites in the magnetotail from both numerical simulations and satellite observations.
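For reference, QSLs are commonly located with the squashing factor Q of the magnetic field-line mapping (Titov et al., 2002); the abstract does not spell out the algorithm, so the sketch below gives the standard definition rather than the authors' exact recipe.

```latex
% Squashing factor Q of the field-line mapping (X, Y) -> (X', Y'),
% with Jacobian elements a = dX'/dX, b = dX'/dY, c = dY'/dX, d = dY'/dY:
\[
  Q \;=\; \frac{a^{2} + b^{2} + c^{2} + d^{2}}{\lvert\, a d - b c \,\rvert}
\]
% QSLs are the thin volumes where Q >> 2, i.e. where the field-line
% mapping is strongly distorted.
```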
Ecology of Near-Earth Space
Updated:
December 25, 2018
The technical achievements of our civilization are accompanied by negative consequences affecting near-Earth space. The problem of near-Earth space becoming cluttered with "space debris" arose, at first as a purely theoretical concern, essentially as soon as the first artificial satellite was launched in 1957. Since then, the rate of exploitation of outer space has increased very rapidly. As a result, the clogging of near-Earth space ceased to be only a theoretical problem and became a practical one. Presently, the anthropogenic factors affecting near-Earth space are divided into several categories: mechanical, chemical, radioactive, and electromagnetic pollution.