Next-order balanced model captures…
Updated:
February 17, 2025
Using nonlinear simulations in two settings, we demonstrate that QG$^\mathrm{+1}$, a potential-vorticity based next-order-in-Rossby balanced model, captures several aspects of ocean submesoscale physics. In forced-dissipative 3D simulations under baroclinically unstable Eady-type background states, the statistical equilibrium turbulence exhibits long cyclonic tails and a plethora of rapidly-intensifying ageostrophic fronts. Although the model requires setting an explicit, small value for the fixed scaling Rossby number, the emergent flows are nevertheless characterized by vorticity and convergence values larger than the local Coriolis frequency, as observed in upper-ocean submesoscale flows. Simulations of QG$^\mathrm{+1}$ under the classic strain-induced frontogenesis set-up show realistic frontal asymmetry and a provable finite-time blow-up, quantitatively comparable to simulations of the semigeostrophic equations. The inversions in the QG$^\mathrm{+1}$ model are straightforward linear Poisson problems, allowing for the reconstruction of all flow fields from the PV and surface buoyancy while avoiding the semigeostrophic coordinate transformation. Taken together, these results suggest QG$^\mathrm{+1}$ may be a useful tool for studying upper-ocean submesoscale dynamics.
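Since the abstract notes that the model's inversions are plain linear Poisson problems, the following minimal sketch shows the kind of FFT-based inversion involved: recovering a streamfunction from a PV-like field on a doubly periodic square grid. The function name and setup are illustrative assumptions, not the paper's actual operators (which also involve surface buoyancy and next-order corrections).

```python
import numpy as np

def invert_pv_fft(q, L=2 * np.pi):
    """Solve lap(psi) = q on a doubly periodic square grid via FFT --
    the kind of linear Poisson inversion a QG-type balance model
    performs (illustrative sketch only)."""
    n = q.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    qh = np.fft.fft2(q)
    psih = np.zeros_like(qh)
    nonzero = k2 > 0                             # leave the mean mode at zero
    psih[nonzero] = -qh[nonzero] / k2[nonzero]
    return np.real(np.fft.ifft2(psih))

# Geostrophic velocities then follow from psi: u = -dpsi/dy, v = dpsi/dx.
```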
On Feasibility of Intent Obfuscati…
Updated:
August 29, 2024
Intent obfuscation is a common tactic in adversarial situations, enabling the attacker to both manipulate the target system and avoid culpability. Surprisingly, it has rarely been implemented in adversarial attacks on machine learning systems. We are the first to propose using intent obfuscation to generate adversarial examples for object detectors: by perturbing another non-overlapping object to disrupt the target object, the attacker hides their intended target. We conduct a randomized experiment on 5 prominent detectors -- YOLOv3, SSD, RetinaNet, Faster R-CNN, and Cascade R-CNN -- using both targeted and untargeted attacks and achieve success on all models and attacks. We analyze the success factors characterizing intent-obfuscating attacks, including target object confidence and perturbed object size. We then demonstrate that the attacker can exploit these success factors to increase success rates for all models and attacks. Finally, we discuss the main takeaways and legal repercussions.
Self-Introspective Decoding: Allev…
Updated:
March 16, 2025
While Large Vision-Language Models (LVLMs) have rapidly advanced in recent years, the prevalent `hallucination' problem has emerged as a significant bottleneck, hindering their real-world deployment. Existing methods mitigate this issue mainly from two perspectives: One approach leverages extra knowledge, such as robust instruction tuning of LVLMs with curated datasets or auxiliary analysis networks, which inevitably incurs additional costs. Another approach, known as contrastive decoding, induces hallucinations by manually disturbing the raw vision or instruction inputs and mitigates them by contrasting the outputs of the disturbed and original LVLMs. However, these approaches rely on empirical holistic input disturbances and double the inference cost. To avoid these issues, we propose a simple yet effective method named Self-Introspective Decoding (SID). Our empirical investigation reveals that pretrained LVLMs can introspectively assess the importance of vision tokens based on preceding vision and text (both instruction and generated) tokens. We develop the Context and Text-aware Token Selection (CT2S) strategy, which preserves only the unimportant vision tokens after the early layers of the LVLM to adaptively amplify text-informed hallucinations during auto-regressive decoding. This approach ensures that the multimodal knowledge absorbed in the early layers induces multimodal contextual hallucinations rather than aimless ones. Subsequently, the amplified vision-and-text association hallucinations are subtracted from the original token logits, guiding the LVLM to decode faithfully. Extensive experiments illustrate that SID generates less hallucinated and higher-quality text across various metrics, without extra knowledge or much additional computational burden.
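For intuition, the subtraction step described above can be read as a contrastive adjustment of next-token logits: one forward pass with the full input and one hallucination-amplified pass (e.g., keeping only the 'unimportant' vision tokens), whose logits are subtracted. The sketch below assumes both logit tensors are available; the weighting `alpha` and greedy selection are illustrative, not the paper's exact scheme.

```python
import torch

def contrastive_decode_step(logits_orig, logits_amplified, alpha=1.0):
    """One greedy decoding step: penalize tokens favored by the
    hallucination-amplified pass relative to the original pass."""
    adjusted = (1 + alpha) * logits_orig - alpha * logits_amplified
    return torch.argmax(adjusted, dim=-1)  # next-token ids
```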
High-throughput 3D shape completio…
Updated:
November 12, 2024
Potato yield is an important metric for farmers to further optimize their cultivation practices. Potato yield can be estimated on a harvester using an RGB-D camera that measures the three-dimensional (3D) volume of individual potato tubers. A challenge, however, is that the 3D shape derived from RGB-D images is only partially complete, underestimating the actual volume. To address this issue, we developed a 3D shape completion network, called CoRe++, which can complete the 3D shape from RGB-D images. CoRe++ is a deep learning network that consists of a convolutional encoder and a decoder. The encoder compresses RGB-D images into latent vectors that are used by the decoder to complete the 3D shape using the deep signed distance field network (DeepSDF). To evaluate our CoRe++ network, we collected partial and complete 3D point clouds of 339 potato tubers on an operational harvester in Japan. On the 1425 RGB-D images in the test set (representing 51 unique potato tubers), our network achieved a completion accuracy of 2.8 mm on average. For volume estimation, the root mean squared error (RMSE) was 22.6 ml, better than the RMSE of the linear regression (31.1 ml) and the base model (36.9 ml). We found that the RMSE can be further reduced to 18.2 ml when performing the 3D shape completion in the center of the RGB-D image. With an average 3D shape completion time of 10 milliseconds per tuber, we conclude that CoRe++ is both fast and accurate enough to be implemented on an operational harvester for high-throughput potato yield estimation. CoRe++'s high-throughput and accurate processing allows it to be applied to other tuber, fruit and vegetable crops, thereby enabling versatile, accurate and real-time yield monitoring in precision agriculture. Our code, network weights and dataset are publicly available at https://github.com/UTokyo-FieldPhenomics-Lab/corepp.git.
Semi-Supervised Teacher-Reference-…
Updated:
March 17, 2025
Existing action quality assessment (AQA) methods often require a large number of label annotations for fully supervised learning, which are laborious and expensive to obtain. In practice, labeled data are difficult to acquire because the AQA annotation process requires domain-specific expertise. In this paper, we propose a novel semi-supervised method that enables better assessment on the AQA task by exploiting a large amount of unlabeled data and a small portion of labeled data. Differing from the traditional teacher-student network, we propose a teacher-reference-student architecture to learn from both unlabeled and labeled data, where the teacher network and the reference network generate pseudo-labels for unlabeled data to supervise the student network. Specifically, the teacher predicts pseudo-labels by capturing high-level features of unlabeled data. The reference network provides adequate supervision of the student network by referring to additional action information. Moreover, we introduce a confidence memory to improve the reliability of pseudo-labels by storing the most accurate outputs produced so far by the teacher and reference networks. To validate our method, we conduct extensive experiments on three AQA benchmark datasets. Experimental results show that our method achieves significant improvements and outperforms existing semi-supervised AQA methods.
EuroCropsML: A Time Series Benchma…
Updated:
July 24, 2024
We introduce EuroCropsML, an analysis-ready remote sensing machine learning dataset for time series crop type classification of agricultural parcels in Europe. It is the first dataset designed to benchmark transnational few-shot crop type classification algorithms that supports advancements in algorithmic development and research comparability. It comprises 706 683 multi-class labeled data points across 176 classes, featuring annual time series of per-parcel median pixel values from Sentinel-2 L1C data for 2021, along with crop type labels and spatial coordinates. Based on the open-source EuroCrops collection, EuroCropsML is publicly available on Zenodo.
CDDIP: Constrained Diffusion-Drive…
Updated:
January 14, 2025
Seismic data frequently exhibit missing traces, substantially affecting subsequent seismic processing and interpretation. Deep learning-based approaches have demonstrated significant advancements in reconstructing irregularly missing seismic data through supervised and unsupervised methods. Nonetheless, substantial challenges remain, such as generalization capacity and computation time during inference. Our work introduces a reconstruction method that uses a pre-trained generative diffusion model for image synthesis and incorporates Deep Image Prior to enforce data consistency when reconstructing missing traces in seismic data. The proposed method has demonstrated strong robustness and high reconstruction capability on post-stack and pre-stack data with different levels of structural complexity, even in field and synthetic scenarios where test data were outside the training domain. This indicates that our method can handle the high geological variability of different exploration targets. Additionally, compared to other state-of-the-art diffusion-based seismic reconstruction methods, our approach reduces the number of sampling timesteps during inference by up to 4x.
A Tale of Single-channel Electroen…
Updated:
March 17, 2025
Single-channel electroencephalogram (EEG) is a cost-effective, comfortable, and non-invasive method for monitoring brain activity, widely adopted by researchers, consumers, and clinicians. The increasing number and proportion of articles on single-channel EEG underscore its growing potential. This paper provides a comprehensive review of single-channel EEG, focusing on development trends, devices, datasets, signal processing methods, recent applications, and future directions. Definitions of bipolar and unipolar configurations in single-channel EEG are clarified to guide future advancements. Applications mainly span sleep staging, emotion recognition, educational research, and clinical diagnosis. Ongoing advances in AI-based EEG generation techniques suggest that single-channel EEG may reach parity with, or even surpass, multichannel EEG performance.
What is glacier sliding
Updated:
March 7, 2025
Glacier and ice-sheet motion is fundamental to glaciology. However, there is still no clear consensus on the optimal way to describe the sliding component of glacier and ice-sheet dynamics. Typically, sliding is parametrised using a traction coefficient linked to a given theory describing one or a limited set of sliding processes. However, this precludes the possibility of multiple overlapping, spatio-temporally varying sliding modes, with inaccuracies causing model error to propagate as the system evolves away from the conditions for which it was optimised. Here, we describe glacier sliding as a scale- and setting-dependent 'inner flow' that arises from multiple overlapping sub-processes, including viscous ice deformation and obstacle resistance (or 'form drag'). The corresponding 'outer flow' then accounts for ice deformation that is minimally influenced by bed properties. Following this framework, we suggest that a rough-smooth bed division may be more consequential than a hard-soft one, and that the importance of 'Iken's bound' may be significantly reduced if viscous ice deformation becomes significant in a given region within the inner flow, which may explain the persistent functionality of power-law sliding in large-scale process-agnostic sliding studies over rough topography. Further, reviewing observation-based studies and considering the diversity of sliding processes and settings, we suggest that a simple 'unified' sliding relationship controlled by a single tunable coefficient may not be realistic. However, we show that, given reasonable assumptions, sliding relationships should generally fall within a sum of regularised-Coulomb and power-law components, and suggest that a compound or power-law relationship with careful consideration of the power value may provide the most flexible way to account for the varied net effect of compound sliding sub-processes.
FoodMem: Near Real-time and Precis…
Updated:
February 10, 2025
Food segmentation, including in videos, is vital for addressing real-world health, agriculture, and food biotechnology issues. Current limitations lead to inaccurate nutritional analysis, inefficient crop management, and suboptimal food processing, impacting food security and public health. Improving segmentation techniques can enhance dietary assessments, agricultural productivity, and the food production process. This study introduces a robust framework for high-quality, near-real-time segmentation and tracking of food items in videos, using minimal hardware resources. We present FoodMem, a novel framework designed to segment food items from video sequences of 360-degree unbounded scenes. FoodMem can consistently generate masks of food portions in a video sequence, overcoming the limitations of existing semantic segmentation models, such as flickering and prohibitive inference speeds in video processing contexts. To address these issues, FoodMem leverages a two-phase solution: a transformer segmentation phase to create initial segmentation masks and a memory-based tracking phase to monitor food masks in complex scenes. Our framework outperforms current state-of-the-art food segmentation models, yielding superior performance across various conditions, such as camera angles, lighting, reflections, scene complexity, and food diversity. This results in reduced segmentation noise, elimination of artifacts, and completion of missing segments. We also introduce a new annotated food dataset encompassing challenging scenarios absent in previous benchmarks. Extensive experiments conducted on the MetaFood3D, Nutrition5k, and Vegetables & Fruits datasets demonstrate that FoodMem enhances the state of the art by 2.5% mean average precision in food video segmentation and is 58x faster on average.
Leveraging Hybrid Intelligence Tow…
Updated:
July 15, 2024
Hybrid intelligence aims to enhance decision-making, problem-solving, and overall system performance by combining the strengths of both human cognitive abilities and artificial intelligence. With the rise of Large Language Models (LLMs), which increasingly participate as smart agents to accelerate machine learning development, Hybrid Intelligence is becoming an increasingly important topic for effective interaction between humans and machines. This paper presents an approach to leverage Hybrid Intelligence towards sustainable and energy-aware machine learning. When developing machine learning models, final model performance commonly rules the optimization process while the efficiency of the process itself is often neglected. Moreover, energy efficiency has recently become equally crucial due to the significant environmental impact of complex and large-scale computational processes. The contribution of this work covers the interactive inclusion of secondary knowledge sources through Human-in-the-loop (HITL) and LLM agents to expose and resolve inefficiencies in the machine learning development process.
Automated detection of gibbon call…
Updated:
July 26, 2024
Automated detection of acoustic signals is crucial for effective monitoring of vocal animals and their habitats across ecologically relevant spatial and temporal scales. Recent advances in deep learning have made these approaches more accessible. However, few deep learning approaches can be implemented natively in the R programming environment; approaches that run natively in R may be more accessible for ecologists. The "torch for R" ecosystem has made the use of transfer learning with convolutional neural networks accessible for R users. Here, we evaluate a workflow that uses transfer learning for the automated detection of acoustic signals from passive acoustic monitoring (PAM) data. Our specific goals are to: 1) present a method for automated detection of gibbon calls from PAM data using the "torch for R" ecosystem; 2) compare the results of transfer learning for six pretrained CNN architectures; and 3) investigate how well the different architectures perform on datasets of the female calls from two different gibbon species: the northern grey gibbon (Hylobates funereus) and the southern yellow-cheeked crested gibbon (Nomascus gabriellae). We found that the highest performing architecture depended on the test dataset. We successfully deployed the top performing model for each gibbon species to investigate spatial variation in gibbon calling behavior across two grids of autonomous recording units in Danum Valley Conservation Area, Malaysia and Keo Seima Wildlife Sanctuary, Cambodia. The fields of deep learning and automated detection are rapidly evolving, and we provide the methods and datasets as benchmarks for future work.
Implication of modelling choices o…
Updated:
July 5, 2024
We focus on connectivity methods used to understand and predict how landscapes and habitats facilitate or impede the movement and dispersal of species. Our objective is to compare the implications of methodological choices at three stages of the modelling framework: landscape characterisation, connectivity estimation, and connectivity assessment. What are the convergences and divergences of different modelling approaches? What are the implications of their combined results for landscape planning? We implemented two landscape characterisation approaches: expert opinion and species distribution model (SDM); four connectivity estimation models: Euclidean distance, least-cost paths (LCP), circuit theory, and stochastic movement simulation (SMS); and two connectivity indices: flux and area-weighted flux (dPCflux). We compared outcomes such as movement maps and habitat prioritisation for a rural landscape in southwestern France. Landscape characterisation is the main factor influencing connectivity assessment. The movement maps reflect the models' assumptions: LCP produced narrow beams reflecting the optimal pathways, whereas circuit theory and SMS produced wider estimates reflecting movement stochasticity, with SMS integrating behavioural drivers. The indices highlighted different aspects: dPCflux the surface of suitable habitats, and flux their proximity. We recommend focusing on landscape characterisation before engaging further in the modelling framework. We emphasise the importance of stochasticity and behavioural drivers in connectivity, which can be reflected using circuit theory, SMS or other stochastic individual-based models. We stress the importance of using multiple indices to capture the multi-factorial nature of connectivity.
Sensor-Aware Classifiers for Energ…
Updated:
July 11, 2024
Time-series data processing is an important component of many real-world applications, such as health monitoring, environmental monitoring, and digital agriculture. These applications collect distinct windows of sensor data (e.g., a few seconds) and process them to assess the environment. Machine learning (ML) models are being employed in time-series applications due to their generalization abilities for classification. State-of-the-art time-series applications wait for the entire sensor data window to become available before processing the data using ML algorithms, resulting in high sensor energy consumption. However, not all situations require processing the full sensor window to make an accurate inference. For instance, in activity recognition, sitting and standing activities can be inferred from partial windows. Using this insight, we propose to employ early-exit classifiers with partial sensor windows to minimize energy consumption while maintaining accuracy. Specifically, we first utilize multiple early exits with successively increasing amounts of data as they become available in a window. If an early exit provides an inference with high confidence, we return the label and enter a low-power mode for the sensors. The proposed approach has the potential to enable significant energy savings in time-series applications. We utilize neural networks and random forest classifiers to evaluate our approach. Our evaluations with six datasets show that the proposed approach enables up to 50-60% energy savings on average without any impact on accuracy. The energy savings can enable time-series applications in remote locations with limited energy availability.
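As a concrete illustration of the early-exit idea, the sketch below runs successively longer prefixes of a sensor window through per-prefix classifiers and stops as soon as one is confident; the prefix fractions, threshold, and classifier interface are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def early_exit_infer(window, exit_classifiers, fractions=(0.25, 0.5, 1.0),
                     confidence=0.9):
    """Classify a sensor window using early exits on partial data.
    `exit_classifiers[i]` maps a window prefix to class probabilities
    (assumed trained separately for each prefix length)."""
    n = len(window)
    for frac, clf in zip(fractions, exit_classifiers):
        probs = clf(window[: int(n * frac)])
        if probs.max() >= confidence:           # confident: stop sensing early,
            return int(np.argmax(probs)), frac  # saving sensor energy
    return int(np.argmax(probs)), 1.0           # fall back to the full window
```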
Learning Program Behavioral Models…
Updated:
March 17, 2025
We introduce Modelizer - a novel framework that, given a black-box program, learns a model from its input/output behavior using neural machine translation algorithms. The resulting model mocks the original program: given an input, the model predicts the output that would have been produced by the program. However, the model is also reversible - that is, the model can predict the input that would have produced a given output. Finally, the model is differentiable and can be efficiently restricted to predict only a certain aspect of the program behavior. Modelizer uses grammars to synthesize inputs and unsupervised tokenizers to decompose the resulting outputs, allowing it to learn sequence-to-sequence associations between token streams. Other than input grammars, Modelizer only requires the ability to execute the program. The resulting models are small, requiring fewer than 6.3 million parameters for languages such as Markdown or HTML; and they are accurate, achieving up to 95.4% accuracy and a BLEU score of 0.98 with standard error 0.04 in mocking real-world applications. As it learns from and predicts executions rather than code, Modelizer departs from the LLM-centric research trend, opening new opportunities for program-specific models that are fully tuned towards individual programs. Indeed, we foresee several applications of these models, especially as the output of the program can be any aspect of program behavior. Beyond mocking and predicting program behavior, the models can also synthesize inputs that are likely to produce a particular behavior, such as failures or coverage, thus assisting in program understanding and maintenance.
Addressing single object tracking …
Updated:
July 7, 2024
Object tracking in satellite videos remains a complex endeavor in remote sensing due to the intricate and dynamic nature of satellite imagery. Existing state-of-the-art trackers in computer vision integrate sophisticated architectures, attention mechanisms, and multi-modal fusion to enhance tracking accuracy across diverse environments. However, the challenges posed by satellite imagery, such as background variations, atmospheric disturbances, and low-resolution object delineation, significantly impede the precision and reliability of traditional Single Object Tracking (SOT) techniques. Our study delves into these challenges and proposes prompt engineering methodologies, leveraging the Segment Anything Model (SAM) and TAPIR (Tracking Any Point with per-frame Initialization and temporal Refinement), to create a training-free point-based tracking method for small-scale objects in satellite videos. Experiments on the VISO dataset validate our strategy, marking a significant advancement in robust tracking solutions tailored for satellite imagery in remote sensing applications.
MineNetCD: A Benchmark for Global …
Updated:
July 4, 2024
Monitoring changes triggered by mining activities is crucial for industrial control, environmental management, and regulatory compliance, yet it poses significant challenges due to the vast and often remote locations of mining sites. Remote sensing technologies have increasingly become indispensable for detecting and analyzing these changes over time. We thus introduce MineNetCD, a comprehensive benchmark designed for global mining change detection using remote sensing imagery. The benchmark comprises three key contributions. First, we establish a global mining change detection dataset featuring more than 70k paired patches of bi-temporal high-resolution remote sensing images and pixel-level annotations from 100 mining sites worldwide. Second, we develop a novel baseline model based on a change-aware Fast Fourier Transform (ChangeFFT) module, which enhances various backbones by leveraging essential spectrum components within features in the frequency domain and capturing the channel-wise correlation of bi-temporal feature differences to learn change-aware representations. Third, we construct a unified change detection (UCD) framework that integrates over 13 advanced change detection models. This framework is designed for streamlined and efficient processing, utilizing the cloud platform hosted by HuggingFace. Extensive experiments demonstrate the superiority of the proposed baseline model compared with 12 state-of-the-art change detection approaches. Empirical studies on modularized backbones comprehensively confirm the efficacy of different representation learners on change detection. This contribution represents a significant advancement in the field of remote sensing and change detection, providing a robust resource for future research and applications in global mining monitoring. The dataset and code are available via the link.
ISWSST: Index-space-wave State Sup…
Updated:
July 3, 2024
Currently, the semantic segmentation of multispectral remotely sensed imagery (MSRSI) faces the following problems: 1) usually, only a single domain feature (i.e., space domain or frequency domain) is considered; 2) the downsampling operation in the encoder generally leads to accuracy loss in edge extraction; 3) the multichannel features of MSRSI are not fully considered; and 4) prior knowledge of remote sensing is not fully utilized. To solve these issues, we propose an index-space-wave state superposition Transformer (ISWSST), the first of its kind for MSRSI semantic segmentation, inspired by quantum mechanics. Its advantages are as follows: 1) index, space and wave states are superposed or fused to simulate quantum superposition by adaptive voting decision (i.e., the ensemble learning idea), yielding a stronger classifier and improving segmentation accuracy; 2) a lossless wavelet pyramid encoder-decoder module is designed to losslessly reconstruct images and simulate quantum entanglement based on the wavelet transform and inverse wavelet transform, avoiding edge extraction loss; 3) combining multispectral features (i.e., remote sensing indices and a channel attention mechanism) is proposed to accurately extract ground objects from original resolution images; and 4) quantum mechanics is introduced to interpret the underlying superiority of ISWSST. Experiments show that ISWSST is superior to state-of-the-art architectures for the MSRSI segmentation task, improving segmentation and edge extraction accuracy effectively. Code will be made publicly available after our paper is accepted.
Investigating the Segment Anything…
Updated:
July 1, 2024
Accurate mapping of agricultural field boundaries is crucial for enhancing outcomes like precision agriculture, crop monitoring, and yield estimation. However, extracting these boundaries from satellite images is challenging, especially for smallholder farms and data-scarce environments. This study explores the Segment Anything Model (SAM) to delineate agricultural field boundaries in Bihar, India, using 2-meter resolution SkySat imagery without additional training. We evaluate SAM's performance across three model checkpoints, various input sizes, multi-date satellite images, and edge-enhanced imagery. Our results show that SAM correctly identifies about 58% of field boundaries, comparable to other approaches requiring extensive training data. Using different input image sizes improves accuracy, with the most significant improvement observed when using multi-date satellite images. This work establishes proof of concept for using SAM and maximizing its potential in agricultural field boundary mapping. Our work highlights SAM's potential for delineating agricultural field boundaries in training-data-scarce settings, enabling a wide range of agriculture-related analyses.
Scarecrow monitoring system: employ…
Updated:
July 1, 2024
Agriculture faces a growing challenge with wildlife wreaking havoc on crops, threatening sustainability. This project employs advanced object detection: the system utilizes the MobileNet SSD model for real-time animal classification. The methodology begins with the creation of a dataset, where each animal is represented by annotated images. The SSD MobileNet architecture facilitates the use of a model for image classification and object detection. The model undergoes fine-tuning and optimization during training, enhancing accuracy for precise animal classification. Real-time detection is achieved through a webcam and the OpenCV library, enabling prompt identification and categorization of approaching animals. By seamlessly integrating intelligent scarecrow technology with object detection, this system offers a robust solution for field protection, minimizing crop damage and promoting precision farming. It represents a valuable contribution to agricultural sustainability, addressing the challenge of wildlife interference with crops. The implementation of the Intelligent Scarecrow Monitoring System stands as a progressive tool for proactive field management and protection, empowering farmers with an advanced solution for precision agriculture. Keywords: Machine learning, Deep Learning, Computer Vision, MobileNet SSD
YOLOv12 to Its Genesis: A Decadal …
Updated:
February 21, 2025
This review systematically examines the progression of the You Only Look Once (YOLO) object detection algorithms from YOLOv1 to the recently unveiled YOLOv12. Employing a reverse chronological analysis, this study examines the advancements introduced by YOLO algorithms, beginning with YOLOv12 and progressing back through YOLO11 (or YOLOv11), YOLOv10, YOLOv9, YOLOv8, and earlier versions to explore each version's contributions to enhancing speed, detection accuracy, and computational efficiency in real-time object detection. Additionally, this study reviews the alternative versions derived from YOLO architectural advancements: YOLO-NAS, YOLO-X, YOLO-R, DAMO-YOLO, and Gold-YOLO. By detailing the incremental technological advancements across YOLO versions, this review chronicles the evolution of YOLO and discusses the challenges and limitations of each of the earlier versions. The evolution signifies a path towards integrating YOLO with multimodal, context-aware, and Artificial General Intelligence (AGI) systems for the next YOLO decade, promising significant implications for future developments in AI-driven applications. (Key terms: YOLOv12, YOLOv12 architecture, YOLOv11, YOLO11, YOLO Review, YOLOv14, YOLOv15, YOLO architecture, YOLOv12 architecture)
Planted: a dataset for planted for…
Updated:
May 24, 2024
Protecting and restoring forest ecosystems is critical for biodiversity conservation and carbon sequestration. Forest monitoring on a global scale is essential for prioritizing and assessing conservation efforts. Satellite-based remote sensing is the only viable solution for providing global coverage, but to date, large-scale forest monitoring is limited to single modalities and single time points. In this paper, we present a dataset consisting of data from five public satellites for recognizing forest plantations and planted tree species across the globe. Each satellite modality consists of a multi-year time series. The dataset, named Planted, includes over 2M examples of 64 tree label classes (46 genera and 40 species), distributed among 41 countries. This dataset is released to foster research in forest monitoring using multimodal, multi-scale, multi-temporal data sources. Additionally, we present initial baseline results and evaluate modality fusion and data augmentation approaches for this dataset.
Continuous Urban Change Detection …
Updated:
January 17, 2025
Urbanization advances at unprecedented rates, resulting in negative effects on the environment and human well-being. Remote sensing has the potential to mitigate these effects by supporting sustainable development strategies with accurate information on urban growth. Deep learning-based methods have achieved promising urban change detection results from optical satellite image pairs using convolutional neural networks (ConvNets), transformers, and a multi-task learning setup. However, transformers have not been leveraged for urban change detection with multi-temporal data, i.e., >2 images, and multi-task learning methods lack integration approaches that combine change and segmentation outputs. To fill this research gap, we propose a continuous urban change detection method that identifies changes in each consecutive image pair of a satellite image time series (SITS). Specifically, we propose a temporal feature refinement (TFR) module that utilizes self-attention to improve ConvNet-based multi-temporal building representations. Furthermore, we propose a multi-task integration (MTI) module that utilizes Markov networks to find an optimal building map time series based on segmentation and dense change outputs. The proposed method effectively identifies urban changes based on high-resolution SITS acquired by the PlanetScope constellation (F1 score 0.551) and Gaofen-2 (F1 score 0.440). Moreover, our experiments on two challenging datasets demonstrate the effectiveness of the proposed method compared to bi-temporal and multi-temporal urban change detection and segmentation methods.
Multi-Modal Vision Transformers fo…
Updated:
June 24, 2024
Using images acquired by different satellite sensors has been shown to improve classification performance in the framework of crop mapping from satellite image time series (SITS). Existing state-of-the-art architectures use self-attention mechanisms to process the temporal dimension and convolutions for the spatial dimension of SITS. Motivated by the success of purely attention-based architectures in crop mapping from single-modal SITS, we introduce several multi-modal multi-temporal transformer-based architectures. Specifically, we investigate the effectiveness of Early Fusion, Cross Attention Fusion and Synchronized Class Token Fusion within the Temporo-Spatial Vision Transformer (TSViT). Experimental results demonstrate significant improvements over state-of-the-art architectures with both convolutional and self-attention components.
Single-Temporal Supervised Learnin…
Updated:
June 22, 2024
The bitemporal supervised learning paradigm dominates remote sensing change detection, requiring numerous labeled bitemporal image pairs, especially for high spatial resolution (HSR) remote sensing imagery. However, it is very expensive and labor-intensive to label change regions in large-scale bitemporal HSR remote sensing image pairs. In this paper, we propose single-temporal supervised learning (STAR) for universal remote sensing change detection from a new perspective: exploiting changes between unpaired images as supervisory signals. STAR enables us to train a high-accuracy change detector using only unpaired labeled images, and it generalizes to real-world bitemporal image pairs. To demonstrate the flexibility and scalability of STAR, we design a simple yet unified change detector, termed ChangeStar2, capable of addressing binary change detection, object change detection, and semantic change detection in one architecture. ChangeStar2 achieves state-of-the-art performance on eight public remote sensing change detection datasets, covering the two supervised settings above, multiple change types, and multiple scenarios. The code is available at https://github.com/Z-Zheng/pytorch-change-models.
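The core trick of single-temporal supervision can be sketched in a few lines: shuffle a batch of unpaired labeled images into pseudo bitemporal pairs and derive the change label as the XOR of the two semantic masks (changed wherever exactly one image contains the object). This is a minimal sketch of the supervisory signal, assuming binary object masks; the full ChangeStar2 training pipeline is more involved.

```python
import torch

def pseudo_bitemporal_pairs(images, masks):
    """Build pseudo bitemporal pairs from unpaired single-temporal data.
    images, masks have shape (B, ...); the change label is the XOR of
    each image's mask with its shuffled partner's mask."""
    perm = torch.randperm(images.size(0))
    change_label = (masks.bool() ^ masks[perm].bool()).float()
    return images, images[perm], change_label  # (t1, pseudo-t2, change)
```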
MAC: A Benchmark for Multiple Attr…
Updated:
March 17, 2025
Compositional Zero-Shot Learning (CZSL) aims to learn semantic primitives (attributes and objects) from seen compositions and recognize unseen attribute-object compositions. Existing CZSL datasets focus on single attributes, neglecting the fact that objects naturally exhibit multiple interrelated attributes. Their narrow attribute scope and single-attribute labeling introduce annotation biases, misleading the learning of attributes and causing inaccurate evaluation. To address these issues, we introduce the Multi-Attribute Composition (MAC) dataset, encompassing 22,838 images and 17,627 compositions with comprehensive and representative attribute annotations. MAC exhibits complex relationships between attributes and objects, with each attribute type linked to an average of 82.2 object types, and each object type associated with 31.4 attribute types. Based on MAC, we propose multi-attribute compositional zero-shot learning, which requires deeper semantic understanding and advanced attribute associations, establishing a more realistic and challenging benchmark for CZSL. We also propose the Multi-attribute Visual-Primitive Integrator (MVP-Integrator), a robust baseline for multi-attribute CZSL, which disentangles semantic primitives and performs effective visual-primitive association. Experimental results demonstrate that MVP-Integrator significantly outperforms existing CZSL methods on MAC with improved inference efficiency.
Standardizing Structural Causal Mo…
Updated:
March 17, 2025
Synthetic datasets generated by structural causal models (SCMs) are commonly used for benchmarking causal structure learning algorithms. However, the variances and pairwise correlations in SCM data tend to increase along the causal ordering. Several popular algorithms exploit these artifacts, possibly leading to conclusions that do not generalize to real-world settings. Existing metrics like $\operatorname{Var}$-sortability and $\operatorname{R^2}$-sortability quantify these patterns, but they do not provide tools to remedy them. To address this, we propose internally-standardized structural causal models (iSCMs), a modification of SCMs that introduces a standardization operation at each variable during the generative process. By construction, iSCMs are not $\operatorname{Var}$-sortable. We also find empirical evidence that they are mostly not $\operatorname{R^2}$-sortable for commonly-used graph families. Moreover, contrary to the post-hoc standardization of data generated by standard SCMs, we prove that linear iSCMs are less identifiable from prior knowledge on the weights and do not collapse to deterministic relationships in large systems, which may make iSCMs a useful model in causal inference beyond the benchmarking problem studied here. Our code is publicly available at: https://github.com/werkaaa/iscm.
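To make the construction concrete, here is a minimal sketch of sampling from a linear iSCM, assuming each variable is standardized to zero mean and unit variance immediately after it is generated so that variance cannot accumulate along the causal order; the interface (a weight dictionary plus a topological order) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def sample_linear_iscm(weights, order, n_samples, seed=0):
    """Sample a linear internally-standardized SCM.
    `weights[j]` is a dict {parent i: edge weight w_ij}; `order` is a
    topological ordering of the variables (parents appear first)."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_samples, len(order)))
    for j in order:
        x = rng.standard_normal(n_samples)       # exogenous noise
        for i, w in weights.get(j, {}).items():
            x = x + w * X[:, i]                  # parent contributions
        X[:, j] = (x - x.mean()) / x.std()       # internal standardization
    return X
```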
PyramidMamba: Rethinking Pyramid F…
Updated:
June 16, 2024
Semantic segmentation, as a basic tool for intelligent interpretation of remote sensing images, plays a vital role in many Earth Observation (EO) applications. Nowadays, accurate semantic segmentation of remote sensing images remains a challenge due to the complex spatial-temporal scenes and multi-scale geo-objects. Driven by the wave of deep learning (DL), CNN- and Transformer-based semantic segmentation methods have been explored widely, and these two architectures both revealed the importance of multi-scale feature representation for strengthening semantic information of geo-objects. However, the actual multi-scale feature fusion often comes with the semantic redundancy issue due to homogeneous semantic contents in pyramid features. To handle this issue, we propose a novel Mamba-based segmentation network, namely PyramidMamba. Specifically, we design a plug-and-play decoder, which develops a dense spatial pyramid pooling (DSPP) to encode rich multi-scale semantic features and a pyramid fusion Mamba (PFM) to reduce semantic redundancy in multi-scale feature fusion. Comprehensive ablation experiments illustrate the effectiveness and superiority of the proposed method in enhancing multi-scale feature representation as well as the great potential for real-time semantic segmentation. Moreover, our PyramidMamba yields state-of-the-art performance on three publicly available datasets, i.e. the OpenEarthMap (70.8% mIoU), ISPRS Vaihingen (84.8% mIoU) and Potsdam (88.0% mIoU) datasets. The code will be available at https://github.com/WangLibo1995/GeoSeg.
YOLOv1 to YOLOv10: A comprehensive…
Updated:
June 14, 2024
This survey investigates the transformative potential of various YOLO variants, from YOLOv1 to the state-of-the-art YOLOv10, in the context of agricultural advancements. The primary objective is to elucidate how these cutting-edge object detection models can re-energise and optimize diverse aspects of agriculture, ranging from crop monitoring to livestock management. It aims to achieve key objectives, including the identification of contemporary challenges in agriculture, a detailed assessment of YOLO's incremental advancements, and an exploration of its specific applications in agriculture. This is one of the first surveys to include the latest YOLOv10, offering a fresh perspective on its implications for precision farming and sustainable agricultural practices in the era of Artificial Intelligence and automation. Further, the survey undertakes a critical analysis of YOLO's performance, synthesizes existing research, and projects future trends. By scrutinizing the unique capabilities packed in YOLO variants and their real-world applications, this survey provides valuable insights into the evolving relationship between YOLO variants and agriculture. The findings contribute towards a nuanced understanding of the potential for precision farming and sustainable agricultural practices, marking a significant step forward in the integration of advanced object detection technologies within the agricultural sector.
Doing Battle with the Sun: Lessons…
Updated:
June 12, 2024
Capella Space, which designs, builds, and operates a constellation of Synthetic Aperture Radar (SAR) Earth-imaging small satellites, faced new challenges with the onset of Solar Cycle 25. By mid-2022, it had become clear that solar activity levels were far exceeding the 2019 prediction published by the National Atmospheric and Oceanic Administration's (NOAA) Space Weather Prediction Center (SWPC). This resulted in the atmospheric density of low Earth orbit (LEO) increasing 2-3x higher than predicted. While this raises difficulties for all satellite operators, Capella's satellites are especially sensitive to aerodynamic drag due to the high surface area of their large deployable radar reflectors. This unpredicted increase in drag threatened premature deorbit and reentry of some of Capella's fleet of spacecraft. This paper explores Capella's strategic response to this problem at all layers of the satellite lifecycle, examining the engineering challenges and insights gained from adapting an operational constellation to rapidly changing space weather conditions. A key development was the implementation of "low drag mode", which increased on-orbit satellite lifetime by 24% and decreased accumulated momentum from aerodynamic torques by 20-30%. The paper shares operational tradeoffs and lessons from the development, deployment, and validation of this flight mode, offering valuable insights for satellite operators facing similar challenges in LEO's current elevated drag environment.
A deep cut into Split Federated Se…
Updated:
March 17, 2025
Collaborative self-supervised learning has recently become feasible in highly distributed environments by dividing the network layers between client devices and a central server. However, state-of-the-art methods, such as MocoSFL, are optimized for network division at the initial layers, which decreases the protection of the client data and increases communication overhead. In this paper, we demonstrate that splitting depth is crucial for maintaining privacy and communication efficiency in distributed training. We also show that MocoSFL suffers from catastrophic quality deterioration when the communication overhead is kept minimal. As a remedy, we introduce Momentum-Aligned contrastive Split Federated Learning (MonAcoSFL), which keeps the online and momentum client models aligned during the training procedure. Consequently, we achieve state-of-the-art accuracy while significantly reducing the communication overhead, making MonAcoSFL more practical in real-world scenarios.
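The alignment in question builds on the standard MoCo-style momentum update, where the momentum (key) model tracks an exponential moving average of the online (query) model; keeping the two consistent when client weights are synchronized is the issue MonAcoSFL addresses. The sketch below shows only the generic EMA step, as an assumption about the underlying mechanism rather than the paper's full procedure.

```python
import torch

@torch.no_grad()
def momentum_update(online_model, momentum_model, m=0.99):
    """MoCo-style EMA: momentum weights <- m * momentum + (1 - m) * online."""
    for p_online, p_momentum in zip(online_model.parameters(),
                                    momentum_model.parameters()):
        p_momentum.mul_(m).add_(p_online, alpha=1 - m)
```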
Deep Learning for Slum Mapping in …
Updated:
June 12, 2024
The major Sustainable Development Goals (SDG) 2030, set by the United Nations Development Program (UNDP), include sustainable cities and communities, no poverty, and reduced inequalities. However, millions of people live in slums or informal settlements with poor living conditions in many major cities around the world, especially in less developed countries. To emancipate these settlements and their inhabitants through government intervention, accurate data about slum location and extent is required. While ground survey data is the most reliable, such surveys are costly and time-consuming. An alternative is remotely sensed data obtained from very high-resolution (VHR) imagery. With the advancement of new technology, remote sensing based mapping of slums has emerged as a prominent research area. The parallel rise of Artificial Intelligence, especially Deep Learning, has added a new dimension to this field, as it allows automated analysis of satellite imagery to identify complex spatial patterns associated with slums. This article offers a detailed review and meta-analysis of research on slum mapping using remote sensing imagery from 2014 to 2024, with a special focus on deep learning approaches. Our analysis reveals a trend towards increasingly complex neural network architectures, with advancements in data preprocessing and model training techniques significantly enhancing slum identification accuracy. We attempt to identify key methodologies that are effective across diverse geographic contexts. While acknowledging the transformative impact of Convolutional Neural Networks (CNNs) in slum detection, our review underscores the absence of a universally optimal model, suggesting the need for context-specific adaptations. We also identify prevailing challenges in this field, such as data limitations and a lack of model explainability, and suggest potential strategies for overcoming these.
A Finite-Sample Analysis of an Act…
Updated:
March 12, 2025
Motivated by applications in risk-sensitive reinforcement learning, we study mean-variance optimization in a discounted reward Markov Decision Process (MDP). Specifically, we analyze a Temporal Difference (TD) learning algorithm with linear function approximation (LFA) for policy evaluation. We derive finite-sample bounds that hold (i) in the mean-squared sense and (ii) with high probability under tail iterate averaging, both with and without regularization. Our bounds exhibit an exponentially decaying dependence on the initial error and a convergence rate of $O(1/t)$ after $t$ iterations. Moreover, for the regularized TD variant, our bound holds for a universal step size. Next, we integrate a Simultaneous Perturbation Stochastic Approximation (SPSA)-based actor update with an LFA critic and establish an $O(n^{-1/4})$ convergence guarantee, where $n$ denotes the iterations of the SPSA-based actor-critic algorithm. These results establish finite-sample theoretical guarantees for risk-sensitive actor-critic methods in reinforcement learning, with a focus on variance as a risk measure.
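For reference, the analyzed policy-evaluation scheme builds on TD(0) with linear function approximation plus tail iterate averaging; a minimal sketch of that base algorithm is below. The fixed step size and tail fraction are illustrative assumptions, and the paper's variant additionally handles the variance component of the mean-variance objective and regularization.

```python
import numpy as np

def td0_lfa_tail_avg(features, rewards, next_features, gamma=0.9,
                     step=0.1, tail_frac=0.5):
    """TD(0) with linear function approximation; returns the average of
    the last `tail_frac` of the iterates (tail iterate averaging)."""
    theta = np.zeros(features.shape[1])
    iterates = []
    for phi, r, phi_next in zip(features, rewards, next_features):
        td_error = r + gamma * (phi_next @ theta) - phi @ theta
        theta = theta + step * td_error * phi    # TD(0) update
        iterates.append(theta.copy())
    tail = iterates[int(len(iterates) * (1 - tail_frac)):]
    return np.mean(tail, axis=0)
```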
Symmetric Dot-Product Attention fo…
Updated:
June 19, 2024
Initially introduced as a machine translation model, the Transformer architecture has now become the foundation for modern deep learning architectures, with applications in a wide range of fields, from computer vision to natural language processing. Nowadays, to tackle increasingly complex tasks, Transformer-based models are stretched to enormous sizes, requiring increasingly larger training datasets and an unsustainable amount of compute resources. The ubiquitous nature of the Transformer and its core component, the attention mechanism, are thus prime targets for efficiency research. In this work, we propose an alternative compatibility function for the self-attention mechanism introduced by the Transformer architecture. This compatibility function exploits an overlap in the learned representations of traditional scaled dot-product attention, leading to a symmetric dot-product attention with pairwise coefficients. When applied to the pre-training of BERT-like models, this new symmetric attention mechanism reaches a score of 79.36 on the GLUE benchmark versus 78.74 for the traditional implementation, reduces the number of trainable parameters by 6%, and halves the number of training steps required before convergence.
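The simplest way to see the symmetry idea: if queries and keys share a single projection matrix, the compatibility matrix becomes symmetric and the separate key projection disappears, which is where parameter savings can come from. The sketch below shows that shared-projection variant only; the pairwise-coefficient refinement mentioned in the abstract is omitted, and all names are illustrative.

```python
import torch

def symmetric_attention(x, w_shared, w_value):
    """Scaled dot-product attention with one shared projection for
    queries and keys, giving a symmetric compatibility matrix."""
    q = x @ w_shared                     # queries and keys share weights
    k = x @ w_shared
    v = x @ w_value
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k**0.5
    return torch.softmax(scores, dim=-1) @ v
```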
An Open and Large-Scale Dataset fo…
Updated:
June 17, 2024
Precise crop yield predictions are of national importance for ensuring food security and sustainable agricultural practices. While AI-for-science approaches have exhibited promising achievements in solving many scientific problems, such as drug discovery and precipitation nowcasting, the development of deep learning models for predicting crop yields is constantly hindered by the lack of an open, large-scale, deep-learning-ready dataset with multiple modalities that accommodates sufficient information. To remedy this, we introduce the CropNet dataset, the first terabyte-sized, publicly available, multi-modal dataset specifically targeting climate change-aware crop yield predictions for the contiguous United States (U.S.) at the county level. Our CropNet dataset is composed of three modalities of data, i.e., Sentinel-2 Imagery, the WRF-HRRR Computed Dataset, and the USDA Crop Dataset, for over 2200 U.S. counties spanning 6 years (2017-2022). It is expected to facilitate researchers in developing versatile deep learning models for timely and precise crop yield predictions at the county level, by accounting for the effects of both short-term growing season weather variations and long-term climate change on crop yields. In addition, we develop the CropNet package, offering three types of APIs that let researchers download CropNet data on the fly for the time and region of interest and flexibly build their deep learning models for accurate crop yield predictions. Extensive experiments have been conducted on our CropNet dataset with various types of deep learning solutions, with the results validating the general applicability and efficacy of the CropNet dataset in climate change-aware crop yield predictions.
SRC-Net: Bi-Temporal Spatial Relat…
Updated:
June 27, 2024
Change detection (CD) in remote sensing imagery is a crucial task with applications in environmental monitoring, urban development, and disaster management. CD involves utilizing bi-temporal images to identify changes over time. The bi-temporal spatial relationships between features at the same location at different times play a key role in this process. However, existing change detection networks often do not fully leverage these spatial relationships during bi-temporal feature extraction and fusion. In this work, we propose SRC-Net: a bi-temporal spatial relationship concerned network for CD. The proposed SRC-Net includes a Perception and Interaction Module that incorporates spatial relationships and establishes a cross-branch perception mechanism to enhance the precision and robustness of feature extraction. Additionally, a Patch-Mode Joint Feature Fusion Module is introduced to address information loss in current methods; it considers different change modes and attends to spatial relationships, resulting in more expressive fusion features. Furthermore, we construct a novel network using these two relationship-concerned modules and conduct experiments on the LEVIR-CD and WHU Building datasets. The experimental results demonstrate that our network outperforms state-of-the-art (SOTA) methods while maintaining a modest parameter count. We believe our approach sets a new paradigm for change detection and will inspire further advancements in the field. The code and models are publicly available at https://github.com/Chnja/SRCNet.
PreMevE-MEO: Predicting Ultra-rela…
Updated:
June 3, 2024
Ultra-relativistic electrons with energies greater than or equal to two megaelectron-volts (MeV) pose a major radiation threat to spaceborne electronics, so specifying those highly energetic electrons is of significant value to space weather communities. Here we report the latest progress in developing our predictive model for MeV electrons in the outer radiation belt. The new version, primarily driven by electron measurements made along medium-Earth-orbits (MEO), is called the PREdictive MEV Electron (PreMevE)-MEO model and nowcasts ultra-relativistic electron flux distributions across the whole outer belt. Model inputs include above-2 MeV electron fluxes observed in MEO by a fleet of GPS satellites as well as electrons measured by one Los Alamos satellite in geosynchronous orbit. We developed an innovative Sparse Multi-Inputs Latent Ensemble NETwork (SmileNet), which combines convolutional neural networks with transformers, and we used long-term in-situ electron data from NASA's Van Allen Probes mission to train, validate, optimize, and test the model. It is shown that PreMevE-MEO can provide hourly nowcasts with high model performance efficiency and high correlation with observations. This prototype PreMevE-MEO model demonstrates the feasibility of making high-fidelity predictions driven by observations from longstanding space infrastructure in MEO, and thus has great potential to grow into an invaluable operational space weather warning tool.
Imaging the Northern Los Angeles B…
Updated:
May 31, 2024
We show reflectivity cross-sections for the San Gabriel, Chino, and San Bernardino basins north of Los Angeles, California, determined from autocorrelations of ambient noise and teleseismic earthquake waves. These basins are thought to channel the seismic energy from earthquakes on the San Andreas Fault to Los Angeles, and a more accurate model of their depth is important for hazard mitigation. We use the causal side of the autocorrelation function to determine the zero-offset reflection response, removing the common mode from the autocorrelation to minimize the smoothing effect of the source time function. We apply this to 10 temporary nodal lines consisting of a total of 758 geophones with an intraline spacing of 250-300 m. We also show that the autocorrelation function from teleseismic events can provide illumination of the subsurface that is consistent with ambient noise. Both autocorrelation results compare favorably to receiver functions.
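The processing chain described (causal-side autocorrelation with common-mode removal) can be sketched as follows, assuming `traces` holds one time series per geophone; the normalization and the exact common-mode estimate are illustrative choices, not the authors' full workflow.

```python
import numpy as np

def zero_offset_sections(traces):
    """Autocorrelate each trace, keep the causal lags, and subtract the
    common mode (mean autocorrelation across stations) to suppress the
    shared source-time-function signature."""
    n = traces.shape[1]
    acs = np.array([np.correlate(tr, tr, mode="full")[n - 1:]
                    for tr in traces])   # causal side of each autocorrelation
    acs = acs / acs[:, :1]               # normalize by zero-lag energy
    return acs - acs.mean(axis=0)        # remove common mode
```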
Enhanced Geological Prediction for…
Updated:
February 22, 2025
In tunnel excavation, advanced geological prediction technology has become indispensable for safe, economical, and efficient construction. Although traditional methods such as drilling and geological analysis are effective, they typically involve destructive processes, carry high risks, and incur significant costs. In contrast, non-destructive geophysical exploration offers a more convenient and economical alternative. However, the accuracy and precision of these non-destructive methods can be severely compromised by complex geological structures, restricted observation coverage, and environmental noise. To address these challenges, we propose a novel approach using frequency-domain full waveform inversion, based on a penalty method and Sobolev space regularization, to enhance the performance of non-destructive predictions. The proposed method constructs a soft-constrained optimization problem by restructuring the misfit function into a combination of a data misfit term and a wave-equation-driven term to enhance convexity. Additionally, it extends the search space to both the wavefield and the model parameters to mitigate the strong nonlinearity of the optimization, facilitating high-resolution inversion. Furthermore, a Sobolev space regularization algorithm is introduced to flexibly adjust the regularization path, addressing issues related to noise and artefacts and enhancing the robustness of the algorithm. We evaluated the performance of the proposed full waveform inversion on several tunnel models with fault structures by comparing the results of the enhanced method with those of traditional least-squares-based Tikhonov regularization and total variation regularization full waveform inversion methods. The verification results confirm the superior capabilities of the proposed method.
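For orientation, a penalty-method objective of the kind described, with a data misfit term, a wave-equation term over an extended wavefield search space, and a Sobolev-space regularizer, typically takes the following generic form (this is the standard wavefield-reconstruction-style formulation, stated as an assumption rather than the authors' exact functional):

```latex
\min_{\mathbf{m},\,\mathbf{u}}\;
\underbrace{\lVert \mathbf{P}\mathbf{u} - \mathbf{d} \rVert_2^2}_{\text{data misfit}}
\;+\; \lambda \underbrace{\lVert \mathbf{A}(\mathbf{m})\,\mathbf{u} - \mathbf{s} \rVert_2^2}_{\text{wave-equation term}}
\;+\; \mu\, R_{W^{s,p}}(\mathbf{m})
```

where $\mathbf{P}$ samples the wavefield $\mathbf{u}$ at the receivers, $\mathbf{d}$ is the observed data, $\mathbf{A}(\mathbf{m})$ is the frequency-domain wave operator for model parameters $\mathbf{m}$, $\mathbf{s}$ is the source, and $R_{W^{s,p}}$ is the Sobolev-space regularizer.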
Artificial Intelligence Approaches…
Updated:
May 21, 2024
Predictive Maintenance (PdM) has emerged as one of the pillars of Industry 4.0 and has become crucial for enhancing operational efficiency, making it possible to minimize downtime, extend equipment lifespan, and prevent failures. A wide range of PdM tasks can be performed using Artificial Intelligence (AI) methods, which often rely on data generated by industrial sensors. The steel industry, an important branch of the global economy, is one of the potential beneficiaries of this trend, given its large environmental footprint, the globalized nature of the market, and its demanding working conditions. This survey synthesizes the current state of knowledge on AI-based PdM in the steel industry and is addressed to researchers and practitioners. We identified 219 articles on this topic and formulated five research questions, allowing us to gain a global perspective on current trends and the main research gaps. We examined the equipment and facilities subjected to PdM, determined common PdM approaches, and identified trends in the AI methods used to develop these solutions. We explored the characteristics of the data used in the surveyed articles and assessed the practical implications of the research presented therein. Most of the research focuses on the blast furnace or hot rolling, using data from industrial sensors. Current trends show increasing interest in the domain, especially in the use of deep learning. The main challenges include implementing the proposed methods in a production environment, incorporating them into maintenance plans, and enhancing the accessibility and reproducibility of the research.
Synthetic RAW data generator for E…
Updated:
May 16, 2024
In this paper, we introduce HEEPS/MARE, the end-to-end simulator developed for the SAR oceanographic products of the ESA Earth Explorer 10 mission, Harmony, expected to launch in December 2029. Harmony is primarily dedicated to observing small-scale motion and deformation fields of the Earth's surface (oceans, glaciers and ice sheets, solid Earth), using passive SAR/ATI receivers carried by two companion satellites flying with Sentinel-1. The paper focuses on the raw data generator, designed to efficiently simulate large, heterogeneous, moving oceanic areas and produce the acquired bistatic SAR/ATI IQ signals. The heterogeneous sea-surface model, the bistatic scattering model, the multi-GPU implementation, and the achieved performance are emphasized. Finally, sample results are presented to illustrate the ability of Harmony to map wind and surface-current vectors at kilometric scale.
Cross-sensor self-supervised train…
Updated:
May 16, 2024
Large-scale "foundation models" have gained traction as a way to leverage the vast amounts of unlabeled remote sensing data collected every day. However, due to the multiplicity of Earth Observation satellites, these models should learn "sensor-agnostic" representations that generalize across sensor characteristics with minimal fine-tuning. This is complicated by data availability: low-resolution imagery, such as Sentinel-2 and Landsat-8 data, is available in large amounts, while very high-resolution aerial or satellite data is less common. To tackle these challenges, we introduce cross-sensor self-supervised training and alignment for remote sensing (X-STARS). We design a self-supervised training loss, the Multi-Sensor Alignment Dense loss (MSAD), to align representations across sensors, even with vastly different resolutions. X-STARS can be applied to train models from scratch, or to adapt large models pretrained on, e.g., low-resolution EO data to new high-resolution sensors in a continual pretraining framework. We collect and release MSC-France, a new multi-sensor dataset, on which we train our X-STARS models, and then evaluate them on seven downstream classification and segmentation tasks. We demonstrate that X-STARS outperforms the state-of-the-art by a significant margin with less data, across various conditions of data availability and resolution.
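The abstract does not spell out MSAD, but the general shape of a dense cross-sensor alignment loss can be sketched as follows: feature maps of the same scene from two sensors are brought onto a common spatial grid and penalized for pointwise dissimilarity. This is our illustrative reading, not the published loss.

```python
# Illustrative dense alignment loss between two sensors' feature maps;
# the real MSAD may differ in its similarity measure and sampling.
import torch
import torch.nn.functional as F

def dense_alignment_loss(feat_a, feat_b):
    """feat_a: (B, C, Ha, Wa) from sensor A; feat_b: (B, C, Hb, Wb) from
    sensor B, same scenes. Resample B onto A's grid, then penalize
    per-pixel cosine dissimilarity."""
    feat_b = F.interpolate(feat_b, size=feat_a.shape[-2:],
                           mode="bilinear", align_corners=False)
    cos = F.cosine_similarity(feat_a, feat_b, dim=1)   # (B, Ha, Wa)
    return (1.0 - cos).mean()

hi_res = torch.randn(2, 128, 64, 64)    # e.g., aerial-image features
lo_res = torch.randn(2, 128, 16, 16)    # e.g., Sentinel-2 features
print(dense_alignment_loss(hi_res, lo_res))
```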
Rethinking Scanning Strategies wit…
Updated:
May 14, 2024
Deep learning methods, especially Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), are frequently employed for semantic segmentation of high-resolution remotely sensed images. However, CNNs are constrained by their restricted receptive fields, while ViTs face challenges due to their quadratic complexity. Recently, the Mamba model, featuring linear complexity and a global receptive field, has gained extensive attention for vision tasks. In such tasks, images must be serialized into sequences compatible with the Mamba model. Numerous research efforts have explored scanning strategies for serializing images, aiming to enhance the Mamba model's understanding of them; however, the effectiveness of these strategies remains uncertain. In this research, we conduct a comprehensive experimental investigation of the impact of mainstream scanning directions, and their combinations, on semantic segmentation of remotely sensed images. Through extensive experiments on the LoveDA, ISPRS Potsdam, and ISPRS Vaihingen datasets, we demonstrate that no single scanning strategy outperforms the others, regardless of its complexity or the number of scanning directions involved: a simple, single scanning direction suffices for semantic segmentation of high-resolution remotely sensed images. Relevant directions for future research are also recommended.
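To make "scanning strategies" concrete: serializing a 2D grid of patch features into the 1D sequence a state-space model consumes can be done along different directions, as in the minimal numpy sketch below (our illustration of common strategies, not the paper's code).

```python
# Four common ways to serialize an (H, W, C) grid of patch features.
import numpy as np

def scan(grid, direction):
    h, w, _ = grid.shape
    if direction == "row":              # raster: left-right, top-bottom
        return grid.reshape(h * w, -1)
    if direction == "column":           # top-bottom, then left-right
        return grid.transpose(1, 0, 2).reshape(h * w, -1)
    if direction == "row_reverse":      # reversed raster scan
        return grid.reshape(h * w, -1)[::-1]
    if direction == "snake":            # boustrophedon: alternate rows
        rows = [r if i % 2 == 0 else r[::-1] for i, r in enumerate(grid)]
        return np.concatenate(rows, axis=0)
    raise ValueError(direction)

grid = np.arange(16).reshape(4, 4, 1)   # 4x4 grid of 1-dim "features"
print(scan(grid, "snake").ravel())      # [ 0 1 2 3 7 6 5 4 8 9 ... ]
```

Combining several such orderings and their reverses is essentially what the more elaborate strategies evaluated here do; the finding is that the extra directions do not pay off for this task.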
A Lightweight Sparse Focus Transfo…
Updated:
October 11, 2024
Remote sensing image change captioning (RSICC) aims to automatically generate sentences describing content differences between bitemporal remote sensing images. Recently, attention-based transformers have become a prevalent choice for capturing global change features. However, existing transformer-based RSICC methods face challenges such as large parameter counts and high computational complexity, caused by the self-attention operation in the transformer encoder. To alleviate these issues, this paper proposes a Sparse Focus Transformer (SFT) for the RSICC task. Specifically, the SFT network consists of three main components: a high-level feature extractor based on a convolutional neural network (CNN); a transformer encoder with a sparse focus attention mechanism, designed to locate and capture changing regions in dual-temporal images; and a description decoder that embeds images and words to generate sentences captioning the differences. By incorporating a sparse attention mechanism within the transformer encoder, the proposed SFT network reduces both the parameter count and the computational complexity. Experimental results on various datasets demonstrate that, even with a reduction of over 90\% in the transformer encoder's parameters and computational complexity, our network still obtains competitive performance compared to other state-of-the-art RSICC methods. The code is available at \href{https://github.com/sundongwei/SFT_chag2cap}{Lite\_Chag2cap}.
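The quadratic cost of self-attention is what a sparse focus pattern cuts. A minimal sliding-window variant is sketched below (our illustration; the SFT paper's actual sparsity pattern may differ): each query attends only to keys within a fixed radius. For clarity the sketch masks a dense score matrix; an efficient implementation would compute only the banded entries.

```python
# Minimal windowed (sparse) self-attention: token i attends only to
# tokens j with |i - j| <= radius. Illustrative, not the SFT code.
import torch
import torch.nn.functional as F

def windowed_attention(q, k, v, radius=4):
    """q, k, v: (B, L, D). Returns (B, L, D)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, L, L)
    idx = torch.arange(q.shape[1])
    blocked = (idx[None, :] - idx[:, None]).abs() > radius  # (L, L) mask
    scores = scores.masked_fill(blocked, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(1, 256, 64)
print(windowed_attention(x, x, x).shape)    # torch.Size([1, 256, 64])
```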
Comparative Analysis of Advanced F…
Updated:
May 10, 2024
Feature matching determines the orientation accuracy of High Spatial Resolution (HSR) optical satellite stereo pairs, which in turn affects significant downstream applications such as 3D reconstruction and change detection. However, matching off-track HSR optical satellite stereos often involves challenging conditions, including wide-baseline observation, significant radiometric differences, multi-temporal changes, varying spatial resolutions, inconsistent spectral resolution, and diverse sensors. In this study, we evaluate advanced feature matching algorithms for HSR optical satellite stereos. Using the purpose-built HSROSS dataset, drawn from five satellites across six challenging scenarios, we conduct a comparative analysis of four algorithms: the traditional SIFT, and the deep-learning-based SuperPoint + SuperGlue, SuperPoint + LightGlue, and LoFTR. Our findings highlight the overall superior performance of SuperPoint + LightGlue in balancing robustness, accuracy, distribution, and efficiency, showcasing its potential in complex HSR optical satellite scenarios.
Tackling water table depth modelin…
Updated:
March 13, 2025
Spatial patterns of water table depth (WTD) play a crucial role in shaping ecological resilience, hydrological connectivity, and human-centric systems. Generally, a large-scale (e.g., continental or global) continuous map of static WTD can be simulated using either physically-based (PB) or machine-learning-based (ML) models. We construct three fine-resolution (500 m) ML simulations of WTD using the XGBoost algorithm and more than 20 million real and proxy observations of WTD across the United States and Canada. The three ML models were constrained using known physical relations between WTD and its drivers and were trained by sequentially adding real and proxy observations of WTD. Through an extensive (pixel-by-pixel) evaluation across the study region and within ten major ecoregions of North America, we demonstrate that our models (corr = 0.6-0.75) predict unseen real and proxy observations of WTD more accurately than two available PB simulations of WTD (corr = 0.21-0.40). We nevertheless argue that currently available large-scale simulations of static WTD remain uncertain in data-scarce regions such as steep mountainous terrain: observational data biased toward low-elevation floodplains, together with the over-flexibility of available models, can undermine the verifiability of large-scale WTD simulations. Finally, we discuss future directions that may help hydrogeologists improve machine-learning-based WTD estimation; in particular, we advocate the use of proxy satellite data, the incorporation of physical laws, better model verification standards, the development of novel globally available emergent indices, and the collection of more reliable observations.
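One concrete way to encode known physical relations in an XGBoost model, plausibly similar in spirit to the constraints described above, is via monotonicity constraints; the driver set and constraint signs below are our assumptions for illustration, not the paper's configuration.

```python
# Sketch: physically constrained XGBoost regression for WTD, where the
# prediction is forced to increase with elevation above the nearest
# drainage (sign +1) and decrease with long-term precipitation (-1).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((10_000, 3))    # columns: elev_above_drainage, precip, slope
y = 50 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 1, 10_000)

model = xgb.XGBRegressor(
    n_estimators=200,
    max_depth=6,
    monotone_constraints=(1, -1, 0),   # +1 increasing, -1 decreasing, 0 free
)
model.fit(X, y)
print(model.predict(X[:5]))
```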
Representation Learning of Daily M…
Updated:
December 20, 2024
Time-series representation learning is a key area of research for remote healthcare monitoring applications. In this work, we focus on a dataset of in-home activity recordings from people living with dementia. We design a representation learning method that converts activity data to text strings, which are encoded by a language model fine-tuned so that data from the same participant within a $30$-day window map to nearby embeddings in the vector space. This enables clustering and vector search over participants and days, as well as the identification of activity deviations to aid the personalised delivery of care.
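As a toy illustration of the activity-to-text idea (with an off-the-shelf sentence encoder standing in for the paper's fine-tuned language model, and a string format of our own invention), consider:

```python
# One participant-day of in-home activity becomes a text string, which a
# sentence encoder maps to a vector for clustering or nearest-neighbour
# search. Model choice and string format are assumptions, not the paper's.
from sentence_transformers import SentenceTransformer
import numpy as np

def day_to_text(events):
    """events: list of (time, location) pairs for one participant-day."""
    return "; ".join(f"{t} {loc}" for t, loc in events)

day_a = day_to_text([("07:10", "kitchen"), ("07:40", "bathroom"),
                     ("08:05", "lounge")])
day_b = day_to_text([("07:15", "kitchen"), ("07:50", "bathroom"),
                     ("08:00", "lounge")])

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode([day_a, day_b])
cos = emb[0] @ emb[1] / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(f"similarity between the two days: {cos:.3f}")
```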
Hypergraph $p$-Laplacian regulariz…
Updated:
March 17, 2025
As a generalization of graphs, hypergraphs are widely used to model higher-order relations in data. This paper explores the benefit of the hypergraph structure for the interpolation of point cloud data that contain no explicit structural information. We define the $\varepsilon_n$-ball hypergraph and the $k_n$-nearest neighbor hypergraph on a point cloud and study the $p$-Laplacian regularization on the hypergraphs. We prove the variational consistency between the hypergraph $p$-Laplacian regularization and the continuum $p$-Laplacian regularization in a semisupervised setting when the number of points $n$ goes to infinity while the number of labeled points remains fixed. A key improvement compared to the graph case is that the results rely on weaker assumptions on the upper bound of $\varepsilon_n$ and $k_n$. To solve the convex but non-differentiable large-scale optimization problem, we utilize the stochastic primal-dual hybrid gradient algorithm. Numerical experiments on data interpolation verify that the hypergraph $p$-Laplacian regularization outperforms the graph $p$-Laplacian regularization in preventing the development of spikes at the labeled points.
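For readers unfamiliar with the object, one common form of the hypergraph $p$-Laplacian regularizer for a vertex function $u$ on a hypergraph with hyperedge set $E$ and weights $w_e$ is (our notation; definitions and normalizations vary across the literature)
$$
J_p(u) \;=\; \sum_{e \in E} w_e \Big( \max_{x \in e} u(x) - \min_{y \in e} u(y) \Big)^{p},
$$
with the semisupervised interpolation minimizing $J_p(u)$ subject to $u$ matching the given labels at the labeled points. Because each hyperedge couples all of its vertices through a single max-min spread, the penalty acts on whole neighborhoods at once rather than on pairs, which is consistent with the improved spike suppression reported above.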
MFDS-Net: Multi-Scale Feature Dept…
Updated:
May 2, 2024
Change detection, an interdisciplinary topic spanning computer vision and remote sensing, is currently receiving extensive attention and research effort. With the rapid development of society, the geographic information captured by remote sensing satellites is changing ever faster and becoming more complex, which raises the difficulty and highlights the value of change detection tasks. We propose MFDS-Net: Multi-Scale Feature Depth-Supervised Network for Remote Sensing Change Detection with Global Semantic and Detail Information, with the aim of achieving a more refined description of changing buildings and geographic information, and of enhancing both the localisation of changing targets and the acquisition of weak features. We use a modified ResNet_34 as the backbone network for feature extraction and DO-Conv in place of traditional convolution to better exploit associations within the feature information and obtain better training results. We propose the Global Semantic Enhancement Module (GSEM) to enhance the processing of high-level semantic information from a global perspective, and the Differential Feature Integration Module (DFIM) to strengthen the fusion of feature information from different depths, achieving the learning and extraction of differential features. The entire network is trained and optimized using a deep supervision mechanism. MFDS-Net surpasses current mainstream change detection networks: on the LEVIR dataset it achieves an F1 score of 91.589 and an IoU of 84.483; on the WHU dataset, F1: 92.384 and IoU: 86.807; and on the GZ-CD dataset, F1: 86.377 and IoU: 76.021. The code is available at https://github.com/AOZAKIiii/MFDS-Net.
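Deep supervision, mentioned above, attaches auxiliary losses to intermediate feature scales; a generic sketch of the mechanism (not the MFDS-Net implementation) follows.

```python
# Generic deep-supervision loss: change maps predicted at several decoder
# depths are each compared to the resized ground truth and summed with
# decaying weights. Illustrative only.
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target, weights=(1.0, 0.5, 0.25)):
    """side_outputs: list of (B, 1, h_i, w_i) logits from decoder stages;
    target: (B, 1, H, W) binary change mask."""
    loss = 0.0
    for logits, w in zip(side_outputs, weights):
        t = F.interpolate(target, size=logits.shape[-2:], mode="nearest")
        loss = loss + w * F.binary_cross_entropy_with_logits(logits, t)
    return loss

target = (torch.rand(2, 1, 256, 256) > 0.5).float()
outs = [torch.randn(2, 1, s, s) for s in (256, 128, 64)]
print(deep_supervision_loss(outs, target))
```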
A Smartphone-Based Method for Asse…
Updated:
November 21, 2024
Early detection of fertilizer-induced stress in tomato plants is crucial for optimizing crop yield through timely management interventions. While conventional optical methods struggle to detect fertilizer stress in young leaves, these leaves carry valuable diagnostic information in their microscopic hair-like structures, particularly trichomes, which existing approaches have overlooked. This study introduces a smartphone-based noninvasive technique that leverages mobile computing and digital imaging to quantify trichome density on young leaves, enabling earlier detection. Our method combines augmented reality technology with image processing algorithms to analyze trichomes transferred onto specialized measurement paper. An automated pipeline processes these images through region extraction, perspective transformation, and illumination correction to quantify trichome density. Validation experiments on hydroponically grown tomatoes under varying fertilizer conditions demonstrated the method's effectiveness: leave-one-out cross-validation yielded an area under the precision-recall curve of 0.82 (PR-AUC) and an area under the receiver operating characteristic curve of 0.64 (ROC-AUC), while predicted and observed trichome densities were highly correlated ($r = 0.79$). This approach turns smartphones into practical diagnostic tools for plant nutrition assessment, offering a cost-effective solution for precision agriculture.
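A minimal OpenCV sketch of the image pipeline named above (region extraction, perspective transformation, illumination correction) is given below; the corner coordinates, blur kernel, and thresholding choices are our assumptions, not the published implementation.

```python
# Rectify the photographed measurement paper, flatten uneven illumination
# by dividing out a blurred background, then count dark trichome marks.
import cv2
import numpy as np

def rectify(image, corners, size=800):
    """corners: (4, 2) float32 array of the paper's corners in the photo."""
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    M = cv2.getPerspectiveTransform(corners, dst)
    return cv2.warpPerspective(image, M, (size, size))

def trichome_density(image, corners):
    paper = cv2.cvtColor(rectify(image, corners), cv2.COLOR_BGR2GRAY)
    background = cv2.GaussianBlur(paper, (51, 51), 0)   # illumination field
    flat = cv2.divide(paper, background, scale=255)     # correct shading
    _, marks = cv2.threshold(flat, 0, 255,
                             cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _ = cv2.connectedComponents(marks)
    return (n_labels - 1) / marks.size   # marks per pixel of paper area

# Hypothetical usage; the image path and corner coordinates are placeholders.
corners = np.float32([[120, 80], [980, 95], [1010, 990], [90, 1005]])
density = trichome_density(cv2.imread("leaf_paper.jpg"), corners)
print(f"trichome density: {density:.6f} marks per pixel")
```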