Projects & Datasets

This section outlines key research projects and released datasets. These efforts focus on advancing self-supervised learning for multimodal and spatiotemporal applications.

Self-Supervised Learning for Remote Sensing

Problem Statement: Developing robust vision models for satellite imagery without extensive ground truth labels.

Methodology: This project investigates semantically aware contrastive learning and masked modeling for multispectral data. By leveraging temporal and spectral invariants, we build representations that generalize across diverse environmental conditions.

Data Modality: Multispectral Satellite Imagery (Sentinel-2, Landsat).

Output: Published research papers and released datasets for SSL benchmarking in Earth observation.
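To make the contrastive objective above concrete, here is a minimal NumPy sketch of an InfoNCE-style loss between two augmented views of the same image batch. The function name, temperature value, and embedding shapes are illustrative assumptions on my part, not the project's actual training code:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    # z_a, z_b: (batch, dim) embeddings of two augmented views of the
    # same patches (e.g. different spectral-band dropouts or time steps).
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature   # pairwise cosine similarities
    # Cross-entropy where each row's positive is its own index: views of
    # the same patch attract, all other pairs in the batch repel.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_a))
    return -log_probs[idx, idx].mean()
```

In a real pipeline the two views would come from an encoder applied to spectrally or temporally perturbed copies of the same Sentinel-2/Landsat patch, so the loss rewards invariance to exactly those perturbations.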

Temporally Consistent Video Colorization (CVC/CAVC)

Problem Statement: Restoring color to grayscale video sequences while maintaining temporal coherence and avoiding flickering artifacts.

Methodology: We developed attention-based autoencoders (CAVC) and contrastive learning strategies (CVC) to capture long-range dependencies in video. The approach emphasizes temporal consistency through shared-weight attention and diffusion-inspired refinements.

Data Modality: Natural Video, Animation Sequences.

Output: Multiple publications in VISAPP, IntelliSys, and ICMLA; open-source implementations.
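As a rough illustration of the temporal-consistency idea described above, the sketch below penalizes color changes between consecutive frames at pixels whose grayscale intensity is essentially static. The function name, the threshold `tau`, and the loss form are assumptions for illustration, not the published CVC/CAVC losses:

```python
import numpy as np

def temporal_consistency_loss(colors_t, colors_t1, gray_t, gray_t1, tau=0.02):
    # colors_t, colors_t1: (H, W, 3) predicted colors for frames t and t+1
    # gray_t, gray_t1:     (H, W)    input grayscale frames
    # Pixels whose luminance barely changed are assumed static, so their
    # predicted colors should match; moving pixels are excluded.
    static = np.abs(gray_t1 - gray_t) < tau
    color_diff = np.abs(colors_t1 - colors_t).mean(axis=-1)
    return (color_diff * static).sum() / max(static.sum(), 1)
```

A term like this discourages the frame-to-frame flickering mentioned in the problem statement by making unmotivated color changes on static regions costly during training.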

AI in Sports Analytics

Problem Statement: Identifying strategic patterns and predicting high-value events (e.g., entry into the attack zone) from spatiotemporal athlete tracking data.

Methodology: We apply specialized machine learning pipelines to analyze the initial seconds of ball possession, engineering features that capture spatial dynamics and evaluating predictive reliability in high-stakes settings.

Data Modality: Spatiotemporal Tracking Data (Football/Soccer).

Output: Published research in PLOS ONE; methodologies for tactical performance analysis.
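The feature-engineering step above can be sketched as follows: from a short window of ball-carrier positions, derive summary features such as speed and progress toward the attack zone. The function, the 25 fps frame rate, and the attack-zone boundary at x = 85 m are hypothetical choices for illustration, not the published pipeline:

```python
import numpy as np

def possession_features(xy, fps=25, attack_x=85.0):
    # xy: (T, 2) ball-carrier positions in pitch coordinates (metres),
    # sampled over the initial seconds of a possession; x grows toward
    # the opponent's goal.
    v = np.diff(xy, axis=0) * fps                  # frame-to-frame velocity
    speed = np.linalg.norm(v, axis=1)              # scalar speed per step
    return {
        "mean_speed": speed.mean(),                     # m/s
        "net_forward_gain": xy[-1, 0] - xy[0, 0],       # progress toward goal
        "dist_to_attack_zone": max(attack_x - xy[-1, 0], 0.0),
    }
```

Features of this kind feed a downstream classifier that predicts whether the possession will reach the attack zone, which is where predictive reliability must then be evaluated.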