Work

What we've delivered

Selected projects showcasing outcomes and technical depth. Anonymised where required.

Telecom

Predictive Connectivity from Network Measurements

Context

A telecom operator needed to predict connectivity quality across its network to proactively address service degradation.

Challenge

Raw network measurements were voluminous and unstructured, lacking spatial and temporal context for actionable predictions.

Approach

Engineered spatiotemporal features from network measurements, trained predictive models for connectivity scoring, and built risk-layer APIs.
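
The binning idea behind the feature engineering can be sketched as follows. This is a minimal illustration, not the delivered pipeline: the grid resolution, field names (lat, lon, ts_hour, rsrp), and aggregate features are all assumptions for the example.

```python
from collections import defaultdict
from statistics import mean

CELL_SIZE = 0.01  # grid resolution in degrees (illustrative assumption)

def grid_key(lat, lon, ts_hour):
    """Bucket a measurement into a (cell, cell, hour) spatiotemporal bin."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE), ts_hour)

def build_features(measurements):
    """Aggregate per-bin signal statistics usable as model features."""
    bins = defaultdict(list)
    for m in measurements:
        bins[grid_key(m["lat"], m["lon"], m["ts_hour"])].append(m["rsrp"])
    return {
        key: {
            "mean_rsrp": mean(vals),
            "min_rsrp": min(vals),
            "samples": len(vals),
        }
        for key, vals in bins.items()
    }

# Two measurements fall in the same cell and hour; one is an hour later.
raw = [
    {"lat": 52.001, "lon": 4.002, "ts_hour": 9, "rsrp": -85},
    {"lat": 52.002, "lon": 4.003, "ts_hour": 9, "rsrp": -95},
    {"lat": 52.001, "lon": 4.002, "ts_hour": 10, "rsrp": -80},
]
features = build_features(raw)
```

A model then scores each bin from these aggregates rather than from raw, unstructured measurements.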

Outcome

Proactive connectivity risk identification, reduced service degradation complaints, and data-driven network planning inputs.

Python · Kubernetes · PostgreSQL/PostGIS · MLflow · Grafana

Drone / U-space

UAV Connectivity Risk Scoring for Mission Planning

Context

Drone operations required connectivity assurance before mission execution in complex RF environments.

Challenge

No existing system could evaluate mission-specific connectivity risk considering route, altitude, and environmental factors.

Approach

Built a connectivity risk scoring engine using network measurements, geospatial data, and mission constraints like route and altitude.
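
The scoring logic can be sketched roughly like this. It is a simplified illustration under stated assumptions: the altitude ceiling, penalty weight, and worst-segment aggregation are placeholders, not the engine's actual parameters.

```python
MAX_SAFE_ALTITUDE_M = 120  # assumed operational ceiling for the example

def waypoint_risk(coverage, altitude_m):
    """Risk in [0, 1]: poor coverage and near-ceiling altitude both raise it."""
    coverage_risk = 1.0 - max(0.0, min(1.0, coverage))
    altitude_penalty = 0.2 if altitude_m > MAX_SAFE_ALTITUDE_M * 0.8 else 0.0
    return min(1.0, coverage_risk + altitude_penalty)

def mission_risk(route):
    """A mission is only as reliable as its worst segment, so take the max."""
    return max(waypoint_risk(w["coverage"], w["alt_m"]) for w in route)

route = [
    {"coverage": 0.9, "alt_m": 60},
    {"coverage": 0.4, "alt_m": 100},  # weak coverage near the altitude limit
    {"coverage": 0.8, "alt_m": 80},
]
score = mission_risk(route)
```

A planner comparing candidate routes can then pick the one with the lowest worst-segment risk before launch.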

Outcome

Mission planners received connectivity risk scores before launch, reducing mission failures and enabling smarter routing decisions.

Python · FastAPI · PostGIS · Kubernetes · React

Platform

Cloud-Native Platform for Analytics & AI Delivery

Context

A data team needed a unified platform to run analytics, train ML models, and deploy services reliably.

Challenge

Fragmented tooling, no CI/CD for data workloads, and security concerns in a multi-tenant environment.

Approach

Designed and delivered a Kubernetes-first platform with security controls, CI/CD pipelines, observability, and multi-tenant isolation.

Outcome

Teams ship faster with consistent environments — from development through staging to production.

Kubernetes · ArgoCD · Prometheus · Vault · Terraform

Data Engineering

High-Volume Streaming Pipelines with Quality Gates

Context

An organisation required real-time data ingestion at scale with strict data quality requirements.

Challenge

Existing batch processes couldn't keep up with volume; data quality issues propagated downstream undetected.

Approach

Streaming pipeline architecture with quality validation at each stage, dead-letter queues, and monitoring dashboards.
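
The quality-gate pattern can be sketched as below: records failing validation are routed to a dead-letter collection instead of propagating downstream. The specific checks are illustrative, not the production rules.

```python
def validate(record):
    """Minimal quality checks: required field present, value numeric and in range."""
    errors = []
    if "id" not in record:
        errors.append("missing id")
    if not isinstance(record.get("value"), (int, float)):
        errors.append("non-numeric value")
    elif not (0 <= record["value"] <= 1000):
        errors.append("value out of range")
    return errors

def quality_gate(records):
    """Split a batch into clean records and dead-lettered failures."""
    clean, dead_letter = [], []
    for r in records:
        errs = validate(r)
        if errs:
            dead_letter.append({"record": r, "errors": errs})
        else:
            clean.append(r)
    return clean, dead_letter

batch = [
    {"id": 1, "value": 42},
    {"value": 17},             # missing id -> dead letter
    {"id": 3, "value": 9999},  # out of range -> dead letter
]
clean, dlq = quality_gate(batch)
```

Dead-lettered records carry their failure reasons, so dashboards can surface quality trends per stage rather than letting bad data flow silently downstream.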

Outcome

Sub-minute data freshness with automated quality enforcement, reducing downstream data incidents significantly.

Kafka · Flink · dbt · Kubernetes · Great Expectations

Big Data

Lakehouse Analytics Foundation + Performance Playbook

Context

A growing analytics team needed a cost-effective, high-performance foundation for BI and ad-hoc analytics.

Challenge

Query performance was degrading with data growth, and cloud compute costs were increasing uncontrollably.

Approach

Lakehouse architecture design with optimised data layout, partitioning strategy, and a performance tuning playbook.
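
Why partitioning matters can be illustrated with a toy partition-pruning sketch: a date-filtered query only touches files in matching partitions instead of scanning the whole table. Partition keys and file names here are invented for the example.

```python
# Toy layout: files grouped under date partitions (illustrative names).
PARTITIONS = {
    "event_date=2024-01-01": ["part-0.parquet", "part-1.parquet"],
    "event_date=2024-01-02": ["part-2.parquet"],
    "event_date=2024-01-03": ["part-3.parquet", "part-4.parquet"],
}

def prune(partitions, wanted_dates):
    """Return only the files whose partition value matches the filter."""
    files = []
    for key, part_files in partitions.items():
        date = key.split("=", 1)[1]
        if date in wanted_dates:
            files.extend(part_files)
    return files

scanned = prune(PARTITIONS, {"2024-01-02"})
```

Query engines apply the same principle at table scale, which is where most of the performance and cost wins come from.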

Outcome

Query performance improved by orders of magnitude, cloud costs reduced, and the team gained self-service analytics capabilities.

Apache Iceberg · Trino · dbt · Kubernetes · Superset

Want similar outcomes?

Tell us about your challenge and we'll show you how we approach it.

Book a Call