
Senior Data Engineer
Job Description
Posted on: December 4, 2025
About PowerX
PowerX is a fast-growing, cloud-native SaaS company that helps organisations manage and optimise energy-related assets at scale. Our platform combines AI, predictive analytics and real-time monitoring to turn complex, fragmented infrastructure into actionable insights. We’re on a mission to reduce energy and maintenance costs, improve resilience, and drive sustainable efficiency.
The Role
We’re looking for a Senior Data Engineer who can take ownership of our data ecosystem. You’ll tackle a diverse range of challenges around data pipelines, real-time streaming analysis, batch processing and lakehouse architecture. You’ll be the sole dedicated data engineer, working closely with and supported by analytics, product and engineering to design systems that enable reliable self-serve access to data.
If you love solving data challenges and having autonomy over your technical decisions, you’ll feel right at home here. Previous experience with Apache Flink is a nice bonus, but not essential.
You’ll be joining a team of 40+ employees distributed across the UK. We are a fully remote company, but we come together for a company-wide meet-up once a year, and some teams choose to meet more frequently if they are located nearby.
What You’ll Do
- Design, build, and own scalable ETL/ELT pipelines using Python, Airflow, and AWS (see the sketch after this list)
- Develop and evolve our data lakehouse using Iceberg & Athena
- Work with Kafka to support streaming and real-time ingestion
- Implement Change Data Capture (CDC) processes to capture updates from source systems efficiently
- Partner with teams across the company to define data requirements and deliver high-quality datasets
- Implement strong data governance, observability, lineage and security
- Evolve infrastructure with Terraform and DevOps best practices
- Drive architecture decisions and help shape our long-term data strategy
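To give a flavour of the pipeline work described above, here is a minimal sketch of a daily batch job written with Airflow's TaskFlow API. Everything in it (the DAG name, the meter-readings payload, the S3 bucket and paths) is a hypothetical illustration, not part of our actual platform.

    # Minimal, illustrative Airflow DAG: extract -> transform -> land Parquet in S3
    # for Athena/Iceberg to query. All names and paths are hypothetical.
    from datetime import datetime

    import pandas as pd
    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def meter_readings_batch():
        @task
        def extract() -> list[dict]:
            # Placeholder: pull rows from a source API or a CDC feed.
            return [{"meter_id": "m-001", "reading_kwh": 12.4, "ts": "2025-01-01T00:00:00Z"}]

        @task
        def load(rows: list[dict]) -> str:
            # Clean the rows and write Parquet to S3; Athena or an Iceberg table
            # reads from this prefix. Assumes s3fs/pyarrow and AWS credentials.
            df = pd.DataFrame(rows)
            path = "s3://example-lake/raw/meter_readings/dt=2025-01-01/part-0.parquet"
            df.to_parquet(path)
            return path

        load(extract())


    meter_readings_batch()

In practice a pipeline like this would be parameterised by execution date and backed by retries, alerting, and tests; the sketch only shows the overall shape of the work.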
What You’ll Bring
- Strong SQL and Python experience with robust typing practices, plus familiarity with common data libraries such as pandas or polars (see the example after this list)
- Hands-on work in AWS data services (e.g. S3, Athena and Glue)
- Production experience with Airflow and modern data pipelines
- Solid understanding of distributed systems, batch + streaming processing
- Knowledge of Terraform or similar infrastructure as code tools
- A self-starter mindset, comfortable owning problems end-to-end
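As a rough illustration of what we mean by typed Python with a data library, here is a small polars snippet; the schema, column names, and figures are invented purely for this example.

    # Hypothetical example of typed Python with polars: aggregate raw meter
    # readings into daily usage per meter. Schema and values are invented.
    from datetime import datetime

    import polars as pl


    def daily_usage_kwh(readings: pl.DataFrame) -> pl.DataFrame:
        """Aggregate raw meter readings into daily usage per meter."""
        return (
            readings
            .with_columns(pl.col("ts").dt.date().alias("day"))
            .group_by("meter_id", "day")
            .agg(pl.col("reading_kwh").sum().alias("usage_kwh"))
            .sort("meter_id", "day")
        )


    if __name__ == "__main__":
        df = pl.DataFrame(
            {
                "meter_id": ["m-001", "m-001", "m-002"],
                "ts": [datetime(2025, 1, 1, 0), datetime(2025, 1, 1, 6), datetime(2025, 1, 1, 3)],
                "reading_kwh": [12.4, 9.1, 4.2],
            }
        )
        print(daily_usage_kwh(df))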
Nice to Have
- Apache Flink experience, particularly the PyFlink Python API (see the sketch after this list)
- Experience with dbt or formal data modelling approaches
- Background delivering analytics or machine learning data products
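For anyone curious about the Flink nice-to-have, below is a tiny PyFlink Table API sketch that declares a Kafka-backed table and runs a streaming aggregation over it. The topic, fields, and connector settings are assumptions for illustration only, and running it would require the Flink Kafka SQL connector on the classpath.

    # Hypothetical PyFlink Table API sketch: a Kafka-backed table plus a
    # continuous aggregation. Topic, schema and connector settings are invented.
    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    t_env.execute_sql("""
        CREATE TABLE meter_readings (
            meter_id STRING,
            reading_kwh DOUBLE,
            ts TIMESTAMP(3)
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'meter-readings',
            'properties.bootstrap.servers' = 'localhost:9092',
            'format' = 'json',
            'scan.startup.mode' = 'earliest-offset'
        )
    """)

    # Continuous query: running total of usage per meter.
    t_env.sql_query(
        "SELECT meter_id, SUM(reading_kwh) AS total_kwh "
        "FROM meter_readings GROUP BY meter_id"
    ).execute().print()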
Why Join
- Ownership of the data platform
- Modern, cloud-native stack with real-time pipelines and lakehouse patterns
- Fully remote flexibility - UK only
- Direct impact on business outcomes and technical direction
- Competitive salary up to £85,000 per year
- Discretionary bonus scheme up to 10% of base salary
- Pension - 5% employee contribution, 4% company match, with a salary sacrifice option
- 25 days holiday
Hiring Process
- Initial Screening: If your application is successful, you’ll have an initial call with our internal recruiter.
- First Round Interview: Conducted with two key members of the team.
- Assessment Task: A task to be completed in your own time.
- Final Interview: With two members of the leadership team.
Please note, this process may vary depending on candidate availability and suitability, as well as the number of applicants. We will keep you updated throughout.
While we aim to respond to all candidates who apply, we may not be able to provide individual feedback to everyone. We do appreciate your interest.
Apply now