MLOps Engineer - (Remote, Spain)
We usually respond within a week
About Bark
Bark is an online services marketplace connecting customers with professionals across over 1,000 categories. Operating in nine countries including the UK, US, Australia, Canada, and New Zealand, we're transforming how people find trusted service providers for everything from home improvement to professional services.
Our platform uses cutting-edge technology to match customers with the right professionals quickly and efficiently. With a global team of over 220 people, we're currently undergoing an exciting transformation: migrating from a lead generation model to a full marketplace platform with subscription-based pricing.
As a profitable, PE-backed scale-up (EMK Capital), Bark offers the best of both worlds: the agility and innovation of a fast-moving business combined with the financial stability and resources for growth. We recently launched our new marketplace model in Australia (Q4 2025) and are preparing to roll it out to the UK and US markets in 2026. You'll have genuine ownership and responsibility, and the opportunity to shape our commercial strategy during a pivotal transformation phase.
About the Role
We are looking for a proactive MLOps Engineer to join our staff data engineer and form a new squad. This role is for a forward-thinking engineer who wants to bridge the gap between high-throughput data engineering and machine learning infrastructure.
You will work on our Python- and AWS-hosted data streaming platform, owning the full data lifecycle for real-time event tracking to ensure scalability, reliability, and cost-effectiveness. While the core components are already built, you will drive a large-scale rollout of event tracking across different teams, tackling significant data validation, data modelling, and scaling challenges.
Crucially, the events you process will directly fuel our AI feature stores and models. You will collaborate closely with analysts, engineers, and product managers to enable accurate reporting for new product features and business KPIs, while simultaneously laying the foundation for our ML lifecycle. As we expand our AI capabilities, you will introduce MLOps best practices to deploy and serve models, with future opportunities to shape our LLMOps architecture.
*Please note you must be based in Spain to be considered for this fully remote role.
Responsibilities
Build and operate a scalable data platform ingesting real-time events at high throughput.
Collaborate with Data Scientists to transition ML models from experimentation to production.
Build and maintain ML infrastructure for model serving (using FastAPI), and track model performance and lifecycle over time.
Collaborate with analysts, engineers and product managers to understand user needs and take ownership of producing new event tracking functionality.
Implement automation and robust data quality controls, ensure the integrity of ingested data, and create the necessary monitoring alerts.
Required Skills and Experience
Experience deploying and maintaining Python services in a major cloud environment (AWS, GCP, Azure). Specific experience with the AWS stack, including Kinesis, Lambda, Glue, Firehose, and Athena, is highly desirable.
Experience with MLOps frameworks and experiment tracking tools such as AWS SageMaker, MLflow, Databricks, or W&B.
Basic knowledge of ML inference REST APIs (FastAPI).
Strong skills operating and improving schema registries and data catalogs (Glue, Databricks, etc.).
Relevant CI/CD experience (e.g., GitHub Actions, GitLab Pipelines) automating tests, schema registry updates, and Lambda releases.
Solid experience with SQL and data warehouse/data lake environments (e.g., Databricks, BigQuery, Redshift, S3).
Familiarity with cloud observability tools (e.g., Datadog, New Relic, or CloudWatch).
Desired skills and experience
Hands-on experience with a cloud event streaming platform such as Kinesis, Pub/Sub, or Kafka.
Knowledge and practical experience with data modelling, Protobuf or Avro schemas, and managing schema evolution.
Production experience with containerization (Docker), orchestration (Kubernetes, AWS ECS/Fargate, etc.), and IaC (Terraform, Crossplane).
Familiarity with LLMOps and frameworks for building LLM agents (e.g., LangGraph).
Large-scale data processing with PySpark and Flink.
Data mart creation with dbt.
Job orchestration with Airflow.
Interview Process
Screening Call with Talent Partner (30 mins)
1st Stage - Hiring Manager Stage (30 mins)
2nd Stage - Technical Interview (45–60 mins)
3rd Stage - Values interview (30 mins)
Diversity Statement
At Bark, we are a platform for people, revolutionising the way professionals and individuals connect since 2014. Our culture is defined by excitement, ambition, and a commitment to raising the bar. We value diversity, equity, inclusion, and belonging (DEIB) and are dedicated to embedding these principles into everything we do. We are committed to fostering an inclusive environment where everyone can thrive, and our focus is on hiring, retaining, and developing a globally diverse workforce that is passionate about improving our platform and helping our customers succeed. Be part of our dynamic team, where bold ideas thrive, and create a future worth shouting about.
- Department: Engineering
- Location: Spain
- Remote status: Fully remote