Galeo Tech

Data Scientist

Galeo Tech · Madrid, MD, ES

Actively hiring · Posted about 5 hours ago

Role overview

We are looking for a Data Scientist to work on end-to-end data projects in IoT and data platform environments. You will cover the entire lifecycle: from exploratory analysis and ML model design, through GenAI solutions, to deployment and production maintenance.

Depending on the project, you will operate within an ecosystem primarily based on AWS (IoT, Lambda, S3, Redshift, Glue), although we also work with Azure.

We deeply value your technical expertise, but also an innate curiosity to “tinker” (cacharreo, as we say in Spanish) and constantly learn new things. At GALEO, Python is our universal language: it is used daily by the entire team, from Data Engineers and Scientists to IoT, Systems, and Web specialists. If you are motivated by experimenting and evolving within a shared, dynamic stack, you’ll fit right in.

What you'll work on

  • Exploratory Data Analysis (EDA): Analysing datasets to identify relevant patterns, trends, anomalies, and correlations to guide solution design.
  • Machine Learning Model Development: Designing, training, and evaluating predictive models and ML/DL algorithms (supervised, unsupervised, time series) tailored to each project’s needs.
  • Generative AI Solutions: Designing and implementing solutions based on LLMs, including RAG, prompt engineering, fine-tuning, and orchestration with frameworks such as LangChain.
  • Model Deployment & Production (MLOps): Taking models from experimentation to production, including the creation of training, monitoring, and retraining pipelines.
  • Data Pipeline Collaboration: Working alongside the Data Engineering team to design ETLs and data flows that feed the models, understanding the full data lifecycle.
  • Communicating Results: Translating complex technical findings into clear reports, charts, and interactive dashboards for management and non-technical departments.
  • Continuous Optimisation: Iterating on production models, improving performance, and proposing new approaches based on results and business needs.

What we're looking for

  • At least 2 years of experience as a Data Scientist.
  • Experience with LLMs and GenAI (LangChain, Anthropic/OpenAI APIs, prompt engineering, RAG, fine-tuning).
  • Experience with PySpark and distributed processing.
  • Data processing experience in cloud environments (AWS, Azure, and/or GCP).
  • Proficiency in Forecasting, NLP, and Reinforcement Learning algorithms.
  • Knowledge of mathematical optimisation algorithms/solvers (e.g., Gurobi).
  • Previous experience in IT consultancy.
  • Fluency with visualisation tools, preferably Power BI.
  • Proficiency with Git and Docker.
  • Knowledge of Agile project management best practices.
  • Intermediate English (B2).

Tags & focus areas

Used for matching and alerts on DevFound
Remote · Machine Learning · Data Science · Data Engineer · AI