Our client, an established fintech product company founded in 2016, is transforming personal lending and neobanking by combining their diverse expertise in marketing, sales, operations, finance, risk, and IT. Operating in Poland, Spain, and Romania, with headquarters in Riga, Latvia, they are expanding into new markets as part of their ambitious growth journey.
They value collaboration, respect, and learning. Their unbureaucratic environment promotes quick decision-making, open feedback, and career development, making it an exciting place to grow and make an impact.
About the Role:
They are seeking a hands-on Data Scientist who excels at building machine learning models, automating pipelines, and ensuring performance and scalability for their in-house IT solutions and financial products. You will also help oversee and mentor a team of three Data Scientists and influence the vision and strategy for how things are done.
What They Offer:
- Work-Life Balance and Flexibility: Hybrid work and adjustable hours, plus the option to take two weeks of uninterrupted remote work per year.
- Career Growth: Development and learning opportunities (e.g. paid courses) and challenging projects that keep you at the forefront of industry trends.
- Salary: €4,500–€6,500 gross monthly, plus an annual bonus.
- Team Culture: Collaborative, respectful, and growth-oriented.
- Work Environment: Modern office, free parking, stocked kitchen.
- Other Perks: Comprehensive health insurance; paid phone bills; company events and activities.
Main Responsibilities:
- Automate Machine Learning Pipelines: Develop and maintain automated workflows.
- Data Extraction: Source data from diverse repositories.
- Feature Engineering: Apply statistical methods and time-series analysis.
- Model Optimization: Automate training, tuning, and selection.
- Deployment & Monitoring: Ensure seamless deployment and ongoing performance tracking.
- Prototyping: Build models with Scikit-learn, PyTorch, H2O.
- Stakeholder Communication: Present findings and actionable insights.
- Dashboard Maintenance: Manage real-time evaluation tools.
What We’re Looking For:
- Demonstrated experience in analyzing and using time-series data for machine-learning models.
- 3+ years of machine learning experience using Python.
- Proficiency with FastAPI or a similar Python framework (e.g. Flask).
- Experience with monitoring machine learning models (e.g. population stability, feature stability, SHAP).
- Proven experience building data science pipelines using Airflow or a similar tool.
- Hands-on experience with GBM, XGBoost, and related tree-based boosting algorithms.
- Proficiency in using Docker to streamline workflows, ensure reproducibility, and facilitate collaboration.
- Experience with any cloud service provider (e.g. Google Cloud Platform).
- Strong communication skills and proficiency in the English language.
- Personality: Independent and collaborative, with leadership qualities.
- Bonus, but not mandatory: Knowledge of Java/Spring Boot and RabbitMQ.
Their Tech Stack (these are not requirements):
Python/FastAPI; Java/Spring Boot; Scikit-learn, PyTorch, H2O; GBM, XGBoost; SHAP; Airflow; RabbitMQ; Docker; PostgreSQL; Google Cloud (GCP).