At limango, we are a company that has specialized in e-commerce for 18 years. Together with platforms in the Netherlands, Poland, Austria and Germany, we are part of the OTTO Group, one of Europe's leading e-commerce companies. We are the shopping platform with the largest selection of products for the whole family!
We work and play together. We value work-life balance and create a culture of respect, trust and equality. If these values are also key for you, there is a good chance that you will find your place with us.
Your role and main tasks:
Are you an experienced Data Scientist ready to take your career to the next level? If you are passionate about building and deploying cutting-edge machine learning solutions, we want to hear from you! As a Data Scientist at our organization, you will be at the forefront of driving data-driven decisions and personalization strategies. Your responsibilities will include:
- Building and deploying data-driven and machine learning solutions for portal personalization.
- Owning the entire machine learning lifecycle - verifying data quality, selecting optimal algorithms, feature engineering, validating models with appropriate metrics, and ensuring interpretability.
- Monitoring, maintaining, and updating existing data / ML pipelines.
- Building solutions following best software practices - clean code, testing, automatic deployments.
- Working closely with data engineers, data scientists and development teams to build the entire ML / data-driven infrastructure - reliable data pipelines and shared, clean data sources.
- Explaining and recommending optimal data-driven / ML solutions to both business stakeholders and developer teams.
- Sharing and improving ML / MLOps knowledge within the organization.
Who we're looking for:
- Very good knowledge of Python programming, SQL, and Git
- Experience in training and validating ML solutions (decision trees, neural nets, regression models)
- Experience in building and deploying ML solutions in production environments
- Ability to scale solutions according to infrastructure or business requirements
- Good understanding of data lake / lakehouse architecture
- Good knowledge of English (C1) (work in an international environment)
What will be considered an asset:
- Professional experience with PySpark programming
- Experience working with the Databricks Lakehouse platform ecosystem
- Professional experience with recommender systems and NLP
- Experience in structured streaming and Scala programming
- Familiarity with MLOps tools such as MLflow
- Previous experience with e-commerce data ecosystems
Sounds good?
We can't wait to get to know you. Apply now!