At Transporeon we embrace transformation and change in total sync with one another. We rethink, reinvent and rework ideas from one moment to the next – as many times as is necessary to get the job done right. That’s how we respond to the new challenges that we face each and every day. And regardless of whether you are just starting your career or are already a pro – we believe you can be the transformation. Are you ready?

Data Scientist (f/m/d)

Your transformation challenge?

  • a unique chance to leave a lasting mark on our data landscape for years to come: be part of the design, development and delivery of our next-generation data platform
  • varied work building and maintaining customized, scalable end-to-end data pipelines that power multi-faceted analytics and data products
  • challenging tasks around ensuring machine-learning-grade data quality through continuous improvements in data cleaning and enrichment
  • constantly evolving internal tooling, to which you can contribute your own ideas
  • the opportunity to advance the roll-out of analytical self-services and machine learning models across the wider organization
  • opportunities to collaborate on a variety of projects with Data Scientists, other engineering teams, and internal key stakeholders and data consumers

Possible countries of employment:

Germany, Poland, Italy, Austria, the Netherlands, Belgium, France, the UK, Denmark, Sweden, Latvia, Finland, Slovakia, Spain, Portugal

How can you enrich our team?

  • being a quick learner and a strong problem solver with attention to detail, capable of taking on loosely defined problems as well as breaking down and simplifying complex technical concepts
  • having gathered industry experience in software or data engineering with Python, Java or Scala, plus at least basic knowledge of Linux and container virtualization (e.g. Docker)
  • enjoying automation and having already applied some of the concepts behind DevOps
  • having a track record of implementing ETL/ELT workflows in the cloud using SQL, Apache Spark (ideally in the form of AWS Glue) or the wider Hadoop ecosystem
  • having strong skills in designing and evolving storage schemas, as well as in querying the stored content efficiently, preferably in the context of a larger data lake or warehouse (e.g. Redshift)
  • having a basic understanding of the broader AWS ecosystem, including distributed computing and stream processing