Location: Tel Aviv, Israel

Versatile is an innovative AI-driven construction intelligence startup committed to transforming the construction industry with cutting-edge technology. Our mission is to enhance the efficiency, safety, and productivity of construction projects through intelligent solutions.

About the job:

As a Senior Data Scientist (Full Stack), you will develop end-to-end data solutions, from data collection and preprocessing through model development and deployment to production monitoring and continuous improvement.

You’ll join our Data and Research team, where you will work with Data Engineering, Data Science, and Data Analytics to extend our product offering with an AI-driven, actionable, and insightful user experience.

You will also work closely with our product managers and our backend and full-stack engineers to understand business requirements and then define, implement, deploy, and monitor your solutions.


The ideal candidate will have a strong background in data science and software engineering, along with solid familiarity with data platforms and cloud computing.

What you will be doing:
  • Design and implement robust, scalable, and maintainable code for data processing, analysis, and modeling, and continuously monitor and improve your solutions in production.
  • Work closely with different stakeholders to gather requirements and translate them into scalable solutions. 
  • Analyze existing data and find business opportunities for new initiatives that enhance our product offering and better cater to the needs of our users.
  • Use cloud computing and data platforms (such as Databricks, AWS, Azure, or GCP) to develop and deploy scalable data solutions.
  • Optimize and tune data pipelines, algorithms, and models for performance, scalability, and quality.
  • Ensure code quality through rigorous testing, code reviews, and adherence to best practices and coding standards.
  • Stay up-to-date with the latest advancements in data science, machine learning, and cloud computing technologies.
  • Your team’s Definition of Done will always be driven by putting your work in the hands of users and impacting the company’s goals.
Requirements:
  • At least 4 years of experience in data science, machine learning, and software engineering.
  • Solid understanding of machine learning algorithms, statistical modeling, and data visualization techniques.
  • Strong proficiency in Python, with a focus on writing clean, scalable, and robust code. Knowledge of SQL and database systems.
  • Experience with cloud computing and data platforms like Databricks, AWS, Azure, or GCP, including services like EC2, S3, and Lambda. Familiarity with event-driven architecture and Kafka is a plus.
  • Familiarity with containerization technologies such as Docker and with orchestration tools.
  • An academic degree in a quantitative field such as Computer Science, Mathematics, Statistics, or Engineering, or equivalent work experience.
  • Strong communication and collaboration skills, with the ability to effectively communicate technical concepts to non-technical stakeholders.
  • Previous work experience as an algorithm engineer, backend engineer, or data analyst is a plus.
  • Experience with DevOps/MLOps practices and tools for CI/CD pipelines, such as Jenkins, GitLab CI, or CircleCI – a plus.
  • Contributions to open-source projects or participation in relevant communities – a plus.