Careers

Careers: explore opportunities with Versatile.

Join the team and help us empower the built world.

Open opportunities


Filter by location

Senior Data Engineer

About the job:

Versatile is an innovative AI-driven construction intelligence startup, committed to transforming the construction industry with cutting-edge technology. Our mission is to enhance the efficiency, safety, and productivity of construction projects through intelligent solutions.

We’re hiring a hands-on Senior Data Engineer who wants to build data products that move the needle in the physical world. Your work will help construction professionals make better, data-backed decisions every day. You’ll be part of a high-performing engineering team based in Tel Aviv.

Responsibilities:

  • Lead the design, development, and ownership of scalable data pipelines (ETL/ELT) that power analytics, product features, and downstream consumption.
  • Collaborate closely with Product, Data Science, Data Analytics, and full-stack/platform teams to deliver data solutions that serve product and business needs.
  • Build and optimize data workflows using Databricks, Spark (PySpark, SQL), Kafka, and AWS-based tooling.
  • Implement and manage data architectures that support both real-time and batch processing, including streaming, storage, and processing layers.
  • Develop, integrate, and maintain data connectors and ingestion pipelines from multiple sources.
  • Manage the deployment, scaling, and performance of data infrastructure and clusters, including Databricks, Spark on Kubernetes, Kafka, and AWS services.
  • Use Terraform (and similar tools) to manage infrastructure-as-code for data platforms.
  • Model and prepare data for analytics, BI, and product-facing use cases, ensuring high performance and reliability.

Requirements:

  • 8+ years of hands-on experience working with large-scale data systems in production environments.
  • Proven experience designing, deploying, and integrating big data frameworks such as PySpark, Kafka, and Databricks.
  • Strong expertise in Python and SQL, with experience building and optimizing batch and streaming data pipelines.
  • Experience with AWS cloud services and Linux-based environments.
  • Background in building ETL/ELT pipelines and orchestrating workflows end-to-end.
  • Proven experience designing, deploying, and operating data infrastructure / data platforms.
  • Mandatory hands-on experience with Apache Spark in production environments. 
  • Mandatory experience running Spark on Kubernetes.
  • Mandatory hands-on experience with Apache Kafka, including Kafka connectors.
  • Understanding of event-driven and domain-driven design principles in modern data architectures.
  • Familiarity with infrastructure-as-code tools (e.g., Terraform) is an advantage.
  • Experience supporting machine learning or algorithmic applications is an advantage.
  • BSc or higher in Computer Science, Engineering, Mathematics, or another quantitative field.

Salary range:

Go ahead and introduce yourself.

If you don't see the perfect role, we'd still love to hear from you!