Client: IT Service Center
Office Location: Bucharest
Contract Duration: At least 12 months
Project No.: 001200325

General

We are looking for Senior Data Engineers to join our client’s Data Team, contributing to the design, development, and optimization of scalable data pipelines. The ideal candidate will have expertise in big data technologies, cloud platforms (GCP preferred), and data engineering best practices, enabling efficient data transformation, warehousing, and analytics.

Responsibilities/Activities

  • Design, develop, and maintain scalable, high-performance data pipelines using Python, PySpark, and Airflow
  • Build and optimize ETL pipelines for performance, cost-efficiency, and scalability
  • Work with big data processing technologies, including Apache Spark, Hadoop, and Hive
  • Develop data transformation, ETL processes, and data warehousing solutions using SQL and ETL frameworks
  • Utilize Databricks and Data Lakes for scalable data storage and analytics
  • Design and implement data models to support data warehousing (DWH) and business intelligence (BI) needs
  • Ensure data quality, validation, and optimization to maintain reliability and consistency
  • Work with cloud platforms to build scalable, cloud-native data solutions
  • Collaborate with BI and analytics teams to ensure seamless data accessibility and insights delivery
  • Implement CI/CD pipelines for automated deployment, monitoring, and management of data solutions

Requirements

Technical

  • At least 5 years of experience in Data Engineering
  • Strong proficiency in Python, Apache Spark, and PySpark
  • Experience with workflow orchestration tools like Airflow
  • Experience in big data processing using Hadoop and Hive
  • Expertise in data lake and data warehouse solutions (Databricks)
  • Experience with BI tools (Tableau/Looker/Power BI) for data visualization
  • Experience working with cloud platforms (GCP/AWS/Azure)
  • Advanced SQL skills for data manipulation, validation, and optimization
  • Strong knowledge of ETL best practices, data modeling, and data warehousing
  • Proven ability to scale ETL pipelines for efficiency and cost-effectiveness
  • Solid understanding of CI/CD pipelines

Education

  • University degree in Computer Science, Mathematics, or a related field

Others

  • Good level of English (oral and written)
  • Strong analytical and problem-solving skills
  • Ability to quickly learn and adapt to new technologies
  • Ability to work in a fast-paced, agile environment

Nice-to-have requirements

  • Experience in the financial/banking industry
  • Experience with streaming data solutions (Kafka, Pub/Sub)
  • Experience with Terraform or Infrastructure as Code (IaC)
  • Knowledge of machine learning pipelines and MLOps

Apply for this position
