General
We are looking for Senior Data Engineers to join our client’s Data Team and contribute to the design, development, and optimization of scalable data pipelines. The ideal candidate will have expertise in big data technologies, Google Cloud Platform (GCP) services, and data engineering best practices that enable efficient data transformation, warehousing, and analytics.
Responsibilities/Activities
- Design, develop, and optimize ETL/ELT pipelines and data transformation workflows using SQL and GCP-native ETL tools
- Ensure high performance, scalability, and cost efficiency while supporting data warehousing initiatives
- Build, optimize, and maintain scalable data pipelines leveraging Python with PySpark or Apache Beam
- Build and manage big data processing solutions leveraging Apache Spark on GCP Dataproc and Apache Beam on Dataflow for batch and streaming workloads
- Utilize Cloud Storage, BigQuery, and other GCP data lake services to enable scalable and efficient storage, retrieval, and analytics of large datasets
- Develop and implement robust data models to support Data Warehousing (BigQuery) and Business Intelligence reporting (Looker, Data Studio)
- Ensure data quality, validation, and optimization to maintain consistent, reliable, and trustworthy datasets
- Leverage GCP cloud-native services (Pub/Sub, Cloud Functions, Cloud Run, Workflows) to design and deliver scalable, event-driven data solutions
- Collaborate closely with BI and analytics teams to provide seamless access to data and support actionable insights
- Implement and maintain CI/CD pipelines for automated deployment, monitoring, and operational management of data solutions
Requirements
Technical
- At least 5 years of experience in Data Engineering
- Strong proficiency in Python and SQL
- Experience with PySpark or Apache Beam
- Experience with workflow orchestration (Apache Airflow / Cloud Composer, GCP Workflows)
- Expertise in GCP Data Lake & Warehouse stack (Cloud Storage, BigQuery, Dataproc, Dataflow, Pub/Sub)
- Experience with BI tools (Looker, Data Studio preferred)
- Strong knowledge of ETL best practices, Data Modeling, and Data Warehousing
- Solid understanding of CI/CD pipelines and DevOps principles
Education
- University degree in Computer Science, Mathematics, or a related field
Others
- Good level of English (oral and written)
- Strong analytical and problem-solving skills
- Ability to quickly learn and adapt to new technologies
- Ability to work in a fast-paced, agile environment
Nice-to-have requirements
- Experience with Snowflake or multi-cloud architectures
- Knowledge of Terraform and/or Deployment Manager
- Experience in the banking and financial services industry