Big Data Engineer (Northern Area, Bucharest)
12 Jun 2020

  • Client: Multinational Telecom Company, Romania  
  • Office Location: Northern Area, Bucharest 
  • Contract Duration: At least 2 years

Job Description

General

We are looking for Big Data Engineers to join our client's Data Management department and participate in the development and maintenance of Big Data solutions.

Responsibilities/Activities 

  • Develop, deploy and operate Big Data solutions
  • Carry out maintenance, testing and evaluation activities
  • Develop and maintain documentation relating to Hadoop Administration tasks
  • Collect, store, process and analyse large data sets
  • Choose optimal solutions for the above purposes
  • Implement, maintain and monitor them
  • Integrate solutions with the architecture used across the company

Requirements

Technical

  • At least 4 years of experience in a similar role
  • At least 3 years of working experience with the Cloudera data platform
  • Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources
  • Programming experience in Python, Spark, Kafka and/or Java
  • Experience in scripting for automation requirements: Shell/Python/Groovy
  • Experience in continuously improving the DevOps pipeline and tooling to provide active management of the CI/CD processes
  • Experience in developing and maintaining documentation relating to Hadoop Administration tasks: upgrades, patching, service installation and maintenance
  • Understanding of Hadoop’s security mechanisms and experience implementing Hadoop security (Apache Sentry, Kerberos, Active Directory, TLS/SSL)
  • Understanding of the role of Certificate Authorities, and of certificate setup and configuration on Linux in relation to TLS/SSL
  • Understanding of networking principles and ability to troubleshoot (DNS, TCP/IP, HTTP)
  • Knowledge of configuring and troubleshooting all components of the Hadoop ecosystem: Cloudera, Cloudera Manager, HDFS, Hive, Impala, Oozie, YARN, Sqoop, ZooKeeper, Flume, Spark, Spark Standalone, Kafka, Kafka Connect, Apache Kudu, Cassandra, HBase
  • Knowledge of data cleaning, wrangling, visualization and reporting, with an understanding of how to use the associated tools and applications efficiently
  • Willingness to learn new programming languages to meet goals and objectives

Education

  • Technical University degree
  • Equivalent experience will be considered

Others

  • Good level of English, verbal & written

Nice to have

  • Experience with RDBMS integration, Lambda architectures, integration of Data Warehouses, Data Lakes and Data Hubs, and cloud implementations of Big Data stacks
  • Experience with Big Data ML toolkits, such as Mahout, SparkML or H2O
  • Knowledge of: ElasticSearch, Kibana, Grafana, Git, SCM, Atlassian Suite (Confluence, Jira, Bitbucket), Jenkins, TeamCity, Docker, Kubernetes
  • Knowledge of NoSQL databases such as MongoDB

Send your CV now using the form below.
