Hadoop Operational Expert – closed
Responsible for the ongoing administration of on-prem Hadoop infrastructure and for the daily operations that keep a 5×24 production Cloudera installation clean and in a healthy state.
- Collaborate closely with the Data/Software Engineers, who deliver use cases onto the R&D platforms but are not allocated to operations.
- Operate the Hadoop stack on-prem on Red Hat 7.6 and 7.7 with the Cloudera (formerly Hortonworks) stack, HDP 3.1 and HDF 3.4. All components are in use (Ambari, HDFS, Spark, NiFi, Hive, HBase, Solr, Knox, Ranger, etc.) under strict security controls (HVD – high-value data).
- Additionally operate PostgreSQL, Tomcat, and Neo4j for specialized applications.
- Manage the connection to the internal AppStore (Kubernetes/GitLab) via the AppStore container registry, from which containers are pulled and hosted on servers using Podman/Buildah as a drop-in replacement for Docker.
- Use the IBM Spectrum Scale HDFS Transparency layer to connect our HDFS with the storage of the supercomputer Quriosity via Hadoop ViewFS.
Your Skills and Experience
- Bachelor’s degree in Information Technology, Computer Science, or another relevant field
- Knowledge of SQL and NoSQL databases and concepts, and of search engines such as Solr/Elasticsearch.
- General operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
- Hadoop ecosystem skills such as HBase, Hive, Pig, Mahout, etc.
- Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure jobs, and take backups.
- Fluent English skills.
- Good knowledge of Linux, since Hadoop runs on Linux.
Why You'll Love Working Here
- Room for creativity in your own field of responsibility
- Education and training courses for professional and personal development
- Above-average payment
If you are interested in this position, please send your CV to Mr. Le Hai via email at [email protected]