
Sr. DevOps Engineer - Big Data 14816 Scottsdale, AZ 7/23/2021 3:20:00 PM

IT
Contractor - W2

Job Description

Big Data Administrator
Sr. DevOps Engineer - Big Data
Overall Purpose
This position designs, develops, tests, and maintains infrastructure as code, CI/CD patterns, configuration management, and containerized product applications, providing technical leadership and hands-on support for internal systems.
Essential Functions
    • Design, develop, document, test, and debug new and existing configuration management patterns and infrastructure as code.
    • Design, create, and maintain comprehensive policies and technical documentation of best practices for all implemented system configurations, ensuring efficient planning and execution.
    • Perform requirements analysis and design a model for infrastructure and application flow.
    • Conduct design meetings and analyze user needs to determine technical requirements.
    • Write technical specifications (based on conceptual design and business requirements).
    • Identify and evaluate new technologies for implementation. Recommend and implement changes to existing hardware and operating system infrastructure, including patches, users, file systems, and kernel parameters. Seek out and implement new technologies to continually simplify the environment while improving security and performance.
    • Analyze results, failures, and bugs to determine the causes of errors, and tune the automation pipeline to correct them and achieve the desired outcome.
    • Diagnose and resolve hardware-related server problems (failed disks, network cards, CPU, memory, etc.); act as an escalation point for troubleshooting hardware and operating system problems and suggest performance tuning where appropriate.
    • Consult with end users to prototype, refine, test, and debug programs to meet their needs.
    • Proactively monitor the health of environments, fix any issues, and improve environment performance.
    • Coach and mentor staff on team policies, procedures, use cases, and best patterns.
    • Support and maintain products and add new features.
    • Participate in and follow change management processes for change implementation.
    • Support the company’s commitment to risk management and protecting the integrity and confidentiality of systems and data.
Minimum Qualifications
    • Education and/or experience typically obtained through completion of a Bachelor’s Degree in Computer Science or equivalent certifications.
    • Minimum of 7 years of related experience.
    • Demonstrated prior DevOps, software engineering or related experience.
    • Ability to work on multiple projects and a general understanding of software environments and network topologies.
    • Able to facilitate technical design sessions.
    • Minimum of 3 years of experience with modern application design patterns.
    • Solid understanding of an iterative software development process.
    • Ability to use Linux administration command-line programs and create/edit scripts.
    • Knowledge of one or more of the following tools: Chef, Ansible, Puppet.
    • Knowledge of infrastructure as code (IaC), containerization, and orchestration tools (Terraform, Docker, Kubernetes).
    • Experienced with security and encryption protocols.
    • Knowledge of one or more of the cloud infrastructure providers: AWS, GCP, Azure.
    • Must be able to work different schedules as part of an on-call rotation.
    • Must pass a background check and drug screen.
Preferred
    • Minimum 4 years of prior Big Data administration experience.
    • Extensive knowledge/experience in one or more of the following disciplines:
        ◦ Build and configuration of the Cloudera ecosystem (Hadoop, HBase, Kafka, Solr, etc.) on CDP version 7.1.5 or later
        ◦ Upgrade of existing Cloudera clusters to CDP version 7.1.5 or later
        ◦ Day-to-day administration of the Cloudera ecosystem
        ◦ Installation and configuration of software packages in Linux environments
        ◦ Certificate installation and configuration
        ◦ Server configuration, NFS configuration, LDAP configuration, and performance tuning
        ◦ Network troubleshooting (e.g., TCP/IP, DNS, server ports, switches/routers, firewalls)
        ◦ Analysis and diagnosis of large-scale infrastructure for networking and I/O bottlenecks
        ◦ Hands-on Chef and/or Ansible experience
        ◦ Bash scripting experience
        ◦ Linux systems administration (medium- to large-scale systems)
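To make the "Bash scripting" and "Linux systems administration" bullets above concrete, here is a minimal, hypothetical sketch of the kind of health-check scripting such a role involves; the threshold, mount points, and sample data are illustrative assumptions, not part of the posting:

```shell
#!/usr/bin/env bash
# Illustrative sketch only: flag any filesystem at or above a usage
# threshold, reading `df -P`-style output. Values below are hypothetical.

threshold=80

check_disk_usage() {
  # Reads `df -P`-style lines on stdin (header + one line per filesystem);
  # prints "ALERT <mountpoint> <use%>" for each filesystem at or above
  # $threshold percent capacity.
  awk -v limit="$threshold" 'NR > 1 {
    use = $5
    sub(/%/, "", use)              # "90%" -> "90"
    if (use + 0 >= limit) print "ALERT", $6, $5
  }'
}

# Example with canned input (in practice: df -P | check_disk_usage)
printf '%s\n' \
  'Filesystem 1024-blocks Used Available Capacity Mounted' \
  '/dev/sda1 100 90 10 90% /data' \
  '/dev/sda2 100 10 90 10% /home' | check_disk_usage
# -> ALERT /data 90%
```

In practice such a check would typically be wired into a cron job or monitoring agent rather than run by hand.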

Job Requirements