Data Engineer (DataBricks)

EverWatch
Remote
United States

Overview






EverWatch is a full-service government solutions company providing advanced defense, intelligence, and deployed support to our country’s most critical missions. Harnessing the most advanced technology and solutions, we strengthen defenses and control environments to preserve continuity and ensure mission success.

 

EverWatch is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), gender identity, sexual orientation, national origin, age (40 or older), disability, genetic information, citizenship or immigration status, and veteran status or any other factor prohibited by applicable law.

EverWatch employees are focused on tackling the most difficult challenges of the US Government. We offer some of the best salary and benefits packages in our industry to identify and retain top talent in support of our critical mission objectives.









Responsibilities






EverWatch is pioneering the rollout of zero-trust infrastructure, called Thunderdome, across DoD agencies. This role offers practical, hands-on experience with a cybersecurity framework that is rapidly being adopted across the industry; zero-trust experience will make you a valuable asset to future teams and strengthen your long-term career and earning potential.

 

We are seeking an experienced Data Engineer (DataBricks) to join our team! As a Data Engineer, you will provide advanced prototype solutions that resolve dataflow and processing issues and enable systemic improvements. You will normalize, extract, transform, and load (ETL) data; produce Spark analytics and prototype analytics; and develop endpoint schemas to process incoming data. You will increase the productivity of client staff by improving the efficiency of client systems, and you will define and build the capabilities required to achieve the client's vision through improved data strategy and operations. You should have skills and experience in the client's current and planned architecture, as well as experience working with the systems that flow into client repositories.
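As a rough illustration of the kind of work described above, the sketch below shows a minimal PySpark ETL job: it reads raw endpoint events, normalizes a few fields, and writes a partitioned Parquet table for downstream Spark analytics. The paths, column names, and schema are hypothetical placeholders, not details of any EverWatch or client system.

    # Minimal, illustrative PySpark ETL job. All paths and column names
    # are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-normalize").getOrCreate()

    # Extract: load raw endpoint events from a landing zone
    raw = spark.read.json("/mnt/landing/endpoint_events/")

    # Transform: normalize timestamps and hostnames, drop malformed rows,
    # and derive a partition column
    clean = (
        raw.withColumn("event_time", F.to_timestamp("event_time"))
           .withColumn("hostname", F.lower(F.trim(F.col("hostname"))))
           .dropna(subset=["event_time", "hostname"])
           .withColumn("event_date", F.to_date("event_time"))
    )

    # Load: write a partitioned Parquet table for downstream analytics
    clean.write.mode("append").partitionBy("event_date").parquet(
        "/mnt/curated/endpoint_events/"
    )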









Qualifications






Basic Qualifications:

 

3-5+ years of experience using Python, SQL, and PySpark
3+ years of experience utilizing Databricks or Apache Spark
Experience designing and maintaining Data Lakes or Data Lakehouses
Experience with big data tools such as Spark, NiFi, Kafka, Flink, or others at multi-petabyte scale
Expertise in designing and maintaining ETL/ELT data pipelines utilizing storage/serialization formats/schemas such as Parquet and Avro (a brief illustrative sketch follows this list)
Experience administering and maintaining data science workspaces and tool benches for Data Scientists and Analysts
Secret Clearance
HS diploma or GED
DoD 8570 IAT II compliance certification (such as Security+, CCNA Security, GSEC, etc.)
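The sketch below is a brief, hypothetical example of the Parquet/Avro and lakehouse pipeline experience listed above, not a prescribed approach. It assumes a Databricks runtime, where Avro and Delta Lake support are built in, and uses made-up paths, table names, and partition columns.

    # Illustrative only: land Avro files into a Delta Lake table
    # (Delta stores its data files as Parquet). Paths, the table name,
    # and the ingest_date partition column are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("avro-to-delta").getOrCreate()

    # Avro and Delta support ship with the Databricks runtime; plain
    # Apache Spark needs the spark-avro and delta-spark packages.
    events = spark.read.format("avro").load("/mnt/landing/events_avro/")

    (
        events.write.format("delta")
              .mode("append")
              .partitionBy("ingest_date")
              .saveAsTable("curated.events")
    )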


Additional Qualifications:

 

Experience with Apache NiFi; multi-cluster or containerized environment experience preferred
Experience with SQL
Knowledge of cybersecurity concepts, including threats, vulnerabilities, security operations, encryption, boundary defense, auditing, authentication, and supply chain risk management
Experience with applications, appliances, or machines aligned to DoD/DISA Security Technical Implementation Guides (STIGs) and Security Requirements Guides (SRGs)
Experience writing playbooks and scripts for automation tools including Terraform, Ansible, or Puppet for Infrastructure as Code (IaC) and Configuration as Code (CaC)
Experience working on real-time data and streaming applications
Bachelor’s Degree

Clearance Level

Secret

Job Locations

US-MD-Annapolis Junction

Skills

AWS, NiFi, Big Data, ETL, PySpark, SQL, Data Pipelines, Azure