Job Title: Senior Data Engineer
Job Summary
We are seeking an experienced Data Engineer to join our team in building, enhancing, and maintaining scalable data solutions. This role offers an opportunity to work with cutting-edge data processing technologies and contribute significantly to backend development projects.
Key Responsibilities
- Architect and develop well-designed, testable, efficient, and secure code.
- Analyze complex problems and devise innovative solutions, adapting existing approaches to resolve diverse challenges with limited information.
- Exercise evaluation, judgment, and interpretation to independently select appropriate courses of action, with work reviewed at key milestones.
- Engage in the entire software development lifecycle, including CI/CD processes.
- Mentor and support new and junior team members.
- Creatively assess and resolve a wide range of issues, suggesting alternative approaches as needed.
- Work closely with data-consumption stakeholders and apply strong analytical knowledge to define data models that are both efficient and cost-effective.
Work Culture Highlights
- Sharing Culture: Teams showcase their work during our monthly Platform R&D Demos, fostering department-wide collaboration.
- Annual Hackathon: Participate in a four-day, department-wide Hackathon, teaming up across all R&D teams to innovate and develop new ideas, some of which make it into our products.
Minimum Qualifications
- Bachelor's or advanced degree in Computer Science, Software Engineering, or a related field.
- A minimum of 5 years of full-time work experience as a software developer.
- Proficiency with technologies such as Apache Spark, Databricks, Kafka, SQL, and Terraform.
- Strong programming experience in Python, Golang, Java, Scala, or another advanced object-oriented language.
- Hands-on experience building robust ETL pipelines.
- In-depth knowledge of data structures and algorithms.
- Skills in writing unit tests, data quality checks, and automated data testing frameworks.
- Proven backend development experience, including work on scalable, high-availability services.
- Excellent communication and interpersonal skills, with a track record of effective collaboration across cross-functional teams.
Additional Skills
- Experience with AWS technologies.
- Practical experience with Kubernetes for container orchestration.
- Proficiency in designing, implementing, and optimizing data models for data warehouses, databases, and data lakes.
- Familiarity with data warehousing concepts and platforms like Databricks, Iceberg, Snowflake, Amazon Redshift, or Google BigQuery.
- Experience with schema registries and data catalogs, e.g. AWS Glue or Unity Catalog.
- Experience with real-time stream processing systems such as Kafka Streams or Amazon Kinesis.
- Experience with monitoring tools like Prometheus, Grafana, or cloud-native monitoring solutions, and skill in optimizing performance for large-scale data processing.
Additional Qualities We Value
- Demonstrated leadership experience in projects, even if not reflected by a formal title.
- A diverse knowledge base that facilitates solving complex technical challenges.
- Comprehensive understanding of software development practices, including expertise in writing and debugging code.
- Willingness to engage in our robust onboarding and on-the-job training programs.
Interview Process
- Phone Technical Screening: Includes a brief coding exercise (approximately 30 minutes).
- Face-to-Face Interviews: Multiple team members will conduct 2-3 interviews to understand your background, potential role, and career goals. Be prepared to collaborate on a technical problem and discuss past projects.
Security Requirements
- Adhere to Arctic Wolf's Information Security policies, standards, processes, and controls to protect the confidentiality, integrity, and availability of Arctic Wolf business information assets.
- Employment is contingent upon passing a criminal background check and employment verification.