
CAI

Data Engineer

  • 4 months ago
  • Over 500 applicants

Job Description

Req Number

R2984

Employment Type

Full time

Worksite Flexibility

Remote

Who We Are

CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary

We are looking for a motivated Data Engineer ready to take us to the next level! If you have experience with framework development (Spark, PySpark), AWS (S3, Redshift, AWS Glue, EMR, Databricks), ETL pipelines, SQL, Python, and Lambda, and are looking for your next career move, apply now.

We are looking for a Data Engineer. This position will be full-time and remote.

What You'll Do

  • Design and develop data lakes; manage data flows that integrate information from various sources into a common data lake platform through an ETL tool
  • Code and manage delta lake implementations on S3 using technologies such as Databricks or Apache Hudi
  • Triage, debug, and fix technical issues related to data lakes
  • Design and develop data warehouses for scale
  • Design and evaluate data models (star, snowflake, and flattened)
  • Design data access patterns for OLTP- and OLAP-based transactions
  • Coordinate with business and technical teams through all phases of the software development life cycle
  • Participate in making major technical and architectural decisions
  • Maintain and manage code repositories such as Git

What You'll Need

  • 5+ years of experience operating on AWS Cloud
  • 3+ years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, Redshift, and Databricks
  • 3+ years of experience building data warehouses on Snowflake, Redshift, HANA, Teradata, Exasol, etc.
  • 3+ years of working knowledge of Spark and PySpark
  • 3+ years of experience building delta lakes using technologies such as Apache Hudi or Databricks
  • 3+ years of experience with ETL tools and technologies
  • 3+ years of experience in a programming language (Python, R, Scala, Java)
  • Bachelor's degree in computer science, information technology, data science, data analytics, or a related field
  • Experience working on Agile projects and with Agile methodology in general

Physical Demands

  • Sedentary work that involves sitting or remaining stationary most of the time with occasional need to move around the office to attend meetings, etc.
  • Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard, and monitor.

Reasonable Accommodation Statement

If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to [Confidential Information] or (888) 824 8111.

More Info

Industry: Other

Function: Technology

Job Type: Permanent Job


Date Posted: 29/06/2024

Job ID: 83420291


Last Updated: 29-06-2024 06:27:23 AM