Job Summary: We are looking for a Technical Lead with a strong background in Databricks, Airflow, PySpark/Spark, Kafka, and cloud technologies to lead our data engineering efforts. The successful candidate will be responsible for architecting, designing, and implementing scalable data pipelines and solutions that support our business objectives. This role requires a combination of technical expertise, leadership skills, and the ability to collaborate effectively with cross-functional teams.
Technical Leadership:
Lead and mentor a team of data engineers, providing technical guidance and support.
Drive the design, development, and deployment of scalable data pipelines and solutions using Databricks, Airflow, PySpark/Spark, Kafka, and cloud technologies.
Ensure best practices and standards are followed in coding, testing, and documentation.
Architecture and Design:
Architect and design data processing systems that are robust, scalable, and maintainable.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
Evaluate and recommend tools and technologies to enhance our data infrastructure.
Data Pipeline Development:
Develop and maintain ETL/ELT pipelines using Databricks, Airflow, PySpark/Spark, and Kafka.
Implement data integration and data processing solutions to handle large-scale, complex datasets.
Optimize data pipelines for performance, scalability, and reliability.
Cloud Integration:
Design and implement cloud-based data solutions, leveraging platforms such as AWS, Azure, or Google Cloud.
Ensure data security, compliance, and governance in cloud environments.
Collaboration and Communication:
Work closely with cross-functional teams including data science, analytics, and product development to deliver data-driven solutions.
Communicate technical concepts and solutions effectively to both technical and non-technical stakeholders.
Continuous Improvement:
Stay current with industry trends and emerging technologies in data engineering.
Propose and implement improvements to existing data infrastructure and processes.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience:
8+ years of experience in data engineering or a related field.
Proven experience as a technical lead or senior data engineer.
Extensive experience with Databricks, Apache Airflow, and PySpark/Spark.
Knowledge of Apache Kafka is preferred.
Must have experience with system design, architecture, and documentation.
Strong experience with cloud platforms such as AWS, Azure, or Google Cloud.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong leadership and team management abilities.
Effective communication and interpersonal skills.
Ability to work in a fast-paced, collaborative environment.
Job Types: Full-time, Permanent
Pay: 1,000,000.00 - 1,500,000.00 per month
Schedule: Day shift
Work Location: In person