We are seeking an experienced Scala Developer with a strong background in Kafka and Big Data technologies. The ideal candidate will have extensive experience designing and implementing scalable data solutions, with a focus on performance and reliability. You will work closely with our data engineering and AI teams to build and maintain high-performance data pipelines and applications.
Key Responsibilities:
- Develop and maintain scalable applications using Scala and related technologies
- Design, implement, and manage data pipelines utilizing Kafka and other Big Data tools
- Collaborate with cross-functional teams to define, design, and ship new features
- Optimize application performance, ensuring high throughput and low latency
- Monitor and troubleshoot data processing workflows, ensuring data integrity and reliability
- Participate in code reviews, provide feedback, and improve code quality
- Stay up to date with the latest trends and best practices in Big Data and functional programming
Requirements:
- Strong proficiency in Scala, with 4+ years of hands-on experience
- Extensive experience with Kafka, including stream processing, Kafka Connect, and Kafka Streams
- Solid understanding of Big Data technologies, including Hadoop, Spark, HDFS, and Hive
- Proficiency in SQL and NoSQL databases
- Experience with ETL pipelines and data integration workflows
- Familiarity with data warehousing concepts and cloud platforms (AWS, GCP, Azure)
- Knowledge of containerization (Docker) and orchestration tools (Kubernetes) is a plus
- Excellent problem-solving skills and a proactive attitude