Job Description
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
8+ years of experience as a Data Engineer or in a similar role.
Strong expertise in the Hadoop ecosystem (HDFS, MapReduce, Hive, HBase, etc.).
Proficiency in Scala for data processing and ETL development.
Extensive experience with Apache Spark for distributed data processing.
Experience with data integration tools and ETL frameworks.
Familiarity with NoSQL databases (Cassandra, MongoDB) and SQL databases (MySQL, PostgreSQL).
Experience with data modeling, data warehousing, and data lakes.
Knowledge of cloud platforms (AWS, Azure, GCP) and big data services (EMR, Databricks, HDInsight).
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Experience with stream processing frameworks (Kafka, Flink).
Familiarity with DevOps practices and CI/CD pipelines.
Experience with machine learning and data science workflows.
Skills: Apache Spark, Azure, Databases