JOB DESCRIPTION
Role: Mid-level Data Engineer
Experience: 8-12 Years
Mandatory Skills: Azure + Python + Databricks/Spark + SQL
Location: Bangalore/Hyderabad
Notice Period: Less than 60 days
Job Description:
Skills: Azure, Python, SQL, Kafka, NoSQL, .NET C#
- At least 8-10 years of experience developing data solutions on the Azure cloud platform
- Strong programming skills in Python
- Good understanding of massively parallel processing (MPP) systems and experience building data warehouses/data marts on Azure Synapse SQL pools (SQL DW)
- Strong SQL skills and experience writing complex yet efficient stored procedures, functions, and views in T-SQL
- Solid understanding of Spark architecture and experience performance-tuning big data workloads in Spark
- Experience building complex data transformations on both structured and semi-structured data (XML/JSON) using PySpark and SQL, and refactoring traditional ML models to run on the Spark framework
- Familiarity with the Azure Databricks environment and deploying Spark code to Databricks clusters
- Good understanding of NoSQL and its use cases, modelling NoSQL schemas and containers, and building integrations to read from/write to Azure Cosmos DB
- Familiarity with Azure Cognitive Search/Elasticsearch, their use cases, and building integrations to load data into search services
- Good understanding of distributed systems and experience building real-time integrations with Kafka
- Good understanding of the Azure cloud ecosystem; Azure data certifications (DP-200/201/203) will be an advantage
- Proficient with Visual Studio 2019+, IntelliJ/Eclipse, and source control using Git
- Good understanding of Agile, DevOps, and CI/CD automated deployment (e.g. Azure DevOps, Jenkins)
- Good knowledge of microservices architecture; experience building microservices with .NET Core Web API will be an advantage
- Experience building RESTful services using Python FastAPI will be an advantage