Cloud Management, Information Technology, Cloud Data Services
Necessary skills / tools:
SQL & Python, PySpark
Databricks: Lakehouse concept, Unity Catalog
Azure Services: ADF, Databricks, Synapse, ADLS, App Services
Data warehousing
Data modelling
Job Description:
Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
Provide technical and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modelling, database design & implementation, data visualization, and advanced analytics.
Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
Maintain best-practice standards, including naming standards, for the development of cloud-based data warehouse solutions.
Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (see the pipeline sketch after this list).
Extensive knowledge of Unity Catalog, Delta Lake, the Lakehouse architecture, and Delta Sharing.
Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times.
Work with other members of the project team to support delivery of additional project components (API interfaces).
Evaluate the performance and applicability of multiple tools against customer requirements.
Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
Integrate Databricks with other technologies (ingestion tools, visualization tools).
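A minimal PySpark sketch of the kind of ingestion pipeline described above, assuming a Databricks notebook where spark is the provided SparkSession; the ADLS landing path and the main.sales.orders_bronze Unity Catalog table name are hypothetical examples:

from pyspark.sql import functions as F

# Read raw files from an ADLS landing zone (path is illustrative only).
raw = (spark.read
       .format("csv")
       .option("header", "true")
       .load("abfss://landing@examplelake.dfs.core.windows.net/sales/"))

# Basic cleansing: de-duplicate, type the date column, drop rows missing the key.
cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("order_id").isNotNull()))

# Persist as a Delta table governed by Unity Catalog (catalog.schema.table namespace).
(cleaned.write
        .format("delta")
        .mode("append")
        .saveAsTable("main.sales.orders_bronze"))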
Requirements:
Proven experience working as a Databricks developer.
Highly proficient in the Spark framework (Python and/or Scala).
Extensive knowledge of data warehousing concepts, strategies, and methodologies.
Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
Experience in designing and hands-on development of cloud-based analytics solutions.
Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
Thorough understanding of Azure Cloud Infrastructure offerings.
Strong experience in common data warehouse modelling principles, including Kimball dimensional modelling (a minimal star-schema sketch follows this list).
Working knowledge of Python.
Experience developing security models.
Databricks & Azure Big Data Architecture certification would be a plus.
Must be team-oriented, with strong collaboration, prioritization, and adaptability skills.
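A minimal sketch of a Kimball-style star-schema load in the same stack, assuming the conformed dimension tables and key columns named below (all hypothetical) already exist in Unity Catalog; the fact table carries surrogate keys resolved from the dimensions plus the measures:

# Resolve surrogate keys by joining the cleansed source to conformed dimensions.
fact_sales = spark.sql("""
    SELECT d.date_key,
           c.customer_key,
           p.product_key,
           s.quantity,
           s.net_amount
    FROM   main.sales.orders_bronze s
    JOIN   main.warehouse.dim_date     d ON d.calendar_date = s.order_date
    JOIN   main.warehouse.dim_customer c ON c.customer_id   = s.customer_id
    JOIN   main.warehouse.dim_product  p ON p.product_id    = s.product_id
""")

# Write the fact table as Delta in the warehouse schema.
fact_sales.write.format("delta").mode("overwrite").saveAsTable("main.warehouse.fact_sales")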
Date Posted: 16/07/2024
Job ID: 84956031