Responsibilities:
- Drive and lead delivery of key projects within time and budget
- Drive and lead solution design and build to ensure scalability, performance, and reuse
- Ability to recommend and drive consensus around preferred data integration/engineering approaches
- Ability to anticipate data bottlenecks (latency, quality, speed) and recommend appropriate remediation strategies
- Ensure on-time, on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards.
- Facilitate work intake, prioritization, and release timing, balancing demand and available resources; ensure tactical initiatives are aligned with the strategic vision and business needs.
- Ensure sustainability of live pipelines in the production environment
- Hands-on experience implementing and designing data engineering workloads using Spark, Databricks, or similar modern data processing technology
- Work with product owners, scrum masters, and the technical committee to define the three-month roadmap for each program increment (sprint-wise)
- Manage and scale data pipelines responsible for ingestion and data transformation.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Required skill set and experience:
- Bachelor's degree in Computer Science, MIS, Business Management, or a related field
- 10+ years of experience in Information Technology
- 4+ years of Azure, AWS, and cloud technologies
- 6+ years of experience writing complex SQL queries in a DWH or data lake environment
- 5+ years of experience with a programming language (e.g., Python, Java, or Scala), preferably Python.
- Experience building frameworks for different processes such as data ingestion and DataOps
- Good written and verbal communication skills, along with collaboration and listening skills
- Well versed in Spark optimization techniques
- Experience dealing with multiple vendors as necessary.
- Hands-on experience writing complex SQL queries
- Big Data (Hadoop, HBase, MapReduce, Hive, HDFS, etc.) and Spark/PySpark
- Sound skills and hands-on experience with Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage Explorer
- Proficient in creating Data Factory pipelines for on-cloud ETL processing (copy activity, custom Azure development, etc.)
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
Mandatory Non-Technical Skills:
- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities.
- Enthusiasm for learning functional knowledge specific to the finance business
- Ability to work with virtual teams (remote work locations), within a team of technical resources (employees and contractors) based in multiple global locations.
- Participate in technical discussions, driving clarity on complex issues/requirements to build robust solutions
- Strong communication skills to meet with delivery teams and business-facing teams, understand sometimes-ambiguous needs, and translate them into clear, aligned requirements.
Job Types: Full-time, Permanent
Pay: 30,000.00 - 35,000.00 per month
Benefits:
- Cell phone reimbursement
- Internet reimbursement
Schedule:
Supplemental Pay: Performance bonus
Education:
Experience:
- AutoCAD: 3 years (Required)
- Sketchup: 3 years (Required)
- ARCHITECT: 3 years (Required)
Language:
Work Location: In person