Location Country: India
Location Region:
Location City: Chennai
Job Details
Job Description:
We are looking for an Ab Initio Developer to design and build Ab Initio-based applications across the Data Integration, Governance, and Quality domains for our customer programs. The individual will work with Technical Leads, Senior Solution Engineers, and prospective Application Managers to build applications, roll out and support production environments, leverage the Ab Initio tech stack, and ensure the overall success of their programs. These programs are high-visibility, fast-paced key initiatives that generally aim to acquire and curate data and metadata from internal and external sources, provide analytical insights, and integrate with the customer's critical systems.
Technical Stack:
- Ab Initio 3.5.x or 4.0.x software suite: Co>Operating System, EME, BRE, Conduct>It, Express>It, Metadata Hub, Query>It, Control Center
- Ab Initio 3.5.x or 4.0.x frameworks: Acquire>It, DQA, Spec-To-Graph, Testing Framework
- Big Data: Cloudera or Hortonworks (Hadoop, Hive, YARN)
- Databases: Oracle 11g/12c, Teradata, MongoDB, Snowflake, Cassandra
- Others: JIRA, ServiceNow, Linux 6/7/8, SQL Developer, AutoSys, and Microsoft Office
Job Duties:
- Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate them with the portfolio of Ab Initio software.
- Build Web Service and RESTful graphs and create RAML or Swagger documentation.
- Demonstrate a thorough understanding of the Metadata Hub metamodel and the ability to analyze it.
- Demonstrate hands-on expertise with Metadata Hub out-of-the-box (OOB) import feeds.
- Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, and AWS S3.
- Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, and Data Quality Assessment.
- Build automation pipelines for Continuous Integration and Delivery (CI/CD), leveraging the Testing Framework and JUnit modules and integrating with Jenkins, JIRA, and/or ServiceNow.
- Build Query>It data sources for cataloguing data from different sources.
- Parse XML, JSON, and YAML documents, including hierarchical data models.
- Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components.
- Build Control Center jobs and schedules for process orchestration.
- Build BRE rulesets for reformat, rollup, and validation use cases.
- Write SQL scripts, perform database performance tuning and relational model analysis, and carry out data migrations.
- Identify performance bottlenecks in graphs and optimize them.
- Ensure the Ab Initio code base is appropriately engineered to maintain current functionality, adheres to performance-optimization and interoperability standards and requirements, and complies with client IT governance policies.
- Build regression and functional test cases, and write user manuals for various projects.
- Conduct bug fixing, code reviews, and unit, functional, and integration testing.
- Participate in the agile development process, and document and communicate issues and bugs relative to data standards.
- Pair with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids.
- Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
- Perform other duties and/or special projects as assigned.
Qualifications
- Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 3 years of experience
- Minimum 3 years of experience in the design, build, and deployment of Ab Initio-based applications
- Expertise in handling complex large-scale Data Lake and Warehouse environments
- Hands-on experience writing complex SQL queries, exporting and importing large amounts of data using utilities
- Excellent verbal communication skills
Education Level: Bachelor's Degree
Experience Level: 5-7 years