Job Description:
We are looking for an Ab Initio Developer to design and build Ab Initio-based applications across the Data Integration, Governance & Quality domains for our customer programs. The individual will work with Technical Leads, Senior Solution Engineers and prospective Application Managers to build applications, roll out and support production environments, leverage the Ab Initio tech stack, and ensure the overall success of their programs. These programs are high-visibility, fast-paced key initiatives that generally aim to acquire and curate data and metadata from internal and external sources, provide analytical insights, and integrate with the customer's critical systems.

Technical Stack:
- Ab Initio 3.5.x or 4.0.x software suite: Co>Op, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center
- Ab Initio 3.5.x or 4.0.x frameworks: Acquire>It, DQA, Spec-To-Graph, Testing Framework
- Big Data: Cloudera or Hortonworks Hadoop, Hive, YARN
- Databases: Oracle 11g/12c, Teradata, MongoDB, Snowflake, Cassandra
- Others: JIRA, ServiceNow, Linux 6/7/8, SQL Developer, AutoSys, and Microsoft Office

Job Duties:
- Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate them with the portfolio of Ab Initio software.
- Build web-service and RESTful graphs and create RAML or Swagger documentation.
- Demonstrate a thorough understanding of the Metadata Hub metamodel and the ability to analyze it.
- Demonstrate hands-on expertise with Metadata Hub out-of-the-box (OOB) import feeds.
- Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, and AWS S3.
- Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, and Data Quality Assessment.
- Build automation pipelines for Continuous Integration & Delivery (CI/CD), leveraging the Testing Framework & JUnit modules and integrating with Jenkins, JIRA and/or ServiceNow.
- Build Query>It data sources for cataloguing data from different sources.
- Parse XML, JSON & YAML documents, including hierarchical models (a brief illustrative sketch follows this list).
- Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components.
- Build Control Center jobs and schedules for process orchestration.
- Build BRE rulesets for reformat, rollup & validation use-cases (a plain-Python analogue follows this list).
- Build SQL scripts on the database, perform performance tuning and relational model analysis, and carry out data migrations.
- Identify performance bottlenecks in graphs and optimize them.
- Ensure the Ab Initio code base is appropriately engineered to maintain current functionality, and that development adheres to performance-optimization and interoperability standards and requirements and complies with client IT governance policies.
- Build regression test cases and functional test cases, and write user manuals for various projects (a minimal test sketch follows this list).
- Conduct bug fixing, code reviews, and unit, functional and integration testing.
- Participate in the agile development process, and document and communicate issues and bugs relative to data standards.
- Pair up with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids.
- Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
- Perform other duties and/or special projects as assigned.
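
As a rough illustration of the hierarchical-document parsing duty above (the XML, JSON & YAML bullet), the following Python sketch walks nested structures from all three formats. It assumes the third-party PyYAML package, and the sample documents and field names are invented for illustration, not taken from any customer schema.

```python
# Illustrative only: parse XML, JSON and YAML documents with hierarchical models.
# Assumes the third-party PyYAML package (pip install pyyaml); field names are made up.
import json
import xml.etree.ElementTree as ET

import yaml  # PyYAML

xml_doc = "<account id='42'><owner><name>Jane</name></owner></account>"
json_doc = '{"account": {"id": 42, "owner": {"name": "Jane"}}}'
yaml_doc = "account:\n  id: 42\n  owner:\n    name: Jane\n"

# XML: navigate the element tree and read attributes/children.
root = ET.fromstring(xml_doc)
print(root.get("id"), root.find("owner/name").text)

# JSON and YAML both load into nested dicts/lists, so the same
# traversal code can serve both formats.
for loaded in (json.loads(json_doc), yaml.safe_load(yaml_doc)):
    print(loaded["account"]["id"], loaded["account"]["owner"]["name"])
```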
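
BRE rulesets themselves are authored in Ab Initio's Business Rules Environment rather than written as code, but as a plain-Python analogue of the reformat, rollup and validation use-cases named above, the sketch below shows the intent: reformat incoming records, reject those failing a validation rule, and roll up amounts by key. The records, fields and rules are invented purely for illustration.

```python
# Plain-Python analogue of reformat / rollup / validation rules.
# Ab Initio BRE rulesets are built in the BRE UI; this only illustrates the logic.
from collections import defaultdict

records = [
    {"cust_id": "C1", "amount": "120.50", "country": "us"},
    {"cust_id": "C1", "amount": "75.00", "country": "US"},
    {"cust_id": "C2", "amount": "-10.00", "country": "GB"},
]

def reformat(rec):
    """Reformat rule: normalize types and casing (invented fields)."""
    return {"cust_id": rec["cust_id"],
            "amount": float(rec["amount"]),
            "country": rec["country"].upper()}

def validate(rec):
    """Validation rule: amounts must be non-negative."""
    return rec["amount"] >= 0

clean, rejects = [], []
for rec in map(reformat, records):
    (clean if validate(rec) else rejects).append(rec)

# Rollup rule: total amount per customer key.
totals = defaultdict(float)
for rec in clean:
    totals[rec["cust_id"]] += rec["amount"]

print(dict(totals))  # e.g. {'C1': 195.5}
print(rejects)       # records that failed validation (the negative amount)
```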
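
For the regression- and functional-testing duty, and outside of Ab Initio's own Testing Framework, a regression check might look something like this minimal Python unittest sketch, which reruns a job and compares its output file against a stored baseline. The file paths, the run_graph() helper and the wrapper script name are hypothetical placeholders, not part of any real toolkit.

```python
# Minimal regression-test sketch: compare current job output with a stored baseline.
# Paths, run_graph() and the wrapper script are hypothetical placeholders.
import subprocess
import unittest
from pathlib import Path

BASELINE = Path("baseline/customer_feed.out")
CURRENT = Path("target/customer_feed.out")

def run_graph() -> None:
    """Placeholder for invoking a deployed graph/script that produces CURRENT."""
    subprocess.run(["./run_customer_feed.ksh"], check=True)  # hypothetical wrapper script

class CustomerFeedRegressionTest(unittest.TestCase):
    def test_output_matches_baseline(self) -> None:
        run_graph()
        self.assertEqual(
            CURRENT.read_text().splitlines(),
            BASELINE.read_text().splitlines(),
            "Customer feed output drifted from the approved baseline",
        )

if __name__ == "__main__":
    unittest.main()
```

A CI stage in Jenkins could simply run this module and fail the build on any mismatch, which is one way such tests plug into the CI/CD pipeline duty above.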