We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer II at JPMorgan Chase within Corporate Data Services, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
- Contributes to software engineering communities of practice and events that explore new and emerging technologies.
- Adds to team culture of diversity, equity, inclusion, and respect.
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- Experience designing and implementing data pipelines in a cloud environment using tools such as Apache NiFi or Informatica.
- Strong experience migrating and developing data solutions on AWS, including hands-on experience with AWS services and Apache Airflow.
- Experience building and implementing data pipelines with Databricks, including Unity Catalog, Databricks Workflows, and Delta Live Tables.
- Solid understanding of agile methodologies and of practices such as CI/CD, application resiliency, and security.
- Hands-on object-oriented programming experience in Python, including PySpark, writing complex, highly optimized queries over large volumes of data.
- Knowledge of or experience with big data technologies such as Hadoop and Spark, as well as data modeling and ETL processing.
- Hands-on experience in data profiling and writing advanced PL/SQL procedures.
Preferred qualifications, capabilities, and skills
- Familiarity with Oracle, ETL, and data warehousing.
- Exposure to cloud technologies.