Job Title: Backend Developer
Location: Pune
Work Type: Work from Office
Experience: 3-6 Years
Job Requirements:
- Hands-on coding experience; proficient in at least one programming language (Java) or the J2EE technology stack.
- Strong command of collections and standard algorithms (see the sketch after this list).
- A keen interest in data structures.
- Strong problem-solving skills.
- A get-stuff-done attitude.
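As a rough illustration of the collections and algorithm fluency we look for (a minimal sketch, not code from our products; all names are hypothetical): counting word frequencies with a HashMap and selecting the top-k entries with a bounded PriorityQueue.

```java
import java.util.*;

// Illustrative only: top-k frequent words using core collections
// (HashMap for counting, PriorityQueue as a bounded min-heap).
public class TopKWords {
    static List<String> topK(List<String> words, int k) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum);
        }
        // Min-heap ordered by frequency; keeps at most k entries.
        PriorityQueue<Map.Entry<String, Integer>> heap =
            new PriorityQueue<>(Map.Entry.comparingByValue());
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) heap.poll(); // evict the least frequent
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) result.add(heap.poll().getKey());
        Collections.reverse(result); // most frequent first
        return result;
    }

    public static void main(String[] args) {
        System.out.println(topK(List.of("a", "b", "a", "c", "a", "b"), 2)); // [a, b]
    }
}
```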
Nice to Have:
- Practices clean coding standards (SOLID, DRY, KISS, YAGNI, design patterns, etc.).
- Very strong object-oriented design skills and awareness of design patterns and architectural patterns (see the sketch after this list).
- Experience in performance tuning and troubleshooting memory issues, GC tuning, resource leaks, etc.
- Experience with agile methodologies such as Scrum and Kanban.
- Good understanding of branching, build, deployment, and continuous integration methodologies.
- Ability to make decisions independently.
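By way of illustration only, here is a small hypothetical Java sketch of the kind of design we mean: the strategy pattern plus dependency inversion keeps an exporter open for extension (new formats) but closed for modification.

```java
// Illustrative sketch of the strategy pattern + dependency inversion:
// the exporter depends on an abstraction, not on concrete formats.
interface ReportFormatter {
    String format(String body);
}

class PlainFormatter implements ReportFormatter {
    public String format(String body) { return body; }
}

class HtmlFormatter implements ReportFormatter {
    public String format(String body) { return "<html><body>" + body + "</body></html>"; }
}

class ReportExporter {
    private final ReportFormatter formatter; // injected abstraction (DIP)

    ReportExporter(ReportFormatter formatter) { this.formatter = formatter; }

    String export(String body) { return formatter.format(body); }
}

class Demo {
    public static void main(String[] args) {
        // Adding a new format means adding a class, not editing ReportExporter (OCP).
        System.out.println(new ReportExporter(new HtmlFormatter()).export("quarterly numbers"));
    }
}
```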
Job Responsibilities:
- Practicing TDD as your primary way of doing software development (a test-first sketch follows this list)
- Constantly developing new features in our products
- Continuously refactoring code to keep it high-quality and maintainable
- Using Clean Code principles while writing and changing code
- Mentoring junior engineers on design, coding, and troubleshooting
- Using Domain-Driven Design (DDD) in your day-to-day work
- Performance engineering of slow and resource-intensive code
- Spending time on critical build engineering and developer-productivity engineering activities
- Carving out new reusable modules whenever the opportunity arises
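A minimal red-green sketch of the TDD workflow, assuming JUnit 5 (the class and method names are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical red-green example: the test is written first and drives
// the smallest implementation that makes it pass.
class PriceCalculatorTest {
    @Test
    void appliesPercentageDiscount() {
        assertEquals(90.0, PriceCalculator.discounted(100.0, 10), 1e-9);
    }
}

class PriceCalculator {
    // Minimal implementation written after the failing test above.
    static double discounted(double price, int percent) {
        return price * (100 - percent) / 100.0;
    }
}
```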
Problem Examples:
- Understand large data warehouses (>500,000 tables/views) better than any functional/business expert of that warehouse.
- Make optimization recommendations by discovering common access patterns within large data warehouses.
- Suggest performance improvement opportunities by discovering redundant processing within different areas of a data warehouse.
- Enable metadata discovery by figuring out how data moves between clusters of thousands of tables and views.
- Help business users discover interesting/reusable data from among many thousands of candidate tables.
- Assess the impact of changes/outages within one section of a data warehouse on other parts (a simplified sketch follows these examples).
- Create strategies for migrating one data warehouse to another (such as a cloud data warehouse).
- Show beautiful and powerful visualizations of data and execution lineage to business/IT users.
- Convert large data warehouses with several hundred thousand tables to the cloud.
- Move complex ETL tool workloads, along with their orchestration DAGs, to cloud-native technologies that fulfill their needs better.
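To make the lineage and impact-analysis problems concrete, here is a deliberately simplified Java sketch (the graph shape and table names are hypothetical): model the warehouse as a directed graph of table-to-table data flows and walk it breadth-first to find everything downstream of a change.

```java
import java.util.*;

// Simplified sketch: warehouse lineage as a directed graph
// (edge A -> B means data flows from table A into table B),
// with BFS to find everything impacted by a change to one table.
public class ImpactAnalysis {
    static Set<String> downstream(Map<String, List<String>> feeds, String changed) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changed));
        while (!queue.isEmpty()) {
            for (String next : feeds.getOrDefault(queue.poll(), List.of())) {
                if (impacted.add(next)) queue.add(next); // visit each table once
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        // Hypothetical lineage: staging feeds a core table, which feeds two marts.
        Map<String, List<String>> feeds = Map.of(
            "stg_orders", List.of("core_orders"),
            "core_orders", List.of("mart_sales", "mart_finance"));
        System.out.println(downstream(feeds, "stg_orders"));
        // -> [core_orders, mart_sales, mart_finance]
    }
}
```

At the scale described above (hundreds of thousands of tables), such a graph would live in a graph database such as Neo4j from the stack below rather than in memory.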
Tech Stack:
- Languages: Kotlin, Groovy, Scala, Java, Python, Bash, SQL (Multiple flavors)
- Frameworks: Spring (Boot, JPA, MVC, Core, Security, Neo4j), Apache Spark, Apache Calcite, Google Cloud Dataflow, RxJava, Micrometer
- Databases: Neo4j, Teradata, Netezza, Oracle, MySQL, BigQuery, Apache Hive, Apache Spark SQL
- ETL Tools: Informatica, Datastage
- Scheduling Tools: Control-M, Airflow/Composer, TWS
- DevOps and Build Engineering: Docker, Kubernetes, Helm, Gitlab, Gradle, Maven, Grafana, Prometheus
- UI Technologies: React, Redux, TypeScript, ES6
- Cloud Technologies: Google Cloud (Compute, Dataflow, Dataproc, GCS, BigQuery)