Neiron, in collaboration with Juvo Plus, is looking to fill a Senior Data Engineer position based in India. This position will support the Juvo Plus business, but the successful candidate will be employed directly by Neiron. Juvo Plus works with Neiron as our local representative to support local employment and ensure a world-class employment experience for our India-based Team Members.
Juvo+ (Juvo Plus) is a leading developer and marketer of consumer products across eighteen brands, working with the largest marketplaces and retailers in the world. We create products that transform houses into homes, that celebrate the big and little moments in life, and that just make everyday things easier. We constantly apply Retail Science, our in-house approach to leveraging data, machine learning tools, and next-gen processes, to continually expand our catalog, delight our customers, and move swiftly toward our mission of having at least one of our products in every home.
We're committed to building on our success through the selection and development of outstanding people, with the understanding that as we grow, you grow. From top to bottom, our team fosters a collaborative environment where we drive results, optimize our efforts, and achieve amazing things while having a great time and celebrating our successes along the way!
In this position, you will be pivotal in building our core data platform, enabling seamless data integration from sources like Amazon's Seller Central and other ecommerce platforms. Working with in-house systems, you will help develop a robust business intelligence platform that serves as the central hub for recurring reporting and ad-hoc analysis, particularly for our Finance and Analytics teams. If you thrive in fast-paced environments, enjoy solving problems, and are passionate about leveraging data to uncover opportunities, we'd love to meet you and have you play a key role in our continued growth and innovation.
Key Responsibilities:
- Data Architecture Design: Lead the design and implementation of robust, scalable, and optimized data architecture that supports the collection, processing, storage, and retrieval of large-scale datasets.
- ETL Development: Develop, maintain, and optimize Extract, Transform, Load (ETL) pipelines to ensure seamless and accurate data movement from various sources to data warehouses and lakes.
- Data Integration: Integrate data from various internal and external sources, ensuring data quality, consistency, and reliability.
- Performance Optimization: Identify and address bottlenecks and performance issues in data processing and storage, utilizing best practices to enhance system efficiency.
- Data Security and Compliance: Implement security protocols and data governance strategies to ensure data integrity, confidentiality, and compliance with relevant regulations (e.g., GDPR, HIPAA).
- Collaboration: Work closely with data scientists, analysts, and software engineers to understand data requirements, provide data solutions, and support their data-related initiatives.
- Technical Leadership: Provide guidance and mentorship to junior data engineers, contribute to code reviews, and advocate for best practices in data engineering.
- Monitoring and Maintenance: Develop and maintain monitoring solutions to proactively identify data issues, ensure data pipeline reliability, and troubleshoot and resolve any anomalies.
- Documentation: Create comprehensive documentation for data pipelines, processes, and systems to ensure transparency and knowledge sharing within the team.
- Innovation: Stay updated with emerging technologies and trends in data engineering and recommend innovative solutions that can drive efficiency and improvements within the organization.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of professional experience in data engineering, with a strong focus on building and maintaining data pipelines and infrastructure.
- Experience working in the AWS Cloud.
- Proficiency in SQL and in a programming language such as Python, Java, or Scala.
- Extensive experience with big data technologies, such as Hadoop, AWS Glue, Spark, and Kafka.
- Hands-on experience with data warehousing solutions like Amazon Redshift.
- Hands-on experience in data model design.
- Strong understanding of relational and NoSQL databases and database optimization.
- Expertise in ETL tools and frameworks, such as Apache Airflow.
- Expertise in using tools like dbt for transformations and Terraform for Infrastructure as Code.
- Experience working with version control systems (e.g., Git) and familiarity with CI/CD processes.
- Start-up experience.