Who we are:
Skypoint's mission is to unify data integration, analytics, and AI into a cohesive SaaS offering powered by Data Lakehouse architecture. Our Generative AI Platform brings people and data together with a focus on enhancing healthcare, financial services and other regulated industries. We are proud partners of Microsoft, OpenAI, DataStax and Databricks.
Website: Skypoint.ai
Location: Global Technology Park, Bellandur, Bangalore, India.
Here is what you can expect to work on in this critical role:
You will lead efforts to extract maximum value from our data. Our platform processes billions of rows of data every month on behalf of millions of users.
How Our Data Engineers Spend Their Time
You can expect to spend about 50% of your time building and scaling the Skypoint Lakehouse and its data pipelines, and about 20% defining and implementing DataOps methodologies.
Another 20% of your time will go toward writing and optimizing queries and algorithms, and the remaining 10% toward supporting and monitoring pipelines.
Our team values collaboration, a passion for learning, and a desire to master your craft. We thrive on asynchronous communication. You will have strong support from leadership when you communicate proactively and in detail about any roadblocks you encounter.
Qualities of Data Engineers Who Thrive in This Role
You are a driven self-starter who isn't afraid to dig for answers, stays up to date on industry trends, and is always looking for ways to enhance your knowledge (yes, Databricks-related podcasts count!)
Your skill set includes a blend of Databricks-related technologies on Azure, AWS, or GCP
Experience with Python or Scala is a must! (you've got a software engineering hat)
You have worked with Python, Scala, and Spark (Databricks), interacting with the Delta Lakehouse, Delta Live Tables, and Unity Catalog
Skills & Experience Required:
- 2-4 years of industry experience
- Spark (Scala or Python) and Databricks
- Strong backend programming skills for data processing, with practical knowledge of availability, scalability, clustering, microservices, multi-threaded development, and performance patterns
- Experience with a wide array of algorithms and data structures
- Experience with workflow orchestration platforms such as Databricks DLT, ADF, AWS Glue, or Airflow
- Strong distributed systems fundamentals
- Strong grasp of REST APIs
- Experience with NoSQL databases
- Most recent work experience MUST include Scala or Python and Spark (Databricks)
Educational Level:
BS / BE / MS in Computer Science from a Top Tier school (IITs / RECs etc.)
Join Skypoint and Let's Soar Beyond Boundaries Together!
Ready to surf the AI revolution and dive into innovation? Armed with top-tier education and cutting-edge expertise, come aboard Skypoint, where we're not merely redefining technology; we're crafting the future with our revolutionary Generative AI Platform. If you're ready to embrace the extraordinary, seize this opportunity, and together we'll conquer new horizons. Don't wait: apply now and supercharge your career with Skypoint!
Skypoint is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.