8 years of experience in programming and handling data of various forms and sizes.
Excellent hands-on experience in SQL
Knowledge of the Hadoop ecosystem (Hive, HBase, Spark)
Good experience working with Python
Knowledge of Data Warehousing and Data Marts
Data Modelling knowledge
Hands-on experience with ETL processes
Analytical thinking
Basic Visualization knowledge
Exposure to any cloud platform
Good to Have:
Exposure to the AWS stack
Knowledge of distributed systems
Experience handling entire projects/migrations from scratch.
Responsibilities:
Build systems that collect, manage, and convert raw data into usable information.
Ensure data is collected and managed to meet clients' requirements as well as the needs of the Data Science team building AI models.
Develop data pipelines from various internal and external sources and bring structure to previously unstructured data.
Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions.
Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities.
Develop POCs to help platform architects, product managers, and software engineers validate solution proposals and migrations.