The person will be responsible for expanding and optimizing our data architecture and data pipelines. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
What you'll be responsible for
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications and other skills: what you'd have
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
They should also have experience using the following software/tools:
- Experience with data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc.
- Experience with relational SQL and NoSQL databases, including ClickHouse, PostgreSQL, and MySQL.
- Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Experience with object-oriented/functional scripting languages: Python, Scala, etc.
- Experience building and optimizing data pipelines, architectures and data sets.
- Advanced working knowledge of SQL and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data stores.
Why join us
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.