Conviva pioneered and continues to define the standards for cross-screen, end-to-end streaming media intelligence. We help digital businesses of all sizes around the world stream their best.
The Conviva Difference
- Unmatched scale: embedded in 3.3 billion applications, analyzing 1.8 trillion data events per day across 180 countries.
- Real-time intelligence: cross-screen, viewer-centric data organized for every second of every stream.
- Comprehensive data: publisher-controlled, privacy-secure access to 500 million viewers.
- Built for streaming: purpose-built to drive decisions that maximize engagement, revenue, and ROI.
- 55 patents issued: Conviva is the real-time measurement and intelligence platform for streaming TV. Built for video, the Conviva platform and products enable you to understand and act on experience, advertising, social, and content insights for every stream, across every screen, every second. More than 250 industry leaders - including CBS, CNN, DAZN, HBO Max, Hulu, Red Bull, RTL, Sky, Sling TV, TED, Turner, and Univision - rely on Conviva to drive business- and time-critical decisions at every level of their organization: reducing customer churn, increasing viewer engagement, driving new monetization models, and growing ROI.
Job Description
Function: Software Engineering (Backend Development, Big Data / DWH / ETL)
- Scala
- Rust
- Spark
- Kafka
- Big Data
Responsibilities
- Design, build, and maintain the stream-processing and time-series analysis system at the heart of Conviva's products (see the sketch after this list)
- Own the architecture of the Conviva platform
- Build features, enhancements, and new services, and fix bugs, in Scala and Rust on a Jenkins-based pipeline, deploying as Docker containers on Kubernetes
- Own the entire lifecycle of your microservice including early specs, design, technology choice, development, unit-testing, integration-testing, documentation, deployment, troubleshooting, enhancements, etc.
- Lead a team to develop a feature or parts of a product
- Adhere to the Agile model of software development to plan, estimate, and ship per business priority
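To give candidates a concrete flavor of the stream-processing work described above, here is a minimal sketch in Scala using Spark Structured Streaming: counting playback events per asset over one-minute windows. It is an illustration only, not Conviva's actual pipeline; the broker address, topic name, and record format are assumptions.

```scala
// Minimal sketch: windowed event counts from a Kafka topic.
// Requires the spark-sql-kafka-0-10 connector on the classpath.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PlaybackEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("playback-event-counts")
      .getOrCreate()
    import spark.implicits._

    // Each Kafka record value is assumed to be "<assetId>,<eventType>".
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
      .option("subscribe", "playback-events")               // hypothetical topic
      .load()
      .selectExpr("CAST(value AS STRING) AS raw", "timestamp")
      .select(split($"raw", ",").getItem(0).as("assetId"), $"timestamp")

    // Tumbling one-minute windows keyed by asset; the watermark bounds
    // how late events may arrive before their window is finalized.
    val counts = events
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"assetId")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```

The same windowed-aggregation shape recurs throughout per-second, viewer-centric analytics work.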
Requirements
- 14+ years of software development experience building data-processing products.
- Degree in software engineering or an equivalent field from a premier institute.
- Excellent knowledge of computer science fundamentals such as algorithms and data structures; hands-on functional programming experience and a solid grasp of its concepts
- Excellent programming and debugging skills; proficient at writing reliable, maintainable, secure, and performant code in Rust, Scala, Haskell, or Erlang
- Experience with actor models of concurrency (Akka in Scala or Actix in Rust) is a big plus (see the sketch after this list), as is knowledge of design patterns such as event streaming, CQRS, and DDD for building large microservice architectures
- Experience with big data technologies such as Spark, Flink, Kafka, Druid, and HDFS
- Deep understanding of distributed-systems concepts and scalability challenges, including multi-threading, concurrency, sharding, and partitioning
- Experience with the Akka/Lagom frameworks and/or stream-processing libraries such as RxJava or Project Reactor is a big plus
- Excellent communication skills, willingness to work under pressure, hunger to learn and succeed, and comfort with ambiguity and complexity
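As an illustration of the actor model mentioned in the requirements, here is a minimal sketch in Akka Typed (Scala). The SessionCounter actor and its message protocol are hypothetical; the point is that an actor processes one message at a time, so its state needs no locks.

```scala
// Minimal sketch of the actor model with Akka Typed.
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object SessionCounter {
  // Hypothetical message protocol: the actor's only interface.
  sealed trait Command
  final case class SessionStarted(viewerId: String) extends Command
  final case class SessionEnded(viewerId: String) extends Command

  // State (the active-session count) is carried functionally by
  // returning a new behavior, so no synchronization is required.
  def apply(active: Int = 0): Behavior[Command] =
    Behaviors.receiveMessage {
      case SessionStarted(_) => apply(active + 1)
      case SessionEnded(_)   => apply(math.max(0, active - 1))
    }
}

object Main extends App {
  val system = ActorSystem(SessionCounter(), "session-counter")
  system ! SessionCounter.SessionStarted("viewer-42")
  system ! SessionCounter.SessionEnded("viewer-42")
  system.terminate()
}
```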
Skills: concurrency, Scala, Haskell, data engineering, RxJava, Lagom, Akka, HDFS, Kafka, Flink, big data, Rust, Druid, Spark, backend development, DWH, ETL