Company Description
About CyberArk:
CyberArk (NASDAQ: CYBR) is the global leader in identity security. Centered on intelligent privilege controls, CyberArk provides the most comprehensive security offering for any identity, human or machine, across business applications, distributed workforces, hybrid cloud environments and throughout the DevOps lifecycle. The world's leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit https://www.cyberark.com, read the CyberArk blogs or follow on LinkedIn, X, Facebook or YouTube.
Job Description
Data Engineer Position
We're seeking a skilled and driven Data Engineer to join our growing team within the Data Operations group. You'll collaborate closely with business analysts to understand their needs, translate them into technical requirements, and develop robust solutions, from data modeling and architecture through to impactful data implementations.
Here's what you'll be doing:
- Partner with business stakeholders: Collaborate closely with analysts to translate their needs into technical requirements and data-driven solutions.
- Ingest large-scale data: Use Python, Spark and SQL to build pipelines that move data from business applications and product APIs (see the sketch after this list).
- Architect and deliver robust data solutions: Design and implement ETL pipelines and data warehouse models using SQL and Python.
- Fuel machine learning and AI: Prepare and wrangle data, ensuring high quality and readiness for advanced analytics initiatives.
- Maintain data quality and performance: Ensure data integrity, accuracy, and optimal performance within data pipelines and BI systems.
- Collaborate for success: Partner closely with IT Infrastructure, DevOps, and Security teams to ensure seamless integration and secure deployment of all data solutions.
- Stay ahead of the curve: Continuously learn and stay updated on the latest data engineering and BI trends and technologies.
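To give a concrete, purely illustrative flavor of the ingestion work described above, here is a minimal Python sketch that pulls records from a product API into a warehouse staging table. The endpoint, connection string, key column, and table name are hypothetical placeholders, not CyberArk systems, and the actual stack on a given project (Rivery, Snowflake, Databricks, Spark, etc.) would vary.

```python
# Illustrative sketch only: ingest records from a (hypothetical) product API
# into a warehouse staging table. All names and credentials are placeholders.
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/events"  # hypothetical endpoint
WAREHOUSE_URI = "snowflake://<user>:<pass>@<account>/<db>/<schema>"  # placeholder

def fetch_events(page_size: int = 1000) -> pd.DataFrame:
    """Page through the API and collect the raw records."""
    records, page = [], 1
    while True:
        resp = requests.get(
            API_URL, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page signals the end
            break
        records.extend(batch)
        page += 1
    return pd.DataFrame.from_records(records)

def load_to_staging(df: pd.DataFrame) -> None:
    """Light data-quality pass, then append into a staging table."""
    df = df.drop_duplicates(subset="event_id")      # hypothetical key column
    df["ingested_at"] = pd.Timestamp.now(tz="UTC")  # audit/lineage column
    engine = create_engine(WAREHOUSE_URI)
    df.to_sql("stg_events", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load_to_staging(fetch_events())
```

In practice, a job like this would run under an orchestrator with retries and incremental watermarks, with secrets pulled from a vault rather than hard-coded.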
Essential Skills:
- Data Pipeline Developer: Minimum 3 years of experience designing and implementing efficient ETL pipelines using industry-standard tools like Informatica, Rivery, SSIS, or equivalent.
- SQL Master: Deep expertise (minimum 3 years) in writing complex queries and manipulating data with advanced techniques in Oracle, Snowflake, or similar relational databases.
- Python Proficient: Strong ability to use Python libraries and frameworks (NumPy, Pandas, Spark) for data manipulation, analysis, and scripting tasks; a wrangling sketch follows this list.
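As a small, hypothetical illustration of the Pandas-style wrangling the Python bullet refers to, the snippet below cleans a raw extract for analytics. The column names and cleaning rules are invented for the example.

```python
# Illustrative sketch only: typical Pandas wrangling to ready a raw extract
# for analytics/ML. Column names and rules are invented for the example.
import numpy as np
import pandas as pd

def prepare_for_analytics(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Normalize types; bad values become NaN/NaT instead of raising.
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce").fillna(0.0)
    df["region"] = df["region"].str.strip().str.upper()
    # Cap an outlier-prone metric at its 99th percentile before feature use.
    df["revenue"] = np.minimum(df["revenue"], df["revenue"].quantile(0.99))
    # Drop rows that cannot be placed in time.
    return df.dropna(subset=["signup_date"])
```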
Highly Favored:
- Tools Specialist: Experience building and training BI models using Databricks or comparable tools; experience with Rivery and Snowflake.
Soft Skills:
- Global Collaborator: Proven ability to collaborate effectively with geographically dispersed teams in a fast-paced environment.
- Communication Expert: Skilled in explaining technical concepts clearly to both technical and non-technical audiences.
- Problem Solver: Capacity to approach challenges with creative solutions and adapt quickly to evolving requirements.
- Multitasking Master: Demonstrated ability to manage multiple data workloads concurrently while delivering results efficiently.
Additional Requirement:
- Fluent in spoken and written English.