Join Aristocrat in Bringing Joy to Life Through the Power of Play. Be part of our growing global team where people come first because they fuel our success. Here, it’s All About the Player and we create a world of its own for everyone, everywhere with premium casino and world-class digital and mobile products. Our value of Good Business, Good Citizen ensures that corporate growth and responsible gameplay go hand in hand to help our industry remain sustainable.
Aristocrat offers a highly diverse, inclusive, and equitable culture as well as the professional tools and resources to ensure your Talent is Unleashed. We achieve success through Collective Brilliance. Individually, we are great, but together, we are unstoppable. Aristocrat enhances the player experience—and careers—with opportunities featuring meaningful challenges, strong advancement potential, and global exposure.
Explore a Career with Our Team: Aristocrat Interactive
Responsibilities:
- Create data pipelines, both batch and real-time, to ingest data from disparate sources
- Collaborate with other teams to address data sourcing and provisioning requirements
- Design robust, recoverable data pipelines that follow best practices, with a focus on performance, reliability, and monitoring
- Innovation drives us: carry out research and development and work on PoCs to propose, trial, and adopt new processes and technologies
- Coordinate with the Product & Technology teams to ensure all platforms collect and provide appropriate data
- Liaise with other teams to ensure reporting and analytics needs can be met by the central data lake
- Support the Data Quality and Security initiatives by building the necessary data access, integrity, and accuracy controls into the architecture
Requirements:
- 4+ years of experience in Data Engineering
- Degree in Computer Science, Software Development, or Engineering
- Proficient in Python; past exposure to Java will be considered an asset
- Understanding of RDBMS, columnar, and NoSQL engines and their performance characteristics
- Experience with cloud architecture and tools: Microsoft Azure, AWS, or GCP
- Experience with orchestration tools such as Apache Airflow and dbt
- Prior exposure to the Snowflake ecosystem will be considered an asset
- Familiarity with Docker/Kubernetes and containerisation
- Strong background in stream-processing technologies such as NiFi, Kinesis, and Kafka
- A grasp of DevOps concepts and tools, including Terraform and Ansible, is an advantage
- Understanding of distributed logging platforms, ideally the ELK stack
Skills:
- Fluency in spoken and written English is essential
- Passionate about data and on the lookout for opportunities to optimise
- Passionate about technology and eager to trial and recommend new tools or platforms
We offer:
- Competitive compensation and regular performance-based salary and career development reviews
- The opportunity to work in a large, successful company
- PE accounting and support
- Health insurance and an employee assistance program
- Paid vacation, holidays, and sick leave
- Sports compensation
- English classes with native speakers, training, and conference participation
- Referral program
- Team-building activities and corporate events