Our client is one of the largest game studios, known for its highly successful MOBA and FPS franchises. You will be a member of the Data Team, which collects and uses data to improve the player experience.
The Data Team is accountable for decision science, data products, data capture, and data warehousing and governance. Its mission is to harness the power of data for player-centric decisions and AI/ML products that make it better to be a player.
As a core contributor, you will play a vital role in building robust data solutions capable of processing petabytes of information. Your responsibilities will include safeguarding player privacy, using big data tools and AWS services to organize and optimize data warehouses, and building a platform for data ingestion and real-time analytics.
The ultimate goal is to help product teams operate their services more efficiently. Your extensive experience with large-scale data and globally distributed systems will be invaluable in developing effective solutions.
- Please note that availability to attend afternoon/evening meetings is a requirement for this role, as most of the team is located on the US West Coast (LA and Seattle)
- Build new data products on AWS/Databricks using DBT and PySpark
- Collaborate with the data engineering team to troubleshoot and optimize ETL pipelines, ensuring efficient data flow and processing
- Reduce ambiguity in complex problem spaces by leading technical discovery and prototyping efforts that have a strategic impact on the team
- Identify and investigate key problem and opportunity spaces, and formulate recommendations and strategies on whether and how to pursue them
- Prepare design docs and implementation strategies, and choose appropriate tools
- Work hands-on with live production systems
- Monitor production infrastructure using Datadog
- Minimum of 5 years of commercial work experience
- Bachelor's or higher degree in Computer Science, Software Engineering, or a related field
- Proficiency in Python, essential for data processing and analysis tasks
- Hands-on experience with DBT (Data Build Tool)
- Commercial experience using PySpark and Databricks
- Knowledge of Infrastructure as Code tooling, e.g. Terraform
- Effective communication and teamwork skills
- Experience in the gaming industry, particularly with online multiplayer games
- Experience working with cross-discipline organizations that build data products
- Proficiency in large-scale data manipulation across various data types
- Demonstrated ability to troubleshoot and optimize complex ETL pipelines
- Familiarity with relational databases (e.g., MySQL, Postgres), document stores (e.g., Elasticsearch), and distributed storage systems (e.g., HDFS, S3, Google Cloud Storage)
- Knowledge of data design patterns like medallion architecture