About
I'm a Data Engineer with five years' experience, currently living and working in Melbourne, Australia, but open to relocation. Handling and serving data for high-performance applications is what really interests me, and I'm always looking for new technologies to explore.
At REA Group, I'm building new data pipelines for business-critical streaming datasets using Kafka and Flink. I'm helping the team design well-architected Scala services that encourage data correctness and safety, and providing guidance to other teams who are just starting to use Kafka and Flink. At the same time, we're exploring new data custodianship techniques as a team, such as schema-powered APIs and in-depth data quality monitoring.
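To give a flavour of this kind of work (an illustrative sketch only, not REA's actual code), here is roughly what a minimal Flink job consuming a Kafka topic looks like in Scala; the broker address, topic name, and transformation below are placeholders:

```scala
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
import org.apache.flink.streaming.api.scala._

object ExampleStreamingJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Placeholder broker and topic names, for illustration only.
    val source = KafkaSource
      .builder[String]()
      .setBootstrapServers("broker:9092")
      .setTopics("listing-events")
      .setGroupId("example-streaming-job")
      .setStartingOffsets(OffsetsInitializer.earliest())
      .setValueOnlyDeserializer(new SimpleStringSchema())
      .build()

    env
      .fromSource(source, WatermarkStrategy.noWatermarks[String](), "kafka-source")
      .filter(_.nonEmpty)     // drop empty records before processing
      .map(_.toUpperCase)     // stand-in for a real transformation
      .print()                // a real job would sink to Kafka, a database, etc.

    env.execute("example-streaming-job")
  }
}
```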
Supporting and uplifting other team members is very important to me. I've helped onboard many developers of different skill levels, time zones and language backgrounds, and provided mentoring in our best practices, all while working remotely. I also run the Introduction to Elasticsearch training at REA, which is becoming a lot more popular as teams begin to depend on Elasticsearch for more critical services.
Outside of work, I love to tinker with code. Over the years I've created many experiments, such as a fluent query builder for Neo4j's query language Cypher and a library that automatically generates a GraphQL data access API for Elasticsearch. More recently, I've been attempting to write my own statically typed, functional programming language.
Experience
Senior Data Engineer
REA Group
2019 - now
- Senior Data Engineer since October 2021
- Building REA's next generation of streaming pipelines using Kafka and Flink
- Designing and optimising Elasticsearch APIs that service core features on apps and webpages
- Exploring new ways to treat data as a product
Full Stack Developer
Matrak
2018 - 2019
- Designed an automated Lambda pipeline to process floorplan PDFs
- Developed a scriptable reporting component to visualise construction costs and progression
Full Stack Developer
DCode Group
February - December 2017
- Designed relational database schemas and developed full-stack applications on AWS
Experiments
- Simple, open-source query builder for Neo4j Cypher
- Supports streaming records using observables
- Automatically generates a feature-rich GraphQL API from an index's mappings (the core idea is sketched after this list)
- Greatly reduces maintenance for internal Elasticsearch-based APIs
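To make the second experiment a little more concrete, here is a minimal, self-contained sketch of the underlying idea: translating Elasticsearch field types from an index's mapping into a GraphQL SDL type. The mapping, field names, and the `Listing` type are made up for illustration, and the real library's API may well differ:

```scala
object MappingToSdl {
  // Rough correspondence between Elasticsearch field types and GraphQL scalars.
  private val scalarFor: Map[String, String] = Map(
    "text"    -> "String",
    "keyword" -> "String",
    "integer" -> "Int",
    "long"    -> "Int",
    "float"   -> "Float",
    "double"  -> "Float",
    "boolean" -> "Boolean",
    "date"    -> "String" // dates exposed as ISO-8601 strings
  )

  // Render a GraphQL SDL type from a (field name -> Elasticsearch type) mapping.
  def sdlType(typeName: String, mapping: Map[String, String]): String = {
    val fields = mapping
      .map { case (field, esType) => s"  $field: ${scalarFor.getOrElse(esType, "String")}" }
      .mkString("\n")
    s"type $typeName {\n$fields\n}"
  }

  def main(args: Array[String]): Unit =
    // Prints a "type Listing { ... }" definition with one field per line.
    println(sdlType("Listing", Map("address" -> "text", "bedrooms" -> "integer", "listedAt" -> "date")))
}
```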
Education
Bachelor of Computer Science
Swinburne University of Technology
2015 - 2018
- Received the Dean's Scholarship of Outstanding Achievement