Our main goal is to develop a Data Lake platform for a new, highly funded, UK-based speciality insurance company. We work across two teams: one responsible for ingesting and curating hundreds of datasets from various providers; the second creating Data Marts used by analysts, business users, and Data Science teams. There are many greenfield areas and opportunities to make an impact across the whole company.
AWS, Snowflake, dbt, SQL, Terraform, Prefect
- Delivering high-quality code for data processing and transformation using Snowflake and dbt;
- Automating and optimising day-to-day work while making sure the process remains reliable;
- Working closely with Data Architects and Business Analysts to create Data Marts using Kimball Dimensional Modelling;
- Connecting to multiple data sources and sinks to support complex business processes.
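To give a flavour of the kind of dbt work involved, here is a minimal sketch of a Kimball-style dimension model. This is a hypothetical illustration only: the model name, the `stg_policies` staging model, and all column names are assumptions, not part of this role description.

```sql
-- models/marts/dim_policy.sql (hypothetical example)
-- A simple Type 1 dimension built from an assumed staging model.
{{ config(materialized='table') }}

select
    -- Surrogate key generated with the dbt-utils package
    {{ dbt_utils.generate_surrogate_key(['policy_id']) }} as policy_key,
    policy_id                                             as policy_natural_key,
    policy_type,
    underwriter_name,
    inception_date,
    expiry_date
from {{ ref('stg_policies') }}
```

Fact tables in the mart would then join to this dimension via `policy_key` rather than the natural key, which is the standard Kimball pattern.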
The Data Engineering team consists of around 20 engineers, about half of whom are Snowflake engineers and the other half Scala/Spark engineers. Many of them come from well-known consultancies. Beyond engineering, the teams include Data Architects, Business Analysts, a Product Owner, and a Delivery Lead.
What we expect in general:
- Strong SQL skills
- Dimensional modelling (Kimball)
- Strong engineering skills
- Demonstrated ability to design, build, and implement software solutions in Snowflake with an unwavering focus on quality
- Ability to work transparently in an agile environment, partnering with team members and peers to solve challenging problems
- Experience working with CI/CD and a DevSecOps approach
We do not expect you to qualify for all of the above points. A good understanding of some of these areas and a willingness to develop expertise in others may be sufficient.