Wheely is a chauffeur-hailing app that is setting a new standard in urban transportation. It provides affluent consumers with a premium, comfortable, and consistent chauffeur-driven experience on demand.
Wheely is currently available in London, Moscow and St. Petersburg. We have raised a $15 million Series B round to fund growth in top-tier cities, including Paris this summer. To date, Wheely has raised $28 million and grown to a gross bookings run rate of approximately $80 million.
We have ambitious plans and the know-how and resources to make them happen; the missing piece is you!
Duties & Responsibilities:
- Data engineering:
- Manipulate large data sets efficiently using SQL and Python.
- Develop a deep understanding of how our ETL pipeline works. We currently use Alooma for data extraction and loading, and DBT for data transformation.
- Implement required changes to our data model to support analysis and reporting.
- Maintain a high level of data integrity. Contribute to the development of automated tests that flag discrepancies between our data model and the source of truth.
- Audit ETL run times and tune query performance to reduce the overall run time.
- Release Python models built by analysts into the ETL process. This might include tuning a model's performance and automating its incorporation into the pipeline. These models could cover areas such as LTV and geospatial data.
- Develop and improve our current ETL processes, as well as the team's data-engineering knowledge overall. We continually aim to make our processes more efficient in order to ensure scalability, reduce run times and limit potential issues.
- Contribute to the development of reporting that supports the ETL process (e.g. automated discrepancy checks, storage-use prediction).
- Support the creation and maintenance of reporting for the various teams we serve across the business.
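To illustrate the kind of automated discrepancy check described above, here is a minimal sketch using Python's built-in sqlite3 module. The table names, data and tolerance are hypothetical; in practice such a check would run against the warehouse (e.g. Redshift) as part of the ETL process.

```python
import sqlite3

# Hypothetical example: compare an aggregated model table against its
# source-of-truth table and flag any discrepancy in totals.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source of truth: raw bookings loaded by the extraction step.
cur.execute("CREATE TABLE raw_bookings (booking_id INTEGER, city TEXT, fare REAL)")
cur.executemany(
    "INSERT INTO raw_bookings VALUES (?, ?, ?)",
    [(1, "London", 42.0), (2, "Moscow", 30.0), (3, "London", 18.0)],
)

# Transformed model: per-city revenue produced by the transformation step.
cur.execute("CREATE TABLE city_revenue (city TEXT, revenue REAL)")
cur.execute(
    "INSERT INTO city_revenue SELECT city, SUM(fare) FROM raw_bookings GROUP BY city"
)

def totals_match(cur, tolerance=0.01):
    """Return True if the model's revenue total agrees with the source total."""
    source_total = cur.execute("SELECT SUM(fare) FROM raw_bookings").fetchone()[0]
    model_total = cur.execute("SELECT SUM(revenue) FROM city_revenue").fetchone()[0]
    return abs(source_total - model_total) <= tolerance

assert totals_match(cur)  # totals agree, so no discrepancy is flagged
```

A real check would run after each ETL cycle and alert the team when it fails, rather than asserting inline.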
Experience & Qualifications:
- 3+ years' experience in a similar role, ideally in a data-heavy environment (e.g. the tech/online industry).
- Strong technical skills, including SQL (analytic functions, performance tuning, etc.) and Python (pandas, etc.). The team uses both languages daily, and you will be expected to be proficient from day one.
- Strong knowledge of database performance tuning and ideally experience with Redshift.
- Solid understanding of object oriented programming and how it can help create efficient processes for ETL.
- Strong knowledge of overall ETL optimisation techniques such as data modelling, data normalisation, etc.
- Strong attention to detail and accountability.
- An inquisitive mind by nature. You always go the extra mile to understand ‘why’ things happen, not just ‘what’ happens.
- A natural desire to change the status quo in order to improve systems and processes. We are always looking to improve.
- Experience with Airflow is a plus but not necessary.
- Experience with the manipulation and analysis of geospatial data is a plus but not necessary.
- Experience with version control tools (e.g. git) is a plus but not necessary.
- Experience with visualisation tools (e.g. Tableau, Looker) is a plus but not necessary.
- Fluency in English
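As a hypothetical illustration of the SQL analytic functions mentioned above, the following sketch ranks each city's bookings by fare with a window function. It uses Python's built-in sqlite3 for a self-contained example; on Redshift the same `RANK() OVER (...)` clause applies.

```python
import sqlite3

# Hypothetical data: rank bookings within each city by fare,
# using an analytic (window) function.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (booking_id INTEGER, city TEXT, fare REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [(1, "London", 42.0), (2, "London", 18.0), (3, "Moscow", 30.0)],
)

rows = conn.execute(
    """
    SELECT city, booking_id, fare,
           RANK() OVER (PARTITION BY city ORDER BY fare DESC) AS fare_rank
    FROM bookings
    ORDER BY city, fare_rank
    """
).fetchall()

for city, booking_id, fare, fare_rank in rows:
    print(city, booking_id, fare, fare_rank)
```

`PARTITION BY` restarts the ranking for each city, so London's two bookings rank 1 and 2 while Moscow's single booking ranks 1.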
Personal attributes & other requirements:
- We need people who have a strong technical understanding and are good problem solvers.
- We are looking for people who are detail-oriented, systematic and process-driven. They need to be able to make sense of, and organise, vast amounts of data.
- We want people who can think quickly on their feet, are inquisitive and have a good understanding of the business.
Why this role:
- We are currently rebuilding the way analytics works, so there is plenty of room for ambitious, enterprising people to make their mark.
- A forward-thinking team using the latest tools (e.g. Python) and always looking to improve. We are also always open to trying new things.
- The opportunity to work across many areas from an ETL point of view (geospatial data, marketing, product, etc.), which keeps the work interesting.