This role will design end-to-end data pipelines from scratch and define the data architecture for the entire data platform.
Job Responsibilities:
- Work closely with business users and data teams to enable data consumption and automation
- Coordinate, monitor, and enhance data/ETL processes on Azure
- Develop data models and implement data pipelines, queries, and dashboards
- Establish data governance processes focusing on quality, integrity, and documentation
- Proactively improve data infrastructure and processes using industry best practices and cutting-edge technologies
Requirements:
- Hold a degree in Information Engineering, Mathematics, Computer Science, or a related field
- Have at least 5 years of experience designing and overseeing end-to-end big data processes
- Be proficient in Python (Spark/Pandas), Delta & Parquet, and SQL
- Have hands-on expertise in Azure data services such as Synapse DW, Data Factory, Data Lake, and Databricks
- Have experience with machine learning
- Demonstrate strong problem-solving abilities, agility, and a quick learning pace
- Show enthusiasm for adopting state-of-the-art data engineering technology
- Possess excellent written and verbal communication skills in English and Chinese