Data Engineer (remote APAC) - Fintech [25F02]
Data Engineer (remote APAC)
Our client, a late-stage fintech startup and one of the fastest-growing companies in the US, is looking for an experienced Data Engineer for its international businesses in Asia Pacific. This role presents an exciting opportunity to join as an early team member in Asia and help build a winning strategy.
Making data-driven decisions is key to the company's culture. To support that, the company needs to scale its data systems while maintaining correct and complete data. You will provide tooling and guidance to teams across engineering, product, and business, helping them explore data quickly and safely to get the insights they need, which ultimately helps the company serve its customers more effectively. Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like DBT, Databricks, Azure Event Hub, and Terraform to orchestrate data pipelines and define workflows. We work with engineers, product managers, business intelligence, credit analysts, and many other teams to build Empower's data strategy and a data-first mindset.
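To give a flavor of the day-to-day work, below is a minimal sketch of the kind of Python/Spark batch workflow this role involves. It is illustrative only; the paths, table names, and columns are hypothetical, not a description of the company's actual pipelines.

# Illustrative only: a minimal PySpark batch job of the kind described above.
# All dataset paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_rollup").getOrCreate()

# Read a day's worth of raw events from the data lake (hypothetical path).
events = spark.read.parquet("abfss://datalake/raw/events/date=2024-01-01")

# Aggregate per-user activity with simple, SQL-friendly transformations.
daily_rollup = (
    events
    .where(F.col("event_type").isNotNull())
    .groupBy("user_id", "event_type")
    .agg(
        F.count("*").alias("event_count"),
        F.max("event_timestamp").alias("last_seen_at"),
    )
)

# Write the result back to a curated zone for downstream analytics teams.
daily_rollup.write.mode("overwrite").parquet(
    "abfss://datalake/curated/daily_user_activity/date=2024-01-01"
)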
Travel for company offsites is expected at least twice a year.
Responsibilities
Write secure, clear, well-structured, and performant code using Python, Spark, and SQL.
Build, scale, and optimize secure, self-service frameworks for batch and streaming data.
Automate and streamline data infrastructure updates and CI/CD deployment processes.
Develop tools and frameworks that enhance data governance and improve data quality standards.
Partner with stakeholders such as Analytics, Credit/Risk, Data Science, Product Engineering, and other business partners to meet their data needs.
Basic Qualifications
4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale.
Experience building data models and data pipelines on top of large datasets (terabytes of data).
Experience using SQL as a flexible and extensible tool, and comfort with modern SQL data orchestration tools like DBT, Mode, and Airflow.
Experience working with performant data warehouses and data lakes such as Redshift, Azure Synapse, Snowflake, and Databricks.
Experience building and maintaining both batch and real-time pipelines using technologies like Spark and Kafka/Azure Event Hub.
Experience with Terraform or similar Infrastructure-as-Code (IaC) tools, and a cloud provider stack (AWS, GCP, or Azure).
Understanding of the key metrics of a data ecosystem and experience building infrastructure or data solutions for partner teams.