Business Function Group Finance aims to deliver world-class standards in reporting, financial planning and finance processes. We provide insights and analyses that help the bank make sound business decisions, whether in the areas of product development or customer profitability. We also provide capital management, business planning, forecasting, and tax and accounting advisory services. The bank is an Accredited Training Organisation (ATO) for the Singapore Qualification Programme under the Singapore Accountancy Commission and an Association of Chartered Certified Accountants (ACCA) Approved Employer.
To source, develop and operationalise the data pipelines required by algorithms and models addressing specific business problems in Portfolio Management and Balance Sheet Management.
To understand users' business requirements and develop solutions that address their problems, specifically the implementation of business logic and transformations.
To work closely with business leads to identify opportunities for collaboration and for developing data pipelines and model deployment strategies.
To operationalise pipelines into reusable libraries for the team.
To automate models by leveraging existing tools and frameworks.
Individual with a strong interest and natural curiosity in exploring various technical domains in the areas of data engineering and machine learning.
1-3 years of experience in applied statistical learning, machine learning and data science with tangible outcomes.
8-10 years of experience in data engineering, specifically on the sourcing, designing and development of pipelines.
Excellent written and verbal communication skills, especially the ability to explain how data pipelines are sourced and built.
A keen learner who seeks new knowledge in both business and technical domains, as well as proficiency in new tools, languages and techniques.
A strong team player who can also contribute independently.
Highly proficient in software development, with in-depth knowledge of PySpark, machine-learning libraries such as scikit-learn, TensorFlow, PyTorch and Keras, and big-data tools such as Spark, Hive and Alluxio.
Experience working with AWS and proficiency in BI tools such as QlikView are preferred.
A Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering or another quantitative discipline.
We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.