8 Mar 2022

Senior Data Engineer at National Merchant Bank (NMB)



Job Description


Senior Data Engineer (1 Position)

Job Purpose:
  • To build and maintain optimized and highly available data pipelines that facilitate deeper analysis and reporting.
Main Responsibilities:
  • Design, implement, and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources into a data warehouse, using SQL and ‘big data’ technologies, driven by internal process improvements, automation, and optimization of data delivery.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Design and build solutions that empower users to meet their self-service analytics needs.
  • Assemble large, complex data sets that meet functional / non-functional business requirements and design custom ETL and ELT processes.
  • Implement enhancements and new features across data infrastructure systems, including the data warehouse, ETL, master data management, and BI platforms.
  • Maintain the overall data infrastructure systems
  • Troubleshoot data issues and data platforms, and perform root cause analysis.
  • Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology
  • Design and build an optimal organizational data infrastructure and architecture for the extraction, transformation, and loading of large data volumes from a wide variety of sources using SQL and Azure/AWS big data technologies.
  • Profile and analyze data for the purpose of designing scalable solutions
  • Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
  • Work with architecture and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
  • Anticipate, identify and solve issues concerning data management to improve data quality.
  • Work with analytics tools that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
  • Advise on the best tools/services/resources to build robust data pipelines for data ingestion, connection, transformation, and distribution
  • Transform data and engineer new features for machine learning models
  • Perform deep-dive analysis including the application of advanced analytical techniques to solve some of the more critical and complex business problems
  • Create new methods to visualize core business metrics through reports, dashboards, and analytics tools.
  • Work with different stakeholders to assist with data-related technical issues and support their data infrastructure needs.
  • Work with business users to understand the business domain and troubleshoot product issues.
Knowledge and Skills:
  • Understanding of ETL framework and tools
  • Understanding of reporting & data visualization tools
  • Excellent analytical, creative and problem-solving skills.
  • Excellent verbal and written communication skills with the ability to interact effectively with people at all levels.
  • Ability to work effectively within a team.
  • Ability to prioritize, meet deadlines and work under pressure
  • Ability to work independently with minimal supervision
  • Attention to detail
Qualifications and Experience:
  • BSc in Computer Science, Computer Engineering, Data Science, or a relevant field.
  • Strong programming experience in SQL, Python, and R.
  • 5 years of experience in a data engineering role
  • 3 years of experience with big data
  • Experience in building and optimizing data warehouse and big data pipeline architectures
  • Experience with the design, development, and maintenance of ETL tools
  • Experience with maintenance and troubleshooting of BI platforms; SQL and NoSQL databases
  • Experience in data mining & machine learning
  • Successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, etc.
Experience: 5 years
Job opening date: 24-Feb-2022
Job closing date: 10-Mar-2022
Method of Application

Submit your CV and application on the company website.

Closing Date: 10th March, 2022.











