
Senior data engineer

Jose Merciline
On-site
New York, United States

Job Description 



Company and Department Profile

Our company is a leading global financial services firm providing a wide range of investment banking, securities, investment management and wealth management services.

Technology works as a strategic partner with our company's business units and the world's leading technology companies to redefine how we do business in ever more global, complex, and dynamic financial markets. As a market leader, the talent and passion of our people are critical to our success. Our company provides a superior foundation for building a professional career - a place for people to learn, to achieve and to grow.

Operations Technology is responsible for building and operating the technology platform catering to critical business processes at our company, including Settlements, Confirmations, Regulatory Reporting, Position Keeping, Corporate Actions processing and other post-trade functions. The Firm operates at scale, with up to 40 million trades processed on peak volume days and hundreds of billions of dollars of daily settlements, with activity ongoing in multiple countries and currencies across the globe. Our Operations plant handles a great breadth of financial products across equity and fixed income, from cash products to complex derivatives and loans.

Team Profile 

The Global Settlements technology team has a significant presence in Mumbai, New York and Montreal. We are organized into Agile delivery teams closely aligned with Operations product owners, and through our close partnership with stakeholders and users there is an opportunity to see directly the business benefit of the solutions we develop. We believe in the forward momentum and progress of technology and aim to harness the very best in new technology and systems design to deliver exceptional capabilities to our businesses.

Role

We are looking for a Senior Data Engineer.

The role requires taking ownership of end-to-end delivery in an Agile team setting. You will interact directly with users and with other technologists across the firm to build the requisite frameworks and applications for our multi-year strategic Settlement platform renovation initiative. If you have a track record of data modelling, implementing high-volume data pipelines and building business intelligence, we would like to meet you.

Skills Required:

- 5-14 years of experience with ADF, Azure SQL, Azure Databricks, Python, any one of Teradata/Redshift/Greenplum/Snowflake, and Spark/Kafka

- The candidate will work in the team as a data engineer, building data pipeline solutions along with Enterprise Data Warehouse and Azure Data Lake solutions in the cloud using the above skill set.

- Good experience with relational SQL, including Postgres/DB2/Azure SQL/Snowflake/Spark SQL, etc.

- Experience building and optimizing "big data" pipelines, architectures and data sets using big data tools (ADF, Spark, Kafka, Azure Databricks, etc.) is a big plus.

- Experience with data modelling of relational and dimensional data models

- Experience with Massively Parallel Processing (MPP) databases such as Teradata/Redshift/Greenplum/Azure Synapse/Snowflake is a big plus.

- Good analytical, problem-solving, communication and interpersonal skills.

- Self-motivated, with the ability to work consistently and efficiently to achieve end goals.

- Interest in learning the business functionality

- Experience supporting and working with cross-functional teams in a dynamic environment

- Skilled in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.

- Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores.

Skills Desired:

- ADF, Azure SQL, Python, any one of Teradata/Redshift/Greenplum/Snowflake, Data Engineering, either of Spark/Kafka, Azure Databricks