We are looking for an experienced Data Modeler with expertise in Securities Services (Middle Office, Back Office). The ideal candidate will have hands-on experience designing IT solutions that support these functions.
You will play a key role in designing, implementing, and optimizing IT solutions for post-trade securities processing while ensuring compliance with regulations.
Responsibilities:
* Analyze data objects and current data models of large and complex systems, specifically the back-office platform serving the securities and investment business of UC Italy and UC Germany.
* Analyze data objects and current data models (data dictionary) of the new back-office/custodian system; experience with our to-be target systems is preferred.
* Define, build, and manage business data entities and their relationships.
* Develop and maintain data models for all business domains of the bank's enterprise data architecture, ensuring consistency, accuracy, and scalability.
* Collaborate with business stakeholders, data analysts, and data engineers to understand data requirements and design appropriate data models.
* Create and manage data dictionaries, metadata, and documentation to ensure data governance and compliance with industry regulations.
* Design and implement data modeling best practices to optimize data storage, retrieval, and processing.
* Collaborate with the IT team to integrate data models into the bank's data management systems and infrastructure.
* Perform data profiling and analysis to identify data quality issues and implement improvements; monitor and evaluate data model performance to improve efficiency.
Qualifications:
* 5+ years of experience in IT projects with a focus on data architecture and engineering.
* Strong expertise in securities and investment data models, including settlement, trade processing, corporate actions, and reconciliation.
* Proficiency in designing and managing hybrid data architectures (GCP & on-premises), ensuring seamless data migration and integration.
* Hands-on experience with Google Cloud data services, including BigQuery, Cloud Storage, Spanner, Dataflow, Dataproc, and Pub/Sub.
* Expertise in designing and implementing data pipelines (ETL/ELT) for both batch (Apache Beam, Dataflow) and streaming (Kafka, Pub/Sub, Spark Streaming) use cases.
* Deep knowledge of data modeling (relational, NoSQL, graph), storage optimization, and partitioning strategies for high-performance analytics and processing.
* Experience with API-based integrations and real-time data streaming between systems.
* Strong proficiency in SQL and programming languages such as Python, Java, or Scala for data engineering and automation.
* Understanding of data governance, security, and compliance standards in financial services (e.g., GDPR, BCBS 239, MiFID II).
* Fluent in English and Italian; German is a plus but not mandatory.
* University degree in Computer Science or related field.