Client Story
Leading Custodial Bank
Nvidia

AHEAD worked with one of the world’s largest custodial banks to establish an AI hub: a massively parallel computing environment that democratized access to training and inference resources for more than 90 teams across the bank.

The bank aimed to lead with AI development and integration across its banking, asset management, and securities businesses. Its goals were to:

  • Build a full-stack internal platform to support ideation and experimentation
  • Leverage sensitive and critical information in the pursuit of unique enterprise value
  • Develop efficient environments for training and inference
  • Create standards for deploying these resources at global scale
  • Drive exposure to robust AI tools and environments across business units
  • Develop a team with extensive engineering capacity across these platforms

AHEAD designed and deployed a parallel computing environment based on the Nvidia SuperPOD architecture. The AHEAD team worked according to four principles:

  • Balance Nvidia reference architecture guidelines with BNYM standards
  • Deploy a powerful, scalable, and flexible environment for private AI endpoints
  • Design within the environmental confines of existing data center infrastructure
  • Focus on operationalizing access to resources

After completing testing, validation, and integration with scheduling tools, AHEAD and Nvidia handed the platform over to the AI platform owner and the bank’s data science and engineering teams.
