AI Red Team Build

  • Wyatt Partners
  • Jun 28, 2025
  • Full-time, I.T. & Communications

Job Description

Interested in improving the safety of Generative AI models?

A large investment firm building its own LLMs is looking to establish an AI Red Team to identify vulnerabilities, biases, and safety concerns in its models.

You will test the security and robustness of these systems and assess their potential to cause harm to humans.

Ideal candidates may come from a traditional Pen Testing background in Financial Services but have recently transitioned into working with ML & AI systems.

We are also very open to hearing from Academics and AI Research Engineers interested in Red Teaming.