
What is 'red teaming' and how can it lead to safer AI?
Red teaming is critical for AI safety: it combines clear policies, creative testing and ongoing evaluation to uncover and manage real-world AI risks.
Master's in Artificial Intelligence
Responsible AI researcher at Infosys