Lessons from red teaming 100 generative AI products
Microsoft,
Mar 04, 2025
In computing, 'red teaming' is the practice of probing software for faults by actively trying to break it. Accordingly, "AI red teaming has emerged as a practice for probing the safety and security of generative AI systems." This article (21 page PDF) describes Microsoft's process, including "our internal threat model ontology and eight main lessons we have learned."