Content-type: text/html Downes.ca ~ Stephen's Web ~ Lessons from red teaming 100 generative AI products

Stephen Downes

Knowledge, Learning, Community

In computing, the practice of 'red teaming' is employed to find faults in software to try whatever can be done to break it. So "AI red teaming has emerged as a practice for probing the safety and security of generative AI systems." This artcile (21 page PDF) described Microsofts process including "our internal threat model ontology and eight main lessons we have learned." 

Today: Total: [Direct link] [Share]


Stephen Downes Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Apr 12, 2025 1:41 p.m.

Canadian Flag Creative Commons License.

Force:yes