The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against one another: one chatbot plays the adversary and attacks another chatbot by generating text designed to force it to buck its usual constraints and produce unwanted responses.
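To make the idea concrete, here is a minimal sketch of one round of such an adversarial loop. It is an illustration only, not the researchers' actual implementation: every name here (attacker, target, violates_policy, fine_tune) is a hypothetical stand-in for whatever models and safety classifier would be used in practice.

```python
from typing import Callable, List, Tuple


def adversarial_training_round(
    attacker: Callable[[], str],                 # adversary chatbot: writes jailbreak attempts
    target: Callable[[str], str],                # chatbot being hardened against attacks
    violates_policy: Callable[[str], bool],      # flags responses that break the rules
    fine_tune: Callable[[List[Tuple[str, str]]], None],  # updates the target model
    num_attempts: int = 100,
) -> int:
    """Run one round of adversarial training and return the number of
    successful attacks collected as new training examples."""
    examples: List[Tuple[str, str]] = []
    for _ in range(num_attempts):
        prompt = attacker()          # adversary generates an attack prompt
        reply = target(prompt)       # target responds to the attack
        if violates_policy(reply):   # attack succeeded: target misbehaved
            # Pair the successful attack with the desired safe refusal,
            # so the target learns to resist this kind of prompt.
            examples.append((prompt, "I can't help with that."))
    fine_tune(examples)              # train the target on the collected attacks
    return len(examples)
```

The design idea is the loop itself: successful attacks are not discarded but recycled as training data, so each round the target gets harder to jailbreak and the adversary must find new tricks.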