ReferIndia News ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

ReferIndia News

Need a stunning portfolio website?

ReferIndia is your one-stop solution for design, development, and deployment—fast and professional!

Create Now
News Image

ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

Published on: Dec. 1, 2025, 10:38 a.m. | Source: Times Now

Recent research from Italy's Icaro Lab has revealed significant weaknesses in AI models like ChatGPT and Gemini, allowing attackers to bypass safety measures by framing harmful requests as poetry. The study tested 20 harmful prompts in poetic form, achieving a 62% success rate across various AI systems, including Moonshot AI and Mistral AI. , Technology & Science, Times Now

Checkout more news
Ad Banner

🧠 AI की ताक़त से तैयार करें प्रोफेशनल वेबसाइट — स

कोई कोडिंग नहीं, कोई टेक्निकल झंझट नहीं — अपने बिज़नेस को आज ही डिजिटल दुनिया से जोड़ें

Start Now
ReferIndia News contact