PETAR: AI that writes localized PET/CT findings
Even when advanced AI systems refuse to give dangerous instructions, their seemingly harmless answers can be reused to teach smaller models risky skills. A new study shows that output-level safety filters are not enough on their own. This matters because it affects how quickly powerful know‑how can spread beyond the models trained to withhold it.
Cornell University researchers report that a type of AI called a graph neural network can learn to solve classic routing puzzles on its own and produce answers in one shot. This matters because many real tasks, from delivery planning to chip design, boil down to such puzzles, where speed and solution quality both count.
Researchers have built a method to make artificial intelligence more reliable when it reads emotions in text, such as clinical notes, counselling chats and posts in online support groups. This matters because early triage and risk assessment often depend on what people write and how that writing is interpreted.
A research team has built an AI system that designs and improves safety tests for other AI models on its own. In trials, it found ways to make models break their own rules more often than methods designed by people. This matters because safety testing needs to keep pace with the systems it is meant to evaluate.