Agent Skills in the Wild: 26% of AI add-ons show security risks

Agent Skills in the Wild: 26% of AI add-ons show security risks

AI agents get their superpowers from "skills" - plug-ins that add instructions and code. A new large-scale study of 31,132 skills from two marketplaces finds a big security gap.

  • 26.1% of skills had at least one vulnerability across 14 patterns in four buckets: prompt injection, data exfiltration, privilege escalation, and supply chain risks.
  • Data exfiltration (13.3%) and privilege escalation (11.8%) were most common; 5.2% showed high-severity, likely malicious behavior.
  • Skills that bundle executable scripts were 2.12x more likely to be vulnerable than instruction-only skills.
  • The team built SkillScan (86.7% precision, 82.5% recall) and released an open dataset/toolkit.

Why it matters: skills often run with implicit trust. Without guardrails, they can siphon data, abuse permissions, or smuggle in supply-chain attacks at scale.

Call to action: adopt capability-based permissions, sandbox execution, and mandatory vetting before publishing or installing skills. Paper: https://arxiv.org/abs/2601.10338v1 (Yi Liu et al.)

Paper: https://arxiv.org/abs/2601.10338v1

Register: https://www.AiFeta.com

AI Cybersecurity AIAgents LLM Security SupplyChain DataExfiltration PromptInjection AppSec

Read more