Documentation

About this Index

The Skills Security Index is a comprehensive security resource for the AI agent ecosystem. As AI agents become more prevalent, the modular "skills" they use represent a significant new attack surface. This index provides transparency into the security posture of shared AI skills.

Purpose: Our goal is to help developers and security teams understand the risks associated with third-party AI skills, promoting a "secure-by-default" approach to agent development.

What are Skills?

Skills are modular units of functionality that extend the capabilities of an AI agent. A skill typically consists of instructions, tool definitions, and sometimes custom code that together allow the agent to perform specific tasks.

How Skill risk is determined

Our risk assessment process is a multi-dimensional security analysis that combines automated scanners, specialized AI models, and manual review. We follow a practical security philosophy: we don't just look for "dangerous" code; we evaluate instructions within the context of the skill's intended purpose.

The Three Pillars of Analysis

  • Instructional Intent: We categorize instructions into 10 key areas (e.g., Code Execution, Web Access, File System). We ask: Are these actions justified for the skill's stated purpose?
  • Vulnerability Findings: We sweep for 12+ specific risk categories, including Prompt Injection, Credential Exposure, Data Exfiltration, and Supply Chain risks.
  • Permission Mapping: We map the requested tool access against the principle of least privilege, ensuring the skill doesn't ask for more "power" than it needs.
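To make the three pillars concrete, they could be tied together in a single per-skill record, roughly like the following. This is a hypothetical sketch; the class, field names, and example categories are illustrative and do not reflect our actual schema.

```python
from dataclasses import dataclass

# Hypothetical record combining the three analysis pillars for one skill.
# All names and categories here are illustrative, not the real schema.
@dataclass
class SkillAnalysis:
    # Pillar 1: which instruction areas the skill touches, and whether
    # each is justified by the skill's stated purpose.
    intent: dict      # e.g. {"Web Access": {"used": True, "justified": True}}
    # Pillar 2: findings from the vulnerability categories.
    findings: list    # e.g. ["Credential Exposure"]
    # Pillar 3: requested tool access, for least-privilege mapping.
    permissions: set  # e.g. {"Web Access", "File System"}
```

A record like this lets each pillar be scored independently before the results are rolled up into a single rating.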

Risk Level Definitions

The final rating is determined by the highest risk identified in any analysis module:

High / Critical

Direct paths to data exfiltration, unvetted remote code execution, or hardcoded production credentials.

Medium

Broad access without sufficient guardrails, or dependency on unvetted third-party services.

Low

Well-scoped instructions, standard activities for its purpose, and minimal potential "blast radius".

Pass

No detectable risks outside of standard, expected agent behavior with minimal permissions.
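The "highest risk wins" rule can be sketched as a simple maximum over an ordered scale. This is illustrative only (it treats High and Critical as adjacent steps on one scale); the real aggregation logic may differ.

```python
# Ordered risk scale, least to most severe (illustrative).
RISK_ORDER = ["Pass", "Low", "Medium", "High", "Critical"]

def overall_rating(module_ratings):
    """Return the worst risk level found in any analysis module."""
    if not module_ratings:
        return "Pass"
    return max(module_ratings, key=RISK_ORDER.index)
```

For example, a skill whose modules score Low, Medium, and Low would receive an overall rating of Medium.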

What are Capabilities?

Capabilities represent specific high-level actions or functionalities that an AI skill can perform. They are detected through automated analysis of the skill's instructions, code, and metadata.

Detection: Our scanning engine uses LLM-based analysis (Gemini Pro) to identify instructional intent that aligns with known capability categories like "Web Access", "File Operations", or "Code Execution".

Risk Determination: Each capability is assigned a risk level based on its potential for misuse or data exposure. For example, "File Deletion" is high risk, while "Read-only Web Access" might be low or medium risk depending on context.
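Context-dependent capability risk might be modeled as a baseline level that is bumped up when the skill's stated purpose does not justify the capability. The table and function below are a hypothetical sketch; the capability names and thresholds are invented for illustration.

```python
# Hypothetical baseline risk per capability (illustrative values).
CAPABILITY_BASELINE = {
    "File Deletion": "High",
    "Code Execution": "High",
    "Read-only Web Access": "Low",
}

ORDER = ["Low", "Medium", "High", "Critical"]

def capability_risk(capability, justified_by_purpose):
    """Start from the capability's baseline risk and raise it one
    step when the stated purpose does not justify the capability."""
    base = CAPABILITY_BASELINE.get(capability, "Medium")
    idx = ORDER.index(base)
    if not justified_by_purpose:
        idx = min(idx + 1, len(ORDER) - 1)
    return ORDER[idx]
```

Under this model, read-only web access in a research skill stays Low, while the same capability in a skill with no stated need for it rises to Medium.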

What are Findings?

Findings are specific security concerns or potential vulnerabilities identified during the skill's analysis. They provide a granular view of why a skill might be considered risky.

Detection: Findings are generated by comparing the skill's behavior and instructions against a set of security best practices and known attack patterns (like prompt injection or insecure data handling).

Risk Determination: The overall risk of a finding is determined by its severity (critical, high, medium, low) and the likelihood of exploitation.
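One simple way to combine severity with likelihood is to let likelihood shift the severity one step up or down. This is an illustrative sketch under assumed likelihood labels ("unlikely", "possible", "likely"), not our actual scoring formula.

```python
# Severity scale, least to most severe (matches the levels above).
SEVERITIES = ["low", "medium", "high", "critical"]

def finding_risk(severity, likelihood):
    """Adjust a finding's severity by its exploit likelihood:
    'likely' bumps it up one step, 'unlikely' down one step."""
    idx = SEVERITIES.index(severity)
    idx += {"unlikely": -1, "possible": 0, "likely": 1}[likelihood]
    return SEVERITIES[max(0, min(idx, len(SEVERITIES) - 1))]
```

So a high-severity finding that is likely to be exploited would be treated as critical, while a critical finding that is unlikely to be exploitable would drop to high.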

What are Permissions?

Permissions refer to the level of access a skill requires from the host environment or the user's data to function.

Detection: Permissions are identified by analyzing the tools, environment variables, and API surface area defined in the skill's configuration files (like risk.json or SKILL.md).

Risk Determination: Risk is assessed based on the principle of least privilege. A skill requesting broad access (e.g., "Full Filesystem Access") is flagged with a higher risk than one with specific, limited permissions.
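A least-privilege check of this kind might be sketched as follows. The permission names are hypothetical; the point is the two comparisons: broad grants, and anything requested beyond what the skill actually needs.

```python
# Permissions considered inherently broad (illustrative names only).
BROAD_PERMISSIONS = {
    "Full Filesystem Access",
    "Unrestricted Network Access",
    "Arbitrary Code Execution",
}

def flag_permissions(requested, needed):
    """Flag broad grants and any access requested beyond what the
    skill's stated purpose requires (least privilege)."""
    return {
        "broad_grants": requested & BROAD_PERMISSIONS,
        "excess_grants": requested - needed,
    }
```

A skill that requests full filesystem access but only needs to read project docs would be flagged on both counts.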

How do I get my Skill analyzed?

To get your skill analyzed, you can submit a request via our Skill Scan Request page. Currently, we automatically scan skills from popular repositories, but manual submissions are prioritized for deep analysis.

How do I report a problem with a skill?

If you encounter a security issue, a false positive in our analysis, or any other problem with a skill listed here, please use the "Report a mistake" button located at the bottom of the skill's detail page. You can also contact our security team directly.

For more details on your legal rights and obligations, please refer to our Terms of Use.