Security terms we use when auditing LLM skill files
An attack where malicious input overrides or manipulates an LLM's system instructions.
Unauthorized transfer of data from a system to an external destination.
Attempts to gain elevated access or execute commands with higher permissions than intended.
Compromise through malicious or hallucinated dependencies referenced in skill files.
Techniques used to hide malicious content from human review.
Configuration files that define behavior for AI coding assistants like Cursor and Windsurf.