CodeThreat|skillaudit.sh
CodeThreat AI Hub→ScanChecksGlossaryFor
Menu
CodeThreat AI Hub→ScanChecksGlossaryFor

Glossary

Security terms we use when auditing LLM skill files

Prompt Injection

An attack where malicious input overrides or manipulates an LLM's system instructions.

→

Data Exfiltration

Unauthorized transfer of data from a system to an external destination.

→

Privilege Escalation

Attempts to gain elevated access or execute commands with higher permissions than intended.

→

Supply Chain Attack

Compromise through malicious or hallucinated dependencies referenced in skill files.

→

Obfuscation

Techniques used to hide malicious content from human review.

→

Skill File

Configuration files that define behavior for AI coding assistants like Cursor and Windsurf.

→

← Back to scan

Supported by CodeThreat