About LOLAI

Open-Source Documentation

Mission

LOLAI (Living Off The Land AI) documents and catalogs AI agents and assistants that can be weaponized, abused, or exploited in both enterprise and personal computing environments. As AI agents become more prevalent and more capable, understanding their security implications is essential for defenders and security professionals.

What We Track

We focus on documenting AI agents with the following characteristics:

  • System Access: Agents with file system, terminal, or API access
  • Code Execution: Ability to execute arbitrary code or commands
  • Autonomous Behavior: Agents that can operate without constant human oversight
  • Network Capabilities: Tools with network communication abilities
  • Plugin/Extension Systems: Platforms that support third-party extensions

Inspiration

LOLAI is inspired by successful "Living Off The Land" projects:

  • GTFOBins - Unix binaries that can be used to bypass local security restrictions
  • LOLBAS - Living Off The Land Binaries, Scripts and Libraries for Windows
  • LOLDrivers - Windows drivers used by attackers to bypass security software
  • LOLRMM - Remote monitoring and management tools used in attacks

Methodology

Each agent entry includes:

  • Attack Vectors: Documented methods of abuse with practical examples
  • Technical Details: Capabilities, permissions, and system access
  • MITRE ATT&CK Mapping: Alignment with industry-standard attack frameworks
  • Detection Methods: Artifacts, logs, and indicators of compromise
  • Prevention Strategies: Mitigation techniques and best practices
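The entry fields above can be thought of as a structured record. A minimal sketch in Python follows; the field names and example values are illustrative assumptions, not the project's actual schema:

```python
# Hypothetical sketch of a LOLAI agent entry as structured data.
# Field names and all example values are illustrative only; they do
# not reflect the project's real schema or any real agent.
agent_entry = {
    "name": "ExampleAgent",  # hypothetical agent name
    "attack_vectors": [
        {
            "technique": "prompt injection via untrusted files",
            "example": "agent reads a document containing hidden instructions",
        }
    ],
    "technical_details": {
        "capabilities": ["file system access", "code execution"],
        "permissions": ["user-level"],
    },
    "mitre_attack": ["T1059"],  # T1059: Command and Scripting Interpreter
    "detection": ["process creation logs", "agent audit logs"],
    "prevention": ["sandboxing", "least-privilege configuration"],
}

# Minimal completeness check mirroring the methodology fields above.
required = {
    "attack_vectors",
    "technical_details",
    "mitre_attack",
    "detection",
    "prevention",
}
assert required <= agent_entry.keys()
```

A structured layout like this makes entries easy to validate and to render consistently across the catalog.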

Use Cases

LOLAI serves multiple audiences:

  • Security Teams: Assess the risks posed by AI agents in their environments
  • Penetration Testers: Reference for red team engagements
  • Developers: Security considerations when building AI-powered tools
  • Researchers: Catalog of AI security techniques and attack patterns
  • Policy Makers: Information for AI governance and compliance

Contributing

LOLAI is an open-source community project. We welcome contributions:

  • Submit new AI agents with documented attack vectors
  • Improve existing documentation
  • Add detection methods and IOCs
  • Report issues or suggest improvements

Visit our GitHub repository to get started.

Disclaimer

⚠️ Important: This project is for educational and defensive purposes only. The techniques documented here should only be used in authorized security testing, research, or defensive contexts. Unauthorized access to computer systems is illegal.