Brock Walters
AI-powered coding assistants, autonomous security agents, and extension-based development tools are transforming how engineering teams operate. The innovations are real, but they introduce a new category of shadow IT with significant security and compliance implications.
For CTOs, CIOs, and CISOs, understanding and controlling this risk must become a priority.
AI agents and extensions operate across multiple vectors:
Static artifacts — Skills, plugins, configurations installed across user directories and repositories
Runtime processes — Active AI tools executing code during development and workflows
Extension registries — VSCode and IDE extensions that persist and execute automatically
Network communication — Agents and tools communicating with external APIs, cloud services, and command and control infrastructure
Traditional SIEM alerts rarely catch these activities because individual signals appear normal during development workflows.
An effective strategy must combine multiple detection vectors:
AI tools often install persistent artifacts across user home directories and project repositories. Path-based queries scan for:
~/.claude/skills, ~/.openclaw/skills, ~/.gemini/skills
~/.claude/.plugins
.vscode/extensions
.claude/ directories
This captures dormant threats that may not appear in process lists.
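As a minimal sketch, a path-based sweep of one user's home directory could look like the following Python. The directory names come from the list above; anything beyond them would be an assumption to adapt per environment.

```python
from pathlib import Path

# Artifact directories commonly created by AI coding tools.
# Paths are taken from the list above; extend for your environment.
ARTIFACT_PATHS = [
    ".claude/skills",
    ".openclaw/skills",
    ".gemini/skills",
    ".claude/.plugins",
    ".vscode/extensions",
]

def scan_home_for_artifacts(home: Path) -> list[Path]:
    """Return the artifact directories that exist under one user's home."""
    return [home / rel for rel in ARTIFACT_PATHS if (home / rel).exists()]

# Usage (current user): scan_home_for_artifacts(Path.home())
```

In a fleet-wide deployment the same check would run against every user home on every host, rather than a single machine.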
Active AI tools manifest as running processes with recognizable signatures:
Process monitoring combined with user context provides real-time visibility into tool usage.
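A sketch of that signature matching, with user context attached to each event, might look like this. The signature strings are illustrative assumptions, not a vetted detection list.

```python
from dataclasses import dataclass

# Process-name signatures of common AI tools (illustrative, not exhaustive).
AI_PROCESS_SIGNATURES = ("claude", "gemini", "copilot", "aider", "cursor")

@dataclass
class ProcessEvent:
    pid: int
    name: str
    cmdline: str
    user: str  # user context travels with every event

def flag_ai_processes(events: list[ProcessEvent]) -> list[ProcessEvent]:
    """Keep events whose name or command line matches an AI-tool signature."""
    return [
        e for e in events
        if any(sig in e.name.lower() or sig in e.cmdline.lower()
               for sig in AI_PROCESS_SIGNATURES)
    ]
```

Feeding this function a live process snapshot turns raw process lists into attributable tool-usage events.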
IDE extensions and browser plugins can execute code even when parent applications are inactive:
Extension catalogs provide version metadata for vulnerability correlation and attack surface analysis.
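One way to build such a catalog is to parse the extension install directory itself. VS Code names each extension folder `<publisher>.<name>-<version>`; the example folder names below are hypothetical.

```python
import re
from pathlib import Path

# VS Code installs each extension in a folder named
# "<publisher>.<name>-<version>", e.g. "ms-python.python-2024.0.1".
# (Version suffixes such as platform tags would need a looser pattern.)
EXT_DIR_RE = re.compile(
    r"^(?P<publisher>[^.]+)\.(?P<name>.+)-(?P<version>\d+\.\d+\.\d+)$"
)

def catalog_extensions(ext_root: Path) -> list[dict]:
    """Build a version catalog from a ~/.vscode/extensions directory."""
    catalog = []
    for entry in sorted(ext_root.iterdir()):
        m = EXT_DIR_RE.match(entry.name)
        if entry.is_dir() and m:
            catalog.append(m.groupdict())
    return catalog
```

The resulting publisher/name/version records are what you would join against vulnerability feeds.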
AI agents communicate with cloud services, API endpoints, and local infrastructure:
Network telemetry reveals who's talking to what, and when.
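A small sketch of that triage: given connection records of (process, remote host, port), split traffic into known AI endpoints versus everything else. The hostname watch list here is illustrative and would need to be maintained.

```python
# Hostnames of well-known AI service APIs (illustrative; maintain your own watch list).
AI_ENDPOINTS = {
    "api.anthropic.com",
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def classify_connections(conns: list[tuple[str, str, int]]) -> dict[str, list[str]]:
    """Split (process, remote_host, port) records into AI-endpoint vs. other traffic."""
    result: dict[str, list[str]] = {"ai": [], "other": []}
    for proc, host, port in conns:
        bucket = "ai" if host in AI_ENDPOINTS else "other"
        result[bucket].append(f"{proc} -> {host}:{port}")
    return result
```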
Effective threat hunting requires a layered approach:
CISOs should define clear alerting thresholds:
Rely on contextual correlation rather than single-signal alerts. An AI tool run by an authenticated developer performing authorized development work may generate multiple signals. Alerting should prioritize unusual patterns or violations of access controls.
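One way to express contextual correlation is a weighted score with an alerting threshold. The signal names, weights, and threshold below are purely illustrative assumptions to be calibrated against your environment's baseline.

```python
# Weighted signal correlation. Weights and threshold are illustrative only;
# calibrate them against your environment's normal development baseline.
SIGNAL_WEIGHTS = {
    "artifact_on_disk": 1,
    "ai_process_running": 1,
    "external_ai_endpoint": 1,
    "unknown_extension": 2,
    "off_hours_activity": 3,
    "unauthorized_user": 5,  # access-control violations weigh heavily
}
ALERT_THRESHOLD = 5

def should_alert(signals: set[str]) -> bool:
    """Alert when correlated signals cross the threshold, never on one benign signal."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals) >= ALERT_THRESHOLD
```

Under these assumed weights, an authenticated developer whose session shows an on-disk artifact, a running tool, and normal API traffic scores 3 and stays quiet, while the same activity from an unauthorized account crosses the threshold.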
When AI tool violations are detected, the response sequence should be supported by clear evidence trails for audit and compliance purposes.
Board-level visibility requires focused metrics, reviewed quarterly with the board and technical teams.
AI and autonomous tooling is no longer optional for modern engineering teams. The question isn't whether to adopt these tools — it's how to govern them effectively.
Organizations that implement detection frameworks now will be better positioned to govern these tools effectively.
The time to establish control is now, before AI tools become entrenched. With a comprehensive detection framework in place, leadership can enable innovation while maintaining visibility and control.
For more on this topic, check out Threat Hunting AI Agents and Autonomous Tooling: What Your Board Needs to Know from our very own VP of Security Solutions, Dhruv Majumdar.
In our next article (part 3 of this series), Fleet's Adam Baali will go beyond the overview given here and provide mitigations that security teams can use today.