Yeeth - 2025 Year in Review
In 2025, Yeeth shifted from exploratory research to operational security work, focusing on malware detection in OpenVSX extensions and reducing the time between discovery and takedown.
We spent most of our time actively fighting threat actors, building out dev-guard, and publishing public research. We worked closely with the community and marketplaces while investing heavily in internal tooling and research. Our work on the OpenVSX marketplace reduced the volume of extensions needing manual review from 50-100 per day to around 3-5 high-signal alerts that the pipeline surfaces for human review.
Going into 2026, we remain focused on defending the community from threats at large and publicly sharing our knowledge and capabilities.
Three Things We Did This Year
1. Research and Capabilities
We invested heavily in research and capabilities, delivering the DevGuard website, the dev-guard extension, and the Yeeth blog.
To support this work, we built an AI-powered pipeline for detecting malicious code in OpenVSX extensions. The system scans extensions at scale and flags suspicious patterns using agentic workflows. We focused on adding value to the community by reporting findings directly to the OpenVSX marketplace rather than closely guarding them.
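At a high level, pipelines like this often pair a cheap static pass with a more expensive review stage. The sketch below is illustrative only: the indicator patterns, function names, and escalation logic are assumptions for the example, not our production pipeline.

```python
import re

# Hypothetical static indicators: cheap regexes that narrow the set of
# extensions the (much more expensive) agentic review stage must examine.
SUSPICIOUS_PATTERNS = [
    re.compile(r"child_process|eval\(|new Function\("),        # dynamic execution
    re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"),             # hard-coded IP endpoints
    re.compile(r"atob\(|Buffer\.from\([^)]*,\s*['\"]base64"),  # decoding hidden blobs
]

def static_triage(source: str) -> list[str]:
    """Return the indicator patterns that fire on this extension's source."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(source)]

def triage(extensions: dict[str, str]) -> list[str]:
    """Extensions whose source trips any static indicator get escalated
    to the next review stage; clean ones are passed through."""
    return [name for name, src in extensions.items() if static_triage(src)]

flagged = triage({
    "good-theme": "export const colors = { bg: '#fff' };",
    "shady-helper": "const cp = require('child_process'); cp.exec(cmd);",
})
# flagged == ["shady-helper"]
```

The point of the staged design is economics: the static pass is nearly free per extension, so the costly agentic analysis only runs on the small flagged subset.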
We partnered with community projects facing malware infiltration, identifying and reporting threats as we found them. These engagements reinforced Yeeth's research and challenged some of our assumptions about our tooling. Several insights from this work directly shaped how we approach research today.
2. Impact
By focusing primarily on helping the community, we connected with the OpenVSX team and found ourselves in the best position to help them directly. Rather than just reporting threats on top of the marketplace, we were invited to help secure the marketplace itself. We are deeply grateful to the Eclipse Foundation for trusting us with this work. This partnership represents exactly what we set out to do: turn research into real-world protection for developers everywhere.
3. Community Content
We experimented with sharing more of our thinking publicly. This included research around malware families like SleepyDuck and GlassWorm, as well as practical security techniques like Aho-Corasick pattern matching. This helped clarify what we want Yeeth to represent publicly.
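For readers unfamiliar with it, Aho-Corasick builds an automaton over a set of patterns once, then matches all of them in a single linear pass over the input. A minimal self-contained sketch follows; the indicator strings at the bottom are hypothetical examples, not an actual ruleset.

```python
from collections import deque

class AhoCorasick:
    """Multi-pattern matcher: build once, scan many inputs in linear time."""

    def __init__(self, patterns):
        # Trie: each state maps char -> next state; state 0 is the root.
        self.goto = [{}]
        self.fail = [0]
        self.out = [[]]  # patterns ending at each state
        for pat in patterns:
            self._add(pat)
        self._build_failure_links()

    def _add(self, pat):
        state = 0
        for ch in pat:
            if ch not in self.goto[state]:
                self.goto.append({})
                self.fail.append(0)
                self.out.append([])
                self.goto[state][ch] = len(self.goto) - 1
            state = self.goto[state][ch]
        self.out[state].append(pat)

    def _build_failure_links(self):
        # BFS from the root; a state's failure link points to the longest
        # proper suffix of its path that is also a prefix of some pattern.
        queue = deque(self.goto[0].values())
        while queue:
            state = queue.popleft()
            for ch, nxt in self.goto[state].items():
                queue.append(nxt)
                f = self.fail[state]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                # Inherit outputs reachable via the failure link.
                self.out[nxt] += self.out[self.fail[nxt]]

    def scan(self, text):
        """Yield (end_index, pattern) for every match in text."""
        state = 0
        for i, ch in enumerate(text):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for pat in self.out[state]:
                yield i, pat

# Hypothetical indicator strings an extension scanner might look for.
matcher = AhoCorasick(["child_process", "eval(", "XMLHttpRequest"])
hits = list(matcher.scan('const cp = require("child_process"); eval(payload)'))
```

Because the automaton is built once, scanning thousands of extension files against thousands of indicators stays linear in the input size, which is what makes it practical at marketplace scale.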
What’s Next
We see a future where AI-powered security monitoring becomes essential for defending against malicious behavior in applications where source code is readily available and users must run untrusted code.
This pattern exists across many ecosystems: IDE extensions (VS Code, OpenVSX), package managers (npm, PyPI, crates.io), browser extensions, GitHub Actions, and CMS plugins. In each case, developers install and execute code from third parties. The code is there to read, but knowing what it actually does is hard.
The volume of published content is growing faster than any team can manually review. Traditional tooling also struggles to keep up with new malware since rulesets are often defined after the threat is already identified.
AI changes this equation. Unlike static rules, AI can reason about behavior and intent. It generalizes from known threats to catch novel variants without needing an exact signature for each one. It filters noise, reduces the search space, and surfaces high-signal alerts. This allows security experts to focus their effort where it matters most. We believe this shift will define how security teams operate going forward.
From our work so far, it’s clear that context is the biggest challenge in agentic analysis of malicious code. Understanding what code does requires understanding where it runs, what it has access to, and what behavior is normal.
There’s also a practical limit: flooding a model with tool outputs degrades quality and quickly hits context window limits. The challenge is getting the right information to the model without overwhelming it. We want to solve this problem by building smarter, more context-aware tooling into our workflow.
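One way to frame that problem is to treat the context window as a budget and pack ranked tool outputs into it. This is a toy sketch under assumed heuristics: the roughly-four-characters-per-token estimate, the scoring interface, and the truncation policy are all illustrative, not a description of our workflow.

```python
# Toy context budgeting: pack the highest-relevance tool outputs into a
# fixed token budget, truncating the first output that doesn't fully fit.

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption, not a tokenizer).
    return max(1, len(text) // 4)

def pack_context(outputs: list[tuple[float, str]], budget: int) -> list[str]:
    """outputs: (relevance_score, text) pairs. Returns the highest-scoring
    outputs that fit, truncating the final entry to the remaining budget."""
    packed, used = [], 0
    for score, text in sorted(outputs, key=lambda o: -o[0]):
        cost = approx_tokens(text)
        if used + cost <= budget:
            packed.append(text)
            used += cost
        else:
            remaining_chars = (budget - used) * 4
            if remaining_chars > 0:
                packed.append(text[:remaining_chars] + " [truncated]")
            break
    return packed

ctx = pack_context([(0.9, "a" * 40), (0.5, "b" * 400), (0.1, "c" * 8)], budget=20)
```

Even a crude policy like this illustrates the trade-off: every token spent on a low-relevance tool output is a token unavailable for the evidence the model actually needs to reason about.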
In 2026, our focus is narrow and intentional:
- Continue to help the community and publish research
- Reduce noisy false positives in favor of narrow, high-quality findings
- Continually improve our agentic malicious-code review pipeline with the latest AI techniques, and expand our coverage to additional ecosystems
- Turn dev-guard tooling into a public artifact
If any of this resonates, feel free to reach out. Otherwise, we’ll be back to building.