Over the past year, security researchers have found hundreds of thousands of malicious packages in public registries that developers download and run, including the transitive dependencies installed automatically alongside them.
Meanwhile, high-profile attacks like Shai-Hulud show that attackers are no longer focused exclusively on production infrastructure. Instead, they are targeting the developer workstation itself, deliberately shifting the attack surface upstream.

If you think about what’s sitting on a typical developer’s laptop, it’s no wonder that they’ve become attack vectors. We’re talking about:
Write access to multiple repositories.
API keys for third-party services.
Database credentials for testing.
SSH keys to staging or production.
Cloud provider credentials.
VPN access.
Local copies of sensitive source code.
It makes economic sense for attackers: why spend months trying to breach hardened production systems when compromising a single developer workstation can provide direct access to the codebase and CI/CD pipeline, and potentially a route to inject malicious code into products used by millions?
Developer workstations have always had direct access to repositories; what’s changed is the complexity of the systems surrounding them and how closely everything is now interconnected.
Several factors have been behind this shift:
Remote and hybrid work pushed development outside traditional network perimeters.
The explosion of npm packages, IDE extensions, plugins, and other developer tools created a sprawling supply chain.
AI coding assistants introduced entirely new data flows that many organizations don’t have security policies for.
Developers often operate with elevated privileges to do their jobs properly.
The following attacks are part of a broader pattern of threat actors deliberately targeting developer environments.
Multiple malicious versions of the widely used nx build system package were published to the npm registry in an attack named “s1ngularity.” The attackers embedded malware that systematically searched for sensitive files, including wallets, keystores, .env files, GitHub tokens, npm tokens, and SSH keys.
The attack affected hundreds of users and organizations, with more than 5,500 private repositories made public using leaked credentials. Evidence suggests the attackers used AI to generate the malicious script, an indication that LLMs are increasingly being used in supply chain attacks.
Earlier this year, attackers compromised a legitimate publisher’s account on the Open VSX Registry and pushed malicious updates to four established VS Code extensions with thousands of downloads, delivering the GlassWorm malware.
Once on the developer’s machine, GlassWorm stole npm, GitHub, and Git tokens along with other credentials.
GitHub Actions compromises showed how attackers target CI/CD infrastructure directly. Malicious Actions were uploaded to the GitHub Marketplace, designed to steal secrets from workflows.
With the widespread adoption of AI assistants like GitHub Copilot, attackers began looking for new angles:
Prompt injection.
Malicious code suggestions.
Exfiltration of proprietary code via chat interfaces.
Abuse of Model Context Protocol (MCP) integrations.
Most organizations still don’t have formal policies governing how AI tools access code or handle sensitive data, leaving developers to make these decisions individually.
In the OWASP Top 10:2025, software supply chain failures rank among the most critical risk areas, and developer workstations are explicitly included in the supply chain.
Under this new framework, the supply chain includes dependencies and third-party libraries, build tools and CI/CD systems, developer environments, deployment infrastructure, and even the operational environment. Workstations are now officially part of an organization’s attack surface.
It also has compliance implications. Frameworks such as the NIS2 Directive and the Digital Operational Resilience Act (DORA) increasingly expect organizations to demonstrate controls across the full development lifecycle.
Developers are now subject to a range of attacks that traditional security awareness training wasn’t designed for.
Typical awareness programs focus on phishing emails, password hygiene, and suspicious attachments. However, developer-targeted attacks don’t look like that because they mimic everyday workflow events.
Attackers send things such as:
Security alerts about repositories.
Vulnerability notifications from package managers.
Pull requests that appear legitimate.
Fake automated updates that resemble tools like Dependabot.
These attacks succeed because they target developer workflows specifically and exploit the urgency of security notifications.
Malicious dependencies are another major attack category that developers need to watch out for.
Common techniques include:
Typosquatting (e.g., “reactt” instead of “react”).
Dependency confusion between public and private registries.
Compromised maintainer accounts.
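Typosquat detection can be automated to a first approximation by comparing requested package names against a list of popular packages. The sketch below, using Python's standard-library `difflib`, flags names that are nearly, but not exactly, a trusted name; the `POPULAR` set and the 0.85 threshold are illustrative assumptions, not values from any real tool.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of popular packages a project trusts.
POPULAR = {"react", "lodash", "express", "axios"}

def looks_like_typosquat(name: str, threshold: float = 0.85) -> bool:
    """Flag names that closely resemble, but do not match, a popular package."""
    if name in POPULAR:
        return False  # exact match to a known package is fine
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR
    )

print(looks_like_typosquat("reactt"))  # True: one letter off "react"
print(looks_like_typosquat("react"))   # False: exact match
```

Real-world scanners combine this kind of similarity check with download counts, maintainer history, and install-script analysis, since edit distance alone produces false positives for legitimately similar names.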
Compromised developer tools, including IDE extensions, browser plugins, themes, and even CDN-hosted tool updates, can provide persistent access to a machine. Once installed, they can:
Monitor keystrokes.
Access workspace files.
Intercept network traffic.
Extract environment variables.
Attackers study developer workflows and design attacks that feel native to them. Without targeted training that reflects these realities, developers are left to make high-risk security decisions inside environments that were never designed with zero trust in mind.
To secure developer workstations, start with the basic measures:
Use multi-factor authentication across development platforms.
Enforce signed commits.
Implement least-privilege repository and cloud access.
Deploy endpoint detection on developer machines.
Scan for secrets in commits and CI pipelines.
Formalize policies around AI coding assistant usage.
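Secret scanning in commits and CI can be as simple as pattern-matching known credential formats. The following minimal Python sketch checks text (such as a commit diff) against a few well-known token patterns; the rule set here is a small illustrative sample, whereas dedicated scanners such as gitleaks ship far more comprehensive rules.

```python
import re

# A few well-known credential formats (illustrative, not exhaustive).
SECRET_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

diff = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'  # AWS's documented example key ID
print(scan_for_secrets(diff))  # ['aws_access_key']
```

Running a check like this as a pre-commit hook and again in the CI pipeline catches secrets before they land in history, where removing them is far harder than preventing them.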
Then go a step further: review dependency install scripts instead of just running them, apply least privilege to individual developer accounts even when they technically “need” broad access, and separate experimentation environments from the primary development machine when testing untrusted code.
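Reviewing install scripts is easier with a little tooling. npm runs certain lifecycle hooks automatically during installation, and a malicious package can use any of them to execute arbitrary code. This sketch pulls those hooks out of a package.json document for manual review; the hook list reflects npm's documented lifecycle scripts, and the sample manifest is invented for illustration.

```python
import json

# npm lifecycle hooks that run automatically on install; any of them
# can be abused by a malicious package to execute arbitrary code.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_scripts(package_json: str) -> dict[str, str]:
    """Return automatically-executed lifecycle scripts declared in a
    package.json document, so they can be reviewed by hand."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

manifest = '{"name": "example", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
print(risky_install_scripts(manifest))  # {'postinstall': 'node setup.js'}
```

As a blunter defense, `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`) disables these hooks entirely, at the cost of breaking the handful of packages that legitimately need them.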
It’s easy to just accept what AI coding tools generate, but it’s better to slow down before accepting suggestions and take the time to understand what data is being shared and where it’s processed.
When it comes to workstation security and supply chain awareness, developers need a security-first mindset and practical experience that reflects how attacks occur, including within their workflows, tools, and pipelines.
SecureFlag’s hands-on labs mirror real-world scenarios in virtualized development environments, including topics such as:
Malicious dependencies.
Secrets exposure.
CI/CD pipeline abuse.
AI-assisted code risks.
Authentication and access practices.
Training developers in both secure coding and developer-environment security helps organizations build broader awareness throughout the development lifecycle.
When engineers understand how attackers operate and how their own workstations can be targeted, they make more informed decisions long before code reaches production. That’s how supply chain risk is reduced at the source.