Tuesday, March 17, 2026

Cybersecurity's Original Sin: Why the Industry Has Failed to Fix Its Own Known Problems — and Who Benefits from the Failure

 

Executive Summary

In 2001, the SANS Institute published a landmark list of the ten most common cybersecurity mistakes made by individuals and organizations. Twenty-five years later, nine of those ten problems remain largely unsolved. This is not an accident, a technical mystery, or a resource problem. It is the predictable outcome of three interlocking failures: a broken economic incentive structure that shifts the cost of insecurity onto the victims; a legal framework built around End User License Agreements (EULAs) that systematically shield vendors from liability for the consequences of those failures; and an industry that has found it more profitable to treat symptoms than to cure diseases.

Here I examine each of the SANS Top 10 mistakes in light of these structural forces, trace the deliberate legal strategy that the software industry adopted in the late 1990s and early 2000s to disclaim responsibility for insecure products, and argue that the only path to genuine improvement runs through liability reform, regulatory mandate, and a fundamental shift in who pays when software fails.

 

1. The “SANS Top 10 Mistakes Individuals Make”: A 25-Year Audit

The table below maps each of the original SANS 2001 mistakes to its current status and to the specific EULA or contractual defense that the software industry deployed to insulate itself from accountability for that category of failure.

[Table: the ten SANS 2001 mistakes, each mapped to its current status and to the EULA or contractual defense used to disclaim it. Not reproduced in this text version; Section 2.2 walks through the same mapping in prose.]

Of the ten systemic problems identified a quarter-century ago by one of the most respected cybersecurity education bodies in the world, none has been comprehensively fixed, only two show partial improvement, and both partial improvements carry caveats. Default credential hardening was driven not by the industry’s conscience but by regulatory pressure in specific verticals and by the reputational damage caused by IoT botnets that made headlines. Backup adoption improved primarily because cloud vendors found a profitable subscription model to attach to the solution. In both cases, the fix arrived when it became economically advantageous, not before.

 

2. The EULA as a Liability Shield: A Legal Strategy, Not a Legal Inevitability

To understand why these problems persisted, let’s examine the legal strategy that the software industry consciously built in the 1990s. As software became infrastructure, running hospitals, banks, power grids, and government systems, the industry faced a fork in the road. It could accept that infrastructure-grade software carried infrastructure-grade liability, as manufacturers of aircraft, pharmaceuticals, and automobiles do, or it could construct a legal argument that software was categorically different and immune from ordinary product liability doctrine.

It chose the latter, and it did so deliberately.

 “As-Is” — The Core Legal Fiction

The phrase that appears in virtually every EULA, in some form, is: "THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED." This single clause, replicated across hundreds of thousands of software products, effectively disclaims liability for every item on the SANS Top 10. The vendor ships unpatched code: no warranty. The vendor enables unnecessary services by default: no warranty. The vendor’s product is exploited via a known vulnerability: no warranty. The user clicks a malicious attachment that the vendor’s own email client failed to flag: user error.


2.1 How the Strategy Worked in Practice

The legal argument rested on several pillars, each of which was constructed and defended by industry lobbying and sustained by courts that were reluctant to impose product liability on an industry they did not fully understand:

Pillar 1: Software is speech, not a product; therefore product liability doctrine should not apply.

Pillar 2: Software is infinitely complex; defects are inherent and unknowable; therefore no warranty of fitness is commercially reasonable.

Pillar 3: Users accept the EULA before use; therefore any risk that is not patched after installation is assumed by the user.

Pillar 4: Vendor patches are advisory, not mandatory; therefore a breach caused by an unpatched system is the customer’s fault, not the vendor’s.

 

The combined effect was a complete inversion of the normal manufacturer-consumer liability relationship. A car manufacturer who knowingly ships a car with faulty brakes faces recall obligations, regulatory sanction, and civil liability. A software vendor who knowingly ships code with buffer overflow vulnerabilities, a class of defect that has been fully understood since the 1970s, faces no equivalent obligation. The EULA said so. Courts largely agreed from 1995 to 2020.

2.2 Mapping the EULA Defense to the SANS 10

The following analysis shows how each of the SANS 10 mistakes was directly addressed by the software industry’s legal strategy, not by fixing the underlying problem, but by contractually assigning the consequence to someone else.

Mistakes 1 & 5: Default Passwords and Excessive Privilege

Vendors shipped products with default credentials (admin/admin, root/root, guest/guest) and with services running at maximum privilege levels because it reduced support calls and accelerated deployment. When those defaults were exploited, the EULA defense was clean: the vendor had provided a working product; the customer had failed to harden it. The customer accepted the risk by accepting the license. This argument was so embedded in industry culture that regulatory language mandating unique default credentials did not begin appearing until the mid-2010s in specific sectors, and did not reach general software until the UK’s Product Security and Telecommunications Infrastructure Act in 2022, more than two decades after SANS identified the problem.
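To make the hardening gap concrete, here is a minimal sketch, in Python, of the first-boot check that vendors declined to ship for two decades: refuse to enter service while factory-default credentials are in place. The credential list and function names are illustrative, not drawn from any real product.

```python
# Illustrative sketch: refuse to start while factory-default credentials
# remain set. The list below is a tiny sample of the kind of well-known
# default pairs (admin/admin, root/root, ...) that IoT botnets exploited.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("guest", "guest"),
    ("admin", "password"),
}

def is_factory_default(username: str, password: str) -> bool:
    """True if the configured pair matches a well-known factory default."""
    return (username.lower(), password.lower()) in KNOWN_DEFAULTS

if __name__ == "__main__":
    # A secure-by-default device would halt here until the credential changes.
    if is_factory_default("admin", "admin"):
        raise SystemExit("Refusing to start: factory-default credentials in use.")
```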

Mistake 2: Unpatched Software

The unpatched software problem is where the EULA defense is most nakedly visible. Vendors issue security advisories, which are legally non-binding recommendations. The EULA disclaims any liability for harm caused by vulnerabilities that are disclosed but not yet patched by the customer. This creates a remarkable legal structure: the vendor acknowledges a defect exists, declines to make the patching mandatory, and successfully argues in court that the resulting breach is the customer’s fault for not acting on the advisory quickly enough. The Equifax breach of 2017, which exposed 147 million people’s sensitive data, was caused by an unpatched Apache Struts vulnerability for which a patch had been available for two months. Equifax paid a settlement, but no software vendor faced any liability whatsoever.

Mistakes 3 & 4: Open Ports and Lack of Auditing

Vendors ship operating systems and applications with services enabled by default, listening on open ports, because doing so is more convenient for the general user. The cost of a dramatically expanded attack surface is borne by the customer. Similarly, robust logging and auditing capabilities are routinely sold as premium features or enterprise add-ons rather than included as baseline security controls. In both cases, the EULA assigns the risk of misconfiguration and of inadequate visibility entirely to the buyer. The vendor has no obligation to ship a secure default configuration, only a functional one.
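As a concrete illustration of the exposure the EULA assigns to the buyer, the sketch below probes a host for a handful of commonly abused listening ports. It is a toy, standard-library-only version of the audit the SANS list expected customers to perform themselves; the port list is illustrative, and real scans require dedicated tooling and authorization.

```python
# Minimal exposure audit: try to connect to commonly abused TCP ports on a
# host and report which ones accept connections. Illustrative only.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
                135: "msrpc", 139: "netbios", 445: "smb", 3389: "rdp"}

def open_ports(host: str, timeout: float = 0.5) -> list[int]:
    found = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    for port in open_ports("127.0.0.1"):
        print(f"{port}/tcp open ({COMMON_PORTS[port]})")
```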

Mistake 7: Email Attachments and Social Engineering

This is perhaps the most cynical application of the EULA defense. Vendors classify social engineering attacks, such as phishing, malicious attachments, and fraudulent links, as “user error” rather than as failures of the products that should have detected or blocked them. This categorization is enormously convenient for vendors of email clients, antivirus software, and endpoint protection platforms, because it means that a breach caused by a sophisticated phishing email is never the vendor’s liability, even if the vendor’s product was specifically marketed and sold to prevent exactly that attack. The product’s failure becomes the user’s fault by contractual definition.

Mistakes 8, 9 & 10: Policy, Backups, Physical Security

These three items were explicitly and entirely relegated to customer responsibility via the EULA. Vendors had no obligation to provide policy templates, backup functionality, or guidance on physical access controls. That these items appear on a list of mistakes individuals make, rather than on a list of mistakes vendors make by failing to provide them, reflects how completely the industry had succeeded in framing security as a user behavior problem rather than a product design problem.

 

3. The Economic Model of Insecurity

The EULA shield did not merely protect vendors from lawsuits. It allowed an entire secondary industry to be built on top of the resulting insecurity. Understanding this economy is essential to understanding why the SANS 10 problems have been so durable.

3.1 The Insecurity Industrial Complex

By the mid-2000s, the cybersecurity industry had grown into a multi-billion-dollar ecosystem whose revenue was structurally dependent on the persistence of the problems identified in the SANS 2001 list. Every unpatched system is a market for a patch management platform. Every misconfigured firewall is a market for a configuration auditing tool. Every phishing attack is a market for security awareness training. Every breach is a market for incident response consulting, forensic analysis, cyber insurance, and remediation services.

The Core Perverse Incentive

The cybersecurity industry is more analogous to the pharmaceutical industry, which profits from managing chronic conditions, than to the sanitation industry, which profits from eliminating disease vectors. A software vendor that ships memory-safe code by default eliminates the market for a class of endpoint detection tools. A platform that enforces unique passwords at the system level eliminates the market for enterprise password management solutions. The industry therefore has a structural incentive to treat symptoms, not causes.


3.2 Security as a Premium Feature

This incentive model expanded the practice of selling security capabilities as premium add-ons to an insecure base product. For example, advanced logging, privileged access management, multi-factor authentication enforcement, and threat detection have historically been available only in enterprise-tier licenses, priced beyond the reach of small and mid-size organizations. This means that the organizations least able to absorb a breach are precisely the ones denied access to the tools that could prevent it, not because those tools are technically complex to provide, but because they are more profitable when segmented as premium features.

The SANS mistake of lacking auditing and logging (Mistake 4) persists not because logging is technically difficult, but because comprehensive logging has been deliberately withheld from base-tier products and sold separately. Microsoft’s decision in 2023 to expand default logging access, taken only after criticism from CISA and Congress when the Storm-0558 breach exposed that basic log data was unavailable to affected organizations, showed that only external pressure forced the feature out of the premium tier.

3.3 The Compliance Illusion

The rise of compliance frameworks (PCI-DSS, HIPAA, SOX, ISO 27001, SOC 2) created what appeared to be accountability but in practice produced a substitute for security. Organizations could satisfy a compliance audit by demonstrating the presence of certain controls without those controls being effective. A web application firewall counts toward PCI compliance whether or not the underlying application has SQL injection vulnerabilities. An annual security awareness training tick-box satisfies HIPAA whether or not employees’ behavior changes.

Compliance became the goal, and security became incidental. Vendors benefited from this substitution because compliance requirements created mandatory procurement events: organizations had to buy something to satisfy the auditor, with no requirement that the purchased products actually solve the underlying problem. The SANS 10 mistakes survived twenty-five years in part because compliance frameworks never required that they be fixed, only that a compensating control be installed.

 

 

4. The One Problem That Did Get Fixed — and Why

The partial progress on default credentials (SANS Mistake 1) is instructive precisely because it is the exception. Understanding why this one item moved while the others did not reveals the mechanism that would be required to fix the rest of the items in the list.

Default credential exploitation became impossible for the industry to disclaim for three reasons. First, IoT botnets, with the 2016 Mirai botnet as the canonical example, caused widespread, visible, and publicly attributable damage that could not be narratively reassigned to user error. The victims were not sophisticated enterprises who had “failed to patch”; they were home users who had plugged in a device and done exactly what the vendor intended. The EULA defense collapsed under the weight of that narrative. Second, regulatory action followed: California’s SB-327 (2018), the UK PSTI Act, and NIST guidance created binding obligations that the EULA could not override. Third, the reputational damage to specific vendor categories, namely IP camera manufacturers and home router vendors, was direct and sustained.

The lesson is clear: the EULA shield holds until the cost of the insecurity can no longer be externalized. When breaches become politically and reputationally undeniable, when regulatory liability attaches, and when the victim class is sympathetic and visible, the industry fixes the problem. The SANS 10 items that remain unfixed are those where the cost is still successfully shifted onto enterprises, SMBs, and individuals who lack the political leverage to force accountability.

 

5. When Software Kills: Operational Technology and the End of the EULA Shield

Every argument surveyed so far (the EULA shield, the compliance illusion, the perverse economics of the insecurity industry) rests on a single enabling condition: that the harm caused by insecure software can be successfully shifted to the user. Data breaches are expensive and embarrassing, but their victims are diffuse, their damages are monetized at pennies per record, and their consequences fall on consumers and enterprises rather than on vendors. The entire legal and economic architecture of the past twenty-five years was built to exploit this condition.

Operational Technology (OT) changes that condition fundamentally. When insecure software controls a water treatment plant, a power grid, a hospital's life support systems, or an oil pipeline, the harm it enables is no longer a data record on a dark web marketplace. It is a city without power, a patient who cannot receive surgery, a chemical plant that explodes, or drinking water laced with caustic lye. These are physical harms to people and property, not breach notification letters. Physical harm is very difficult to disclaim in a EULA.

5.1 What OT Is and Why It Is Different

Operational Technology refers to the hardware and software systems that monitor and control physical processes in the real world: industrial control systems (ICS), supervisory control and data acquisition systems (SCADA), programmable logic controllers (PLCs), distributed control systems (DCS), and the human-machine interfaces (HMIs) that operators use to manage them. These systems run power generation and distribution, water and wastewater treatment, oil and gas pipelines, chemical manufacturing, railway signaling, hospital equipment, building management, and nuclear facilities.

OT systems differ from conventional IT in ways that make the standard cybersecurity toolkit (patch quickly, isolate the compromised host, restore from backup) either impractical or dangerous. IT systems have a lifetime measured in years; OT systems have a lifetime measured in decades. Many OT systems run continuously; a patch window that requires a twenty-minute shutdown may mean a production halt that costs millions. Legacy PLCs and SCADA components may run unpatched operating systems not because operators are negligent but because the vendor discontinued support and replacement requires re-engineering an entire physical production line. Some OT systems are safety-critical: patching incorrectly, or at the wrong moment in a process cycle, can itself cause the kind of physical accident the system was designed to prevent. And critically, the consequence of a failure is not measured in lost records but in physical damage, environmental harm, and human lives.

 

5.2 The Attack Record: From Proof of Concept to Mass Casualty Risk

The history of OT cyberattacks is a compressed, accelerating timeline from theoretical concern to near-catastrophe. The following cases are not hypotheticals from a threat model; they are documented incidents that reveal exactly what the EULA shield has permitted to persist in systems where the stakes are human life.

Stuxnet (Discovered 2010): The Proof That Physics Could Be Hacked

Stuxnet was the event that forced the world to take OT security seriously. A sophisticated worm, widely attributed to US and Israeli intelligence services, was introduced into Iran's Natanz uranium enrichment facility via infected USB drives, exploiting SANS Mistake 7 (unknown media/attachments), in an environment that was physically air-gapped from the internet. Once inside, it targeted Siemens S7-315 and S7-417 PLCs controlling centrifuge motor speeds, causing approximately 900 centrifuges to spin at destructive speeds while simultaneously reporting normal operation to the operators monitoring the facility.

The physical consequence was unambiguous: roughly 20% of Iran's uranium enrichment capacity was destroyed, not metaphorically, but as shattered physical machinery. Stuxnet demonstrated three things that the industry could no longer deny: that software running OT systems could be weaponized to cause deliberate physical destruction, that air gaps were not the security guarantee they appeared to be, and that the attacker did not need to be present or even known to the victim to cause large-scale physical harm.

Ukraine Power Grid Attacks (2015 & 2016): Lights Out as a Cyberweapon

In December 2015, malware from the BlackEnergy family was used to attack three Ukrainian regional electricity distribution companies, cutting power to approximately 230,000 customers in the middle of a Ukrainian winter. The attackers had spent months conducting reconnaissance inside the utilities' IT networks, having gained entry via spear-phishing emails exploiting SANS Mistake 7, before pivoting to OT networks and issuing commands to open breakers at multiple substations simultaneously. They then overwrote firmware on serial-to-Ethernet converters to prevent remote recovery, and launched a telephony denial-of-service attack against the utilities' customer service lines to prevent customers from reporting the outage.

A year later, in December 2016, a second attack using more sophisticated malware named Industroyer targeted a transmission substation serving Kiev. This attack was designed not just to cut power but to permanently damage physical substation equipment through rapid, repeated switching cycles: an attempt at long-term physical destruction, not just temporary disruption. Had it succeeded more fully, restoration would have required physical replacement of high-voltage equipment with months-long lead times.

TRITON / TRISIS (2017): Targeting the Last Line of Defense

The most alarming OT attack on record was not one that caused a disaster — it was one that tried to cause a disaster by disabling the systems specifically designed to prevent one. In 2017, attackers, subsequently attributed to a Russian state research institute, compromised the Safety Instrumented System (SIS) at a petrochemical facility in Saudi Arabia. SIS controllers are the last line of defense in industrial facilities: they monitor for dangerous process conditions and automatically shut down or trip equipment when readings exceed safe thresholds. They exist specifically to prevent explosions, chemical releases, and other catastrophic physical events.

TRITON was designed to reprogram the Triconex SIS controllers to either disable safety shutdowns entirely or cause them to fail in dangerous ways. The attack was only discovered because a programming error in the malware accidentally triggered a safety shutdown, alerting operators to the intrusion. Had the malware functioned as intended, subsequent process failures would have had no automatic safety backstop. The intended consequence was a large-scale explosion or toxic chemical release at an operational industrial facility: a potential mass casualty event deliberately engineered through software compromise.

No EULA clause covers this scenario. No compliance framework tick-box adequately addresses it. The 'software is provided as-is' disclaimer is not a legal defense against the deliberate engineering of an explosion at a petrochemical plant.

Colonial Pipeline (2021): Infrastructure Held Hostage

The Colonial Pipeline ransomware attack is the case study that brought OT cybersecurity into mainstream political consciousness. On 7 May 2021, the DarkSide ransomware group compromised Colonial Pipeline's IT network using a leaked employee password found on the dark web: SANS Mistake 1 (weak/reused credentials) combined with Mistake 3 (an unmonitored legacy VPN account that had not been decommissioned). The company shut down the pipeline, which supplies approximately 45% of the fuel consumed on the US East Coast, not because the OT systems were themselves compromised, but as a precaution to prevent the ransomware spreading from IT to OT.

The physical and economic consequences were immediate and national in scale. Fuel shortages spread across the Southeast. Panic buying caused station outages in multiple states. The US government declared a state of emergency. The price of gasoline spiked. Colonial Pipeline paid a $4.4 million ransom. President Biden addressed the situation publicly and called for new cybersecurity standards for critical infrastructure. The TSA subsequently issued binding security directives for pipeline operators, the first mandatory regulatory requirements in the sector's history.

The Colonial Pipeline attack is the canonical illustration of the central thesis of this paper: the EULA shield held right up until the consequences became impossible to externalize. When a data breach at an insurance company exposes medical records, the victims are dispersed and the narrative can be managed. When half the US East Coast cannot buy petrol and the President holds a press conference, the narrative management is over.

Oldsmar Water Treatment Facility (2021): Poison in the Tap

On 5 February 2021, an operator at the Oldsmar, Florida water treatment plant watched his mouse cursor move across the screen without his input and navigate to the controls for sodium hydroxide (lye) concentration in the water supply, raising the setpoint from 100 parts per million to 11,100 parts per million, 111 times the normal level. The operator immediately reversed the change and notified supervisors. Subsequent investigation found that the facility's SCADA systems were running an end-of-life operating system, that all operator computers shared a single password for remote access, and that the facility had no firewall protecting its internet-connected control systems.

Every single one of these conditions (the unpatched operating system, the shared weak password, the absent firewall, the unmonitored remote access) appears on the SANS Top 10 from 2001. The facility that nearly poisoned 15,000 people's drinking water was running the exact configuration that the cybersecurity industry had documented as dangerous a full two decades earlier. The investigation raised questions about whether this was a genuine external attack or an internal error, but either conclusion is damning. If it was an attack, the facility's defenses were wholly inadequate. If it was an insider error, the facility had no controls preventing an individual from making a catastrophic configuration change undetected.
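A hypothetical sketch of the kind of server-side control Oldsmar lacked: validate every setpoint change against engineering limits and a maximum step size, regardless of who, or what, is moving the mouse. All names and limits below are invented for illustration, not taken from any real SCADA product or water-treatment specification.

```python
# Hypothetical setpoint guard: enforce engineering limits and a maximum
# step size on every change, independent of the operator session.
from dataclasses import dataclass

@dataclass
class SetpointLimit:
    low: float       # lowest safe value, in engineering units
    high: float      # highest safe value
    max_step: float  # largest permitted single change

NAOH_PPM = SetpointLimit(low=50.0, high=150.0, max_step=10.0)  # illustrative

def validate_setpoint(current: float, proposed: float, limit: SetpointLimit) -> None:
    """Raise ValueError if the proposed change violates bounds or step size."""
    if not (limit.low <= proposed <= limit.high):
        raise ValueError(f"{proposed} outside safe range [{limit.low}, {limit.high}]")
    if abs(proposed - current) > limit.max_step:
        raise ValueError(f"step of {abs(proposed - current)} exceeds {limit.max_step}")

# Replaying the Oldsmar change (100 ppm -> 11,100 ppm) against the guard:
try:
    validate_setpoint(current=100.0, proposed=11_100.0, limit=NAOH_PPM)
except ValueError as err:
    print("change rejected:", err)
```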

 

5.3 The OT Attack Escalation Curve

These cases are not historical curiosities. Since 2020, researchers have recorded nearly 100 publicly known cyber incidents with direct physical consequences to OT operations. Sadly, this figure represents only the incidents that became public. The threat landscape has broadened from sophisticated state-sponsored operations to ransomware-as-a-service operators who breach OT environments as a side effect of IT network compromises, and to hacktivist groups that specifically target industrial infrastructure as a form of political disruption.

The IT/OT convergence that accelerated through the 2010s has created a structural vulnerability: OT systems designed for isolation are now reachable via the same IT network pathways that carry email, remote desktop access, and VPN connections. Attackers do not need to understand industrial control protocols to cause physical damage. They need only to compromise the IT network and then shut down, encrypt, or corrupt the systems that operators rely on to monitor and control physical processes.

5.4 OT as the Forcing Function: Why This Changes Everything

The reason OT attacks matter so profoundly to the question of software liability is not primarily technical. It is legal and political. Every structural argument that allowed the software industry to avoid liability for twenty-five years breaks down when applied to OT environments.

       The 'no physical harm' defense collapses. EULA clauses disclaim liability for 'loss of data' and 'business interruption'. Courts are increasingly finding that such clauses cannot disclaim liability for bodily injury, death, or large-scale physical destruction caused by foreseeable product failures.

       The 'user error' reclassification fails. When a water utility operator follows the vendor's documented procedure for remote access and an attacker exploits a default credential that the vendor chose not to change, the vendor cannot credibly characterize the resulting harm as user error. The user did what the product was designed to let them do.

       The victim class becomes sympathetic and visible. The legal and political economy of liability reform requires sympathetic, identifiable victims. 15,000 people potentially poisoned by their drinking water, or 230,000 people losing power in winter, or fuel shortages causing national panic — these are victim classes that generate congressional hearings, presidential statements, and binding regulatory directives in ways that dispersed data breach victims do not.

       The causal chain becomes legally traceable. In a data breach, establishing that a specific vendor's product defect caused a specific plaintiff's specific harm is difficult. In a SCADA attack, the causal chain from unpatched PLC firmware to physical damage is often direct, documented, and incontrovertible.

 

5.5 The Regulatory Response: OT as the Catalyst for Software-as-Product

The policy response to OT attacks is already beginning to do what twenty-five years of data breach advocacy could not: force software to be treated as a product with attendant safety obligations rather than as a service provided 'as-is' with no warranty.

The EU's Cyber Resilience Act, which entered into force on 10 December 2024 and whose main obligations apply from 11 December 2027, is the most significant development in this space. For the first time, it imposes mandatory cybersecurity requirements on manufacturers of products with digital elements as a condition of market access, not as a voluntary framework or a compliance checkbox, but as a legal prerequisite to selling in the world's largest single market. Manufacturers must ship products in secure-by-default configurations, maintain vulnerability handling processes, provide security updates throughout the product lifecycle, and cannot disclaim these obligations via EULA. Penalties reach up to €15 million or 2.5% of global annual turnover.

Critically, the CRA explicitly acknowledges that the pre-existing legal framework — which covered physical product safety but not cybersecurity — was inadequate precisely because software vulnerabilities in connected products create physical safety risks. The regulation treats software security as a product safety issue, not a customer configuration problem. This is the legal foundation that the industry argued for twenty-five years did not exist.

In the United States, the Colonial Pipeline attack triggered the first binding TSA security directives for pipeline operators, ending a decade of voluntary guidelines that the industry had largely ignored. The Biden Administration's National Cybersecurity Strategy explicitly called for shifting liability to software vendors. CISA's Secure by Design initiative, while not yet backed by legislation, signals the direction of federal regulatory thinking. The Oldsmar incident directly accelerated EPA and White House action on water sector cybersecurity mandates.

The Trajectory Is Clear

OT attacks are doing in two to three years what twenty-five years of IT data breaches could not: creating the political and legal conditions for software product liability. When software failures kill people or shut down national infrastructure, the 'as-is, no warranty' defense becomes politically untenable and legally vulnerable. The CRA is the first binding operationalization of this shift. It will not be the last.


5.6 What OT Liability Means for the Software Industry's Business Model

If software vendors are required to stand behind the security of their products as a condition of market access, as the CRA now mandates for the EU market, the implications for the industry’s existing business model are profound and largely irreversible. It is tempting to treat this shift as narrowly applicable to OT: to pipeline controllers, SCADA systems, and industrial PLCs where the physical consequences of failure are undeniable. That reading would be a serious mistake. The liability principles that OT attacks have forced into the open do not stop at the OT boundary. They apply, with equal logical force, to every category of software and IT system (enterprise applications, cloud platforms, operating systems, consumer software) and the IT networks that connect them all. OT did not create a new legal theory; it destroyed the industry’s last credible argument for why the existing theory should not apply.

Consider what the “no physical harm” carve-out actually required the industry to argue: that software controlling a hospital’s billing system, an airline’s reservation platform, a bank’s transaction processing engine, or a government’s benefits distribution network occupies a categorically different liability universe than software controlling a centrifuge or a pipeline valve. This was always a legally and morally incoherent position. A ransomware attack that shuts down a hospital’s IT network, delaying surgeries, disabling medication dispensing systems, and forcing emergency patients to be diverted, causes patient deaths just as surely as a compromised OT safety system. The 2020 ransomware attack on Düsseldorf University Hospital, which forced the diversion of an emergency patient who subsequently died, illustrates the point with fatal clarity. The harm was not caused by compromised OT. It was caused by insecure IT, the very failure class the SANS Institute documented in 2001.

The implications for the software industry’s business model therefore extend across its entire product portfolio, not only its industrial control offerings. First, ‘secure by default’ becomes a universal legal obligation, not a feature tier or a marketing differentiator. A vendor cannot ship any software, whether a SCADA platform, an enterprise resource planning system, a cloud-hosted collaboration tool, or a consumer application, with default credentials, unnecessary open services, and no patching mechanism, and then disclaim liability when it is exploited. SANS Mistakes 1, 2, 3, and 5 become potential bases for regulatory action and civil liability across all product categories, not merely in industrial environments.

Second, the security premium tier model becomes legally precarious across the entire software market. If comprehensive logging (SANS Mistake 4) is a safety-critical feature in an OT environment, it is equally a safety-critical feature in a hospital’s electronic health record system, a financial institution’s trading platform, and a government agency’s identity management infrastructure. The practice of selling visibility, access controls, and audit capabilities as enterprise-tier add-ons while deploying base-tier products in environments where a breach causes public harm is not a business model that can survive serious product liability scrutiny in OT or IT. For example, Microsoft’s grudging expansion of default logging access in 2023, after Congressional pressure following the Storm-0558 breach, demonstrates that this argument is already winning in the IT domain. OT simply made the stakes undeniable.

Third, the lifecycle support obligation expands dramatically and universally. Under the CRA, manufacturers must provide security updates for the expected lifetime of any product with digital elements, not only industrial equipment. For OT systems that may be in service for fifteen to thirty years, this is a well-understood engineering challenge. But the same principle applies to the enterprise software that runs on servers for a decade past its vendor’s stated support window, the embedded software in medical devices certified for a twenty-year deployment cycle, and the IT infrastructure components that organizations cannot replace on the vendor’s preferred upgrade cadence. “End of support” cannot remain a vendor’s mechanism for transferring accumulated security debt to its customers. That practice created the legacy vulnerability crisis in both OT and IT environments, and it is no longer defensible in either.

Fourth, and most consequentially, the ‘software is speech, not a product’ legal fiction becomes impossible to sustain across the entire software industry once it has collapsed at the OT boundary. The intellectual argument that software should be exempt from product liability doctrine was always weakest where the connection between software state and real-world consequence was most direct. OT made that connection undeniable. But the same connection exists, one or two causal steps further removed, in virtually every category of software deployed in critical systems. The ransomware that encrypts a hospital’s patient records does not directly control a ventilator, but it prevents the staff who operate that ventilator from accessing the patient’s medication history, allergy records, and treatment protocols. The distinction between “software that causes physical harm directly” and “software that causes physical harm indirectly through IT failure” is not a distinction that courts, regulators, or juries are likely to treat as dispositive once they have accepted the underlying liability principle. OT broke the seal. IT will not remain exempt.

The software industry will resist these changes with the same lobbying intensity it deployed to construct the EULA shield in the first place. It will argue that OT is a special case, that IT systems are too complex for liability to attach, that innovation will be chilled, and that the market should be allowed to self-correct. These are the same arguments it has made for thirty years while the SANS Top 10 remained unfixed. The outcome is not predetermined. But the direction of the argument has shifted, and the shift was driven not by the cybersecurity community’s advocacy but, in the OT world, by centrifuges spinning themselves to destruction, power grids going dark in winter, and pipelines shutting down the Eastern Seaboard. The lesson for IT is not that its problems are different. It is that IT has not yet produced a catastrophe visible enough to force the same reckoning. When it does, the liability framework that OT built will be waiting.

6. The Path Forward: Making the Broken Economics Whole

If these problems persist because the economic and legal incentive structure makes persistence more profitable than resolution, then the solution set is clear, even if politically difficult.

6.1 Software Product Liability Reform

The most structurally significant change would be to make software vendors liable, in law, for foreseeable harms caused by known vulnerability classes that were not mitigated by design. This does not require perfection; it requires that vendors be held to the same standard of care as manufacturers in other safety-critical industries. A vendor who ships a product with SQL injection vulnerabilities in 2026 is not making an unknowable mistake; they are shipping a product with a defect class that has been fully documented, freely preventable by design, and actively exploited for thirty years. The “software is too complex” defense is no longer credible.
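To show how fully preventable by design this defect class is, here is a minimal, self-contained demonstration using Python's built-in sqlite3 module; the table and input are invented for illustration. The vulnerable and safe versions differ by a single line.

```python
# SQL injection, and its prevention by design, in one self-contained script.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()
print("concatenated query returned", len(rows), "row(s)")   # 1: injection worked

# Safe by design: a parameterized query treats the input as data, never code.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
print("parameterized query returned", len(rows), "row(s)")  # 0: no such user
```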

The Biden Administration’s National Cybersecurity Strategy (2023) explicitly endorsed shifting liability to software vendors, stating that “liability should be placed on the entities that fail to take reasonable precautions to secure their software.” The EU’s Cyber Resilience Act (2024) begins to operationalize this for products sold in the European market. The direction of travel is clear; the pace is the variable.

6.2 Secure-by-Default as a Regulatory Mandate

For items on the SANS 10 that relate to default configurations (open ports, excessive privilege, default credentials), regulatory mandates for secure-by-default design are the direct remedy. This approach has precedent: automobile safety standards, pharmaceutical manufacturing standards, and aviation design standards all impose mandatory baseline safety requirements on products regardless of what liability waivers the manufacturer might prefer. There is no principled reason software should be exempt.
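A minimal sketch of what such a mandate means in practice for one of these items: generate a unique, cryptographically random credential per device at provisioning time instead of shipping a shared factory default. The names here are illustrative, not a real product's provisioning flow.

```python
# Illustrative provisioning step: a unique, random password per device
# instead of a shared factory default. Real products would print this on
# the device label or force a change at first login.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device_password(length: int = 16) -> str:
    """Return a cryptographically random, per-device default password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print("device password:", provision_device_password())
```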

6.3 Ending the Security Premium Tier

Baseline security capabilities (comprehensive logging, multi-factor authentication, privilege management, audit trails) should be included in base-tier products for any software deployed in critical infrastructure, healthcare, financial services, or government. The practice of selling visibility and access controls as premium features in sectors where a breach causes public harm is not a business model that should be subsidized by the victims of that harm.

6.4 Mandatory Breach Disclosure with Root Cause Attribution

Current breach disclosure requirements, where they exist, do not typically require vendors to disclose whether a breach resulted from a known, unpatched vulnerability class in their own product. Requiring root-cause attribution in mandatory disclosures would create market pressure on vendors whose products repeatedly appear as the enabling condition for breach incidents. Transparency alone does not fix incentive structures, but it creates the informational precondition for market and regulatory pressure to do so. Sites like https://privacyrights.org already categorize and list data breaches, so publishing breach information is not unprecedented.
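As a sketch of what root-cause attribution could look like as structured data, the hypothetical record below reuses the Equifax figures cited earlier. The schema and field names are my own invention for illustration; no current regulation mandates this format.

```python
# Hypothetical structured disclosure record with root-cause attribution.
from dataclasses import dataclass, asdict
import json

@dataclass
class BreachDisclosure:
    organization: str
    incident_date: str               # ISO 8601
    records_affected: int
    root_cause_class: str            # e.g. "unpatched-known-vulnerability"
    enabling_product: str            # product whose defect enabled entry
    vendor_advisory_available: bool  # was a fix published before the breach?

report = BreachDisclosure(
    organization="Example Corp",     # placeholder name
    incident_date="2017-07-29",
    records_affected=147_000_000,
    root_cause_class="unpatched-known-vulnerability",
    enabling_product="Apache Struts (CVE-2017-5638)",
    vendor_advisory_available=True,
)
print(json.dumps(asdict(report), indent=2))
```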

7. Conclusion

The SANS Top 10 list of 2001 is not a monument to the difficulty of cybersecurity. It is a monument to the success of a legal and economic strategy that made insecurity profitable and consequence-free for the parties best positioned to fix it. The EULA disclaimer of liability was not an inevitable feature of the software landscape; it was a deliberate choice, made by an industry that understood exactly what it was disclaiming and why.

Twenty-five years of harm, in the form of breaches, ransomware, identity theft, and critical infrastructure disruption, have been made possible not primarily by sophisticated attackers, but by the legal architecture that ensured no one with the power to fix the root causes had to bear the cost of leaving them unfixed. The attackers in our memes are clever. But the most consequential exploits of the past twenty-five years were not executed in terminal windows. They were drafted by lawyers, ratified by courts, and embedded in the license agreements that nobody reads.

 

Sources & Further Reading

This blog was written with the assistance of claude.ai

SANS Institute (2001). Mistakes People Make that Lead to Security Breaches. https://sans.org/security-resources/mistakes

CISA (2023). Top 10 Cybersecurity Misconfigurations. https://cisa.gov/news-events/cybersecurity-advisories/aa23-278a

The White House (2023). National Cybersecurity Strategy. https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf

TechTarget (2023). SANS Institute: Human Error Remains the Top Security Issue. https://www.techtarget.com/searchsecurity/news/252522226/SANS-Institute-Human-error-remains-the-top-security-issue

Forescout (2025). Since Stuxnet: A History of Critical Infrastructure Attacks. https://www.forescout.com/blog/cybersecurity-threat-evolution-of-otics-and-iot-devices/

Waterfall Security (2025). OT Cybersecurity: The Top 10 Attacks Since 2020. https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/ot-cybersecurity-the-top-10-attacks-since-2020/

OPSWAT (2024). Behind the Breach: Analyzing Critical ICS/OT Cyberattacks. https://www.opswat.com/blog/behind-the-breach-analyzing-critical-ics-ot-cyberattacks

Dragos (2021). Recommendations Following the Oldsmar Water Treatment Facility Cyber Attack. https://www.dragos.com/blog/recommendations-following-the-oldsmar-water-treatment-facility-cyber-attack/

Houston Law Review (2023). Cybersecuring the Pipeline. https://houstonlawreview.org/article/73666-cybersecuring-the-pipeline

European Union (2024). Cyber Resilience Act, Regulation (EU) 2024/2847. https://eur-lex.europa.eu/eli/reg/2024/2847/oj/eng

Schneier, Bruce (2018). Click Here to Kill Everybody. https://www.schneier.com/books/click-here/