Ransomware Get-well plan

The US Government should pursue five policies through legislation, investment, and action to drive the right incentives for asset owners to improve security.

In late June I wrote broadly on the role of regulation in responding to ransomware, arguing that the government should not prescribe to asset owners (in the private or public sector) exactly how they must secure their assets, but that it must take steps to apply the right incentive structure to drive good security.

This follow-on essay outlines five such policies the US Government should pursue through legislation, investment, and action.


1. You are responsible for what you build.

This is the 'Dan Geer principle,' based on his third point (addressing software liability) from his 2014 Black Hat keynote. Congress should pass laws to clarify that organizations are responsible (liable) for the software products they build.

Geer goes into detail on the tradeoffs and implementation of software liability; his keynote is worth reading in full. The relevant section is reproduced below.

...
3. Source code liability -- CHOICE

Nat Howard said that "Security will always be exactly as bad as it
can possibly be while allowing everything to still function,"[NH]
but with each passing day, that "and still function" clause requires
a higher standard. As Ken Thompson told us in his Turing Award
lecture, there is no technical escape;[KT] in strict mathematical
terms you neither trust a program nor a house unless you created
it 100% yourself, but in reality most of us will trust a house built
by a suitably skilled professional, usually we will trust it more
than one we had built ourselves, and this even if we have never met
the builder, or even if he is long since dead.

The reason for this trust is that shoddy building work has had that
crucial "or else ..." clause for more than 3700 years:

If a builder builds a house for someone, and does not construct
it properly, and the house which he built falls in and kills
its owner, then the builder shall be put to death.
-- Code of Hammurabi, approx 1750 B.C.

Today the relevant legal concept is "product liability" and the
fundamental formula is "If you make money selling something, then
you better do it well, or you will be held responsible for the
trouble it causes." For better or poorer, the only two products
not covered by product liability today are religion and software,
and software should not escape for much longer. Poul-Henning Kamp
and I have a strawman proposal for how software liability regulation
could be structured.

.......................
0. Consult criminal code to see if damage caused was due to intent
or willfulness.
.......................

We are only trying to assign liability for unintentionally caused
damage, whether that's sloppy coding, insufficient testing, cost
cutting, incomplete documentation, or just plain incompetence.
Clause zero moves any kind of intentionally inflicted damage out
of scope. That is for your criminal code to deal with, and most
already do.

.......................

1. If you deliver your software with complete and buildable source
code and a license that allows disabling any functionality or
code the licensee decides, your liability is limited to a refund.
.......................

Clause one is how to avoid liability: Make it possible for your
users to inspect and chop out any and all bits of your software
they do not trust or want to run. That includes a bill of materials
("Library ABC comes from XYZ") so that trust has some basis,
paralleling why there are ingredient lists on processed foods.

The word "disabling" is chosen very carefully: You do not need to
give permission to change or modify how the program works, only to
disable the parts of it that the licensee does not want or trust.
Liability is limited even if the licensee never actually looks at
the source code; as long as he has received it, you (as maker) are
off the hook. All your other copyrights are still yours to control,
and your license can contain any language and restriction you care
for, leaving the situation unchanged with respect to hardware-locking,
confidentiality, secrets, software piracy, magic numbers, etc.

Free and Open Source Software (FOSS) is obviously covered by this
clause which leaves its situation unchanged.

.......................
2. In any other case, you are liable for whatever damage your
software causes when it is used normally.
.......................

If you do not want to accept the information sharing in Clause 1,
you fall under Clause 2, and must live with normal product liability,
just like manufacturers of cars, blenders, chain-saws and hot coffee.

How dire the consequences, and what constitutes "used normally" is
for your legislature and courts to decide, but let us put up a
strawman example:

A sales-person from one of your long time vendors visits and
delivers new product documentation on a USB key, you plug the
USB key into your computer and copy the files onto the computer.

This is "used normally" and it should never cause your computer to
become part of a botnet, transmit your credit card number to Elbonia,
or copy all your design documents to the vendor. If it does, your
computer's operating system is defective.

The majority of today's commercial software would fall under Clause
2 and software houses need a reasonable chance to clean up their
act or to move under Clause 1, so a sunrise period is required.
But no longer than five years -- we are trying to solve a dire
computer security problem here.

And that is it really: Either software houses deliver quality and
back it up with product liability, or they will have to let their
users protect themselves. The current situation -- users can't see
whether they need to protect themselves and have no recourse to
being unprotected -- cannot go on. We prefer self-protection (and
fast recovery), but others' mileage may differ.

Would it work? In the long run, absolutely yes. In the short run,
it is pretty certain that there will be some nasty surprises as
badly constructed source code gets a wider airing. The FOSS community
will, in parallel, have to be clear about the level of care they
have taken, and their build environments as well as their source
code will have to be kept available indefinitely.

The software houses will yell bloody murder the minute legislation
like this is introduced, and any pundit and lobbyist they can afford
will spew their dire predictions that "This law will mean the end
of computing as we know it!"

To which our considered answer will be:

Yes, please!  That was exactly the idea.

Developers must exercise due care in the products they build and cannot continue passing all of the risk on to consumers. Such legislation would bring the tech industry into the 21st century by applying the liability frameworks already common in other verticals.
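
To make the quoted Clause 1 concrete, the 'bill of materials' it calls for can be as simple as a machine-readable list of components and where they come from, shipped alongside the buildable source. Here is a minimal sketch in Python; the product, component entries, and field layout are illustrative assumptions, not a specific SBOM standard such as SPDX or CycloneDX:

    # Minimal sketch of a machine-readable bill of materials ("Library ABC
    # comes from XYZ"). Product, components, and layout are illustrative
    # assumptions, not a specific SBOM standard such as SPDX or CycloneDX.
    import json

    bill_of_materials = {
        "product": "example-app",   # hypothetical product name
        "version": "1.4.2",
        "components": [
            {"name": "libabc", "version": "2.1.0", "supplier": "XYZ Corp",
             "source": "https://example.com/libabc"},
            {"name": "openssl", "version": "3.0.13",
             "supplier": "OpenSSL Project", "source": "https://www.openssl.org"},
        ],
    }

    # Ship this alongside the buildable source so the licensee knows what
    # they are trusting and what they may disable under Clause 1.
    print(json.dumps(bill_of_materials, indent=2))

The specific format matters far less than the effect: the licensee can see, and selectively disable, exactly what they are being asked to trust.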

2. 'Building' includes deployment, not just development.

Under the legislation from Policy 1 above, the developing company holds liability for flaws native to its software. Congress should additionally specify that 'deployment' falls under 'development' for the purposes of liability under this law: if an asset is deployed insecurely, the deploying business is liable.

This would both close a significant gap in the legislation (Microsoft's Active Directory, for example, can be deployed improperly regardless of the security of the underlying software) and drive better deployment guidance from developing companies (if organizations were directly liable for, say, the exposure of customer data in their S3 buckets, the free market could then provide a better solution as customers require AWS to harden default settings and publish clearer deployment guidance).
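
To illustrate what 'deployed insecurely' might mean for the S3 example above, here is a minimal sketch in Python using boto3, with a hypothetical bucket name. It treats a bucket as properly deployed only if every S3 Block Public Access setting is enabled, and turns them on if not:

    # Minimal sketch: verify, and if necessary enforce, S3 Block Public Access
    # before treating a bucket as 'securely deployed'. Bucket name is hypothetical.
    import boto3
    from botocore.exceptions import ClientError

    REQUIRED = {
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    }

    def blocks_public_access(s3, bucket):
        # True only if every Block Public Access setting is enabled.
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)[
                "PublicAccessBlockConfiguration"]
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                return False  # bucket has no Block Public Access configuration
            raise
        return all(cfg.get(key) for key in REQUIRED)

    if __name__ == "__main__":
        s3 = boto3.client("s3")
        bucket = "example-customer-data"  # hypothetical bucket
        if not blocks_public_access(s3, bucket):
            s3.put_public_access_block(
                Bucket=bucket, PublicAccessBlockConfiguration=REQUIRED)
            print("Enabled Block Public Access on " + bucket)
        else:
            print(bucket + " already blocks public access")

The particular check is beside the point; what matters is that 'secure by default' deployment is something the deploying organization can verify and be held accountable for.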

Asset owners must exercise due care in the environments they own and cannot simply transfer risk to a third party as a way of avoiding liability; the buck stops with the owner. If an asset owner is breached after transferring risk and an investigation (conducted, perhaps, by a Cyber Safety Review Board) concludes the asset owner did not exercise due care in partnering with the third party, then that risk transfer is voided and the asset owner still owns the liability.

This might be tailored in legislation to apply only to Systemically Important Critical Infrastructure ('SICI,' defined on page 97 of the Cyberspace Solarium Commission Report as "entities responsible for systems and assets that underpin national critical functions"), or could be applied to organizations of a certain size (measured in annual revenue, monthly users, value of assets held, or other metrics).

3. 'Building' includes maintenance, not just development or deployment.

Asset owners should be liable for how they maintain deployed systems. As the government enforces responsibility by applying liability, that legislation should include proper maintenance (for example, timely patching of known vulnerabilities) as a responsibility of asset owners.

4. Reject/disincentivize monoculture.

Monoculture increases the risk of class breaks, and supporting a diversity of products improves security. The government can most easily support product diversity by pursuing robust anti-monopoly policy through legislation and enforcement, and by investing in incubators and small businesses. Legislation and enforcement will ensure innovation is properly rewarded, and investment will directly foster innovation, in turn driving a diversity of software and resulting in more stability.

5. Engage in aggression against perpetrators.

This principle can be carried out by the government alone, through legislation and (non-kinetic) military action.

As long as states (primarily Russia, but also China) maintain that they are not involved in ransomware operations against the US, those operators, as non-state actors threatening US national security, are fair game for offensive cyber action by the Cyber Mission Force (CMF) under USCYBERCOM. See last week's WYSK for more discussion of using the CMF to combat ransomware.

There is strong precedent for this: the US has shown far less restraint in its drone program, taking much more aggressive action against adversaries who cause less harm to the US and violate fewer laws than ransomware actors do. Additionally, since actions against ransomware operators would be approved by Congress, they would be legally authorized.


To recap:

In the proposed legislative solutions, the government does not dictate specific standards that industry (or even government) must adhere to, but it does require due care in development, deployment, and maintenance. This leaves the tech industry free to innovate wildly and rapidly, but applies basic liability principles common in other industries (building, automotive, healthcare, hospitality, retail, etc.) in order to produce better security outcomes for the country.

In the proposed Executive Branch actions, the US follows its own precedent as well as international norms, supported by legislation ensuring compliance with US law.

