slsa-framework / slsa

Supply-chain Levels for Software Artifacts

Home Page: https://slsa.dev

Add Bad Design as a supply chain scenario

moshe-apiiro opened this issue · comments

Perhaps we should consider extending the SLSA flowchart to the left, toward the design step, to cover decisions at that stage that lead to insecure situations (regardless of the implementation in code) and that were made deliberately by an actor to gain an edge in attacking the system.

A revised take on the current SLSA flowchart is shown below; see the left side:
[image: revised SLSA flowchart]

Design can be affected by internal or external actors with direct or indirect influence over a design decision, who realize that the decision might grant them a privilege by definition, or through particular power or knowledge they hold.

Actions such as standardization toward a lowest common denominator, regulation (perhaps through lobbying), and crowdsourced decisions can all be prone to this method of supply chain attack.

Some examples of this might include:

  • Export-grade SSL cipher suites (which also led to the FREAK vulnerability): US export regulations mandated weakened key sizes in SSL deployments, and that regulatory influence resulted in insecure design decisions directly affecting SSL servers. The reasoning was that government actors would be able to decrypt the traffic, which, back then, the industry did not consider feasible.
  • On a similar note, there have been multiple cases in the past of a 'master key' for decryption being built in by design. Some of those decisions were heavily debated before the feature was enabled.

Note of clarification: OWASP Top 10 2021 added Insecure Design as category A04. It should not be confused with this notion, as it concerns the developer/business decision-making process rather than malicious intent.

"Thinking earlier" is good, but I think it should be framed in a different way.

In my mind, developers include those who create designs, and designs should be managed by the source code control system (there may be multiple parts, but it had better all be under version control). If your design is being dictated from somewhere external to your software development process, then there's your problem; go fix that first.

However: there certainly are externally-imposed requirements, and there's also the challenge of education (or lack thereof). So instead of "design", I'd have the simple inputs "Requirements" and "Education", so that we can emphasize that these are "external" things we can influence.

Thoughts?

Good points, @david-a-wheeler .

I like your take on making things simpler to digest by using simpler terms.

If we choose 'requirements', it might be too narrow, as such a supply chain attack can also come from an insider threat, such as an employee who was paid to influence the design (similar to the lobbyist scenario, to some extent, but with someone internal being solicited).

Also, reading your description, you may have articulated a new attack scenario on the same design-process "block": say the org uses a source control system for designs (technically, wikis are source-controlled), but someone forges or modifies those documents to influence the finalized design. Without signature verification and an authorization process for employees on the wiki/source, there is another attack vector on the supply chain.

I think the generalized term 'influence towards bad design' retains its generality across the scenarios mentioned.

Finally, I'm not sure where 'education' fits within an attack-vector scenario, but I'm eager to hear more about your thought process there.

@david-a-wheeler looking forward to further insights and discussion.

I don't think "influence to bad design" helps; I think it's terribly confusing. People don't receive information labelled "influence to bad design".

How about just "external information" as the input to developers? It's manifestly obvious that developers receive external information. You can include externally-developed requirements, externally-developed designs, education, bad Stack Overflow answers, and many other things in that bucket :-).

Also, remove the block "Design". Just have "external information" go straight to the developer. The proposed block diagram implies there's some "design" created external to developers, which is simply false in many cases, and it implies a waterfall development approach that is inherently risky (see Winston Royce's paper for more).

"External information" strikes a good balance indeed, in my opinion as well.

I'm ambivalent about the second comment, though it's still a solid point about not implying waterfall. The thing is that design, especially for a new/major component, does in many cases require its own step by the developer (generalizing the term to include architects, engineers, etc.), and while others are involved in forming those notions, that step can be overlooked.

But overall, as said before - I think we are in agreement here.

I tried to encompass those notions in the diagram:
[image: updated diagram]

It says "Insecure External Design Requirements" currently, and I don't think that's the right phrase.

"Design Requirements" is way too specific. What about other kinds of requirements? Education (or its lack)? External designs? And the input doesn't have to be "Insecure"; hopefully you have good external information you're allowed to use, and in fact we want to increase the likelihood that it's good.

I still think the left-hand entry should be "External information". That covers all the cases (external requirements, external design, education/training, etc.) without needing lots of words. Everything else has very few words, we should strive for the same here.

I agree it is too wordy.
"External information" might suggest overly passive influences, though, and we don't want to lose sight of the ill-intentioned and erroneous influencers that can pose a threat. How can we make sure this is not overlooked by the model?

Great, landed on "External information" then.

I will proceed with the appropriate change PRs.

@david-a-wheeler and @moshe-apiiro, how do you see this affecting the SLSA requirements? This seems out of scope to me, so I'm worried that it will add needless complexity. But perhaps I'm missing something.

That said, perhaps a compromise is to add this to https://slsa.dev/threats without updating the diagram?

Imo, we should make the distinction between introducing bad code and being influenced/biased towards a design decision that affects security/integrity downstream.
As for requirements, I was thinking of a single requirement, broken down into three levels:

  • vetted and non-biased information and requirements gathered (level 1)
  • the aforementioned information checked for integrity (level 3)
  • tamperproof (level 4)
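As an illustrative sketch only, one way the "checked for integrity" level could work in practice is recording a keyed digest of an external design document when it is first vetted and re-checking it later. The function names, the shared key, and the document content below are all hypothetical; nothing here is a SLSA-defined API.

```python
import hashlib
import hmac

def record_digest(doc: bytes, key: bytes) -> str:
    """Record a keyed digest at the moment the external input is vetted."""
    return hmac.new(key, doc, hashlib.sha256).hexdigest()

def verify_digest(doc: bytes, key: bytes, recorded: str) -> bool:
    """Later, confirm the document is byte-identical to what was vetted."""
    return hmac.compare_digest(record_digest(doc, key), recorded)

key = b"example-shared-key"  # hypothetical; real use needs key management
doc = b"Design note: session tokens expire after 15 minutes."

digest = record_digest(doc, key)
print(verify_digest(doc, key, digest))                          # True
print(verify_digest(doc.replace(b"15", b"9999"), key, digest))  # False
```

A keyed digest only detects modification after vetting; the "tamperproof" level would additionally need signatures and authorization, as discussed above for the wiki/source scenario.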

But how would someone verify this? It seems out of scope of SLSA.

I agree with @MarkLodato. I'm 100% in agreement that there's a whole universe to the left of SLSA's entrypoint (of which design is a big consideration) but I'm not sure it helps to draw it in SLSA right now.

SLSA's FAQ, the Limitations section of the main spec, and the limitations headers of each individual use case all call out that you might have a SLSA 0 dependency entering your SLSA 4 process.

This issue is dependent on #276. Once we decide the scope of SLSA, it will be more clear if/how design fits in.

Agree, I wanted to make this type of comment and link following yesterday's WG discussion, but you beat me to it @MarkLodato 😄 Thanks