doyensec / electronegativity

Electronegativity is a tool to identify misconfigurations and security anti-patterns in Electron applications.



Guideline for handling False Positives / False Negatives

maxmeffert opened this issue


Hi,
is there a guideline for handling False Positives / False Negatives?

We are currently looking into using electronegativity for automated security checks in a project.
However, we noticed that our code design and architecture can have a great effect on the results.

For example, CONTEXT_ISOLATION_JS_CHECK:

Case a)

Declaration of webPreferences.contextIsolation is part of the BrowserWindow configuration object literal.

const window = new BrowserWindow({
  webPreferences: {
    contextIsolation: true,
  },
});

No issue is found, works as expected.

Case b)

Value of webPreferences.contextIsolation is not part of the BrowserWindow configuration object literal.

const contextIsolation = true;
const window = new BrowserWindow({
  webPreferences: {
    contextIsolation: contextIsolation,
  },
});

CONTEXT_ISOLATION_JS_CHECK fails, which was not expected.

Case c)

BrowserWindow configuration is extracted to a variable and possibly to another file for reuse.

const configuration = {
  webPreferences: {
    contextIsolation: true,
  },
};
const window = new BrowserWindow(configuration);

CONTEXT_ISOLATION_JS_CHECK fails, which was not expected.

The reason for this became obvious when we looked at how the CONTEXT_ISOLATION_JS_CHECK is implemented.
It just leverages simple static analysis using only the AST, which works fine for a).
However, b) and c) would require some form of control flow or data flow analysis.
Since control flow and data flow analysis are heavy lifting, especially for JavaScript projects, we don't expect this to be implemented soon (or ever).
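To illustrate (this example is not from our actual codebase), even a simple runtime-dependent value is out of reach for purely static resolution:

// Illustrative only: the value depends on the runtime environment,
// so no purely static analysis can decide whether isolation is enabled.
const contextIsolation = process.env.LEGACY_MODE !== '1';
const window = new BrowserWindow({
  webPreferences: {
    contextIsolation: contextIsolation,
  },
});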
For now, we are considering just using your checks for manual review without running electronegativity.
But a guide for handling false positives / false negatives and how design decisions affect the outcome of the analysis would be nice.

Nevertheless, many thanks for this project and security recommendations for electron!

Hello @maxmeffert and thank you for deciding to contribute!

As you already noted, Electronegativity was born as a simple "naive" SAST tool. As with every SAST tool, its role should be to support a manual auditor reviewing the source code of an application to identify sources of vulnerabilities. In order to help less experienced testers, the team decided to create a documentation reference for every issue featured in Electronegativity, where the auditor can read the associated risks, reproduction/exploitation steps, and reference links for any reported issue.

Nonetheless, I understand your struggle. As you pointed out, by design we're very cautious about NOT issuing a finding, preferring instead to lower its confidence or even flag it for manual review rather than discarding it altogether. Because of this I believe we inherently have a much higher rate of false positive than false negative errors in our checks, but that's a trade-off we accepted and are aware of, given that the tool was meant to be used mainly as support for a manual auditor.

With the desire to improve Electronegativity's accuracy, in recent years we recognized that we needed some sort of context-awareness or "data flow analysis", as you well put it. This is why we encapsulated Escope (#40) to introduce a variable scoping / resolution mechanism for some notable checks (mainly the BrowserView/BrowserWindow options ones). This works most of the time on simple cases, but it still fails when a variable's value can't be determined statically (because it is assigned by non-pure functions or environment variables, or is simply declared without a detected assignment).
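To illustrate the idea (a minimal sketch, not Electronegativity's actual implementation; it assumes the esprima and escope packages), resolving an identifier back to a literal initializer looks roughly like this:

const esprima = require('esprima');
const escope = require('escope');

// Code equivalent to your case (b): the option value is an identifier.
const code = `
const contextIsolation = true;
const win = new BrowserWindow({
  webPreferences: { contextIsolation: contextIsolation },
});
`;

const ast = esprima.parseScript(code);
const scopeManager = escope.analyze(ast, { ecmaVersion: 6 });

// Find the variable across all scopes and, if its only definition has a
// literal initializer, read the statically known value.
const variable = scopeManager.scopes
  .flatMap(scope => scope.variables)
  .find(v => v.name === 'contextIsolation');
const def = variable && variable.defs[0];
if (def && def.node.init && def.node.init.type === 'Literal') {
  console.log('contextIsolation resolves to', def.node.init.value); // true
}

// Resolution like this breaks down as soon as the initializer is not a plain literal.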

In bcae6bb I introduced a small change to the CONTEXT_ISOLATION_JS_CHECK using Escope to also solve (b). As you already pointed out, (a) works out of the box, while (c) works too when the configuration object is "complete", without other referenced variables, e.g.:

const configuration = {
  webPreferences: {
    contextIsolation: true,
  },
};
const window = new BrowserWindow(configuration);

However, we don't currently support more recursive or convoluted resolution such as this (as per your initial submission):

var safe_settings = {
  browserWindows: [
    {
      contextIsolation: true
    }
  ]
};

const configuration = {
  webPreferences: {
    contextIsolation: safe_settings.browserWindows[0].contextIsolation,
  },
};
const window = new BrowserWindow(configuration);

While the changes introduced help cover (b), a SAST tool will never fully solve your initial problem. I'm positive about adding a disclaimer detailing how we generally decide what to report, though (i.e. our better-safe-than-sorry approach).

But a guide for handling false positive / false negatives and how design decisions affect the outcome of the analysis would be nice.

This really varies check by check. Part of each check's behavior is described by its tests, but the logic can differ greatly depending on the Electron version, the values of other checks, or check-specific considerations. While in the long run having such specs would be desirable, I can't see myself committing to this kind of effort in the near future. Having said that, the tool is always open for contributions by the community! :P

Another solution for your use-case would be creating some kind of exclusion list for false-positive issues (not currently supported, but it should be easy enough to implement). In this way, if you were to test the same codebase in a development/build pipeline, you would be warned about new problems every time new dangerous code is pushed.
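Sketching the idea (purely hypothetical, since nothing like this exists in Electronegativity today), a pipeline wrapper could filter the reported findings against a list of known false positives checked into the repository:

// Hypothetical sketch: `findings` is assumed to be an array of objects like
// { check, file } produced by a scan, and exclusions.json a list of known
// false positives maintained alongside the codebase.
const fs = require('fs');

const exclusions = JSON.parse(fs.readFileSync('exclusions.json', 'utf8'));

function isExcluded(finding) {
  return exclusions.some(e => e.check === finding.check && e.file === finding.file);
}

// Only new, non-excluded findings would fail the pipeline.
function newFindings(findings) {
  return findings.filter(f => !isExcluded(f));
}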

Let me know if this answers your questions,

Just wanted to drop by to mention that in #78 we implemented a simple exclusions system if you are using the tool in a development/build pipeline. The change is live since v1.8.0. We also have plans to improve variable resolution accuracy for the future, so stay tuned for more updates.

Hi @phosphore,
thanks for the quick initial reply, the fast code adjustment, and the continued attention to this issue.
I'm very sorry for my late reply, I was reassigned to another project for a short while and a little packed with stuff to do.
Anyway, now I'm back on topic:

In retrospect, I think I filed this issue a little prematurely. My team and I were relatively new to Electron, so we started - among other things - by checking the Security Recommendations of Electron, where this project is referenced. There it states:

To automate the detection of misconfigurations and insecure patterns, it is possible to use electronegativity.

So we hoped to run scans as part of our CI/CD pipeline. However, once we realized that some cases resulting from our design choices could not be analyzed accurately, or are simply not supported due to the inherent naivety of static analysis, we decided to:

  1. use electronegativity manually during reviews, and
  2. write unit tests, based on the list of checks in the wiki, for things that electronegativity currently cannot detect (a minimal sketch follows below).
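As an example of what we mean by (2), here is a minimal Jest-style sketch, assuming the BrowserWindow configuration from case (c) is exported from a hypothetical window-config.js:

// Minimal sketch of a unit test guarding a setting that the static scan
// cannot resolve; window-config.js is a hypothetical module exporting the
// shared BrowserWindow configuration object.
const configuration = require('./window-config');

describe('BrowserWindow security configuration', () => {
  test('context isolation stays enabled', () => {
    expect(configuration.webPreferences.contextIsolation).toBe(true);
  });
});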

Also, I may have titled the issue misleadingly.

But a guide for handling false positive / false negatives and how design decisions affect the outcome of the analysis would be nice.

I may have accidentally mixed unsupported cases with false negatives. Actually, the main point of this issue was "how design decisions affect the outcome of the analysis", i.e. which code patterns are supported and which are not.

Perhaps the wiki page for each check could list the cases that are currently supported and give a (naturally incomplete) description of cases that are not supported, i.e. when a misconfiguration cannot be detected yet. But I see that this could become a time-consuming task.

A quick "fix" could also be to address this issue in the introductory statement in the README since it currently only states to be careful with some things electronegativity can detect:

Software developers and security auditors can use this tool to detect and mitigate potential weaknesses and implementation bugs when developing applications using Electron. A good understanding of Electron (in)security is still required when using Electronegativity, as some of the potential issues detected by the tool require manual investigation.

This could also help reduce the risk of providing a false sense of security when an application is in fact configured in an insecure manner that electronegativity simply cannot detect. This is natural, given the challenges involved in thorough static analysis, but less experienced developers may treat electronegativity as a magic black box.

I consider this issue closed.

Thanks again for this helpful project! :-)