openssl / openssl

TLS/SSL and crypto library

Home Page: https://www.openssl.org


Scrypt cannot be used with more than 16 megabytes of memory

randombit opened this issue · comments

The following PKCS8 key (encrypted with password 'Rabbit' and using scrypt for password hashing) taken from RFC 7914 cannot be decrypted by OpenSSL:

-----BEGIN ENCRYPTED PRIVATE KEY-----
MIHiME0GCSqGSIb3DQEFDTBAMB8GCSsGAQQB2kcECzASBAVNb3VzZQIDEAAAAgEI
AgEBMB0GCWCGSAFlAwQBKgQQyYmguHMsOwzGMPoyObk/JgSBkJb47EWd5iAqJlyy
+ni5ftd6gZgOPaLQClL7mEZc2KQay0VhjZm/7MbBUNbqOAXNM6OGebXxVp6sHUAL
iBGY/Dls7B1TsWeGObE0sS1MXEpuREuloZjcsNVcNXWPlLdZtkSH6uwWzR0PyG/Z
+ZXfNodZtd/voKlvLOw5B3opGIFaLkbtLZQwMiGtl42AS89lZg==
-----END ENCRYPTED PRIVATE KEY-----
$ ./apps/openssl version
OpenSSL 3.4.0-dev  (Library: OpenSSL 3.4.0-dev )
$ git rev-parse HEAD
1977c00f00ad0546421a5ec0b40c1326aee4cddb
$ ./apps/openssl pkcs8 -in ~/key.pem -passin=pass:Rabbit
Enter Password:
Error decrypting key
C08477BD877F0000:error:030000AC:digital envelope routines:scrypt_alg:memory limit exceeded:providers/implementations/kdfs/scrypt.c:515:
C08477BD877F0000:error:030000AB:digital envelope routines:PKCS5_v2_scrypt_keyivgen_ex:illegal scrypt parameters:crypto/asn1/p5_scrypt.c:285:

This seems to be due to a mistake in RFC 7914, where it is claimed that the N parameter should be "less than 2^(128 * r / 8)". This error caused the earlier report of this issue (#10003) to be closed as working as intended.

But it's pretty easy to see that this is an error in the RFC (an erratum was reported in 2020 but apparently never acted upon) when you consider the inclusion of test vectors (both the PKCS8 file above as well as another N=1048576, r=8, p=1 test in Section 12) that would violate this supposed restriction.

Golang x/crypto and BoringSSL both include the N=1048576 test from RFC 7914 (though in both cases commented out since it takes a while to run). Tarsnap/scrypt (written by the designer of scrypt and co-author of RFC 7914) also includes this test, as does the original scrypt paper (Appendix B).

OWASP recommends N=131072, r=8, p=1.

The OpenSSL man page for scrypt itself suggests using N=1048576, r=8, p=1.

I'm hesitant to make a change here, given the errata wasn't acted upon, but in the interim, can you confirm that this patch resolves the problem:

diff --git a/providers/implementations/kdfs/scrypt.c b/providers/implementations/kdfs/scrypt.c
index ee2d4a7d32..098df10d2d 100644
--- a/providers/implementations/kdfs/scrypt.c
+++ b/providers/implementations/kdfs/scrypt.c
@@ -461,18 +461,6 @@ static int scrypt_alg(const char *pass, size_t passlen,
         return 0;
     }
 
-    /*
-     * Need to check N: if 2^(128 * r / 8) overflows limit this is
-     * automatically satisfied since N <= UINT64_MAX.
-     */
-
-    if (16 * r <= LOG2_UINT64_MAX) {
-        if (N >= (((uint64_t)1) << (16 * r))) {
-            ERR_raise(ERR_LIB_EVP, EVP_R_MEMORY_LIMIT_EXCEEDED);
-            return 0;
-        }
-    }
-

It still fails, but in a slightly different way:

C0443CE2517F0000:error:030000AC:digital envelope routines:scrypt_alg:memory limit exceeded:providers/implementations/kdfs/scrypt.c:503:
C0443CE2517F0000:error:030000AB:digital envelope routines:PKCS5_v2_scrypt_keyivgen_ex:illegal scrypt parameters:crypto/asn1/p5_scrypt.c:285:

This is the check if (Blen + Vlen > maxmem). I checked, and maxmem is apparently being set to 32 MB somehow. Unfortunately it seems the command line doesn't offer any way to configure this.

That's... weird. The default maxmem is 1025*1024^2 bytes.

Are you passing an OSSL_KDF_PARAM_SCRYPT_MAXMEM param when you do the derivation? What value are you setting there? That's the only way I can see that value getting reduced like that.

No; as far as I know I'm not doing anything special here, literally just the command line:

openssl pkcs8 -in key.pem -passin=pass:Rabbit

I see the problem

When we decrypt the key file, openssl identifies the password-based encryption scheme from the NID encoded in the key (scrypt), but passes 0 as the maxmem parameter, not knowing what other value to use. As such, EVP_PBE_scrypt_ex, seeing a value of 0, sets maxmem to the default SCRYPT_MAX_MEM, which is 32 MB.

That complicates this. From an API standpoint, this is a non-issue: if you were writing your own application, you would just call EVP_PBE_scrypt_ex, passing an appropriate value.

But because this is an application which is decoding a PEM file, the call is buried deep in the call stack, and it has no guidance as to what a reasonable value is there.

If I rip out the safety check for maxmem entirely it works:

nhorman@fedora:~/git/openssl$ LD_LIBRARY_PATH=$PWD ./apps/openssl pkcs8 -in ~/key.pem -passin=pass:Rabbit
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQg4RaNK5CuHY3CXr9f
/CdVgOhEurMohrQmWbbLZK4ZInyhRANCAARs2WMV6UMlLjLaoc0Dsdnj4Vlffc9T
t48lJU0RiCzXc280Vg/H5fm1xAP1B7UnIVcBqgDHDcfqWm1h/xSeCHXS
-----END PRIVATE KEY-----

This all works fine, but it destroys all the protections which maxmem offers.

I'm sorry; right now, between the erratum that never got addressed and the need to remove the maxmem protections that keep a system from consuming all of its RAM, I'm not sure this can be addressed.

Can I suggest the following, which would IMO improve the situation quite a bit:

  • Remove the check on N/r which is based on the incorrect RFC text (what you removed in your original suggested patch). This check (IIUC) actually prevents any usage of scrypt with larger parameters at all. This alone would improve the situation a lot IMO (from a 16 MB effective maximum to 32 MB).

and possibly also

  • For private key decryption, I can understand that you want to keep memory bounded to avoid potential DoS, and can't easily change the call stack to let the caller indicate how much memory is acceptable/safe. But is there any possibility of increasing this limit a bit? It seems pretty clearly arbitrary (at least it was set to this value in the original commit a95fb9e with no particular justification, afaict); any chance of increasing it to, say, 64 MB or 128 MB? These days there are machines with 32 MB of L3 cache.

@paulidale @mattcaswell @t8m @levitte
Can I ask you to comment on the first item above?

There is an erratum for RFC 7914 here:
https://www.rfc-editor.org/errata/rfc7914

It suggests that the bounds check on N is invalid, but it was never verified or acted upon by the authors. I'm not sure if we have a policy or best practice for handling unverified errata, but the math makes sense.

Regarding your request above @randombit, I think that can be considered, and I agree the limit does seem somewhat arbitrary, but I'm not sure what a 'reasonable' limit is. An adjustment seems in order for larger systems, but OpenSSL supports a wide variety of systems, modern (large x86_64 servers), old (VMS), and small (embedded armv7), so just picking a different value feels like a recipe for kicking the can down the road, until a smaller system says the limit is too big.

As a counter-proposal, I'd suggest an API in e_os.h to perform a run-time test of the host platform to determine the amount of RAM available, and set the limit in scrypt to a fraction of that value. @t8m @mattcaswell @levitte thoughts on that as an approach?

@kroeckx that's why I suggested a run-time check, not a compile-time check. We certainly can't determine a valid memory limit at build time; that would be no better than the arbitrary limit we have now.

As to whether or not we can trust this input data: is it reasonable to say that we can always trust the input? I'm not sure how we can assert that the input being passed in is always trusted.

Yes, this is about a KDF that contains settings. The problem is that the settings get set in certain cases as part of a larger operation (in this case PKCS5 decoding), so the setting has to be made based on input that is not part of the command line.

A configuration setting might be reasonable. A static global variable in the scrypt code, initialized at build time to -1 and gating a call to the NCONF API to look this up, would work I think.

@randombit I'm not sure if I have time to do this; are you interested in putting together a PR for this proposal? I'm happy to assign this to you as a community issue.


I'd be very reluctant to use system memory as the basis for the maximum memory setting. There is simply so much else happening on a device that we're not going to be able to reliably portion off a lump for scrypt. This information will have to come from the calling application.

Something coming from NCONF or the environment would be reasonable. As would making the maximum memory limit build time configurable (and possibly increasing the default). I wouldn't increase the default without a bypass mechanism. OpenSSL runs on a lot of smaller devices.

I don't see a problem with the N/r check being removed, so long as a sanity check remains elsewhere (which it does).

ACK, thanks @paulidale

@randombit do you think you can try to add a configuration option?

I agree with @paulidale; there should be no problem removing the offending check.

Yes I would be willing to do a PR addressing this, using whatever approach the team thinks best.

@randombit @paulidale @levitte @kroeckx thank you

@randombit as I read it the forward action here is as follows:

  • Remove the N check as per the erratum (feel free to use my test patch if you like, or write your own)
  • Add a CONF variable to the configuration file specifying the maximum memory scrypt should use
  • Add code in EVP_PBE_scrypt_ex to query the configuration option above and set maxmem appropriately, allowing overrides from the passed-in maxmem parameter if it is not 0
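For the second bullet, the configuration hook might look something like this in openssl.cnf. The section and key names (scrypt_sect, maxmem) are hypothetical; no such option exists in OpenSSL today:

```ini
# openssl.cnf -- names below are hypothetical, sketching the proposal
openssl_conf = openssl_init

[openssl_init]
scrypt_conf = scrypt_sect

[scrypt_sect]
# Upper bound on scrypt working memory, in bytes (128 MB here)
maxmem = 134217728
```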

Thank you for being willing to undertake this; I'm assigning it to you as a community issue.

When you have a changeset ready, please open a PR and link it here

Hi all, I started looking into this and am not quite sure of the best way to access a configuration value from EVP_PBE_scrypt_ex. The only parameter that looks likely to have access to configuration state is the OSSL_LIB_CTX, and while I can find functions that translate from an NCONF to a lib ctx (NCONF_get0_libctx), I don't see anything similar for extracting configuration information from an OSSL_LIB_CTX. I'll continue looking into it, but a hint or a reference to code doing a similar operation would be helpful.

Ugh, that's right; we don't have a way to fetch the context from the loaded config. I think what you need to do is: in CONF_modules_load_file, after the call to NCONF_load, interrogate the appropriate section where the scrypt maxmem value is stored and save that to a new variable in the libctx struct, which you can then fetch from the ctx in EVP_PBE_scrypt_ex.

@paulidale @levitte does that seem like the right approach to you?


I don't think there is a way to get an NCONF from a lib ctx.

The usual approach is to call CONF_module_add() to add a configuration-processing module. This then gets called when the configuration is loaded, and it is expected to save the relevant details somewhere. In this case, that would have to be in the libctx (either directly or as a lib-ctx-based global).

I'm not sure how good an idea doing this would be but that's the usual process.

Barring that, is there a better way you can think of to avoid having to set a hard-coded limit for maxmem here?