keybase / triplesec

Triple Security for the browser and Node.js

Home Page: https://keybase.io/triplesec

Skipping key stretching?

shesek opened this issue

I'm using keys that are randomly generated and not based on a user provided password. For this case, stretching the key doesn't add any value.

Is there a way to tell TripleSec to skip that?

There's not really a convenient way to do this. You could write your own Encryptor/Decryptor based on what we have and skip part of the @kdf step in @resalt.

Keep in mind that you still might need to turn your random key into a longer key, since you need 192 bytes of key material to run triplesec --- you need computationally independent keys for the 3 ciphers and the 2 HMACs.
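
(For concreteness, here is a minimal sketch of what "turn your random key into a longer key" could look like in Node.js. HKDF is used purely as an illustration of expanding a random 32-byte key into 192 bytes of independent key material; it is not triplesec's built-in KDF, which is scrypt.)

```ts
import { hkdfSync, randomBytes } from "crypto";

// Hypothetical starting point: a 32-byte key from a good RNG, no passphrase involved.
const rawKey = randomBytes(32);

// Expand it to the 192 bytes triplesec consumes (keys for 3 ciphers + 2 HMACs).
// HKDF is only an illustration here; triplesec itself derives these bytes via scrypt.
const salt = randomBytes(32);
const keyMaterial = Buffer.from(
  hkdfSync("sha512", rawKey, salt, Buffer.from("triplesec-demo"), 192)
);

console.log(keyMaterial.length); // 192
```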

Would it be something that makes sense as part of triplesec in your opinion?
On Mar 21, 2014 1:20 AM, "Maxwell Krohn" notifications@github.com wrote:

Closed #32.

We haven't had a need for it, but seems like others have :)

One easy way to do this, without any changes to the code, is to make a "version" of triplesec that has a trivial key-stretch component. Like run scrypt with N=2, and then you'll get the key extension via PBKDF2 (which is a subroutine of scrypt).
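
(Roughly, a "trivial key-stretch component" along the lines of the N=2 suggestion amounts to the sketch below. The parameters and the use of Node's built-in scrypt are illustrative only; this is not a registered triplesec version.)

```ts
import { scryptSync, randomBytes } from "crypto";

// With N=2 the scrypt CPU/memory cost is negligible, so this is essentially
// just the PBKDF2 key-extension subroutine producing the 192 bytes triplesec wants.
const key = randomBytes(32);  // already-random key, so stretching adds nothing
const salt = randomBytes(8);
const extended = scryptSync(key, salt, 192, { N: 2, r: 8, p: 1 });

console.log(extended.length); // 192
```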

Thanks @maxtaco, adding a version works and is indeed a very easy way to achieve that, though it feels kinda wrong to add that as a "version", as it's not what it really is... I'll do it for now as a quick way to move forward with my project, but will hopefully change it in the future (probably to a custom Encryptor/Decryptor that takes an arbitrary-length key and turns it into a fixed 192-byte key).

If functions called EncryptWithoutScrypt and DecryptWithoutScrypt were added to the spec, you could have functionally equivalent operations without the added overhead. Basically, the functions that DO need Scrypt can run Scrypt on a user-provided passphrase, get the 192 bytes, and then pass those bytes on to Encrypt- or Decrypt-WithoutScrypt. Otherwise, internal or server-based applications that can safely generate and store the 192 bytes needed for a TripleSec payload don't have to run Scrypt at all.
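
(A sketch of the proposed split might look like the following. AES-256-GCM stands in for the real TripleSec core, which layers three ciphers and two HMACs over the 192 bytes; the function names are hypothetical and taken from this comment, not from the actual triplesec API.)

```ts
import { createCipheriv, randomBytes, scryptSync } from "crypto";

// Stand-in for the proposed EncryptWithoutScrypt: accepts fully-derived key
// bytes and does no stretching of its own.
function encryptWithoutScrypt(keyBytes: Buffer, data: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", keyBytes.subarray(0, 32), iv);
  const ct = Buffer.concat([cipher.update(data), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

// The passphrase-based entry point just derives the 192 bytes first and then
// delegates, so both paths share the same core. (Salt handling is omitted for brevity.)
function encryptWithScrypt(passphrase: string, data: Buffer): Buffer {
  const keyBytes = scryptSync(passphrase, randomBytes(16), 192);
  return encryptWithoutScrypt(keyBytes, data);
}
```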

Mind you, this puts the onus on the developers of systems to make sure that they DO keep their TripleSec key material safe, but it still allows for a specific standard of file transport. If, for example, keys were generated as part of a transport protocol like Diffie-Hellman, then you don't necessarily NEED Scrypt except to stretch the key to the full 192 bytes, but you could easily fit a 192-byte payload inside an RSA operation as well, no stretching needed.

I bring this up, because I had the same idea for miniLock payloads. If the keys used in the process were purely ephemeral, then there's no need to do Scrypt to generate the 32 byte secret key, just a good RNG. miniLock is already an ECC process (using NaCl and curve25519), so extending it out to a wider protocol isn't that difficult.

I don't think this made it into V4 (#51), and come to think of it, I don't think I put it in my C# port either... Not sure if this is still needed, but I'll bump this up to the discussion for V5 (#72).
The two schools of thought proposed here (skimming the comments) are:

  1. To have functions that let you skip the key stretching altogether, with methods that require the full key bytes to be passed in (which puts the onus on the implementer to supply secure key material), or...
  2. To have a "reduced" KDF (be it Scrypt or Argon2 or whatever) which would still produce the internal key bytes needed, but would allow faster server or transport keys and reduce the overhead for automatically handled files rather than "human accessed" files. This could also allow a master key for a GROUP of files, with each file generating its own salt and internal keys from one master key, so that each file doesn't need to spend 2-5 seconds on pre-processing. The master key has to be strong enough to hold up to brute-force attacks (using the full KDF compute, since we're likely dealing with human input), and the output has to be long enough that ONLY an exhaustive search or a collision would produce the key that unlocks the individual files. The theory is that since the files use a "cheap" KDF vs. the master, it would be "easier" to attack that, but the file key would still be very long going through that "cheap" KDF, so the attacker isn't really catching a break. This then puts the onus on the individual encryption keys, which should be strong in the first place, otherwise we're chasing our tails. (A rough sketch of this master-key arrangement follows below.)
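
(Sketch of option 2, under the assumption that an expensive KDF is run once on the human passphrase and each file then gets its own key via a cheap derivation over the master key plus a per-file salt. The choice of HKDF and all parameters are illustrative only.)

```ts
import { scryptSync, hkdfSync, randomBytes } from "crypto";

// Expensive KDF exactly once, on the human-chosen passphrase, to get a strong master key.
const masterSalt = randomBytes(16);
const masterKey = scryptSync("correct horse battery staple", masterSalt, 64, {
  N: 2 ** 14,
  r: 8,
  p: 1,
});

// Each file gets its own salt and a cheap derivation over the master key,
// so per-file processing skips the multi-second stretch entirely.
function perFileKey(fileSalt: Buffer): Buffer {
  return Buffer.from(
    hkdfSync("sha512", masterKey, fileSalt, Buffer.from("file-key"), 192)
  );
}

const fileSalt = randomBytes(32);
const fileKey = perFileKey(fileSalt); // 192 bytes, unique to this file
console.log(fileKey.length);
```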