TokTok / c-toxcore

The future of online communications.

Home Page: https://tox.chat


Tox Handshake Vulnerable to KCI

zx2c4 opened this issue · comments

Hello,

I found this source code confusingly written (and downright scary at times) and the specification woefully underspecified and inexplicit, so it's entirely possible my understanding of the handshake is inaccurate. But on the off-chance that 5 minutes of source code review at 4am yielded something accurate, here is my understanding of the handshake:

Peer A (Alice) has the longterm static keypair (S_A^{pub}, S_A^{priv}). Peer A has the session-generated ephemeral keypair (E_A^{pub}, E_A^{priv}). Peer B (Bob) has the longterm static keypair (S_B^{pub}, S_B^{priv}). Peer B has the session-generated ephemeral keypair (E_B^{pub}, E_B^{priv}).

Message 1: A -> B

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_A^{pub})

Message 2: B -> A

XAEAD(key=ECDH(S_B^{priv}, S_A^{pub}), payload=E_B^{pub})

Session Key Derivation

ECDH(E_A^{priv}, E_B^{pub}) = ECDH(E_B^{priv}, E_A^{pub})
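In libsodium terms, a minimal sketch of what that construction would look like is below (hypothetical names, purely to illustrate my reading of the protocol; this is not the actual toxcore code):

```c
#include <sodium.h>

/* Illustrative sketch of the handshake shape described above.
 * Hypothetical names; NOT the toxcore implementation.
 * Assumes sodium_init() has already been called. */

/* Message 1: A -> B
 * XAEAD(key = ECDH(S_A_priv, S_B_pub), payload = E_A_pub)
 * crypto_box keys its AEAD with exactly that static-static ECDH. */
static int make_message1(unsigned char msg[crypto_box_NONCEBYTES +
                                           crypto_box_MACBYTES +
                                           crypto_box_PUBLICKEYBYTES],
                         const unsigned char E_A_pub[crypto_box_PUBLICKEYBYTES],
                         const unsigned char S_B_pub[crypto_box_PUBLICKEYBYTES],
                         const unsigned char S_A_priv[crypto_box_SECRETKEYBYTES])
{
    randombytes_buf(msg, crypto_box_NONCEBYTES);   /* nonce prefix */
    return crypto_box_easy(msg + crypto_box_NONCEBYTES,
                           E_A_pub, crypto_box_PUBLICKEYBYTES,
                           msg, S_B_pub, S_A_priv);
}

/* Message 2 is symmetric: B encrypts E_B_pub under ECDH(S_B_priv, S_A_pub). */

/* Session key: ECDH(E_A_priv, E_B_pub) == ECDH(E_B_priv, E_A_pub). */
static int derive_session_key(unsigned char key[crypto_scalarmult_BYTES],
                              const unsigned char my_eph_priv[crypto_box_SECRETKEYBYTES],
                              const unsigned char their_eph_pub[crypto_box_PUBLICKEYBYTES])
{
    return crypto_scalarmult(key, my_eph_priv, their_eph_pub);
}
```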

Is this an accurate representation of the handshake? If so, keep reading. If not, you may safely stop reading here, close the issue, and accept my apologies for the misunderstanding.

The issue is that this naive handshake is vulnerable to key-compromise impersonation, something that basically all modern authenticated key exchanges (AKEs) are designed to protect against. Concretely, the issue is that if A's longterm static private key is stolen, an attacker can impersonate anybody to A without A realizing. Let's say that Mallory, M, has stolen A's private key and wants to pretend to be B:

Message 1: M -> A

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_M^{pub})

Message 2: A -> M

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_A^{pub})

Session Key Derivation

ECDH(E_A^{priv}, E_M^{pub}) = ECDH(E_M^{priv}, E_A^{pub})

A now thinks he is talking to B, but is actually talking to M.
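To spell out why this works: neither message ever involves S_B^{priv}, so S_A^{priv} alone yields the static-static key that protects both directions. A toy sketch, under the same assumptions and hypothetical names as above:

```c
#include <sodium.h>

/* Toy illustration only (hypothetical names, not toxcore code).
 * Mallory holds A's stolen static secret S_A_priv and B's public key
 * S_B_pub (which is public anyway); S_B_priv is never needed. */

/* Step 1: forge "message 1 from B". The box is keyed by
 * ECDH(S_A_priv, S_B_pub), which Mallory can compute directly. */
static int mallory_forge_message1(unsigned char msg1[crypto_box_NONCEBYTES +
                                                     crypto_box_MACBYTES +
                                                     crypto_box_PUBLICKEYBYTES],
                                  unsigned char E_M_priv[crypto_box_SECRETKEYBYTES],
                                  const unsigned char S_A_priv[crypto_box_SECRETKEYBYTES],
                                  const unsigned char S_B_pub[crypto_box_PUBLICKEYBYTES])
{
    unsigned char E_M_pub[crypto_box_PUBLICKEYBYTES];
    crypto_box_keypair(E_M_pub, E_M_priv);
    randombytes_buf(msg1, crypto_box_NONCEBYTES);
    return crypto_box_easy(msg1 + crypto_box_NONCEBYTES, E_M_pub,
                           crypto_box_PUBLICKEYBYTES, msg1,
                           S_B_pub, S_A_priv);
}

/* Step 2: A replies with message 2 under the very same static-static
 * key. Mallory opens it to learn E_A_pub and derives the session key. */
static int mallory_finish(unsigned char session_key[crypto_scalarmult_BYTES],
                          const unsigned char msg2[crypto_box_NONCEBYTES +
                                                   crypto_box_MACBYTES +
                                                   crypto_box_PUBLICKEYBYTES],
                          const unsigned char E_M_priv[crypto_box_SECRETKEYBYTES],
                          const unsigned char S_A_priv[crypto_box_SECRETKEYBYTES],
                          const unsigned char S_B_pub[crypto_box_PUBLICKEYBYTES])
{
    unsigned char E_A_pub[crypto_box_PUBLICKEYBYTES];
    if (crypto_box_open_easy(E_A_pub, msg2 + crypto_box_NONCEBYTES,
                             crypto_box_MACBYTES + crypto_box_PUBLICKEYBYTES,
                             msg2, S_B_pub, S_A_priv) != 0) {
        return -1;
    }
    /* A computes the same value as ECDH(E_A_priv, E_M_pub) and
     * believes the key is shared with B. */
    return crypto_scalarmult(session_key, E_M_priv, E_A_pub);
}
```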

Perhaps Tox doesn't care about this, or about many of the threat models that modern AKEs are designed to protect against, in which case, probably it's fine to continue using your homebrewed crypto. But if you actually desire some kind of high assurance security, I strongly recommend not building your own protocols and instead use something designed by an educated expert, such as Noise.

This is just what immediately leaped out at me after a few short minutes of review. I haven't even begun to look at key derivation and other interesting aspects (are you guys really just using raw ECDH results as keys?).

Again, apologies if this doesn't actually represent the handshake you're using; I'm not 100% certain. But in case it does, then let this be a wake-up call to developers not to roll your own crypto, as well as a wake-up call to users not to rely on crypto software written by non-experts.

Hi Jason, thanks for the report. We are aware of all three issues you've mentioned, but it's great to have them written down. I'll explain a bit of background about what we're doing here, and the reasons for why issues like this have not been addressed.

We started the TokTok project about a year ago with a (now slightly outdated) plan. We inherited toxcore and the protocol it implements from the Tox project. We're now in some mix of phase 1 and 2, where we slowly improve the code while keeping the protocol exactly the same, with all its flaws and shortcomings. You've described one of them, but there are others. We should be more explicit about this on the website (I have filed an issue for this just now).

Initially, the plan was for us to not touch toxcore at all, and instead rewrite the specification, which does contain all the information we need, just not in an obvious way. That plan relied on others working on toxcore. Since nobody would take on the toxcore part, we had to take it on ourselves, which is the main reason we're not as far along in the plan as we had initially hoped.

The new plan is roughly:

  1. Improve toxcore code base, not making any protocol changes, with focus on testability.
  2. Implement a formal model of the protocol and run equivalence tests between it and c-toxcore. This part goes together with improving the spec, since the model is the formal version of the textual spec. Up to this point, we actively ignore any design flaws and focus purely on ensuring that the implementation matches the specification.
  3. Publish a threat model. Implement attacks on network, random users, and specific users. Still not changing the protocol.
  4. Redesign the protocol and make a single cutover from old protocol to the new one.

We do have crypto experts on board, but they are very much closing their eyes to the issues most of the time. I might have more to say about this, but not in public. I'm happy to discuss in private (IRC/email/ricochet) if you're interested.

I think the main action we can take related to this particular issue right now is to implement the attack. This was supposed to happen in step 3, but I don't see good reasons to keep it that far in the future. Perhaps it's a good time to publish all known attacks and their implications somewhere.

Hi @iphydf,

Thanks for your response. So, it sounds like you're aware that this is an issue and confirm that indeed the handshake follows this construction and is therefore vulnerable to KCI.

In that case, I strongly recommend that you put a large red disclaimer on the Tox website and in all applications indicating to users that Tox is not secure. As is, the security assurances made on the website, marketing, and in-app GUI are dangerous.

Hi,

It seems either someone micromanaged too much or you guys got the workflow figured out entirely wrong.

Initially, the plan was for us to not touch toxcore at all, and instead rewrite the specification, which does contain all the information we need, just not in an obvious way. That plan relied on others working on toxcore. Since nobody would take on the toxcore part, we had to take it on ourselves, which is the main reason we're not as far along in the plan as we had initially hoped.

Upkeep of the core and porting is more important than fixing fundamental security flaws in the protocol itself, which, apparently, is live and used by people? This does not make sense to me.

The new plan is roughly:

  1. Improve toxcore code base, not making any protocol changes, with focus on testability.
  2. Implement a formal model of the protocol and run equivalence tests between it and c-toxcore. This part goes together with improving the spec, since the model is the formal version of the textual spec. Up to this point, we actively ignore any design flaws and focus purely on ensuring that the implementation matches the specification.
  3. Publish a threat model. Implement attacks on network, random users, and specific users. Still not changing the protocol.
  4. Redesign the protocol and make a single cutover from old protocol to the new one.

Usually I'd start with a threat model, so you can think about what/whom you want to defend against/protect, which attack vectors are relevant, etc. A formal model sounds nice, but having a rough idea of how the protocol should look first is maybe a better entry point. Modeling, testing, etc. should be done once you have a rough impression of what you're actually working on.

Sorry, I'm just very confused by this response. Marking the project "experimental" after the fact is also problematic as you already have a user-base you need to care about (which of course means upkeep of your core, but first of all you want to supply them with strong security, as this is the point of the whole project, I take it? Your website says so.).

Now there are two discussions in this thread.

Roadmap/workflow

@azet here is a thought process:

  • We want Tox to be secure for a well-specified and published definition of secure (i.e. threat model).
  • We have a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C).
  • Thus, any change we make has the potential to make Tox less secure, running counter to our goal.
  • We could throw away all the code and rewrite using a different protocol with different security properties, but it would take a while.
  • We are working with very tight resources: a few volunteers with limited time.
  • It would be very hard to motivate those few people to drop a working product and write a whole new one. I've actually tried, but there is no audience for such plans.
  • It seems to me that the route we're taking is one that allows us to reach the goal with the starting point we inherited.

I would be quite interested in your thoughts around this, and perhaps we can steer in a different direction that's better for the project.

Security properties

First I should note the obvious, which is that exclaiming "X is not secure" is as useless a thing as saying "X is secure". As @zx2c4 correctly said, it depends on the threat model. There are very few ways to make information transmission secure to every possible known and unknown attack (and then a crowbar to the wrist can break that security as well).

Regarding the particular issue:

  • KCI depends on getting a user's secret key. If your secret key is compromised, you have several things to worry about; KCI is only one of them.
  • Preventing KCI in the current protocol is possible, but would break deniability in the simple case (see the sketch below).
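To illustrate that simple case: authenticating the ephemeral key with a long-term signature stops KCI, but the signature is transferable evidence of participation, which is exactly the deniability cost. A hypothetical sketch with libsodium (Tox identity keys are X25519, so a separate Ed25519 signing key is assumed here; this is an illustration, not a proposal):

```c
#include <sodium.h>

/* Hypothetical sketch of the "simple case": sign your ephemeral public
 * key with a long-term Ed25519 key. An attacker who holds only the
 * *receiver's* secret key cannot forge this, so the KCI impersonation
 * fails -- but the signature is proof, showable to third parties, that
 * the signer took part in the handshake. That is the deniability cost. */
static int sign_ephemeral(unsigned char sig[crypto_sign_BYTES],
                          const unsigned char eph_pub[crypto_box_PUBLICKEYBYTES],
                          const unsigned char longterm_sign_sk[crypto_sign_SECRETKEYBYTES])
{
    return crypto_sign_detached(sig, NULL, eph_pub,
                                crypto_box_PUBLICKEYBYTES, longterm_sign_sk);
}

static int check_ephemeral(const unsigned char sig[crypto_sign_BYTES],
                           const unsigned char eph_pub[crypto_box_PUBLICKEYBYTES],
                           const unsigned char longterm_sign_pk[crypto_sign_PUBLICKEYBYTES])
{
    /* Returns 0 only if the ephemeral key really was signed by the
     * peer's long-term signing key. */
    return crypto_sign_verify_detached(sig, eph_pub,
                                       crypto_box_PUBLICKEYBYTES,
                                       longterm_sign_pk);
}
```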

Regarding the general issue of "oh my god tox is not secure don't use it": this is slightly overreacting to the actual issues. As said, there are a number of possible attacks on individuals or on the network, but if secret keys remain secret, none of those attacks can compromise message content.

Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification. This is the point we are currently working on: improving the code and at the same time improving our understanding of it, so that we can make large scale changes in a safe way.

@zx2c4 can you point at the part of the Noise spec that explains how deniability is achieved?

Regarding the general issue of "oh my god tox is not secure don't use it": this is slightly overreacting to the actual issues.

I think when your homebrewed crypto protocol falls to basic crypto 101 vulnerabilities that modern AKEs are explicitly designed to prevent, it's time to pin up the red banners telling people not to use your stuff.

And to put this in context - this is what I found after a few minutes of scrolling. Judging by your replies, I'm a bit frightened to look in more depth...

I agree that we should tell users about the particular security guarantees Tox does and does not provide. We will add this to the website.

I would be interested in discussing further action if you're willing to talk. I would also be interested in discussing the implications of your findings if you're interested in looking more in depth and sharing what you think of it.

By the way, do you consider OTR secure or should they put up a red banner as well? What about the SIGMA protocol? Both these protocols provide a different set of security properties. The left and right set differences are non-empty.

For discussion of the current protocol I would like to ask you to direct questions at @irungentoo, who created the design and implementation of this protocol.

Can you point at the part of the Noise spec that explains how deniability is achieved? Also, can you point me to the parts of the code that you reviewed and whose logic you found to be of concern?

You might benefit from a bit of humility before comparing your protocol to OTR and SIGMA, both of which were groundbreaking works created by experts, as opposed to a slapdash protocol that has neither a specification for any coherent evaluation of security properties nor a sturdy codebase.

I'm sorry I made it sound like I'm comparing us to them. I was asking about your opinion regarding these protocols, which both provide and lack certain security properties. I am still interested in your evaluation of the importance of each of their security properties, especially wrt. a similar lack or presence of these properties in Tox.

I'm also sorry to learn that a discussion I was hoping to be respectful and constructive has so quickly degenerated. I am sorry for the slightly snarky comment about those other protocols and red banners. I hope we can go back to where we started: a constructive discussion.

As said, we are quite aware of the situation we have inherited, and we are actively working on improving it. Your help in this endeavour would be greatly appreciated.

For anyone reading this without a crypto background: the assertions being made are the same as saying the lock on your house is broken because if someone steals your keys they can unlock your door.

I agree with iphy on this, the reaction and outrage don't match the reality of the issue. All of it sounds like concern trolling to me.

the lock on your house is broken because if someone steals your keys they can unlock your door.

That's not a great analogy. KCI is a bit more subtle than that.

All of it sounds like concern trolling to me.

No, not really. As I wrote in the original post: if you don't actually care about having a secure protocol that meets modern expectations of an AKE, by all means defend and justify your homemade situation. However, if you're interested in gaining the trust of users and confirmation from cryptographers, you'd benefit immensely from not trying to tout the current situation as secure, but rather put up a large scary warning indicating to your users that you're working on it but that you're not there yet.

I was asking about your opinion regarding these protocols, which both provide and lack certain security properties. I am still interested in your evaluation of the importance of each of their security properties, especially wrt. a similar lack or presence of these properties in Tox. I hope we can go back to where we started: a constructive discussion. As said, we are quite aware of the situation we have inherited, and we are actively working on improving it. Your help in this endeavour would be greatly appreciated.

I think the best place to design a new crypto protocol is probably not a Github issue report. Take some time, write it out, work out the details, talk to your professors, etc. Alternatively, spend time reading existing papers and evaluating if they fit what you want and whether they have an implementation ready built for you to use. Message boards are a pretty bad place for ad-hoc design of something so critical.


Anyway, I'll duck out now for a little while to see how this evolves. I've done my part. There's a vuln found in 5 minutes of review. There's homebrewed crypto. There's "a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C)" (@iphydf). Now it's up to you how you want to handle this. Treat it as serious and worthy of a red "do not use" banner, if you'd like to give the impression that you care about the same standards of security that the world of cryptographers does. Carry on as usual if you simply want your thing to continue to be casually used by people who don't care that much and are okay with using naive constructions.

Also, if you're personally worried about someone stealing your key without you knowing: as long as your friends aren't rapidly disconnecting and reconnecting, no one else has your key.

put up a large scary warning indicating to your users that you're working on it but that you're not there yet.

You're right, a totally rational and nuanced response to an attack that would quickly become discovered.

Because you don't seem interested in discussing anything other than a low-risk attack in hyperbolic style, I'm going to make sure this thread doesn't devolve into what it has already started to. Anyone who would like a deeper discussion can join #toktok on freenode.


I'm a cryptographer. I disagree with the lock analogy assertion that it's trivial or obvious. Being able to impersonate someone whose secrets have been compromised is, indeed, obvious; KCI works in the other direction. I don't think the other direction is obvious at all. (I agree that the tone is not one I'd use, but that's neither here nor there.)


@lvh I agree that KCI is a non-intuitive (to users, at least) issue.
I also agree with @GrayHatter that it isn't a “let's set our pants on fire and run around screaming” kind of issue, as it first requires a key compromise.

However, I would be even more interested in moving that conversation away from the name-calling and back to rational, constructive discussion. That seems to be a much harder problem, unfortunately :(


@nbraud OK, does that mean you agree with the suggested resolution in the form of documenting the known attacks, including the handshake not being secure against KCI?


@GrayHatter When you say "an attack that would quickly become discovered", is that because you're asserting that adversaries can't compromise keys without you finding out, or is there some other subtlety I'm missing?

I'm still interested in having this discussion. KCI is an interesting and important topic, and I'd like to know more about @zx2c4's and @lvh's thoughts here.

I would also like to give @irungentoo a chance to weigh in on the concrete issue. In my experience, Tox solves some security issues in a non-obvious way. I have looked around the code and specification several times and found a number of issues, most of which I have later found out were somehow mitigated by non-obvious means. It is quite possible that the same is true in this case. I think it's reasonable to wait for the person who knows the protocol best to provide insight.

@lvh: we should and will definitely be documenting known attacks.


@zx2c4 As mentioned by others, the plan is to provide users with a single-cutover switch to a better protocol, with a documented threat model & security claims.

The current “slapdash protocol”, along with its lack of an actual spec and of a robust implementation, is what we inherited from @irungentoo. As @iphydf mentioned, the goal is to first gain an understanding of where we stand and develop a robust codebase, so as to be able to provide a sane upgrade path.

Of course, part of that is documenting the current protocol's failings, in particular which security properties it fails to provide, under which threat models, and why they are relevant to users.
I don't believe, however, that putting up a big fat warning that “everything is broken” is accurate or helpful to users.


@lvh Sure, see above answer. You were just a bit too fast ;-)

PS: I should have specified that I'm not the most active contributor here, in part due to issues outside of my control, so don't take my opinion as representative of what other TokTok contributors think.


Just an aside:

The best thing to do in situations like this is to make a clean break: start over with a secure protocol (in this case, an AKE) rather than trying to smoothly transition users towards a secure protocol and introducing downgrade attacks along the way.

@GrayHatter When you say "an attack that would quickly become discovered", is that because you're asserting that adversaries can't compromise keys without you finding out, or is there some other subtlety I'm missing?

Because of how the protocol works, if someone else tried to impersonate you, your friends would rapidly connect and disconnect from you. You can see what this would look like in the client by running the same tox "profile" on two systems at the same time.

@GrayHatter: The issue of KCI is not "I stole your key, now I can pretend to be you" - it's "I stole your key; now whenever you try to talk to someone, I can gaslight you instead, pretending to be them"

This is best combined with any of the MANY techniques for network-level interception, such that you never even have a chance to talk to anyone but the attacker

(This then trivially bootstraps to a fully-general MITM).

I would like to mention that "Noise" could also be called "homebrewed crypto", in that someone actually sat down and wrote it. It is also Yet Another Encrypted Messaging Protocol, as if there have never been enough of them (OTR, insecure Axolotl gimmick).


@kebolio I don't think that's a statement you could get cryptographers to support (certainly not me). Noise is peer-reviewed, and explicitly highlights many issues and how it addresses them, including specifically AKE KCI.


@eternaleye As far as I get @GrayHatter's point, the user being impersonated (Alice) would see the user whose key was compromised (Bob) rapidly connect and disconnect while the attack is ongoing.
Of course, running the attack while Alice is offline likely sidesteps the issue.

@nbraud That would be true if it wasn't trivial to deny the connection to Alice using network-level techniques.


@kebolio The difference is mainly in who designed it, what is the supporting documentation (threat model, security claims, proofs, ...) and in who reviewed it.


Regarding threat model: #210


@eternaleye Yes, that was implicit in “Alice being offline”. AFAIU, a DHT attack could achieve that, for instance.

@eternaleye

@GrayHatter: The issue of KCI is not "I stole your key, now I can pretend to be you" - it's "I stole your key; now whenever you try to talk to someone, I can gaslight you instead, pretending to be them"

This is the first time I've heard of any attack being used in this way. And my knowledge of how ECDH works would lead me to believe that isn't possible. Do you have anything that describes how this attack would work?

@lvh @nbraud I'm not in on this crypto secret club gimmick. Are you implying that the authors of the Tox protocol are inherently lesser cryptographers (notwithstanding the protocol itself and this vulnerability)? On the point of peer review, Tox has been around for over 3 years, and I am surprised this wasn't spotted beforehand if it turns out to be so fundamentally compromised.

@lvh your crypto book is cool btw


@kebolio It has nothing to do with a “secret club”. A few TokTok contributors have the level of expertise required, but as was explained earlier, we first needed to document the protocol (i.e. write a specification) and document its goals (i.e. security claims and threat model) before that made sense.
Also, it's not “wasn't spotted beforehand”: we are aware of some issues in the current protocol, and should definitely communicate better about it.

( Full disclaimer: I'm not a cryptographer, despite living in Vincent Rijmen's former office :P )


@kebolio Peer review does not require being a part of a club. The cryptographers in question are just people who have studied protocols, this one and many like it. I try not to make judgements about people and instead try to make objective statements about protocols: Noise explicitly deals with KCI and Tox does not. Re: your surprise, I don't know what to say -- the vuln has been found, and resisting KCI is exactly what modern AKEs (designed in the last decade, like MQV, A0) are built for :)

Glad you like the book. I guess I should write something about KCI in it -- I think I have something about identity misbinding already :)

@GrayHatter Did you see the earlier example? The problem is that there's no strong binding between ephemeral and long-term keys. This isn't about ECDHE; the decisional Diffie-Hellman assumption isn't violated, nor are any discrete logs computed.

@kebolio the attack described is only possible if you gain access to a user's secret key (let's call that user A). That in itself always opens the door to impersonation as A, but not to A. The KCI attack described allows impersonation as anyone to A. KCI is normally talked about in relation to TLS, where getting access to the secret key of a TLS client's certificate allows an attacker to impersonate any server to that client. This applies to protocols using a single round of DH on long-term keys, as described in the report. To determine whether the protocol actually behaves this way, or whether Tox mitigates KCI in a different way, we should either dig into the code or wait for @irungentoo to report in.

@iphydf

I would be quite interested in your thoughts around this, and perhaps we can steer in a different direction that's better for the project.

You're not going to like it.

I've been involved in a lot of (open-source) projects, among them security and some crypto related ones, and in standards processes. I've been proven wrong. I've let projects go because there were better solutions out there, and at a certain point one has to face the fact that one's just wasting cycles on something that's not worth the effort. It's hard if you're already engaged in something and if it's your toy. But this thread clearly outlines that you do not have sufficient resources to provide what you're aiming (and advertising) for currently.

I'd put the following quote (from your post) on the webpage, I think that sums up the state of the project as far as I can tell very accurately:

Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification.

After June of 2013 a lot of people were eager to improve security on the internet, and a lot of new messaging applications sprang up, some commercial, some open-source. Most of them turned out to be insecure in one way or another. A few years later we can say that the clear winners are Signal and the Noise protocol, in terms of novelty, security and mass deployment.

I'd recommend taking the project/network offline for the time being, until you have a proper threat model sorted out and have discussed the protocol design you want to implement and deploy -- in depth -- with cryptographers and engineers. This takes a lot of time; e.g. major networking companies hire senior/distinguished engineers just to discuss such issues in the IETF, W3C and similar organizations. It's a painstaking process, and that's why protocols like OTR and TLS are still being worked on (and, as some have pointed out, still have issues).

If you just want to do some good, there're a lot of projects that have been peer-reviewed and still need a lot of engineering effort, thought and love (Certbot/Let's Encrypt, LEAP, Tor, Signal, just to name a few).

@kebolio

would like to mention that "Noise" could also be be called "homebrewed crypto" in that someone has actually sat down and written it. It is also Yet Another Encrypted Messaging Protocol, like there has never been enough of them (OTR, insecure Axolotl gimmick)

It is the most advanced and secure instant messaging protocol to date. This is why the designers got an award in front of a conference full of seasoned cryptographers and security engineers a week back for improving Real World Cryptography. Are you serious?

@zetok

Regarding threat model: #210

This is not a threat model. I don't even understand most of the points in the table; it's not the language used in security engineering or information security proper. It's confusing and does not tell a user or an engineer anything about the security properties of the protocol in its current state.

Compare with the threat model written for Certificate Transparency for example: https://tools.ietf.org/html/draft-ietf-trans-threat-analysis-10

@azet I think that's mostly reasonable. Thanks for that. We can't really take the network offline, because we don't control it (that's one of the major points of Tox: nobody controls the network). What we can do is be honest about the state of the project and our roadmap. We've started working on that only quite recently within toktok (improving documentation and web presence). Does that seem reasonable to you?

azet: perhaps help find some crypto expert to correct the issue instead of recommending a takedown? If you have been involved in a lot of projects then maybe you have contacts with some programming experts?

But this thread clearly outlines that you do not have sufficient resources to provide what you're aiming (and advertising) for currently.

This is not true; we have just started writing a spec and are making good progress. Now that we know about this issue, I am sure it will be addressed soon. There are a lot of people involved in toktok and we are working on making it more and more stable. We cannot just have everything right now; it is always about iteration and improvement, and this issue is something we can take on to improve.

@iphydf

I think that's mostly reasonable. Thanks for that. We can't really take the network offline, because we don't control it (that's one of the major points of Tox: nobody controls the network). What we can do is be honest about the state of the project and our roadmap. We've started working on that only quite recently within toktok (improving documentation and web presence). Does that seem reasonable to you?

There're a few ways to take such a project down; a simple one is not to provide working binaries to end-users anymore and make the source-code on GitHub accessible only for experts that want to play with or improve upon Tox properly (i.e. don't automate builds or something like that) - that's harsh, I know :)

Being honest about the current state of affairs is certainly the best way to approach your user-base.

@fcore117

azet: perhaps help find some crypto expert to correct the issue instead of recommending a takedown? If you have been involved in a lot of projects then maybe you have contacts with some programming experts?

It's not only a single issue; the project as a whole seems to be lacking manpower, qualified security people and maintenance. I'm sorry, but I cannot recommend this project in its current state to anyone serious who would want to work on it. For similar projects in the commercial space there're consultants that take a lot of money to fix everything, but this isn't an option as you probably don't have funding to hire people to work on F/OSS currently. Also, I'd rather see qualified people working on and improving existing technologies that urgently need more eyes on them, as they're vital to communications world-wide.

@cebe

This is not true; we have just started writing a spec and are making good progress. Now that we know about this issue, I am sure it will be addressed soon. There are a lot of people involved in toktok and we are working on making it more and more stable. We cannot just have everything right now; it is always about iteration and improvement, and this issue is something we can take on to improve.

You're currently shipping an instant messaging application advertised as secure and end-to-end encrypted (most users won't know the difference), where a bored hacker found a crypto vulnerability by simply scrolling over your code. You cannot expect the community to do code audits for you in the current state, but I'd assume that if a few pairs of trained eyes were on the code, the whole security of the system would fall apart within hours. There're certainly some good choices in terms of the primitives used in there (NaCl for example) - just the way they are used is wrong and obviously unaudited. Also, the way a lot of the code is written seems prone to classical vulnerabilities.

azet: If there were no Tox, then IM app choices would still be much worse, because some IM apps do not have encryption at all and Skype encryption is practically Swiss cheese. Tox's second good thing is that even in its current alpha state it is very easy to use, without messing with a million settings or server accounts.

When Tox gets out of its alpha state, I am sure the day will come when someone decides to pay for a security audit and/or do one for free.


@fcore117 I'm not sure that comparing with terrible options is a good point to make.

@azet

There're a few ways to take such a project down; a simple one is not to provide working binaries to end-users anymore

Anyone can make and distribute binaries for the project. The TokTok org has no control over the Tox Project binaries, or any others.

and make the source-code on GitHub accessible only for experts that want to play with or improve upon Tox properly (i.e. don't automate builds or something like that) - that's harsh, I know :)

How does this prevent forks? TokTok itself is just a fork. "Shutting down" TokTok would just prompt yet another fork, or force client developers to use a stale fork. You can rest assured that all the client developers (myself included) aren't going to throw in the towel because of a minor and fixable security flaw.

Being honest about the current state of affairs is certainly the best way to approach your user-base.

You mean like this? https://tox.chat/download.html#warning

One of the underlying design principles of Tox was to make it impervious to centralized control. This applies both to the network itself, and the development process. No person or organization has control over the network, nor development. In light of this fact, your options are:

  1. Help
  2. Don't help

Attempts to discourage or demoralize the development team from furthering the project have been ongoing since day one, and in my experience it's a counterproductive strategy.

You are fucked if you get your key stolen. There are so many more fun things you can do if you steal someone's key that I simply didn't bother trying to handle that case because it would not provide any actual security.

Every once in a while a few tox devs get together to play https://github.com/OpenRA/OpenRA/ and while I don't mean to derail this thread too much, but @lvh @azet @kebolio @eternaleye @paragonie-scott would you like to play a few games with us? Most of the games start from #utox on freenode http://webchat.freenode.net/?channels=#utox and anyone interested in tox (hate it or love it) is invited.

If I didn't highlight you it's just because you're already a tox contributor, and you're already always welcome to join us!

I am a user of tox. I am not a coder. For a long time I was a Skype user - for texting and calling, no video. After it was sold to Microsoft I was sure its usability and privacy would deteriorate, but I never found something to replace it. I was very happy to find a solution some weeks ago in tox - which is easy to install and use on different operating systems. I don't care about a 0.001% chance of being hacked, and this is what all users do. If you are Julian Assange and need a super secure vehicle, then you will find one. For normal users that are fed up with the commercialisation of Skype, WhatsApp and others, this application is totally sufficient! I am highly disturbed by the people suggesting to shut the project down, or even making big remarks about "insecurity". This is a very egocentric point of view. It feels like you are trying to sabotage this awesome project. If you're so smart that you can review the code in five minutes and find big problems, go solve them by actually contributing code!

@zx2c4 The Tox crypto isn't exactly homebrewed; it's NaCl with the addition of session-level forward secrecy through the use of ephemeral keys. The key derivation, encryption, message authentication etc. are all from NaCl. As far as I can tell, the KCI vulnerability is a fundamental and probably unavoidable property of NaCl's authenticated encryption; possession of one party's private key is by design sufficient to forge messages from either party to the other. This is sort-of documented at https://nacl.cr.yp.to/box.html in the context of repudiability, but the KCI issue isn't mentioned and the proposed solution provides non-repudiation, which is undesirable here. If this is really serious enough that it justifies a big red disclaimer, then presumably Daniel J. Bernstein should add a similar one to the documentation for crypto_box, especially given how widely it is recommended as an easy, secure and misuse-resistant crypto solution.
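A small, illustrative demonstration of that property (not toxcore code): either party's secret key produces the same crypto_box precomputed key, so whoever holds A's secret key can forge boxes "from B to A" just as easily as boxes from A.

```c
#include <sodium.h>
#include <stdio.h>
#include <string.h>

/* crypto_box derives one shared key per (sender, receiver) key pair,
 * and either side's secret key yields it. */
int main(void)
{
    unsigned char a_pk[crypto_box_PUBLICKEYBYTES], a_sk[crypto_box_SECRETKEYBYTES];
    unsigned char b_pk[crypto_box_PUBLICKEYBYTES], b_sk[crypto_box_SECRETKEYBYTES];
    unsigned char k_a[crypto_box_BEFORENMBYTES], k_b[crypto_box_BEFORENMBYTES];

    if (sodium_init() < 0) return 1;
    crypto_box_keypair(a_pk, a_sk);
    crypto_box_keypair(b_pk, b_sk);

    crypto_box_beforenm(k_a, b_pk, a_sk); /* what A -- or whoever stole a_sk -- computes */
    crypto_box_beforenm(k_b, a_pk, b_sk); /* what B computes */

    printf("same key: %s\n", memcmp(k_a, k_b, sizeof k_a) == 0 ? "yes" : "no");
    return 0;
}
```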


Protocols aren't primitives. Secure primitives certainly do not imply secure
protocols (Example: AES is a secure block cipher, but AES-ECB is clearly not a
secure way to encrypt messages). Secure protocols mostly imply secure
primitives. (Counterexample: a protocol using HMAC-MD5 doesn't have forgery
issues even though MD5 is not a secure hash function.)

There are several levels of "homebrew" or "roll-your-own" cryptography:

  • Designing your own block ciphers or hash functions.
  • Designing your own compositions of primitives, like AE or MAC.
  • Designing your own protocols, like TLS or Noise.

This vulnerability exists on that third level. As a consequence, this isn't a
repudiation of NaCl or libsodium. They're excellent libraries. Curve25519 is a
DH primitive, and there's no DH vulnerability here. The problem is that it's not
an AKE, and that's what you're using it as. The docs clearly enumerate what it
does and does not do:

Security model

crypto_scalarmult is designed to be strong as a component of various
well-known "hashed Diffie–Hellman" applications. In particular, it is
designed to make the "computational Diffie–Hellman" problem (CDH) difficult
with respect to the standard base.

crypto_scalarmult is also designed to make CDH difficult with respect to
other nontrivial bases. In particular, if a represented group element has
small order, then it is annihilated by all represented scalars. This feature
allows protocols to avoid validating membership in the subgroup generated by
the standard base.

NaCl does not make any promises regarding the "decisional Diffie–Hellman"
problem (DDH), the "static Diffie–Hellman" problem (SDH), etc. Users are
responsible for hashing group elements.

For example, this clearly states that you're responsible for hashing group
elements, which ostensibly the Tox AKE does not do. If you build an AKE, there
are other documented aspects of Curve25519 to consider; for example, some AKE
protocols require contributory behavior, which means that in Curve25519 you're
(exceptionally) required to consider representations of points of low order (see
https://cr.yp.to/ecdh.html).
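For concreteness, "hashing group elements" could look something like this sketch (illustrative only; the exact transcript layout is an assumption on my part, not a prescription):

```c
#include <sodium.h>

/* Sketch: derive a symmetric key from a raw X25519 shared secret by
 * hashing it together with both public keys, per the NaCl guidance
 * that users are responsible for hashing group elements. */
static int hashed_dh(unsigned char key[crypto_generichash_BYTES],
                     const unsigned char my_sk[crypto_box_SECRETKEYBYTES],
                     const unsigned char their_pk[crypto_box_PUBLICKEYBYTES],
                     const unsigned char initiator_pk[crypto_box_PUBLICKEYBYTES],
                     const unsigned char responder_pk[crypto_box_PUBLICKEYBYTES])
{
    unsigned char raw[crypto_scalarmult_BYTES];
    crypto_generichash_state st;

    if (crypto_scalarmult(raw, my_sk, their_pk) != 0) {
        return -1;
    }
    /* Bind the derived key to the DH transcript, not just the raw
     * group element; both sides hash the public keys in the same
     * role order, so they agree on the result. */
    crypto_generichash_init(&st, NULL, 0, crypto_generichash_BYTES);
    crypto_generichash_update(&st, raw, sizeof raw);
    crypto_generichash_update(&st, initiator_pk, crypto_box_PUBLICKEYBYTES);
    crypto_generichash_update(&st, responder_pk, crypto_box_PUBLICKEYBYTES);
    crypto_generichash_final(&st, key, crypto_generichash_BYTES);
    return 0;
}
```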

The claim that libsodium doesn't give you the tools to produce a secure AKE is
incorrect. Firstly, you can do a traditional signed key exchange. Secondly,
Noise is a proof by construction: there are implementations of the Noise
protocol available on the site, and you'll see that it defines a KCI-secure AKE
that you can implement using nothing but NaCl/libsodium.

Finally, as much as I try to draw this conversation away from individuals and
towards technical discussion, I hope you'll find that I've tried pretty hard
both here and in general to provide constructive contributions, and trying to
educate those who'll listen. And, I tell people to consult a cryptographer,
although you could do a lot worse than NaCl as a set of solid primitives :)

If a chainsaw does a bad job of cutting an apple, it's not a bad chainsaw.

I'll respond to a few small obvious things I've seen since I left the thread alone yesterday.

@irungentoo's hubris / "KCI ain't that bad"

You are fucked if you get your key stolen. There are so many more fun things you can do if you steal someone's key that I simply didn't bother trying to handle that case because it would not provide any actual security.

This isn't as true for modern AKEs, which give pretty nice security properties that your handrolled naiveté just doesn't account for. How so, you ask?

  • Compromise one key, A, with Tox AKE --> mount an active man in the middle attack on an infinite quantity of keypairs (A, {everybody else}).
  • Compromise one key, A, with modern mutual AKE --> man in the middle attack not feasible.
  • Compromise two keys, A & B, with modern mutual AKE --> man in the middle attack feasible between one keypair (A, B).

So, if you quantify it this way, in terms of "number of full man in the middle attacks feasible after compromising N keys", a modern mutual AKE is infinity times better than the Tox AKE.

If you're serious about doing things right, you wouldn't hubristically categorize things as "any actual security" so hastily, when in fact there's a massive body of research and human accomplishment that's preceded your novice crypto. Open your eyes. Read some papers. Humble yourself while looking at your species, in awe of the wonderful cryptographic techniques created before you. So you made a mistake. We all do. Time to educate yourself and improve now.

"Since we use NaCl, we must be safe, unless DJB is an idiot!"

Making a protocol is different from using safe primitives. There are dragons at every step in the process. NaCl came with an implementation of CurveCP -- a protocol -- which exists accessibly as libchloride. Are you using libchloride? No you're not. So not only do you fail to use NaCl in a safe way, but you fail to use the protocol that NaCl says is safe!

"It's still more safe than cleartext or Skype"

Maybe true (if you ignore the C implementation vulnerabilities in toxcore), but there are easily accessible things more secure than Tox that actually use modern cryptography, rather than Tox's handrolled, non-peer-reviewed, no-security-model drivel. So it seems obvious to recommend that you use those actually secure protocols instead. When folks expect the word "secure" to indicate what cryptographers consider acceptable, using "secure" for something like Tox is disingenuous and even potentially dangerous.

"I'm confused and don't understand KCI; my analogies are incorrect!"

Probably this discussion is not for you, then. Also, you probably shouldn't be developing cryptographic software in that case.

"Cryptographers are a secret Illuminati club"

No they're not. They're just people who took the time to study and actively engage in peer review and open constructive criticism.

"We'll never quit! Help us, or get lost! You can't break our spirits!"

It's not really about that. By all means, continue to develop software and expand your experience and education, in private. But publicly releasing and promoting software with known fundamental insecurities is irresponsible and reckless.

"But we can't just take down the network because it's a DHT"

You can remove code repositories, pre-built binaries, websites (contacting the owners of tox.{im,chat}), etc. You can also add big red disclaimers "DO NOT USE - EXPERIMENTAL & INSECURE" to every medium you have.

@iphydf's admissions

We have a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C).
Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification.

I appreciate the honesty here. It's also a pretty good indication that you should start from scratch. Determine what security goals you want; then develop software around them. The fact that nobody understands the code, the crypto, or the protocol all but admits complete failure. You can't guarantee something if you don't know what you're guaranteeing, or the mechanism by which this guarantee is brought about. You admitted this yourself -- that because you don't understand Tox, "any change we make has the potential to make Tox less secure, running counter to our goal."

Heeding advice

You've got cryptographers and security experts telling you to shut down and take a more conservative approach. Your reaction has been one of pride and stubbornness. Yes, you've worked very hard on this and it's your baby so you want to keep it. But responsibility is something important. Providing software that does not provide adequate security under the label of "secure software" is dishonest and irresponsible. The webpage touts Tox as "VERY secure", which it clearly is not.

This doesn't have to do with sabotage or demoralization. It's about responsibility.

Hang up the red "we're only an experiment" banners, or abandon ship.


Hang up the red "we're only an experiment" banners, or abandon ship.

Here's an image that @Bascule created for this exact use case:

Danger: Experimental

Here's the Markdown code to embed in READMEs, etc.

![Danger: Experimental](https://camo.githubusercontent.com/275bc882f21b154b5537b9c123a171a30de9e6aa/68747470733a2f2f7261772e6769746875622e636f6d2f63727970746f7370686572652f63727970746f7370686572652f6d61737465722f696d616765732f6578706572696d656e74616c2e706e67)

To be clear: There is no shame in your project being experimental. One of mine proudly emblazons itself as an experiment until such a time that it can be audited by a team of penetration testers and cryptographers.

I would suggest you take roughly this course of action:

  1. Slap up the image above.
  2. Figure out how to implement a protocol like Noise in Tox, and ask a cryptographer to review it.
  3. Develop your newer protocol based on info from step 2.
  4. When you think you're ready, ask a (ideally, different) cryptographer to review your implementation.
  5. If they give it a clean bill of health, publish their findings and ask the original cryptographer to peer-review it.
  6. If all is well, then you can call yourself secure again, until someone else finds a protocol flaw that can compromise your security goals. Hopefully it won't be an obvious or trivial one.

Every once in a while a few tox devs get together to play https://github.com/OpenRA/OpenRA/ and while I don't mean to derail this thread too much, but @lvh @azet @kebolio @eternaleye @paragonie-scott would you like to play a few games with us?

Sorry, I didn't see your message last night. I'm afraid I must decline due to other responsibilities. (I barely have the free time to play video games with my closest friends these days.)

The fact that tox is providing a fast text and voice messaging service without a (company's) server in the middle is important to users. I am mostly concerned about my data being stored with somebody else (and synchronised between clients), and not so much about the random chance that a single conversation might be hacked. By insisting on labeling this a big "danger", people actively destroy the potential of this application. I am a user with no business here; I just wanted to make clear that the fine points of cryptographic security might make up the last 10% of this. 90% is already there. If a professional cryptographer wants to code the rest, why not? :)

@zx2c4

So, if you quantify it this way, in terms of "number of full man in the middle attacks feasible after compromising N keys", a modern mutual AKE is infinity times better than the Tox AKE.

Perhaps your reasoning is flawed if it leads you to such a hyperbolic conclusion.

It's not really about that. By all means, continue to develop software and expand your experience and education, in private.

You should read my previous response more carefully.

The webpage touts Tox as "VERY secure", which it clearly is not.

Tox's security claims assume that your private key remains private. I think this is a reasonable assumption, as there is no software in the world that can be considered "VERY secure" if your private key has been compromised. There are only varying degrees of fucked, which most of us agree should be limited as best as reasonably possible.

This doesn't have to do with sabotage or demoralization. It's about responsibility.

According to your idea of responsibility, the internet in its entirety should be shut down, as known security vulnerabilities range from ARP all the way up to HTTPS. Security is not a black and white issue, and I would expect a self-proclaimed expert who is so sure of himself that he thumbs up his own posts to know this.


I have spent countless hours of my life providing free cryptographic consultancy and design, up to and including literally writing a book and then giving it away for free.

I find your suggestion that I am explaining cryptography on an ostensibly underfunded crypto project to actually be about making shit up so I can scare people into giving me money ridiculous and offensive.

@lvh thank you for that! As far as I see, @JFreegman's comment is addressed to @zx2c4, not to you.

First, @lvh, you're always welcome here. I appreciate your reasonable and rational responses, so I'm not going to respond line by line. I assume you care about crypto, and about teaching how to use it correctly. I'll just hit the broad points.

There are several levels of "homebrew" or "roll-your-own" cryptography:

Right, but would you disagree that, in security/crypto circles, it's used derogatorily to reference shit code written by someone with no idea what they're doing? So unless you're trying to imply that the original author fucked up, don't you think it becomes a bit problematic, if not outright insulting?

Also, it doesn't even apply in this case. We're not even rolling our own crypto. An argument can be made that we've created a "crypto system", but even that's a hard sell, given we're using the NaCl API as the documentation instructs.

The claim that libsodium doesn't give you the tools to produce a secure AKE is
incorrect.

I must have missed that part of the NaCl documentation, as NaCl compatibility was one of the original design goals for Toxcore. (While I'm here, I'm also going to mention again that tox.chat already warns users not to get their key stolen. If you'd like to have a separate discussion on the merits of THAT warning, we should open a new issue.) Also, let's remind everyone that @irungentoo, the original author of the codebase, was aware of the attack vector and decided not to include it as a part of the threat model.

Noise is a proof by construction: there are implementations of the Noise
protocol available on the site, and you'll see that it defines a KCI-secure AKE
that you can implement using nothing but NaCl/libsodium.

Link? The Noise stuff I saw didn't really offer ANY documentation. But then, I never looked THAT hard.

Finally, as much as I try to draw this conversation away from individuals and
towards technical discussion, I hope you'll find that I've tried pretty hard
both here and in general to provide constructive contributions, and trying to
educate those who'll listen. And, I tell people to consult a cryptographer,
although you could do a lot worse than NaCl as a set of solid primitives :)

You've been awesome, as I said at the start, you're always welcome around here. (Hopefully you'll hit up HN again and answer my pending question to you)

If a chainsaw does a bad job of cutting an apple, it's not a bad chainsaw.

Right, but if you then call up the chainsaw maker and shit all over the work they've done, they're allowed to be offended.

I believe this discussion has come to an end. We acknowledge that the issue exists and will work towards fixing it. We do welcome contributions in this direction.

@zx2c4 thank you for starting the discussion and giving the explanation in your report. I would appreciate it if you could help review the PR that adds a notice to the website about the lack of security review.
@lvh thank you for further helping create a better understanding of the issue and ways to solve it. We (toktok team) appreciate all your help and would in no way consider it an act of malice. We continue to welcome reports of security flaws.

I will say this very clearly once again: there is an avoidable security flaw in the Tox handshake. This is not something someone made up. The effect is that if your secret key is stolen, an attacker can impersonate anyone to you. We will fix this issue, most likely by adopting Noise for handshakes.

I will post one more message on this issue and then lock it. Please contact me (my github email is public and I'm usually on IRC: iphy @ freenode) if you feel this decision is inappropriate. I am keeping this issue open until it is solved.

I would appreciate if all the collaborators could stop posting on this issue as well. I'm locking the conversation now.


Two points: firstly, I was replying to @bvrules; secondly, in my example the libsodium primitives are the chainsaw. Anyway, in fairness, it's worth stating that the actual maintainers have been courteous.

I'm going to check out from these threads because I'm not particularly interested in emotional abuse from the peanut gallery, but dear maintainers: you know where to find me if you'd like some free crypto advice. I'll try to remember to answer the KCI example question with some papers if you'd like some light reading :)


I'm trying to get at the meat of this discussion. Is the following true?

With Tox, if you have your private key stolen, someone can impersonate your friends. There are protocols that make this impossible, but they require non-repudiation. The Tox specification was designed with the assumption that non-repudiation is more dangerous than impersonation.

@Halfwake

With Tox, if you have your private key stolen, someone can impersonate your friends.

This is true, someone can impersonate your friends to you.

There are protocols that make this impossible, but they require non-repudiation.

I think you're talking about deniability here. And in general you don't have to sacrifice deniability in order to be protected from this kind of potential vulnerability.

When talking about the current Tox implementation, then yes, you choose either deniability or protection against KCI, but as was discussed previously, the whole protocol should be re-designed to address this issue fundamentally.

At least in its current state people get a very easy-to-use IM app that is still more secure than Skype. If someone can upgrade the protocol quickly to be more secure, that is good, but if it takes another 4 or more years then nobody will ever see an alternative to Skype.

It is a pain that a lot of people (including me) are forced to communicate using proprietary Skype or other bloated IM apps, which are usually server-based and can easily be blocked.

nazar-pc: if you know someone who can help speed up C development within months, not years, then call them here to develop, but otherwise Tox in its current state can still save a lot of people from endless Skype slavery.

With more overall optimizations/fixes and new group chats, Tox can be a serious alternative to Skype.

All security is broken anyway when someone bugs your PC or house, and passwords will be revealed at gunpoint in the worst-case scenario.

@fcore117, this is an issue about the technical implications of, and possible solutions for, the mentioned vulnerability, not about whether Tox is an alternative to anything or not. So let's keep the discussion close to the topic if you have anything to add to the point.
Also, I'm not representing the Tox team in any way and I'm not a part of it.

If the implication of the technical problem, and of solving it by reorganising the whole code, is that no further development would take place on the most recent code, while forking to another code base would take something like 4 years to reach a similar state of working applications, then I would vote against this decision on the basis of the technical issues. Parallel development would of course be welcome.


@Halfwake the part about KCI is true; the part about the necessary trade-off is not (in general) true. The property you're describing is slightly different from "non-repudiation" (a property of signatures, which are one way to get KCI resistance); instead you want "deniability", which other protocols offer. Deniability in this case means that the receiver can authenticate the sender, but the receiver cannot convince anyone else that the actual sender must be the sender of a given message. (This is different still from "indistinguishability", which means different things in different contexts, but in this context specifically it would usually mean that a passive network observer can't tell that you're speaking $PROTOCOL.)

This is true, someone can impersonate your friends to you.

From my understanding, isn't this something the clients can solve (rather than tox-core)? e.g. a button that implements socialist millionaire authentication...

It's been years; is there any update at all on this subject?
At a time when so many people are ditching mainstream IM services, it would be extremely beneficial to tackle a problem like this.
Telegram got 25 million users in the past 3 days just from people abandoning Whatsapp because of their anti-privacy policies.
Imagine if Tox didn't have an issue like this and we could actually recommend it to our friends and family.


@horusra
Yes, there is an update to this issue.

I wrote my master's thesis on “Adopting the Noise key exchange in Tox”. You can download the full thesis here: https://pub.fh-campuswien.ac.at/obvfcwhsacc/content/titleinfo/5430137

I presented the results of my master's thesis at Remote Chaos Experience (rC3): https://media.ccc.de/v/rc3-709912-adopting_the_noise_key_exchange_in_tox
Talk slides are available here: https://pretalx.rc3.studio/media/rc3-channels-2020/submissions/PWNJYW/resources/Buchberger_ToxNoise_rC3_0zhUC7T.pdf

As @zx2c4 wrote in the initial post, the handshake is not easy to understand based on the existing spec and implementation. My thesis includes a detailed description of Tox’ handshake, including the (KCI-vulnerable) authenticated key exchange (AKE), all exchanged messages and all performed computations (see chapter 2.4). Hopefully this will be used for further security analysis.

The DHT key pairs, which are used to calculate a shared secret in the cookie phase of the Tox handshake, make an actual KCI attack on Tox's AKE more complicated than the simplified description from @zx2c4. This is because:

  • M needs to obtain the X25519 static private key S_A^{priv} from A to impersonate B to A
  • M needs to obtain the X25519 static public key from B (S_B^{pub} or B’s Tox ID) to be able to impersonate them to A
  • A and B need to be online at the same time because the attacker is not able to initiate a handshake on their own due to the usage of a shared secret based on the DHT key pairs during the cookie phase of the handshake. If it’s possible for M to spoof B’s DHT key pair, M could initiate a handshake on their own. I didn’t have time to look at the DHT module during my thesis, therefore I’m not sure if this is possible.
  • The attacker needs to have control over the network between A and B (i.e. the internet -> NSA-style) to be able to intercept and drop packets
  • The attacker would need to reimplement toxcore because it's not possible to exploit KCI by using the "normal" toxcore

Anyway, this vulnerability should be fixed. I created a PoC implementation which is based on the Noise-C library: https://github.com/goldroom/c-toxcore/tree/tb_noise_handshake_IK_noise_patch
The PoC implementation in its current state shouldn't be used in practice. Also, it's not backwards compatible.

I added AKE-related comments to net_crypto.c, hopefully they are also helpful for other people to gather more understanding of Tox’ handshake implementation.

In future work, the Noise IK pattern should be implemented directly in toxcore instead of using Noise-C (cf. WireGuard).
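For a rough idea of what that would involve, below is a simplified, hypothetical sketch of the initiator-side DH mixing in an IK-style handshake using only libsodium; it omits Noise's transcript hashing and HKDF-based chaining, and it is not my PoC code.

```c
#include <sodium.h>

/* Simplified sketch only. Real Noise IK maintains a handshake hash and
 * chaining key via HKDF; this just shows which DH results feed the
 * session key. The "es" term is what defeats the KCI attack from this
 * issue: computing it requires S_B_priv or E_A_priv, neither of which
 * an attacker holding only S_A_priv has. */
static void mix(unsigned char ck[crypto_generichash_BYTES],
                const unsigned char dh[crypto_scalarmult_BYTES])
{
    crypto_generichash_state st;
    crypto_generichash_init(&st, NULL, 0, crypto_generichash_BYTES);
    crypto_generichash_update(&st, ck, crypto_generichash_BYTES);
    crypto_generichash_update(&st, dh, crypto_scalarmult_BYTES);
    crypto_generichash_final(&st, ck, crypto_generichash_BYTES);
}

/* ck starts out as (say) a hash of the protocol name; after both
 * messages it becomes the session key material. Initiator's view. */
static int ik_initiator_mix(unsigned char ck[crypto_generichash_BYTES],
                            const unsigned char e_i_sk[crypto_box_SECRETKEYBYTES],
                            const unsigned char s_i_sk[crypto_box_SECRETKEYBYTES],
                            const unsigned char S_r_pk[crypto_box_PUBLICKEYBYTES],
                            const unsigned char e_r_pk[crypto_box_PUBLICKEYBYTES])
{
    unsigned char dh[crypto_scalarmult_BYTES];

    /* message 1 tokens: e, es, s, ss */
    if (crypto_scalarmult(dh, e_i_sk, S_r_pk) != 0) return -1;  /* es */
    mix(ck, dh);
    if (crypto_scalarmult(dh, s_i_sk, S_r_pk) != 0) return -1;  /* ss */
    mix(ck, dh);

    /* message 2 tokens: e, ee, se */
    if (crypto_scalarmult(dh, e_i_sk, e_r_pk) != 0) return -1;  /* ee */
    mix(ck, dh);
    if (crypto_scalarmult(dh, s_i_sk, e_r_pk) != 0) return -1;  /* se */
    mix(ck, dh);
    return 0;
}
```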

Maybe @lvh is also interested in this update.

@goldroom Wow, great work, I had no idea so much was done already.
Then my only concern that remains is the mobile client. I just checked the Android version: the one on F-Droid has not been updated in 2 years, and on the Google Play store there seem to be a lot of issues, judging from user reviews.
What is the current end-user focus exactly? What client for what platform? (Meaning, what client is the most complete and up to date to promote for the average IM user?)

There are a few different mobile clients on Android, I'm not sure which one you're referencing.
These three are all being updated regularly:

  • aTox (android design, stock, polished)
  • TRIfA (featureful, extended protocol, uses a fork of c-toxcore)
  • Protox (Qt based, I don't know that much else about this one)

Antox on Android hasn't been maintained in a few years, which might be what you're looking at.

If you're talking about platforms in general,

  • qTox is probably the most used desktop GUI client, runs on Linux/Windows/macOS
  • uTox is lighter weight, runs on Linux/Windows
  • Toxic is ncurses based, probably the most polished of the three

If you see the "Tok" apps, note that those are unaffiliated with Toktok, are closed source, and seem to use some modified version of Tox that isn't distributed. I wouldn't recommend using those.

@anthonybilinski Thank you for this post. However, why is the main site then only listing Antox for Android? Why is it not removed when it hasn't been maintained for years? The site doesn't mention anything about aTox, TRIfA or Protox.

https://tox.chat/clients.html

Good point, looks like maybe because of you pointing it out, Antox is being removed and aTox is being added: Tox/tox.chat#228. I'm not sure about the other two though, I'm not really involved in the site and I'm not sure how clients are chosen to be on there.

I've seen too many lazy developers while reading through this thread.

@goldroom I'm glad you are tackling this issue and hope you get the funding from NLnet. A thought after reading your gist: could you comment on the following uninformed conjecture:

Although the KCI is exploitable for impersonation with presumably a huge effort, handshake disruption might be achievable by much more modest means.

I.e. denial-of-service, which might suit some adversaries just fine.


Hello,
I hope you all are doing good. Is there any update on "redesign of tox cryptographic handshake"?
Furthermore, is there any active discussion happening over mailing list or tox gc itself?

Best regards,


@eqn-group did you read the blog post? Contact information is provided there. Development discussion is currently mostly happening via the NGC group 360497DA684BCE2A500C1AF9B3A5CE949BBB9F6FB1F91589806FB04CA039E313.

Main client development has stopped, as I see....


@fcore117 the main platforms are still actively supported:

https://github.com/Zoxcore/qTox_enhanced
https://github.com/Zoxcore/Antidote
https://github.com/zoff99/ToxAndroidRefImpl
https://github.com/JFreegman/toxic

be very careful with older qtox --> Zoxcore/qTox_enhanced#6
there is an RCE vulnerability

https://github.com/qTox/qTox on windows platform is vulnerable!

@zoff99 Thanks, I did not know about that qTox repo. uTox should return too, as an ultralight client. It seems that tox.chat propagates releases with vulnerabilities?

@fcore117 I raised the fact that Tox's flagship portal is distributing software with known CVEs as a separate issue: JFreegman/toxic#648

@zoff99 As bad as ZeroNet :-( leycec/raiagent#101 (comment)

This is not productive and comes across as trolling.

My sincere apologies - I mistook the toxic version number for the c-toxcore library number that had a CVE. I'll delete my comment and the issue I raised - mea culpa.

be very careful with older qtox --> Zoxcore/qTox_enhanced#6
there is an RCE vulnerability

@zoff - is the RCE you are referring to only in that repo/version?