w3c / vc-data-model

W3C Verifiable Credentials v2.0 Specification

Home Page: https://w3c.github.io/vc-data-model/

Add Security Considerations related to advances in Artificial Intelligence

msporny opened this issue

I've been having a set of discussions with leading researchers in the field of artificial intelligence about the challenges that will be created as capabilities in that field advance. Even today, arguably, artificial intelligence "passes" traditional and more modern Turing tests, with humans unable to tell the difference between an AI and a human when interacting through a text-based medium. This has implications for verifiable credentials, because AIs will be able to legitimately acquire and use the same verifiable credentials that humans do. There is a growing desire to understand when a human is at the other end of a transaction on the Web, and when that actor might be an AI. The Security Considerations section of the specification should say something about advanced AI, its use of verifiable credentials, and how that might change the security posture of a system.

PR #1508 has been raised to address this issue. This issue will be closed once PR #1508 has been merged.

The issue was discussed in a meeting on 2024-06-26

  • no resolutions were taken

2.5. Add Security Considerations related to advances in Artificial Intelligence (issue vc-data-model#1507)

See GitHub issue vc-data-model#1507.

Manu Sporny: I have been working with a number of AI companies on how VCs can be used to determine if an online entity is a real person or an AI bot.
… AI systems can now pass the Turing test.
… how AI affects identity management (IDM) systems needs to be documented.
… a number of research papers will be published this summer that go into greater detail.

See GitHub pull request vc-data-model#1508.

Gabe Cohen: AI does not affect the VC DM data structures.

Brent Zundel: are you saying this PR text is in the wrong section of the spec?

Gabe Cohen: yes.
… move it to the validation/verification section.

Joe Andrieu: confidenceMethod can be affected by AI.
… there is an AI arms race at the moment.
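For context: confidenceMethod is a reserved extension point in the VC Data Model v2.0, so the specification does not yet define its shape. A minimal sketch of how a credential might carry such a property, with an entirely invented method type, could look like the following.

```typescript
// Hypothetical sketch only: `confidenceMethod` is a *reserved* property in
// the VC Data Model v2.0; its structure is not defined by the specification.
// The interface fields and "ExampleHumanLivenessCheck" type are invented.
interface ConfidenceMethod {
  type: string;
  [property: string]: unknown;
}

interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string;
  credentialSubject: Record<string, unknown>;
  confidenceMethod?: ConfidenceMethod[]; // reserved extension point
}

// A credential whose issuer asserts, via a hypothetical confidence method,
// that the subject passed a human-liveness check at issuance time.
const credential: VerifiableCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "ExamplePersonhoodCredential"],
  issuer: "https://issuer.example/",
  credentialSubject: { id: "did:example:subject" },
  confidenceMethod: [{ type: "ExampleHumanLivenessCheck" }],
};
```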

Joe Andrieu: +1 to that, Manu.

Manu Sporny: the text will not prescribe exact solutions, but we should point to research papers when they become available.

Joe Andrieu: "We have something AI doesn't have. That is cryptography." That's great framing.

Steve McCown: Have we started actively discussing moves towards post-quantum cryptography?

Ted Thibodeau Jr.: AI is a moving target, so this is not something we can solve now.
… I have provided substantial text edits to the existing paragraphs.
… leave the existing text as is and add more text in the Validation section.

Ivan Herman: +1 to Ted.

Joe Andrieu: +1 to Ted. That was a good argument for keeping it in Security Considerations.

David Chadwick: Joe said that humans have cryptography and AI doesn't; that's a good point, but I think AI can have that too.

Steve McCown: AIs are currently being created for brute-force attacks on cryptography.

Manu Sporny: yes, +1 to what Joe said, that's what I meant too.

Will Abramson: +1.

Steve McCown: ECC isn't quantum-secure...

Gabe Cohen: will continue in the issue.

David Chadwick: the issue about cryptography is not that AI cannot use crypto and sign, but rather that AI cannot break crypto.
… Therefore AI cannot fake a signed document.
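A minimal sketch of David's point, assuming a Node.js environment and its built-in crypto module: without the issuer's private key, producing a signature that verifies requires breaking the signature scheme itself, which is why a signed document cannot be faked.

```typescript
// Illustration only: without `privateKey`, forging a signature that
// verifies against `publicKey` means breaking Ed25519 itself.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The issuer signs a (toy) credential payload.
const document = Buffer.from('{"credentialSubject":{"id":"did:example:123"}}');
const signature = sign(null, document, privateKey);

console.log(verify(null, document, publicKey, signature)); // true

// A tampered payload fails verification, whether the forger is human or AI.
const forged = Buffer.from('{"credentialSubject":{"id":"did:example:evil"}}');
console.log(verify(null, forged, publicKey, signature)); // false
```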

Steve McCown: I would contend that AI can break crypto.

The issue was discussed in a meeting on 2024-07-03

  • no resolutions were taken

1.3. Add Security Considerations related to advances in Artificial Intelligence (issue vc-data-model#1507)

See GitHub issue vc-data-model#1507.

Brent Zundel: let's talk about AI! 1507 Add Security Considerations related to advances in Artificial Intelligence.
… there are vendors concerned about AI and interactions with VCs. We talked and said the text could go in validation/verification or in security considerations. We're getting some pushback from Mike -- let's see if we can find some consensus.

See GitHub pull request vc-data-model#1508.

Manu Sporny: I moved the text to the validation section as Gabe requested. I know Ted pushed back a bit. It's not out of place in either section. I pulled in all the WG's requested changes.

Michael Jones: I have expressed my views in GitHub. As editors, we need to make judgment calls on what is useful and actionable versus what just makes the spec longer. This doesn't improve implementations. I don't want stuff in the spec that I'm embarrassed to see. Should we also have security considerations around cloud computing? I'm puzzled.

Ted Thibodeau Jr.: I didn't get the joke. There's a difference between cloud computing and an 'active agent' - we know the latter is an independent actor that can be put to use now in new ways. I think it is a relevant caution. We should say 'be aware of this new thing, a moving target'.
… it could be decades until things settle down. Let's put in a brief warning and move on.

Gabe Cohen: What would make this more real to you, Mike? Is there language we could change?
… Concerns around AI and data legitimacy are real. If we could improve the text, that would be good.

Manu Sporny: I appreciate your opinion, Mike. At this point just about everybody is disagreeing with your point. There are people at some of the largest AI companies in the world working on research around AI and Verifiable Credentials, and that research quotes the work we're doing here directly.
… it is possible for AI to pass tests today that were previously thought to be passable only by humans (GRE, high school diploma, etc.). If people are building systems whose security rests on VCs identifying certain capabilities and proof of personhood, we need to warn them that this may not be good enough anymore. Security researchers need to take that into account.
… CAPTCHA is broken now; AIs can solve it better than humans. It would be strange for us not to say something about this.

Dave Longley: "VCs that seem like like they might only be acquirable by human persons might also become acquirable by artificial intelligence systems, be aware of this when validating / making decisions".

Manu Sporny: I see no reason not to put this into the spec.

Michael Jones: Gabe used the word that is key: is the guidance 'actionable'? Are there things we're recommending? Are there actions that can be taken? If there are actions -- cool. If I get overruled, I would rather this be a security consideration. If there is not a validation consideration, then it doesn't belong there.

Dave Longley: Text should say something like 'VCs that seem like they may only be acquired by humans today may be acquired by AI systems' - don't assume only a human can do it.
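A minimal sketch of how a verifier might apply that guidance, with all names (PersonhoodSignal, assessPersonhood) invented for illustration rather than defined by the specification:

```typescript
// Hypothetical verifier-side check reflecting the guidance above: a verified
// credential alone never establishes that the holder is human.
type PersonhoodSignal = "credential-only" | "independent-liveness-check";

interface PersonhoodAssessment {
  credentialVerified: boolean; // proofs checked, not expired, not revoked
  holderAssumedHuman: boolean;
  note: string;
}

function assessPersonhood(
  credentialVerified: boolean,
  signal: PersonhoodSignal,
): PersonhoodAssessment {
  if (credentialVerified && signal === "independent-liveness-check") {
    return {
      credentialVerified,
      holderAssumedHuman: true,
      note: "Personhood supported by evidence beyond the credential itself.",
    };
  }
  return {
    credentialVerified,
    holderAssumedHuman: false,
    note: "Do not assume that only a human could have acquired this credential.",
  };
}
```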

Manu Sporny: the philosophy that a spec should only contain normative, actionable statements that end up in implementations is a philosophy I do not believe we have ever employed - or should employ. We have plenty of statements like this today, e.g. describing the ecosystem so implementers can make better decisions. -1 to the notion that everything we write needs to be actionable.
… implementers need to be able to take guidance and apply it to their specific use case.

Joe Andrieu: we do need to write something, since people are asking this question and using this technology. Two differences: first, confidenceMethod is part of how we're trying to solve this problem; it isn't figured out yet (it's still a reserved property). Second, the text does have actionable advice, though we can improve it. We need to say something.

Michael Jones: I like what Dave Longley said - since it is actionable. Verifiers should not assume that tests heretofore passable only by human beings are unachievable by machines at this point. Don't make the assumption that passing a Turing test means the party is a human being.

Brent Zundel: thanks, Mike. Seems like we have a path forward; the language is in the chat.

Manu Sporny: the language is already in the PR. I would like to stop playing 'go fetch a rock' with this PR. I will integrate Dave's changes.

Michael Jones: I will re-review after that, please ping me.

PR #1508 has been merged, closing.