Encrypted models
zned45 opened this issue
Hi,
From the perspective of protecting the IP of models running on the edge, does TF Encrypted allow the following (or is this even possible today?):
- The encrypted model is served on the edge
- The client has access to the model, encrypts the data and performs inference
- Is there any possibility that the results of the inference can only be read using a third-party key or module on a centralized server, restricting the edge service (client) from decrypting them? That is, an encryption process that is different from the decryption process?
Thank you.
@zned45 what you're suggesting is possible. Is the goal to protect the model from the client, and protect the input from the server, while revealing the result to the server?
I would think of it as:
- Encrypt the model on the server.
- Encrypt the input data on the client.
- Run the encrypted inference.
- Decrypt the result on the server.
With an MPC protocol, the encrypted inference runs between the client and the server.
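To make the flow concrete, here is a minimal sketch assuming TF Encrypted's classic 0.x API (TensorFlow 1.x style) with the default local configuration; the player names (`server`, `client`) and the toy one-layer model are illustrative only:

```python
import tensorflow as tf
import tf_encrypted as tfe

tfe.set_protocol(tfe.protocol.Pond())

def provide_weights():
    # Runs locally on the server; only secret shares of the model leave it.
    return tf.ones([10, 1])  # stand-in for real model weights

def provide_input():
    # Runs locally on the client; the raw input is never revealed.
    return tf.ones([1, 10])  # stand-in for the client's data

def receive_result(prediction):
    # Runs locally on the server, the only party that sees the plaintext result.
    return tf.print("prediction:", prediction)

w = tfe.define_private_input("server", provide_weights)     # 1. encrypt the model
x = tfe.define_private_input("client", provide_input)       # 2. encrypt the input
y = tfe.matmul(x, w)                                         # 3. encrypted inference
reveal_op = tfe.define_output("server", y, receive_result)  # 4. decrypt on the server

with tfe.Session() as sess:
    sess.run(reveal_op)
```

Under the hood, Pond secret-shares both tensors between the compute parties, so neither side learns the other's plaintext while step 3 runs.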
@gavinuhma thank you for the help.
The main goal is:
- You have an edge inference server which is available to everyone. It should run the inference on an input. Since it is available to everyone, the key requirement is to encrypt the model. The input does not need encryption.
- Then another server should be able to decrypt the result of the inference, which comes encrypted.
I'm a newbie, so could you please elaborate a little bit more?
Thank you.
Please check the examples of ML inference and training on secret-shared images: https://github.com/tf-encrypted/tf-encrypted/tree/master/examples/benchmark
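For the setup described above (model kept encrypted on the edge, result readable only by a separate server), the same pattern applies, just with the output pinned to a different party. Below is a sketch, again assuming the 0.x API and borrowing the player layout used in the older examples; the names `model-owner`, `prediction-client`, and `result-receiver` are illustrative:

```python
import tensorflow as tf
import tf_encrypted as tfe

# Compute parties plus the three application roles; names are illustrative.
config = tfe.LocalConfig([
    "server0", "server1", "crypto-producer",  # parties holding secret shares
    "model-owner",                            # provides the model weights
    "prediction-client",                      # the edge client providing the input
    "result-receiver",                        # the only party allowed to decrypt
])
tfe.set_config(config)
tfe.set_protocol(tfe.protocol.Pond())

def provide_weights():
    return tf.ones([10, 1])  # stand-in for the real model weights

def provide_input():
    return tf.ones([1, 10])  # stand-in for the edge input

def receive_result(prediction):
    # Only 'result-receiver' runs this; the edge client never sees the plaintext.
    return tf.print("prediction:", prediction)

w = tfe.define_private_input("model-owner", provide_weights)
x = tfe.define_private_input("prediction-client", provide_input)
y = tfe.matmul(x, w)
reveal_op = tfe.define_output("result-receiver", y, receive_result)

with tfe.Session() as sess:
    sess.run(reveal_op)
```

The benchmark examples linked above follow the same structure, with real models and a remote (multi-machine) configuration in place of `tfe.LocalConfig`.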