tf-encrypted / tf-encrypted

A Framework for Encrypted Machine Learning in TensorFlow

Home Page: https://tf-encrypted.io/

In the federated learning example, how is the DataOwner's gradient protected?

qxzhou1010 opened this issue

In examples/application/federated-learning/, I am studying run.py. The function federated_training is as follows:

@tf.function
def federated_training(model_owner, data_owners):
    # share model owner's model weights to data owners
    update_weights = model_owner.share_weights()
    for data_owner in data_owners:
        data_owner.update_model(*update_weights)
    # collect encrypted gradients from data owners
    model_grads = zip(*(data_owner.compute_gradient() for data_owner in data_owners))
    # compute mean of gradients (without decrypting)
    with tf.name_scope("secure_aggregation"):
        aggregated_model_grads = [
            tfe.add_n(grads) / len(grads) for grads in model_grads
        ]
    # send the encrypted aggregated gradients
    # to the model owner for it to decrypt and update
    model_owner.update_model(*aggregated_model_grads)

In the line tfe.add_n(grads) / len(grads) for grads in model_grads, both grads and model_grads look like plaintext values rather than ABY3 secret-shared values.
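
For intuition, here is a toy sketch in plain NumPy (not TFE's actual fixed-point/ring implementation) of why addition and division by a public constant can be applied directly to additive secret shares without decrypting anything:

import numpy as np

# Toy additive secret sharing over the reals; TFE/ABY3 actually work over a
# ring with fixed-point encoding, so this only illustrates the arithmetic.
def share(x, n_shares=3):
    shares = [np.random.randn(*x.shape) for _ in range(n_shares - 1)]
    shares.append(x - sum(shares))
    return shares

def reconstruct(shares):
    return sum(shares)

g1 = np.array([1.0, 2.0])   # gradient from data owner 1
g2 = np.array([3.0, 4.0])   # gradient from data owner 2
s1, s2 = share(g1), share(g2)

# Each compute server adds the shares it holds; no single server sees g1 or g2.
agg_shares = [a + b for a, b in zip(s1, s2)]
# Dividing every share by the public constant 2 divides the hidden sum by 2.
mean_shares = [s / 2 for s in agg_shares]

print(reconstruct(mean_shares))  # ~[2.0, 3.0], the mean of g1 and g2

This mirrors what tfe.add_n(grads) / len(grads) does, provided the gradients have already been secret-shared by the time they reach federated_training.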

The following function generates the gradients.

    def _build_gradient_func(self):
        @tfe.local_computation(player_name=self.player.name)
        def compute_gradient():
            with tf.name_scope("local_training"):
                with tf.GradientTape() as tape:
                    x, y = next(self.data_iter)
                    y_pred = self.model.call(x)
                    loss = self.loss(y, y_pred)
                gradients = tape.gradient(loss, self.model.trainable_variables)

            return gradients

        self.gradient_func = compute_gradient

However, I still haven't found the code that secret-shares the DataOwner's gradients, analogous to tfe.define_private_variable. Clearly, tfe.add_n itself does not perform the sharing.

So, in the code above, where are the gradients provided by the DataOwner actually secret-shared?

commented

I believe the @tfe.local_computation decorator secret-shares the decorated function's outputs, in this case the gradients returned by compute_gradient.
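
For reference, here is a minimal sketch (not code from the repo) of how one could check this, assuming a recent tf-encrypted version where tfe.LocalConfig, tfe.set_config, tfe.set_protocol, and tfe.protocol.ABY3 are available; the player names are placeholders and should match the config used by run.py:

import tensorflow as tf
import tf_encrypted as tfe

# Assumed local setup: three ABY3 compute servers plus one data owner.
config = tfe.LocalConfig(["server0", "server1", "server2", "data-owner-0"])
tfe.set_config(config)
tfe.set_protocol(tfe.protocol.ABY3())

@tfe.local_computation(player_name="data-owner-0")
def provide_value():
    # Runs in plaintext, locally on data-owner-0's device only.
    return tf.constant([[1.0, 2.0], [3.0, 4.0]])

x = provide_value()
# The decorator wraps the plaintext result as a private (secret-shared) TFE
# tensor before it leaves the data owner, so later ops work on shares.
print(type(x))  # expected: a private/shared tensor type, not tf.Tensor

So the sharing happens inside the decorator itself: compute_gradient computes plaintext gradients locally, and its return values are then secret-shared, which is why tfe.add_n in federated_training operates on shared values.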