I understand that in cross-silo horizontal federated learning, what needs to be encrypted is the gradients from the different data owners. In that setting, applying BatchCrypt amounts to HE plus "local batch norm" (locally quantizing and batching the gradients), which holds up with an acceptable loss of precision and is unlikely to harm the model's convergence or performance.
Do you have any idea how this could be done in a cross-silo vertical learning setting? There the intermediate results are not only gradients, but also linear computation results, as well as components used to compute the gradients. Applying "BatchNorm" to these doesn't seem to make sense. Any suggestions?
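For context, here is my understanding of the quantize-and-batch step in plaintext form: each party clips its gradients, maps them to fixed-width integers, and packs many of them into one big integer that then becomes a single HE plaintext. This is only a rough sketch with bit widths and helper names of my own choosing, not code from the BatchCrypt paper:

```python
import numpy as np

BITS = 16  # quantization width per gradient (my assumption)
PAD = 2    # spare bits per slot so party-wise additions don't overflow neighbors

def quantize(grads, clip, bits=BITS):
    # Clip gradients to [-clip, clip] and map them affinely
    # onto unsigned integers in [0, 2**bits - 1].
    scale = (2**bits - 1) / (2 * clip)
    clipped = np.clip(grads, -clip, clip)
    return np.round((clipped + clip) * scale).astype(np.uint64)

def batch(quantized, bits=BITS, pad=PAD):
    # Pack all quantized values into one large integer; this integer
    # would be encrypted as a single HE plaintext slot.
    slot = bits + pad
    packed = 0
    for v in quantized:
        packed = (packed << slot) | int(v)
    return packed

def unbatch(packed, n, bits=BITS, pad=PAD):
    # Extract the n slots back out, least-significant slot last.
    slot = bits + pad
    mask = (1 << slot) - 1
    values = [(packed >> (slot * i)) & mask for i in range(n)]
    return np.array(values[::-1], dtype=np.uint64)

def dequantize(quantized, clip, bits=BITS):
    # Inverse of quantize: map integers back to floats in [-clip, clip].
    scale = (2**bits - 1) / (2 * clip)
    return quantized.astype(np.float64) / scale - clip
```

A round trip `dequantize(unbatch(batch(quantize(g, c)), len(g)), c)` recovers `g` up to quantization error of roughly `2*c / 2**BITS`, which is the "acceptable loss of precision" I was referring to. My question is what the analogue of this would be for vertical-setting intermediate results.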