We are pleased to announce that our implementation of SecureNN has landed in tf-encrypted!

SecureNN is a secure multiparty computation (MPC) protocol developed by Sameer Wagh et al. in conjunction with researchers at Microsoft Research, India. The authors introduce protocols for computing layers like ReLU and MaxPool directly in the secret sharing setting.

Traditionally, MPC protocols work in a ring, which allows the secure and efficient implementation of multiplication and addition, but is not well suited to bitwise operations such as comparison.
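To make the ring setting concrete, here is a minimal plain-Python sketch of additive secret sharing over the ring of 64-bit integers (illustrative only, not the tf-encrypted implementation): addition of shared values is a cheap local operation, while each individual share looks uniformly random, which is exactly why a comparison cannot be computed locally.

```python
import secrets

MODULUS = 2 ** 64  # the ring Z_{2^64}

def share(x, n=2):
    """Split x into n additive shares that sum to x mod 2^64."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares in the ring."""
    return sum(shares) % MODULUS

# Addition is "free": each party adds its own shares locally.
a, b = 20, 22
a_shares, b_shares = share(a), share(b)
sum_shares = [(sa + sb) % MODULUS for sa, sb in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == a + b

# Comparison is not free: each share is uniformly random on its own,
# so no single party can tell whether a > b from its shares alone.
```

This is why protocols built purely on ring arithmetic need extra machinery (or a protocol switch) the moment a comparison shows up.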

When performing comparisons, most MPC frameworks simply switch to a different protocol. A common strategy is Yao's garbled circuits. While garbled circuits are efficient to evaluate, creating them can be costly. They are also bitwise, and in the secure ML setting we generally work with at least 64-bit numbers, so there is quite a lot of overhead in building a circuit (roughly 64x more than normal!). Further, each circuit is single-use, which introduces additional inefficiency when we want to run multiple batches of inferences.

ReLU and Maxpooling show up quite frequently in deep learning. Most neural networks use ReLU activations after almost every layer, and convolutional networks tend to include a pooling layer after each convolution; in practice, this is most commonly a max-pooling layer. As traditional secret sharing protocols perform poorly on these comparison-based layers, they are a natural place to look for improvement.

Previously, ReLU was a tricky operation to support with MPC using SPDZ. SPDZ provides very strong support for addition and multiplication, but comparison is quite slow. Since ReLU computes *max(0, x)*, a comparison is unavoidable.
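SecureNN's approach is to compute the comparison bit (the paper calls it DReLU: 1 if x is non-negative, else 0) with a dedicated sub-protocol, and then finish with a single multiplication. On plaintext the decomposition looks like this (a conceptual sketch of the math, not the secure protocol itself):

```python
def drelu(x):
    """The sign bit that SecureNN computes with a secure sub-protocol:
    1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def relu(x):
    # ReLU(x) = DReLU(x) * x: one comparison plus one multiplication,
    # both of which the protocol supports on secret-shared values.
    return drelu(x) * x

assert relu(-3) == 0
assert relu(5) == 5
```

The point is that the expensive part is isolated into the single bit `drelu(x)`; everything else is ring arithmetic that secret sharing already handles well.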

In the tf-encrypted implementation of SPDZ (Pond), we used Chebyshev polynomial approximation to support ReLU. This is quite fast, but values outside of a small range explode (see Approx. 1 in Figure 1). We can expand the range, but at the cost of accuracy near 0 (see Approx. 2 in Figure 1). Since most models apply ReLU after almost every layer, the error this accumulates over the course of an inference adds up and harms model efficacy. Now, tf-encrypted can provide an exact ReLU. We get the dark line in Figure 1, and the error no longer snowballs and explodes.

Maxpooling was not available in tf-encrypted before SecureNN. Maxpooling layers follow convolutions and take the max value in some block of values. You can read more about them here.

Implementing Maxpooling on private variables poses the same problem we had with ReLU: to figure out which value in a window is largest, we must compare them.
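Concretely, a max-pool over a window of k values reduces to k-1 pairwise comparisons, and under MPC each of those must run as a secure comparison sub-protocol. Here is a plaintext sketch of a 2x2, stride-2 max-pool that makes the comparison count explicit (illustrative only, not tf-encrypted's implementation):

```python
def maxpool2x2(image):
    """2x2 max pooling with stride 2 over a 2D list of values.

    Each window takes three pairwise comparisons; in SecureNN each
    comparison runs as a secure sub-protocol on shared values.
    """
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(0, rows - 1, 2):
        out_row = []
        for j in range(0, cols - 1, 2):
            window = [image[i][j], image[i][j + 1],
                      image[i + 1][j], image[i + 1][j + 1]]
            best = window[0]
            for v in window[1:]:  # 3 comparisons per window
                best = v if v > best else best
            out_row.append(best)
        out.append(out_row)
    return out

assert maxpool2x2([[1, 5, 2, 0],
                   [3, 2, 1, 4],
                   [0, 1, 9, 8],
                   [2, 6, 7, 3]]) == [[5, 4], [6, 9]]
```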

tf-encrypted's previous approach was to substitute average pooling for maxpooling. But a major goal of the field of private machine learning is to avoid prescribing model architecture to our users. Now, we don't have to!

The paper details three models (A, B, and C) and runs benchmarks against them.

The model architectures (A, B, and C) are defined in the paper.

We are happy to report that our performance numbers in tf-encrypted are quite close to those reported in the SecureNN paper.

We will be following up with a more detailed post on these numbers.

SecureNN is not on by default in tf-encrypted, but it is quite simple to turn on.
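For example, something along these lines (a sketch against the tf-encrypted API at the time of writing; exact module and function names may differ between versions):

```python
import tf_encrypted as tfe

# Pond (the SPDZ-style protocol) is the default;
# switch to SecureNN explicitly.
tfe.set_protocol(tfe.protocol.SecureNN())

# Computations defined from here on (e.g. a private prediction)
# use SecureNN's exact ReLU and MaxPool instead of approximations.
```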

If you are unfamiliar with tf-encrypted, you can check out our getting started guide here.

Due to the prevalence of ReLU and Maxpooling in deep learning models, and the inefficiency of calculating them in a ring, they are a natural place to look for improvement. SecureNN defines these improvements, thus making it an important contribution for practical private models. Having exact ReLU and MaxPooling is a huge step for the tf-encrypted project. We are very excited for researchers, data scientists, and developers to be able to play with it!

We would like to give a special thanks to Sameer Wagh for his help and insight during implementation.