
(Local) Robustness of MNIST dataset
===================================

CAISAR provides a convenient way to verify (local) robustness properties of neural networks operating on datasets with values in [0, 1], for classification problems only. For the moment, CAISAR supports datasets in a specific CSV format only, where each CSV line is interpreted as providing the classification label in the first column and the dataset element features in the remaining columns.

We recall that a neural network is deemed robust on a dataset element whenever it classifies with the same label all other elements lying at an l_\infty-distance of at most \epsilon \geq 0 from it. More generally, a neural network is deemed (locally) robust on a dataset whenever the former property holds for all the dataset elements. The CAISAR standard library specifies such a property in terms of the predicate ``robust``, which CAISAR implements as a builtin.

In the following, we describe how to use CAISAR to verify the robustness of a neural network on (a fragment of) the MNIST dataset.

Use case presentation
---------------------

MNIST is a dataset of handwritten digits, normalized and centered to fit into grayscale images of 28 \times 28 pixels, along with the corresponding classification labels [LiDeng2012]_. Although it is mostly irrelevant as a dataset for benchmarking machine learning models on computer vision tasks, MNIST is still valuable for assessing robustness properties by means of formal methods tools.

CAISAR provides a fragment (100 images) of the MNIST dataset in ``mnist_test.csv``, under the ``examples/mnist/csv`` folder. Each line in this file represents an MNIST image: the first column contains the classification label, and the remaining 784 columns contain the grayscale values of the respective pixels, rescaled into [0, 1].
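For instance, a line of ``mnist_test.csv`` describing an image of the digit 1 has the following shape (the pixel values below are made up and truncated for illustration purposes):

.. code-block:: text

   1,0.0,0.0,0.4,0.98,...,0.0

The first value is the classification label; each of the 784 remaining values is a pixel intensity in [0, 1].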

Properties
----------

Generally speaking, the property we are interested in verifying is the local robustness of a machine learning model on the elements of a set. More formally, let \mathcal{X} be an input set (in our case, a subset of \mathbb{R}^{28 \times 28}), \mathcal{Y} \subset \mathbb{R} be an output set, and C : \mathcal{X} \rightarrow \mathcal{Y} be a classifier. For a given \epsilon \geq 0, a classifier C is \epsilon-locally-robust if it classifies with the same label all elements of \mathcal{X} lying at an l_\infty-distance of at most \epsilon from each other. A general formulation of this property is the following:

.. math::

   \forall x, x' \in \mathcal{X}. \ \lVert x - x' \rVert_\infty \leq \epsilon \Rightarrow C(x) = C(x')

Since we actually deal with a finite dataset, we will instead verify a slightly different property: given a classifier C, an element x \in \mathcal{X}, and some perturbation \epsilon \geq 0, it must hold that \forall x'. \ \lVert x - x' \rVert_\infty \leq \epsilon \Rightarrow C(x) = C(x'). Of course, such a property must be verified for all elements of the dataset.
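Note that the dataset also provides the expected label for each of its elements. The WhyML formalization below therefore requires the classifier to advise the dataset label for every admissible perturbed element: for a dataset D of pairs (x, l), the verified property amounts to the following (one possible formalization):

.. math::

   \forall (x, l) \in D. \ \forall x'. \ \lVert x - x' \rVert_\infty \leq \epsilon \Rightarrow C(x') = l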

Modelling the problem using WhyML
---------------------------------

As described for the example on :ref:`acas_xu`, we first need to write a specification file containing a WhyML theory describing the verification problem. In principle, we need to formalize the local robustness property, as well as the notions of classifier and dataset. The CAISAR interpretation language ``caisar.mlw`` provides theories that define those concepts. We import the relevant theories with the ``use`` keyword. As described in :ref:`interpretation`, the ``Vector`` theory provides a ``vector`` type, a getter operation (``[]``), and a ``valid_index`` predicate that determines whether a get operation is within the range of the vector length. ``NeuralNetwork`` defines the ``nn`` type and an application function (``@@``). We will also need integers and floating-point numbers to declare and define \epsilon.

.. code-block:: whyml

   use ieee_float.Float64
   use int.Int
   use interpretation.Vector
   use interpretation.NeuralNetwork
   use interpretation.Dataset

   type image = vector t
   type label_ = int

We first write some predicates to take into account the fact that MNIST counts 10 labels (integers from 0 to 9), and that the input images are normalized (floating-point values between 0.0 and 1.0). We also define a predicate that, given a neural network ``n``, an image ``i``, and a label ``l``, checks whether ``n`` indeed advises label ``l`` for ``i``:

.. code-block:: whyml

   predicate valid_image (i: image) =
     forall v: index. valid_index i v -> (0.0: t) .<= i[v] .<= (1.0: t)

   predicate valid_label (l: label_) = 0 <= l <= 9

   predicate advises (n: nn) (i: image) (l: label_) =
     valid_label l ->
     forall j: int. valid_label j -> j <> l -> (n@@i)[l] .> (n@@i)[j]

We express \lVert x - x' \rVert_\infty \leq \epsilon with another predicate that bounds each component of a vector by ``eps`` in absolute value; in ``robust_around`` below, it is applied to the difference between the perturbed image and the original one:

.. code-block:: whyml

   predicate bounded_by_epsilon (i: image) (eps: t) =
     forall v: index. valid_index i v -> .- eps .<= i[v] .<= eps

We can now define the property to check, which is a straightforward description of the property stated in the previous section:

.. code-block:: whyml

   (* An image i with label l is robust w.r.t. a network n and a
      perturbation eps whenever any valid image at an l_inf-distance
      of at most eps from i is still classified as l by n. *)
   predicate robust_around (n: nn) (eps: t) (l: label_) (i: image) =
     forall perturbed_image: image.
       has_length perturbed_image (length i) ->
       valid_image perturbed_image ->
       let perturbation = perturbed_image - i in
       bounded_by_epsilon perturbation eps ->
       advises n perturbed_image l

   (* A network n is robust on a dataset d whenever it is robust
      around each element of d. *)
   predicate robust (n: nn) (d: dataset label_ image) (eps: t) =
     Dataset.forall_ d (robust_around n eps)

Finally, to instantiate this property on a concrete neural network and data sample, we define a goal stating the property to check:

.. code-block:: whyml

   goal robustness:
     let nn = read_neural_network "nets/MNIST_256_2.onnx" ONNX in
     let dataset = read_dataset "csv/mnist_test.csv" CSV in
     let eps = (0.375:t) in
     robust nn dataset eps

The final property file is available, as is, under the ``examples/mnist/`` folder as ``property.why``. The corresponding neural network in ONNX format is available under the ``examples/mnist/nets/`` folder as ``MNIST_256_2.onnx``.

Verifying the property with CAISAR
----------------------------------

We may now verify whether the previous robustness specification holds by means of the nnenum prover. This can be done via CAISAR as follows:

.. code-block:: console

   $ caisar verify --prover nnenum -L examples/mnist/ --format whyml examples/mnist/property.why
   [caisar] Goal robustness: Invalid

The result tells us that there is at least one image in ``mnist_test.csv`` for which nnenum determines that the model ``MNIST_256_2.onnx`` is not robust with respect to the given perturbation \epsilon = 0.375. At the moment, CAISAR is not able to tell which images in the dataset cause such a result.
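Whether the network withstands a smaller perturbation can be checked by simply adjusting the value of ``eps`` in the goal and running the same ``caisar verify`` command again. For instance, a hypothetical variant of the goal for \epsilon = 0.125 (a value chosen to be exactly representable as a double-precision float, like 0.375 above) would read:

.. code-block:: whyml

   goal robustness_smaller_eps:
     let nn = read_neural_network "nets/MNIST_256_2.onnx" ONNX in
     let dataset = read_dataset "csv/mnist_test.csv" CSV in
     (* 0.125 is exactly representable in IEEE 754 double precision,
        so it is accepted as a literal of type t *)
     let eps = (0.125:t) in
     robust nn dataset eps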

.. [LiDeng2012] Li Deng. "The MNIST Database of Handwritten Digit Images for Machine Learning Research". IEEE Signal Processing Magazine, 2012, pp. 141-142. doi: 10.1109/MSP.2012.2211477