The full reference for those WhyML extensions is available under the
...
Now is the time to define our verification goal, which we will call ``P1_1_1``, for
property :math:`\phi_1` on neural network :math:`N_{1,1}`.
We first need to model the inputs of the neural network
:math:`\rho, \theta, \psi, v_{own}, v_{int}`, constraining each to the range of
floating-point values it may take. We can do that by writing a predicate that
encodes those specification constraints.
Since neural networks take vectors as inputs, we use the
WhyML extension ``interpretation.Vector``; since we also manipulate integer
indexes, we require the ``int.Int`` Why3 library.
The resulting predicate looks like this:

.. code-block:: whyml
...
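
To give a rough idea of what such a predicate can look like, here is an
illustrative sketch rather than the exact code of this example: the names
(``valid_input``, ``rho_min``, and so on) are chosen for illustration, the
``i[k]`` access notation is assumed to be the one provided by
``interpretation.Vector``, and the Why3 ``ieee_float.Float64`` theory supplies
the 64-bit float type ``t`` and its comparison ``.<=``. The concrete bounds
are left abstract here.

.. code-block:: whyml

   use ieee_float.Float64   (* 64-bit floats: type t and comparison .<= *)

   (* Bounds of each normalized input; in the actual example, their concrete
      values are taken from the property files mentioned below. *)
   constant rho_min: t
   constant rho_max: t
   constant theta_min: t
   constant theta_max: t
   constant psi_min: t
   constant psi_max: t
   constant v_own_min: t
   constant v_own_max: t
   constant v_int_min: t
   constant v_int_max: t

   (* i is the input vector; i[k] denotes its k-th component. *)
   predicate valid_input (i: vector t) =
        rho_min   .<= i[0] /\ i[0] .<= rho_max
     /\ theta_min .<= i[1] /\ i[1] .<= theta_max
     /\ psi_min   .<= i[2] /\ i[2] .<= psi_max
     /\ v_own_min .<= i[3] /\ i[3] .<= v_own_max
     /\ v_int_min .<= i[4] /\ i[4] .<= v_int_max
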
Note that there is an additional normalization step on the inputs, according
to the original authors. For this specific benchmark, we adapt the values
they provide in their `repository
<https://github.com/NeuralNetworkVerification/Marabou/tree/master/resources/properties>`_, hence the diverging values from the specification.

We must then define the result of the application of ``nn_1_1`` on the inputs.
The built-in function ``@@`` serves this purpose. Its type, ``nn -> vector 'a -> vector 'a``, describes what it does: given a neural network ``nn`` and an input vector ``x``, it returns the vector obtained by applying ``nn`` to ``x``.
Note that thanks to type polymorphism, ``@@`` can be used to
...
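
To give a concrete feel for how ``@@`` is used, here is a minimal sketch of
how the goal could be phrased, building on the ``valid_input`` predicate
sketched earlier. This is again illustrative: the output bound ``y0_max`` is
left abstract (its concrete normalized value also comes from the property
files), and the exact formulation in the final file may differ, for instance
by also fixing the dimension of the input vector.

.. code-block:: whyml

   (* Upper bound on the first network output; left abstract in this sketch. *)
   constant y0_max: t

   goal P1_1_1:
     forall i: vector t. valid_input i ->
       (nn_1_1 @@ i)[0] .<= y0_max
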
The final WhyML file looks like this:
...
use int.Int
use interpretation.Vector
use interpretation.NeuralNetwork
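
(* Load the ACAS Xu network from its ONNX file and bind it to a constant. *)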
constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX