Commit 4da75ba8 authored by Julien Girard-Satabin

[doc] Clarified acas-xu examples

parent 32378657
@@ -163,13 +163,15 @@ The full reference for those WhyML extensions is available under the
Now is the time to define our verification goal, that will call ``P1_1_1`` for
property :math:`\phi_1` on neural network :math:`N_{1,1}`.
We first need to model the inputs of the neural network
:math:`\rho, \theta, \psi, v_{own}, v_{int}` and constrain them to the range
of floating-point values each may take. Since neural networks take vectors as
inputs, we use the WhyML extension ``interpretation.Vector``. Since we will
manipulate integer indexes, we also require the ``int.Int`` Why3 library. We
can write the constraints as a predicate for clarity:

.. code-block:: whyml
@@ -189,6 +191,11 @@ they provide in their `repository
/\ (-0.5:t) .<= i[4] .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
end
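A remark on the long decimal literals in these bounds: they are not arbitrary precision noise, but the exact decimal expansion of the IEEE 754 double-precision value nearest to the intended bound (0.6, 0.45, and so on), padded with trailing zeros. A quick Python check (our own illustration, not part of the example files) reproduces two of them:

```python
# 0.6 and 0.45 have no exact binary floating-point representation, so the
# WhyML file spells out the exact decimal value of the nearest binary64
# float. Printing those doubles with 64 fractional digits reproduces the
# literals used in the predicate verbatim.
print(format(0.6, ".64f"))   # lower bound of i[0]
print(format(0.45, ".64f"))  # magnitude of the bounds on i[3] and i[4]
```
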
Note that there is an additional normalization step on the inputs, according
to the original authors. For this specific benchmark, we adapt the values they
provide in their `repository
<https://github.com/NeuralNetworkVerification/Marabou/tree/master/resources/properties>`_,
hence the diverging values from the specification.
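To illustrate that normalization step: each raw sensor value is shifted by a per-input mean and divided by a per-input range before being fed to the network. The sketch below uses the mean and range for :math:`\rho` as we read them from the Marabou resources; treat these constants as assumptions and refer to the linked repository for the authoritative files.

```python
# Sketch of the input normalization described by the original authors:
#   normalized = (raw - mean) / range, with per-input constants.
# The constants below are what we read for the distance rho in the Marabou
# resources (assumption; check the linked repository to confirm).
RHO_MEAN = 19791.091
RHO_RANGE = 60261.0

def normalize(raw: float, mean: float, rng: float) -> float:
    """Map a raw sensor value to the normalized network input."""
    return (raw - mean) / rng

# Property phi_1 constrains rho >= 55947.691 [Katz2017]; normalizing that
# raw bound lands on ~0.6, matching the lower bound on i[0] above.
print(normalize(55947.691, RHO_MEAN, RHO_RANGE))
```
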
We must then define the result of the application of ``nn_1_1`` on the inputs.
The built-in function ``@@`` serves this purpose. Its type, ``nn -> vector 'a -> vector 'a``, describes what it does: given a neural network ``nn`` and an input vector ``x``, return the vector that is the result of the application of ``nn`` on ``x``.
Note that thanks to type polymorphism, ``@@`` can be used to
@@ -208,19 +215,26 @@ The final WhyML file looks like this:
use int.Int
use interpretation.Vector
use interpretation.NeuralNetwork

constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX
predicate valid_input (i: vector t) =
  let rho = i[0] in
  let theta = i[1] in
  let psi = i[2] in
  let vown = i[3] in
  let vint = i[4] in
  (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= rho .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
  /\ (-0.5:t) .<= theta .<= (0.5:t)
  /\ (-0.5:t) .<= psi .<= (0.5:t)
  /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= vown .<= (0.5:t)
  /\ (-0.5:t) .<= vint .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
goal P1_1_1:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let coc = (nn_1_1@@i)[0] in
    coc .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
end
This file is available, as is, under the ``/examples/acasxu/`` folder as
@@ -277,8 +291,8 @@ networks :math:`N_{1,1}` and :math:`N_{2,7}` [Katz2017]_.
From the modelling standpoint, the most evident difference concerns the
desired output property, meaning that *COC* should not be the minimal value. A
straightforward way to express this property is that the first neural network
output is greater than or equal to at least one of the other four outputs.
This can be formalized in first-order logic as a disjunction of clauses, which
can be directly encoded into WhyML as follows:
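To make the logical shape concrete, here is a Python mirror of the ``is_min`` predicate and of the disjunctive reading of its negation (our own sketch, with made-up output scores rather than actual network outputs). Note that negating ``is_min`` yields a *strict* inequality on at least one output:

```python
# Python mirror of the WhyML is_min predicate and its negation.
def is_min(o, i):
    # o[i] <= o[j] for every other index j (mirrors the WhyML predicate).
    return all(o[i] <= o[j] for j in range(len(o)) if j != i)

def coc_not_minimal(o):
    # Disjunction: the COC score o[0] strictly exceeds at least one other
    # output -- exactly the negation of is_min(o, 0) on non-NaN values.
    return any(o[0] > o[j] for j in range(1, len(o)))

outputs = [0.2, -0.1, 0.4, 0.3, 0.5]  # hypothetical advisory scores
print(not is_min(outputs, 0), coc_not_minimal(outputs))
```
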
@@ -313,20 +327,25 @@ In the end, the WhyML file looks like this:
constant nn_2_7: nn = read_neural_network "nets/onnx/ACASXU_2_7.onnx" ONNX

predicate valid_input (i: vector t) =
  let rho = i[0] in
  let theta = i[1] in
  let psi = i[2] in
  let vown = i[3] in
  let vint = i[4] in
  (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= rho .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
  /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= theta .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
  /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= psi .<= (0.5:t)
  /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vown .<= (0.5:t)
  /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vint .<= (0.5:t)
predicate is_min (o: vector t) (i: int) =
  forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]

goal P3_1_1:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let is_coc_min = (is_min (nn_1_1@@i) 0) in not is_coc_min

goal P3_2_7:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let is_coc_min = (is_min (nn_2_7@@i) 0) in not is_coc_min
end
Note how the two verification goals ``P3_1_1`` and ``P3_2_7`` are clearly almost
@@ -7,13 +7,20 @@ theory ACASXU_P1
constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX

predicate valid_input (i: vector t) =
  let rho = i[0] in
  let theta = i[1] in
  let psi = i[2] in
  let vown = i[3] in
  let vint = i[4] in
  (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= rho .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
  /\ (-0.5:t) .<= theta .<= (0.5:t)
  /\ (-0.5:t) .<= psi .<= (0.5:t)
  /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= vown .<= (0.5:t)
  /\ (-0.5:t) .<= vint .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)

goal P1_1_1:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let coc = (nn_1_1@@i)[0] in
    coc .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
end
@@ -8,18 +8,23 @@ theory ACASXU_P3
constant nn_2_7: nn = read_neural_network "nets/onnx/ACASXU_2_7.onnx" ONNX

predicate valid_input (i: vector t) =
  let rho = i[0] in
  let theta = i[1] in
  let psi = i[2] in
  let vown = i[3] in
  let vint = i[4] in
  (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= rho .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
  /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= theta .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
  /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= psi .<= (0.5:t)
  /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vown .<= (0.5:t)
  /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vint .<= (0.5:t)

predicate is_min (o: vector t) (i: int) =
  forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]

goal P3_1_1:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let is_coc_min = (is_min (nn_1_1@@i) 0) in not is_coc_min

goal P3_2_7:
  forall i: vector t. has_length i 5 -> valid_input i ->
    let is_coc_min = (is_min (nn_2_7@@i) 0) in not is_coc_min
end