diff --git a/README.md b/README.md
index 43ffe34e8ea910ce0f59c9cdf36c0464e9b836b6..47547832b3cfaed07d528811a6ccd1af28fff198 100644
--- a/README.md
+++ b/README.md
@@ -8,9 +8,14 @@ artificial intelligence based software.
 ## Getting CAISAR
 
 The latest release of CAISAR is available as an [opam](https://opam.ocaml.org/)
-package or a [Docker](https://www.docker.com/) image.
+package or a [Docker](https://www.docker.com/) image on GNU/Linux.
+CAISAR is available on Windows either under the [Windows
+Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install)
+(WSL) or via the aforementioned Docker image.
 
 The development version of CAISAR is available only by compiling the source code.
+CAISAR needs access to a ``python3`` interpreter, which is
+available by default in all major GNU/Linux distributions.
 
 ### opam package
 
@@ -67,7 +72,9 @@ make test
 
 ## Documentation
 
-**Please note:** The CAISAR manual requires the documentation generator
+The CAISAR user manual is available on the [platform website](https://caisar-platform.com/documentation/).
+
+It can also be generated locally. This requires the documentation generator
 [Sphinx](https://www.sphinx-doc.org/en/master/index.html), which is typically
 avaible in all major GNU/Linux distributions.
 
diff --git a/doc/acas_xu.rst b/doc/acas_xu.rst
index 2e4a586c5ebd16205bb72e6d0746d2ec65a19a76..e88ac4db16f7c13c144b8f48ba1c26ec63665116 100644
--- a/doc/acas_xu.rst
+++ b/doc/acas_xu.rst
@@ -47,7 +47,7 @@ only while maintaining the same number of outputs. We will consider five-inputs
 networks in the remaining of this example.
 
 Properties
-----------
+==========
 
 There are several functional properties one may want to verify on this system,
 for instance:
@@ -99,7 +99,7 @@ whether a given neural network respects them.
    * :math:`COC` is not the minimal score.
 
 Modelling the problem using WhyML
----------------------------------
+=================================
 
 The first step for verifying anything with CAISAR is to write a specification
 file that describe the problem to verify as a so-called *theory*. A theory can
@@ -125,11 +125,13 @@ our theory ``ACASXU_P1``.
 
 We will need to write down some numerical values. As of now, CAISAR allows
 writing values using floating-point arithmetic only. Why3 defines a float type
-and the relevant arithmetic operations according to the IEEE floating-point
+and the relevant arithmetic operations according to the IEEE 754 floating-point
 standard in a theory, astutely called ``ieee_float``. Specifically, we will
 import the ``Float64`` sub-theory, that defines everything we need for 64-bit
 precision floating-point numbers. We thus import it in our theory using the
-``use`` keyword.
+``use`` keyword. This currently requires writing exact IEEE 754 floating-point
+values in the specification file, which is admittedly a setback in terms of
+usability, but has the advantage of making the specification unambiguous.
 
 Our file looks like this so far:
 
@@ -144,12 +146,20 @@ CAISAR provide WhyML extensions to recognize and apply
 neural networks in ONNX and NNet formats on vector inputs.
 Given a file of such formats, CAISAR is able to provide the following:
 
-* a logical symbol of type ``nn``, built using the ``read_neural_network`` function, of type ``string -> format -> nn``. The first argument is the path to the neural network file, ``format`` is either ``ONNX`` or ``NNet``, and ``nn`` is the type of the neural network in WhyML;
-* a function symbol that returns the output of the application of the neural network to a given input;
+* a logical symbol of type ``nn``, built using the ``read_neural_network``
+  function, of type ``string -> format -> nn``. The first argument is the path
+  to the neural network file, ``format`` is either ``ONNX`` or ``NNet``, and
+  ``nn`` is the type of the neural network in WhyML;
+* a function symbol that returns the output of the application of the neural
+  network to a given input;
 * types and predicates to manipulate inputs vectors;
 
 The full reference for those WhyML extensions is available under the
-`stdlib/interpretation.mlw <https://git.frama-c.com/pub/caisar/-/blob/master/stdlib/interpretation.mlw>`_ file. To create a logical symbol for a neural network located in "nets/onnx/ACASXU_1_1.onnx", we can import the relevant theories in our file and use the ``read_neural_network`` function symbol like this:
+`stdlib/interpretation.mlw
+<https://git.frama-c.com/pub/caisar/-/blob/master/stdlib/interpretation.mlw>`_
+file. To create a logical symbol for a neural network located in
+``nets/onnx/ACASXU_1_1.onnx``, we can import the relevant theories in our file and
+use the ``read_neural_network`` function symbol like this:
 
 .. code-block:: whyml
 
@@ -162,14 +172,15 @@ The full reference for those WhyML extensions is available under the
    end
 
 Now is the time to define our verification goal, that will call ``P1_1_1`` for
-property :math:`\phi_1` on neural network :math:`N_{1,1}`.
-We first model the inputs of the neural network :math:`\rho, \theta, \psi,
-v_{own}, v_{int}` respectively as the floating-points constants :math:`x_i` for
-:math:`i \in [0..4]`. Moreover, we constrain these to the range of
-floating-point values each may take. According to the original authors, values
-were normalized during the training of the network, and so we adapt the values
-they provide in their `repository
-<https://github.com/NeuralNetworkVerification/Marabou/tree/master/resources/properties>`_. Since we will manipulate integer indexes, we require the use of the ``int.Int`` Why3 library. We can write that as a predicate for clarity:
+property :math:`\phi_1` on neural network :math:`N_{1,1}`. We first need to
+constrain the inputs of the neural network :math:`\rho, \theta, \psi, v_{own},
+v_{int}` to the range of floating-point values each may take. We can do that by
+writing a predicate that encodes those specification constraints. Since neural
+networks take vectors as inputs, we use the CAISAR library
+``interpretation.Vector``. Similarly, as we need to manipulate integer indexes,
+we require the use of the ``int.Int`` Why3 library.
+
+We can write this as a predicate for clarity:
 
 .. code-block:: whyml
 
@@ -189,14 +200,29 @@ they provide in their `repository
       /\ (-0.5:t) .<= i[4] .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
   end
 
+.. warning ::
+  You will notice that the input values in the example differ from those in
+  the original property statement: :math:`\phi_1` constrains, e.g.,
+  :math:`\rho \geq 55947.691` and :math:`v_{own} \geq 1145`. Training networks
+  on such a wide variation range leads to instability, hence the need to
+  normalize the inputs. For this specific benchmark, we use the IEEE 754
+  floating-point representation of the normalized values the original authors
+  provide in their `repository
+  <https://github.com/NeuralNetworkVerification/Marabou/tree/master/resources/properties>`_.
+
+  A future release of CAISAR will provide a way to specify and perform such a
+  normalization process, in order to bridge the gap between the original
+  properties and the actual specifications.
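The normalization behind these long constants can be reproduced directly. Below is a minimal Python sketch; the per-input means and ranges are an assumption of this sketch, taken to mirror the values published by the original authors, not something CAISAR itself provides:

```python
# Hypothetical reproduction of the ACAS-Xu input normalization:
#   normalized = (raw - mean) / range
# The means and ranges below are assumed from the original authors'
# benchmark material; they are not part of CAISAR.
MEANS = [19791.091, 0.0, 0.0, 650.0, 600.0]                    # rho, theta, psi, v_own, v_int
RANGES = [60261.0, 6.28318530718, 6.28318530718, 1100.0, 1200.0]

def normalize(raw: float, i: int) -> float:
    """Map the i-th raw input onto its normalized verification range."""
    return (raw - MEANS[i]) / RANGES[i]
```

For instance, the raw bound :math:`\rho \geq 55947.691` maps to approximately ``0.6``, whose exact IEEE 754 double representation is the long ``0.5999…`` constant appearing in ``valid_input``.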
+
 We must then define the result of the application of ``nn_1_1`` on the inputs.
-The built-in function ``@@`` serves this purpose. Its type, ``nn -> vector 'a -> vector 'a``, describes what it does: given a neural network ``nn`` and an input vector ``x``, return the vector that is the result of the application of ``nn`` on ``x``.
-Note that thanks to type polymorphism, ``@@`` can be used to
-describe a variety of input vectors, including floating points, integers, or strings.
-We can finally define the output constraint
-we want to enforce on the first coordinate of the output vector that we use to
-model the advisory *COC*. We use the WhyML extension
-predicate ``has_length`` to further check that our inputs
+The built-in function ``@@`` serves this purpose. Its type, ``nn -> vector 'a ->
+vector 'a``, describes what it does: given a neural network ``nn`` and an input
+vector ``x``, return the vector that is the result of the application of ``nn``
+on ``x``. Note that thanks to type polymorphism, ``@@`` can be used to describe
+a variety of input vectors, including floating-point numbers, integers, or strings. We
+can finally define the output constraint we want to enforce on the first
+coordinate of the output vector that we use to model the advisory *COC*. We use
+the WhyML extension predicate ``has_length`` to further check that our inputs
 are of valid length.
 
 The final WhyML file looks like this:
@@ -208,19 +234,26 @@ The final WhyML file looks like this:
     use int.Int
     use interpretation.Vector
     use interpretation.NeuralNetwork
-  
+ 
     constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX
 
     predicate valid_input (i: vector t) =
-      (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= i[0] .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
-      /\ (-0.5:t) .<= i[1] .<= (0.5:t)
-      /\ (-0.5:t) .<= i[2] .<= (0.5:t)
-      /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= i[3] .<= (0.5:t)
-      /\ (-0.5:t) .<= i[4] .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
-  
+      let rho = i[0] in
+      let theta = i[1] in
+      let psi = i[2] in
+      let vown = i[3] in
+      let vint = i[4] in
+      (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= rho .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
+      /\ (-0.5:t) .<= theta .<= (0.5:t)
+      /\ (-0.5:t) .<= psi .<= (0.5:t)
+      /\
+      (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= vown .<= (0.5:t)
+      /\ (-0.5:t) .<= vint .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
+ 
     goal P1_1_1:
       forall i: vector t. has_length i 5 -> valid_input i ->
-        (nn_1_1@@i)[0] .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
+        let coc = (nn_1_1@@i)[0] in
+        coc .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
   end
 
 This file is available, as is, under the ``/examples/acasxu/`` folder as
@@ -232,7 +265,7 @@ The corresponding neural network in ONNX format is available under the
 
 
 Verifying the property with CAISAR
-----------------------------------
+==================================
 
 Once formalized, the specified property can be assessed by using CAISAR. We will
 use the *open-source* provers CAISAR supports for verifying properties of neural
@@ -270,34 +303,33 @@ either ``Valid``, ``Invalid``, ``Unknown`` or ``Timeout``. CAISAR may output
 (typically, a prover internal failure).
 
 Using more advanced WhyML constructs
-------------------------------------
+====================================
 
 Let us model the ACAS-Xu property :math:`\phi_3`, and verify it for the neural
 networks :math:`N_{1,1}` and :math:`N_{2,7}` [Katz2017]_.
 
 From the modelling standpoint, the main evident difference concerns the desired
 output property, meaining that *COC* should not be the minimal value. A
-straightforward way to express this property is that the corresponding
-floating-point constant :math:`y_0` is greater than or equal to at least one of
-the other five outputs. This can be formalized in first-order logic as a
-disjunction of clauses, that can be directly encoded into WhyML as follows:
+straightforward way to express this property is that the first neural network
+output is greater than or equal to at least one of
+the other four outputs. We can write an ``is_min`` predicate
+that, for a vector ``o`` and an int ``i``, states whether the
+:math:`i`-th coordinate of ``o`` is the minimal value:
 
 .. code-block:: whyml
 
-    y0 .>= y1 \/ y0 .>= y2 \/ y0 .>= y3 \/ y0 .>= y4
+  predicate is_min (o: vector t) (i: int) =
+    forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
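The quantified definition above has a direct executable reading. As an illustration only (this Python sketch is not part of the WhyML theory):

```python
def is_min(o: list[float], i: int) -> bool:
    """True iff the i-th coordinate of o is (weakly) minimal, mirroring
    the WhyML predicate: forall j. i <> j -> o[i] <= o[j].
    (The WhyML version hardcodes the vector length 5; here we use len(o).)"""
    return all(o[i] <= o[j] for j in range(len(o)) if j != i)

# Property phi_3 then asks that is_min(outputs, 0) is *false*, i.e.
# the COC score (index 0) is not the minimal output.
```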
 
-The delicate point is how to model the same property for two different neural
+We want to check the same property for two different neural
 networks. Of course, we could define a theory with two identical but distinct
 verification goals or two entirely distinct theories in a same WhyML file.
 However, these two solutions are not advisable in terms of clarity and
 maintainability.
-
-Reassuringly enough, WhyML provides all necessary features to come up with a
-better solution. First, we can use the ``read_neural_network``
-extension as much as needed to represent distinct neural networks.
-Second, WhyML allows for the hypotheses on
-the floating-point constants modelling the neural network inputs to be exported
-from the verification goal into the theory general context as axioms.
+Reassuringly enough, CAISAR provides all necessary features to come up with a
+better solution. Indeed, we can use the ``read_neural_network``
+extension to define as many symbols as needed to represent
+distinct neural networks.
 
 In the end, the WhyML file looks like this:
 
@@ -313,24 +345,36 @@ In the end, the WhyML file looks like this:
     constant nn_2_7: nn = read_neural_network "nets/onnx/ACASXU_2_7.onnx" ONNX
   
     predicate valid_input (i: vector t) =
-      (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= i[0] .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
-      /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= i[1] .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
-      /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= i[2] .<= (0.5:t)
-      /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= i[3] .<= (0.5:t)
-      /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= i[4] .<= (0.5:t)
+      let rho = i[0] in
+      let theta = i[1] in
+      let psi = i[2] in
+      let vown = i[3] in
+      let vint = i[4] in
+      (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= rho .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
+      /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= theta .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
+      /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= psi .<= (0.5:t)
+      /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vown .<= (0.5:t)
+      /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vint .<= (0.5:t)
   
     predicate is_min (o: vector t) (i: int) =
       forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
   
     goal P3_1_1:
-      forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_1_1@@i) 0)
+      forall i: vector t.
+        has_length i 5 -> valid_input i ->
+        let is_coc_min = is_min (nn_1_1@@i) 0 in
+        not is_coc_min
   
     goal P3_2_7:
-      forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_2_7@@i) 0)
+      forall i: vector t.
+        has_length i 5 -> valid_input i ->
+        let is_coc_min = is_min (nn_2_7@@i) 0 in
+        not is_coc_min
   end
 
+
 Note how the two verification goals ``P3_1_1`` and ``P3_2_7`` are clearly almost
-identical, but for the ``nn`` logic symbol used, identifying respectively
+identical, except for the ``nn_x_y`` logic symbol used, identifying respectively
 the ``ACASXU_1_1.onnx`` and ``ACASXU_2_7.onnx`` neural networks.
 
 We can then verify the resulting verification problem as before:
diff --git a/doc/installation.rst b/doc/installation.rst
index c98a4f8e142544abd60476b0861b68a85f992096..b5df738d5b3deb10e76fe34afed02056dd462c5d 100644
--- a/doc/installation.rst
+++ b/doc/installation.rst
@@ -5,7 +5,9 @@ Installation
 
 The latest release of CAISAR is available as an
 `opam <https://opam.ocaml.org/>`_ package or
-a `Docker <https://www.docker.com/>`_ image.
+a `Docker <https://www.docker.com/>`_ image on GNU/Linux.
+CAISAR is available on Windows either under the `Windows
+Subsystem for Linux <https://learn.microsoft.com/en-us/windows/wsl/install>`_ (WSL) or via the aforementioned Docker image.
 
 The development version of CAISAR is available only by compiling the source code.
 CAISAR needs access to a ``python3`` interpreter, which is
diff --git a/doc/mnist.rst b/doc/mnist.rst
index e99cbd400fd9ca10f35e31f91d72161837975872..c80180c615e20f6cf9aeed8fa74a23ae3ac00930 100644
--- a/doc/mnist.rst
+++ b/doc/mnist.rst
@@ -39,26 +39,29 @@ remaining :math:`784` columns represent the greyscale value of the respective
 pixels, rescaled into :math:`[0, 1]`.
 
 Properties
-----------
+==========
 
 Generally speaking, the property we are interested in verifying is the local
-robustness of a machine learning model on the elements of a set. That is, the
-model classifies all elements of a set being at an :math:`l_\infty`-distance of
-at most :math:`\epsilon \geq 0` with a same label. A general formulation of this
-latter states that, given a classifier :math:`C`, a set :math:`X`, and some
-perturbation :math:`\epsilon \geq 0`, it must hold that :math:`\forall x,x'
-\in X. \ \lVert x - x' \rVert_\infty \leq \epsilon \Rightarrow C(x) = C(x')`.
-
-Since we actually deal with a *dataset* of finite elements for which we also
-know the expected labels, we will instead verify a slightly different property:
-given a classifier :math:`C`, an element :math:`x \in X` such that :math:`C(x) =
-y`, and some perturbation :math:`\epsilon \geq 0`, it must hold that
-:math:`\forall x'. \ \lVert x - x' \rVert_\infty \leq \epsilon \Rightarrow C(x)
-= y = C(x')`. Obviously, such a property must be verified for all elements of a
+robustness of a machine learning model on the elements of a set. More formally,
+let :math:`\mathcal{X}` be an input set (in our case, a subset of
+:math:`\mathbb{R}^{28\times 28}`), :math:`\mathcal{Y} \subset \mathbb{R}` be an
+output set, and :math:`C : \mathcal{X} \mapsto \mathcal{Y}` a classifier. For a
+given :math:`\epsilon \geq 0`, a classifier is
+:math:`\epsilon`-locally-robust if it assigns the same label to all elements of
+:math:`\mathcal{X}` that are at an :math:`l_\infty`-distance of at most
+:math:`\epsilon` from each other. A general formulation of this
+property is the following: :math:`\forall x,x' \in \mathcal{X}. \ \lVert x - x'
+\rVert_\infty \leq \epsilon \Rightarrow C(x) = C(x')`.
+
+Since we actually deal with a *dataset* of finite elements, we will instead
+verify a slightly different property: given a classifier :math:`C`, an element
+:math:`x \in \mathcal{X}`, and some perturbation :math:`\epsilon \geq 0`, it
+must hold that :math:`\forall x'. \ \lVert x - x' \rVert_\infty \leq \epsilon
+\Rightarrow C(x) = C(x')`. Obviously, such a property must be verified for all elements of a
 dataset.
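The dataset-level property can be prototyped outside CAISAR. A minimal Python sketch follows; the toy classifier and data are assumptions of this sketch, and checking only dataset elements is a finite approximation of the quantification over all perturbed inputs:

```python
def linf(x, xp):
    """l-infinity distance between two flat inputs."""
    return max(abs(a - b) for a, b in zip(x, xp))

def robust_at(classify, x, dataset, eps):
    """Check C(x) = C(x') for every dataset element x' within eps of x."""
    y = classify(x)
    return all(classify(xp) == y for xp in dataset if linf(x, xp) <= eps)

def dataset_robust(classify, dataset, eps):
    """The property must hold for all elements of the dataset."""
    return all(robust_at(classify, x, dataset, eps) for x in dataset)
```

With a toy threshold classifier such as ``lambda x: int(sum(x) > 1.0)``, ``dataset_robust`` flags datasets that contain nearby points receiving different labels.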
 
 Modelling the problem using WhyML
----------------------------------
+=================================
 
 As described for the example on :ref:`acas_xu`, we first need to write a
 specification file containing a WhyML theory to describe the verification
@@ -133,7 +136,7 @@ The corresponding neural network in ONNX format is available under the
 <https://git.frama-c.com/pub/caisar/-/blob/master/examples/mnist/nets/MNIST_256_2.onnx>`_.
 
 Verifying the property with CAISAR
-----------------------------------
+==================================
 
 Now we may verify whether the previous robustness specification holds on the
 MNIST fragment ``mnist_test.csv`` by means of the nnenum prover. This can be
@@ -149,6 +152,7 @@ for which nnenum is sure that the model ``MNIST_256_2.onnx`` is not robust with
 respect to :math:`1 \%` perturbation. At the moment, CAISAR is not able to tell
 which are the images in the dataset that cause such result.
 
+
 .. [LiDeng2012] Li Deng, *The MNIST Database of Handwritten Digit Images for
                Machine Learning Research*, IEEE Signal Process. Mag., 2012, pp.
                141-142, doi: 10.1109/MSP.2012.2211477
diff --git a/doc/usage.rst b/doc/usage.rst
index 70e7ccc875ad9b293bab3d99e3dbc9b6e26072e5..8f2f6170552cb979167c8b7dee5977b1acf5c8cc 100644
--- a/doc/usage.rst
+++ b/doc/usage.rst
@@ -84,7 +84,11 @@ Some combination of options may be suited for one
 verification problem, while inefficient for another. As they
 also have different interface, it is also necessary to register
 their call in a way CAISAR can understand.
-To do so, you must register provers inside the ``config/caisar-detection-data.conf`` file.
+
+Registering for autodetection
+*****************************
+
+The first step is to register provers inside the ``config/caisar-detection-data.conf`` file.
 Each supported prover is registered by specifying the following fields:
 
 * ``name``: the name under which CAISAR will know the prover
@@ -106,5 +110,24 @@ Each supported prover is registered by specifying the following fields:
   * ``%{nnet-onnx}``: the name of the neural network file
 * ``driver``: location of the CAISAR driver for the prover, if any
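As an illustration only, a registration entry might look as follows. The prover name, executable, and command line are hypothetical; only the field names follow the list above and the Why3-style detection format:

```ini
; Hypothetical entry in config/caisar-detection-data.conf
[ATP my-prover]
name = "my-prover"
exec = "my-prover"
version_switch = "--version"
version_regexp = "my-prover \\([0-9.]+\\)"
command = "%e %f %{nnet-onnx}"
driver = "caisar_drivers/my_prover.drv"
```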
 
+Building the specification for the prover
+*****************************************
+It is necessary to convert the WhyML specification into an
+input that is understandable by the prover.
+CAISAR provides built-in translations from WhyML to the
+VNN-LIB and SMT-LIB2 specification input formats. If your
+prover supports one of those input formats, adding
+support for it requires adding it in the files
+``src/provers.ml``, ``src/provers.mli`` and
+the function ``call_prover`` in ``src/verification.ml``.
+For other
+input formats, it is also necessary to write custom OCaml code in
+the ``src/printers/`` folder.
+
+Parsing the output of the prover
+********************************
+CAISAR relies on Why3's `drivers <https://why3.lri.fr/doc/technical.html#drivers-for-external-provers>`_ to parse the output of the prover.
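For illustration, a driver essentially maps regular expressions over the prover's output to verdicts. A hypothetical fragment in the Why3 driver style (the patterns are made up and must match your prover's actual output):

```text
(* Hypothetical excerpt of a prover driver *)
valid   "^unsat"
invalid "^sat"
timeout "^Timeout"
unknown "^unknown" "prover returned unknown"
```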
+
+
 If you add support for a new prover, consider opening a merge
 request on our `forge <https://git.frama-c.com/pub/caisar/-/merge_requests>`_!
diff --git a/examples/acasxu/property_1.why b/examples/acasxu/property_1.why
index c5901987504c17e7f0d19dd6f398ad7f75417f55..98ef775436bc2baaf53ca74fb3b48efcc1178c5c 100644
--- a/examples/acasxu/property_1.why
+++ b/examples/acasxu/property_1.why
@@ -7,13 +7,21 @@ theory ACASXU_P1
   constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= i[0] .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
-    /\ (-0.5:t) .<= i[1] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[2] .<= (0.5:t)
-    /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= i[3] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[4] .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= rho .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
+    /\ (-0.5:t) .<= theta .<= (0.5:t)
+    /\ (-0.5:t) .<= psi .<= (0.5:t)
+    /\
+    (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= vown .<= (0.5:t)
+    /\ (-0.5:t) .<= vint .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
 
   goal P1_1_1:
-    forall i: vector t. has_length i 5 -> valid_input i ->
-      (nn_1_1@@i)[0] .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let coc = (nn_1_1@@i)[0] in
+      coc .<= (3.9911256458999999630066213285317644476890563964843750000000000000:t)
 end
diff --git a/examples/acasxu/property_10.why b/examples/acasxu/property_10.why
index 0d2d36c6824ca387cbf420f67bcd34bef4ebb58e..001576b0b291efcdb65a621e0e2fde14f586b91c 100644
--- a/examples/acasxu/property_10.why
+++ b/examples/acasxu/property_10.why
@@ -7,11 +7,16 @@ theory ACASXU_P10
   constant nn_4_5: nn = read_neural_network "nets/onnx/ACASXU_4_5.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (0.2689779999999999948734341614908771589398384094238281250000000000:t) .<= i[0] .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
-    /\ (0.1114079999999999931459271351741335820406675338745117187500000000:t) .<= i[1] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[2] .<= (-0.4933803236000000036476365039561642333865165710449218750000000000:t)
-    /\ (0.2272730000000000027959856652159942314028739929199218750000000000:t) .<= i[3] .<= (0.5:t)
-    /\ (0.0:t) .<= i[4] .<= (0.5:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (0.2689779999999999948734341614908771589398384094238281250000000000:t) .<= rho .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
+    /\ (0.1114079999999999931459271351741335820406675338745117187500000000:t) .<= theta .<= (0.5:t)
+    /\ (-0.5:t) .<= psi .<= (-0.4933803236000000036476365039561642333865165710449218750000000000:t)
+    /\ (0.2272730000000000027959856652159942314028739929199218750000000000:t) .<= vown .<= (0.5:t)
+    /\ (0.0:t) .<= vint .<= (0.5:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
diff --git a/examples/acasxu/property_2.why b/examples/acasxu/property_2.why
index 92778b15708a69b60f053f20d0a5ef88010ae0c0..cace1fdd82b85685156ba84c1fbc8cbf7ee7d7fa 100644
--- a/examples/acasxu/property_2.why
+++ b/examples/acasxu/property_2.why
@@ -8,18 +8,29 @@ theory ACASXU_P2
   constant nn_2_2: nn = read_neural_network "nets/onnx/ACASXU_2_2.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= i[0] .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
-    /\ (-0.5:t) .<= i[1] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[2] .<= (0.5:t)
-    /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= i[3] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[4] .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (0.5999999999999999777955395074968691915273666381835937500000000000:t) .<= rho .<= (0.6798577687000000313588543576770462095737457275390625000000000000:t)
+    /\ (-0.5:t) .<= theta .<= (0.5:t)
+    /\ (-0.5:t) .<= psi .<= (0.5:t)
+    /\ (0.4500000000000000111022302462515654042363166809082031250000000000:t) .<= vown .<= (0.5:t)
+    /\ (-0.5:t) .<= vint .<= (-0.4500000000000000111022302462515654042363166809082031250000000000:t)
 
   predicate is_max (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .>= o[j]
 
   goal P2_1_9:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_max (nn_1_9@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_max = is_max (nn_1_9@@i) 0 in
+      not is_coc_max
 
   goal P2_2_2:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_max (nn_2_2@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_max = is_max (nn_2_2@@i) 0 in
+      not is_coc_max
 end
diff --git a/examples/acasxu/property_3.why b/examples/acasxu/property_3.why
index dd6df0993523bf82434e8907209b6d9df03f98bb..b74b995025330408f10020a33603cc797486401d 100644
--- a/examples/acasxu/property_3.why
+++ b/examples/acasxu/property_3.why
@@ -8,18 +8,29 @@ theory ACASXU_P3
   constant nn_2_7: nn = read_neural_network "nets/onnx/ACASXU_2_7.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= i[0] .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
-    /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= i[1] .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
-    /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= i[2] .<= (0.5:t)
-    /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= i[3] .<= (0.5:t)
-    /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= i[4] .<= (0.5:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= rho .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
+    /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= theta .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
+    /\ (0.4933803236000000036476365039561642333865165710449218750000000000:t) .<= psi .<= (0.5:t)
+    /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vown .<= (0.5:t)
+    /\ (0.2999999999999999888977697537484345957636833190917968750000000000:t) .<= vint .<= (0.5:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
 
   goal P3_1_1:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_1_1@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_min = is_min (nn_1_1@@i) 0 in
+      not is_coc_min
 
   goal P3_2_7:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_2_7@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_min = is_min (nn_2_7@@i) 0 in
+      not is_coc_min
 end
diff --git a/examples/acasxu/property_4.why b/examples/acasxu/property_4.why
index d27816bc9bb08401da98fa3ead0b4170bfc4e4f3..e31559531a063602bc226da3ba0eb5fb56932fc3 100644
--- a/examples/acasxu/property_4.why
+++ b/examples/acasxu/property_4.why
@@ -8,18 +8,29 @@ theory ACASXU_P4
   constant nn_1_9: nn = read_neural_network "nets/onnx/ACASXU_1_9.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= i[0] .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
-    /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= i[1] .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
-    /\ (0.0:t) .<= i[2] .<= (0.0:t)
-    /\ (0.3181818182000000216902435568044893443584442138671875000000000000:t) .<= i[3] .<= (0.5:t)
-    /\ (0.0833333333000000064938461719066253863275051116943359375000000000:t) .<= i[4] .<= (0.1666666666999999935061538280933746136724948883056640625000000000:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.3035311560999999769272506000561406835913658142089843750000000000:t) .<= rho .<= (-0.2985528118999999924731980627257144078612327575683593750000000000:t)
+    /\ (-0.0095492965999999998572000947660853853449225425720214843750000000:t) .<= theta .<= (0.0095492965999999998572000947660853853449225425720214843750000000:t)
+    /\ (0.0:t) .<= psi .<= (0.0:t)
+    /\ (0.3181818182000000216902435568044893443584442138671875000000000000:t) .<= vown .<= (0.5:t)
+    /\ (0.0833333333000000064938461719066253863275051116943359375000000000:t) .<= vint .<= (0.1666666666999999935061538280933746136724948883056640625000000000:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
 
   goal P4_1_1:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_1_1@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_min = (is_min (nn_1_1@@i) 0) in
+      not is_coc_min
 
   goal P4_1_9:
-    forall i: vector t. has_length i 5 -> valid_input i -> not (is_min (nn_1_9@@i) 0)
+    forall i: vector t.
+      has_length i 5 -> valid_input i ->
+      let is_coc_min = (is_min (nn_1_9@@i) 0) in
+      not is_coc_min
 end
diff --git a/examples/acasxu/property_5.why b/examples/acasxu/property_5.why
index 76e1944afc1eb8373f062da1e33b0090ecb2e7a9..93382284cc4fa28571be253d9e41c6ed34f9f9a9 100644
--- a/examples/acasxu/property_5.why
+++ b/examples/acasxu/property_5.why
@@ -7,11 +7,16 @@ theory ACASXU_P5
   constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.3242740000000000066826544298237422481179237365722656250000000000:t) .<= i[0] .<= (-0.3217849999999999877076106713502667844295501708984375000000000000:t)
-    /\ (0.0318309999999999981845633101329440250992774963378906250000000000:t) .<= i[1] .<= (0.0636619999999999963691266202658880501985549926757812500000000000:t)
-    /\ (-0.5:t) .<= i[2] .<= (-0.4992039999999999810853523740661330521106719970703125000000000000:t)
-    /\ (-0.5:t) .<= i[3] .<= (-0.2272730000000000027959856652159942314028739929199218750000000000:t)
-    /\ (-0.5:t) .<= i[4] .<= (-0.1666670000000000095852215054037515074014663696289062500000000000:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.3242740000000000066826544298237422481179237365722656250000000000:t) .<= rho .<= (-0.3217849999999999877076106713502667844295501708984375000000000000:t)
+    /\ (0.0318309999999999981845633101329440250992774963378906250000000000:t) .<= theta .<= (0.0636619999999999963691266202658880501985549926757812500000000000:t)
+    /\ (-0.5:t) .<= psi .<= (-0.4992039999999999810853523740661330521106719970703125000000000000:t)
+    /\ (-0.5:t) .<= vown .<= (-0.2272730000000000027959856652159942314028739929199218750000000000:t)
+    /\ (-0.5:t) .<= vint .<= (-0.1666670000000000095852215054037515074014663696289062500000000000:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
diff --git a/examples/acasxu/property_6.why b/examples/acasxu/property_6.why
index 6662ed1c189755dce4108b0f777c226662bdab3b..8aad3f878a28a45a0c03b1976f98222c185d0668 100644
--- a/examples/acasxu/property_6.why
+++ b/examples/acasxu/property_6.why
@@ -7,13 +7,18 @@ theory ACASXU_P6
   constant nn_1_1: nn = read_neural_network "nets/onnx/ACASXU_1_1.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.1292889999999999872670741751790046691894531250000000000000000000:t) .<= i[0] .<= (0.7004349250000000415283807342348154634237289428710937500000000000:t)
-    /\ ((-0.5:t) .<= i[1] .<= (-0.1114079999999999931459271351741335820406675338745117187500000000:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.1292889999999999872670741751790046691894531250000000000000000000:t) .<= rho .<= (0.7004349250000000415283807342348154634237289428710937500000000000:t)
+    /\ ((-0.5:t) .<= theta .<= (-0.1114079999999999931459271351741335820406675338745117187500000000:t)
      \/
-     (0.1114079999999999931459271351741335820406675338745117187500000000:t) .<= i[1] .<= (0.5:t))
-    /\ (-0.5:t) .<= i[2] .<= (-0.4992039999999999810853523740661330521106719970703125000000000000:t)
-    /\ (-0.5:t) .<= i[3] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[4] .<= (0.5:t)
+     (0.1114079999999999931459271351741335820406675338745117187500000000:t) .<= theta .<= (0.5:t))
+    /\ (-0.5:t) .<= psi .<= (-0.4992039999999999810853523740661330521106719970703125000000000000:t)
+    /\ (-0.5:t) .<= vown .<= (0.5:t)
+    /\ (-0.5:t) .<= vint .<= (0.5:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
diff --git a/examples/acasxu/property_7.why b/examples/acasxu/property_7.why
index 15caca564179334375ee74d6d775816f73431522..000ed359377953d579862dd106de5e490f2d7405 100644
--- a/examples/acasxu/property_7.why
+++ b/examples/acasxu/property_7.why
@@ -7,11 +7,16 @@ theory ACASXU_P7
   constant nn_1_9: nn = read_neural_network "nets/onnx/ACASXU_1_9.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.3284230000000000204707362172484863549470901489257812500000000000:t) .<= i[0] .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
-    /\ (-0.5:t) .<= i[1] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[2] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[3] .<= (0.5:t)
-    /\ (-0.5:t) .<= i[4] .<= (0.5:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.3284230000000000204707362172484863549470901489257812500000000000:t) .<= rho .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
+    /\ (-0.5:t) .<= theta .<= (0.5:t)
+    /\ (-0.5:t) .<= psi .<= (0.5:t)
+    /\ (-0.5:t) .<= vown .<= (0.5:t)
+    /\ (-0.5:t) .<= vint .<= (0.5:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
@@ -19,5 +24,7 @@ theory ACASXU_P7
   goal P7_1_9:
     forall i: vector t.
       has_length i 5 -> valid_input i ->
-      (not (is_min (nn_1_9@@i) 4)) /\ (not (is_min (nn_1_9@@i) 3))
+      let is_strong_left_min = is_min (nn_1_9@@i) 3 in
+      let is_strong_right_min = is_min (nn_1_9@@i) 4 in
+      (not is_strong_left_min) /\ (not is_strong_right_min)
 end
diff --git a/examples/acasxu/property_8.why b/examples/acasxu/property_8.why
index 3ba80be0cdbc23dbcf83f44045bd768335139034..092215e2fc57fab3d744b57b0d6a49ada1a65c2d 100644
--- a/examples/acasxu/property_8.why
+++ b/examples/acasxu/property_8.why
@@ -7,11 +7,16 @@ theory ACASXU_P8
   constant nn_2_9: nn = read_neural_network "nets/onnx/ACASXU_2_9.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.3284230000000000204707362172484863549470901489257812500000000000:t) .<= i[0] .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
-    /\ (-0.5:t) .<= i[1] .<= (0.375:t)
-    /\ (-0.0159149999999999985922372047753015067428350448608398437500000000:t) .<= i[2] .<= (0.0159149999999999985922372047753015067428350448608398437500000000:t)
-    /\ (-0.0454550000000000023470114740575809264555573463439941406250000000:t) .<= i[3] .<= (0.5:t)
-    /\ (0.0:t) .<= i[4] .<= (0.5:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.3284230000000000204707362172484863549470901489257812500000000000:t) .<= rho .<= (0.6798579999999999623483404320722911506891250610351562500000000000:t)
+    /\ (-0.5:t) .<= theta .<= (0.375:t)
+    /\ (-0.0159149999999999985922372047753015067428350448608398437500000000:t) .<= psi .<= (0.0159149999999999985922372047753015067428350448608398437500000000:t)
+    /\ (-0.0454550000000000023470114740575809264555573463439941406250000000:t) .<= vown .<= (0.5:t)
+    /\ (0.0:t) .<= vint .<= (0.5:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]
diff --git a/examples/acasxu/property_9.why b/examples/acasxu/property_9.why
index 3770fb3ad5d5b867fd6c7822717831641189d4fa..d78d5fc3a1cfc5751ca1ea09eae14b85d2491029 100644
--- a/examples/acasxu/property_9.why
+++ b/examples/acasxu/property_9.why
@@ -7,11 +7,16 @@ theory ACASXU_P9
   constant nn_3_3: nn = read_neural_network "nets/onnx/ACASXU_3_3.onnx" ONNX
 
   predicate valid_input (i: vector t) =
-    (-0.2952339158000000240988924815610516816377639770507812500000000000:t) .<= i[0] .<= (-0.2122615123999999908743774312824825756251811981201171875000000000:t)
-    /\ (-0.0636619771999999972678097037714906036853790283203125000000000000:t) .<= i[1] .<= (-0.0222816919999999987767047571196599164977669715881347656250000000:t)
-    /\ (-0.4999998959999999992298569395643426105380058288574218750000000000:t) .<= i[2] .<= (-0.4984083464999999879552206039079464972019195556640625000000000000:t)
-    /\ (-0.5:t) .<= i[3] .<= (-0.4545454545000000012855423392466036602854728698730468750000000000:t)
-    /\ (-0.5:t) .<= i[4] .<= (-0.375:t)
+    let rho = i[0] in
+    let theta = i[1] in
+    let psi = i[2] in
+    let vown = i[3] in
+    let vint = i[4] in
+    (-0.2952339158000000240988924815610516816377639770507812500000000000:t) .<= rho .<= (-0.2122615123999999908743774312824825756251811981201171875000000000:t)
+    /\ (-0.0636619771999999972678097037714906036853790283203125000000000000:t) .<= theta .<= (-0.0222816919999999987767047571196599164977669715881347656250000000:t)
+    /\ (-0.4999998959999999992298569395643426105380058288574218750000000000:t) .<= psi .<= (-0.4984083464999999879552206039079464972019195556640625000000000000:t)
+    /\ (-0.5:t) .<= vown .<= (-0.4545454545000000012855423392466036602854728698730468750000000000:t)
+    /\ (-0.5:t) .<= vint .<= (-0.375:t)
 
   predicate is_min (o: vector t) (i: int) =
     forall j: int. 0 <= j < 5 -> i <> j -> o[i] .<= o[j]