diff --git a/doc/contributing.rst b/doc/contributing.rst
new file mode 100644
index 0000000000000000000000000000000000000000..cc1d17a088ea5fcaffb3f85a12ad3ddb6a54acf4
--- /dev/null
+++ b/doc/contributing.rst
@@ -0,0 +1,58 @@
+.. _contributing:
+
+Hacking CAISAR
+==============
+.. index:: single: Driver; driver
+
+CAISAR project structure
+************************
+
+The root of the repository has the following folders:
+
+* ``bin/`` contains utilities for instantiating Python-based provers for CAISAR.
+* ``ci/caisar-public/`` contains automated scripts for
+  replicating the CAISAR repository on partner forges.
+* ``config/`` contains various utilities for CAISAR to
+  interface with external provers. In particular, driver definitions are located in ``config/drivers/``.
+* ``doc/`` contains all material related to documentation generation.
+* ``docker/`` contains utilities to build the CAISAR Docker image for
+  release.
+* ``examples/`` contains examples of WhyML files
+  representing known properties that CAISAR can verify.
+* ``lib/`` contains OCaml library files that are primarily
+  written to interface with CAISAR, but can also be used in other
+  projects.
+* ``licenses/`` contains license files.
+* ``src/`` contains most of the OCaml source of CAISAR.
+* ``stdlib/`` contains WhyML files used to define
+  theories for the CAISAR interpretation language.
+* ``tests/`` contains non-regression tests written in `dune cram test syntax <https://dune.readthedocs.io/en/stable/tests.html>`_.
+
+
+Setting up a development environment
+************************************
+
+With an Opam switch
+-------------------
+
+Once an opam switch is set up, one needs to install the `OCaml language server <https://github.com/ocaml/ocaml-lsp>`_, `ocamlformat <https://github.com/ocaml-ppx/ocamlformat>`_ v0.25.1 and `ocp-indent <https://github.com/OCamlPro/ocp-indent>`_.
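+
+Assuming the usual opam package names (``ocaml-lsp-server``, ``ocamlformat``
+and ``ocp-indent``), one possible way to install them is:
+
+.. code-block:: console
+
+   $ opam install ocaml-lsp-server ocamlformat.0.25.1 ocp-indent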
+
+With nix-shell
+--------------
+
+With Nix set up as detailed in :ref:`nix`, typing ``nix develop`` will create a
+shell environment tailored for CAISAR development. You can build CAISAR by
+following the :ref:`source` instructions.
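+
+A typical session might look as follows, assuming a dune-based build (the test
+suite already relies on dune):
+
+.. code-block:: console
+
+   $ nix develop
+   $ dune build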
+
+.. index:: Prover;prover
+
+Opening a merge request
+***********************
+
+We gladly accept merge requests (MR) on our `public forge <https://git.frama-c.com/pub/caisar/-/merge_requests>`_.
+Some possible topics for opening an MR are:
+
+* support for a new prover
+* enhancing the support for an existing prover
+* new proof strategies
+* bug fixes
diff --git a/doc/foreword.rst b/doc/foreword.rst
index e51ff15f9303c64295cff046bbbdfcd7ef63b70f..5b22b2309b2987ce792ee4c14d273780ac6a2f14 100644
--- a/doc/foreword.rst
+++ b/doc/foreword.rst
@@ -1,5 +1,8 @@
 .. _foreword:
 
+.. index:: Prover;prover
+           Driver;driver
+
 Foreword
 ========
 
@@ -8,7 +11,16 @@ And Robustness. It supports an expressive modelling language that allows the
 specification of a wide variety of properties to enforce on a machine learning
 system, and relies on external provers to discharge verification conditions.
 
-.. _overalldesign:
+CAISAR aims to be component-agnostic: it can model and assess the
+trustworthiness of artificial intelligence systems that potentially mix both
+symbolic- and data-driven approaches. It supports the verification of properties
+for multiple machine learning models. Neural Network (NN) models are handled by means of
+the `ONNX <https://onnx.ai/>`_ exchange format. Support Vector Machines (SVM)
+are supported by means of an ad-hoc CSV format.
+
+The motivations and design ideas behind CAISAR have been presented at the
+workshop `AISafety 2022 <https://www.aisafetyw.org/>`_ [GABCL2022]_ and at the
+`Journées Francophones des Langages Applicatifs 2023 <https://jfla.inria.fr/jfla2023.html>`_ [GABC2023]_.
+
 
 Overall Design
 --------------
@@ -18,28 +30,6 @@ provide both the specification language, called WhyML, and the infrastructure
 needed to interoperate with external provers. These are generally called
 *provers*, although some of them do not directly provide a logical proof.
 
-The supported provers are the following:
-
-* `PyRAT <https://laiser.frama-c.com/laiser-websites/pyrat-website/>`_, a neural
-  network verifier based on abstract interpretation,
-* `Marabou <https://github.com/NeuralNetworkVerification/Marabou>`_, a
-  Satisfiability Modulo Theory (SMT) solver specialized in neural network
-  verification,
-* `SAVer <https://github.com/abstract-machine-learning/saver>`_, a support
-  vector machine verifier based on abstract interpretation,
-* `nnenum <https://github.com/stanleybak/nnenum>`_ a neural network verifier
-  that combines abstract interpretation and linear programming techniques,
-* Standard SAT/SMT solvers that support the SMT-LIBv2 input language.
-
-CAISAR aims to be component-agnostic: it can model and assess the
-trustworthiness of artificial intelligence system that potentially mixes both
-symbolic- and data-driven approaches. It supports the verification of properties
-for multiple machine learning models. Neural Network (NN) are handled through
-the `ONNX <https://onnx.ai/>`_ exchange format. Support Vector Machines (SVM)
-are supported through an ad-hoc CSV format.
-
-The motivations and design ideas behind CAISAR have been presented at the
-workshop `AISafety 2022 <https://www.aisafetyw.org/>`_ [GABCL2022]_.
 
 Availability
 ------------
@@ -71,6 +61,10 @@ We gratefully thank the people who contributed to CAISAR, directly or
 indirectly: Zakaria Chihani, Serge Durand, Tristan Le Gall, Augustin Lemesle,
 Aymeric Varasse.
 
+.. [GABC2023] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z.,
+              *Caractériser des propriétés de confiance d’IA avec Why3*,
+              Journées Francophones des Langages Applicatifs (JFLA 2023)
+
 .. [GABCL2022] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z., Lemesle,
                A., *CAISAR: A platform for Characterizing Artificial
                Intelligence Safety and Robustness*, Proceedings of the Workshop
diff --git a/doc/index.rst b/doc/index.rst
index 455dd91ae15631cf1b328ff8a888aeff1397ea9f..694f329274fa453c6fa500fdf42041a85257dcf1 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -33,6 +33,9 @@ This work has been partly supported by the `Confiance.ai`_ program.
    installation
    usage
    examples
+   interpretation
+   contributing
+   provers
    genindex
    CAISAR website <http://www.caisar-platform.com>
 
diff --git a/doc/installation.rst b/doc/installation.rst
index 62199f954fef870d5739735ae668254ee4b7b3ae..cb1055b0c444992ad675b71559adb39287ba51c9 100644
--- a/doc/installation.rst
+++ b/doc/installation.rst
@@ -53,6 +53,8 @@ To run the CAISAR Docker image, do the following:
 
    $ docker run -it laiser/caisar:pub sh
 
+.. _nix:
+
 Install through Nix
 -------------------
 
@@ -101,6 +103,8 @@ It will contain the ocaml toolchain already setup and installed, and the ocaml l
 
    $ nix develop
 
+.. _source:
+
 Compile from source
 -------------------
 
diff --git a/doc/interpretation.rst b/doc/interpretation.rst
new file mode 100644
index 0000000000000000000000000000000000000000..1d17a5c6bd7a78265c5541a0cff53b8e55895d78
--- /dev/null
+++ b/doc/interpretation.rst
@@ -0,0 +1,159 @@
+.. _interpretation:
+
+The CAISAR modelling language
+=============================
+
+Origin: WhyML
+-------------
+
+CAISAR relies heavily on Why3 and uses the WhyML language as the basis for
+its own interpretation language.
+A reference for WhyML is available in the `Why3
+manual <https://why3.lri.fr/doc/syntaxref.html>`_.
+
+However, since Why3 aims to verify a wide range of programs, it cannot
+specialize in a particular program structure.
+For further background, the paper [F2013]_ by one of Why3's
+original authors details the rationale and tradeoffs involved
+in the design and implementation of WhyML. To quote the author, "[the Why3 team] has come up with [a] logic of compromise".
+
+As CAISAR focuses on artificial intelligence systems, it can
+make additional assumptions on the nature of its
+inputs, programs and users:
+
+* users will have a basic background in linear algebra, and
+  expect CAISAR to reflect this
+
+* inputs will mostly be multidimensional vectors (the machine
+  learning community also uses the term "tensor") of
+  floating-point values, strings, integers or characters
+
+* the program control flow will mostly consist of
+  real or floating-point arithmetic operations: there are no
+  loops with runtime invariants, nor conditionals
+
+With those constraints in mind, CAISAR provides several
+extensions to WhyML, which we detail here. They can be used
+directly in any WhyML file provided to CAISAR.
+
+Some of those extensions are *interpreted*: during the translation
+from "pure" WhyML terms to actual prover inputs,
+symbols are replaced with other symbols, or directly computed by CAISAR.
+
+Built-ins
+---------
+
+.. index:: Interpretation; interpretation
+
+The built-ins are available under ``stdlib/interpretation.mlw``.
+To access the symbols they define, the corresponding theory needs to be imported
+into the scope of the current theory. For instance, to import
+the symbols defined by the theory ``Vector``, add ``use interpretation.Vector`` at
+the top of the file.
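+
+For instance, a minimal theory importing all three built-in theories could be
+sketched as follows:
+
+.. code-block:: whyml
+
+  theory Example
+    use interpretation.Vector
+    use interpretation.NeuralNetwork
+    use interpretation.Dataset
+  end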
+
+Vector
+******
+
+Types
+~~~~~
+.. code-block:: whyml
+
+  type vector 'a
+  type index = int
+
+Functions
+~~~~~~~~~
+.. code-block:: whyml
+
+  (** Returns the i-th element of v. **)
+  function ([]) (v: vector 'a) (i: index) : 'a
+
+  function length (v: vector 'a) : int
+
+  (** Returns a vector whose i-th element is equal to v1[i] - v2[i]. **)
+  function (-) (v1: vector 'a) (v2: vector 'a) : vector 'a
+
+  function mapi (v: vector 'a) (f: index -> 'a -> 'b) : vector 'b
+  function map (v: vector 'a) (f: 'a -> 'b) : vector 'b
+  function map2 (v1: vector 'a) (v2: vector 'b) (f: 'a -> 'b -> 'c) :
+    vector 'c
+  function foreach (v: vector 'a) (f: 'a -> 'b) : vector 'b =
+    map v f
+  function foreach2
+    (v1: vector 'a) (v2: vector 'b) (f: 'a -> 'b -> 'c) : vector 'c =
+      map2 v1 v2 f
+
+Predicates
+~~~~~~~~~~
+.. code-block:: whyml
+
+  predicate forall_ (v: vector 'a) (f: 'a -> bool) =
+    forall i: index. valid_index v i -> f v[i]
+  predicate forall2
+    (v1: vector 'a) (v2: vector 'b) (f: 'a -> 'b -> bool) =
+      length(v1) = length(v2) ->
+        forall i: index. valid_index v1 i -> f v1[i] v2[i]
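+
+As an illustrative sketch (assuming ``ieee_float.Float64`` is in scope for the
+``t`` type and its ``.>=`` comparison), one may state that every element of a
+vector is non-negative:
+
+.. code-block:: whyml
+
+  (* Holds when every element of v is non-negative. *)
+  predicate all_non_negative (v: vector t) =
+    forall_ v (fun x -> x .>= (0.0: t))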
+
+NeuralNetwork
+*************
+
+Types
+~~~~~
+
+.. code-block:: whyml
+
+  (** Type describing a neural network.
+  CAISAR is able to interpret a symbol of type nn
+  to get its control flow graph. **)
+  type nn
+  type format = ONNX | NNet
+
+Functions
+~~~~~~~~~
+
+.. code-block:: whyml
+
+  (** Returns a symbol of type nn from the neural network located at n. **)
+  function read_neural_network (n: string) (f: format) : nn
+
+  (** Returns the vector resulting from the application of the neural
+  network n to the input vector v. **)
+  function (@@) (n: nn) (v: vector 'a) : vector 'a
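+
+As a sketch, one can read a network and constrain its output on all
+well-formed inputs; the file name is illustrative, and ``has_length`` is
+assumed to be provided by the ``Vector`` theory:
+
+.. code-block:: whyml
+
+  goal first_output_positive:
+    (* The path is hypothetical; CAISAR resolves it at interpretation time. *)
+    let n = read_neural_network "nets/my_network.onnx" ONNX in
+    forall v: vector t. has_length v 3 -> (n @@ v)[0] .> (0.0: t)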
+
+Dataset
+*******
+
+Types
+~~~~~
+
+.. code-block:: whyml
+
+  (** Dataset type. A dataset 'a 'b is a vector of pairs ('a, 'b),
+    e.g. labels of type 'a paired with inputs of type 'b. **)
+  type dataset 'a 'b = vector ('a, 'b)
+  type format = CSV
+
+Functions
+~~~~~~~~~
+
+.. code-block:: whyml
+
+  (** Returns a symbol of type dataset from the dataset file located at f. **)
+  function read_dataset (f: string) (k: format) : dataset 'a 'b
+  function foreach
+    (d: dataset 'a 'b) (f: 'a -> 'b -> 'c) : vector 'c =
+      Vector.foreach d (fun e -> let a, b = e in f a b)
+
+Predicates
+~~~~~~~~~~
+
+.. code-block:: whyml
+
+  predicate forall_
+    (d: dataset 'a 'b) (f: 'a -> 'b -> bool) =
+      Vector.forall_ d (fun e -> let a, b = e in f a b)
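+
+For instance (assuming ``int.Int`` and ``ieee_float.Float64`` in scope),
+requiring that every sample of a labeled dataset carries a label between 0
+and 9 could be sketched as:
+
+.. code-block:: whyml
+
+  predicate all_labels_valid (d: dataset int (vector t)) =
+    (* The input vector i is left unconstrained here. *)
+    forall_ d (fun l i -> 0 <= l <= 9)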
+
+
+.. [F2013] Jean-Christophe Filliâtre, *One Logic to Use Them All*,
+   CADE 24 - the 24th International Conference on
+   Automated Deduction
diff --git a/doc/mnist.rst b/doc/mnist.rst
index c80180c615e20f6cf9aeed8fa74a23ae3ac00930..a703f0260b93aa84814a05e2eee7c8e85391f238 100644
--- a/doc/mnist.rst
+++ b/doc/mnist.rst
@@ -33,7 +33,8 @@ valuable for assessing robustness properties by means of formal method tools.
 CAISAR provides in `mnist_test.csv
 <https://git.frama-c.com/pub/caisar/-/blob/master/examples/mnist/csv/mnist_test.csv>`_
 a fragment (:math:`100` images) of the MNIST dataset under the
-``examples/mnist/csv`` folder. Each line in this file represents an MNIST image:
+``examples/mnist/csv`` folder.
+Each line in this file represents an MNIST image:
 in particular, the first column represents the classification label, and the
 remaining :math:`784` columns represent the greyscale value of the respective
 pixels, rescaled into :math:`[0, 1]`.
@@ -53,6 +54,8 @@ given :math:`\epsilon \in \mathbb{R}`, a classifier is
 property is the following: :math:`\forall x,x' \in X. \ \lVert x - x'
 \rVert_\infty \leq \epsilon \Rightarrow C(x) = C(x')`.
 
+.. _property:
+
 Since we actually deal with a *dataset* of finite elements, we will instead
 verify a slightly different property: given a classifier :math:`C`, an element
 :math:`x \in X`, and some perturbation :math:`\epsilon \geq 0`, it must hold
@@ -67,68 +70,82 @@ As described for the example on :ref:`acas_xu`, we first need to write a
 specification file containing a WhyML theory to describe the verification
 problem. In principle, we need to formalize the local robustness property as
 well as the notions of classifier and dataset.
+The CAISAR interpretation language `interpretation.mlw
+<https://git.frama-c.com/pub/caisar/-/blob/master/stdlib/interpretation.mlw>`_
+provides theories that define those concepts.
+We will import the relevant theories with the ``use`` keyword.
+As described in :ref:`interpretation`, the ``Vector`` theory provides
+a vector type, a getter (``[]``) operation and a ``valid_index`` predicate
+that determines whether an index is within the bounds of the vector.
+``NeuralNetwork`` defines the ``nn`` type and an application function (``@@``).
+We will also need integers and floating-point numbers
+to declare and define :math:`\epsilon`.
+
+.. code-block:: whyml
+
+   use ieee_float.Float64
+   use int.Int
+   use interpretation.Vector
+   use interpretation.NeuralNetwork
+   use interpretation.Dataset
+
+   type image = vector t
+   type label_ = int
 
-The CAISAR standard library `caisar.mlw
-<https://git.frama-c.com/pub/caisar/-/blob/master/stdlib/caisar.mlw>`_ provides
-some utilities for dealing with verification problems about classification
-datasets. Of particular interest for us is the ``robust`` predicate, defined in
-the theory ``DatasetClassificationProps`` as follows:
+We will first write some predicates to take into account the fact
+that MNIST has 10 classification labels (integers from 0 to 9),
+and that the input images are normalized (floating-point values between 0.0 and 1.0).
+We will also define a predicate that, given a label ``l`` and an image ``i``,
+checks whether the neural network ``n`` indeed advises the label ``l``.
 
 .. code-block:: whyml
 
-  predicate robust (m: model) (d: dataset) (eps: t) =
-    forall i: int. 0 <= i < d.data.length -> robust_at m d.data[i] eps
+   predicate valid_image (i: image) =
+     forall v: index. valid_index i v -> (0.0: t) .<= i[v] .<= (1.0: t)
+  
+   predicate valid_label (l: label_) = 0 <= l <= 9
+  
+   predicate advises (n: nn) (i: image) (l: label_) =
+     valid_label l ->
+     forall j: int. valid_label j ->  j <> l -> (n@@i)[l] .> (n@@i)[j]
 
-Note that the predicate is defined over a ``model``, a ``dataset`` and a
-floating-point value ``eps``. The latter determines the perturbation
-:math:`\epsilon`. The other two are custom WhyML types that respectively
-formalize the notions of classifier and dataset in CAISAR. These types are
-both defined in the ``DatasetClassification`` theory.
+We write :math:`\lVert x - x' \rVert_\infty \leq \epsilon` with another
+predicate:
+
+.. code-block:: whyml
+
+  predicate bounded_by_epsilon (i: image) (eps: t) =
+    forall v: index. valid_index i v -> .- eps .<= i[v] .<= eps
 
-Moreover, it is defined in terms of the predicate ``robust_at`` that formalizes
-the local robustness property:
+
+We can now define the property to check, which is a straightforward
+description of the property_ stated above:
 
 .. code-block:: whyml
 
-  predicate robust_at (m: model) (d: record) (eps: t) =
-    forall x': features.
-      let (x, _) = d in
-      linfty_distance x x' eps ->
-      predict m x = predict m x'
-
-Note that a ``record`` is a dataset element given as a pair of *features* and
-(classification) *label*. Morever, ``linfty_distance`` is a predicate that
-describes how two arrays of floating-point values (*i.e.* ``features``) are
-considered close up-to a pertubation ``eps``, while ``predict`` is a function
-that formalizes the execution of a model on some features to obtain the
-corresponding classification label.
-
-In order to use the ``robust`` predicate in our WhyML specification, we need
-values of types ``model`` and ``dataset``. For the former, CAISAR makes
-available the constant ``model`` upon interpreting the ``AsArray`` sub-theory
-that is built by the extension of the Why3 standard library for recognizing
-neural network ONNX and NNet formats. For the latter, the CAISAR standard
-library provides the constant ``dataset`` in ``DatasetClassification`` that will
-be interpreted as the actual dataset the user needs to provide via the
-command-line interface when launching a CAISAR verification.
-
-Assuming we have a neural network named ``MNIST_256_2.onnx`` for MNIST
-classification, the final WhyML file for specifying its local robustness on a
-dataset, with each element's feature pertubed by :math:`1 \%`, looks like this:
+   predicate robust_around (n: nn) (eps: t) (l: label_) (i: image) =
+     forall perturbed_image: image.
+       has_length perturbed_image (length i) ->
+       valid_image perturbed_image ->
+       let perturbation = perturbed_image - i in
+       bounded_by_epsilon perturbation eps ->
+       advises n perturbed_image l
+  
+   predicate robust (n: nn) (d: dataset label_ image) (eps: t) =
+     Dataset.forall_ d (robust_around n eps)
+
+Finally, to instantiate this property on concrete neural networks and data
+samples, we define a goal and check the property:
 
 .. code-block:: whyml
 
-   theory MNIST
-     use MNIST_256_2.AsArray
-     use ieee_float.Float64
-     use caisar.DatasetClassification
-     use caisar.DatasetClassificationProps
+  goal robustness:
+    let nn = read_neural_network "nets/MNIST_256_2.onnx" ONNX in
+    let dataset = read_dataset "csv/mnist_test.csv" CSV in
+    let eps = (0.0100000000000000002081668171172168513294309377670288085937500000:t) in
+    robust nn dataset eps
 
-     goal robustness:
-      robust model dataset (0.0100000000000000002081668171172168513294309377670288085937500000:t)
-   end
 
-This file is available, as is, under the ``/examples/mnist/`` folder as
+The final property file is available, as is, under the ``/examples/mnist/`` folder as
 `property.why
 <https://git.frama-c.com/pub/caisar/-/blob/master/examples/mnist/property.why>`_.
 The corresponding neural network in ONNX format is available under the
@@ -139,12 +156,12 @@ Verifying the property with CAISAR
 ==================================
 
 Now we may verify whether the previous robustness specification holds on the
-MNIST fragment ``mnist_test.csv`` by means of the nnenum prover. This can be
+MNIST fragment ``mnist_test.csv`` by means of the nnenum prover. This can be
 done via CAISAR as follows:
 
 .. code-block:: console
 
-   $ caisar verify --prover nnenum -L examples/mnist/nets --format whyml --dataset=examples/mnist/csv/mnist_test.csv examples/mnist/property.why
+   $ caisar verify --prover nnenum -L examples/mnist/ --format whyml examples/mnist/property.why
    [caisar] Goal robustness: Invalid
 
 The result tells us that there exists at least one image in ``mnist_test.csv``
diff --git a/doc/provers.rst b/doc/provers.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c6db8d0ddb64d4c916ebe8e57e7d0c1852fbee8e
--- /dev/null
+++ b/doc/provers.rst
@@ -0,0 +1,44 @@
+.. _provers:
+
+Supported provers
+*****************
+
+The supported provers are the following:
+
+PyRAT
+-----
+`PyRAT <https://pyrat-analyzer.com>`_
+is a neural network prover based on abstract interpretation.
+
+Marabou
+-------
+`Marabou <https://github.com/NeuralNetworkVerification/Marabou>`_
+is a Satisfiability Modulo Theory (SMT) solver specialized in neural network
+verification.
+
+SAVer
+-----
+`SAVer <https://github.com/abstract-machine-learning/saver>`_
+is a support vector machine prover based on abstract interpretation.
+
+nnenum
+------
+
+`nnenum <https://github.com/stanleybak/nnenum>`_ is a neural network prover that combines abstract interpretation, linear programming techniques and input split heuristics.
+
+alpha-beta-CROWN
+--------------------------
+
+`alpha-beta-CROWN <https://github.com/Verified-Intelligence/alpha-beta-CROWN>`_ is a neural network prover, winner of the VNN-COMP 2021 and 2022.
+
+SMT solvers
+-----------
+Standard SAT/SMT solvers that support the SMT-LIBv2 input language
+are also supported. As of now, only CVC5 has been tested.
+
+
+.. warning::
+   Marabou and VNN-LIB provers (nnenum, alpha-beta-CROWN) do not accept strict
+   inequalities. Thus CAISAR treats strict inequalities as non-strict ones. We
+   are aware that this is not correct in general, but it is the current common
+   practice in the community.
diff --git a/doc/usage.rst b/doc/usage.rst
index 8f2f6170552cb979167c8b7dee5977b1acf5c8cc..d62f28a7cb8733565c683da30b7e32a749dfc3ec 100644
--- a/doc/usage.rst
+++ b/doc/usage.rst
@@ -10,6 +10,46 @@ The CAISAR executable provide several subcommands, called
 using ``caisar [COMMAND]``. ``caisar --help`` provides a list of
 all available commands. Additionally, ``caisar [COMMAND] --help`` gives a synopsis for each available command.
 
+Command line options
+--------------------
+
+``caisar verify`` has the following specific command line options:
+
+* ``-D name:value`` or ``--define=name:value``: define a declared constant ``name`` with the given value ``value``. The constant must be declared in the theory file.
+* ``--dataset=FILE``: path of a dataset file (to be deprecated).
+* ``-g [theory]:goal,..,goal`` or ``--goal [theory]:goal,...,goal``: verify only the listed goals in ``theory``. The theory must be defined in the specification file. When no theory is provided, verify the specified goals for all theories.
+* ``-T theory`` or ``--theory=theory``: verify all goals in ``theory``.
+* ``-m MEM`` or ``--memlimit=MEM``: memory limit. ``MEM`` is expressed in megabytes
+  (MB); gigabytes (GB) and terabytes (TB) can also be specified. Default: 4000MB.
+* ``-t TIME`` or ``--timelimit=TIME``: time limit. ``TIME`` is expressed in seconds (s); minutes (m) and hours (h) can also be specified. Default: 20s.
+* ``-p PROVER`` or ``--prover=PROVER``: prover to use. Support is provided for
+  the following: Marabou, PyRAT, SAVer, AIMOS, CVC5, nnenum, alpha-beta-CROWN.
+  See :ref:`provers` for a detailed description of each prover's
+  capabilities.
+* ``--prover-altern=VAL``: prover alternative to use. Prover
+  alternatives are prover-specific configurations that help provers perform
+  better on specific problems. The supported prover alternative is ``ACAS``,
+  for alpha-beta-CROWN and PyRAT.
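+
+For instance, an illustrative invocation combining some of these options, with
+the goal and file names taken from the MNIST example, is:
+
+.. code-block:: console
+
+   $ caisar verify --prover nnenum --timelimit=10m --memlimit=8GB \
+       --goal MNIST:robustness -L examples/mnist/ examples/mnist/property.why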
+
+
+Logging
+*******
+
+CAISAR provides different log verbosity levels. To change the log verbosity
+level, provide the option ``--verbosity`` to the ``verify`` command. Available
+levels are ``quiet``, ``error``, ``warning``, ``info`` and ``debug`` (defaults
+to ``warning``).
+
+CAISAR also provides log *tags*. Each tag adds additional information to the
+log, independently of the log verbosity level. Note that the ``debug``
+verbosity level enables logging with all tags. To enable a particular
+tag, use the option ``--ltag=TAG``, with ``TAG`` one of the following:
+
+* ``Autodetect``: display the output of the autodetect routine of CAISAR described in :ref:`prover-registration`
+* ``ProverSpec``: display the actual specification provided to the prover
+* ``ProverCall``: display the actual command used to call the prover
+* ``InterpretGoal``: display the goal interpretation
+* ``StackTrace``: display the stack trace upon a CAISAR error
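+
+For instance, to display the specification actually sent to the prover when
+verifying the MNIST example:
+
+.. code-block:: console
+
+   $ caisar verify --prover nnenum -L examples/mnist/ --ltag=ProverSpec examples/mnist/property.why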
+
 .. _prover-registration:
 
 Prover registration
@@ -19,9 +59,7 @@ CAISAR relies on external provers to work. You must install
 them first, then point CAISAR to their location. Please
 refer to each prover documentation to install them.
 
-CAISAR
-already support several provers, listed in
-:ref:`overalldesign`.
+CAISAR already supports several provers, listed in :ref:`provers`.
 Assuming you have installed a prover with an entry in the
 ``caisar-detection.conf`` file, you can register the prover to CAISAR using the
 following command: ``PATH=$PATH:/path/to/solver/executable caisar config -d``.
@@ -40,7 +78,7 @@ documentation to get a first grasp on using WhyML.
 
 Provided a file ``trust.why`` containing a goal to verify, the command ``caisar verify --prover=PROVER trust.why``
 will verify the goals using the specified prover. A list of
-supported provers is available in :ref:`overalldesign`. The prover
+supported provers is available in :ref:`provers`. The prover
 must already be registered by CAISAR. Currently, only one
 prover can be selected at a time; future work will allow
 selecting multiple provers and orchestrate verification
@@ -49,25 +87,6 @@ Internally, CAISAR will translate the goals into a form that
 is acceptable by the targeted prover, generating a file (the
 ``%f`` defined in the previous section).
 
-Logging
-*******
-
-CAISAR provides different log verbosity levels. To change the log verbosity
-level, provide the option ``--verbosity`` to the ``verify`` command. Available
-levels are ``quiet``, ``error``, ``warning``, ``info`` and ``debug`` (defaults
-to ``warning``).
-
-CAISAR also provides log *tags*. Each tag adds additional information in the
-log, independently of the log verbosity level. However, note that the ``debug``
-verbosity level allows logging with all tags enabled. To enable a particular
-tag, use the option ``--ltag=TAG``, with ``TAG`` one of the following:
-
-* ``Autodetect``: display the output of the autodetect routine of CAISAR described in :ref:`prover-registration`
-* ``ProverSpec``: display the actual specification provided to the prover
-* ``ProverCall``: display the actual command used to call the prover
-* ``InterpretGoal``: display the goal interpretation
-* ``StackTrace``: display the stack trace upon a CAISAR error
-
 Built-in properties
 -------------------
 In addition to all the theories provided by Why3, CAISAR
@@ -128,6 +147,5 @@ Parsing the output of the prover
 ********************************
 CAISAR relies on Why3's `drivers <https://why3.lri.fr/doc/technical.html#drivers-for-external-provers>`_ to parse the output of the prover.
 
-
 If you add support for a new prover, consider opening a merge
 request on our `forge <https://git.frama-c.com/pub/caisar/-/merge_requests>`_!
diff --git a/examples/mnist/property.why b/examples/mnist/property.why
index 1642bd92159e0cd3e8ddf3eed7584d4e6b32df27..da1190ca4d67dd97e170b17140e7810df7840bfc 100644
--- a/examples/mnist/property.why
+++ b/examples/mnist/property.why
@@ -1,9 +1,39 @@
 theory MNIST
-  use MNIST_256_2.AsArray
   use ieee_float.Float64
-  use caisar.DatasetClassification
-  use caisar.DatasetClassificationProps
+  use int.Int
+  use interpretation.Vector
+  use interpretation.NeuralNetwork
+  use interpretation.Dataset
+
+  type image = vector t
+  type label_ = int
+
+  predicate valid_image (i: image) =
+    forall v: index. valid_index i v -> (0.0: t) .<= i[v] .<= (1.0: t)
+
+  predicate valid_label (l: label_) = 0 <= l <= 9
+
+  predicate advises (n: nn) (i: image) (l: label_) =
+    valid_label l ->
+    forall j: int. valid_label j ->  j <> l -> (n@@i)[l] .> (n@@i)[j]
+
+  predicate bounded_by_epsilon (i: image) (eps: t) =
+    forall v: index. valid_index i v -> .- eps .<= i[v] .<= eps
+
+  predicate robust_around (n: nn) (eps: t) (l: label_) (i: image) =
+    forall perturbed_image: image.
+      has_length perturbed_image (length i) ->
+      valid_image perturbed_image ->
+      let perturbation = perturbed_image - i in
+      bounded_by_epsilon perturbation eps ->
+      advises n perturbed_image l
+
+  predicate robust (n: nn) (d: dataset label_ image) (eps: t) =
+    Dataset.forall_ d (robust_around n eps)
 
   goal robustness:
-   robust model dataset (0.0100000000000000002081668171172168513294309377670288085937500000:t)
+    let nn = read_neural_network "nets/MNIST_256_2.onnx" ONNX in
+    let dataset = read_dataset "csv/mnist_test.csv" CSV in
+    let eps = (0.0100000000000000002081668171172168513294309377670288085937500000:t) in
+    robust nn dataset eps
 end