diff --git a/doc/_static/media/caisar_logo.png b/doc/_static/media/caisar_logo.png
new file mode 100644
index 0000000000000000000000000000000000000000..6b5c44d6b3e6bca1aed7e2e3f77c133440f45068
Binary files /dev/null and b/doc/_static/media/caisar_logo.png differ
diff --git a/doc/acas_xu.rst b/doc/acas_xu.rst
index 1176bc4936c0ecbe50b80afc77a53f428a07a0c2..7f558448acfce15f6318997a6eb4630797e8297b 100644
--- a/doc/acas_xu.rst
+++ b/doc/acas_xu.rst
@@ -18,7 +18,8 @@ The system considers a 2D plane with two entities: the monitored airplane (the
 "ownship") and an incoming airplane (the "intruder").
 
 .. image:: _static/media/acas_xu.png
-  :alt: A picture with two aircrafts seen from above
+  :alt: A schematic of two aircraft, seen from above, displaying the relative
+   angles of their trajectories
 
 In the original implementation, the system state has seven inputs:
 
@@ -39,10 +40,10 @@ system can give:
 * :math:`WR`: Weak Right
 * :math:`SR`: Strong Right
 
-In the paper, the authors consider :math:`45` neural networks, for several
-values of :math:`\tau` and :math:`a_{prev}`, that operate on five inputs only
-while maintaining the same number of outputs. We will consider five-inputs
-networks in the future.
+In the original paper, the authors consider :math:`45` neural networks, for
+several values of :math:`\tau` and :math:`a_{prev}`, that operate on five
+inputs only while maintaining the same number of outputs. We will consider
+five-input networks in the remainder of this example.
 
 Properties
 ----------
@@ -112,16 +113,11 @@ basic programming data structures (arrays, queues, hash tables, *etc.*).
 Let us try to model the property :math:`\phi_1` defined earlier. We will call
 our theory ``ACASXU_P1``.
 
-.. code-block:: whyml
-
-   theory ACASXU_P1
-   end
-
 We will need to write down some numerical values. As of now, CAISAR allows
 writing values using floating-point arithmetic only. Why3 defines a float type
 and the relevant arithmetic operations according to the IEEE floating-point
 standard in a theory, astutely called ``ieee_float``. Specifically, we will
-import the ``Float64`` subtheory, that defines everything we need for 64-bit
+import the ``Float64`` sub-theory, which defines everything we need for 64-bit
 precision floating-point numbers. We thus import it in our theory using the
 ``use`` keyword.
 
@@ -136,10 +132,10 @@ Our file looks like this so far:
 We would like to verify our property given a certain neural network. To do this,
 CAISAR extends the Why3 standard library for recognizing neural networks in ONNX
 and NNet formats. Given a file of such formats, CAISAR internally builds a
-theory named as the original neural network file, that contains the subtheories
+theory named after the original neural network file, with the sub-theories
 ``AsTuple`` and ``AsArray`` that provide logical symbols for describing the
 input-output interface of a neural network as tuples and arrays, respectively. We
-will only consider the ``AsTuple`` subtheory for this tutorial.
+will only consider the ``AsTuple`` sub-theory for this tutorial.
 
 In particular, the theory built by CAISAR is equivalent to the following WhyML
 file:
@@ -166,7 +162,7 @@ network.
 
 As our neural network takes five inputs and provides five outputs, adding ``use
 filename.AsTuple`` to our theory will provide a ``nn_apply`` function symbol
-that takes a 5-elements tuple as input, and provides a 5-elements tuple as
+that takes a five-element tuple as input, and provides a five-element tuple as
 output. Assuming we have a neural network named ``ACASXU_1_1.onnx``, the WhyML
 file looks like this:
 
@@ -248,6 +244,11 @@ Note that the previous commands set the verification time limit to 10 minutes
 file ``ACASXU_1_1.onnx`` that is used by the ``ACASXU_P1`` theory in
 ``property_1.why``.
 
+Under the hood, CAISAR first translates each goal into a form compatible with
+the targeted provers, then calls the provers on it, and finally interprets and
+post-processes the prover results to display them to the user in a coherent
+form.
+
 Marabou is not able to prove the property valid in the specified time limit,
 while nnenum does. In general, the result of a CAISAR verification is typically
 either ``Valid``, ``Invalid``, ``Unknown`` or ``Timeout``. CAISAR may output
@@ -278,7 +279,7 @@ However, these two solutions are not advisable in terms of clarity and
 maintainability.
 
 Reassuringly enough, WhyML provides all necessary features to come up with a
-better solution. First, WhyML allows for naming used (sub)theories in order to
+better solution. First, WhyML allows for naming used (sub-)theories in order to
 distinguish identical logic symbols coming from different theories. This is
 critical for identifying the correct ``nn_apply`` symbols in the two
 verification goals we will define. Second, WhyML allows for the hypotheses on
@@ -342,6 +343,12 @@ We can then verify the resulting verification problem as before:
     [caisar] Goal P3_1_1: Valid
     [caisar] Goal P3_2_7: Valid
 
+Note that, since Marabou does not support disjunctive formulas, CAISAR first
+splits a disjunctive goal formula into conjunctive sub-goals, then calls
+Marabou on each sub-goal, and finally post-processes the sub-results to provide
+the final result corresponding to the original goal formula.
+
 
 .. [Manfredi2016] G. Manfredi and Y. Jestin, *An introduction to ACAS Xu and the
    challenges ahead*, 2016 IEEE/AIAA 35th Digital Avionics Systems Conference
diff --git a/doc/conf.py b/doc/conf.py
index 42a6c97be5114c72b61c4f133852410c8dfac530..b7903e1fa1522b0af82091918a642fd6dc6ddce8 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -23,6 +23,7 @@ sys.path.append(os.path.abspath('./ext'))
 project = 'CAISAR'
 copyright = '2022, Michele Alberti, Julien Girard, François Bobot'
 author = 'Michele Alberti, Julien Girard, François Bobot'
+version = '0.1'
 
 
 # -- General configuration ---------------------------------------------------
diff --git a/doc/index.rst b/doc/index.rst
index f042f054973abf8bb06115866019e767ecaefccd..5c99e5fbb42f7d00c02b6451fc5b776410ecd321 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,35 +1,77 @@
-.. Caisar documentation master file, created by
+.. CAISAR documentation master file, created by
    sphinx-quickstart on Thu Jun 16 11:14:48 2022.
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
-Welcome to Caisar's documentation!
-**********************************
+.. _index:
 
-CAISAR (Characterizing AI Safety And Robustness) is a platform
-under active development at CEA LIST, aiming to provide a
-wide range of features to characterize the robustness of
-artificial intelligence based software.
-It is available as a free software under the LGPLV2.1
-license. Code is hosted at
-https://git.frama-c.com/pub/caisar.
+The CAISAR Platform
+====================
 
-You may browse the documentation for :ref:`installation` and :ref:`usage`.
+.. image:: _static/media/caisar_logo.png
 
+*version:* |version|
 
-Content
-=======
+CAISAR is an open-source platform for the Characterization of Artificial
+Intelligence Safety And Robustness. Its expressive modelling language allows the
+description of a wide variety of specifications to enforce on a machine learning
+system. The specification is then decomposed into subproblems that are sent to
+specialized provers. CAISAR aims to be component-agnostic: it can model and
+assess the trustworthiness of a hybrid artificial intelligence system that
+mixes symbolic and data-driven approaches.
+
+It is available as free software under the LGPLv2.1 license. The code is
+hosted on our `GitLab instance <https://git.frama-c.com/pub/caisar>`_.
+
+Architecture and components
+---------------------------
+
+CAISAR relies on the `Why3 verification platform <https://why3.lri.fr/>`_ to
+provide both the basis of the specification language (WhyML), and an
+intermediate representation of logical formulas and goals.
+
+CAISAR supports the verification of multiple kinds of machine learning models.
+Neural Networks (NN) are handled through the `ONNX exchange format
+<https://onnx.ai/>`_. Support Vector Machines (SVM) are handled through an
+ad-hoc CSV format. Gradient boosting models and decision trees will soon be
+supported.
+
+.. _supported_provers:
+
+Provers
+-------
+
+CAISAR heavily relies on external software to provide answers on proof
+obligations. Those programs are called *provers*, although some of them may
+not directly provide a logical proof.
+
+The supported provers are the following:
+
+* `PyRAT <https://laiser.frama-c.com/laiser-websites/pyrat-website/>`_, a
+  neural network verifier powered by abstract interpretation (some developers
+  of CAISAR are also working on PyRAT)
+* `Marabou <https://github.com/NeuralNetworkVerification/Marabou>`_, a
+  Satisfiability Modulo Theory (SMT) solver specialized in Neural Network
+  verification
+* `SAVer <https://github.com/abstract-machine-learning/saver>`_, a tool for
+  verifying reachability properties on SVMs
+* `nnenum <https://github.com/stanleybak/nnenum>`_, a neural network verifier
+* "Traditional" SMT solvers that support SMT-LIBv2 input language (including
+  `Alt-Ergo <https://alt-ergo.ocamlpro.com/>`_, `Z3
+  <https://github.com/Z3Prover/z3/wiki>`_, `Colibri
+  <https://colibri.frama-c.com/>`_)
+
+If you would like CAISAR to support a tool you are developing, please reach
+out so we can discuss it!
+
+Browse the documentation
+------------------------
+
+Installation instructions for CAISAR are available at :ref:`installation`. A
+description of core usage is available at :ref:`usage`. We also provide some
+:ref:`examples`.
 
 .. toctree::
    :maxdepth: 3
-   :caption: Contents
 
    installation
    usage
    examples
-
-
-..
-   Indices and tables
-   ==================
-   * :ref:`search`
diff --git a/doc/installation.rst b/doc/installation.rst
index ca8c5117a6a9a0d7d125a1fdb33e5450cb1c8269..46f5b962f2f16c643b25e0c76a2e084d24c47056 100644
--- a/doc/installation.rst
+++ b/doc/installation.rst
@@ -2,6 +2,7 @@
 
 Installation
 ============
+
+*version:* |version|
 
 The latest release of CAISAR is available as an
 `opam <https://opam.ocaml.org/>`_ package or
diff --git a/doc/mnist.rst b/doc/mnist.rst
index 00d62cfb410c77725c22739c3dec54ad84b056c6..cee05c3aaf468ce46319d9084d9cbf7cbabf99c2 100644
--- a/doc/mnist.rst
+++ b/doc/mnist.rst
@@ -101,7 +101,7 @@ corresponding classification label.
 
 In order to use the ``robust`` predicate in our WhyML specification, we need
 values of types ``model`` and ``dataset``. For the former, CAISAR makes
-available the constant ``model`` upon interpreting the ``AsArray`` subtheory
+available the constant ``model`` upon interpreting the ``AsArray`` sub-theory
 that is built by the extension of the Why3 standard library for recognizing
 neural network ONNX and NNet formats. For the latter, the CAISAR standard
 library provides the constant ``dataset`` in ``DataSetClassification`` that will
diff --git a/doc/modules.rst b/doc/modules.rst
deleted file mode 100644
index 06a4e808612d7820300b3bfef461cc99ae44b711..0000000000000000000000000000000000000000
--- a/doc/modules.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-API documentation
-=================
-
-To be generated with odoc.
-
-.. toctree::
-   :maxdepth: 4
-
-   nier_cfg
diff --git a/doc/usage.rst b/doc/usage.rst
index 70cd0bbd3cbdde766435344e9e4343596a2bb551..5344dc0d5e98135fae3071d9c84d33bf924c72d1 100644
--- a/doc/usage.rst
+++ b/doc/usage.rst
@@ -1,31 +1,74 @@
 .. _usage:
 
-Usage
-=====
+Using CAISAR
+============
 
-Property verification
----------------------
+*version:* |version|
+
+Prover registration
+-------------------
+
+CAISAR relies on external provers to work. You must install them first, then
+point CAISAR to their location. Please refer to each prover's documentation
+for installation instructions.
 
-CAISAR can be used to verify properties on neural networks.
-The prototype command is ``caisar verify --prover=PROVER property.mlw``.
-File ``property.mlw`` defines the goal you want to verify,
-written in the Whyml language.
-Example of ``.mlw`` files can be seen in tests folder.
+Provers tend to be complex programs with multiple options. Some combinations of
+options may be well suited to one verification problem, but inefficient for
+another. As provers also have different interfaces, it is necessary to describe
+how to call them in a way CAISAR can understand. To do so, you must register
+provers in the ``config/caisar-detection-data.conf`` file. Each supported
+prover is registered by specifying the following fields (a sketch of an entry
+is shown after the list):
 
-External verifiers detection
-----------------------------
+* ``name``: the name under which CAISAR will know the prover
+* ``exec``: the prover's executable name
+* ``alternative`` (optional): an alternative configuration for the
+  prover. To use it with CAISAR, use the option
+  ``--prover-altern``. Useful when you want to use the same
+  prover on different problems.
+* ``version_switch``: the command used to output the prover's
+  version
+* ``version_regexp``: a regular expression parsing the
+  output of ``version_switch``
+* ``version_ok``: the version of the prover supported by CAISAR. Provers
+  should only be used with their supported versions.
+* ``command``: the actual command CAISAR will use to call the prover. There
+  are a few built-in expressions provided by CAISAR:
 
-CAISAR relies on external verifiers to work. You must install
-them first, then point CAISAR to their location.
-To do so,
-use ``DIR=/path/to/solver/exec caisar config -d.``
+  * ``%f``: the generated property file sent to the prover
+  * ``%t``: the time limit value
+  * ``%{nnet-onnx}``: the name of the neural network file
+* ``driver``: location of the CAISAR driver for the prover, if any
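+
+As an illustration, here is a sketch of what a registration entry could look
+like. The prover name, executable, version values, and regular expression are
+hypothetical; refer to the entries already present in
+``config/caisar-detection-data.conf`` for the exact section and field syntax
+expected by CAISAR.
+
+.. code-block:: ini
+
+   # Hypothetical entry; mirror an existing entry for the exact syntax.
+   [ATP hprover]
+   name = "HypotheticalProver"
+   exec = "hprover"
+   version_switch = "--version"
+   version_regexp = "HypotheticalProver \\([0-9.]+\\)"
+   version_ok = "1.0"
+   command = "hprover --timeout %t %f"
+   driver = "hprover_driver"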
 
-List of supported verifiers:
 
-* PyRAT
-* `Marabou <https://github.com/NeuralNetworkVerification/Marabou>`_
-* `SAVer <https://github.com/abstract-machine-learning/saver>`_
+Assuming you have installed a prover and filled in the
+``caisar-detection-data.conf`` file, you can register the prover with CAISAR
+using the following command: ``DIR=/path/to/solver/exec caisar config -d``.
+
+Property verification
+---------------------
+
+A prominent use case of CAISAR is to model a specification for an artificial
+intelligence system, and to verify that a given system satisfies it.
+
+The modelling uses `WhyML <https://why3.lri.fr/doc/syntaxref.html>`_, a typed
+first-order logic language. Examples of WhyML files are available in the
+`source code <https://git.frama-c.com/pub/caisar/-/tree/master/examples/>`_.
+You may also read the :ref:`examples` section of this documentation to get a
+first grasp on using WhyML.
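+
+As a minimal sketch, a WhyML file containing a single goal could look as
+follows. It involves no machine learning model and merely compares two 64-bit
+floating-point constants using the ``ieee_float.Float64`` theory:
+
+.. code-block:: whyml
+
+   theory Trust
+     use ieee_float.Float64
+
+     (* A trivially valid goal comparing two float constants. *)
+     goal G: (0.0:t) .< (1.0:t)
+   end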
 
-List of actively developed supports:
+Provided a file ``trust.why`` containing a goal to verify, the command
+``caisar verify --prover=PROVER trust.why`` will verify the goals using the
+specified prover. A list of supported provers is available in
+:ref:`supported_provers`. The prover must already be registered with CAISAR.
+Currently, only one prover can be selected at a time; future work will allow
+selecting multiple provers and orchestrating verification strategies.
+Internally, CAISAR translates the goals into a form that is accepted by the
+targeted prover, generating a file (the ``%f`` defined in the previous section).
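+
+For instance, with the sketch file above and assuming the Alt-Ergo SMT solver
+is registered, a session could look as follows (the prover name and output may
+differ depending on your setup):
+
+.. code-block:: console
+
+   $ caisar verify --prover=Alt-Ergo trust.why
+   [caisar] Goal G: Valid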
 
-* Tools that support SMTLIB as inputs: `Alt-Ergo <https://alt-ergo.ocamlpro.com/>`_, `Z3 <https://github.com/Z3Prover/z3/wiki>`_, `Colibri <https://colibri.frama-c.com/>`_
+Built-in properties
+-------------------
+
+In addition to all the theories provided by Why3, CAISAR provides additional
+theories that model commonly desirable properties for machine learning
+programs. Those predicates are located in the file ``stdlib/caisar.mlw``.
+Among them are theories describing classification datasets, local robustness
+around a neighborhood, and global robustness.
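+
+Building on the MNIST example of this documentation, a dataset robustness
+specification could be sketched as follows. The network file name and the
+``use`` paths are assumptions; check ``stdlib/caisar.mlw`` and the MNIST
+example for the actual theory and predicate names.
+
+.. code-block:: whyml
+
+   theory MNIST_Robust
+     (* Sketch only: MNIST_model and the caisar paths are assumptions. *)
+     use ieee_float.Float64
+     use MNIST_model.AsArray
+     use caisar.DataSetClassification
+
+     (* 0.375 is exactly representable in binary64. *)
+     goal robustness: robust model dataset (0.375:t)
+   end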