A prominent use case of CAISAR is to model a specification for an artificial intelligence
system, and to verify that the specification holds for a given system.
The modelling uses `WhyML <https://why3.lri.fr/doc/syntaxref.html>`_,
a typed first-order logic language.
Examples of WhyML specifications are available in the `source code <https://git.frama-c.com/pub/caisar/-/tree/master/examples/>`_.
You may also read the :ref:`examples` section of this
documentation to get a first grasp on using WhyML.
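As a first taste, the sketch below shows the general shape of a WhyML goal over a neural network. It is illustrative only: the network theory name (``TestNetworkONNX``), the ``net_apply`` symbol, and the input/output arities are assumptions modeled on the CAISAR examples, not a verbatim excerpt.

.. code-block:: whyml

   theory T
     (* Illustrative sketch: TestNetworkONNX and net_apply are
        assumed names; see the examples directory for real ones. *)
     use TestNetworkONNX.AsTuple
     use ieee_float.Float64
     use caisar.NN

     (* If every input lies strictly between 0 and 0.5,
        then so does the first output. *)
     goal G: forall x1 x2 x3.
       (0.0:t) .< x1 /\ x1 .< (0.5:t) ->
       let (y1, _) = net_apply x1 x2 x3 in
       (0.0:t) .< y1 /\ y1 .< (0.5:t)
   end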
Provided a file ``trust.why`` containing a goal to verify, the command ``caisar verify --prover=PROVER trust.why``
verifies the goals using the specified prover. A list of
supported provers is available in :ref:`supported_provers`. The prover
must already be registered by CAISAR. Currently, only one
prover can be selected at a time; future work will allow
selecting multiple provers and orchestrating verification
strategies.
Internally, CAISAR translates the goals into a form that
is acceptable by the targeted prover, generating a file (the
``%f`` defined in the previous section).
The list of actively developed prover supports includes:

* Tools that support SMT-LIB as input: `Alt-Ergo <https://alt-ergo.ocamlpro.com/>`_, `Z3 <https://github.com/Z3Prover/z3/wiki>`_, `Colibri <https://colibri.frama-c.com/>`_
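For instance, assuming Alt-Ergo is registered with CAISAR under the name ``Alt-Ergo`` (the exact registered name may differ on your installation), the generic command instantiates to:

.. code-block:: console

   $ caisar verify --prover=Alt-Ergo trust.why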
Built-in properties
-------------------
In addition to all the theories provided by Why3, CAISAR
provides additional theories that model commonly desirable properties of machine learning programs.
All these predicates are located in the file ``stdlib/caisar.mlw``.
Among them are theories describing classification datasets,
as well as local and global robustness around a neighborhood.
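As a hedged illustration of how such a built-in theory might be used, the sketch below states a dataset-wide robustness goal. The names involved (``caisar.Dataset``, ``robust``, ``model``, ``dataset``) are assumptions for illustration; check ``stdlib/caisar.mlw`` for the actual theory and predicate names.

.. code-block:: whyml

   theory RobustnessSketch
     (* Hypothetical sketch: Dataset, robust, model and dataset are
        assumed names; consult stdlib/caisar.mlw for the real API. *)
     use ieee_float.Float64
     use caisar.Dataset

     (* The classifier keeps its decision on every dataset element
        within a neighborhood of radius 0.01. *)
     goal G: robust model dataset (0.01:t)
   end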