Welcome to CAISAR's documentation!
**********************************
CAISAR documentation
====================
.. image:: _static/media/caisar_logo.png
*Written in November 2022*
CAISAR is an open-source platform for the Characterization of Artificial
Intelligence Safety And Robustness.
Its expressive modelling language allows the description of
a wide variety of specifications to enforce on a machine learning system.
The specification is then decomposed into subproblems,
which are sent to specialized provers.
CAISAR aims to be component-agnostic: it can model and assess the trustworthiness of a hybrid artificial intelligence system that mixes symbolic and data-driven approaches.
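
A typical example of such a specification is the local robustness of a
classifier :math:`f` around a reference input :math:`x_0`: every input
close enough to :math:`x_0` must receive the same label. Formally, for a
user-chosen perturbation radius :math:`\epsilon`:

.. math::

   \forall x.\ \lVert x - x_0 \rVert_\infty \le \epsilon
   \;\Longrightarrow\;
   \operatorname{argmax}_i f_i(x) = \operatorname{argmax}_i f_i(x_0)
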
CAISAR is a platform under active development at CEA LIST,
aiming to provide a wide range of features to characterize
the robustness of artificial-intelligence-based software.
It is available as free software under the LGPL v2.1
license. The code is hosted on our `GitLab instance
<https://git.frama-c.com/pub/caisar>`_.
Architecture and components
---------------------------
CAISAR relies on the `Why3 verification platform <https://why3.lri.fr/>`_
to provide both the basis of the specification language
(WhyML) and an intermediate representation of logical
formulas and goals.
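
As an illustration, a toy property over a small network with five inputs
and five outputs could be written along the following lines. This is a
minimal sketch: the ``TestNetwork.AsTuple`` and ``caisar.NN`` modules and
the ``net_apply`` symbol are assumptions modelled on the examples shipped
with CAISAR and may differ between versions; see :ref:`usage` for
complete, up-to-date examples.

.. code-block:: whyml

   theory ToyProperty
     use TestNetwork.AsTuple
     use ieee_float.Float64
     use caisar.NN

     (* Toy goal: whenever the first input of the network lies in
        [0.0, 0.5], its first output also lies in [0.0, 0.5].
        Module and symbol names are illustrative assumptions. *)
     goal G: forall x1 x2 x3 x4 x5.
       (0.0:t) .< x1 /\ x1 .< (0.5:t) ->
       let (y1, _, _, _, _) = net_apply x1 x2 x3 x4 x5 in
       (0.0:t) .< y1 /\ y1 .< (0.5:t)
   end

Each goal in such a theory becomes a proof obligation that CAISAR
translates and dispatches to the selected prover.
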
CAISAR supports the verification of several kinds of machine learning models.
Neural Networks (NN) are handled through the `ONNX exchange format <https://onnx.ai/>`_.
Support Vector Machines (SVM) are supported through an ad hoc CSV format.
Gradient boosting models and decision trees will soon be
supported.
You may browse the documentation for :ref:`installation` and :ref:`usage`.
Provers
-------
CAISAR heavily relies on external software to provide
answers on proof obligations. Those programs are called
"provers", although some of them may not directly provide a
proof. The provers currently supported by CAISAR include:

* PyRAT, a neural network verifier powered by abstract interpretation (some developers of CAISAR are also working on PyRAT)
* `Marabou <https://github.com/NeuralNetworkVerification/Marabou>`_, a Satisfiability Modulo Theories (SMT) solver specialized in neural network verification
* `SAVer <https://github.com/abstract-machine-learning/saver>`_, a tool for verifying reachability properties on SVMs