---
layout: fc_discuss_archives
title: Message 43 from Frama-C-discuss on August 2015
---
Hello, Tim.

On Tue, Aug 25, 2015 at 7:21 PM, Tim Newsham <tim.newsham at gmail.com> wrote:

> "Usually, the advice if you want to verify your code for an arbitrary
> length /n/ is to use a constant N in your code and then define a concrete
> value for N on the command line before doing the analysis, using a shell
> script to change N from 0 to the-biggest-value-you-want. Of course, you'll
> need to get all analysis results and check you have no warning in each one
> of them."
>
> Hmm.. this is a useful technique, but I think frama's value analysis could
> make it much more convenient by building in support in frama. For
> example, why not support a "forall N in 1..2048" type notation and then
> repeat the analysis for each value, and then finally merge the results?

You are looking for the value analysis built-in Frama_C_interval_split, documented at
http://blog.frama-c.com/index.php?post/2012/10/10/RERS-2012-competition-problems-1-9
You will need to set the -slevel option high enough in conjunction with it for it to work properly.

> This would probably be a bit more efficient than using an external
> driver to vary the parameters, provide better merging of results and
> be more convenient to run.

It is not that clear-cut. You have information that the analyzer does not have: you know that a memory state with n equal to 1 is never included in a memory state with n equal to 2, and so on. The analyzer does not know that, and at each statement of your program it will ask itself whether the new memory state it has just computed (including the binding of variable n to 2) is included in a memory state it saw before (one that included the binding of n to 1). This is just an example, but generally speaking, the cost of larger analyses may be more than linear in the number of states the analyzer has to handle, though I did my best to ensure that it scales smoothly. On the other hand, with n separate analyses, you pay the start-up costs n times.
The start-up time includes the time to scan for all installed plug-ins, load and initialize them (about 1/10 s) and the time to parse the source code and build the Abstract Syntax Tree (which can be up to, say, one minute for 100 000 lines of code).

It would be possible to have the best of both worlds by writing a plug-in that restarts the value analysis as many times as necessary on the same AST, saving on the start-up time without making each analysis more complex. In fact, Sven Mattsen did exactly this during an internship a few years ago:
http://www.sts.tu-harburg.de/research/spalter.html

(I recently tried to port the resulting plug-in, Spalter, to more recent Frama-C versions, and I would advise against trying that. There were enough API changes to take into account that you might as well start from the article and do it again from scratch.)

Otherwise, you can keep the start-up times in check by not putting the entire OpenSSL source code on the command line but only, if you are verifying SHA-1, the C files that provide SHA-1 and those files' dependencies.
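For instance, a reduced command line might look like the sketch below. The file names are purely illustrative (I have not checked OpenSSL's actual layout here), and the -slevel value is a placeholder:

```
# Hypothetical: analyze only the SHA-1 sources and their direct
# dependencies instead of passing all of OpenSSL to the parser.
frama-c -val -slevel 100 crypto/sha/sha1dgst.c crypto/mem_clr.c
```

The point is simply that the AST-building cost is paid only for the files you list.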