diff --git a/_data/plugins_categories.yml b/_data/plugins_categories.yml
index 801dbf5c3d5c3fef7baa651fe8dc863a7aa9b45a..7a87f7272a91ffe8030988860f66095cc8251e6c 100644
--- a/_data/plugins_categories.yml
+++ b/_data/plugins_categories.yml
@@ -13,11 +13,11 @@
 - name: Code transformation
   key: code
 
-- name: For browsing unfamiliar code
+- name: Browsing unfamiliar code
   key: browsing
 
-- name: Front-end for other languages
+- name: Front-ends for other languages
   key: front
 
-- name: For concurrent/multi-threaded programs
+- name: Concurrent/multi-threaded programs
   key: concurrent
diff --git a/_fc-plugins/e-acsl.md b/_fc-plugins/e-acsl.md
index 722df69c5e18ecd3841f0b2e5eb84f39241a1557..2e0c913494cde3d8b8143eb7c27f65f7b5388e2c 100644
--- a/_fc-plugins/e-acsl.md
+++ b/_fc-plugins/e-acsl.md
@@ -5,11 +5,11 @@ description: Runtime Verification Tool
 key: main
 manual_pdf: http://frama-c.com/download/e-acsl/e-acsl-manual.pdf
 additional:
-  - name:  "E-ACSL Language Reference Manual"
-    short: "Language Reference"
+  - name:  "E-ACSL language reference manual"
+    short: "Language reference"
     link:  http://frama-c.com/download/e-acsl/e-acsl.pdf
   - name:  "E-ACSL Language Reference Implementation Manual"
-    short: "Language Implementation"
+    short: "Language implementation"
     link:  http://frama-c.com/download/e-acsl/e-acsl-implementation.pdf
 distrib_mode: main
 ---
diff --git a/_posts/2011-01-22-Verifying-numerical-precision-with-Frama-Cs-value-analysis.html b/_posts/2011-01-22-Verifying-numerical-precision-with-Frama-Cs-value-analysis.html
index 797ffd539eb062b828518be4f2764b32a271fe77..caa7713e4caa31d1f0154c9786bcd341c3c20579 100644
--- a/_posts/2011-01-22-Verifying-numerical-precision-with-Frama-Cs-value-analysis.html
+++ b/_posts/2011-01-22-Verifying-numerical-precision-with-Frama-Cs-value-analysis.html
@@ -8,7 +8,7 @@ title: "Verifying numerical precision with Frama-C's value analysis"
 summary: 
 ---
 {% raw %}
-<p>Frama-C's value analysis wasn't aimed at verifying numerical precision of C functions when it was conceived. There already was <a href="http://www-list.cea.fr/labos/gb/LSL/fluctuat/index.html" hreflang="en">a specialized project for this purpose</a>.</p> 
+<p>Frama-C's value analysis wasn't aimed at verifying numerical precision of C functions when it was conceived. There already was <a href="http://www.lix.polytechnique.fr/~putot/fluctuat.html" hreflang="en">a specialized project for this purpose</a>.</p> 
 <p>However  the value analysis needed to handle floating-point computations <strong>correctly</strong> (that is  without omitting any of the possible behaviors the function can have at run-time) in order to be generally correct. Some industrial users at the same time wanted floating-point values to be handled more precisely than the "always unknown" placeholder used in very early versions. Floating-point computations are notoriously underspecified in C  so "handling floating-point computations correctly" became something of a journey  with gradual improvements in each release since Beryllium. The goal was only reached in the last release (missing support for the single-precision <code>float</code> type was the last known limitation  and that was removed in Carbon).</p> 
 <p>Besides  in the value analysis  the primary solution to work around issues is <strong>case analysis</strong>. Again  the underlying algorithms were there from the start but they have been continuously refined until the Carbon release. For a numerical function  case analysis typically means subdividing the definition domain of the function in small sub-intervals on which interval arithmetic is as precise as the sub-interval is narrow.</p> 
 <p>Interestingly  the combination of these different elements means that the value analysis can be used to verify the numerical precision of a C program  in a context where the required precision is not too high but it's absolutely vital that it is uniformly reached on the entire definition interval of the program.</p> 
diff --git a/_posts/2011-03-03-This-time-for-real-verifying-function-cosine.html b/_posts/2011-03-03-This-time-for-real-verifying-function-cosine.html
index ec8324261668e7811324ab381161f4d463ca8129..1c67569c05f16e725d515e83a661fafb526e64c2 100644
--- a/_posts/2011-03-03-This-time-for-real-verifying-function-cosine.html
+++ b/_posts/2011-03-03-This-time-for-real-verifying-function-cosine.html
@@ -22,7 +22,7 @@ summary:
 </pre> 
 <p>Next  I arranged to have the value analysis repeatedly analyze these lines with bounds passed to <code>Frama_C_double_interval</code> that  taken together  cover entirely the range from 0 to π/4. I will not say how I did  because it is not very interesting. Besides  the method I used is convenient enough to discourage someone from writing an Emacs Lisp macro to generate sub-division assertions. That would be an unwanted side-effect: it in fact has complementary advantages with the would-be assertion generation macro that I recommend someone look into.</p> 
 <p>The verification used a few hours of CPU time (I did not try very hard to make it fast). One thing I did was adapt the input sub-interval width to the part of the curve that was being verified  from two millionths for inputs near zero to a quarter of a millionth for inputs near π/4.</p> 
-<p>During this experiment  I actually used the cosine implementation from <a href="http://lipforge.ens-lyon.fr/www/crlibm/" hreflang="en">CRlibm</a> as reference function. This library  incidentally developed at my alma mater  provides a number of fast <strong>and</strong> correctly rounded implementations for mathematical functions. 
+<p>During this experiment  I actually used the cosine implementation from <a href="https://github.com/taschini/crlibm" hreflang="en">CRlibm</a> as reference function. This library  incidentally developed at my alma mater  provides a number of fast <strong>and</strong> correctly rounded implementations for mathematical functions. 
 The value analysis was configured with the <code>-all-rounding-modes</code> option  so the results are guaranteed for arbitrary rounding modes  and are guaranteed even if the compiler uses supplemental precision for intermediate results (e.g. if the compiler uses the infamous 80-bit historial floating-point unit of Intel processors).</p> 
 <p>Here are the logs I obtained (warning: I did not make any file of that level of boringness publicly available since my PhD  plus these are really big  even after compression. Handle with care): <a href="/assets/img/blog/imported-posts/log1.gz">log1.gz</a> <a href="/assets/img/blog/imported-posts/log2.gz">log2.gz</a> <a href="/assets/img/blog/imported-posts/log3.gz">log3.gz</a> <a href="/assets/img/blog/imported-posts/log4.gz">log4.gz</a> <a href="/assets/img/blog/imported-posts/log5.gz">log5.gz</a> <a href="/assets/img/blog/imported-posts/log6.gz">log6.gz</a> <a href="/assets/img/blog/imported-posts/log7.gz">log7.gz</a> <a href="/assets/img/blog/imported-posts/log8.gz">log8.gz</a> <a href="/assets/img/blog/imported-posts/log9.gz">log9.gz</a> <a href="/assets/img/blog/imported-posts/log10.gz">log10.gz</a> <a href="/assets/img/blog/imported-posts/log11.gz">log11.gz</a> <a href="/assets/img/blog/imported-posts/log12.gz">log12.gz</a> <a href="/assets/img/blog/imported-posts/log13.gz">log13.gz</a> <a href="/assets/img/blog/imported-posts/log14.gz">log14.gz</a> <a href="/assets/img/blog/imported-posts/log15.gz">log15.gz</a>. To save you the bother of downloading even the first file  here is what the first three lines look like.</p> 
 <pre>[value] CC -0.0000000000000000 0.0000019073486328 0.9999999999981809 1.0000000000000002 0.9999999999981810 1.0000000000000000 OK 
diff --git a/_posts/2011-05-31-Test-driving-static-analysis-tools.html b/_posts/2011-05-31-Test-driving-static-analysis-tools.html
index faec6e64781535c070fa3b251213b07c83f0ffb9..09cf91820220d79e47b053a611bd3bd30a04eee9 100644
--- a/_posts/2011-05-31-Test-driving-static-analysis-tools.html
+++ b/_posts/2011-05-31-Test-driving-static-analysis-tools.html
@@ -15,7 +15,7 @@ summary:
 <p>By the way  I expect that many of the cases where the value analysis failed to identify an issue can be traced to the use of standard functions  although I have not yet dug enough in the benchmark to be sure. Standard functions are natural suspects: the benchmark clearly uses them  and although we provide several imperfect modelizations for <code>malloc()</code>  they are so little documented that we can expect the authors of the benchmark used none. In this eventuality  <code>malloc()</code> is treated by the analyzer like any unknown function that takes an integer and return a pointer. Now that there are tutorials  you can see that they all start by listing the C functions that are missing from the analysis projects and by wondering  for each  how it can be provided.</p> 
 <h2>The human factor</h2> 
 <p>I was very happy to find this benchmark. I should explain: the 100% recall  or "soundness"  claim  is utterly boring to test for. The value analysis was designed to be sound from the beginning (when used properly for its intended usage)  and since  its authors have only been fighting for precision. As a result  and because they are only human  there are a lot of what I would call positive tests (check that the value analysis succeeds at not warning about a complex but safe piece of code). There are not nearly enough negative tests (check that the value analysis does warn about an unsafe piece of code) because there is no progress to measure there. The value analysis implementors have been guilty of a basic  very forgivable  psychological error  but even if we forgive them  the situation must be fixed. This benchmark is a good start towards a collection of "negative tests" for the value analysis.</p> 
-<p>In fact  I have started looking at the qualitative tests from <a href="http://mathind.csd.auth.gr/static_analysis_test_suite/">the benchmark</a>  and it is rather simple to check the value analysis against expected results. Because I want to fix the lack of "negative tests"  I focused on lines with the comment <code>DANGER</code>  that the value analysis should find. Here is the test script I currently have:</p> 
+<p>In fact  I have started looking at the qualitative tests from the benchmark [removed dead link]  and it is rather simple to check the value analysis against expected results. Because I want to fix the lack of "negative tests"  I focused on lines with the comment <code>DANGER</code>  that the value analysis should find. Here is the test script I currently have:</p> 
 <pre> 
 #!/bin/sh 
 # export FRAMACLIBC=/usr/local/share/frama-c/libc 
diff --git a/_posts/2011-07-19-New-version-of-some-text-editor.html b/_posts/2011-07-19-New-version-of-some-text-editor.html
index db955706358181f2e88628a59ec05a3059583ae7..129030d236118cd48d0ffa7b19ceb7ffdba62d28 100644
--- a/_posts/2011-07-19-New-version-of-some-text-editor.html
+++ b/_posts/2011-07-19-New-version-of-some-text-editor.html
@@ -12,5 +12,5 @@ summary:
 <blockquote><p>I’ve been beta-testing 10 for months and at this point I couldn’t bear to go back to version 9.</p> 
 </blockquote> 
 <p>Wow!</p> 
-<p>I would hate an editor that  come a new release  puts you in a position to say things like that. I am pretty sure I could <a href="http://sunsite.univie.ac.at/textbooks/emacs/emacs_38.html" hreflang="en">go back to Emacs 19.3? from 1996</a> at any time  if only people ceased to send me files in Unicode.</p>
+<p>I would hate an editor that  come a new release  puts you in a position to say things like that. I am pretty sure I could go back to Emacs 19.3? from 1996 at any time  if only people ceased to send me files in Unicode.</p>
 {% endraw %}
diff --git a/_posts/2011-07-22-Animated-donut-verification.html b/_posts/2011-07-22-Animated-donut-verification.html
index f066afc5406e4f0203750a8a41089074d5335d7c..e45abcb7da950e850eb3888c3ceb1ff6757ac063 100644
--- a/_posts/2011-07-22-Animated-donut-verification.html
+++ b/_posts/2011-07-22-Animated-donut-verification.html
@@ -8,10 +8,10 @@ title: "Animated donut verification"
 summary: 
 ---
 {% raw %}
-<p>Here's a cool <a href="http://a1k0n.net/2011/07/20/donut-math.html">obfuscated C program</a> by Andy Sloane that draws a revolving donut.</p> 
+<p>Here's a cool <a href="https://www.a1k0n.net/2011/07/20/donut-math.html">obfuscated C program</a> by Andy Sloane that draws a revolving donut.</p> 
 <p>You know where this is heading... I am going to suggest that someone should verify it. 
 I will get us started.</p> 
-<p>1. Download <a href="http://a1k0n.net/2006/09/15/obfuscated-c-donut.html">the code</a></p> 
+<p>1. Download <a href="https://www.a1k0n.net/2006/09/15/obfuscated-c-donut.html">the code</a></p> 
 <p>2. Determine what library functions it needs:</p> 
 <pre>$ frama-c -metrics donut.c 
 [kernel] preprocessing with "gcc -C -E -I.  donut.c" 
diff --git a/_posts/2011-08-08-Holiday-stuff.html b/_posts/2011-08-08-Holiday-stuff.html
index 760a7d09f62390d94b40e3d60c093313ccee6bf5..1ff89bd7398ae4fceab29533e391c6a7ba8c78ea 100644
--- a/_posts/2011-08-08-Holiday-stuff.html
+++ b/_posts/2011-08-08-Holiday-stuff.html
@@ -9,7 +9,7 @@ summary:
 ---
 {% raw %}
 <p>I do not expect that many people are diligently working at this time of year, and this is my excuse for posting this series of productivity-wasting links. Some of them are even on-topic.</p> 
-<p>For those who missed it, GNU/Linux Magazine/France has <a href="http://www.gnulinuxmag.com/index.php/2011/07/01/gnulinux-magazine-hs-n%C2%B055-juillet-aout-2011-chez-votre-marchand-de-journaux" hreflang="fr">a summer special</a> on the C programming language  with a 7-page introduction to Frama-C. In French  and as the URL says  "chez votre marchand de journaux". In related news  Dotclear  the Content Management System for this blog  allows to specify a link's target language in its markdown language. I have always wondered what that was good for. Well  the link in this paragraph is dutifully marked as going to a French page. Tell us if you notice any difference.</p> 
+<p>For those who missed it, GNU/Linux Magazine/France has a summer special [removed dead link] on the C programming language  with a 7-page introduction to Frama-C. In French  and as the URL says  "chez votre marchand de journaux". In related news  Dotclear  the Content Management System for this blog  allows to specify a link's target language in its markdown language. I have always wondered what that was good for. Well  the link in this paragraph is dutifully marked as going to a French page. Tell us if you notice any difference.</p> 
 <p>John Carmack  famous programmer of 3D games  <a href="http://www.youtube.com/watch?v=4zgYG-_ha28&amp;feature=player_detailpage#t=54m00s">spoke at length about static analysis at QuakeCon 2011</a> a few days ago. He also says at one point that he wishes he could experiment with  say  OCaml.</p> 
 <p>In the lab  we are supposed to obtain an administrative authorization before publishing any article. If the article has non-CEA co-authors  we must obtain an authorization in writing from each of them in order to apply for the authorization to publish (I'm simplifying a bit here. In reality  we must submit an intranet form that will automatically be rejected because there the form has no field for co-author authorizations  and then send the co-author authorizations in reply to the rejection mail). 
 I have always assumed this was a CEA-wide thing  and therefore  I wonder whether my colleagues from the physics department had to fulfill this formality when they published this <a href="http://www.sciencedirect.com/science/article/pii/S0370269310003989">3469-author article</a>.</p> 
diff --git a/_posts/2011-08-09-One-more-rant-for-the-holidays-style-self-consciousness.html b/_posts/2011-08-09-One-more-rant-for-the-holidays-style-self-consciousness.html
index 636f6ddd05c6e1cbecc3579b1d3c5ea91babb580..207d94aeb8628c259b2052e0da68227d672d5151 100644
--- a/_posts/2011-08-09-One-more-rant-for-the-holidays-style-self-consciousness.html
+++ b/_posts/2011-08-09-One-more-rant-for-the-holidays-style-self-consciousness.html
@@ -9,6 +9,6 @@ summary:
 ---
 {% raw %}
 <p>One unexpected consequence of writing semi-regularly in a blog, in addition to the other bits of writing that I have to do, is that I am starting to hate my own style. It feels like English sentences always come out my fingers with the same rhythm to them. Precisely dull. I always use adverbs the same way. I would go on, but it's gotten to the point where I'm wondering whether I'm not going to switch to my native language mid-post. Better B quick!</p> 
-<p>I read tons, from many different people. The Internet is great for that. Audiobooks have also long replaced music in my iPod. That is less varied for obvious technical reasons, but I listen to anything by Douglas Adams, and various other BBC radio broadcasts. There's \Bilbo the Hobbit" I like to go through when I know I'll have a long stretch of listening time such as <a href="http://www.saintelyon.com/course-raid-nocturne/" hreflang="fr">a night walk</a>. Rudyard Kipling's "Just So Stories" are great for alternating with chapters of "The Selfish Gene". One thing written and read by Scott Adams (the author of Dilbert  not any other Scott Adams). Another by Orson Scott Card. Neal Stephenson. In a nutshell  plentiful formats and authors. But I don't seem to be able to form sentences other than the repetitive tripe here  which wouldn't be too bad  and I'm getting allergic to it  which is.</p> 
+<p>I read tons, from many different people. The Internet is great for that. Audiobooks have also long replaced music in my iPod. That is less varied for obvious technical reasons, but I listen to anything by Douglas Adams, and various other BBC radio broadcasts. There's "Bilbo the Hobbit" I like to go through when I know I'll have a long stretch of listening time such as a night walk. Rudyard Kipling's "Just So Stories" are great for alternating with chapters of "The Selfish Gene". One thing written and read by Scott Adams (the author of Dilbert  not any other Scott Adams). Another by Orson Scott Card. Neal Stephenson. In a nutshell  plentiful formats and authors. But I don't seem to be able to form sentences other than the repetitive tripe here  which wouldn't be too bad  and I'm getting allergic to it  which is.</p> 
 <p>Do others feel the same way about any non-native language? How do you overcome it? Let's say you have an article to write  some documentation to revise  and you are completely and utterly tired of hearing yourself speak. What would you do?</p>
 {% endraw %}
diff --git a/_posts/2011-09-27-Summary-of-a-2010-research-article.html b/_posts/2011-09-27-Summary-of-a-2010-research-article.html
index 5783046ee54ca491eaf576d6cd7b784f596c3d9b..eb5995f1b8b4fee88f2d5cf4c30bacb26c9c6b47 100644
--- a/_posts/2011-09-27-Summary-of-a-2010-research-article.html
+++ b/_posts/2011-09-27-Summary-of-a-2010-research-article.html
@@ -8,7 +8,7 @@ title: "Summary of a 2010 research article"
 summary: 
 ---
 {% raw %}
-<p>A number of colleagues and I have been tasked with writing a summary of an article they published in 2010. Each of us has to do this for an article of theirs, that is. I chose the article whose long version title is \A Mergeable Interval Map" (<a href="http://studia.complexica.net/index.php?option=com_content&amp;view=article&amp;id=185%3Aa-mergeable-interval-map%20pp-5-37&amp;catid=56%3Anumber-1&amp;Itemid=100&amp;lang=en">link</a>  PDF available  in English starting with page number 10). The article describes a data structure to associate values to intervals. I like this data structure because it is uncomplicated  and its necessity was made apparent by the implementation of previous ideas. The article on the other hand  may not do as good a job as it should of explaining the constraints that led to invent a new data structure. This is an opportunity to try and improve that part.</p> 
+<p>A number of colleagues and I have been tasked with writing a summary of an article they published in 2010. Each of us has to do this for an article of theirs, that is. I chose the article whose long version title is "A Mergeable Interval Map" (<a href="https://pdfs.semanticscholar.org/5972/1d7cac5cf4fdc0cc3947432c6472f1da0f82.pdf">link</a>  PDF available  in English starting with page number 10). The article describes a data structure to associate values to intervals. I like this data structure because it is uncomplicated  and its necessity was made apparent by the implementation of previous ideas. The article on the other hand  may not do as good a job as it should of explaining the constraints that led to invent a new data structure. This is an opportunity to try and improve that part.</p> 
 <p>Without further information concerning the intended audience  I decided to assume that the reader is vaguely familiar with the low-level aspects of C (he does not need to be a C programmer). It should also help if he is vaguely familiar with  say  one modern programming language (Perl  Python  PHP  C# or OCaml).</p> 
 <h2>A new data structure for static analysis</h2> 
 <p>When working on an analyzer for low-level embedded C  whose selling point is that it automatically and correctly analyzes relatively large programs relatively precisely  your analyzer will inevitably encounter a C program that does something like this:</p> 
diff --git a/_posts/2011-10-14-Features-in-Frama-C-Nitrogen-part-1.html b/_posts/2011-10-14-Features-in-Frama-C-Nitrogen-part-1.html
index 05a1cfe4257ad3a952759ccc9828e132c33cc2d7..8ea74b5671c013a395bb667c8c6dba1bb5be6ac6 100644
--- a/_posts/2011-10-14-Features-in-Frama-C-Nitrogen-part-1.html
+++ b/_posts/2011-10-14-Features-in-Frama-C-Nitrogen-part-1.html
@@ -11,7 +11,7 @@ summary:
 <p>Here is a series of posts that highlight interesting features in the recently released Frama-C Nitrogen 20111001. There is new functionality in Nitrogen, that we hope will entice both existing and prospective users. Other improvements yet will only have meaning for existing users. I will start off with two items from this second category.</p> 
 <h2>Nitrogen compiles with OCaml 3.10.2.</h2> 
 <p><strong>The only non-optional dependency for compiling Nitrogen is an OCaml compiler</strong>. But which one? Despite a now long history, the OCaml language is still evolving relatively rapidly. We have never been happier with our choice of an implementation language: it is evolving in a direction that suits us, and it is evolving conservatively (compare Perl 6, PHP 5, Python 3, …). On the other hand, Frama-C is large and contains tricky code; the plug-in architecture with dynamic loading in general on the one hand, and the hash-consing programming technique on the other hand, are only two examples. It is large enough and tricky enough to reveal many of the subtle difference between any two versions of the OCaml compiler.</p> 
-<p>Nitrogen compiles with the latest version of OCaml, of course. That's 3.12.1 at the time of this writing. We already know that it won't compile as-is with the future OCaml 3.13 (a patch will be released in due time). Similarly, support for older versions has to be gained, version by version. If you have only written medium-sized OCaml programs, you could easily underestimate the difficulty of this. I was lucky enough not to have to deal with it much during this cycle, but some of my colleagues had to. It always is a struggle. Sometimes the equivalent of <code>#define/#if</code> constructs from C pre-processing would help, but this is not idiomatic in OCaml. Again, the only non-optional dependency for compiling Nitrogen is an OCaml compiler, so we won't use fine solutions such as <a href="http://martin.jambon.free.fr/cppo.html">Cppo</a>.</p> 
+<p>Nitrogen compiles with the latest version of OCaml, of course. That's 3.12.1 at the time of this writing. We already know that it won't compile as-is with the future OCaml 3.13 (a patch will be released in due time). Similarly, support for older versions has to be gained, version by version. If you have only written medium-sized OCaml programs, you could easily underestimate the difficulty of this. I was lucky enough not to have to deal with it much during this cycle, but some of my colleagues had to. It always is a struggle. Sometimes the equivalent of <code>#define/#if</code> constructs from C pre-processing would help, but this is not idiomatic in OCaml. Again, the only non-optional dependency for compiling Nitrogen is an OCaml compiler, so we won't use fine solutions such as <a href="https://github.com/ocaml-community/cppo">Cppo</a>.</p> 
 <p>For Windows and Mac OS X  OCaml is not part of the operating system  so we ship the version we like together with the binary package (if we make one). With Unix  the issues are different: there are too many flavors for us to distribute binaries  but there are efficient package systems and distributions to painlessly bring in required dependencies. Often  Frama-C itself is packaged in binary or source form as part of these distributions  thanks to the work of many volunteers. It may take some time for packages to be created for Nitrogen  and some users do not want to wait. Linus Token  for instance  may rely on a computer he bought two years ago. Frama-C works fine on two-years old computers  as seen in the next section. Linus installed his Unix distribution of choice when he bought his computer  and now he expects Frama-C's sources to compile with the OCaml version from his distribution (OCaml programmers can be expected to have the latest OCaml compiler installed from its own sources  but Linus is not an OCaml programmer). 
 The Unix distribution installed two years ago was on average 3 months old at that time  and it may have been frozen for stability 3 months before being released. For Linus  Frama-C has to compile with a 2.5-year-old compiler. And then there are industrial users who like to trail a little bit on the Linux front  but at the same time want all the latest Frama-C features. For Nitrogen  we chose to retain compatibility with OCaml 3.10.2  released in February 2008  and all OCaml versions released since.</p> 
 <h2>The value analysis is up to four times faster on realistic examples</h2> 
diff --git a/_posts/2011-10-27-Covering-all-interlacings-in-a-single-short-context.html b/_posts/2011-10-27-Covering-all-interlacings-in-a-single-short-context.html
index 0c1ead294b3b1ca556b48f798dba250a746ce63f..b4d53cc4ff1244dfb1fca6257c34e63e54e47142 100644
--- a/_posts/2011-10-27-Covering-all-interlacings-in-a-single-short-context.html
+++ b/_posts/2011-10-27-Covering-all-interlacings-in-a-single-short-context.html
@@ -10,7 +10,7 @@ summary:
 {% raw %}
 <p>Answering the two questions from last post in reverse order:</p> 
 <p><q><strong>What shorter analysis context would test arbitrarily long sequences of interrupts, with repetitions?</strong></q></p> 
-<p>There was a hint in the previous post in the link to the <a href="/index.php?tag/skein">Skein-256</a> tutorial. A stronger hint would have been to link directly to <a href="https://svn.frama-c.com/blog/admin/post.php?id=87">this post</a>  where an analysis context that encompasses many possible executions is offered. The analysis context there is for arbitrary numbers of calls to <code>Skein_256_Update()</code>  and the key part of the analysis context is:</p> 
+<p>There was a hint in the previous post in the link to the <a href="/index.php?tag/skein">Skein-256</a> tutorial. A stronger hint would have been to link directly to this post [removed dead link]  where an analysis context that encompasses many possible executions is offered. The analysis context there is for arbitrary numbers of calls to <code>Skein_256_Update()</code>  and the key part of the analysis context is:</p> 
 <pre>  ... 
   Skein_256_Init(&amp;skein_context  HASHLEN * 8); 
   while (Frama_C_interval(0 1)) 
diff --git a/_posts/2011-11-18-Just-a-few-more-digits-please.html b/_posts/2011-11-18-Just-a-few-more-digits-please.html
index 3ed2955d628480ba0b8cb9bb829c4be044b23c17..6f28135c1d3d65b359677d411e45bd4a22d35115 100644
--- a/_posts/2011-11-18-Just-a-few-more-digits-please.html
+++ b/_posts/2011-11-18-Just-a-few-more-digits-please.html
@@ -52,7 +52,7 @@ main(){
 <h2>Mitigation techniques</h2> 
 <p>This is not a great user interface. The programmer must write more digits to demonstrate that ey knows exactly what the rounding error is than ey writes to unambiguously designate the double-precision floating-point ey wants. 
 It wouldn't be so important if every compiler had to correctly round constants to the nearest floating-point representation  but as discussed in last post  the standard does not mandate it. Crucially  by displaying the <code>Will use 0x1.999999999999ap-4</code> part of the message  Frama-C is also insuring itself against misunderstandings. The user cannot claim that  because eir compiler does not round constants to the nearest  Frama-C is unsound: it told em what values it would use.</p> 
-<blockquote><p>"Caveat emptor" or the first word thereof would make a great name for a static analysis tool  but unfortunately  it's already taken (by <a href="http://www.springerlink.com/content/h33p602vh34634v8/">one of the ancestors of Frama-C</a>).</p> 
+<blockquote><p>"Caveat emptor" or the first word thereof would make a great name for a static analysis tool  but unfortunately  it's already taken (by one of the ancestors of Frama-C [removed dead link]).</p> 
 </blockquote> 
 <p>I have tried to adjust the usability compromise thus:</p> 
 <ol> 
diff --git a/_posts/2011-12-23-Z3-theorem-prover-in-Microsoft-Store-and-retrocomputing-Frama-C-package.html b/_posts/2011-12-23-Z3-theorem-prover-in-Microsoft-Store-and-retrocomputing-Frama-C-package.html
index 7652c572e6c5005a8054eaddf190233b5a5b526a..daaf182abe05d73eb4bb4535d79bf3edd014fdb9 100644
--- a/_posts/2011-12-23-Z3-theorem-prover-in-Microsoft-Store-and-retrocomputing-Frama-C-package.html
+++ b/_posts/2011-12-23-Z3-theorem-prover-in-Microsoft-Store-and-retrocomputing-Frama-C-package.html
@@ -8,11 +8,11 @@ title: "Z3 theorem prover in Microsoft Store, and retrocomputing Frama-C package
 summary: 
 ---
 {% raw %}
-<p>This <a href="http://www.microsoftstore.com/store/msstore/en_US/pd/productID.242666500">page</a>  where Microsoft Research's Z3 theorem prover is on sale for for just a bit less than $15 000  has come to my attention. This is interesting  because Z3 is at the same time a <a href="http://research.microsoft.com/en-us/downloads/0a7db466-c2d7-4c51-8246-07e25900c7e7/">free download</a>. Note that the free download is covered by <a href="http://research.microsoft.com/en-us/downloads/0a7db466-c2d7-4c51-8246-07e25900c7e7/Z%203%20MSR-LA%20(2007-07-07).txt">this license</a> (executive summary: "non-commercial use only"). Presumably  the $15 000 price tag buys you a more traditional license agreement.</p> 
+<p>This <a href="http://www.microsoftstore.com/store/msstore/en_US/pd/productID.242666500">page</a>  where Microsoft Research's Z3 theorem prover is on sale for just a bit less than $15 000  has come to my attention. This is interesting  because Z3 is at the same time a <a href="https://github.com/Z3Prover/z3">free download</a>. Note that the free download is covered by <a href="https://github.com/Z3Prover/z3/blob/master/LICENSE.txt">this license</a> (executive summary: "non-commercial use only"). Presumably  the $15 000 price tag buys you a more traditional license agreement.</p> 
 <p>I have two theories: the first is that the page is a mistake or a draft that was put online prematurely. The other is that Z3 is indeed now for sale for productive use  which would be great news.</p> 
 <p>Z3 is an SMT solver that can be used  among other applications  as the last step of a program's verification through Hoare logic techniques. If you are from my generation  and depending where you studied  you may hazily remember from your studies applying Hoare logic rules by hand on simple programs... on paper. Well  a decade's worth of research later  it turns out it is much less tedious when the computer applies the rules automatically  and then it even scales to moderately large functions whose verification may genuinely preoccupy someone. 
 For some examples  Yannick Moy wrote a rather nice <a href="http://frama-c.com/jessie/jessie-tutorial.pdf">tutorial</a> introducing Jessie. Jessie is not the only tool that applies Hoare rules mechanically to verify a program against a specification. There's another one right in Frama-C  named Wp  in addition to all such verifiers that aren't Frama-C plug-ins.</p> 
 <p>Both Jessie and Wp can use Z3 in the last step of their work  in which verifying a program against its specification is reduced to checking the validity of a number of logic formulas (you may have done that by hand during your computer science studies  too  and again  it's much more fun when the computer does it).</p> 
 <p>I would be remiss if I didn't mention that for the same price as the "non-commercial use only" Z3 license  you can get a commercial-use-allowing license for competing (and competitive) SMT prover <a href="http://ergo.lri.fr/">Alt-Ergo</a>. I expect that by contrast  the $15 000 Z3 license includes bare-bones support  but if that sum of money is burning a hole in your pocket  you might also discuss the possibility of an agreement with the makers of Alt-Ergo. They are researchers with lots of competing demands on their time  but you may be able to work something out.</p> 
-<p>Lastly  if you are interested in Alt-Ergo and/or the Wp Frama-C plug-in and have a PowerPC Mac  you're in luck! The last version (0.94) of Alt-Ergo is included in this semi-official <a href="http://www.mediafire.com/?vfmb6hsw08eeno0">retrocomputing Frama-C Nitrogen binary package</a> for Mac OS X Leopard PowerPC. <a href="/assets/img/blog/imported-posts/README_ppc">Click here</a> for the "readme" of that package. The package is mostly a celebration of the <a href="/index.php?post/2011/10/17/Features-in-Nitrogen-1">performance improvements in the value analysis in Nitrogen</a>  that make it very usable on a G4 powerbook.</p>
+<p>Lastly  if you are interested in Alt-Ergo and/or the Wp Frama-C plug-in and have a PowerPC Mac  you're in luck! The last version (0.94) of Alt-Ergo is included in this semi-official retrocomputing Frama-C Nitrogen binary package [removed dead link] for Mac OS X Leopard PowerPC. <a href="/assets/img/blog/imported-posts/README_ppc">Click here</a> for the "readme" of that package. The package is mostly a celebration of the <a href="/index.php?post/2011/10/17/Features-in-Nitrogen-1">performance improvements in the value analysis in Nitrogen</a>  that make it very usable on a G4 powerbook.</p>
 {% endraw %}
diff --git a/_posts/2012-01-14-Public-announcements.html b/_posts/2012-01-14-Public-announcements.html
index f2979ce2ec98e69a98ee07e437903d0567f980d8..8e9b9e6dd760f8fd6d7c4ed547ab2604633be7ad 100644
--- a/_posts/2012-01-14-Public-announcements.html
+++ b/_posts/2012-01-14-Public-announcements.html
@@ -9,7 +9,7 @@ summary:
 ---
 {% raw %}
 <h2>Yes, I have read that static analysis post</h2> 
-<p>If you are one of the three persons in the world who have not sent me a link to John Carmack's <a href="http://altdevblogaday.com/2011/12/24/static-code-analysis/">Christmas post on static analysis</a>  go ahead and read it  it might interest you. You can stop sending me that link now.</p> 
+<p>If you are one of the three persons in the world who have not sent me a link to John Carmack's Christmas post on static analysis [removed dead link]  go ahead and read it  it might interest you. You can stop sending me that link now.</p> 
 <p>The post is very much a text version of the second half of his August <a href="http://www.youtube.com/watch?v=4zgYG-_ha28">QuakeCon keynote</a>  which I prefer  if only  as someone who sometimes has to speak in public  for the amazing delivery. Here's <a href="http://blog.regehr.org/archives/659">some commentary</a> on the PC-lint part of the post. I was myself going to comment on the "pursuing a Space Shuttle style code development process for game development"  because the tools can be the same  or at least take advantage of the same techniques  and just be employed differently to answer different needs. I thought that if I only had time to put my arguments in order  I might convincingly argue this  and rebound on John Regehr's post by characterizing semantic static analyzers as limiting the programmer to the dialect of correct programs. But thinking about it  I realize that I know nothing about game development and that it would be presumptuous to make strong claims there. In the case of Frama-C  and Frama-C is not the only analyzer intended for "Space Shuttle style code" and quality objectives  the total feedback we have  is still limited enough to nearly double in one day  and this feedback is still all about Space Shuttle style development and verification. And that day was yesterday.</p> 
 <h2>U3CAT meeting</h2> 
 <p>Yesterday was the day members of the national research project U3CAT gathered to tell each other about recent developments. Several presentations were on experiments done at <a href="http://atos.net/en-us/">Atos Origin SA</a> and <a href="http://uk.c-s.fr/">CS</a>. You can't expect them to reveal too much about the actual code  but yes  it was Space Shuttle-like  and the results are very satisfying.</p> 
diff --git a/_posts/2012-03-02-ptrdiff_t-links-for-the-week-end.html b/_posts/2012-03-02-ptrdiff_t-links-for-the-week-end.html
index 693202ae433861cbdbdac690fc059ae27d7b877c..f8878c5e009e4ff94364090e90ab217be7cddc19 100644
--- a/_posts/2012-03-02-ptrdiff_t-links-for-the-week-end.html
+++ b/_posts/2012-03-02-ptrdiff_t-links-for-the-week-end.html
@@ -10,7 +10,7 @@ summary:
 {% raw %}
 <p>Here are two links related to <code>ptrdiff_t</code>:</p> 
 <ul> 
-<li><a href="http://blogs.msdn.com/b/david_leblanc/archive/2008/09/02/ptrdiff-t-is-evil.aspx">Ptrdiff_t is evil</a>  a blog post for if you are not tired of conversion and promotion issues yet </li> 
+<li>Ptrdiff_t is evil [removed dead link]  a blog post for if you are not tired of conversion and promotion issues yet </li> 
 <li>and an interesting <a href="http://stackoverflow.com/a/9538071/139746">answer on StackOverflow</a>.</li> 
 </ul> 
 <p>I was doubtful about the second one (I had no difficulties to accept the first one after the previous conversions-and-promotions-related posts on this blog). But type <code>ptrdiff_t</code> is for storing pointer differences  right? Why would a compilation platform (compiler and associated standard library including <code>malloc()</code>) make it just too small to hold all possible pointer differences? So I tested it  and StackOverflow user R.. is right:</p> 
diff --git a/_posts/2012-03-14-But-wait-There-may-be-more.html b/_posts/2012-03-14-But-wait-There-may-be-more.html
index ef569ee389d9ad8a37bd49636f88431775a5646d..e1a23036923aee9d72a9d6149d0aca647165bf99 100644
--- a/_posts/2012-03-14-But-wait-There-may-be-more.html
+++ b/_posts/2012-03-14-But-wait-There-may-be-more.html
@@ -11,7 +11,7 @@ summary:
 <p>My colleague Boris Yakobowski is on holidays, and while leisurely browsing the internet, he seems to have stumbled upon my previous post. He e-mailed me about it.</p> 
 <p>For some reason, it does not feel right to provide much biography when mentioning <a href="/index.php?tag/facetious-colleagues">facetious colleagues</a>  perhaps because it would look too self-congratulatory from the outside. Just this once  a little bit of context appears to be necessary: Boris is the person who has  using Frama-C's value analysis  managed to analyze the largest codebase to date. Without qualification  this does not mean much: anyone can type "frama-c -val *.c" in a directory and call it an analysis. But there is a point at which one can convincingly argue that the analysis is meaningful and that the results obtained are the best the tool can give. The latter  for instance  means all remaining alarms have been examined  and for each  the verifier has some confidence that it is false and an explanation why the analyzer emits the alarm instead of doing the precise thing and not emitting it. In short  Boris has reached this point for a pretty big piece of embedded software.</p> 
 <p>Boris  who knows everything about false alarms and generally Frama-C's value analysis' limitations  claims that the example from the last post is too simple and suspects me to have doctored it to make it analyze optimally in “only” 7 minutes. How unfair!</p> 
-<p>The example in last post comes straight from Omar Chebaro's PhD thesis. Omar worked during his PhD on SANTE (more information <a href="http://lifc.univ-fcomte.fr/~publis/papers/Author/CHEBARO-O.html">here</a>)  a Frama-C plug-in that combines results from the value analysis plug-in and concolic test-case generation with PathCrawler for  one  identifying false alarms when possible and two  providing input vectors for true alarms (again  when possible). There is no reason to doubt Omar's scientific integrity  but even if he had a slight bias  the bias would not necessarily be to make things easy for the value analysis. He might just as well be tempted to make the value analysis emit more alarms in order to show the necessity of sorting them out with dynamic analysis.</p> 
+<p>The example in last post comes straight from Omar Chebaro's PhD thesis. Omar worked during his PhD on SANTE (more information <a href="https://www.theses.fr/140422080">here</a>)  a Frama-C plug-in that combines results from the value analysis plug-in and concolic test-case generation with PathCrawler for  one  identifying false alarms when possible and two  providing input vectors for true alarms (again  when possible). There is no reason to doubt Omar's scientific integrity  but even if he had a slight bias  the bias would not necessarily be to make things easy for the value analysis. He might just as well be tempted to make the value analysis emit more alarms in order to show the necessity of sorting them out with dynamic analysis.</p> 
 <p>I do have a bias  and my motive in last post is clearly to show that the value analysis  on that example  with the right options  can limit its list of alarms to a single one  which happens to be a true alarm. But I did not choose the example  nor did I modify it to make it easier to analyze.</p> 
 <p>Note that SANTE is still useful in conditions where the value analysis emits a single alarm: at the end of last post  we only know that the alarm is true because of a comment in the source code telling us so. Using PathCrawler to classify that alarm gives SANTE a chance to confirm that it is true and to provide an input vector that the developer would certainly appreciate  if only for debugging purposes.</p>
 {% endraw %}
diff --git a/_posts/2012-03-16-Security-and-safety.html b/_posts/2012-03-16-Security-and-safety.html
index 43232326e6816a3e0fbacb140654363cff400313..a3ce014e4bfda6277c5b55d161042bde2cefb91d 100644
--- a/_posts/2012-03-16-Security-and-safety.html
+++ b/_posts/2012-03-16-Security-and-safety.html
@@ -10,7 +10,7 @@ summary:
 {% raw %}
 <p>I usually feel uncomfortable diving into the subject of safety vs security, because while I think I have a good intuition of the difference between them, I find this difference hard to formalize in writing. 
 Fortunately, a large “security” “administration” provides:</p> 
-<p>“We use layers of security to ensure the security of [...]” (<a href="http://www.tsa.gov/what_we_do/layers/index.shtm">source</a>  proudly linked from <a href="http://blog.tsa.gov/2012/03/viral-video-about-body-scanners.html">here</a>)</p> 
+<p>“We use layers of security to ensure the security of [...]” (source [removed dead link]  proudly linked from here [removed dead link])</p> 
 <p>The difference between safety and security is that if the aforementioned administration was trying to ensure the safety of the traveling public  then accumulating a large enough number of rubbish detection layers would work. In safety  you are working against a harsh but neutral universe. It is okay simply to do your best to ensure that a safety-critical component works as designed  and then to put in some redundancy  and perhaps some fail-safe mechanism  because you are actually multiplying probabilities of failure. Assuming your design works  and naming p₁ and p₂ the (low) probabilities that the component fails and that the fail-safe fails  the probability that two out of three redundant components and the fail-safe fail simultaneously is of the order of some factor of p₁²*p₂. Accumulate enough layers and you can make the overall probability of failure low enough.</p> 
 <p>And that's the difference  really.</p> 
 <p>The intuition is that good protection can be obtained by accumulating imperfect layers. The intuition works well when the probabilities of failure of each layer are somewhat independent. When they are completely independent  probability theory tells us that they can be multiplied. And the definition of security  as opposed to safety  is that when dealing with a security issue  you cannot  assume that the probabilities of failure of your layers are independent  because all the layers tend to fail in the same conditions  namely  when a smart attacker is trying to get through them.</p> 
diff --git a/_posts/2012-03-29-Why-do-signed-overflows-so-often-do-what-programmers-expect.html b/_posts/2012-03-29-Why-do-signed-overflows-so-often-do-what-programmers-expect.html
index 3b4719b84da1ce8f86caa3cdc39a6c447375c33b..ea563b15bd6ebc1608e95b6fb9329b01c851acd8 100644
--- a/_posts/2012-03-29-Why-do-signed-overflows-so-often-do-what-programmers-expect.html
+++ b/_posts/2012-03-29-Why-do-signed-overflows-so-often-do-what-programmers-expect.html
@@ -16,5 +16,5 @@ summary:
 <h2>And now for some more Regehr links</h2> 
 <p><a href="http://blog.regehr.org/archives/320">This blog post</a> shows how good compilers already are.</p> 
 <p>I put forward a weak version of the argument presented here on the Frama-C mailing list a long time ago  and John answered with an <a href="http://lists.gforge.inria.fr/pipermail/frama-c-discuss/2009-June/001235.html">example</a> C program where the compiler can take advantage of the undefinedness of signed overflows to generate code that does not give 2's complement results.</p> 
-<p>John also discussed undefined behaviors on his blog  which prompted Chris Lattner to write a <a href="http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html">3-post series</a> with more of the compiler implementer's point of view and revealing examples.</p>
+<p>John also discussed undefined behaviors on his blog  which prompted Chris Lattner to write a 3-post series [removed dead link] with more of the compiler implementer's point of view and revealing examples.</p>
 {% endraw %}
diff --git a/_posts/2012-04-29-Benchmarking-static-analyzers.html b/_posts/2012-04-29-Benchmarking-static-analyzers.html
index 219958cf4c4c481e2bce1d65726c32d36579e314..6cd43193654acb5276b9ecd7e840da67a791f078 100644
--- a/_posts/2012-04-29-Benchmarking-static-analyzers.html
+++ b/_posts/2012-04-29-Benchmarking-static-analyzers.html
@@ -25,7 +25,7 @@ testcase  not as the goal of the testcase (the undefined behavior to be detected
 this can be contrasted to the condition <code>&amp;a == &amp;b + 1</code>. You may expect that this would be undefined  too  but for some reason it is only unspecified. The implementation may return <code>0</code> or <code>1</code>  and it will not return the same value everytime. Derek Jones would point out here that an implementation could rely on just-in-time compilation and that the value of <code>&amp;a == &amp;b + 1</code> could as a result change from one evaluation to the other  for the same expression referring to the same variables <code>a</code> and <code>b</code>. Anyway  it is not undefined  so if the standard was the absolute reference  static analyzers would not be allowed to consider that execution gets stuck on evaluating <code>&amp;a == &amp;b + 1</code>.</p> 
 <p>In Frama-C's value analysis  we do not see much reason to treat equality and inequality differently  but we recognize that both stopping execution and continuing with results <code>{0; 1}</code> can have their uses  so we offer an option to choose between these two behaviors. This <a href="/index.php?tag/unspecified%20behavior">series of posts</a> already covers this kind of nuance.</p> 
 <p>Frama-C's value analysis detects  warns and considers execution stuck on uninitialized variables. But by design  it avoids warning (and considering execution stuck) on the statement <code>y = x;</code> or <code>*dst = *src;</code>. This is a compromise  made necessary by the necessity to analyze user implementations of <code>memcpy()</code>  which may use exactly that second statement for uninitialized data when copying a struct or union.</p> 
-<p>One <a href="http://bts.frama-c.com/view.php?id=69">long-standing bug</a> in Frama-C's front-end is that it erases a possible bug during parsing and normalization. According to the letter of the standard  <code>addressee().a</code> is dangerous. However  this misfeature of the C99 standard has quite a history (in the circle of people who care about this sort of thing)  as illustrated by the numerous comments on the <a href="https://www.securecoding.cert.org/confluence/display/seccode/EXP35-C.+Do+not+access+or+modify+an+array+in+the+result+of+a+function+call+after+a+subsequent+sequence+point">linked CERT page</a>. A static analyzer could omit this bug (and consider execution continues) by a conscious design decision (for instance if it specializes in a compiler that documents that this construct is safe). Another static analyzer could treat it as undefined  and consider that execution gets stuck. Again  the construct is not very interesting in itself  so it shouldn't be restrictive to omit it from the explicit goals of the benchmark  and to omit it from the incidental features that might perturb other tests.</p> 
+<p>One <a href="http://bts.frama-c.com/view.php?id=69">long-standing bug</a> in Frama-C's front-end is that it erases a possible bug during parsing and normalization. According to the letter of the standard  <code>addressee().a</code> is dangerous. However  this misfeature of the C99 standard has quite a history (in the circle of people who care about this sort of thing)  as illustrated by the numerous comments on the linked CERT page [removed dead link]. A static analyzer could omit this bug (and consider execution continues) by a conscious design decision (for instance if it specializes in a compiler that documents that this construct is safe). Another static analyzer could treat it as undefined  and consider that execution gets stuck. Again  the construct is not very interesting in itself  so it shouldn't be restrictive to omit it from the explicit goals of the benchmark  and to omit it from the incidental features that might perturb other tests.</p> 
 <p>Finally  there are areas where the standard under-specifies things. One egregious example is floating-point. David Monniaux's <a href="http://arxiv.org/abs/cs/0701192">report</a> is still the reference I recommend for these floating-point issues.</p> 
 <p>All but the last of these examples are cases of “Don't do that  then”: the issue can simply be avoided by not using any of the ambiguous constructs for which it is unclear whether an analyzer should be allowed to consider them blocking. Also  all the issues in these first examples disappear when toolmakers are allowed to participate: when their analyzer scores badly on a test because it is better than others are detecting incidental issues  they will be prompt to point out the problem.</p> 
 <p>For the last example  floating-point  I do not have a solution to offer. Some real-life software <a href="http://www.exploringbinary.com/php-hangs-on-numeric-value-2-2250738585072011e-308/">issues</a> come specifically from the liberties that the standard allows and from the fact that compilers take advantage of them  so it is a worthy goal for a static analyzer to try to model these liberties  and a worthy goal for a benchmark to measure how well the static analyzer is doing. But floating-point in C is a mess  and since one of the issues is that compilation is under-specified  it does not suffice to use one (or even several) compilations as reference.</p> 
diff --git a/_posts/2012-07-25-The-previous-post-was-written-in-jest.html b/_posts/2012-07-25-The-previous-post-was-written-in-jest.html
index 15d9182d7d863eb89d35c227a03e5fbcf514e3ec..daeddc0215377867fd13369de9d1606d81141b9e 100644
--- a/_posts/2012-07-25-The-previous-post-was-written-in-jest.html
+++ b/_posts/2012-07-25-The-previous-post-was-written-in-jest.html
@@ -13,7 +13,7 @@ summary:
 <p>Long ago, a C programmer trying to write a portable C program had to avoid signed overflows because different architectures could exhibit different behaviors depending on whether signed numbers were represented as sign-magnitude, in one's complement or in two's complement. 
 These days are gone. A few dedicated architectures (DSP) may give you saturating behavior for overflows, but mostly every architecture use two's complement.</p> 
 <p>A modern programmer might think “I know my target architecture uses two's complement representation, and I want wrap-around behavior here, so I will just use signed addition”.</p> 
-<p>The problem nowadays is that <a href="http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html">compilers take advantage of undefined behavior for optimization</a>. An expression such as <code>X+1 &gt; X</code> with <code>X</code> of type <code>int</code> may be optimized to <code>1</code>  because the only case when it is not true is when <code>X+1</code> overflows  which is undefined behavior  and therefore the compiler can do what it wants then. Incidentally  this means the same compiler  compiles <code>X+1 &gt; X</code> into code that produces different results for <code>INT_MAX</code> on different optimization levels.</p> 
+<p>The problem nowadays is that compilers take advantage of undefined behavior for optimization [removed dead link]. An expression such as <code>X+1 &gt; X</code> with <code>X</code> of type <code>int</code> may be optimized to <code>1</code>  because the only case when it is not true is when <code>X+1</code> overflows  which is undefined behavior  and therefore the compiler can do what it wants then. Incidentally  this means the same compiler  compiles <code>X+1 &gt; X</code> into code that produces different results for <code>INT_MAX</code> on different optimization levels.</p> 
 <p>The previous post suggested to use a statement that is undefined when <code>p</code> and <code>q</code> alias in order to free the compiler of any constraints in these circumstances. The compiler can effectively do anything it wants  including returning the incorrect result <code>3</code>  if the same address is passed as both arguments of <code>f2()</code>  because the function is undefined then. This was not a serious suggestion  but should be understood as an argument in the debate about the exploitation of undefined behavior in compiler optimization.</p> 
 <p>This debate is interesting to me as someone who works on the static analysis of critical embedded C  because embedded code has constraints that make some kinds of undefined behaviors unavoidable. And critical code shouldn't use any of the dangerous undefined behaviors. And the frontier between the unavoidable undefined behaviors and the dangerous undefined behaviors is not written anywhere  and it is always moving.</p> 
 <p>The previous post was largely written in jest. I was not suggesting to substitute the respected <code>restrict</code> keyword with an awkward replacement. All the arguments I put forward were in bad faith.</p>
diff --git a/_posts/2012-08-02-restrict-is-not-good-for-modular-reasoning.html b/_posts/2012-08-02-restrict-is-not-good-for-modular-reasoning.html
index b487e7abe51f6ecc4c4e9042a241209d8a204e5f..9df79056a73a89e9804b431c529086ebf3ac3533 100644
--- a/_posts/2012-08-02-restrict-is-not-good-for-modular-reasoning.html
+++ b/_posts/2012-08-02-restrict-is-not-good-for-modular-reasoning.html
@@ -10,7 +10,7 @@ summary:
 {% raw %}
 <h2>ACSL</h2> 
 <p>There were quite a few posts recently that were concerned with ACSL, and a few more are planned for the near future. ACSL is a specification language for C (comparable with JML for Java, for those who know about JML). Some people call it a BISL, for “Behavioral Interface Specification Language”, but considering that this acronym's definition needs to be introduced everytime, on account of nobody in the world having ever seen it before, and that it has very little reuse potential, please forget that we mentioned it.</p> 
-<p>ACSL is Frama-C's annotation language, and it is partially implemented in Frama-C, but it is not intended to be just Frama-C's. According to <a href="http://lists.gforge.inria.fr/pipermail/frama-c-acsl-discuss/2010-February/000012.html">this message</a>  Microsoft Research's <a href="http://research.microsoft.com/en-us/projects/vcc/">VCC</a> now uses an annotation language with ACSL-like syntax and semantics.</p> 
+<p>ACSL is Frama-C's annotation language, and it is partially implemented in Frama-C, but it is not intended to be just Frama-C's. According to this message [removed dead link]  Microsoft Research's <a href="http://research.microsoft.com/en-us/projects/vcc/">VCC</a> now uses an annotation language with ACSL-like syntax and semantics.</p> 
 <p>ACSL allows to write <strong>function contracts</strong>  the stuff that modular reasoning about programs is made of. A function contract is supposed to tell you everything you need to know about the function  so that you do not need to look at its code. The function might still be unimplemented  even.</p> 
 <blockquote><p>ACSL also allows to write annotations that go inside the function and are specific to one particular implementation. These annotations are typically hints to allow the verification that the specific implementation satisfies the contract. They can be  for instance  loop invariants or assertions.</p> 
 </blockquote> 
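As a concrete illustration (a toy example of mine, not taken from the post), here is the shape of an ACSL function contract; the `/*@ ... */` annotations are comments to a C compiler, so the file still builds as ordinary C:

```c
#include <limits.h>

/*@ requires x > INT_MIN;
  @ assigns \nothing;
  @ ensures \result >= 0;
  @ ensures \result == x || \result == -x;
  @*/
int abs_int(int x)
{
    /* The requires clause is the caller's obligation (negating
       INT_MIN would overflow); the ensures clauses are everything a
       caller may rely on without reading this body. */
    return x < 0 ? -x : x;
}
```

Implementation-side hints such as loop invariants use the same comment syntax (`/*@ loop invariant ...; */`) inside the function body.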
diff --git a/_posts/2012-08-03-assume-and-assert.html b/_posts/2012-08-03-assume-and-assert.html
index e79a2c65615efb9eb77096ad875720f330515f07..76f4162e1627055d27ff350d0aa090d28bdd6ce5 100644
--- a/_posts/2012-08-03-assume-and-assert.html
+++ b/_posts/2012-08-03-assume-and-assert.html
@@ -8,7 +8,7 @@ title: "assume and assert"
 summary: 
 ---
 {% raw %}
-<p>The <a href="/index.php?post/2012/08/02/restrict-not-modular">previous post</a> links to a <a href="http://lists.gforge.inria.fr/pipermail/frama-c-acsl-discuss/2010-February/000012.html">message</a> from Michał Moskal highlighting ACSL constructs that the VCC developers at Microsoft Research had either regretted the absence of or found superfluous while re-designing their own annotation language for VCC. 
+<p>The <a href="/index.php?post/2012/08/02/restrict-not-modular">previous post</a> links to a message [removed dead link] from Michał Moskal highlighting ACSL constructs that the VCC developers at Microsoft Research had either regretted the absence of or found superfluous while re-designing their own annotation language for VCC. 
 In that e-mail  the third item in the “missing” list was:</p> 
 <blockquote><p>ACSL is surprisingly missing /*@ assume ... */  we've found it tremendously useful when debugging specifications.</p> 
 </blockquote> 
diff --git a/_posts/2012-08-23-Technical-interlude.html b/_posts/2012-08-23-Technical-interlude.html
index 64e0b5f5ca0ea48d7485bd339c8af50f810dda4c..5a27fd5d3048e25c7de2413ffedb1d079b43e181 100644
--- a/_posts/2012-08-23-Technical-interlude.html
+++ b/_posts/2012-08-23-Technical-interlude.html
@@ -16,7 +16,7 @@ summary:
 <p><q>A person (participant) was qualified as competition contributor for a competition candidate if the person was a contributing designer/developer of the submitted competition candidate (witnessed by occurrence of the person’s name on the tool’s project web page  a tool paper  or in the revision logs).</q></p> 
 <p>Ostensibly  the <strong>tools are the participants</strong> in the competition. Involving the authors is only a mean to make the competition as fair as possible for the tools. Some colleagues and I have <a href="https://www.dropbox.com/s/el2lidf5v71ogm6/p.pdf">expressed</a> the opinion that  for static analysis  at this time  this was the right way:</p> 
 <p><q>In general  the best way to make sure that a benchmark does not misrepresent the strengths and weaknesses of a tool is to include the toolmakers in the process.</q></p> 
-<p>Some competitions on the other hand acknowledge that a participant is a person(s)+tool hybrid  and <a href="http://lists.csl.sri.com/pipermail/sal/2010-July/000135.html">welcome</a> entries from users that did not create the tools they used. This is perfectly fine  especially for the very interactive tools involved in the cited competition  as long as it is understood that it is not a tool alone  but a hybrid participant  that is being evaluated in such a competition. Some of the participating tools participated in several teams  associated to several users (although the winning teams did tend to be composed of toolmakers using their respective tools).</p> 
+<p>Some competitions on the other hand acknowledge that a participant is a person(s)+tool hybrid  and welcome [removed dead link] entries from users that did not create the tools they used. This is perfectly fine  especially for the very interactive tools involved in the cited competition  as long as it is understood that it is not a tool alone  but a hybrid participant  that is being evaluated in such a competition. Some of the participating tools participated in several teams  associated to several users (although the winning teams did tend to be composed of toolmakers using their respective tools).</p> 
 <p>It is not clear to me where on this axis the RERS competition falls. It is a “free-style” competition. The entries are largely evaluated on results  with much freedom with respect to the way these were obtained. This definitely can also be fun(*).</p> 
 <p>The competition certainly does not require participating tools to be packaged and documented in such exquisite detail that anyone could have computed the same results. As such  the best description to write is the description of what the authors of the value analysis can do with it with all the shortcuts they trust themselves to take. If some of the questions can be solved efficiently using undocumented features  so be it. It is the free-style style.</p> 
 <p>But the mission of the blog remains to document  and showing what we have done without explaining how to reproduce it sounds like boasting. I am torn.</p> 
diff --git a/_posts/2012-11-18-About-the-rounding-error-in-these-Patriot-missiles.html b/_posts/2012-11-18-About-the-rounding-error-in-these-Patriot-missiles.html
index 0236eb74bb9d009fc7d69a8e608ff2cfe3bfe303..a86990a3298ca2c87b28cede7270db9c714100c2 100644
--- a/_posts/2012-11-18-About-the-rounding-error-in-these-Patriot-missiles.html
+++ b/_posts/2012-11-18-About-the-rounding-error-in-these-Patriot-missiles.html
@@ -21,7 +21,7 @@ summary:
 <p>I vaguely knew that it had something to do with a clock inside the Patriot missile measuring tenths of seconds and the constant 0.1 not being representable in binary. When researchers tell the story on a motivation slide  it sounds like a stupid mistake.</p> 
 <blockquote><p>There is nothing reprehensible in calculating with constants that are not represented finitely in binary. Other computations inside the missile surely involved the number π  which is not finitely representable in binary either. The designer of the system simply must understand what <a href="http://en.wikipedia.org/wiki/Spivak_pronoun">ey</a> is doing. It can get a little tricky  especially when the software is evolved. Let me tell you about the single-precision <code>cosf()</code> function from the Glibc library in another rant.</p> 
 </blockquote> 
-<p>Similarly with the Ariane 5 case  a rather good-looking <a href="http://mate.uprh.edu/~pnm/notas4061/patriot.htm">summary</a> of the issue is available. 
+<p>Similarly with the Ariane 5 case, a rather good-looking <a href="http://www-users.math.umn.edu/~arnold//disasters/patriot.html">summary</a> of the issue is available.
 Assuming that this summary is correct, and it certainly looks more plausible than the rash explanations you get at motivation-slide time, the next time I hear a researcher use the Patriot missile example to motivate eir talk, I will ask the questions that follow.</p>
 <ol> 
 <li>When are you adapting your system to ad-hoc, 24-bit fixed-point computations (not whether it is theoretically possible but when you are doing it)?</li>
diff --git a/_posts/2012-11-29-Solution-to-yesterdays-quiz.html b/_posts/2012-11-29-Solution-to-yesterdays-quiz.html
index cbad543a9f06e8b3e450470087fd09876034ca55..85d24f1328fada5fe5dff23d655e5e691ecfe1cd 100644
--- a/_posts/2012-11-29-Solution-to-yesterdays-quiz.html
+++ b/_posts/2012-11-29-Solution-to-yesterdays-quiz.html
@@ -38,7 +38,7 @@ l(18)=4
 l(19)=4 
 l(20)=4 
 </pre> 
-<p>The construct <code>(float[]){…}</code> is C99's syntax for anonymous arrays  a kickass programming technique. This is an <a href="http://www.run.montefiore.ulg.ac.be/~martin/resources/kung-f00.html">unabated</a> quote.</p> 
+<p>The construct <code>(float[]){…}</code> is C99's syntax for anonymous arrays, a kickass programming technique. This is an unabated quote.</p>
 <p>In the case at hand, the construct converts to float the contents of the braces and puts the result in memory. The function puts the float in memory in order to read its most significant byte. That's <code>*(char*)…</code> on a big-endian architecture, and <code>*(3+(char*)…)</code> on a little-endian one.</p>
 <p>One reason to read a single char is to circumvent <a href="http://stackoverflow.com/q/98650/139746">strict aliasing rules</a>—which do not apply to type <code>char</code>. A simpler version of the same function would have been <code>(*(int*)(float[]){x} &gt;&gt; 23) - 127</code>, but that version would break strict aliasing rules. Also, it would be too obvious.</p>
 <p>The most significant bits of a single-precision IEEE 754 floating-point representation are, in order, one sign bit and eight exponent bits. By reading the most significant byte, we get most of the exponent, but one bit is lost. To compensate for this, the operation is applied to <code>x*x</code>, whose exponent is double the exponent of <code>x</code>.</p>
diff --git a/_posts/2013-02-01-ENSL-seminar.html b/_posts/2013-02-01-ENSL-seminar.html
index a7d898586883a042f4664fbd1f27f815bc9422de..57363e169d882b2767ee12c75b0423cddffe73ac 100644
--- a/_posts/2013-02-01-ENSL-seminar.html
+++ b/_posts/2013-02-01-ENSL-seminar.html
@@ -15,7 +15,7 @@ summary:
 <p>The sort of discussion that emerges naturally in an open, low-stakes chat after a general presentation of Frama-C but is too open-ended to be part of the talk's questions is that of general usefulness.</p> 
 <p>Not every programmer is writing critical embedded code—in fact, most of us aren't. But in this day and age, many of us who program at all write code with mild to severe security implications, and I believe that with a reasonable amount of work, Frama-C can next help here.</p> 
 <h2>Video games, and software security in the news</h2> 
-<p>The seminar also gave me an excuse to visit my family that lives in the general area of Lyon. As a result, today, I was helping my nephew play Professor Fizzwizzle (now an <a href="http://www.heavygames.com/professorfizzwizzle/gameonline.asp">online game</a>  apparently) after lunch  when the 13:00 TV news announced that some webcams  sold for the purpose of allowing homeowners to watch their living room remotely and reassure themselves that they are not being burgled  instead allowed anyone with internet access to watch  at any time.</p> 
+<p>The seminar also gave me an excuse to visit my family that lives in the general area of Lyon. As a result, today, I was helping my nephew play Professor Fizzwizzle (now an online game, apparently) after lunch, when the 13:00 TV news announced that some webcams, sold for the purpose of allowing homeowners to watch their living room remotely and reassure themselves that they are not being burgled, instead allowed anyone with internet access to watch, at any time.</p>
 <p>This was not by design, of course. This was the result of a security flaw. I am not saying that the security flaw was one of the kind Frama-C identifies, say, a buffer overflow in a C program: the <a href="http://pluzz.francetv.fr/videos/jt_13h_ 76414636.html">France 2 news report</a> (in French, available for a week) did not go into such detail. But I am willing to bet that software was involved.</p>
 <h2>The future</h2> 
 <p>So what world is my three-year-old nephew going to grow up in? Either we collectively solve the software security problem, and hopefully, when he is of age to lend an ear to TV news while playing Professor Fizzwizzle, there won't be any more of these spectacular failures to announce. Or we do not, and then, at the current rhythm, TV news won't announce security failures because they will not be news any more.</p>
diff --git a/_posts/2013-03-03-Correct-rounding-or-mathematically-correct-rounding.html b/_posts/2013-03-03-Correct-rounding-or-mathematically-correct-rounding.html
index 2ac62a2ea59e87b10e6fcb182f7b024302612f85..7ad8c52ddf355549002faa050782878532575c8a 100644
--- a/_posts/2013-03-03-Correct-rounding-or-mathematically-correct-rounding.html
+++ b/_posts/2013-03-03-Correct-rounding-or-mathematically-correct-rounding.html
@@ -8,9 +8,9 @@ title: "Correct rounding or mathematically-correct rounding?"
 summary: 
 ---
 {% raw %}
-<p><a href="http://lipforge.ens-lyon.fr/www/crlibm/">CRlibm</a> is a high-quality library of floating-point elementary functions. We used it as reference a long time ago in this blog while looking at lesser elementary function implementations and the even lesser properties we could verify about them.</p> 
+<p><a href="https://github.com/taschini/crlibm">CRlibm</a> is a high-quality library of floating-point elementary functions. We used it as reference a long time ago in this blog while looking at lesser elementary function implementations and the even lesser properties we could verify about them.</p> 
 <h2>A bold choice</h2> 
-<p>The CRlibm <a href="http://ftp.nluug.nl/pub/os/BSD/FreeBSD/distfiles/crlibm/crlibm-1.0beta3.pdf">documentation</a> contains this snippet:</p> 
+<p>The CRlibm <a href="https://hal-ens-lyon.archives-ouvertes.fr/ensl-01529804/file/crlibm.pdf">documentation</a> contains this snippet:</p> 
 <blockquote><p>[…] it may happen that the requirement of correct rounding conflicts with a basic mathematical property of the function, such as its domain and range. A typical example is the arctangent of a very large number which, rounded up, will be a number larger than π/2 (fortunately, ◦(π/2) &lt; π/2). The policy that will be implemented in crlibm will be</p>
 <p> 
 • to give priority to the mathematical property in round to nearest mode (so as not to hurt the innocent user who may expect such a property to be respected), and</p>
diff --git a/_posts/2013-03-19-Digital-Technologies-ANR-Awards.html b/_posts/2013-03-19-Digital-Technologies-ANR-Awards.html
index dff5b51f977efc55cd9d80452b2f642a5afdd7b2..78c9c08e479ef6325fc008c2dc3d4e8737b6f8fd 100644
--- a/_posts/2013-03-19-Digital-Technologies-ANR-Awards.html
+++ b/_posts/2013-03-19-Digital-Technologies-ANR-Awards.html
@@ -8,7 +8,7 @@ title: "Digital Technologies ANR Awards"
 summary: 
 ---
 {% raw %}
-<p>In about a month, on April 17 and 18, <a href="http://www.agence-nationale-recherche.fr/en/project-based-funding-to-advance-french-research/" hreflang="en">ANR</a>  the French research agency  will hold <a href="http://www.agence-nationale-recherche.fr/Colloques/RencontresduNumerique2013/en/" hreflang="en">a meeting on digital technologies</a>. On this occasion  ANR will also give <a href="http://www.agence-nationale-recherche.fr/Colloques/RencontresduNumerique2013/en/prix.php" hreflang="en">5 awards</a> to projects it has funded in the previous years. It is my pleasure to announce here that <a href="http://frama-c.com/u3cat/" hreflang="en">U3CAT</a>  which has fueled the development of Frama-C during the last 4 years  is among the nominees. This is of course a great reward for all the partners that have been involved in the project (Airbus  Atos  CEA LIST  CNAM  CS  Dassault Aviation  Inria and Sagem)  and we are very grateful to the ANR for that  although we of course hope to be able to brag a bit more about it next month<sup>[<a href="#pnote-202-1" id="rev-pnote-202-1">1</a>]</sup>. In the meantime  I guess that it is an excellent occasion to remind you that we are always open to collaborations to ensure that Frama-C keeps being headed in the good direction.</p> 
+<p>In about a month, on April 17 and 18, <a href="http://www.agence-nationale-recherche.fr/en/project-based-funding-to-advance-french-research/" hreflang="en">ANR</a>, the French research agency, will hold <a href="http://www.rencontres-numeriques.org/2013/" hreflang="en">a meeting on digital technologies</a>. On this occasion, ANR will also give 5 awards to projects it has funded in the previous years. It is my pleasure to announce here that <a href="http://frama-c.com/u3cat/" hreflang="en">U3CAT</a>, which has fueled the development of Frama-C during the last 4 years, is among the nominees. This is of course a great reward for all the partners that have been involved in the project (Airbus, Atos, CEA LIST, CNAM, CS, Dassault Aviation, Inria and Sagem), and we are very grateful to the ANR for that, although we of course hope to be able to brag a bit more about it next month<sup>[<a href="#pnote-202-1" id="rev-pnote-202-1">1</a>]</sup>. In the meantime, I guess that it is an excellent occasion to remind you that we are always open to collaborations to ensure that Frama-C keeps being headed in the right direction.</p>
 <div class="footnotes"><h4>Notes</h4> 
 <p>[<a href="#rev-pnote-202-1" id="pnote-202-1">1</a>] If you haven't seen anything on this blog by April 19<sup>th</sup>, I suppose that you'll understand by yourselves what the outcome was regarding our project.</p>
 </div>
diff --git a/_posts/2013-05-20-Attack-by-Compiler.html b/_posts/2013-05-20-Attack-by-Compiler.html
index a8ad4c61fb9d890cfc6e36c1ea96a5a63fbcb0b4..3b4c0ba5f133ea37b031616916e4f05b619c468b 100644
--- a/_posts/2013-05-20-Attack-by-Compiler.html
+++ b/_posts/2013-05-20-Attack-by-Compiler.html
@@ -9,14 +9,14 @@ summary:
 ---
 {% raw %}
 <p>The title of this post, “Attack by Compiler”, has been at the back of my mind for several weeks. It started with a comment by jduck on a post earlier this year. The post's topic, the practical undefinedness of reading from uninitialized memory, and jduck's comment, awakened memories from a 2008 incident with the random number generator in OpenSSL.</p> 
-<p>As I am writing this, if I google “attack by compiler”, the first page of results include the classic essay <a href="http://cm.bell-labs.com/who/ken/trust.html">Reflections on Trusting Trust</a> by Ken Thompson  Wikipedia's definition of a <a href="http://en.wikipedia.org/wiki/Backdoor_(computing)">backdoor in computing</a>  an <a href="http://www.acsa-admin.org/2005/abstracts/47.html">article</a> by David A. Wheeler for countering the attack described by Thompson  a <a href="http://www.schneier.com/blog/archives/2006/01/countering_trus.html">commentary</a> by Bruce Schneier on Wheeler's article  and a Linux Journal <a href="http://www.linuxjournal.com/article/7929">article</a> by David Maynor on the practicality of the attack described by Thompson on the widespread GNU/Linux platform.</p> 
+<p>As I am writing this, if I google “attack by compiler”, the first page of results includes the classic essay <a href="https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf">Reflections on Trusting Trust</a> by Ken Thompson, Wikipedia's definition of a <a href="http://en.wikipedia.org/wiki/Backdoor_(computing)">backdoor in computing</a>, an <a href="http://www.acsa-admin.org/2005/abstracts/47.html">article</a> by David A. Wheeler for countering the attack described by Thompson, a <a href="http://www.schneier.com/blog/archives/2006/01/countering_trus.html">commentary</a> by Bruce Schneier on Wheeler's article, and a Linux Journal <a href="http://www.linuxjournal.com/article/7929">article</a> by David Maynor on the practicality of the attack described by Thompson on the widespread GNU/Linux platform.</p>
 <p>This post is about a slightly different idea.</p> 
 <h2>Initial conditions: trustworthy compiler  peer-reviewed changes</h2> 
 <p>Suppose that we start with a trustworthy compiler, widely distributed in both source and binary form. Some people are in the position to make changes to the compiler's source code and could, like Thompson in his essay, attempt to insert a backdoor in the compiler itself.</p>
 <p>But now, each change is scrutinized by innumerable witnesses from the Open-Source community. I say “witnesses”; in fact, they are mostly students, but we will probably be forced to assume that they can't all be mischievous.</p>
 <p>The attackers could try to obfuscate the backdoor as they insert it, but the problem is that some of the witnesses are bound to possess this character trait that the less they understand a change, the more they investigate it. Furthermore, once these witnesses have uncovered the truth, loss of credibility will ensue for the person who tried to sneak the backdoor in. This person will lose eir commit privilege to the compiler's sources, and people will recover untainted compilers in source and binary form from their archives. This kind of approach is risky and may only result in a temporary advantage—which may still be enough.</p>
 <h2>The underhanded approach</h2> 
-<p>The 2013 edition of the <a href="http://underhanded.xcott.com">Underhanded C Contest</a> is under way. The contest defines underhanded code as:</p> 
+<p>The 2013 edition of the <a href="http://underhanded-c.org/_page_id_25.html">Underhanded C Contest</a> is under way. The contest defines underhanded code as:</p> 
 <blockquote><p>code that is as readable, clear, innocent and straightforward as possible, and yet [fails] to perform at its apparent function</p>
 </blockquote> 
 <p>Underhandedness is exactly what an attacker with commit access to the source code of the widely used compiler should aim for. If the commit is underhanded enough, the committer may not only enjoy full <strong>deniability</strong>, but ey may get the incriminating change to <strong>stay in later versions</strong> of the compiler, as a “good” change. This implies that all affected security-sensitive applications, like the “login” program in Thompson's essay, must be updated to work around the now official backdoor in the compiler. In this scenario, even after the attack has been discovered, anytime someone unknowingly compiles an old version of “login” with a recent compiler, it's another win for the attacker.</p>
@@ -28,7 +28,7 @@ summary:
 <li>optimizing source code that was written to take constant time into binary code that <strong>reveals secrets by terminating early</strong>.</li>
 </ol> 
 <p>I am not being very original. According to this <a href="http://kqueue.org/blog/2012/06/25/more-randomness-or-less/">post</a> by Xi Wang, idea (1) is only waiting for someone to give the compiler one last well-calibrated shove. The NULL test optimization was already implemented in the compiler when it was needed for a famous Linux kernel <a href="http://lwn.net/Articles/342330/">exploit</a>. The interesting scenario would have been if someone had found that the code in <code>tun_chr_poll()</code> was almost exploitable and had submitted the GCC optimization to activate the problem, but it did not happen in this order. Idea (3) really <a href="http://lwn.net/Articles/278137/">happened</a>.</p>
-<p>Idea (4) has not been exploited that I know of  but it is only only a matter of time. If I google for “constant-time memcmp”  I may <a href="http://www.serverphorums.com/read.php?12 650482">find</a> an implementation such as follows:</p> 
+<p>Idea (4) has not been exploited that I know of, but it is only a matter of time. If I google for “constant-time memcmp”, I may find an implementation such as follows:</p>
 <pre>int memcmp_nta(const void *cs, const void *ct, size_t count) 
 { 
   const unsigned char *su1, *su2; 
diff --git a/_posts/2013-08-24-Function-pointers-in-C.html b/_posts/2013-08-24-Function-pointers-in-C.html
index ca1b6e432f01d088a6d8dde38b072b81727e7e8e..100716f1443b6f1347ddee5f0e438af2abd3906f 100644
--- a/_posts/2013-08-24-Function-pointers-in-C.html
+++ b/_posts/2013-08-24-Function-pointers-in-C.html
@@ -31,7 +31,7 @@ int main(){
 } 
 </pre> 
 <p>The line <code>void (*p)(float) = f;</code>, which defines a variable <code>p</code> of type “pointer to function that takes a float”, and initializes it with the conversion of <code>f</code>, is legal as per 6.3.2.3:8. However, the following statement, <code>(*p)(3);</code> is actually equivalent to <code>(*p)((float)3);</code>, because the type of <code>p</code> is used to decide how to convert the argument prior to the call, and it is undefined because <code>p</code> points to a function that requires an <code>int</code> as argument.</p> 
-<p>Even if you know that the types <code>int</code> and <code>long</code> are both 32-bit and virtually indistinguishable on your platform (you may be using an ILP32 or an IL32P64 platform), the types <code>int</code> and <code>long</code> are not compatible. Josh Haberman has written <a href="http://blog.reverberate.org/2013_03_01_archive.html">a nice essay on this precise topic</a>.</p> 
+<p>Even if you know that the types <code>int</code> and <code>long</code> are both 32-bit and virtually indistinguishable on your platform (you may be using an ILP32 or an IL32P64 platform), the types <code>int</code> and <code>long</code> are not compatible. Josh Haberman has written <a href="https://blog.reverberate.org/2013/03/cc-gripe-1-integer-types.html">a nice essay on this precise topic</a>.</p> 
 <p>Consider the program:</p> 
 <pre>void f(int x); 
 int main(){ 
diff --git a/_posts/2013-09-02-The-case-for-formal-verification-of-existing-software.html b/_posts/2013-09-02-The-case-for-formal-verification-of-existing-software.html
index ddee8072907ed9b3d8dd27b52768a01ab84fdcd9..2828aaa865b75bf6c52f03749587dc52180ddad7 100644
--- a/_posts/2013-09-02-The-case-for-formal-verification-of-existing-software.html
+++ b/_posts/2013-09-02-The-case-for-formal-verification-of-existing-software.html
@@ -8,7 +8,7 @@ title: "The case for formal verification of existing software"
 summary: 
 ---
 {% raw %}
-<p>Perry E. Metzger <a href="http://permalink.gmane.org/gmane.comp.encryption.general/14818">takes a look at formal verification</a>. This is good stuff; there is a lot to agree with here.</p> 
+<p>Perry E. Metzger takes a look at formal verification [removed dead link]. This is good stuff; there is a lot to agree with here.</p> 
 <p>However, agreeing with Perry's post alone would not make a very interesting counterpoint. If agreeing were the only thing I intended to do, I might not even have written this post. Instead I intended to add, and I apologize in advance for the predictability of my views, that while creating formally verified software from scratch is useful, verifying existing software is useful too.</p>
 <p>Yes, formally verified software written from scratch can now be large enough in scope to be a compiler or a microkernel, but when verifying existing software, we can tackle the problem from the other end: we can pick any useful software component, and verify that. We can pick a software component so useful that it is already <em>used</em> by millions. If we succeed in verifying it, we have put formally verified software in the hands of millions of satisfied users. Transparently.</p>
 <p>Take the example of the <a href="https://polarssl.org">SSL implementation</a> I am taking a couple of weeks to finish massaging through Frama-C. It is not as wide in scope as Quark, seL4 or CompCert. Neither am I aiming for the same kind of functional correctness as these projects are: I am only verifying the absence of undefined behaviors in the component, and verifying the functional dependencies of the cryptographic primitives(*).</p>
diff --git a/_posts/2018-01-25-Analysis-scripts-helping-automate-case-studies-part-1.md b/_posts/2018-01-25-Analysis-scripts-helping-automate-case-studies-part-1.md
index 819da08f774380cc85f02a582968056b8825211e..a6239a676f41a0961529f0549828be6894e2d839 100644
--- a/_posts/2018-01-25-Analysis-scripts-helping-automate-case-studies-part-1.md
+++ b/_posts/2018-01-25-Analysis-scripts-helping-automate-case-studies-part-1.md
@@ -144,7 +144,8 @@ If you want to name your Makefile otherwise, just remember to always add
 `-f <your makefile name>` to the make commands presented in this tutorial.
 
 Our `GNUmakefile` will be created with content based on the
-[template available on Frama-C's Github repository](https://github.com/Frama-C/open-source-case-studies/blob/master/template.mk).
+template previously available on Frama-C's Github repository
+*(update: the template is now distributed with Frama-C)*.
 
 In this tutorial, we consider that Frama-C is installed and in the `PATH`
 to keep the template concise.
diff --git a/_posts/2019-02-26-Finding-unexpected-bugs-in-the-DARPA-CGC-corpus.md b/_posts/2019-02-26-Finding-unexpected-bugs-in-the-DARPA-CGC-corpus.md
index 2f7d6b6c5d8d7c68edeebbfb92f71abf005a6627..5f9a43d3836d70fa95b9a0dc8c044188ef3ff21d 100644
--- a/_posts/2019-02-26-Finding-unexpected-bugs-in-the-DARPA-CGC-corpus.md
+++ b/_posts/2019-02-26-Finding-unexpected-bugs-in-the-DARPA-CGC-corpus.md
@@ -13,7 +13,7 @@ evaluation, organized by NIST.
 The C programs come from a Linux-compatible port of the
 [DARPA Cyber Grand Challenge](http://www.lungetech.com/cgc-corpus/).
 The port was made by
-[Trail of Bits](https://www.trailofbits.com/research-and-development/challenge-sets/)
+[Trail of Bits](https://www.trailofbits.com/)
 and used as one of the evaluation sets of SATE VI.
 
 These challenges contain each one or more bugs which lead to exploitable
diff --git a/download/user-manual-20.0-Calcium.pdf b/download/user-manual-20.0-Calcium.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..66dc8afed0592faf24c48e3515f299422440df6e
--- /dev/null
+++ b/download/user-manual-20.0-Calcium.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ca7790bbe84c26bedf6efb3d9aeb326dabf8a74b99323a6b81e86b1f86c52cd
+size 1390037
diff --git a/download/user-manual-21.0-Scandium.pdf b/download/user-manual-21.0-Scandium.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f5a26196d1204a4f71daa4860a5f138e315bc2e5
--- /dev/null
+++ b/download/user-manual-21.0-Scandium.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42f6555f3ce2c1b292f788169e67585801160d33b146bde62db6ec065072a401
+size 1779440
diff --git a/download/user-manual-21.1-Scandium.pdf b/download/user-manual-21.1-Scandium.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..90da96a17ba97f2948baff545c5af5cfbd2d97c4
--- /dev/null
+++ b/download/user-manual-21.1-Scandium.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f53c658dceb950e5c0a674e7fe16c5310ce7ea01cac73931bdad427034412531
+size 1779440
diff --git a/html/documentation.html b/html/documentation.html
index 6ff08638f0b6c7312ef054cf372f85ed8ea4e265..e1a11a0be450f503c948e02f4f118439051c6f60 100755
--- a/html/documentation.html
+++ b/html/documentation.html
@@ -23,7 +23,7 @@ title: Documentation - Frama-C
               <h4 class="tileTitle"><span>Frama-C</span></h4>
               <ul>
                 <li><a href="/html/kernel.html">Description page</a></li>
-                <li><a href="/download/frama-c-user-manual.pdf">User Manual</a></li>
+                <li><a href="/download/frama-c-user-manual.pdf">User manual</a></li>
                 <li><a href="https://git.frama-c.com/pub/frama-c/blob/master/INSTALL.md">Installation</a></li>
                 <li><a href="/html/get-frama-c.html">Releases</a></li>
                 <li><a href="/html/publications.html">List of publications</a></li>
@@ -35,17 +35,17 @@ title: Documentation - Frama-C
               <h4 class="tileTitle"><span>ACSL</span></h4>
               <ul>
                 <li><a href="/html/acsl.html">Description page</a></li>
-                <li><a href="/download/acsl.pdf">Language Reference</a></li>
-                <li><a href="/download/frama-c-acsl-implementation.pdf">Language Implementation</a></li>
-                <li><a href="https://github.com/acsl-language/acsl">ACSL Git Repository</a></li>
+                <li><a href="/download/acsl.pdf">Language reference</a></li>
+                <li><a href="/download/frama-c-acsl-implementation.pdf">Language implementation</a></li>
+                <li><a href="https://github.com/acsl-language/acsl">ACSL Git repository</a></li>
                 <li><a href="https://github.com/fraunhoferfokus/acsl-by-example/blob/master/ACSL-by-Example.pdf">ACSL by Example</a></li>
               </ul>
             </div>
             <div class="tile">
               <h4 class="tileTitle"><span>Contributing</span></h4>
               <ul>
-                <li><a href="https://git.frama-c.com/pub/frama-c/">Frama-C Git Repository</a></li>
-                <li><a href="https://git.frama-c.com/pub/pub.frama-c.com/">Website Git Repository</a></li>
+                <li><a href="https://git.frama-c.com/pub/frama-c/">Frama-C Git repository</a></li>
+                <li><a href="https://git.frama-c.com/pub/pub.frama-c.com/">Website Git repository</a></li>
                 <li><a href="https://git.frama-c.com/pub/frama-c/blob/master/CONTRIBUTING.md">Contributor guide</a></li>
                 <li><a href="https://git.frama-c.com/pub/frama-c/-/issues">Known issues</a></li>
                 <li><a href="/html/bug_reporting_guidelines.html">Bug reporting guide</a></li>
@@ -60,7 +60,7 @@ title: Documentation - Frama-C
             <div class="tile">
               <h4 class="tileTitle"><span>Plugin Development</span></h4>
               <ul>
-                <li><a href="/download/frama-c-plugin-development-guide.pdf">Developer Manual</a></li>
+                <li><a href="/download/frama-c-plugin-development-guide.pdf">Developer manual</a></li>
               </ul>
             </div>
             {% for plugin in site.fc-plugins %}
@@ -77,7 +77,7 @@ title: Documentation - Frama-C
               <ul>
                 <li><a href="{{plugin.url}}">Description page</a></li>
                 {% if plugin.manual_pdf %}
-                <li><a href="{{plugin.manual_pdf}}">User Manual</a></li>
+                <li><a href="{{plugin.manual_pdf}}">User manual</a></li>
                 {% endif %}
                 {% for add in plugin.additional %}
                 <li><a href="{{add.link}}">{% if add.short %}{{add.short}}{% else %}{{add.name}}{% endif %}</a></li>