Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

Also see [[AdvancedOptions]]
<div class='header' role='banner' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' role='navigation' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' role='navigation' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' role='complementary' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea' role='main'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox where print preview displays the noscript content */
}
<div class='toolbar' role='navigation' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
[[Homotopy Type Theory course (with lecture notes and videos)|]]

@@wip@@ Some personal notes with @@time indexes@@
*Lecture 01 - Course Introduction
**Learning type theory from scratch starting from page one of the HoTT book is probably not a good idea
**Homotopy Type Theory is a self-standing subject, not really tied to homotopy theory
**HoTT is a sophisticated analysis of the concept of equality in constructive type theory
**This course introduction will give some hints about where type theory comes from, starting with ultra-basic things
**The foundation of this subject is Intuitionistic Type Theory, called TT (not ITT; see later why) @@00:11:18@@
**TT is primarily but not exclusively based on the work of Per Martin-Löf, and on analysis and expansion of Brouwer's program
**Intensional type theory (ITT) is the starting point because various other forms of type theory can be seen as extensions of ITT
**For example ETT (extensional type theory) = ITT + ER (equality reflection) + UIP (uniqueness of identity proofs) = intuitionistic theory of sets (types are sets)
**We can even say that ETT, as in NuPRL, is a kind of constructive set theory
**Other example: HoTT (Homotopy Type Theory) = ITT + HIT (higher inductive types) + UA (univalence axiom) = intuitionistic theory of weak infinity groupoids (types are abstract spaces)
**In order to make these statements meaningful we need to know more about the origin of type theory
**Brouwer’s program consequences
***proofs are forms of construction
***technical consequence : proof relevance = proofs are mathematical objects like any other (this makes this approach different from all other approaches to the foundations of mathematics)
**a formal proof is a mathematical object, but it is only a ??? shadow of what a proof is (this is the content of Gödel's incompleteness theorems) @@00:25:50@@
**a formal proof eliminates concepts. A formal proof is different from a proof; confusing them is a common mistake.
**proof relevance is absolutely central in HoTT : proofs of equality in HoTT correspond to paths in a space <= this is the core of HoTT
**synthetic vs analytic (ex: LF/Twelf vs Coq) @@00:33:00@@
**HoTT is synthetic, so clean and mechanizable
**the idea of HoTT is intrinsic to the notion of type theory
**What is type theory ? Type theory is an analysis + codification of Brouwer's intuitionism, drawing on Gentzen's proof theory
**Types classify constructions <= this is sufficient to encompass all known mathematical constructions, and in particular forms of proof
**What are constructions (statics)
***Introduction rules : how to create/effect a construction
***Elimination rules : how to use a construction
**How constructions compute (dynamics/run)
***Inversion principle (ie) conservation of proof = elim is post-inv to intro (ie) you can only take out what you put in
**Herbrand was wrong when he argued against Brouwer's notion of constructivism @@00:43:40@@
**Constructivism is completely compatible with the traditional approach
**Axiomatic freedom of constructive mathematics
***fewer assumptions lead to stronger results
***selectively/locally introduce assumptions
**Type theory is the grand unified theory of computation unified with mathematics @@0:52:20@@
**Type theory is about enriching and extending what we mean by computable
**TT (Type Theory), PT (Proof Theory), CT (Category Theory) are related and are different points of view on the same subject
**Intuitionistic Logic is a social process. A is a true proposition means A has a proof
**We do not expect that either A is true (ie provable) or A is false (ie refutable), because there are open problems @@01:02:40@@
**This is called the open-endedness principle.
**Logical entailment is a notion prior to implication (in traditional Herbrand logic the distinction between entailment and implication is not drawn, and this is a terrible thing)
**Coming next: when are two proofs the same?; a richer type theory called dependent type theory; ITT; HoTT
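The "proofs are constructions" slogan can be made concrete. Here is a minimal Lean 4 sketch (the name `swapAnd` is mine, not from the lecture): under propositions-as-types, a proof of A ∧ B → B ∧ A is literally a small program.

```lean
-- Propositions-as-types: a proof is a construction (a program).
-- A proof of A ∧ B → B ∧ A takes apart a pair and rebuilds it swapped.
def swapAnd {A B : Prop} : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```

Running such a proof (normalizing the term) is exactly the "dynamics" the lecture attributes to the inversion principle.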
*Lecture 02 - Judgements
**Type theory is not yet another axiomatic theory
**The set of rules (trivially true, truth introduction, conjunction, introduction rule, elimination rule, implication, etc.) is not arbitrary; internal coherence and computational properties do matter
**Implication introduction (called natural deduction) is the innovation of Gentzen @@00:16:50@@
**What is entailment (ie) logical consequence (ie) hypothetical judgement (judgement made under hypotheses) ? It's a mapping. Here are the structural properties :
***transitivity (ie) composition : this is inlining from a computer science point of view @@00:23:20@@
***exchange (ie) order does not matter
**Order Theoretic Formulation @@00:33:20@@
**Univalence is a principle of mathematical efficiency @@00:48:00@@
**Positive fragment of IPL
**Set of rules : trivially false, disjunction, ...
**Heyting algebra = lattice with exponentials @@01:01:30@@
**Every Heyting algebra is distributive
**Boolean algebra = complemented distributive lattice, and therefore has exponentials
**Boolean algebra = Heyting algebra with complement @@01:12:00@@
**If something is true in every Heyting algebra it must be provable
**Lindenbaum algebra is used to show completeness
**the whole subject of HoTT rests on the fact that in HoTT there is no global principle of decidability
**Coming next: Hilbert's mistake
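The "Heyting algebra = lattice with exponentials" point has a direct computational reading. A minimal Lean 4 sketch (names `curryP`/`uncurryP` are mine): in the Heyting algebra of propositions, the exponential A → B is characterized by the adjunction (C ∧ A) entails B iff C entails (A → B), which is just currying.

```lean
-- Heyting exponential as currying: (C ∧ A) ≤ B  iff  C ≤ (A → B).
def curryP {A B C : Prop} (f : C ∧ A → B) : C → A → B :=
  fun c a => f ⟨c, a⟩

def uncurryP {A B C : Prop} (g : C → A → B) : C ∧ A → B :=
  fun h => g h.left h.right
```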
*Lecture 03 - Transitivity
**Not every Heyting algebra is a Boolean algebra
**every Boolean algebra is a Heyting algebra
**A proposition can be interpreted in a Heyting algebra : conjunction => meet, disjunction => join
**claim: in IPL not all instances of LEM are provable @@00:13:30@@
**Double negation is not cancelable, this is a weakening @@00:24:20@@
**Not every proposition is stable @@00:31:30@@
**Structural properties are admissible @@00:51:30@@
**weakening is admissible because the rules are polymorphic (ie) the rules hold for any choice of Γ (so we can inductively weaken the premises and re-apply the rules)
**exchange, contraction, reflexivity, transitivity are admissible
**Gentzen's insight (ie) great idea : the inversion principle (ie) elim is post-inverse to intro (the origin of computation) @@01:00:45@@
**This gives rise to a dynamics of proof (ie) run the proof (ie) no difference between math and programming
**Proof relevant logic @@01:08:50@@
**Proof terms are mappings in category theory
**Coming next: proof of Gentzen's normalization property
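The claim that not all instances of LEM are provable, while double negation is only a weakening, can be illustrated in a short Lean 4 sketch (the name `dni` is mine): the forward direction is a one-line program, and no program exists for the converse without assuming classical axioms.

```lean
-- Double-negation introduction is intuitionistically provable:
-- given a : A and a refutation k : ¬A, apply k to a to get absurdity.
def dni {A : Prop} : A → ¬¬A :=
  fun a k => k a

-- The converse ¬¬A → A (stability of A) is NOT provable in IPL;
-- it is equivalent to LEM for A, so "not every proposition is stable".
```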
*Lecture 04 - Exponentials
**Proof terms
***variable = re-expression of reflexivity
***substitution = re-expression of transitivity
***contraction (replication of variables)
***exchange (ie) order does not matter
**we can revisit the rules for the various connectives @@00:10:50@@
**objective : mechanize Gentzen's inversion principle (elim is post inverse to intro)
**The computational core comes from Gentzen's inversion principle, and definitional equality is central to explaining that
**There is no dynamics at all in first order logic
**we can compute by calculation with closed terms (think about polynomials) @@00:25:25@@
**Gentzen's Unicity Principle : intro is post-inverse to elim (ie) intro and elim are mutually inverse
**Propositions as types (the great correspondence)
**constructive point of view: never talk about things just being true, talk about the reasons why => this is the essence of proof-relevant mathematics @@00:42:20@@
**Terminal object 1 : for any object A there exists a unique map A → 1 
**Exponentials @@01:00:00@@
**Coming next: the same for the positive fragment
**The inversion principle and the unicity principle show that propositions, together with proofs taken modulo the equivalence we defined, have the behavior of products and exponentials in a category @@01:17:55@@
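Inversion (β) and unicity (η) for the product can be checked mechanically. A minimal Lean 4 sketch, assuming Lean's definitional η for structures (the concrete types `Nat × Bool` are just an illustration):

```lean
-- β: the projection cancels the pairing (elim is post-inverse to intro).
example (a : Nat) (b : Bool) : (a, b).1 = a := rfl

-- η: re-pairing the projections gives back the original pair
-- (intro is post-inverse to elim); definitional in Lean 4.
example (p : Nat × Bool) : p = (p.1, p.2) := rfl
```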
*Lecture 05 - Equality
**Gentzen's Inversion Principles (β rules) mean that proofs are programs (ie we can run them with this principle of simplification)
**Gentzen's Unicity Principles (η rules)
**From the point of view of proof theory we can say that there is only one lemma in the world; it is just a big conjunction @@00:19:10@@
**The unicity principle tries to express the idea that every proof of a certain type can be put into some kind of standard form @@00:40:40@@
**Shannon expansion is the core of BDDs (binary decision diagrams)
**A case analysis is a degenerate induction @@01:10:50@@
**The induction principle for a finite type is : do all the cases
**β rules seem to be a kind of analytic judgement ("self-evident"). Can be thought of as definitional equality
**η rules seem to be of the character of synthetic judgement ("requires proof"). Can be thought of as propositional equality (ie) expressed by a type
**In HoTT the interplay/distinction between definitional equality and propositional equality is central
**Coming next: Gödel's T = IPL + NNO (Natural Number Object)
*Lecture 06 - IPL
**Implication corresponds to the function space
**Gentzen's Inversion Principles tell us that elimination should cancel introduction
**There are 2 introduction rules for the natural numbers, so there are 2 β rules
***rec(P,Q)(0)  ≡ P
***rec(P,Q)(s(M))  ≡ [rec(P,Q)(M)/x]Q
**The η rule expresses unicity
**Reorganizing the NNO into an initial algebra
**The idea of being a natural number object is the essence of a more general thing @@00:19:15@@
**≡ is not equality
**Extensionally equal (same reference), but intensionally not equal (different sense) [ie expressing the same I/O behavior in two different ways (a different algorithm)]
**Extensional equality is synthetic (requires proof)
**Proofs of extensional equality for higher types like (N → N) → (N → N) become complicated and require a proof @@00:34:00@@
**Recall Martin-Löf's distinction between judgements and propositions. With this in mind:
***Intensional equality is an inductively defined judgement
***Extensional equality is a proposition: it may be subject to judgement
**By propositions as types, extensional equality is a family of types
**One consequence of the univalence axiom is that equality of types is the same as equivalence of types @@00:49:50@@
**Setup for dependent types (aka families)
**The functionality principle says that type families have to respect definitional equality in their free variables @@01:05:25@@
**Equal types classify the same things
**Formation rule (Id-F). You can form a type that express the equality of M and N in A
**Identity rule (Id-I). M is equal to M in the type A
**Coming next: Id-E
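The two β rules for rec(P,Q) above can be transcribed and checked. A minimal Lean 4 sketch (the name `recNat` is mine; here the step `q` takes the predecessor and the recursive result, rather than substituting into an open term `Q` as in the lecture's notation):

```lean
-- The Nat recursor: rec(P,Q)(0) ≡ P and rec(P,Q)(s(M)) ≡ Q applied to
-- the recursive result rec(P,Q)(M).
def recNat {C : Type} (p : C) (q : Nat → C → C) : Nat → C
  | 0     => p
  | m + 1 => q m (recNat p q m)

-- Both β rules hold definitionally:
example {C : Type} (p : C) (q : Nat → C → C) : recNat p q 0 = p := rfl
example {C : Type} (p : C) (q : Nat → C → C) (m : Nat) :
    recNat p q (m + 1) = q m (recNat p q m) := rfl
```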
*Lecture 07 - Dependent Types
**Families of types (ie) indexed by a type
**Example of a family of types (sequences of natural numbers of length x) : x : Nat ⊦ Seq(x) (other notation {Seq(x)}~~x∈Nat~~)
**We do not want a rule of induction in definitional equivalence because this is what distinguishes it from equality @@00:11:00@@
**A doubly indexed family of types is a binary relation.
**Types and propositions are the same thing
**equivalent formulations of P : Id~~A~~(M,N)
***proof/evidence that M is N
***identification of M with N
***path between M and N
**Generalize the propositional negative connectives to their dependent forms @@00:20:55@@
**The essence of the constructive interpretation is that a proof of a universally quantified formula is going to be a function, the same way a proof of an implication is going to be a function @@00:32:40@@
**So far what we had among positives : 0, A + B, Nat @@00:43:38@@
**The issue is that the positive elims "reach into" arbitrary types
**From a programming point of view the proof by induction is a primitive recursive function @@01:05:40@@
**Other form of product type : the Σ variant (the one in the HoTT book); idea : elimination as pattern matching
**Coming next: the identity type
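The family Seq(x) above can be written as an honest indexed type. A minimal Lean 4 sketch (the names `Vec`, `nil`, `cons` are mine, standard for this example): a type family over Nat whose index tracks the length.

```lean
-- Seq(x): sequences of a given length, as a Nat-indexed family of types.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- The index is checked statically: this only type-checks at length 2.
example : Vec Nat 2 := .cons 0 (.cons 1 .nil)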
*Lecture 08 - Transport
***Dependent formulation of
****negatives : ∏ (→), Σ (×)
****positives : +, Nat
***Elims become induction principles
**Identity types @@00:19:55@@
**Notation M =~~A~~ N (do remember that this is a type)
**If we want to say something about the identity type there is a go-to method : apply path induction (the identity elim) @@00:35:35@@
**The definitional equivalences that hold of a proof can affect the well-formedness of a subsequent proof (ie) it depends on the specific code and not on its spec @@01:00:00@@
**The leap from non-dependent to dependent type programming is perhaps a bit more complicated than it might sound at first
**Simple functionality @@01:02:10@@
**ap F u (applying F to the path u)
**Transportation @@01:08:50@@
**lifting property : lifting the path from the base space to the total space ∫~~A~~
**Transportation aka functionality of families
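Both ap and transport above fall out of path induction. A minimal Lean 4 sketch (the names are mine; `cases p` is exactly the identity elim, reducing to the reflexivity case):

```lean
-- ap: maps preserve identifications (functions act on paths).
def ap {A B : Type} (f : A → B) {x y : A} (p : x = y) : f x = f y := by
  cases p; rfl

-- transport: a family P respects paths in its index
-- (lifting a path in the base space to the total space).
def transport {A : Type} (P : A → Type) {x y : A} (p : x = y)
    (px : P x) : P y := by
  cases p; exact px
```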
*Lecture 09 - Universes
***Identity type
****maps preserve identifications (ie proofs of identity) (aka functionality)
**Universes / Large elims @@00:09:20@@
**if(M, Nat, Bool) is not a well-formed type (at this point)
**Large eliminations (ad hoc way) @@00:17:10@@
**Universal types
**A universe is a type of types
**The Burali-Forti paradox can be reproduced inside type theory (ie) the class of all types cannot be a type @@00:26:20@@
**In constructive theories the ordinals do not form a linear ordering
**if(M, Nat, Bool) : U
**Idea: cumulative hierarchy of universes U~~0~~, U~~1~~, U~~2~~, ... @@00:38:15@@
**Typical ambiguity/universe polymorphism : pretend U : U; in fact U~~i~~ : U~~i+1~~ @@00:48:00@@
**ITT introduced in 1973 by Martin-Löf
**--Axiom--Theorem of choice : every total binary relation contains a function
**A special case of proof relevance is proof irrelevance (the constructive framework is strictly superior) @@01:00:00@@
**That is, the axiom of extensionality fails! This is a weakness of ITT
**The big open problem is to understand the computational meaning of HoTT @@01:11:53@@
**ETT : extensional type theory (the word "extensional" is overused in type theory : many different meanings)
***idea : the only identifications are reflexivities
**NuPRL is based on ETT. M : A is undecidable, but checking a derivation is decidable
**Coq is based on ITT : M : A is decidable (though not feasible)
**Coq is a rewrite of NuPRL, but it is the same architecture
*Lecture 10
***+no mucking about with transport
***+"standard mathematics" in that there is "just" equality
***+function extensionality "for free"
***-type checking is no longer decidable
**Types in ETT are (limited to) sets, where the only paths/identifications are reflexivities, aka hSets @@00:12:20@@
**Types in ITT are more general : they are ∞-groupoids (i.e. they have richer path structure)
**For set-level mathematics it is really easiest to work with ETT @@00:20:45@@
**HoTT is ITT + axioms that introduce new identifications among elements of a type
**Where do you get off adding axioms to type theory ? @@00:31:30@@
**Type theory is not an axiomatic theory
**In type theory the idea is that you introduce type structure (intro + elim, inversion/unicity) which gives rise to computational meaning (β)
**Open problem : recover some idea of computation in HoTT (this is the principal problem in HoTT; HoTT needs a computational interpretation) @@00:35:45@@
**Types are ∞-groupoids
**Groupoid Laws @@00:44:20@@
**Id~~A~~(M,M) forms a (higher) group
**To prove that a type is true means to write a program that has that type @@01:00:45@@
**Maps/Functions are functors @@01:10:15@@
*Lecture 11
**Paths-over-Paths @@00:24:54@@
**Equivalence of Types @@00:31:20@@
**Types are not sets @@00:52:18@@
**The notion of bijection is irrelevant/less useful when working with higher types
**Elements of a type should not be thought of as atoms, because they are not @@01:08:40@@
**In a proof relevant environment everything we do is defined in terms of structure
**isomorphism is not a proposition, it is a structure
**Equivalence @@01:10:50@@
**The axiom of functionality says : paths and homotopies are the same thing
**The univalence axiom says : there is an equivalence between paths in a universe and equivalences between the elements : (A ≃ B) ≃ (A =~~U~~ B)
**A ≃ B is a type not a proposition
*Lecture 12 - Equivalence of Types
**isequiv(f) expresses an HPROP (it has at most one proof, up to higher homotopy)
**Naturality ("polymorphism") @@00:14:50@@
**Structure of paths in types @@00:28:20@@
**For the negative types, characterizing the path structure is straightforward; for the positive types it is arbitrarily hard
**The proof that I'm doing in "my" library needs to know the code in the standard library @@00:50:33@@
**Same problem as "why inheritance is a bad idea" : code encodes dependency
**Relying on definitional equivalences is a code-encoded dependency, which flies in the face of everything we know about modularity @@00:55:00@@
**The definitional equality influences which things are well typed @@01:00:00@@
**Coproduct is a nice programming problem
**Haskell GADTs are garbage; the right thing is dependent types @@01:18:30@@
*Lecture 13 - HoTT
**A path in the natural numbers type is either a path from 0 to 0 or the ap of the successor applied to a path between the predecessors @@00:30:00@@
**The essence of dealing with nats is dealing with coproducts
**Mechanization in synthetic homotopy theory is not a total mess of details, unlike in analytic homotopy theory @@00:32:30@@
**Characterizing paths in the identity type @@00:35:00@@
**The identity type corresponds to Homs in an (∞,1)-category @@00:36:30@@
**In an (∞,1)-category, except for the first level, all the Homs are weak groupoids @@00:49:10@@
**The contravariance of identity is "ameliorated" by the groupoid structure @@00:57:20@@
**In HoTT we interpret Id~~A~~(x,y) as paths in A @@01:11:20@@
*Lecture 14 - Homotopy Type Theory
**Justifying J @@00:14:00@@
**In ITT, J expresses an induction on proofs of identity, of which there is exactly one
**In HoTT (or even ITT + FUNEXT) the situation is less clear
**ETT (and also OTT) gives FUNEXT without special arrangement @@00:20:55@@
**HoTT gives up on computation (maybe we can recover it)
**The identity type occupies a special position in type theory @@00:54:50@@
**Homotopy (Type Theory) is (Homotopy Type) Theory @@08:10:00@@
*Lecture 15 - Homotopy Type Theory Part 2
**Interval type I @@00:03:40@@
**The idea in HoTT is that we think of paths as equations, because paths are reversible and composable => equivalence relation => a notion of equality
!Notable Events
*1666 : [[Characteristica universalis|]] concept
*1879 : Frege invents axiomatic predicate logic
*@@1903 : [[The Principles of Mathematics|]] @@
*1934 : Gerhard Gentzen creates the first [[sequent calculi|]], systems LK and LJ.
*@@1939 : [[Éléments de mathématique|]]@@
*1967 : [[Automath|]] the first automated proof checker
*1996 : automated proof of the [[Robbins conjecture|]] using the EQP theorem prover
*1998 : computer-aided proof of the [[Kepler conjecture|]]
*2004 : fully checked formal proof of the famous [[Four Colour Theorem|]].
*2012 : [[Feit-Thompson theorem|]] has been totally checked in Coq
*2013 : Ongoing [[flyspeck|]] project to produce a formal proof of the Kepler Conjecture
*@@2013 : [[Homotopy Type Theory|]]: Univalent Foundations of Mathematics@@
!Notable People involved
*[[Gottfried Leibniz|]] 1646-1716
*[[Gottlob Frege|]] 1848-1925
*[[Giuseppe Peano|]] 1858–1932
*[[Bertrand Russell|]] 1872–1970
*[[Jacques Herbrand|]] 1908-1931
*[[Gerhard Gentzen|]] 1909-1945
*[[Nicolaas Govert de Bruijn|]] 1918-2012
*[[Bourbaki group|]] 1935 -
*[[William McCune|]] 1953-2011
*[[Thomas Callister Hales|]] 1958 -
*[[Georges Gonthier|]] ?
*[[Vladimir Voevodsky|]] 1966 -
This site is highly inspired by [[Garrett Lisi's site|]]
The idea germinated in 2009 after fortuitously visiting the [[vdash web site|]]. That site was the starting point.
The short term objective is to centralize valuable resources about Automated Theorem Proving.
The long term objective is to build a new generation theorem prover with the Scala language and to participate in the [[CADE ATP System Competition|]].
This new theorem prover is called : [[HED|]] (hoper edei deixai).
The [[HED|]] theorem prover is* free software, released under the GNU General Public License version 3 (GPLv3)

(*) HED is actually vaporware but is definitely THE target.
*Algorithms for [[Herbrand|]] sequent extraction
**Gentzen's classic mid-sequent reduction
**Matthias Baaz and Alexander Leitsch. On skolemization and proof complexity. Fundamenta Informaticae, 20(4):353–379, 1994.
**Philipp Gerhardy and Ulrich Kohlenbach. Extracting Herbrand disjunctions by functional interpretation. Archive for Mathematical Logic, 44(5), 2005.
**Bruno Woltzenlogel Paleo. Herbrand sequent extraction. Master's thesis, Technische Universitaet Dresden; Technische Universitaet Wien, Dresden, Germany; Wien, Austria, 2007.
**Bruno Woltzenlogel Paleo. [[Herbrand Sequent Extraction|]]. VDM-Verlag, Saarbruecken, Germany, 2008.
Translation of the description to Bison (the nonterminal heads lost in extraction are restored from the rules that reference them):
%left '='
%left IMPLIES
%left IFF
%left AND
%left OR
%left NOT

sentence
        : atomic_sentence
        | sentence IMPLIES sentence
        | sentence IFF sentence
        | sentence AND sentence
        | sentence OR sentence
        | quantifier VARIABLE sentence
        | '~' sentence %prec NOT
        | '(' sentence ')'
        ;

atomic_sentence
        : PREDICATE '(' term_list ')'
        | term '=' term
        ;

term
        : FUNCTION '(' term_list ')'
        | CONSTANT
        | VARIABLE
        ;

term_list
        : term_list term
        | term
        ;

quantifier
        : THERE_EXISTS
        | FORALL
        ;
*--[[What every (Scala) programmer should know about category theory|]]--
*[[Category Theory, The essence of interface-based design - Erik Meijer|]]
*[[Category Theory Applied to Functional Programming|]]
*[[Category Theory for Computing Science|]]
The two main pieces from which this site is built are [[TiddlyWiki|]], created by Jeremy Ruston, and [[MathJax|]] a joint project of the American Mathematical Society, Design Science, Inc., and the Society for Industrial and Applied Mathematics.  Both of these excellent open source software packages are under continuing development and have supportive communities.

Thanks to [[Lizao Li|]] for the [[MathJax Plugin for TiddlyWiki|MathJax plugin]]
See [[demo|MathJaxTiddlyPluginDemo]]

Some hints about MathJax [[here|]] 
CPDT has an interesting “Why Coq?” section:
It compares Coq with other proof assistants and dependently typed languages.
*Java's network stack can't be implemented correctly/efficiently in iOS ([[source|]])
*** (posix sockets don't turn on radio)
*sub pixel anti-aliasing
*cell networking in iOS is only activated by the Objective-C layer ([[source|]])
*[[Criticisms of JavaFXPorts|!topic/codenameone-discussions/Nipmy684qm4]]
*@@[[POC of a Scala/JavaFX multi-platform (iOS/Android/Windows/Mac OSX/Linux) application with Gluon JavaFXPorts |]]@@
!Codename One

[[Etat De L'Art]]
@@see also [[Homotopy Type Theory]]@@
!Read this first
*[[Unifying Programming and Math – The Dependent Type Revolution|]]
*[[Dependent-type 'revolution'|]]
*[[A gentle introduction to dependent types|]]
!Scala and dependent type
*[[Scala has surprisingly good support for dependent typing|]]
*[[Miles Sabin on Dependent Types with Scala, Shapeless, Scala Macros|]]
*[[Scala vs Idris: Dependent Types, Now and in the Future|]]
*[[42: Rise of the dependent types|]]
!Languages with dependent type support
--[[Dotty]]-- @@this is a common mistake, see [[path-dependent type]] for clarification@@
See also [[Comparison of languages with dependent types|]]
!How to make language with no dependent type support more dependent type friendly
*[[42.type: Literal-based Singleton types|]]
Odersky unveils Dotty at [[Strange Loop 2013|]]
See [[Video|]] and [[Slides|]]
DOT: Calculus for Dependent Object Types
[[Dotty|]]: A Scala-Like Language with DOT as its core
*Dotty has union and intersection types
*In Odersky’s experimental Dotty language, there is no distinction between existential and higher-kinded types
[[Dotty & Scala|]]
From a Scala point of view

*Scala IDE for Eclipse pros
**managed by Scala team
**already familiar with it
*Scala IDE for Eclipse cons
**needs more than 4 GB of memory to build gapt
**Maven integration needs an external plugin
*Netbeans pros
**good native maven integration
*Netbeans cons
**not familiar with it
*IntelliJ pros
**used by most of the professional Scala developers
*IntelliJ cons
**not familiar with it

The winner is : Scala IDE for Eclipse (not yet production ready, but the most promising)
*it offends DRY by forcing you to check both for a failed future and for the error case of Either
*use the left side of the disjunction for errors that are recoverable : validation problems with the user input
*sometimes needed because failing futures are not type checked
*we can use Try as the specialized case where the left-hand side is an exception
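The notes above can be sketched in a few lines of Scala (a hypothetical example: `ValidationError` and `parseAge` are illustration names, not from the notes). `Try` captures the exception-throwing edge; the `Left` of the `Either` carries the recoverable validation problem:

```scala
import scala.util.{Try, Success, Failure}

// Hypothetical recoverable error for user-input validation problems.
final case class ValidationError(msg: String)

// Left side of the disjunction: recoverable validation errors.
// Try absorbs the NumberFormatException that toInt may throw.
def parseAge(s: String): Either[ValidationError, Int] =
  Try(s.trim.toInt) match {
    case Success(n) if n >= 0 => Right(n)
    case Success(n)           => Left(ValidationError(s"negative age: $n"))
    case Failure(_)           => Left(ValidationError(s"not a number: $s"))
  }
```

Unlike a failing `Future`, the error case here is visible in the type, so the compiler forces callers to handle it.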
Welcome to [[EDLA|]]
This site focuses essentially on automated theorem proving (ATP), or automated deduction, which is a subfield of [[automated reasoning|]].

Source code is available on [[GitHub|]]

If you want an overview of what is available on this site, you can use [[Tags]] as a sitemap.

@@If you came here through a Google search : due to the uniqueness of ~TiddlyWiki, the web result you saw on Google may not appear directly on this page. To find it, just remember the query terms and enter them in the search box at the upper right of this page. You should be able to find the item you were looking for.@@
@@see also [[Dependent type]] and [[Homotopy Type Theory]]@@
**[[Practical Foundations for Programming Languages|]]. Online edition is also [[available|]].
**[[Types and Programming Languages|]]
a little comparison of those two books [[here|]]

** Open course project giving a general introduction to Automated Theorem Proving : [[Logic in Action|]]
** Free online (Coursera) course with Michael Genesereth : [[Introduction to Logic|]]
** Online book by Michael Genesereth : [[Introduction to Logic|]]

** Free online (Coursera) course with Martin Odersky [[Functional Programming Principles in Scala|]]
*** This course introduces the cornerstones of functional programming using the Scala programming language.
** Learn Scala for free thanks to O'REILLY : [[Programming Scala|]]
*** the example at the end of Chapter 9 has been updated to Scala 2.8.1 here :
** Not free* but far better introduction to Scala : [[Programming in Scala, Second Edition|]]
*** * the first edition is now available for free :
** Practical techniques for coding in Scala : [[Scala in Depth|]]
** Functional programming from square one in clear, readable language : [[Functional Programming in Scala|]]

** Recommended book about ATP (OCaml oriented) : [[Handbook of Practical Logic and Automated Reasoning|]]
*** Scala port of the OCaml code :  @@wip@@
*** Haskell port of the OCaml code : --
*** F# port of the OCaml code :
** General framework in Scala for applied proof theory :
** Formal Methods :
** One-semester course on Software Foundations (Coq oriented) :

** Basic but clear introduction to BNF :
** The whole story :
** Nicely written introduction to parsing :
** Very enlightening master's thesis about combinator parsing in Scala :

** Haskell implementation with excellent documentation :

**Awesome interactive tutorial :

** A Bibliography of Non-Standard Logics :

**[[Software Foundations|]]
**[[Certified Programming with Dependent Types|]]

** The Bulletin of Symbolic Logic :
** Journal of formalized reasoning :

** The Philosophy of Roger Bishop Jones :
**1. What is functional programming?
***1.1 The fundamental premise of functional programming
****We construct our programs using only @@pure functions@@. In other words, functions that have no @@side effects@@.
****Because of their modularity, pure functions are easier to test, to reuse, to parallelize, to generalize, and to reason about.
***1.2 Exactly what is a (pure) function?
****We can formalize this idea of pure functions by using the concept of @@referential transparency@@ (RT).
****to be referentially transparent—in any program, the expression can be replaced by its result without changing the meaning of the program
****we say that a function is pure if its body is RT, assuming RT inputs.
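The RT definition above can be made concrete with a two-line Scala contrast (a sketch; `add` and `next` are illustration names):

```scala
// Referentially transparent: add(2, 3) can be replaced by 5 anywhere
// in a program without changing its meaning.
def add(a: Int, b: Int): Int = a + b

// NOT referentially transparent: next() reads and updates hidden state,
// so replacing a call by a previously observed result changes behavior.
var counter = 0
def next(): Int = { counter += 1; counter }
```

Calling `add(2, 3)` twice always yields the same value; calling `next()` twice does not, which is exactly what breaks the substitution model.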
***1.3 Functional and non-functional: an example
****Referential transparency enables a mode of reasoning about program evaluation called the @@substitution model@@.
****RT enables @@equational reasoning@@ about programs.
****the substitution model is simple to reason about since effects of evaluation are purely local (they affect only the expression being evaluated)
***1.4 Why functional programming?
****pure function is @@modular@@ and @@composable@@ because it separates the logic of the computation itself from "what to do with the result" and "how to obtain the input"
****any function with side effects can be split into a pure function at the "core" and possibly a pair of functions with side effects; one on the input side, and one on the output side. 
****Functional programmers often speak of implementing programs with a pure core and a thin layer on the outside that handles effects.
***1.5 Conclusion
**2. Getting Started
***2.1 Introduction
****@@higher-order functions@@. These are functions that take other functions as arguments, and may themselves return functions as their output.
***2.2 An example Scala program
****We declare an object (also known as a "@@module@@")
***2.3 Running our program
***2.4 Modules, objects, and namespaces
****Aside from a few technical corner cases, every value in Scala is what's called an "object".
****The difference between a package and a module is that a package cannot contain val or def members and can't be passed around as if it were an object.
***2.5 Function objects: passing functions to functions
****A call is said to be in 'tail position' if the caller does nothing other than return the value of the recursive call.
****It is a common convention to use f, g, and h as parameter names for functions passed to a HOF.
****In FP, we tend to use one-letter or very short variable names, especially when everything there is to say about a value is implied by its type.
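The 'tail position' note above is the standard accumulator pattern; a minimal sketch (the `factorial`/`go` names are illustrative, and `@annotation.tailrec` makes the compiler verify the rewrite into a loop):

```scala
def factorial(n: Int): BigInt = {
  @annotation.tailrec
  def go(n: Int, acc: BigInt): BigInt =
    if (n <= 0) acc
    else go(n - 1, acc * n) // caller does nothing but return the value of this call
  go(n, 1)
}
```

Because the recursive call to `go` is in tail position, the stack does not grow with `n`.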
****2.5.1 @@Anonymous functions@@
*****Also often called @@function literals@@, lambda functions, @@lambda expressions@@, or just lambdas.
*****When we define a function literal, what is actually being defined is an object with a method called apply.
*****objects that have an apply method can be called as if they were themselves methods.
*****Because functions are really just ordinary Scala objects, we say that they are first-class values.
***2.6 Polymorphic functions: abstracting over types
****So far we have been defining only @@monomorphic@@ functions. That is, functions that operate on only one type of data.
****We can call the @@type parameters@@ anything we want though by convention we typically use short, one letter type parameter names, like [A,B,C].
****We are using the term 'polymorphism' in a slightly different way than mainstream object-oriented programming, where that term usually connotes some form of subtyping. One will occasionally see the term parametric polymorphism used to refer to this form of polymorphism.
****A function that is polymorphic in some type is generally forced to represent values of these types as boxed, or non-primitive values
****It is possible to instruct the Scala compiler to produce specialized versions of a function for each of the primitive types, just by adding an annotation to that type parameter: def binarySearch@@[@specialized A]@@(as: Array[A], key: A,gt: (A,A) => Boolean): Int
***2.7 Conclusion
****We saw how the implementations of polymorphic functions are often significantly constrained, such that one can often simply 'follow the types' to the correct implementation. 
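'Following the types' can be seen in the smallest possible example (a sketch; `compose` here is hand-written for illustration): with only the abstract types A, B, C in scope, the signature essentially dictates the one sensible implementation.

```scala
// With nothing known about A, B, C, the only way to produce a C from an A
// is to feed the A to g and the resulting B to f.
def compose[A, B, C](f: B => C, g: A => B): A => C =
  a => f(g(a))
```

For instance, composing `toString` with `length` turns an `Int` into the length of its decimal rendering.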
**3. Functional data structures
***3.1 Introduction
***3.2 Defining functional data structures
****A functional data structure is (not surprisingly!) operated on using only pure functions.
****A trait is an abstract interface that may optionally contain implementations of some methods. Adding sealed in front means that all implementations of our trait must be declared in this file.
****Technically, an abstract class can contain constructors, in the OO sense, which is what separates it from a trait, which cannot contain constructors. 
****Just as functions can be polymorphic, data types can be as well, and by adding the type parameter [+A]
****In the declaration trait List[+A], the + in front of the type parameter A is a variance annotation which signals that A is a @@covariant@@ or 'positive' parameter of List. This means that, for instance, List[Dog] is considered a subtype of List[Animal], assuming Dog is a subtype of Animal.
****Nothing is a subtype of all types
****We will often declare a @@companion object@@ in addition to our data type and its data constructors. This is just an object with the same name as the data type where we put various convenience functions for creating or working with values of the data type.
****Companion objects are more of a convention in Scala. We could have called this module Foo if we wanted.
****For data types, it is a common idiom to have a variadic apply method in the companion object to conveniently construct instances of the data type (a variadic function accepts zero or more arguments of type A)
****We can convert a Seq[A], x, back into something that can be passed to a variadic function using the syntax x: _*, where x can be any expression
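The variadic `apply` and the `x: _*` syntax from the notes above fit together in a short sketch (using a hypothetical `MyList` so as not to clash with the standard `List`):

```scala
sealed trait MyList[+A]
case object MyNil extends MyList[Nothing]
final case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

object MyList {
  // Variadic apply in the companion object: accepts zero or more As.
  def apply[A](as: A*): MyList[A] =
    if (as.isEmpty) MyNil
    else Cons(as.head, MyList(as.tail: _*)) // a Seq passed back as varargs with `: _*`
}
```

`MyList(1, 2, 3)` now builds `Cons(1, Cons(2, Cons(3, MyNil)))`, and `as.tail: _*` is exactly the Seq-to-varargs conversion described above.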
***3.3 Functional data structures and @@data sharing@@
****Data sharing of immutable data often lets us implement functions more efficiently
***3.4 Recursion over lists and generalizing to higher-order functions
****In the Scala standard library, foldRight is a method on List and its arguments are curried similarly for better type inference.
****all operators ending in : are right-associative
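Both points above can be checked in a few lines (a sketch; this `foldRight` mirrors the standard-library shape, with the function in its own curried argument list to help type inference):

```scala
// Currying: z fixes B before the function literal is checked,
// so `_ + _` needs no type annotations at the call site.
def foldRight[A, B](as: List[A], z: B)(f: (A, B) => B): B = as match {
  case Nil    => z
  case h :: t => f(h, foldRight(t, z)(f))
}

// `::` ends in ':' and is therefore right-associative:
// 1 :: 2 :: Nil parses as 1 :: (2 :: Nil).
val xs = 1 :: 2 :: 3 :: Nil
```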
***3.5 Trees
****List is just one example of what is called an algebraic data type (ADT).
****An ADT is just a data type defined by one or more @@data constructors@@, each of which may contain zero or more arguments.
****Pairs and other arity tuples are also algebraic data types.
****Algebraic data types can be used to define other data structures.
***3.6 Summary
****We introduced algebraic data types and pattern matching and showed how to implement purely functional data structures, including the singly-linked list.
**4 Handling errors without exceptions
***4.1 Introduction
****throwing an exception breaks referential transparency
***4.2 Possible alternatives to exceptions
****We need a way to defer the decision of how to handle undefined cases so that they can be dealt with at the most appropriate level.
***4.3 The Option data type
****total function. It takes each value of the input type to exactly one value of the output type.
****The general rule of thumb is that we use exceptions only if no reasonable program would ever catch the exception—if for some callers the exception might be a recoverable error, we use Option to give them flexibility.
****we can @@lift@@ ordinary functions to become functions that operate on Option.
****It's also possible to lift a function by using a @@for-comprehension@@, which in Scala is a convenient syntax for writing a sequence of nested calls to map and flatMap
****Either has only two cases, just like Option. The essential difference is that both cases carry a value
****The Either data type represents, in a very general way, values that can be one of two things. We can say that it is a @@disjoint union@@ of two types.
****When we use it to indicate success or failure, by convention the Left constructor is reserved for the failure case.
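The lifting idea above, including the for-comprehension variant, fits in a few lines (a sketch; `lift` and `map2` follow the book's signatures, here written against the standard `Option`):

```scala
// lift turns an ordinary A => B into Option[A] => Option[B].
def lift[A, B](f: A => B): Option[A] => Option[B] = _ map f

// The two-argument version via a for-comprehension, which is just
// sugar for oa.flatMap(a => ob.map(b => f(a, b))).
def map2[A, B, C](oa: Option[A], ob: Option[B])(f: (A, B) => C): Option[C] =
  for { a <- oa; b <- ob } yield f(a, b)
```

If either input is `None`, the whole computation short-circuits to `None` without any explicit error checking.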
***4.4 Conclusion
****Using algebraic data types such as Option and Either, we can handle errors in a way that is modular, compositional, and simple to reason about.
**5 Strictness and laziness
***we'll see that non-strictness is a fundamental technique for improving on the efficiency and modularity of functional programs in general.
***5.1 Strict and @@non-strict@@ functions
****A function f is @@strict@@ if the expression f(x) evaluates to bottom for all x that evaluate to bottom.
****To say a function is non-strict just means that the function may choose not to evaluate one or more of its arguments.
****For example, the Boolean functions && and || are non-strict.
****The arguments we would like to pass unevaluated have => immediately before their type.
****The unevaluated form of an expression is often called a @@thunk@@.
****An argument that is passed unevaluated to a function will be evaluated once for each place it is referenced in the body of the function.
****Adding the lazy keyword to a val declaration will cause Scala to delay evaluation of the right hand side until it is first referenced and will also cache the result
****We'll often refer to unevaluated parameters in Scala as by-name parameters.
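The by-name / `lazy` distinction above can be demonstrated directly (a sketch; `maybeTwice` is the illustration name):

```scala
// b is a by-name parameter: it is passed as a thunk and re-evaluated
// each time it is referenced in the body (twice here when cond is true).
def maybeTwice(cond: Boolean, b: => Int): Int =
  if (cond) b + b else 0

// lazy val delays evaluation until first reference AND caches the result,
// so the thunk b runs at most once.
def maybeTwiceCached(cond: Boolean, b: => Int): Int = {
  lazy val cached = b
  if (cond) cached + cached else 0
}
```

Passing an argument with a side effect (say, incrementing a counter) shows the effect firing twice in the first version and once in the second.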
***5.2 An extended example: lazy lists
****We'll see how chains of transformations on streams are fused into a single pass, through the use of laziness.
***5.3 Separating program description from evaluation
****This kind of separation of concerns is a central theme in functional programming.
****people sometimes describe streams as "first-class loops" whose logic can be combined using higher-order functions like map and filter.
****The incremental nature of stream transformations also has important consequences for memory usage.
***5.4 Infinite streams and corecursion
****Because they are incremental, the functions we've written also work fine for infinite streams.
****Be careful though, since it's easy to write an expression that never terminates.
****While a recursive function consumes data and eventually terminates, a corecursive function produces data and coterminates.
****Corecursion is also sometimes called guarded recursion.
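A corecursive generator makes the "produces data" point concrete. This sketch uses `LazyList` (the current standard-library replacement for the `Stream` type the book uses); `unfold` is the book's combinator, and each step is guarded by the lazy `#::` constructor:

```scala
// unfold produces a stream from a seed: f either emits a value and the
// next seed, or stops. The recursion is guarded by the lazy #::.
def unfold[A, S](z: S)(f: S => Option[(A, S)]): LazyList[A] =
  f(z) match {
    case Some((a, s)) => a #:: unfold(s)(f)
    case None         => LazyList.empty
  }

// An infinite, perfectly well-defined stream: the Fibonacci numbers.
val fibs: LazyList[BigInt] =
  unfold((BigInt(0), BigInt(1))) { case (a, b) => Some((a, (b, a + b))) }
```

`fibs` never terminates as a whole, but any finite prefix is computable because each step is productive.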
***5.5 Summary
****non-strictness can improve modularity by separating the description of an expression from the "how and when" of its evaluation.
**6 Purely functional state
***6.1 Generating random numbers using side-effects
****Because these state updates are performed as a side effect, these methods are not referentially transparent.
***6.2 Purely functional random number generation
****That is, do not update the state as a side effect, but simply return the new state along with the value we are generating.
***6.3 A better API for state actions
****We're starting to see a pattern: We're progressing towards implementations that don't explicitly mention or pass along the RNG value.
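Explicit state-passing is the whole trick of this chapter; a minimal sketch (this `SimpleRNG` follows the book's shape, reusing the linear congruential step of `java.util.Random`):

```scala
// nextInt returns the generated value AND the next generator state,
// instead of mutating hidden state: same seed, same results, so the
// method is referentially transparent.
final case class SimpleRNG(seed: Long) {
  def nextInt: (Int, SimpleRNG) = {
    val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL
    val n = (newSeed >>> 16).toInt
    (n, SimpleRNG(newSeed))
  }
}
```

To get two different numbers you thread the state by hand: `val (n1, rng2) = rng.nextInt; val (n2, _) = rng2.nextInt`.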
***6.4 Purely functional imperative programming
****Functional programming has excellent support for writing imperative programs
***6.5 Summary
****In this chapter, we touched on the subject of how to deal with state and state propagation.
**7 Purely functional parallelism
***7.1 Introduction
****In this chapter, we are going to build a library for creating and composing parallel and asynchronous computations.
***7.2 Choosing data types and functions
****this law disallows downcasting or isInstanceOf checks (often grouped under the term @@typecasing@@)
****Thus, given map(y)(id) == y, it must be true that map(unit(x))(f) == unit(f(x)). Since we get this second law or theorem "for free", simply because of the @@parametricity@@ of map
****We say that map is required to be @@structure-preserving@@, in that it does not alter the structure of the parallel computation, only the value "inside" the computation.
****Given map(y)(id) == y, it is a @@free theorem@@ that map(map(y)(g))(f) == map(y)(f compose g). (This is sometimes called @@map fusion@@, and it can be used as an optimization)
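The map-fusion free theorem can be spot-checked on plain Lists (a stand-in for the chapter's `Par` type; the names `f`, `g`, `y` are illustrative): fusing two maps into one must not change the result.

```scala
// map(map(y)(g))(f) == map(y)(f compose g): two passes versus one fused pass.
val y = List(1, 2, 3)
val f = (n: Int) => n * 10
val g = (n: Int) => n + 1

val twoPasses = y.map(g).map(f)
val fused     = y.map(f compose g)
```

The equality holds for any pure `f` and `g` by parametricity, which is what licenses the fusion as an optimization.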
***7.3 Conclusion
****the goal of this chapter was to give you a window into the process of functional design
**8 Property-based testing
***8.1 Introduction
! notes (from Tomer mails)
* GAPT traditionally had a somewhat different notation than Prover9 for variables and constants
* Both ATP and Prover9 in GAPT get as input a clause set, which is currently parsed by the "simple" parser
* Difference between the Prover9 format and the gapt "simple" format : the format is almost identical to Prover9 with the exception of uppercase for predicates and lowercase for formulas
* tptp_to_ladr parses a TPTP file into a Prolog-style Prover9 format which is quite different from the simple one
* The simple format was just intended as a utility, but I should indeed have made it identical to Prover9 and not only similar.

! TPTP notation
* a constant can be either an identifier starting with a lower-case letter, a single-quoted string or a numeric constant.
* the TPTP-FOF variables always start with an upper case letter.

! GAPT notation analysed from GAPT code
* constant : Regex("[a-t]" + word)
* variable : Regex("[u-z]" + word)

! from Prover9 documentation
There is a rule for distinguishing variables from constants, because clauses and other formulas can have free variables (variables not bound by quantifiers). The default rule is that variables start with (lower case) u through z. For example, in the formula P(a,x), the term a is a constant, and x is a variable. (See also the [[flag prolog_style_variables|]].)

! Example
GAPT simple format : P(a). -P(x) | P(f(x)). -P(f(f(a))).
private class MyParser(str: String) extends StringReader(str) with SimpleResolutionParserFOL
trait SimpleResolutionParserFOL extends SimpleResolutionParser with SimpleFOLParser

trait SimpleResolutionParser extends ResolutionParser
trait ResolutionParser extends InputParser

trait SimpleFOLParser extends SimpleHOLParser
trait SimpleHOLParser extends HOLParser with JavaTokenParsers with at.logic.language.lambda.types.Parsers
trait HOLParser extends InputParser

trait InputParser extends RegexParsers

Each parsed clause is converted to an FSequent, i.e. a Pair[Seq[HOLFormula],Seq[HOLFormula]], like this :
(f1,...,fn) => (negated formulas with negation stripped, non-negated formulas)
Example : 
clause in GAPT simple format : Much_smaller(snail,bird) | -Snail(snail) | -Bird(bird).
Much_smaller(snail, bird), ¬Snail(snail), ¬Bird(bird) => List(Snail(snail), Bird(bird)),List(Much_smaller(snail, bird))
def clause: Parser[FSequent] = repsep(formula,"|") ~ "." ^^ {
    case ls ~ "." => new Pair(
          ls.filter(filterPosFormulas).map(stripNeg), // negated formulas, negation stripped (this first argument is missing in the original excerpt; stripNeg is a stand-in name)
          ls.filter(x => !filterPosFormulas(x))       // non-negated formulas
     ) }

!Gapt development with Eclipse
Upstream Gapt code is not usable with Eclipse.
I maintain a [[master branch|]] with a few modifications to make Gapt Eclipse compliant.
!Upgrade to Scala 2.10 @@done and committed upstream@@
@@without zinc server, export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=256m" is needed for mvn install or mvn test@@
See [[m2e-210-proper|]] branch
**syntax modification needed for Scala 2.10 (comment in the code about a workaround for the deprecation of operator -, but the problem was not seen with the provided unit tests)
**disable 2 tests with pendingUntilFixed("Problematic with Scala 2.10")
**sequential testing needed
**upgrade jline to version """${scala.version}.${scala-minor.version}"""
**upgrade scala-swing to version """${scala.version}.${scala-minor.version}"""
**upgrade scala-swing to version """${scala.version}.${scala-minor.version}"""
**fix path layout
**sequential testing needed
**sequential testing needed
**sequential testing needed
**sequential testing needed
**syntax modification needed for Scala 2.10
**syntax modification needed for Scala 2.10
**syntax modification needed for Scala 2.10
**upgrade jline to version """${scala.version}.${scala-minor.version}"""
**sequential testing needed
**sequential testing needed
**sequential testing needed
**sequential testing needed
**syntax modification needed for Scala 2.10

! Dependencies
Dependency graph generated with [[el4j DepGraph maven plugin|]] :
! Parser anomaly
*at.logic.parsing.language.arithmetic.HOLTermArithmeticalExporter needs
**import at.logic.transformations.ceres.struct.ClauseSetSymbol
**import at.logic.transformations.ceres.struct.TypeSynonyms.CutConfiguration
*at.logic.transformations.ceres.struct needs
**import at.logic.calculi.lksk
but maven test is ok without ceres

! Upgrade to Scala 2.8
*code update from scala 2.7 to scala 2.8 in [[r457|]]
*scala 2.8 warning cleanup in [[r460|]]
*scala 2.8 warning cleanup ended in [[r470|]]

This tiddler was automatically created to record the details of this server
|Description|Google Analytics tracker|
|Author|Julien Coloos|
|License|[img[CC BY-SA 3.0|][]]|

This plugin enables Google Analytics tracking inside TiddlyWiki.

The version used is the asynchronous one ({{{ga.js}}}).
The plugin comes with its own configuration, which is stored persistently inside the (hidden) [[SystemSettings]] tiddler.
The configuration has to be set before being effective: it can be done in the plugin tiddler (see below) if TiddlyWiki is not in read-only mode. Tracking works if an account ID has been set, tracking has been enabled, and TiddlyWiki access is non-local.

Tracking can be reported as either:
* page views
** pages are named {{{/#Tiddler name}}}
* events
** Category: {{{Tiddlers}}}
** Action: {{{Open}}}, {{{Refresh}}}, {{{Edit}}}, {{{Search}}}, {{{Close}}} or {{{CloseAll}}}
** Label
*** for {{{CloseAll}}} action: excluded tiddler
*** for {{{Search}}} action: searched text
*** otherwise, tiddler name on which action is performed
** Value: for the {{{CloseAll}}} action, the number of closed tiddlers
** Note: Google Analytics script limits the number of events (1 every 5 seconds, with a burst limit of 10)
Tracking can be globally disabled, or enabled per action on each tiddler:
* //Open//: when tiddler was not yet displayed
** Note: default tiddlers do not trigger this action when accessing TiddlyWiki
* //Refresh//: when tiddler was already displayed
** Note: this action is automatically triggered after editing a tiddler
* //Edit//: when editing (or viewing in read-only mode) the tiddler
* //Close//: when tiddler was displayed
** this action is never tracked in //pages views// tracking
** the //CloseAll// action is triggered by the TiddlyWiki links //close all// and //close others// if at least one tiddler was closed; the individual tiddlers closed this way are not tracked as //Close// actions
* //Search//: when searching in tiddlers
** this action is never tracked in //pages views// tracking
** {{{CloseAll}}} and {{{Open}}} actions are not taken into account while a search is performed: TiddlyWiki automatically closes the open tiddlers before searching, then opens the tiddlers that match the searched text


!Revision History
!!v1.2.1 (2013-11-07)
* add configurable regular expression to disable tracking according to hosting URL

!!v1.2.0 (2011-05-18)
* do not trigger {{{CloseAll}}} and {{{Open}}} actions when search is performed
* added the {{{Search}}} action

!!v1.1.0 (2011-05-17)
* do not trigger {{{Open}}} action when displaying default tiddlers
* added the {{{CloseAll}}} action

!!v1.0.0 (2011-05-14)
Initial release.

/* Google Analytics queue object. Needs to be global. */
var _gaq = _gaq || [];

if (!config.extensions.GATracker) {(function($) {

version.extensions.GATrackerPlugin = {major: 1, minor: 2, revision: 1, date: new Date(2013, 10, 7)}; /* month is 0-based: 2013-11-07 */

/* Prepare overridden TiddlyWiki displaying */
var trackOptions = {};
var displayDefault = 0, closingAll = 0, searching = 0;
var pl = config.extensions.GATracker = {

getOption: function(optKey) {
	return (config.optionsSource && (config.optionsSource[optKey] == "setting")) ? config.options[optKey] : null;
},

setOption: function(optKey, value) {
	config.options[optKey] = value;
	config.optionsSource[optKey] = "setting";
},

loadOptions: function() {
	var gaTrack = (pl.getOption("txt_GATracker_track") || "1,0,1,1,1,0,0").split(",");
	trackOptions = {
		id: pl.getOption("txt_GATracker_id"),
		enabled: parseInt(gaTrack[0] || "1"),
		url_exclude: pl.getOption("txt_GATracker_url_exclude") || "",
		type: parseInt(gaTrack[1] || "0"),
		events: {
			open: parseInt(gaTrack[2] || "1"),
			refresh: parseInt(gaTrack[3] || "1"),
			edit: parseInt(gaTrack[4] || "1"),
			close: parseInt(gaTrack[5] || "0"),
			search: parseInt(gaTrack[6] || "0")
		}
	};
	/* Reconstructed guard (the original line was garbled): treat an empty account ID as unset */
	if ( && ! { = null; }
},

saveOptions: function() {
	var opts = trackOptions.enabled && "1" || "0";
	opts += "," + trackOptions.type;
	for (var ev in {
		opts += "," + ([ev] && "1" || "0");
	}
	pl.setOption("txt_GATracker_id", || "");
	pl.setOption("txt_GATracker_url_exclude", trackOptions.url_exclude || "");
	pl.setOption("txt_GATracker_track", opts);
},

track: function() {
	_gaq.push.apply(_gaq, arguments);
},

trackAndDisplayDefaultTiddlers: function() {
	displayDefault = 1;
	try { pl.displayDefaultTiddlers.apply(this, arguments); } catch(e) {}
	displayDefault = 0;
},

trackAndDisplayTiddler: function(srcElement, tiddler, template, animate, unused, customFields, toggle, animationSrc) {
	if (!displayDefault) {
		var trackEvent, title = (tiddler instanceof Tiddler) ? tiddler.title : tiddler;
		if (story.getTiddler(title)) {
			/* Tiddler is already displayed */
			if (toggle === true) {
				/* Closing tiddler: tracked in a separate function */
			}
			else if (template === DEFAULT_EDIT_TEMPLATE) {
				if ( trackEvent = "Edit";
			}
			else if ( trackEvent = "Refresh";
		}
		else if ( && !searching) trackEvent = "Open";

		if (trackEvent) pl.track(trackOptions.type ? ["_trackPageview", "/#" + title] : ["_trackEvent", "Tiddlers", trackEvent, title]);
	}
	pl.displayTiddler.apply(this, arguments);
},

trackAndCloseTiddler: function(title, animate, unused) {
	if (closingAll) closingAll++;
	else pl.track(["_trackEvent", "Tiddlers", "Close", title]);
	pl.closeTiddler.apply(this, arguments);
},

trackAndCloseAllTiddlers: function(excluded) {
	closingAll = 1;
	try { pl.closeAllTiddlers.apply(this, arguments); } catch(e) {}
	if ((closingAll > 1) && !searching) pl.track(["_trackEvent", "Tiddlers", "CloseAll", excluded, closingAll - 1]);
	closingAll = 0;
},

trackAndSearch: function(text, useCaseSensitive, useRegExp) {
	if (!trackOptions.type && pl.track(["_trackEvent", "Tiddlers", "Search", text]);
	searching = 1;
	try {, arguments); } catch(e) {}
	searching = 0;
}

};

/* Load the persisted configuration before deciding whether to track */
pl.loadOptions();

/* Only track in non-local mode */
var local = "file:" == document.location.protocol;
var url_excluded = trackOptions.url_exclude.length ? (new RegExp(trackOptions.url_exclude, "i")).test(document.location.href) : 0;
if (!local && && trackOptions.enabled && !url_excluded) {
	/* Insert script tag to load GA (the standard async ga.js snippet; the garbled host string restored) */
	$("head").eq(0).prepend($("<script/>").attr({type: "text/javascript", async: "true", src: ("https:" == document.location.protocol ? "https://ssl" : "http://www") + ""}));

	/* Override TiddlyWiki display */
	pl.displayTiddler = story.displayTiddler;
	story.displayTiddler = pl.trackAndDisplayTiddler;
	pl.displayDefaultTiddlers = story.displayDefaultTiddlers;
	story.displayDefaultTiddlers = pl.trackAndDisplayDefaultTiddlers;
	if (!trackOptions.type && {
		pl.closeTiddler = story.closeTiddler;
		story.closeTiddler = pl.trackAndCloseTiddler;
		pl.closeAllTiddlers = story.closeAllTiddlers;
		story.closeAllTiddlers = pl.trackAndCloseAllTiddlers;
	} =; = pl.trackAndSearch;

	/* Initialize tracking */
	pl.track(["_setAccount",], ["_trackPageview"]);
}

config.macros.GATrackerConfig = {

handler: function(place, macroName, params, wikifier, paramString, tiddler) {
	$(createTiddlyElement(place, "div")).html("Tracking status: <span style='color:" + ( && trackOptions.enabled ? "green'>enabled" : "red'>disabled") + "</span> and <span style='color:" + (local ? "red'>" : "green'>non-") + "local</span>" + (!local && url_excluded ? ", but <span style='color:red'>excluded</span>" : ""));
	if (readOnly) {
		$(createTiddlyElement(place, "div")).html("Configuration is not available in read-only mode");
		return;
	}
	var formNode = $(createTiddlyElement(place, "div")).html("<div>Google Analytics plugin configuration:</div><table><tr><td>Account ID:</td><td><input id='ga_id' type='text'/></td></tr><tr><td>Tracking:</td><td><input id='ga_enabled' type='checkbox'/>Enabled<br/><br/>How: <select id='ga_track'><option value='0'>Events</option><option value='1'>Pages</option></select><br/><br/>What:<br/><input id='ga_track_open' type='checkbox'/>Open<br/><input id='ga_track_refresh' type='checkbox'/>Refresh<br/><input id='ga_track_edit' type='checkbox'/>Edit<br/><input id='ga_track_close' type='checkbox'/>Close<br/><input id='ga_track_search' type='checkbox'/>Search<br/></td></tr><tr><td>URL exclusion:</td><td><input id='ga_url_exclude' type='text'/></td></tr></table><input id='ga_action_submit' type='submit' value='Apply'/>");
	$("#ga_id", formNode).val(;
	$("#ga_enabled", formNode)[0].checked = trackOptions.enabled;
	$("#ga_track option", formNode).eq(trackOptions.type)[0].selected = true;
	for (var ev in {
		$("#ga_track_" + ev, formNode)[0].checked =[ev];
	}
	$("#ga_url_exclude", formNode).val(trackOptions.url_exclude);
	$("#ga_action_submit", formNode).click(function() { = $("#ga_id", formNode).val();
		if (! = null;
		trackOptions.enabled = $("#ga_enabled", formNode)[0].checked;
		trackOptions.url_exclude = $("#ga_url_exclude", formNode).val();
		if (!trackOptions.url_exclude.length) trackOptions.url_exclude = null;
		trackOptions.type = parseInt($("#ga_track", formNode).val());
		for (var ev in {[ev] = $("#ga_track_" + ev, formNode)[0].checked;
		}
		/* Persist the new configuration */
		pl.saveOptions();

		var nodeDisplay = story.findContainingTiddler(place);
		var tiddlerDisplay;
		if (nodeDisplay) tiddlerDisplay = store.getTiddler(nodeDisplay.getAttribute("tiddler"));
		story.refreshTiddler(tiddlerDisplay ? tiddlerDisplay.title : tiddler.title, null, true);
	});
}

};

})(jQuery);}

*[[Standard ML as a Proof System, without GADTs|]]
*[[Proving Type Inequality in C++|]]
*[[EvenOdd in Agda, Idris, Haskell, Scala|]]
*[[Proof that numbers go even-odd-even-odd...|]] in Haskell
*[[How to get the Scala compiler to prove e.g. that a type-level natural number isn't divisible by three|]]
*[[A proof of Löb's theorem in Haskell|]]
Let's prove C, C ⇒ A, ¬A ∨ B → B by refutation.
We assume, for contradiction, that there are A, B, C such that:
1) C, C ⇒ A and ¬A ∨ B are all true
2) B is false

Take C ⇒ A:
we get that C is false or A is true;
but we know C is true, so C cannot be false, and therefore A is true.

Now take ¬A ∨ B:
we get that ¬A is true or B is true, but ¬A cannot be true because A is true.
Conclusion: B is true. But we supposed B false in (2), so we have a contradiction, and this proves C, C ⇒ A, ¬A ∨ B → B.

We can lay this proof out as a table. Read it from bottom to top.
|| @@''A'', C, C ⇒ A, ¬A ∨ B → A, B@@ |||
|| ''¬A'', A, C, C ⇒ A, ¬A ∨ B → B | //^^<= hypothesis 3,hypothesis 4 =>^^// | @@''B'', A, C, C ⇒ A, ¬A ∨ B → B@@ |
| @@C, C ⇒ A, ¬A ∨ B → B,''C''@@ | //^^<= hypothesis 1,hypothesis 2 =>^^// | ''A'', C, C ⇒ A, ''¬A ∨ B'' → B ||
|| C, ''C ⇒ A'', ¬A ∨ B → B |||
*each hypothesis we make is shown in bold
*each contradiction is @@highlighted@@: it occurs when the same formula appears on both the left and the right of the sequent
*→ separates the true formulas from the false formulas
*a comma means conjunction on the left and disjunction on the right
''This is the very essence of sequent calculus.''
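The refutation above can also be checked mechanically by enumerating all truth assignments. A minimal sketch in JavaScript (the {{{entails}}} helper is mine, not from any library):

```javascript
// Brute-force truth-table check that C, C ⇒ A, ¬A ∨ B entails B.
function entails(premises, conclusion) {
  // Enumerate all assignments to A, B, C.
  for (var a = 0; a <= 1; a++)
    for (var b = 0; b <= 1; b++)
      for (var c = 0; c <= 1; c++) {
        var v = {A: !!a, B: !!b, C: !!c};
        if (premises.every(function(p) { return p(v); }) && !conclusion(v))
          return false; // counter-model found: entailment fails
      }
  return true;
}

var premises = [
  function(v) { return v.C; },          // C
  function(v) { return !v.C || v.A; },  // C ⇒ A
  function(v) { return !v.A || v.B; }   // ¬A ∨ B
];
var conclusion = function(v) { return v.B; };

console.log(entails(premises, conclusion)); // true
```

Finding a counter-model here would correspond exactly to the refutation failing.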
!What is Automated Theorem Proving and what has it really been useful for?
*[[Geoff Sutcliffe's overview|]]
*[[Wikipedia article|]]
*[[Great introduction in French by Benjamin Werner|]]
*[[[VIDEO] Overview of Automated Reasoning by Peter Baumgartner|]]
!What should you learn to understand ATP ?
*see [[First-class resources]]
!Some concrete examples
*[[How to Prove Stuff Automatically|]]
*[[A concrete step by step example|]]
*[[Another concrete step by step example with a (very poor quality) Video from John Harrison|]]
You can also see algorithms <<tag explained>> and introductory documents [[here|]] 
!And then ?
@@You can see all these from a new angle, welcome to [[Dependent type]] and begin the journey !@@
!related to formal logic 
*[[Atomic formula|]]
*[[Conjunctive normal form|]]
*[[Skolem normal form|]]
*[[De Bruijn index|]]
*[[Typed lambda calculus|]]
*[[First-order logic|]]
*[[Higher-order logic|]]
*[[Combinatory logic|]]
*[[Principle of explosion|]]
!related to sequent calculus
*[[Turnstile (symbol)|]]
*[[Sequent calculus|]]
*[[Satisfiability Modulo Theories|]]
[[The grammar-combinators Parser Library|]] combines the advantages of parser combinators with the power that is standard in parser generators and their associated tooling.
* Preface
** What's not in the book
*** model checking
*** inductive theorem proving
*** many-sorted logic
*** modal logic
*** description logics
*** intuitionistic logic
*** lambda calculus
*** higher-order logic
*** type theory
* Chapter 1 · Introduction
** Boole's algebra
** two arguments have the same form if they are both instances of the same formal expression, consistently replacing variables by other propositions
** we can use the formal language to make a mathematically precise definition of logically valid arguments
** format used by people = the concrete syntax
** form used for manipulations = the abstract syntax (AST)
** translate input from concrete syntax to abstract syntax = parsing
*** lexical analysis (scanning) decomposes the sequences of input characters into ‘tokens’ 
*** parsing converts the linear sequences of tokens into an abstract syntax tree
** translate output back from abstract syntax to concrete syntax = prettyprinting
* Chapter 2 · Propositional logic
** Boole used traditional algebraic signs like ‘+’ for the logical connectives
** p(q + r) = pq + pr is familiar but p+qr = (p+q)(p+r) is confusing so we will use special symbols for the connectives
** the ultimate meaning of a formula is just one of the two truth-values ‘true’ and ‘false’
** we can naturally extend the enumeration in truth-table form from the basic operations to arbitrary formulas
** we also have to accept that ‘p implies q’ is true whenever p is false, regardless of q
** a formula is said to be 
*** a tautology if its truth-table value is ‘true’ in all rows
*** satisfiable if its truth-table value is ‘true’ in at least one row
*** unsatisfiable if its truth-table value is ‘false’ in all rows
** a set Γ of formulas is said to be satisfiable if there is a valuation v that simultaneously satisfies them all
** the notation Γ |= q to mean ‘for all valuations in which all p ∈ Γ are true, q is true’
** Compactness : For any set Γ of propositional formulas, if each finite subset Δ ⊆ Γ is satisfiable, then Γ itself is satisfiable
* Chapter 3 · First-order logic
** First- order (predicate) logic extends propositional logic in two ways
*** the atomic propositions can be built up from non-propositional variables and constants using functions and predicates
*** the non-propositional variables can be bound with quantifiers
** The name ‘first-order logic’ arises because quantifiers can be applied only to object-denoting variables, not to functions or predicates
** Logics where quantification over functions and predicates is permitted (e.g. ∃f. ∀x. P [x, f(x)]) are said to be second-order or higher-order
** By analogy with propositional logic, a first-order formula is said to be logically valid if it holds in all interpretations and all valuations
** The notation Γ |= p means ‘p holds in all models of Γ’
** We have no way at all of evaluating whether a formula holds in an interpretation with an infinite domain
** Note the contrast with propositional logic, where the propositional variables range over a finite (2-element) set which can therefore be enumerated exhaustively
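The contrast can be made concrete: over a //finite// domain we can evaluate a quantified formula exhaustively. A sketch checking ∀x ∃y. succ(x) = y over Z~~5~~ (names are illustrative):

```javascript
// Evaluating a first-order formula in a *finite* interpretation:
// domain Z_5 with successor modulo 5.
var domain = [0, 1, 2, 3, 4];
function succ(x) { return (x + 1) % 5; }

// ∀x ∃y. succ(x) = y — quantifiers become exhaustive loops.
var holds = domain.every(function(x) {
  return domain.some(function(y) { return succ(x) === y; });
});
console.log(holds); // true
```

With an infinite domain the outer {{{every}}} could never terminate, which is exactly the point the notes make.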
!Read this first
*[[Homotopy Type Theory: a new foundation for 21st-century mathematics|]]
*Short story of HoTT : [[Vladimir Voevodsky's talk at the Heidelberg Laureate Forum (Video, ~30 min)|]]
!Highly recommended course
*[[15-819 Homotopy Type Theory]] by Robert Harper
*[[The Types Project|]]
*[[Homotopy Type Theory|]]
*[[The HoTT book : the story|]]
*Homotopy type theory is a new branch of mathematics based on a recently discovered connection between homotopy theory and type theory
*univalent foundations will eventually become a viable alternative to set theory as the “implicit foundation” for the unformalized mathematics done by most mathematicians
*In type theory, unlike set theory, objects are classified using a primitive notion of type, similar to the data-types used in programming languages
*we treat types as ∞-groupoids; this is a name for the “invariant objects” of homotopy theory
*The key new idea of the homotopy interpretation is that the logical notion of identity a = b of two objects a, b : A of the same type A can be understood as the existence of a path p : a ~> b from point a to point b in the space A
*This also means that two functions f, g : A → B can be identified if they are homotopic, since a homotopy is just a (continuous) family of paths p~~x~~ : f(x) ~> g(x) in B, one for each x : A. In type theory, for every type A there is a (formerly somewhat mysterious) type Id~~A~~ of identifications of two objects of A; in homotopy type theory, this is just the path space A^^I^^ of all continuous maps I → A from the unit interval. In this way, a term p : Id~~A~~(a,b) represents a path p : a ~> b in A
*Adding univalence to type theory in the form of a new axiom has far-reaching consequences, many of which are natural, simplifying and compelling
*At the risk of oversimplifying, we can state this succinctly as follows: Univalence Axiom: (A = B) ≃ (A ≃ B)
*In other words, identity is equivalent to equivalence
*From the homotopical point of view, univalence implies that spaces of the same homotopy type are connected by a path in the universe U
*From the logical point of view, however, it is a radically new idea: it says that isomorphic things can be identified!
*Thus in sum, for points A and B of the universe U (i.e., small types), the univalence axiom identifies the following three notions:
**(logical) an identification p : A = B of A and B
**(topological) a path p: A ~> B from A to B in U
**(homotopical) an equivalence p : A ≃ B between A and B
*One of the classical advantages of type theory is its simple and effective techniques for working with inductively defined structures
*In homotopy theory, it is natural to consider also “inductively defined spaces” which are generated not merely by a collection of points, but also by collections of paths and higher paths
*Of course, there are many specific examples of the use of type theory without the new homotopy type theory features to formalize mathematics
*For this reason, most of the applications in this book have been chosen to be areas where univalent foundations has something new to contribute that distinguishes it from existing foundational systems
*One can imagine a not-too-distant future when it will be possible for mathematicians to verify the correctness of their own papers by working within the system of univalent foundations, formalized in a proof assistant.
*from every proof we can automatically extract an algorithm; this can be very useful in applications to computer programming
*Chapter 1 - Type theory
**Type theory versus set theory
[[Idris|]] is a general purpose pure functional programming language with dependent types.
!Why Idris
*Scala does not explicitly support dependent types
!How does Idris compare to other dependently-typed programming languages?
!State of the art
Extract from [[Inductive Theorem Proving Automated Reasoning Petros Papapanagiotou|]]
*Inductive Proofs
**Appear very often in formal verification and automated reasoning tasks.
**Are hard to automate.
*So far
**Advanced automated provers (ACL2, IsaPlanner, etc.)
**Advanced techniques (rippling, decision procedures, etc.)
**These still require a fair amount of user interaction.
*Work continues on
**More advanced heuristics
***Better generalization
***Counterexample checking
***Productive use of failure (IsaPlanner)
***More decision procedures
**Termination heuristics
!Inductive Theorem Provers
[[List of All Inductive Theorem Provers in The World|]]
*Formulas and proofs in first-order logic
***In first-order logic and, in particular, in proof theory, the objects under study are formulas and their proofs.
***Terms and formulas form the grammar of a language simplified to the extreme
****terms: the objects whose properties we want to prove
****formulas: properties of the objects
****proofs: deductions under hypotheses
***do not mix up the levels at which terms, formulas and proofs live
****object level: formulas, derivations
****meta level: properties, proofs
**Formulas
***to specify the language, it suffices to give the list of its symbols
***A first-order language is given by a family of symbols
****constant symbols
****function symbols
****relation symbols
***a constant symbol can be seen as a function symbol with 0 arguments
***A closed term is a term that contains no variables
***Proof by induction on the height of a term is sometimes insufficient => proof by induction on the size of the term
***Formulas are built from so-called atomic formulas using connectives and quantifiers
***As with terms, we will later be led to define notions (or prove results) by induction on the size of a formula
***@@Free and bound variables@@
***For proof-assistant software, renaming variables raises difficult problems
***instead of using names for variables, one solution is to use integers, called de Bruijn indices
***An occurrence of a variable x is replaced by the number of quantifiers one must cross (on the branch going from that occurrence to the root of the formula) before reaching the quantifier that binds the variable.
***Unfortunately this notation, very convenient for computers, makes formulas unreadable for humans
***A closed formula is a formula with no free variables
***If the only relation symbols of the language are relations of arity 0, quantifiers are useless. We then obtain propositional calculus.
**Proofs in natural deduction
***Several things may seem surprising
****There are only finitely many rules: two for each connective (and for equality) plus three general rules
****The same rules serve for all of mathematics: algebra, analysis, geometry, etc.
***A sequent is a pair (written Γ ⊢ F) where
****Γ is a finite set of formulas; Γ represents the hypotheses
****F is a formula: the one we want to prove
***The symbol ⊢ is read "proves"
***The rules
****Axiom: if the conclusion of the sequent is one of the hypotheses, the sequent is provable
****Weakening: extra hypotheses may be added
****Implication introduction: to prove A → B, assume A and prove B
****Implication elimination (modus ponens): to prove B, if we know a theorem of the form A → B, it suffices to prove A
****Conjunction introduction: to prove A ∧ B, it suffices to prove A and to prove B
****Conjunction elimination: from A ∧ B we may deduce A and B
****Disjunction introduction: to prove A ∨ B, it suffices to prove A or to prove B. The conclusion is weaker than the premise.
****Disjunction elimination: reasoning by cases
****Negation introduction: to prove ¬A, assume A and derive absurdity
****Negation elimination: if we have proved ¬A and A, then we have proved absurdity
****Classical absurdity: this rule may be applied at any time. To prove A, assume ¬A and look for a contradiction
****@@Universal quantifier introduction@@: to prove ∀x A, it suffices to prove A while making no hypothesis about x
****Universal quantifier elimination: if A has been proved for all x, then A may be used with any object t
****Existential quantifier introduction: to prove ∃x A, it suffices to find an object t for which we can prove A[x := t]
****@@Existential quantifier elimination@@: given a hypothesis of the form ∃x A, we may use it by taking an x that satisfies A
****Equality introduction: we can always prove t = t. This rule says that equality is reflexive.
****Equality elimination: this rule expresses that equal objects have the same properties.
***Some heuristics
****if an introduction rule can be applied, apply it
****Otherwise, apply an elimination rule to a hypothesis (or a theorem)
***∧i is reversible but the ∨i rules are not
***So, when using a non-reversible rule, keep in mind that we may have set off in the wrong direction
***Natural deduction without sequents: the set of hypotheses is written down once and for all, and the rules are applied to the conclusion only
***Tree notation becomes unusable as soon as a proof runs to more than a few lines; a linear notation is preferred
***Derived rules
****intuitionistic absurdity
****Peirce's law
****excluded middle
****De Morgan's laws
***Simplifications of the linear notation
****Implicit weakenings
****Sequences of rules
****Omission of sub-proofs
**Formal and informal proofs
**First theoretical results
**Alternative proof systems
***Sequent calculus (Gentzen)
***Hilbert systems
***The systems used in automated theorem proving (different presentations of sequent calculus)
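The heuristic "introduction rules first, then eliminations" can be sketched as a tiny goal-directed prover for the →/∧ fragment (representation and names are mine; this is not one of the systems listed above):

```javascript
// Formulas: {op:"var",name} | {op:"imp",l,r} | {op:"and",l,r}.
function eq(a, b) { return JSON.stringify(a) === JSON.stringify(b); }

function prove(hyps, goal, depth) {
  if (depth <= 0) return false; // crude bound against looping
  // Axiom: the goal is among the hypotheses.
  if (hyps.some(function(h) { return eq(h, goal); })) return true;
  // →-introduction: to prove A → B, assume A and prove B.
  if (goal.op === "imp") return prove(hyps.concat([goal.l]), goal.r, depth - 1);
  // ∧-introduction: prove both conjuncts.
  if (goal.op === "and")
    return prove(hyps, goal.l, depth - 1) && prove(hyps, goal.r, depth - 1);
  // →-elimination (modus ponens): from a hypothesis A → goal, reduce to A.
  return hyps.some(function(h) {
    return h.op === "imp" && eq(h.r, goal) && prove(hyps, h.l, depth - 1);
  });
}

var A = {op:"var", name:"A"}, B = {op:"var", name:"B"};
// ⊢ (A → B) → A → B
prove([], {op:"imp", l:{op:"imp", l:A, r:B}, r:{op:"imp", l:A, r:B}}, 10); // true
```

Disjunction is deliberately omitted: the ∨i rules are not reversible, so a real prover would need backtracking there.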
*Completeness of first-order logic
***The completeness theorem says, intuitively, that a formula is true if and only if it is provable
***the two implications:
****easy: if a formula is provable, then it is true
****hard: if a formula is true, then there is a proof (using the rules of the previous chapter)
***@@A formula F is a (logical) theorem iff, for every interpretation M and every environment e, M satisfies F in the environment e@@
***We can therefore say that the theorems are the formulas that are always true
**The case of propositional calculus
***The completeness theorem for propositional calculus (F is a tautology iff F is provable) is a special case of the completeness theorem for first-order logic
**A bit of model theory
***Model theory is the general study of mathematical structures
***@@Definitions: morphism, isomorphism, isomorphic@@
***One sometimes says that a morphism is a function that "respects" the structures, or that "transports" one of the structures into the other.
***Two isomorphic structures are often considered identical: indeed, one may regard them as the same up to a bijection.
**@@Completeness theorem@@
***A theory is a (finite or infinite) set of closed formulas
***The elements of a theory are often called the axioms of that theory
***Completeness theorem: let T be a theory and F a closed formula. T ⊢ F iff T ⊨ F
**@@Compactness theorem@@
***it follows immediately from the completeness theorem
***Compactness theorem: T is satisfiable iff every finite subset of T is satisfiable
**Canonical forms
***every formula can be put, up to equivalence, into certain canonical forms
****prenex form (all quantifiers pulled to the front)
***Every formula is equivalent to a formula in prenex form
*A collection of theories
***some examples of classical theories
****group theory
****ring theory
****the Peano axioms (PA): the foundation of arithmetic
****Zermelo-Fraenkel set theory (ZF): allows the whole of mathematics to be formalized
**Some algebraic theories
***The theory of equality
***The theory of rings and fields
***The theory of vector spaces
**An example from analysis
***The previous sections gave formal proofs of results from algebra
***in this section, proofs of well-known results from analysis, to convince the reader of the power of the proof system presented in Chapter 1
**Peano arithmetic
**Zermelo-Fraenkel set theory
***with the axioms of ZF one could construct all the tools ordinarily used in mathematics
***In ZF everything is built from ∈ and = (a kind of machine language)
***In some situations additional axioms are needed to prove a theorem
****Axiom of foundation
****Axiom of choice
****The continuum hypothesis
**Incompleteness phenomena
***Russell's paradox
***Berry's paradox
***A problem is decidable if there exists an algorithm that solves it
***Neither PA nor ZF is decidable
***Gödel's incompleteness theorem
****No consistent, recursive theory containing PA is complete
****One can write, in the language of arithmetic, a formula Cons that intuitively expresses that PA is consistent and such that PA ⊬ Cons
***there is no hope of proving that ZF is consistent
***The completeness theorem says that the proof rules given in Chapter 1 suffice to prove everything that is true in general
***The incompleteness theorem says that no recursive set of axioms will suffice to decide every formula written over the vocabulary of ZF
***a concrete example of the incompleteness of PA: Goodstein sequences
***The interest of Gödel's theorem is that it applies to every reasonable theory
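The Goodstein sequence mentioned above is easy to compute, even though PA cannot prove that it always reaches 0. A sketch (function names are mine):

```javascript
// Hereditary base bump: write n in hereditary base b, replace every b by b+1.
function hereditaryBump(n, b) {
  var result = 0, power = 0;
  while (n > 0) {
    var digit = n % b;
    // Exponents are themselves rewritten recursively (hereditary notation).
    if (digit) result += digit * Math.pow(b + 1, hereditaryBump(power, b));
    n = Math.floor(n / b);
    power++;
  }
  return result;
}

// Goodstein sequence: bump the base, subtract 1, repeat.
function goodstein(n, steps) {
  var seq = [n], b = 2;
  while (n > 0 && seq.length < steps) {
    n = hereditaryBump(n, b) - 1;
    b++;
    seq.push(n);
  }
  return seq;
}

goodstein(3, 10); // → [3, 3, 3, 2, 1, 0]
```

Starting from 4 the sequence already grows astronomically before (eventually) descending; termination for every start value is provable in ZF but not in PA.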
**Some decidable theories and structures
***Quantifier elimination
***The theory of dense orders
***The theory of equality
***Tarski's theorem
***Presburger arithmetic
*Intuitionistic logic and Kripke models
***In mathematics, some proofs are constructive and others are not
***reasoning by contradiction
****removed in minimal logic
****replaced by a weaker rule in intuitionistic logic
**Minimal, intuitionistic and classical logic
***Intuitionistic logic is obtained by adding to minimal logic the intuitionistic absurdity rule ⊥~~i~~
***Every formula provable in minimal logic is provable in intuitionistic logic. Likewise, every formula provable in intuitionistic logic is provable in classical logic
***Both converses are false @@LM ≠ LI and LI ≠ LC@@
**Translations between these logics
***Gödel's translation
***Kuroda's translation
***@@Given a formula F, ⊦~~i~~ F (resp. ⊦~~m~~ F) is not decidable@@
**Semantics of propositional calculus
***@@Propositional Kripke models@@
***a formula is provable in intuitionistic logic iff it is true (i.e. iff it is realized in every Kripke model)
***@@Completeness theorem: let Γ, A be formulas. Then Γ ⊦~~i~~ A iff Γ ⊩ A@@
***Other completeness theorems
****Ordered models
****Models with a least element
****Tree models
****Finite models
***@@Propositional logic is decidable@@
**Semantics of first-order logic
***@@Constructivity theorem for intuitionistic logic@@
****⊦~~i~~ A~~1~~ ∨ A~~2~~ iff ⊦~~i~~ A~~1~~ or ⊦~~i~~ A~~2~~
****⊦~~i~~ ∃x A iff there is a term t such that ⊦~~i~~ A[x := t]
**Semantics of minimal logic
**Heyting arithmetic HA
***the intuitionistic version of Peano arithmetic
***Constructivity of HA
*****HA ⊦ A ∨ B iff HA ⊦ A or HA ⊦ B
*****HA ⊦ ∃x A iff there is an n ∈ ℕ such that HA ⊦ A[x := n]
*Sequent calculus
***Sequent calculus is another formal presentation of the notion of proof
***The main difference with natural deduction lies in replacing the elimination rules by left-introduction rules
***This system
****brings out certain mathematical properties more clearly
****is more convenient than natural deduction for automated proof search
****is indispensable for understanding linear logic
**The systems LK and LJ
***The system LK is the sequent-calculus version of classical natural deduction
***A sequent is an expression of the form Γ ⊦ Δ where Γ and Δ are finite multisets of formulas
***@@A sequent is provable in LK (written Γ ⊦~~LK~~ Δ) if it can be obtained by applying the following rules:@@
****Structural rules
****Rules for the logical connectives
****Rules for the quantifiers
****Cut rule
***Sequent calculus admits several equivalent presentations, more or less convenient depending on the goal: automated proof search or simplicity of the theory
***The system LJ corresponds to intuitionistic natural deduction
***@@A sequent is provable in LJ (written Γ ⊦~~LJ~~ Δ) if it can be obtained by applying the following rules:@@
****Structural rules
****Rules for the logical connectives
****Rules for the quantifiers
****Cut rule
**Natural deduction and sequent calculus
***classical (resp. intuitionistic) natural deduction proves exactly the same formulas as the system LK (resp. LJ)
**Cut elimination
***every derivation can be transformed into another one (with the same conclusion) that does not use the cut rule
***@@Cut-elimination theorem (Gentzen's Hauptsatz): let Γ, Δ be formulas. If the sequent Γ ⊦ Δ is derivable in LK (resp. LJ), then there is a normal derivation of it in LK (resp. LJ)@@
***Cut elimination can be seen as a computational process
***Cut elimination / Proofs = Execution of a program / Program
***In natural deduction there is no cut rule, but the notion of cut nevertheless exists
**Consequences of cut elimination
***@@Consistency of the systems LK and LJ@@
***Constructivity of intuitionistic logic
*****if ⊦~~LJ~~ A~~1~~ ∨ A~~2~~, then ⊦~~LJ~~ A~~1~~ or ⊦~~LJ~~ A~~2~~
*****if ⊦~~LJ~~ ∃x A, then there is a term t such that ⊦~~LJ~~ A[x := t]
****From an arbitrary derivation one can therefore build a cut-free derivation, and the next-to-last sequent of that derivation yields the desired result
***Subformula property
*****Let Γ, Δ be formulas. If Γ ⊦~~LK~~ Δ (resp. Γ ⊦~~LJ~~ Δ), then there is a derivation of this sequent in LK (resp. in LJ) in which every formula has the form A[x~~1~~ := t~~1~~, ..., x~~n~~ := t~~n~~], where A is a subformula of one of the formulas of Γ, Δ and the t~~i~~ are terms (not necessarily subterms of the terms appearing in Γ, Δ)
*Higher-order logics
***Objects such as subsets (subgroups, ideals, ...) or functions are not first-order objects
**The system
***Sorts represent the various levels of objects. The symbol o denotes the sort of formulas
***The sort s → s' denotes the set of functions mapping an object of sort s to an object of sort s'
***For example, if w is the sort of natural numbers, the sort w → w denotes the set of functions from naturals to naturals.
***The sort (w → w) → w denotes the set of functionals that take a function (from naturals to naturals) as argument and return a natural number.
***The arrow used for sorts must not be confused with the one used for formulas
***In first-order logic, terms are defined first and then formulas. In higher-order logic, expressions replace both of these notions.
***First-order logic is a special case of higher-order logic
***In higher-order logic, the notion of natural deduction proof is exactly the same as in first-order logic
**Examples of logics and theories
***@@Multisorted first-order logics@@
***Second-order logics
***@@Second-order arithmetic@@
****the second-order version of Peano arithmetic; much more powerful than PA
***The real numbers
****the real numbers can be defined within PA~~2~~
***Simple type theory
****fills the gaps of PA~~2~~ by not limiting the possibilities of quantification
****In particular, the usual construction of the reals, and in fact almost all of mathematics, can be carried out in it
**Extensionality and the axiom of choice
***Formalization of the axiom of choice in a higher-order logic
***Generalization of the completeness theorem of first-order logic
*Automated theorem proving
***Since proof search is undecidable, the best one can hope for is a semi-algorithm: if the formula is provable, the algorithm will find a proof of it; if it is not, the algorithm may fail to terminate.
***Only first-order logic (without equality) is covered here. We therefore do not cover proof search in intuitionistic logic, although it is equally useful, notably for the generation of correct programs.
***@@The elementary unification algorithm@@
****a very inefficient algorithm (exponential)
***a more efficient algorithm (which allows the occurs check to be postponed to the end)
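The elementary algorithm can be sketched as follows. This is a minimal JavaScript version (the term representation and all function names are choices of this sketch, not of the notes); it performs the occurs check eagerly at every variable binding, which is precisely what the more efficient variant postpones to the end:

```javascript
// Syntactic first-order unification (Robinson style), with eager occurs check.
// Terms: {v: "x"} for variables, {f: "f", args: [...]} for function symbols
// (a constant is a function symbol with an empty argument list).

function walk(t, subst) {               // follow variable bindings in subst
  while (t.v !== undefined && subst[t.v]) t = subst[t.v];
  return t;
}

function occurs(x, t, subst) {          // does variable x occur in term t?
  t = walk(t, subst);
  if (t.v !== undefined) return t.v === x;
  return t.args.some(function (a) { return occurs(x, a, subst); });
}

function unify(t1, t2, subst) {         // returns a substitution, or null
  subst = subst || {};
  t1 = walk(t1, subst); t2 = walk(t2, subst);
  if (t1.v !== undefined && t2.v !== undefined && t1.v === t2.v) return subst;
  if (t1.v !== undefined) {
    if (occurs(t1.v, t2, subst)) return null;   // occurs check fails
    subst[t1.v] = t2; return subst;
  }
  if (t2.v !== undefined) return unify(t2, t1, subst);
  if (t1.f !== t2.f || t1.args.length !== t2.args.length) return null; // clash
  for (var i = 0; i < t1.args.length; i++) {
    subst = unify(t1.args[i], t2.args[i], subst);
    if (subst === null) return null;
  }
  return subst;
}
```

For example, unifying f(x, a) with f(b, y) yields the substitution {x := b, y := a}, while x against f(x) fails on the occurs check.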
**The tableau method
**Resolution
***The tableau method searches for a proof of a sequent.
***Resolution proceeds differently: it tries to show that a set of clauses is contradictory
***The main problem with the resolution method is the choice of the clauses on which to perform a resolution or a contraction. The order in which the rules are applied is indeed very important
****This topic has been studied at length. The general idea is to try to produce the strongest clauses as early as possible.
****Elimination of tautologies
****Simplification of clauses
****Restrictions of resolution
@@This book provides useful "Remember" sections, which together constitute the following summary.@@

* Chapter 1 · Atomic Sentences
**In fol,
***Every individual constant must name an (actually existing) object.
***No individual constant can name more than one object.
***An object can have more than one name, or no name at all.
***Every predicate symbol comes with a single, fixed “arity,” a number that tells you how many names it needs to form an atomic sentence.
***Every predicate is interpreted by a determinate property or relation of the same arity as the predicate.
***Atomic sentences are formed by putting a predicate of arity n in front of n names (enclosed in parentheses and separated by commas).
***Atomic sentences are built from the identity predicate, =, using infix notation: the arguments are placed on either side of the predicate.
***The order of the names is crucial in forming atomic sentences.
**In a language with function symbols,
***Complex terms are typically formed by putting a function symbol of arity n in front of n terms (simple or complex).
***Complex terms are used just like names (simple terms) in forming atomic sentences.
***In fol, complex terms are assumed to refer to one and only one object.
* Chapter 2 · The Logic of Atomic Sentences
**An argument is a series of statements in which one, called the conclusion, is meant to be a consequence of the others, called the premises.
**An argument is valid if the conclusion must be true in any circumstance in which the premises are true. We say that the conclusion of a logically valid argument is a logical consequence of its premises.
**An argument is sound if it is valid and the premises are all true.
**A proof of a statement S from premises P~~1~~,...,P~~n~~ is a step-by-step demonstration which shows that S must be true in any circumstances in which the premises P~~1~~,..., P~~n~~ are all true.
**Informal and formal proofs differ in style, not in rigor.
**There are four important principles that hold of the identity relation: (The latter two principles follow from the first two.) 
***= Elim: If b = c, then whatever holds of b holds of c. This is also known as the indiscernibility of identicals.
***= Intro: Sentences of the form b = b are always true (in fol). This is also known as the reflexivity of identity.
***Symmetry of Identity: If b = c, then c = b.
***Transitivity of Identity: If a = b and b = c, then a = c. 
**To demonstrate the invalidity of an argument with premises P~~1~~,...,P~~n~~ and conclusion Q, find a counterexample: a possible circumstance that makes P~~1~~,...,P~~n~~ all true but Q false. Such a counterexample shows that Q is not a consequence of P~~1~~,...,P~~n~~.
* Chapter 3 · The Boolean Connectives
**If P is a sentence of fol, then so is ¬P.
**The sentence ¬P is true if and only if P is not true.
**A sentence that is either atomic or the negation of an atomic sentence is called a literal.
**If P and Q are sentences of fol, then so is P∧Q.
**The sentence P∧Q is true if and only if both P and Q are true.
**If P and Q are sentences of fol, then so is P∨Q.
**The sentence P∨Q is true if and only if P is true or Q is true (or both are true).
**Parentheses must be used whenever ambiguity would result from their omission. In practice, this means that conjunctions and disjunctions must be “wrapped” in parentheses whenever combined by means of some other connective.
**(Double negation and DeMorgan’s Laws) For any sentences P and Q:
***Double negation: ¬¬P ⇔ P
***DeMorgan: ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
***DeMorgan: ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
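Since these laws involve only two atoms, they can be checked exhaustively on the four rows of the joint truth table. A small illustrative sketch (the function name is an invention of this sketch, not the book's):

```javascript
// Verify double negation and both De Morgan laws by brute force:
// two atoms means four rows, i.e. four (P, Q) truth-value combinations.
function checkBooleanLaws() {
  for (const P of [true, false]) {
    for (const Q of [true, false]) {
      if (P !== !!P) return false;                 // ¬¬P ⇔ P
      if (!(P && Q) !== (!P || !Q)) return false;  // ¬(P∧Q) ⇔ ¬P∨¬Q
      if (!(P || Q) !== (!P && !Q)) return false;  // ¬(P∨Q) ⇔ ¬P∧¬Q
    }
  }
  return true;                                     // all four rows agree
}
```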
**In order for an fol sentence to be a good translation of an English sentence, it is sufficient that the two sentences have the same truth values in all possible circumstances, that is, that they have the same truth conditions.
**The English expression and sometimes suggests a temporal ordering; the fol expression ∧ never does.
**The English expressions but, however, yet, nonetheless, and moreover are all stylistic variants of and.
**The English expressions either and both are often used like parentheses to clarify an otherwise ambiguous sentence.
* Chapter 4 · The Logic of Boolean Connectives
**Let S be a sentence of fol built up from atomic sentences by means of truth-functional connectives alone. A truth table for S shows how the truth of S depends on the truth of its atomic parts.
***S is a tautology if and only if every row of the truth table assigns true to S.
***If S is a tautology, then S is a logical truth (that is, is logically necessary).
***Some logical truths are not tautologies.
***S is tt-possible if and only if at least one row of the truth table assigns true to S.
**Let S and S′ be sentences of fol built up from atomic sentences by means of truth-functional connectives alone. To test for tautological equivalence, we construct a joint truth table for the two sentences.
***S and S′ are tautologically equivalent if and only if every row of the joint truth table assigns the same values to S and S′.
***If S and S′ are tautologically equivalent, then they are logically equivalent.
***Some logically equivalent sentences are not tautologically equivalent.
**Let P~~1~~,...,P~~n~~ and Q be sentences of fol built up from atomic sentences by means of truth functional connectives alone. Construct a joint truth table for all of these sentences.
***Q is a tautological consequence of P~~1~~,...,P~~n~~ if and only if every row that assigns T to each of P~~1~~,...,P~~n~~ also assigns T to Q.
***If Q is a tautological consequence of P~~1~~,...,P~~n~~, then Q is also a logical consequence of P~~1~~,...,P~~n~~.
***Some logical consequences are not tautological consequences.
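The joint-truth-table test can be sketched directly in code. In this sketch (representation choices are assumptions, not the book's notation), a sentence is a function from an assignment of truth values to a boolean, and each row of the joint truth table is one such assignment:

```javascript
// Q is a tautological consequence of P1,...,Pn iff every row (assignment of
// truth values to the atoms) that makes all premises true also makes Q true.
function isTautologicalConsequence(atoms, premises, conclusion) {
  const rows = 1 << atoms.length;          // 2^n rows of the joint truth table
  for (let r = 0; r < rows; r++) {
    const v = {};                          // build the assignment for row r
    atoms.forEach((a, i) => { v[a] = Boolean(r & (1 << i)); });
    const premisesTrue = premises.every(p => p(v));
    if (premisesTrue && !conclusion(v)) return false;  // counterexample row
  }
  return true;
}
```

For instance, modus ponens passes the test, while the fallacy of affirming the consequent (from A → B and B, infer A) fails it.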
**Substitution of equivalents: If P and Q are logically equivalent: P⇔Q then the results of substituting one for the other in the context of a larger sentence are also logically equivalent: S(P) ⇔ S(Q)
**A sentence is in negation normal form (NNF) if all occurrences of ¬ apply directly to atomic sentences.
**Any sentence built from atomic sentences using just ∧, ∨, and ¬ can be put into negation normal form by repeated application of the DeMorgan laws and double negation.
**Sentences can often be further simplified using the principles of associativity, commutativity, and idempotence.
**(The distributive laws) For any sentences P, Q, and R:
***Distribution of ∧ over ∨: P∧(Q∨R) ⇔ (P∧Q)∨(P∧R)
***Distribution of ∨ over ∧: P∨(Q∧R) ⇔ (P∨Q)∧(P∨R)
**A sentence is in disjunctive normal form (DNF) if it is a disjunction of one or more conjunctions of one or more literals.
**A sentence is in conjunctive normal form (CNF) if it is a conjunction of one or more disjunctions of one or more literals.
**Distribution of ∧ over ∨ allows you to transform any sentence in negation normal form into disjunctive normal form.
**Distribution of ∨ over ∧ allows you to transform any sentence in negation normal form into conjunctive normal form.
**Some sentences are in both CNF and DNF.
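The distribution step from NNF to DNF can be sketched as follows. The AST encoding ({lit}, {and}, {or}) is an assumption of this sketch, and the input is assumed to be in negation normal form already:

```javascript
// Convert an NNF sentence to DNF by distributing ∧ over ∨.
// A DNF is represented as an array of conjunctions, each an array of literals.
function toDNF(s) {
  if (s.or) {                         // disjunction: union of the DNFs
    return s.or.flatMap(toDNF);
  }
  if (s.and) {                        // conjunction: distribute pairwise
    return s.and.map(toDNF).reduce((acc, dnf) =>
      acc.flatMap(c1 => dnf.map(c2 => c1.concat(c2))),
      [[]]);
  }
  return [[s.lit]];                   // a literal is a one-literal conjunction
}
```

For example, P ∧ (Q ∨ R) becomes (P ∧ Q) ∨ (P ∧ R), i.e. [["P","Q"],["P","R"]]. The dual transformation (distributing ∨ over ∧) yields CNF in the same way.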
* Chapter 5 · Methods of Proof for Boolean Logic
**In giving an informal proof from some premises, if Q is already known to be a logical consequence of sentences P~~1~~,...,P~~n~~ and each of P~~1~~,...,P~~n~~ has been proven from the premises, then you may assert Q in your proof.
**Each step in an informal proof should be significant but easily understood.
**Whether a step is significant or easily understood depends on the audience to whom it is addressed.
**The following are valid patterns of inference that generally go unmentioned in informal proofs:
***From P ∧ Q, infer P.
***From P and Q, infer P∧Q.
***From P, infer P∨Q.
**Proof by cases: To prove S from P~~1~~ ∨ ... ∨ P~~n~~ using this method, prove S from each of P~~1~~,...,P~~n~~.
**Proof by contradiction: To prove ¬S using this method, assume S and prove a contradiction ⊥.
**A proof of a contradiction ⊥ from premises P~~1~~,...,P~~n~~ (without additional assumptions) shows that the premises are inconsistent. An argument with inconsistent premises is always valid, but more importantly, always unsound.
* Chapter 6 · Formal Proofs and Boolean Logic
**In justifying a step of a subproof, you may cite any earlier step contained in the main proof, or in any subproof whose assumption is still in force. You may never cite individual steps inside a subproof that has already ended.
**In assessing the validity of an argument, use the following method:
***Understand what the sentences are saying.
***Decide whether you think the conclusion follows from the premises.
***If you think it does not follow, or are not sure, try to find a counterexample.
***If you think it does follow, try to give an informal proof.
***If a formal proof is called for, use the informal proof to guide you in finding one.
***In giving consequence proofs, both formal and informal, don’t forget the tactic of working backwards.
***In working backwards, though, always check that your intermediate goals are consequences of the available information.
**A proof without any premises shows that its conclusion is a logical truth.
* Chapter 7 · Conditionals
**If P and Q are sentences of fol, then so is P → Q.
**The sentence P → Q is false in only one case: if the antecedent P is true and the consequent Q is false. Otherwise, it is true.
**The following English constructions are all translated P → Q: If P then Q; Q if P; P only if Q; and Provided P, Q.
**Unless P, Q and Q unless P are translated ¬P → Q.
**Q is a logical consequence of P~~1~~,...,P~~n~~ if and only if the sentence (P~~1~~ ∧ ... ∧ P~~n~~) → Q is a logical truth.
**If P and Q are sentences of fol, then so is P ↔ Q.
**The sentence P ↔ Q is true if and only if P and Q have the same truth value.
**If the assertion of a sentence carries with it a suggestion that could be cancelled (without contradiction) by further elaboration by the speaker, then the suggestion is a conversational implicature, not part of the content of the original claim.
**A set of connectives is truth-functionally complete if the connectives allow us to express every truth function.
**Various sets of connectives, including the Boolean connectives, are truth-functionally complete.
* Chapter 8 · The Logic of Conditionals
**Let P and Q be any sentences of fol.
***Modus ponens: From P → Q and P, infer Q.
***Biconditional elimination: From P and either P ↔ Q or Q ↔ P, infer Q.
***Contraposition: P → Q ⇔ ¬Q → ¬P 
**The method of conditional proof: To prove P → Q, assume P and prove Q.
**To prove a number of biconditionals, try to arrange them into a cycle of conditionals.
* Chapter 9 · Introduction to Quantification
**The language fol has an infinite number of variables, any of t, u, v, w, x, y, and z, with or without numerical subscripts.
**Variables can occur in atomic wffs in any position normally occupied by a name.
**Complex wffs are built from atomic wffs by means of truth-functional connectives and quantifiers in accord with the following rules:
***If P is a wff, so is ¬P.
***If P~~1~~,...,P~~n~~ are wffs, so is (P~~1~~ ∧ ... ∧ P~~n~~).
***If P~~1~~,...,P~~n~~ are wffs, so is (P~~1~~ ∨ ... ∨ P~~n~~).
***If P and Q are wffs, so is (P → Q).
***If P and Q are wffs, so is (P ↔ Q).
***If P is a wff and x is a variable, then ∀x P and ∃x P are wffs.
**When you append either quantifier ∀x or ∃x to a wff P, we say that the quantifier binds all the free occurrences of x in P.
**A sentence is a wff in which no variables occur free (unbound).
**Quantified sentences make claims about some intended domain of discourse.
**A sentence of the form ∀xS(x) is true if and only if the wff S(x) is satisfied by every object in the domain of discourse.
**A sentence of the form ∃xS(x) is true if and only if the wff S(x) is satisfied by some object in the domain of discourse.
**The four Aristotelian forms are translated as follows:
***All P’s are Q’s. &nbsp;&nbsp; ∀x (P(x) → Q(x))
***Some P’s are Q’s. &nbsp;&nbsp; ∃x (P(x) ∧ Q(x))
***No P’s are Q’s. &nbsp;&nbsp; ∀x (P(x) → ¬Q(x))
***Some P’s are not Q’s. &nbsp;&nbsp; ∃x (P(x) ∧ ¬Q(x))
**Translations of complex quantified noun phrases frequently employ conjunctions of atomic predicates.
**The order of an English sentence may not correspond to the order of its fol translation.
**All P’s are Q’s does not imply, though it may conversationally suggest, that there are some P’s.
**Some P’s are Q’s does not imply, though it may conversationally suggest, that not all P’s are Q’s.
* Chapter 10 · The Logic of Quantifiers
**Use the truth-functional form algorithm to determine the truth-functional form of a sentence or argument containing quantifiers.
**The truth-functional form of a sentence shows how the sentence is built up from atomic and quantified sentences using truth-functional connectives.
**A quantified sentence is a tautology if and only if its truth-functional form is a tautology.
**Every tautology is a logical truth, but among quantified sentences there are many logical truths that are not tautologies.
**Similarly, there are many logically valid arguments of fol that are not tautologically valid.
**A sentence of fol is a first-order validity if it is a logical truth when you ignore the meanings of the names, function symbols, and predicates other than the identity symbol.
**A sentence S is a first-order consequence of premises P~~1~~,...,P~~n~~ if it a logical consequence of these premises when you ignore the meanings of the names, function symbols, and predicates other than identity.
**The Replacement Method is useful for determining whether a sentence is a first-order validity and whether the conclusion of an argument is a first-order consequence of the premises.
**All tautologies are first-order validities; all first-order validities are logical truths. Similarly for consequence.
**(DeMorgan laws for quantifiers) For any wff P(x):
***¬∀x P(x) ⇔ ∃x ¬P(x)
***¬∃x P(x) ⇔ ∀x ¬P(x)
**(Pushing quantifiers past ∧ and ∨) For any wffs P(x) and Q(x):
***∀x (P(x)∧Q(x)) ⇔ ∀x P(x) ∧ ∀x Q(x)
***∃x (P(x)∨Q(x)) ⇔ ∃x P(x) ∨ ∃x Q(x)
**(Null quantification) For any wff P in which x is not free:
***∀x P ⇔ P
***∃x P ⇔ P
***∀x (P ∨ Q(x)) ⇔ P ∨ ∀x Q(x)
***∃x (P ∧ Q(x)) ⇔ P ∧ ∃x Q(x)
**(Replacing bound variables) For any wff P(x) and variable y that does not occur in P(x):
***∀x P(x) ⇔ ∀y P(y)
***∃x P(x) ⇔ ∃y P(y)
* Chapter 11 · Multiple Quantifiers
**When evaluating a sentence with multiple quantifiers, don’t fall into the trap of thinking that distinct variables range over distinct objects. In fact, the sentence ∀x∀yP(x,y) logically implies ∀xP(x,x), and the sentence ∃x P(x, x) logically implies ∃x ∃y P(x, y)!
**When you are dealing with mixed quantifiers, the order is very important. ∀x ∃y R(x, y) is not logically equivalent to ∃y ∀x R(x, y).
**In translating from English to fol, the goal is to get a sentence that has the same meaning as the original. This sometimes requires changes in the surface form of the sentence.
**An important source of ambiguity in English stems from the order in which quantifiers are interpreted. To translate such a sentence into fol, you must know which order the speaker of the sentence had in mind. This can often be determined by looking at the context in which the sentence was used.
**Anything you can express using an n-ary function symbol can also be expressed using an n + 1-ary relation symbol, plus the identity predicate, but at a cost in terms of the complexity of the sentences used.
**A sentence is in prenex form if any quantifiers contained in it are out in front. Any sentence is logically equivalent to one in prenex form.
* Chapter 12 · Methods of Proof for Quantifiers
**Universal instantiation: From ∀x S(x), infer S(c), so long as c denotes an object in the domain of discourse.
**Existential generalization: From S(c), infer ∃x S(x), so long as c denotes an object in the domain of discourse.
**Let S(x), P(x), and Q(x) be wffs.
***Existential Instantiation: If you have proven ∃xS(x) then you may choose a new constant symbol c to stand for any object satisfying S(x) and so you may assume S(c).
***General Conditional Proof: If you want to prove ∀x [P(x) → Q(x)] then you may choose a new constant symbol c, assume P(c), and prove Q(c), making sure that Q does not contain any names introduced by existential instantiation after the assumption of P(c).
***Universal Generalization: If you want to prove ∀x S(x) then you may choose a new constant symbol c and prove S(c), making sure that S(c) does not contain any names introduced by existential instantiation after the introduction of c.
* Chapter 13 · Formal Proofs and Quantifiers
**The formal rule of ∀ Intro corresponds to the informal method of general conditional proof, including the special case of universal generalization.
**The formal rule of ∃ Elim corresponds to the informal method of existential instantiation.
**Always be clear about the meaning of the sentences you are using.
**A good strategy is to find an informal proof and then try to formalize it.
**Working backwards can be very useful in proving universal claims, especially those of the form ∀x (P(x) → Q(x)).
**Working backwards is not useful in proving an existential claim ∃x S(x) unless you can think of a particular instance S(c) of the claim that follows from the premises.
**If you get stuck, consider using proof by contradiction.
* Chapter 14 · More about Quantification
**The notations ∃≥n, ∃≤n, and ∃!n are abbreviations for complex fol expressions meaning “there are at least/at most/exactly n things such that ....”
**To prove ∃!nx P(x), prove two things:
***that there are at least n objects satisfying P(x), and
***that there are at most n objects satisfying P(x).
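For instance, one standard way of spelling out the abbreviation for n = 2 (several equivalent formulations exist) is:

```latex
\exists^{!2} x\, P(x) \;\equiv\;
\exists x\, \exists y\, \bigl(\, P(x) \land P(y) \land x \neq y \;\land\;
\forall z\, \bigl( P(z) \rightarrow ( z = x \lor z = y ) \bigr) \,\bigr)
```

The first three conjuncts say there are at least two objects satisfying P(x); the final universal conjunct says there are at most two.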
**The Russellian analysis of The A is a B is the fol translation of There is exactly one A and it is a B.
**The Russellian analysis of Both A’s are B’s is the fol translation of There are exactly two A’s and each of them is a B.
**The Russellian analysis of Neither A is a B is the fol translation of There are exactly two A’s and each of them is not a B.
**The competing Strawsonian analysis of these determiners treats them as having presuppositions, and so as only making claims when these presuppositions are met. On Strawson’s analysis, these determiners cannot be adequately translated in fol.
**Given any English determiner Q, we can add a corresponding quantifier Q to fol. In this extended language, the sentence Q x (A, B) is true in a world just in case Q of the objects that satisfy A(x) also satisfy B(x).
**There are three properties of determiners that are critical to their logical behavior: conservativity, monotonicity, and persistence.
**All English determiners are conservative (with the exception of only, which is not usually considered a determiner).
**Monotonicity has to do with the behavior of the second argument of the determiner. All basic determiners in English are monotone increasing or decreasing, with most being monotone increasing.
**Persistence has to do with the behavior of the first argument of the determiner. It is less common than monotonicity.
* Chapter 15 · First-order Set Theory
**Naive set theory has the Axiom of Extensionality and the Axiom Scheme of Comprehension. Extensionality says that sets with the same members are identical. Comprehension asserts that every first-order formula determines a set.
**Let a and b be sets.
***a ⊆ b iff every element of a is an element of b.
***a = b iff a ⊆ b and b ⊆ a.
**Let b and c be sets.
***x ∈ b ∩ c if and only if x ∈ b ∧ x ∈ c
***x ∈ b ∪ c if and only if x ∈ b ∨ x ∈ c
**The powerset of a set b is the set of all its subsets: ℘b = {a | a ⊆ b}
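For finite sets, the powerset can be computed directly from this definition. A small sketch, with sets modelled as arrays (an assumption of this sketch): each element is either in or out of a subset, so a set with n elements has 2^n subsets.

```javascript
// ℘b = {a | a ⊆ b} for a finite set b: extend every subset built so far
// with and without the next element.
function powerset(b) {
  return b.reduce(
    (subsets, x) => subsets.concat(subsets.map(s => s.concat([x]))),
    [[]]);                                  // start from the empty set
}
```

For example, powerset([1, 2]) produces the four subsets [], [1], [2], and [1, 2].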
**Russell found a paradox in naive set theory by considering Z = {x | x ∉ x} and showing that the assumption Z ∈ Z and its negation each entails the other.
**Modern set theory replaces the naive concept of set, which is inconsistent, with a concept of set as a collection that is not too large.
**These collections are seen as arising in stages, where a set arises only after all its members are present.
**The axiom of comprehension of set theory is replaced by the Axiom of Separation and some of the intuitively correct consequences of the axiom of comprehension.
**Modern set theory also contains the Axiom of Regularity, which is justified on the basis of (2).
**All the propositions stated in this chapter—with the exception of Propositions 1 and 14—are theorems of zfc.
* Chapter 16 · Mathematical Induction
**An inductive definition consists of
***a base clause, which specifies the basic elements of the defined set,
***one or more inductive clauses, which tell us how to generate additional elements, and
***a final clause, which tells us that all the elements are either basic or generated by the inductive clauses.
**Given an inductive definition of a set, an inductive proof requires
***a basis step, which shows that the property holds of the basic elements, and
***an inductive step, which shows that if the property holds of some elements, then it holds of any elements generated from them by the inductive clauses.
**The assumption that begins the inductive step is called the inductive hypothesis.
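A classic instance of this pattern (a standard example, not taken from the book) is the proof that 0 + 1 + ... + n = n(n+1)/2, by induction on the natural numbers with base element 0 and inductive clause n ↦ n + 1:

```latex
\textbf{Basis } (n = 0):\quad \sum_{i=0}^{0} i \;=\; 0 \;=\; \frac{0\,(0+1)}{2}.
\qquad
\textbf{Inductive step:}\quad
\sum_{i=0}^{n+1} i
\;=\; \underbrace{\sum_{i=0}^{n} i}_{=\;n(n+1)/2 \text{ (inductive hypothesis)}} +\; (n+1)
\;=\; \frac{n(n+1) + 2(n+1)}{2} \;=\; \frac{(n+1)(n+2)}{2}.
```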
* Chapter 17 · Advanced Topics in Propositional Logic 
**A truth assignment is simply a function from the atomic sentences into the set {true,false}. It models a single row of a complete truth table for the language.
**The Completeness Theorem is proven by showing that every formally consistent set T of sentences is tt-satisfiable. This is done in two steps.
**The first step is to show the result for sets T which are also formally complete.
**The second step is to show how to extend any formally consistent set to one that is both formally consistent and formally complete.
**A Horn sentence is a propositional sentence in CNF such that every disjunction of literals in it contains at most one positive literal.
**The satisfaction algorithm for Horn sentences gives an efficient algorithm to tell whether a Horn sentence is tt-satisfiable.
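The algorithm can be sketched as a marking procedure (the clause encoding below is an assumption of this sketch): start with no atom marked true, and repeatedly "fire" any clause whose negated atoms are all marked; a clause with no positive literal that fires shows the sentence is tt-unsatisfiable.

```javascript
// Satisfiability of a Horn sentence by marking. A clause is written as
// {neg: [...], pos: atom-or-null}; e.g. {neg:["Q","R"], pos:"P"} encodes
// ¬Q ∨ ¬R ∨ P, read as the conditional (Q ∧ R) → P.
function hornSatisfiable(clauses) {
  const marked = new Set();           // atoms forced to be true
  let changed = true;
  while (changed) {
    changed = false;
    for (const c of clauses) {
      if (!c.neg.every(a => marked.has(a))) continue;  // body not yet forced
      if (c.pos === null) return false;   // purely negative clause fires: unsat
      if (!marked.has(c.pos)) { marked.add(c.pos); changed = true; }
    }
  }
  return true;       // assigning true to exactly the marked atoms satisfies all clauses
}
```

For example, the clauses P, P → Q, and ¬Q are jointly tt-unsatisfiable, while P together with P → Q alone is satisfiable.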
**Every set of propositional sentences can be expressed as a set of clauses.
**The resolution method is an important method for determining whether a set of clauses is tt-satisfiable. The key notion is that of a resolvent for a set of clauses.
**A resolvent of clauses C~~1~~ and C~~2~~ is a clause R obtained as follows: there is an atomic sentence in one of the clauses whose negation is in the other clause, and R is the set of all the remaining literals from the two clauses.
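Computing a resolvent can be sketched as follows (in this sketch, a literal is a string and negation is written as a "~" prefix; both are conventions of the sketch, not the book's notation):

```javascript
// Find a complementary pair of literals between the two clauses and collect
// all remaining literals; returns null if no resolvent exists.
function resolvent(c1, c2) {
  for (const lit of c1) {
    const comp = lit.startsWith("~") ? lit.slice(1) : "~" + lit;
    if (c2.includes(comp)) {
      const rest = c1.filter(l => l !== lit)
        .concat(c2.filter(l => l !== comp));
      return [...new Set(rest)];     // a clause is a set of literals
    }
  }
  return null;                       // no complementary pair
}
```

Resolving {P, Q} with {¬P, R} yields {Q, R}; resolving {P} with {¬P} yields the empty clause, which is how the resolution method detects that a set of clauses is tt-unsatisfiable.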
* Chapter 18 · Advanced Topics in FOL
**First-order structures are mathematical models of the domains about which we make claims using fol.
**Variable assignments are functions mapping variables into the domain of some first-order structure.
**A variable assignment satisfies a wff in a structure if, intuitively, the objects assigned to the variables make the wff true in the structure.
**Using the notion of satisfaction, we can define what it means for a sentence to be true in a structure.
**Finally, once we have the notion of truth in a structure, we can model the notions of logical truth, and logical consequence. 
**(Simplest case of Skolemization) Given a sentence of the form ∀x ∃y P(x, y) in some first-order language, we Skolemize it by choosing a function symbol f not in the language and writing ∀x P(x, f(x)). Every world that makes the Skolemization true also makes the original sentence true. Every world that makes the original sentence true can be turned into one that makes the Skolemization true by interpreting the function symbol f by a function f which picks out, for any object b in the domain, some object c such that they satisfy the wff P(x, y).
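Schematically (a standard illustration, with the concrete predicate and function names chosen here for the example):

```latex
\forall x\, \exists y\, P(x, y)
\quad\rightsquigarrow\quad
\forall x\, P(x, f(x))
```

For instance, "everyone has a parent", ∀x ∃y ParentOf(y, x), Skolemizes to ∀x ParentOf(p(x), x), where the Skolem function p picks out a parent of each individual x.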
* Chapter 19 · Completeness and Incompleteness
**An immediate consequence of the Completeness Theorem is the first-order Compactness Theorem
**Gödel’s Incompleteness Theorem showed that this apparent progress was misleading, and that in fact the goal of Hilbert’s Program could never be accomplished.
[[EDLA book collection|]]
[[EDLA document collection|]]
All classes are inter-dependent; TPTPTParser needs all the other classes.
[[Navigating this site]]
[[Getting started]]
[[The Big Picture]]
[[A bit of history]]
[[First-class resources]]
[[Dependent type]]
[[HoTT|Homotopy Type Theory]]
<<tag Algorithms>>
<<tag Techniques>>
<<tag Uses>>
[[Reading roadmap]]
[[Project Status Report]]

[[Quotes worth remembering]]
|Author|Eric Shulman|
|Description|'tag matching' with full boolean expressions (AND, OR, NOT, and nested parentheses)|
> see [[MatchTagsPluginInfo]]
2011.10.28 2.0.6 added .matchTags CSS class to popups to enable custom styling via StyleSheet
2011.01.23 2.0.5 fix core tweak for TW262+: adjust code in config.filters['tag'] instead of filterTiddlers()
2010.08.11 2.0.4 in getMatchingTiddlers(), fixed sorting for descending order (e.g, "-created")
| please see [[MatchTagsPluginInfo]] for additional revision details |
2008.02.28 1.0.0 initial release
version.extensions.MatchTagsPlugin= {major: 2, minor: 0, revision: 6, date: new Date(2011,10,28)};

// store.getMatchingTiddlers() processes boolean expressions for tag matching
//    sortfield (optional) sets sort order for tiddlers - default=title
//    tiddlers (optional) use alternative set of tiddlers (instead of current store)
TiddlyWiki.prototype.getMatchingTiddlers = function(tagexpr,sortfield,tiddlers) {

	var debug=config.options.chkDebug; // abbreviation
	var cmm=config.macros.matchTags; // abbreviation
	var r=[]; // results are an array of tiddlers
	var tids=tiddlers||store.getTiddlers();
	if (tids && sortfield) tids=store.sortTiddlers(tids,sortfield);
	if (debug) displayMessage(cmm.msg1.format([tids.length]));

	// try simple lookup to quickly find single tags or tags that
	// contain boolean operators as literals, e.g. "foo and bar"
	for (var t=0; t<tids.length; t++)
		if (tids[t].isTagged(tagexpr)) r.pushUnique(tids[t]);
	if (r.length) {
		if (debug) displayMessage(cmm.msg4.format([r.length,tagexpr]));
		return r;
	}

	// convert expression into javascript code with regexp tests,
	// so that "tag1 AND ( tag2 OR NOT tag3 )" becomes
	// "/\~tag1\~/.test(...) && ( /\~tag2\~/.test(...) || ! /\~tag3\~/.test(...) )"

	// normalize whitespace, tokenize operators, delimit with "~"
	var c=tagexpr.trim(); // remove leading/trailing spaces
	c = c.replace(/\s+/ig," "); // reduce multiple spaces to single spaces
	c = c.replace(/\(\s?/ig,"~(~"); // open parens
	c = c.replace(/\s?\)/ig,"~)~"); // close parens
	c = c.replace(/(\s|~)?&&(\s|~)?/ig,"~&&~"); // &&
	c = c.replace(/(\s|~)AND(\s|~)/ig,"~&&~"); // AND
	c = c.replace(/(\s|~)?\|\|(\s|~)?/ig,"~||~"); // ||
	c = c.replace(/(\s|~)OR(\s|~)/ig,"~||~"); // OR
	c = c.replace(/(\s|~)?!(\s|~)?/ig,"~!~"); // !
	c = c.replace(/(^|~|\s)NOT(\s|~)/ig,"~!~"); // NOT
	c = c.replace(/(^|~|\s)NOT~\(/ig,"~!~("); // NOT(
	// change tag terms to regexp tests
	var terms=c.split("~"); for (var i=0; i<terms.length; i++) { var t=terms[i];
		if (/(&&)|(\|\|)|[!\(\)]/.test(t) || t=="") continue; // skip operators/parens/spaces
		if (t==config.macros.matchTags.untaggedKeyword)
			terms[i]="tiddlertags=='~~'"; // 'untagged' tiddlers
		else
			terms[i]="/\\~"+t+"\\~/.test(tiddlertags)"; // tag term becomes a regexp test
	}
	c=terms.join(" ");
	if (debug) { displayMessage(cmm.msg2.format([tagexpr])); displayMessage(cmm.msg3.format([c])); }

	// scan tiddlers for matches
	for (var t=0; t<tids.length; t++) {
		// assemble tags from tiddler into string "~tag1~tag2~tag3~"
		var tiddlertags = "~"+tids[t].tags.join("~")+"~";
		try { if(eval(c)) r.push(tids[t]); } // test tags
		catch(e) { break; } // error in test: skip remaining tiddlers
	}
	if (debug) displayMessage(cmm.msg4.format([r.length,tagexpr]));
	return r;
}
config.macros.matchTags = {
	msg1: "scanning %0 input tiddlers",
	msg2: "looking for '%0'",
	msg3: "using expression: '%0'",
	msg4: "found %0 tiddlers matching '%1'",
	noMatch: "no matching tiddlers",
	untaggedKeyword: "-",
	untaggedLabel: "no tags",
	untaggedPrompt: "show tiddlers with no tags",
	defTiddler: "MatchingTiddlers",
	defTags: "",
	defFormat: "[[%0]]",
	defSeparator: "\n",
	reportHeading: "Found %0 tiddlers tagged with: '{{{%1}}}'\n----\n",
	handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		var mode=params[0]?params[0].toLowerCase():'';
		if (mode=="inline")
		if (mode=="report" || mode=="panel") {
			var target=params.shift()||this.defTiddler;
		if (mode=="popup") {
			if (params[0]&&params[0].substr(0,6)=="label:") var label=params.shift().substr(6);
			if (params[0]&&params[0].substr(0,7)=="prompt:") var prompt=params.shift().substr(7);
		} else {
			var fmt=(params.shift()||this.defFormat).unescapeLineBreaks();
			var sep=(params.shift()||this.defSeparator).unescapeLineBreaks();
		var sortBy="+title";
		if (params[0]&&params[0].substr(0,5)=="sort:") sortBy=params.shift().substr(5);
		var expr = params.join(" ");
		if (mode!="panel" && (!expr||!expr.trim().length)) return;
		if (expr==this.untaggedKeyword)
			{ var label=this.untaggedLabel; var prompt=this.untaggedPrompt };
		switch (mode) {
			case "popup": this.createPopup(place,label,expr,prompt,sortBy); break;
			case "panel": this.createPanel(place,expr,fmt,sep,sortBy,target); break;
			case "report": this.createReport(target,this.defTags,expr,fmt,sep,sortBy); break;
			case "inline": default: this.createInline(place,expr,fmt,sep,sortBy); break;
	formatList: function(tids,fmt,sep) {
		var out=[];
		for (var i=0; i<tids.length; i++) { var t=tids[i];
			var title=t.title;
			var who=t.modifier;
			var when=t.modified.toLocaleString();
			var text=t.text;
			var first=t.text.split("\n")[0];
			var desc=store.getTiddlerSlice(t.title,"description");
			var tags=t.tags.length?'[['+t.tags.join(']] [[')+']]':'';
		return out.join(sep);
	createInline: function(place,expr,fmt,sep,sortBy) {
	createPopup: function(place,label,expr,prompt,sortBy) {
		var btn=createTiddlyButton(place,
			function(ev){ return config.macros.matchTags.showPopup(this,ev||window.event); });
	showPopup: function(here,ev) {
		var p=Popup.create(here,null,"matchTags popup"); if (!p) return false;
		var tids=store.getMatchingTiddlers(here.getAttribute("expr"));
		var list=[]; for (var t=0; t<tids.length; t++) list.push(tids[t].title);
		if (!list.length) createTiddlyText(p,this.noMatch);
		else {
			var b=createTiddlyButton(createTiddlyElement(p,"li"),
				function() {
					var list=this.getAttribute("list").readBracketedList();
			b.setAttribute("list","[["+list.join("]] [[")+"]]");
		var out=this.formatList(tids," &nbsp;[[%0]]&nbsp; ","\n"); wikify(out,p);
		if(ev.stopPropagation) ev.stopPropagation();
		return false;
	createReport: function(target,tags,expr,fmt,sep,sortBy) {
		var tids=store.sortTiddlers(store.getMatchingTiddlers(expr),sortBy);
		if (!tids.length) { displayMessage('no matches for: '+expr); return false; }
		var msg=config.messages.overwriteWarning.format([target]);
		if (store.tiddlerExists(target) && !confirm(msg)) return false;
		var out=this.reportHeading.format([tids.length,expr])
		store.saveTiddler(target,target,out,config.options.txtUserName,new Date(),tags,{});
		story.closeTiddler(target); story.displayTiddler(null,target);
	createPanel: function(place,expr,fmt,sep,sortBy,tid) {
		var s=createTiddlyElement(place,"span"); s.innerHTML=store.getTiddlerText("MatchTagsPlugin##html");
		var f=s.getElementsByTagName("form")[0];
		f.expr.value=expr; f.fmt.value=fmt; f.sep.value=sep.escapeLineBreaks();
		f.tid.value=tid; f.tags.value=this.defTags;
<form style='display:inline;white-space:nowrap'>
<input type='text'    name='expr' style='width:50%' title='tag expression'><!--
--><input type='text'    name='fmt'  style='width:10%' title='list item format'><!--
--><input type='text'    name='sep'  style='width:5%'  title='list item separator'><!--
--><input type='text'    name='tid'  style='width:12%' title='target tiddler title'><!--
--><input type='text'    name='tags' style='width:10%' title='target tiddler tags'><!--
--><input type='button'  name='go'   style='width:8%'  value='go' onclick="
	var expr=this.form.expr.value;
	if (!expr.length) { alert('Enter a boolean tag expression'); return false; }
	var fmt=this.form.fmt.value;
	if (!fmt.length) { alert('Enter the list item output format'); return false; }
	var sep=this.form.sep.value.unescapeLineBreaks();
	var tid=this.form.tid.value;
	if (!tid.length) { alert('Enter a target tiddler title'); return false; }
	var tags=this.form.tags.value;
	return false;">
// SHADOW TIDDLER for displaying default panel input form
config.shadowTiddlers.MatchTags="<<matchTags panel>>";
// TWEAK core filterTiddlers() or config.filters['tag'] (in TW262+)
// to use getMatchingTiddlers instead of getTaggedTiddlers
// for enhanced boolean matching in [tag[...]] syntax
var TW262=config.filters && config.filters['tag']; // detect TW262+
var fname=TW262?"config.filters['tag']":"TiddlyWiki.prototype.filterTiddlers";
var code=eval(fname).toString().replace(/getTaggedTiddlers/g,'getMatchingTiddlers');
// REDEFINE core handler for enhanced boolean matching in tag:"..." paramifier
// use filterTiddlers() instead of getTaggedTiddlers() to get list of tiddlers.
config.paramifiers.tag = {
	onstart: function(v) {
		var tagged = store.filterTiddlers("[tag["+v+"]]");

|''Description''|This plugin enables Tiddlywiki to use [[MathJax|]] to render mathematical expressions.|
|''Author''|[[Lizao Li|]]|
|''Created''|Jan 23, 2011|
|''Last updated'' |Apr 18, 2011|
|''Keywords''|MathJax, Tiddlywiki, MathJaxTiddlyPlugin|
Backstage (top-right corner) -> import -> use the URL of this page -> choose "MathJaxTiddlyPlugin" -> import
* <<option chkMathJaxUseCustomizedSettings>> Enable the dollar sign for inline math.
* Configuration file to use <<option txtMathJaxConfigurationFile>>. 
** There are 4 possible choices: @@TeX-AMS-MML_HTMLorMML@@ (default), @@TeX-AMS_HTML@@, @@MML_HTMLorMML@@, @@Accessible@@
** You can find details about these settings from [[MathJax website|]].
* Use MathJax to render tiddlers with tag <<option txtMathJaxTag>>.
''A note for authors:'' The settings above apply to readers of the page, and are saved only in each reader's browser cookies. To make the settings persistent, so that all readers see the same behaviour, authors should add the following to the system tiddler [[SystemSettings]]:
chkMathJaxUseCustomizedSettings: false
txtMathJaxConfigurationFile: TeX-AMS-MML_HTMLorMML
txtMathJaxTag: math
More about persistent settings can be found on the [[official website|]]. 
Tag the tiddlers that need math rendering with "math" or with the name set in the configuration.
*Inline math test
**syntax 1: $\int_1^4 \sqrt{1+\sin(x)}\,dx$ 
**syntax 2: \(\alpha^{\beta^\gamma}\)
*Display-style math test: 
**syntax 1:$$\frac{\partial v}{\partial t}+(v\cdot\nabla)v=-\nabla p+v\Delta v+f(x,t)$$
**syntax 2:\[\int_{\mathbb{R}}\frac{f(y)}{|x-y|^{3/2}}\,dy\]
The result should look like this:
*Inline math test
**syntax 1: $\int_1^4 \sqrt{1+\sin(x)}\,dx$ 
**syntax 2: \(\alpha^{\beta^\gamma}\)
*Display-style math test: 
**syntax 1:$$\frac{\partial v}{\partial t}+(v\cdot\nabla)v=-\nabla p+v\Delta v+f(x,t)$$
**syntax 2:\[\int_{\mathbb{R}}\frac{f(y)}{|x-y|^{3/2}}\,dy\]
* There is also a more complicated [[demo|MathJaxTiddlyPluginDemo]]
!Revision History
* v1.50 (2011-04-18) - Bugs fixed. Thanks to James Montaldi for pointing out that wiki text inside math blocks was being wrongly rendered. The fix was inspired by [[Bob McElrath's jsMath|]] and [[Martin Budden's FormatterPlugin|]].
* v1.15 (2011-03-31) - Added configuration options. Now renders math according to the tag (thanks to Patrick Ion for the suggestions).
* v1.1 (2011-03-17) - Updated after MathJax was updated to 1.1. MathJax is now loaded from the CDN instead of the local server.
* v1.0 (2011-01-23) - Created.
if(!version.extensions.MathJaxPlugin) { // [ensure that the plugin is only installed once]
    version.extensions.MathJaxPlugin = { installed: true };

    if(config.options.chkMathJaxUseCustomizedSettings == undefined) config.options.chkMathJaxUseCustomizedSettings = false;
    if(config.options.txtMathJaxConfigurationFile == undefined) config.options.txtMathJaxConfigurationFile = 'TeX-AMS-MML_HTMLorMML';
    if(config.options.txtMathJaxTag == undefined) config.options.txtMathJaxTag = 'math';
    // [hijack refreshTiddler() to render mathjax]
    (function() {  
	// [backup the original function]
	Story.prototype.oldrefreshfunction = Story.prototype.refreshTiddler;
	// [override the original function]
	Story.prototype.refreshTiddler = function(title,template,force,customFields,defaultText) {
	    if(window.MathJax) {
		for(var i=0;i<tiddler.tags.length;i++){
		    if(tiddler.tags[i] == config.options.txtMathJaxTag){

    (function () {
	// [add the mathjax settings to the header]
	var mathjax_config = document.createElement("script");
	mathjax_config.type = "text/x-mathjax-config";    
	var content_mathjax_config = "";
	// [enable the dollar sign for inline math rendering]
	if(config.options.chkMathJaxUseCustomizedSettings == true){
	    content_mathjax_config = content_mathjax_config + 
		"MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ['\\\\(','\\\\)'] ], displayMath: [ ['$$','$$'], ['\\\\[','\\\\]'] ], processEscapes: true }});";
	content_mathjax_config = content_mathjax_config + 'MathJax.Hub.Startup.onload();';
	if (window.opera) {mathjax_config.innerHTML = content_mathjax_config}
	else {mathjax_config.text = content_mathjax_config}

	//# Add the mathjax main script to the header
	var script = document.createElement("script");
	script.type = "text/javascript";
	script.src = ""+config.options.txtMathJaxConfigurationFile;
    config.extensions.MathJaxPlugin = {
    config.MathJax = {};

    // Render matched text as plain text without wikification (adapted from Bob McElrath's jsMath and Martin Budden's FormatterPlugin)
    config.MathJax.OutputPlainText = function(w) {
	var e = document.createElement(this.element);
	var endRegExp = new RegExp(this.termRegExp, "mg");
	endRegExp.lastIndex = w.matchStart+w.matchLength;
	var matched = endRegExp.exec(w.source);
	if(matched) {
	    var txt = w.source.substr(w.matchStart, matched.index+matched[0].length-w.matchStart);
	    if(this.keepdelim) {
		txt = w.source.substr(w.matchStart, matched.index+matched[0].length-w.matchStart);
	    w.nextMatch = endRegExp.lastIndex;

    config.MathJax.formatters = config.formatters;

    // Match the math blocks
	    name: "mathjaxDisplay1",
		match: "\\\$\\\$",
		termRegExp: "\\\$\\\$",
		handler: config.MathJax.OutputPlainText

	    name: "mathjaxDisplay2",
		match: "\\\\\\\[",
		termRegExp: "\\\\\\\]",
		handler: config.MathJax.OutputPlainText

	    name: "mathjaxInline1",
		match: "\\\$", 
		termRegExp: "\\\$",
		handler: config.MathJax.OutputPlainText

	    name: "mathjaxInline2",
		match: "\\\\\\\(",
		termRegExp: "\\\\\\\)",
		handler: config.MathJax.OutputPlainText
    config.parsers.MathJaxFormatter = new Formatter(config.MathJax.formatters);
    config.parsers.MathJaxFormatter.format = 'MathJax';
    config.parsers.MathJaxFormatter.formatTag = config.options.txtMathJaxTag;
}//# end of "install only once"
$A_B$ now renders correctly, so does $A_{B}$.
Similarly, $ABc$, $AB c$, $A Bc$, $aBCd$, $ABC$ work now.
!The Lorenz Equations
\dot{x} & = & \sigma(y-x) \\
\dot{y} & = & \rho x - y - xz \\
\dot{z} & = & -\beta z + xy
!The ~Cauchy-Schwarz Inequality
\[ \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2
\right) \left( \sum_{k=1}^n b_k^2 \right) \]
!A Cross Product Formula
\[\mathbf{V}_1 \times \mathbf{V}_2 =  \left|\begin{array}{ccc}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
\frac{\partial X}{\partial u} &  \frac{\partial Y}{\partial u} & 0 \\
            \frac{\partial X}{\partial v} &  \frac{\partial Y}{\partial v}
& 0
\end{array}\right|  \]
!The probability of getting \(k\) heads when flipping \(n\) coins is:
\[P(E)   = {n \choose k} p^k (1-p)^{ n-k} \]
!An Identity of Ramanujan
\[ \frac{1}{\left(\sqrt{\phi \sqrt{5}}-\phi\right) e^{\frac25 \pi}} =
1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}}
{1+\frac{e^{-8\pi}} {1+\ldots} } } } \]
!A ~Rogers-Ramanujan Identity
\[  1 +  \frac{q^2}{(1-q)}+\frac{q^6}{(1-q)(1-q^2)}+\cdots =
\quad\quad \textrm{for}\quad |q|<1. \]
!Maxwell's Equations
\nabla \times \vec{\mathbf{B}} -\, \frac1c\,\frac{\partial\vec{\mathbf{E}}}{\partial t} & = & \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = & 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\,\frac{\partial\vec{\mathbf{B}}}{\partial t} & = &\vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = & 0
Finally, while display equations look good for a page of samples, the ability to mix math and text
in a paragraph is also important. This expression \(\sqrt{3x-1}+(1+x)^2\) is an example of an
inline equation.  As you see, ~MathJax equations can be used this way as well, without unduly
disturbing the spacing between lines.
[[This|]] --[[(broken link replaced with google cache)|]]-- compiles with old Scala but breaks at runtime.
This was [[fixed|]], but then it no longer compiles.
Navigating this site is a bit tricky and unconventional. Each page is not static: it is created dynamically according to what you have clicked on. Everything you click on opens in front of you, while 'older' items sit further down the page. If you open something that is not of interest to you, you can close it.

You can use the left navigation bar to open major areas, or use the search to find what you are looking for. Underlined phrases are external links, while phrases in bold blue are internal links.
In no particular order.
*University of Edinburgh : [[Mathematical Reasoning Group|]]
*Vienna University of Technology : [[Theory and Logic Group|]]
*Argonne National Laboratory : [[Automated Deduction at Argonne|]]
| TPTP test | without packrat | with packrat | time gain |h
| GRP194+1.p | 0.910 sec | 0.289 sec| -68.24% |
| ARI595=1.p | 0.283 sec | 0.243 sec| -14.13% |
| GEG021=1.p | 0.516 sec | 0.299 sec| -42.05% |
| PUZ133=2.p | 0.822 sec | 0.356 sec| -56.69% |
| GRP633+1.p | 5.413 sec | 1.239 sec| -77.11% |
| ALG027+1.p | 8.238 sec | 3.112 sec| -62.22% |
| ALG196+1.p | 12.252 sec | 4.356 sec| -64.45% |
| COM022+4.p | 2.562 sec | 0.681 sec| -73.42% |
| GEG001+1.p | 14.563 sec | 2.596 sec| -82.17% |
| GEO287+1.p | 10.159 sec | 1.948 sec| -80.82% |
| KRS159+1.p |  4.515 sec | 1.116 sec| -75.28% |
| LCL672+1.020.p | failed* | 4.530 sec| N/A |
| NUM377+1.030.p | 9.064 sec | 2.840 sec| -68.67% |
| PUZ076+1.p | 17.025 sec | 2.985 sec| -82.47% |
| SEU404+2.p |  failed** | 4.117 sec| N/A |
| SYN424+1.p | 18.882 sec | 1.573 sec| -91.67% |

"""* out of memory"""
"""** > 15 min"""

Some lessons learned :
* some TPTP files are really huge (> 40 MB)
* no regression at all after the conversion to packrat
* packrat always performs better
* sometimes parser efficiency does matter (2 tests fail without packrat)

There is only one major drawback with packrat : packrat parsers require O(n) space for memoization.
But this could be solved :
Theory here :
A Java implementation here :
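The memoization that gives packrat parsing its linear time, at the O(n) space cost noted above, can be sketched in a few lines. This is an illustrative JavaScript sketch of the idea, not the Scala library's implementation:

```javascript
// Minimal packrat-style memoization sketch (illustrative only).
// A parser is a function (input, pos) -> {value, next} or null on failure.
// memo wraps a parser so each position is computed at most once, turning
// exponential backtracking into linear time at O(n) space per parser.
// Note: the table is keyed by position only, so each memoized parser
// instance must be used with a single input string.
function memo(parser) {
	const table = new Map();               // position -> cached result
	return (input, pos) => {
		if (!table.has(pos)) table.set(pos, parser(input, pos));
		return table.get(pos);             // repeated calls hit the cache
	};
}

// Example memoized parser over a character stream
const digit = memo((input, pos) =>
	/[0-9]/.test(input[pos]) ? { value: input[pos], next: pos + 1 } : null);
```

Re-parsing the same position (as backtracking alternatives do) then returns the cached result instead of recomputing it.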
Scala’s parsing combinators provide several utility classes for lexical and syntactic analysis. 
These are contained in two sub-packages, one for each kind of analysis:
@@Packrat parsers don’t need a separate scanner@@, so we need to augment the grammar with extra disambiguation information to distinguish between reserved words, identifiers and the delimiter set.
*Parser combinators pros
**in the language itself
**no external tools
**part of native Scala library
*Parser combinators cons
**do not generate proper error messages
**implementation tricky for left recursion (*)
*Parser generators pros
**code could be auto generated from EBNF
*Parser generators cons
**old-school method superseded by modern techniques
*Derivative combinators pros
*Derivative combinators cons
**too confidential
*GLL combinators pros
**can handle ambiguous grammar
**generate proper error messages
**solve left recursion problem (*)
*GLL combinators cons
*PEG (Parsing Expression Grammar) pros
**syntax similar to BNF
**generate proper error messages
*PEG cons
**cannot handle left recursion (*)
*Xtext pros
**offers a novel approach to external DSL development
**true decoupling between semantic model and the grammar rules
**syntax diagrams (ie railroad diagrams) for free
**full editor for free
*Xtext cons
**entire toolset (not just a library)
**eclipse-dependent (but the runtime components, like the parsers, the metamodel, the serializers, and the linkers, as well as the Xpand templating, can be embedded in any arbitrary Java process)
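The "parser combinators" entries above refer to the style where parsers are ordinary values composed with sequencing and ordered choice. A minimal JavaScript sketch of that core idea (the notes themselves discuss Scala's library, not this code):

```javascript
// Minimal parser-combinator core: a parser is (input, pos) -> {value, next} | null.
const lit = s => (input, pos) =>
	input.startsWith(s, pos) ? { value: s, next: pos + s.length } : null;

const seq = (p, q) => (input, pos) => {          // p then q
	const r1 = p(input, pos);
	if (!r1) return null;
	const r2 = q(input, r1.next);
	return r2 ? { value: [r1.value, r2.value], next: r2.next } : null;
};

const alt = (p, q) => (input, pos) =>            // ordered choice (PEG-style)
	p(input, pos) || q(input, pos);

// grammar: greeting -> ("hello" | "hi") " world"
const greeting = seq(alt(lit("hello"), lit("hi")), lit(" world"));
```

Because combinators are plain functions in the host language, no external tool or generation step is needed, which is exactly the "in the language itself" advantage listed above.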
--The winner is : Parser combinators with the PackratParsers extension** (but Parboiled, GLL and derivative combinators should be re-evaluated in case of limitations*** with the Scala implementation****)
A comparison study is needed between the PackratParsers extension and Parboiled, focusing on [[error message handling|]] --
--The new winner is Xtext.
Some work is probably needed to make it more Scala-compliant, and also a speed comparison with Packrat Parsers like this one : [[Packrat Efficiency]]--
@@I changed my mind again in favour of PEG thanks to Parboiled2 Macro-Based implementation@@
See [[Parsing Expression Grammar]] for details.

(*)  Even if the grammar has [[left recursion|]], it is usually not hard to remove
(**)  The Scala 2.8 combinator parser library adds PEG parsing and can memoize parse results for potential re-use when backtracking occurs. Packrat parsing allows left-recursive grammars (*).
(***) packrat & left recursion [[limitation|]] (*)
(****) [[Scala Parser combinator limitation|]]

Nota: [[Grammar combinators]] is also promising but too slow
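Footnote (*) above says left recursion is usually not hard to remove: the standard rewrite replaces `expr -> expr "+" term | term` with iteration (or right recursion). A quick JavaScript sketch of the rewritten rule over a hypothetical token list:

```javascript
// Left-recursive grammar:   expr -> expr "+" term | term
// Rewritten for recursive-descent/PEG parsing as iteration:
//                            expr -> term ("+" term)*
function parseExpr(tokens) {
	let pos = 0;
	const term = () => {                  // term -> integer literal
		const t = tokens[pos];
		if (!/^[0-9]+$/.test(t)) throw new Error("expected number at " + pos);
		pos++;
		return parseInt(t, 10);
	};
	let value = term();
	while (tokens[pos] === "+") {         // the ("+" term)* loop replaces
		pos++;                            // the left-recursive call
		value += term();
	}
	return value;
}
```

The loop keeps left associativity without the infinite regress a naive `expr -> expr "+" term` descent would cause.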
*Various techniques and tools are available for parsing
**Parser generator (Yacc, Bison, ANTLR, JavaCC, SableCC) with scanner generator (Lex, Flex, JFlex)
**Parser combinators
**Derivative combinators : 
**GLL combinators :
**[[PEGs|]] an alternative to [[CFGs|]] :
**Natural language processing :
**Alternate language processing :
*Old school Initiatives
*[[Algebraic data types|]] are particularly well-suited to the implementation of abstract syntax.
*Requirements for scala-atp
***x + y * 7
***p(x, y) + 8
***⊤ ∧ f(c) ⊃ ∀ x. P(x)
*In Scala, operator names can use characters from Unicode
**[[Boolean Algebra Internal DSL|]]
*Threads about pretty printing in Scala

See also [[Parser combinators vs others]]
[[The Packrat Parsing and Parsing Expression Grammars Page|]]
[[[GSoC 2013] Parboiled2: PEG parser via Scala Macros|!topic/scala-language/sXC-f88Adiw]]
The [[Parboiled2|]] implementation provides both elegant combinator-style parser construction and fast runtime speed
[[Early benchmarks|]] seem to confirm the potential of Parboiled2
!What is it ?
If a parser takes a stream of characters and turns them into an abstract syntax tree, a pretty printer does the opposite: it turns some tree into a string, subject to some constraints (to keep the output "pretty")

!Some pretty printer implementation
*The Scala standard library already has an implementation of Wadler's pretty-printer, but it's rather limited and has only a few basic combinators. The [[Scala doc|]] about it is really weak. Some useful hints on [[this blog|]]
*A Scala port of Lindig's implementation of Wadler's pretty printer from OCaml, adding all the combinators and the whole API from Haskell's PPrint (another implementation of Wadler's prettier-printer, by Daan Leijen) : [[source code|]] and [[demo|]]
*A rudimentary pretty printer combinator :
*State of the art :
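In miniature, the "turn a tree back into a string, subject to constraints" idea described above looks like this. A hypothetical JavaScript sketch, not any of the libraries listed:

```javascript
// Tiny pretty-printer: render a nested call tree to a string, breaking
// onto indented lines when the one-line form would exceed `width`.
// A node is either a string leaf or { head, args } for head(arg, ...).
function pretty(node, width = 20, indent = 0) {
	if (typeof node === "string") return node;
	const flat = node.head + "(" +
		node.args.map(a => pretty(a, width)).join(", ") + ")";
	if (flat.length <= width) return flat;        // fits: keep on one line
	const pad = "  ".repeat(indent + 1);
	return node.head + "(\n" +
		node.args.map(a => pad + pretty(a, width, indent + 1)).join(",\n") +
		"\n" + "  ".repeat(indent) + ")";
}
```

Short trees stay flat (`f(x, y)`), while wide ones are broken and indented, which is exactly the layout-under-constraints behaviour the combinator libraries generalise.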
*Chapter 1: Getting a Jump Start in JavaFX
**JavaFX supports data binding, which is characterized by binding the value of a property (such as the height of a rectangle) to an expression
**A Stage contains the user interface of a JavaFX app, whether it is deployed on the desktop, within a browser, or on other devices
**a Scene holds the graphical elements that are displayed on the Stage
**One powerful graphical feature of JavaFX is the ability to create scene graphs, which consist of a tree of graphical nodes
*Chapter 2: Creating a User Interface in JavaFX
**One advantage of using layout containers is that when the node sizes change, their visual relationships with each other are predictable
**Each node in a Scene can be assigned an ID in the id property of the node
**A very powerful aspect of JavaFX is the ability to use CSS to style the nodes in a Scene dynamically
*Chapter 3: Properties and Bindings
**Each Property object may have at most one active unidirectional binding at a time. It may have as many bidirectional bindings as you want
**To define a JavaFX Beans property in a Java class, you provide three methods: the getter, the setter, and the property getter
*Building Dynamic UI Layouts in JavaFX
*Chapter 5: Using the JavaFX UI Controls
**The StackPane class, which has built-in capabilities for aligning nodes.
**Just like the StackPane class, TilePane also has variables for setting the default alignment and also the alignment of each individual node added
* Chapter 1 · A Scalable Language
** Functional programming is guided by two main ideas
*** functions are first-class values
*** methods should not have any side effects
* Chapter 2 · First Steps in Scala
** recursive function => explicit result type
** function literal : args.foreach(arg => println(arg))
** If a function literal consists of one statement that takes a single argument, you need not explicitly name and specify the argument
** You can say “in” for the <- symbol. You’d read for (arg <- args), therefore, as “for arg in args.”
* Chapter 3 · Next Steps in Scala
** if a method takes only one parameter, you can call it without a dot or parentheses. Note that this syntax only works if you explicitly specify the receiver of the method call.
** you could alternatively have written 1 + 2 using traditional method invocation syntax, (1).+(2).
** When you apply parentheses surrounding one or more values to a variable, Scala will transform the code into an invocation of a method named apply on that variable
** Similarly greetStrings(0) = "Hello" will be transformed into greetStrings.update(0, "Hello")
** Scala achieves a conceptual simplicity by treating everything, from arrays to expressions, as objects with methods
** If a method name ends in a colon, the method is invoked on the right operand
** arrays are always mutable; lists are always immutable
** Tuples are very useful, for example, if you need to return multiple objects from a method
** Scala provides mutable and immutable alternatives for sets and maps, but uses the same simple names for both versions
** The telltale sign of a function with side effects is that its result type is Unit
** Prefer vals, immutable objects, and methods without side effects
* Chapter 4 · Classes and Objects
** Public is Scala’s default access level
** One important characteristic of method parameters in Scala is that they are vals, not vars
** Scala cannot have static members. Instead, Scala has singleton objects
** When a singleton object shares the same name with a class, it is called that class’s companion object
** You must define both the class and its companion object in the same source file
** A class and its companion object can access each other’s private members
** Scala implicitly imports members of packages java.lang and scala, as well as the members of a singleton object named Predef
** compiled files end in a definition; a script, by contrast, must end in a result expression
* Chapter 5 : Basic Types and Operations
** Scala will transform the expression -2.0 into the method invocation (2.0).unary_-
** The only identifiers that can be used as prefix operators are +, -, !, and ~
** eq and its opposite, ne, only apply to objects that directly map to Java object
** Scala decides precedence based on the first character of the methods used in operator notation
** The one exception to the precedence rule, alluded to above, concerns assignment operators, which end in an equals character
** The associativity of an operator in Scala is determined by its last character
** Note that for each basic type described in this chapter, there is also a “rich wrapper” that provides several additional methods
* Chapter 6 - Functional Objects
** The emphasis in this chapter is on classes that define functional objects, that is, objects that do not have any mutable state
** In Java, classes have constructors, which can take parameters, whereas in Scala, classes can take parameters directly 
** The Scala compiler will compile any code you place in the class body, which isn’t part of a field or a method definition, into the primary constructor
** The override modifier in front of a method definition signals that a previous method definition is overridden
** require will prevent the object from being constructed by throwing an IllegalArgumentException
** Even though n and d are used in the body of the class, given they are only used inside constructors, the Scala compiler will not emit fields for them
** constructors other than the primary constructor are called auxiliary constructors
** Auxiliary constructors in Scala start with def this(...)
** every auxiliary constructor must invoke another constructor of the same class as its first action
** The primary constructor is thus the single point of entry of a class
** only the primary constructor can invoke a superclass constructor
** Identifiers in user programs should not contain """‘$’""" characters, even though it will compile; if they do this might lead to name clashes with identifiers generated by the Scala compiler
** You cannot write Thread.yield() because yield is a reserved word in Scala. However, you can still name the method in back ticks, e.g., Thread.`yield`()
** The implicit modifier in front of the method tells the compiler to apply it automatically in a number of situations
* Chapter 7 - Built-in Control Structures
** The only control structures are if, while, for, try, match, and function calls
** almost all of Scala’s control structures result in some value
** The while and do-while constructs are called “loops,” not expressions, because they don’t result in an interesting value
** If there isn’t a good justification for a particular while or do-while loop, try to find a way to do the same thing without it
** The syntax of a for-yield expression is like this: for clauses yield body
** Technically, an exception throw has type Nothing
** match expressions result in a value
** there has been no mention of break or continue
** Class Breaks in package scala.util.control offers a break method
* Chapter 8 - Functions and Closures 
**  local functions are visible only in their enclosing block
** local functions can access the parameters of their enclosing function
** Scala has first-class functions
** One way to make a function literal more brief is to leave off the parameter types
** A second way to remove useless characters is to leave out parentheses around a parameter whose type is inferred
** you can use underscores as placeholders for one or more parameters, so long as each parameter appears only one time within the function literal
** A partially applied function is an expression in which you don’t supply all of the arguments needed by the function
** is called a closure. The name arises from the act of “closing” the function literal by “capturing” the bindings of its free variables
** Changes made by a closure to a captured variable are visible outside the closure
** the instance used is the one that was active at the time the closure was created
** Scala supports repeated parameters, named arguments, and default arguments
** Scala only optimizes directly recursive calls back to the same function making the call
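The closure-capture rules noted in Chapter 8 (changes through a closure are visible outside it; each closure gets the binding active when it was created) behave the same way in JavaScript, which makes for a quick runnable illustration:

```javascript
// Closures capture variables, not values: changes made through the closure
// are visible outside it, and each closure uses the binding that was
// active when it was created.
function makeCounter() {
	let count = 0;                 // free variable captured by both closures
	return {
		inc: () => ++count,        // mutates the captured binding
		get: () => count           // sees the mutation
	};
}

const a = makeCounter();
const b = makeCounter();           // a fresh binding, independent of `a`
a.inc(); a.inc();
// a.get() === 2, b.get() === 0
```

`a` and `b` close over different instances of `count`, mirroring the book's point that "the instance used is the one that was active at the time the closure was created".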
* Chapter 9 - Control Abstraction
** first-class functions can help you eliminate code duplication
** A curried function is applied to multiple argument lists, instead of just one
** The purpose of this ability to substitute curly braces for parentheses for passing in one argument is to enable client programmers to write function literals between curly braces
** To make a by-name parameter, you give the parameter a type starting with => instead of () =>
* Chapter 10 - Composition and Inheritance
** two fundamental relationships between classes
*** Composition means one class holds a reference to another, using the referenced class to help it fulfill its mission
*** Inheritance is the superclass/subclass relationship
** A method is abstract if it does not have an implementation (i.e., no equals sign or body).
** The recommended convention is to use a parameterless method whenever there are no parameters and the method accesses mutable state only by reading fields of the containing object (in particular, it does not change mutable state)
** This convention supports the uniform access principle, which says that client code should not be affected by a decision to implement an attribute as a field or method
** it is recommended to still write the empty parentheses when the invoked method represents more than a property of its receiver object
** To instantiate an element, therefore, we will need to create a subclass that extends Element and implements the abstract contents method
** Inheritance means that all members of the superclass are also members of the subclass, with two exceptions
*** private members of the superclass are not inherited in a subclass
*** a member of a superclass is not inherited if a member with the same name and parameters is already implemented in the subclass
** fields and methods belong to the same namespace. This makes it possible for a field to override a parameterless method
** Java’s four namespaces are fields, methods, types, and packages
** Scala’s two namespaces are :
*** values (fields, methods, packages, and singleton objects)
*** types (class and trait names)
** so you can override a parameterless method with a val
** You can avoid the code smell by combining the parameter and the field in a single parametric field definition
** To invoke a superclass constructor, you simply place the argument or arguments you want to pass in parentheses following the name of the superclass
** In Java 1.5, an @Override annotation was introduced that works similarly to Scala’s override modifier, but unlike Scala’s override, is not required
** the actual method implementation invoked is determined at run time based on the class of the object, not the type of the variable or expression
** In Scala, you can define classes and singleton objects inside other classes and singleton objects
* Chapter 11 - Scala’s Hierarchy
** In Scala, every class inherits from a common superclass named Any
** Scala also defines some interesting classes at the bottom of the hierarchy, Null and Nothing
** just as Any is a superclass of every other class, Nothing is a subclass of every other class
** The root class Any has two subclasses: AnyVal and AnyRef
** AnyVal is the parent class of every built-in value class in Scala
*** Byte, Short, Char, Int, Long, Float, Double, Boolean, and Unit
** AnyRef is in fact just an alias for class java.lang.Object
** the equality operation == in Scala is designed to be transparent with respect to the type’s representation
** class AnyRef defines an additional eq method, which cannot be overridden and is implemented as reference equality (i.e., it behaves like == in Java for reference types)
** Because Nothing is a subtype of every other type, you can use methods like error in very flexible ways
* Chapter 12 - Traits
** A trait encapsulates method and field definitions, which can then be reused by mixing them into classes
** A trait definition looks just like a class definition except that it uses the keyword trait
** Once a trait is defined, it can be mixed in to a class using either the extends or with keywords
** A trait also defines a type
** you can do anything in a trait definition that you can do in a class definition, and the syntax looks exactly the same, with only two exceptions
*** a trait cannot have any “class” parameters, i.e., parameters passed to the primary constructor of a class
*** whereas in classes, super calls are statically bound, in traits, they are dynamically bound
** Adding a concrete method to a trait tilts the thin-rich trade-off heavily towards rich interfaces
** Unlike in Java, adding a concrete method to a Scala trait is a one-time effort
** The Ordered trait then defines <, >, <=, and >=
** Unlike the traits you have seen so far, Ordered requires you to specify a type parameter when you mix it in
** Traits let you modify the methods of a class, and they do so in a way that allows you to stack those modifications with each other
** super calls in a trait are dynamically bound
** mark such methods as abstract override. This combination of modifiers is only allowed for members of traits, not classes
** With traits, the method called is determined by a linearization of the classes and traits that are mixed into a class
** a class is always linearized before all of its superclasses and mixed in traits
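Stackable modifications can be sketched in the spirit of the book's IntQueue example (reconstructed from memory, so details may differ):

```scala
import scala.collection.mutable.ArrayBuffer

abstract class IntQueue {
  def get(): Int
  def put(x: Int): Unit
}

class BasicIntQueue extends IntQueue {
  private val buf = new ArrayBuffer[Int]
  def get(): Int = buf.remove(0)
  def put(x: Int): Unit = { buf += x }
}

// 'abstract override' is legal only in a trait: the super call is
// dynamically bound to the next trait in the linearization.
trait Doubling extends IntQueue {
  abstract override def put(x: Int): Unit = super.put(2 * x)
}

trait Incrementing extends IntQueue {
  abstract override def put(x: Int): Unit = super.put(x + 1)
}

// Traits further to the right take effect first: put(10) increments
// to 11, then doubles to 22.
val q = new BasicIntQueue with Doubling with Incrementing
q.put(10)
```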
** Whenever you implement a reusable collection of behavior, you will have to decide whether you want to use a trait or an abstract class
*** If the behavior will not be reused, then make it a concrete class
*** If you want to inherit from it in Java code, use an abstract class
*** If you plan to distribute it in compiled form, and you expect outside groups to write classes inheriting from it, you might lean towards using an abstract class
*** If efficiency is very important, lean towards using a class
*** If you still do not know, after considering the above, then start by making it as a trait
* Chapter 13 - Packages and Imports
** One way to minimize coupling is to write in a modular style
** it is recommended to follow Java’s reverse-domain-name convention for Scala packages
** Scala provides a package named _root_ that is outside any package a user can write
** Scala’s import clauses are quite a bit more flexible than Java’s
*** may appear anywhere
**** This syntax is particularly useful when you use objects as modules, which will be described in Chapter 29
*** may refer to objects (singleton or regular) in addition to packages
*** let you rename and hide some of the imported members
** Scala adds some imports implicitly to every program
*** import java.lang._
*** import scala._
**** the scala import overshadows the java.lang import
*** import Predef._
** Access modifiers in Scala can be augmented with qualifiers
*** A modifier of the form private[X] or protected[X] means that access is private or protected “up to” X, where X designates some enclosing package, class or singleton object.
** Any definitions placed in a package object are considered members of the package itself.
** Package objects are frequently used to hold package-wide type aliases (Chapter 20) and implicit conversions (Chapter 21)
** Looking ahead, Chapter 29 describes a more flexible module system than division into packages
* Chapter 14 - Assertions and Unit Testing
** The expression assert(condition) throws an AssertionError if condition does not hold
** There’s also a two-argument version of assert.
** The ensuring method can be used with any result type because of an implicit conversion
** Assertions (and ensuring checks) can be enabled and disabled using the JVM’s -ea and -da command-line flags
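A minimal sketch of both checks (the method name is invented for illustration):

```scala
// Two-argument assert plus a postcondition via ensuring (from Predef).
def widen(width: Int, delta: Int): Int = {
  assert(delta >= 0, s"negative delta: $delta")  // two-argument form
  (width + delta) ensuring (_ >= width)          // checks the result value
}
```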
* Chapter 15 - Case Classes and Pattern Matching
** In the common case, all you need to do is add a single case keyword to each class that you want to be pattern matchable
** Using the case modifier makes the Scala compiler add some syntactic conveniences to your class.
*** it adds a factory method with the name of the class
*** all arguments in the parameter list of a case class implicitly get a val prefix, so they are maintained as fields
*** adds “natural” implementations of the methods toString, hashCode, and equals
*** adds a copy method to your class for making modified copies
** elements of case classes are always compared structurally
** you always have to make sure that all cases are covered, even if it means adding a default case where there’s nothing to do
** Constructors are where pattern matching becomes really powerful
** Scala uses the erasure model of generics, just like Java does. This means that no information about type arguments is maintained at runtime
** The only exception to the erasure rule is arrays
** In addition to the standalone variable patterns, you can also add a variable to any other pattern. You simply write the variable name, an at sign (@), and then the pattern.
** A pattern guard comes after a pattern and starts with an if
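The case-class, @-binding, and guard points above can be combined in one small sketch (the Expr hierarchy is a simplified version of the book's example):

```scala
sealed abstract class Expr
case class Num(value: Int) extends Expr
case class UnOp(op: String, arg: Expr) extends Expr

def simplify(expr: Expr): Expr = expr match {
  // variable binding with @: 'inner' names the whole nested sub-pattern
  case UnOp("-", UnOp("-", inner)) => inner          // double negation
  case n @ Num(v) if v >= 0        => n              // pattern guard
  case _                           => expr
}
```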
** A sealed class cannot have any new subclasses added except the ones in the same file
** So you know that in fact no MatchError will be produced. To make the warning go away, you could add a third catch-all case to the method
** A more lightweight alternative is to add an @unchecked annotation to the selector expression of the match
** The most common way to take optional values apart is through a pattern match
** Patterns are allowed in many parts of Scala, not just in standalone match expressions
*** Any time you define a val or a var, you can use a pattern instead of a simple identifier
*** A sequence of cases (i.e., alternatives) in curly braces can be used anywhere a function literal can be used
**** In general, you should try to work with complete functions whenever possible, because using partial functions allows for runtime errors that the compiler cannot help you with
*** You can also use a pattern in a for expression
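Two of these pattern locations can be sketched briefly:

```scala
// Pattern in a val definition: destructure a tuple directly
val (word, count) = ("scala", 2)

// Pattern in a for expression: elements that don't match are filtered out
val results = List(Some(1), None, Some(3))
val total = (for (Some(n) <- results) yield n).sum
```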
* Chapter 16 - Working with Lists
** lists are immutable
** lists are homogeneous: the elements of a list all have the same type
** The list type in Scala is covariant. This means that for each pair of types S and T, if S is a subtype of T, then List[S] is a subtype of List[T]
** pattern matching over lists is clearer than decomposing them with methods
** On lists, unlike arrays, length is a relatively expensive operation
** It’s a good idea to organize your data so that most accesses are at the head of a list, rather than the last element
** Higher-order methods on class List
*** Mapping over lists: map, flatMap and foreach
*** Filtering lists: filter, partition, find, takeWhile, dropWhile, and span
*** Predicates over lists: forall and exists
*** Folding lists: /: and :\
*** Sorting lists: sortWith
** Methods of the List object
*** Creating lists from their elements: List.apply
*** Creating a range of numbers: List.range
*** Creating uniform lists: List.fill
*** Tabulating a function: List.tabulate
*** Concatenating multiple lists: List.concat
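A few of the methods listed above, sketched on a small list (note the book's symbolic `/:` and `:\` are foldLeft and foldRight; the symbolic forms were later deprecated in favor of the named methods):

```scala
val xs = List(1, 2, 3, 4, 5)

val doubled       = xs.map(_ * 2)                 // List(2, 4, 6, 8, 10)
val (evens, odds) = xs.partition(_ % 2 == 0)
val allPositive   = xs.forall(_ > 0)
val sum           = xs.foldLeft(0)(_ + _)         // the book writes (0 /: xs)(_ + _)
val squares       = List.tabulate(5)(n => n * n)  // List(0, 1, 4, 9, 16)
```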
** Type inference in Scala is flow based
** This example highlights some limitations of the local, flow-based type inference scheme of Scala
*** It is not present in the more global Hindley-Milner style of type inference used in functional languages such as ML or Haskell
*** However, Scala’s local type inference deals much more gracefully with object-oriented subtyping than the Hindley-Milner style does
*** Fortunately, the limitations show up only in some corner cases, and are usually easily fixed by adding an explicit type annotation
* Chapter 17 - Collections
** A ListBuffer is a mutable object (contained in package scala.collection.mutable), which can help you build lists more efficiently when you need to append
** Another reason to use ListBuffer instead of List is to prevent the potential for stack overflow
**  Because Predef has an implicit conversion from String to StringOps, you can treat any string like a sequence
** The key characteristic of sets is that they will ensure that at most one of each object, as determined by ==, will be contained in the set at any one time
** To make it easier to switch from immutable to mutable collections, and vice versa, Scala provides some syntactic sugar
** Because tuples can combine objects of different types, tuples do not inherit from Traversable
** A common application of tuples is returning multiple values from a method
** To access elements of a tuple, you can use method _1 to access the first element, _2 to access the second, and so on
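Returning multiple values with a tuple can be sketched as follows (the method name is illustrative):

```scala
// Return both the longest word and its index as a pair
def longestWord(words: Array[String]): (String, Int) = {
  var word = words(0)
  var idx = 0
  for (i <- 1 until words.length)
    if (words(i).length > word.length) { word = words(i); idx = i }
  (word, idx)
}

// Destructure the result; the same pair is also reachable via _1 and _2
val (w, i) = longestWord("The quick brown fox".split(" "))
```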
* Chapter 18 - Stateful Objects
** A class might contain vars and still be purely functional
** every var that is a non-private member of some object implicitly defines a getter and a setter method
** The getter of a var x is just named “x”, while its setter is named “x_=”
** The getter and setter get the same visibility as the original var
** an initializer “= _” of a field assigns a zero value to that field. The zero value depends on the field’s type. It is 0 for numeric types, false for booleans, and null for reference types
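The getter/setter convention above can be sketched by writing the pair yourself, following the `x` / `x_=` naming the compiler uses for a public var (the class is illustrative):

```scala
class Time {
  private var h = 12
  def hour: Int = h
  def hour_=(x: Int): Unit = {     // setter: enables 'time.hour = ...' syntax
    require(0 <= x && x < 24)
    h = x
  }
}
```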
* Chapter 19 - Type Parameterization
** functional queues are asymptotically just as efficient as mutable ones
** In Scala, the primary constructor does not have an explicit definition; it is defined implicitly by the class parameters and body. Nevertheless, it is still possible to hide the primary constructor by adding a private modifier in front of the class parameter list
** Recall that T* is the notation for repeated parameters, as described in Section 8.8.
** Now that the primary constructor of class Queue can no longer be called from client code, there needs to be some other way to create new queues
*** One possibility is to add an auxiliary constructor
*** Another possibility is to add a factory method
**** A neat way to do this is to define an object Queue that has the same name as the class being defined and contains an apply method
**** By placing this object in the same source file as class Queue, you make the object a companion object of the class
**** You saw in Section 13.5 that a companion object has the same access rights as its class
**** Because of this, the apply method in object Queue can create a new Queue object, even though the constructor of class Queue is private
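A sketch of the pattern: a private primary constructor plus an apply factory method in the companion object (a much-simplified Queue):

```scala
class Queue[T] private (
  private val leading: List[T],
  private val trailing: List[T]
) {
  def head: T = leading.head
}

object Queue {
  // The companion has the same access rights as the class, so it may
  // call the private constructor.
  def apply[T](xs: T*): Queue[T] = new Queue[T](xs.toList, Nil)
}

val q = Queue(1, 2, 3)   // goes through Queue.apply, not the constructor
```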
** Private constructors and private members are one way to hide the initialization and representation of a class
** Another, more radical way is to hide the class itself and only export a trait that reveals the public interface of the class
** Queue, as defined in Listing 19.4, is a trait, but not a type. Queue is not a type because it takes a type parameter. As a result, you cannot create variables of type Queue
** Instead, trait Queue enables you to specify parameterized types, such as Queue[String], Queue[Int], or Queue[AnyRef]
** Thus, Queue is a trait, and Queue[String] is a type. Queue is also called a type constructor, because with it you can construct a type by specifying a type parameter
** Prefixing a formal type parameter with a + indicates that subtyping is covariant (flexible) in that parameter
** Besides +, there is also a prefix -, which indicates contravariant subtyping.
** if you try to declare a type parameter to be covariant (by adding a +), but that could lead to potential runtime errors, your program won’t compile
** it is safe to assume that a type T is a subtype of a type U if you can substitute a value of type T wherever a value of type U is required. This is called the Liskov Substitution Principle
** the syntax, “U >: T”, defines T as the lower bound for U. As a result, U is required to be a supertype of T. The parameter to enqueue is now of type U instead of type T
** With the “T <: Ordered[T]” syntax, you indicate that the type parameter, T, has an upper bound, Ordered[T]. This means that the element type of the list passed to orderedMergeSort must be a subtype of Ordered
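Variance and both kinds of bound can be sketched together (a simple immutable stack stands in for the book's Queue):

```scala
// Covariant class: the lower bound U >: T is what makes push sound here
class Stack[+T](private val elems: List[T]) {
  def push[U >: T](x: U): Stack[U] = new Stack(x :: elems)
  def top: T = elems.head
}

val s: Stack[Any] = new Stack[Int](List(1, 2))    // Stack[Int] <: Stack[Any]
val s2 = new Stack[Int](Nil).push(1).push("two")  // widens to Stack[Any]

// Upper bound: T must be a subtype of Ordered[T]
case class Meters(n: Int) extends Ordered[Meters] {
  def compare(that: Meters): Int = this.n - that.n
}
def maxOf[T <: Ordered[T]](a: T, b: T): T = if (a < b) b else a
```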
* Chapter 20 - Abstract Members
** four kinds of abstract member: vals, vars, methods, and types
** OK to override a ‘def’ with a ‘val’
** cannot override a ‘val’ with a ‘def’
** because pre-initialized fields are initialized before the superclass constructor is called, their initializers cannot refer to the object that’s being constructed
** If you prefix a val definition with a lazy modifier, the initializing expression on the right-hand side will only be evaluated the first time the val is used
** lazy vals are an ideal complement to functional objects, where the order of initializations does not matter, as long as everything gets initialized eventually. They are less well suited for code that’s predominantly imperative
** In Scala, the inner class is addressed using the expression Outer#Inner instead of Java’s Outer.Inner. The ‘.’ syntax is reserved for objects
** there’s a class in its standard library, scala.Enumeration. To create a new enumeration, you define an object that extends this class
* Chapter 21 - Implicit Conversions and Parameters
** The way this code works is that the compiler first tries to compile it as is, but it sees a type error. Before giving up, it looks for an implicit conversion that can repair the problem
** Implicit conversions are governed by the following general rules:
*** Marking Rule: Only definitions marked implicit are available
*** Scope Rule: An inserted implicit conversion must be in scope as a single identifier, or be associated with the source or target type of the conversion
*** One-at-a-time Rule: Only one implicit is tried
*** Explicits-First Rule: Whenever code type checks as it is written, no implicits are attempted
*** Naming an implicit conversion. Implicit conversions can have arbitrary names
*** Where implicits are tried. There are three places implicits are used in the language: conversions to an expected type, conversions of the receiver of a selection, and implicit parameters
** Implicit conversions also apply to the receiver of a method call, the object on which the method is invoked. This kind of implicit conversion has two main uses
*** First, receiver conversions allow smoother integration of a new class into an existing class hierarchy
*** And second, they support writing domain-specific languages (DSLs) within the language
** The remaining place the compiler inserts implicits is within argument lists
** Note that the implicit keyword applies to an entire parameter list, not to individual parameters
** As a style rule, it is best to use a custom named type in the types of implicit parameters
** You can think of “T <% Ordered[T]” as saying, “I can use any T, so long as T can be treated as an Ordered[T].”
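Two of these implicit mechanisms can be sketched briefly (all names are invented; the `implicit class` form is the later idiom for the book's receiver conversions via `implicit def`):

```scala
// Receiver conversion: enrich Int with an extra method
implicit class TimesOps(n: Int) {
  def times(s: String): String = s * n   // string repetition
}

// Implicit parameter, using a custom named type per the style rule above
case class Greeting(text: String)
def greet(name: String)(implicit g: Greeting): String = s"${g.text}, $name"

implicit val defaultGreeting: Greeting = Greeting("Hello")
```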
** before adding a new implicit conversion, first ask whether you can achieve a similar effect through other means, such as inheritance, mixin composition, or method overloading
* Chapter 22 - Implementing Lists
** Lists are not “built-in” as a language construct in Scala
** the list buffer implementation is organized so that both the append operation (+=) and the toList operation take (very short) constant time
** Instead of recursing through this structure, however, many core list methods are implemented using a ListBuffer
** It is functional on the outside, but uses mutation internally to speed up the common case where a buffer is discarded after toList has been called
* Chapter 23 - For Expressions Revisited
** all for expressions that yield a result are translated by the compiler into combinations of invocations of the higher-order methods map, flatMap, and withFilter
** Generally, a for expression is of the form: for ( seq ) yield expr
*** seq is a sequence of generators, definitions, and filters, with semicolons between successive elements
**** A generator is of the form: pat <- expr
** The for notation is essentially equivalent to common operations of database query languages
** To support the full range of for expressions and for loops, you need to define map, flatMap, withFilter, and foreach as methods of your data type. But it’s also possible to define a subset of these methods
*** If it defines flatMap as well as map, it allows for expressions consisting of several generators
*** If it defines foreach, it allows for loops (both with single and multiple generators)
*** If it defines withFilter, it allows for filter expressions starting with an if in the for expression
** the concept of for expression is more general than just iteration over a collection
** for expressions also play an important role in asynchronous I/O, or as an alternative notation for optional values
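The translation scheme can be sketched on a small example, pairing a for expression with its approximate hand-translation:

```scala
// Two generators and a filter...
val pairs =
  for (i <- List(1, 2, 3); j <- List(10, 20) if i % 2 == 1)
  yield (i, j)

// ...desugar roughly into flatMap / withFilter / map:
val pairs2 =
  List(1, 2, 3).flatMap(i =>
    List(10, 20).withFilter(j => i % 2 == 1).map(j => (i, j)))
```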
* Chapter 24 - The Scala Collections API
**  Scala collections systematically distinguish between mutable and immutable collections
** The three variants are located in packages scala.collection, scala.collection.immutable, and scala.collection.mutable
*** A collection in package scala.collection can be either mutable or immutable
** The difference between root collections and immutable collections is
*** clients of an immutable collection have a guarantee that nobody can mutate the collection
*** clients of a root collection only know that they can’t change the collection themselves
** At the top of the collection hierarchy is trait Traversable
*** Abstract method: xs foreach f
** The next trait from the top in Figure 24.1 is Iterable
*** Abstract method: xs.iterator
** The Seq trait represents sequences. A sequence is a kind of iterable that has a length and whose elements have fixed index positions, starting from 0
** Seq trait has two subtraits, LinearSeq and IndexedSeq
*** A linear sequence has efficient head and tail operations
*** an indexed sequence has efficient apply, length, and (if mutable) update operations
** Sets are Iterables that contain no duplicate elements
** Maps are Iterables of pairs of keys and values
** if you needed a thread-safe map, you could mix the SynchronizedMap trait into whatever particular map implementation you desired.
** the most common immutable collection types
*** Lists are finite immutable sequences
*** A stream is like a list except that its elements are computed lazily
*** Because vectors strike a good balance between fast random selections and fast random functional updates, they are currently the default implementation of immutable indexed sequences
*** If you need a last-in-first-out sequence, you can use a Stack
*** A queue is just like a stack except that it is first-in-first-out rather than last-in- first-out
*** A range is an ordered sequence of integers that are equally spaced apart
*** Hash tries are a standard way to implement immutable sets and maps efficiently
*** Scala provides implementations of sets and maps that use a red-black tree internally. You access them under the names TreeSet and TreeMap
*** Red-black trees are also the standard implementation of SortedSet
*** A bit set represents a collection of small integers as the bits of a larger integer
*** A list map represents a map as a linked list of key-value pairs
** take a look at the mutable collection classes
*** An array buffer holds an array and a size
*** A list buffer is like an array buffer except that it uses a linked list internally instead of an array
*** Just as an array buffer is useful for building arrays and a list buffer for building lists, a string builder is useful for building strings
*** Linked lists are mutable sequences that consist of nodes that are linked with next pointers
*** DoubleLinkedLists are like singly linked lists, except that each node also links back to its predecessor. That additional link makes element removal very fast
*** A MutableList consists of a single linked list together with a pointer that refers to the terminal empty node of that list
*** Scala provides mutable queues in addition to immutable ones
*** Array sequences are mutable sequences of fixed size. They are implemented in Scala by class ArraySeq
*** You saw immutable stacks earlier. There is also a mutable version
*** ArrayStack is an alternative implementation of a mutable stack. It is generally slightly more efficient for most operations than a normal mutable stack
*** the default mutable map and set types in Scala are based on hash tables
*** A weak hash map is a special kind of hash map in which the garbage collector does not follow links from the map to the keys stored in it
*** A concurrent map can be accessed by several threads at once
*** A mutable bit set is just like an immutable one, except that it can be modified in place
** Arrays are a special kind of collection in Scala
** Like arrays, strings are not directly sequences, but they can be converted to them
** You can see the performance characteristics of some common operations on collections summarized in two tables, Table 24.10 and Table 24.11
** The collection libraries have a uniform approach to equality and hashing
*** The idea is, first, to divide collections into sets, maps, and sequences
**** Collections in different categories are always unequal
**** within the same category, collections are equal if and only if they have the same elements
** A view is a special kind of collection that represents some base collection, but implements all of its transformers lazily
** An iterator is not a collection, but rather a way to access the elements of a collection one by one
* Chapter 25 - The Architecture of Scala Collections
** The principal design objective of the new collections framework was to avoid any duplication, defining every operation in as few places as possible
** The design approach was to implement most operations in collection “templates” that can be flexibly inherited from individual base classes and implementations
** Almost all collection operations are implemented in terms of traversals and builders
** What needs to be done to integrate a new collection class, so that it can profit from all predefined operations at the right types? Two examples that do this:
*** Integrating sequences
*** Integrating new sets and maps
** Because of Scala’s rich support for abstraction, each new collection type can have a large number of methods without having to reimplement them all over again
* Chapter 26 - Extractors
** This chapter explains what extractors are and how you can use them to define patterns that are decoupled from an object’s representation
** An extractor in Scala is an object that has a method called unapply as one of its members. The purpose of that unapply method is to match a value and take it apart
** Even though they are very useful, case classes have one shortcoming: they expose the concrete representation of data. Extractors break this link between data representations and patterns.
** One particularly useful application area of extractors are regular expressions
** In this chapter you saw how to generalize pattern matching with extractors
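The decoupling point can be sketched with an extractor over plain strings (reconstructed from memory of the book's Email example, so details may differ):

```scala
// An extractor is just an object with an unapply method; no class
// representation is exposed.
object Email {
  def apply(user: String, domain: String): String = s"$user@$domain"
  def unapply(str: String): Option[(String, String)] =
    str.split("@") match {
      case Array(user, domain) => Some((user, domain))
      case _                   => None
    }
}

def describe(s: String): String = s match {
  case Email(user, domain) => s"user $user at $domain"
  case _                   => "not an email address"
}
```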
* Chapter 27 - Annotations
** Annotations are allowed on any kind of declaration or definition, including vals, vars, defs, classes, objects, traits, and types
** Scala includes several standard annotations
*** Deprecation
*** Volatile fields
*** Binary serialization
*** Automatic get and set methods
*** Tailrec
*** Unchecked
*** Native methods
* Chapter 28 - Working with XML [[skipped|!topic/scala-user/Dz95uJLqzhA]]
* Chapter 29 - Modular Programming Using Objects
** if a module is an object, then a template for a module is a class
** you can use traits to split a module into separate files
** One final feature of Scala modules is worth emphasizing: they can be linked together at runtime
* Chapter 30 - Object Equality
** The == equality is reserved in Scala for the “natural” equality of each type
** Here’s the recipe for overriding equals
*** If you’re going to override equals in a non-final class, you should create a canEqual method
*** The canEqual method should yield true if the argument object is an instance of the current class (i.e., the class in which canEqual is defined), false otherwise
*** In the equals method, make sure you declare the type of the object passed as an Any
*** Write the body of the equals method as a single match expression. The selector of the match should be the object passed to equals
*** The match expression should have two cases
**** In the body of this case, write an expression that logical-ands together the individual expressions that must be true for the objects to be equal
**** For the second case, use a wildcard pattern that yields false
** Given how difficult it is to implement a correct equality method, you might prefer to define your classes of comparable objects as case classes. That way, the Scala compiler will add equals and hashCode methods with the right properties automatically.
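The recipe applied to a small non-final class (a sketch, not the book's exact listing):

```scala
class Point(val x: Int, val y: Int) {
  def canEqual(other: Any): Boolean = other.isInstanceOf[Point]
  override def equals(other: Any): Boolean = other match {
    case that: Point =>
      // logical-and of canEqual plus the field comparisons
      that.canEqual(this) && this.x == that.x && this.y == that.y
    case _ => false   // wildcard case yields false
  }
  override def hashCode: Int = (x, y).hashCode   // consistent with equals
}
```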
* Chapter 31 - Combining Scala and Java
** Most of the time, you can ignore how Scala is implemented and simply write and run your code
* Chapter 32 - Actors and Concurrency [[skipped|]]
* Chapter 33 - Combinator Parsing
** Sequential composition is expressed by an explicit operator: ~
** Repetition is expressed rep( . . . )
** The ^^ operator transforms the result of a parser
*** examples :
**** floatingPointNumber ^^ (_.toDouble)
**** "true" ^^ (x => true)
** Table 33.1 · Summary of parser combinators
*** "..." literal
*** "...".r regular expression
*** P~Q sequential composition
*** P<~Q,P~>Q sequential composition; keep left/right only
*** P|Q alternative
*** opt(P) option
*** rep(P) repetition
*** repsep(P, Q) interleaved repetition
*** P^^f result conversion
** The task of syntax analysis is often split into two phases
*** The lexer phase recognizes individual words in the input and classifies them into some token classes
*** This is followed by a syntactical analysis phase that analyzes sequences of tokens
** Scala’s parsing combinators provide several utility classes for lexical and syntactic analysis
*** scala.util.parsing.combinator.lexical
*** scala.util.parsing.combinator.syntactical
** The parser combinators employ backtracking to choose between different parsers in an alternative
** Backtracking imposes only a few restrictions on how to formulate a grammar so that it can be parsed. Essentially, you just need to avoid left-recursive productions
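The core combinators can be sketched with a self-contained toy (this is NOT the real scala.util.parsing.combinator API, only an illustration of how ~, | and ^^ compose):

```scala
// Toy parser: a function from input to an optional (result, rest) pair.
case class P[A](run: String => Option[(A, String)]) {
  // sequential composition
  def ~[B](q: => P[B]): P[(A, B)] = P { in =>
    run(in).flatMap { case (a, r1) =>
      q.run(r1).map { case (b, r2) => ((a, b), r2) }
    }
  }
  // alternative, with backtracking: q is retried on the original input
  def |(q: => P[A]): P[A] = P { in => run(in).orElse(q.run(in)) }
  // result conversion
  def ^^[B](f: A => B): P[B] =
    P { in => run(in).map { case (a, r) => (f(a), r) } }
}

def lit(s: String): P[String] =
  P(in => if (in.startsWith(s)) Some((s, in.drop(s.length))) else None)

val digit: P[Int] =
  P(in => in.headOption.filter(_.isDigit).map(c => (c - '0', in.tail)))

// "3+4" parsed as digit ~ "+" ~ digit, converted to the sum with ^^
val sum: P[Int] = (digit ~ lit("+") ~ digit) ^^ { case ((a, _), b) => a + b }
```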
* Chapter 34 - GUI Programming [[skipped|]]
* Chapter 35 - The SCells Spreadsheet [[skipped|]]  
!What we've learned so far
Issues related to <<tag parsing>> have been consistently undervalued: ATP and parsing are tightly coupled.
A theorem prover needs a solid parsing foundation.
!What we've done so far
*contributed to <<tag MELIA>> [[0.1.3|]] by changing the <<tag TPTP>> parser to packrat <<tag parsing>>, which speeds up parsing significantly.
*contributed to <<tag gapt>>: migration to specs2, maven best practices, XML validation and code organization.
*contributed to <<tag gapt>> migration to Scala 2.10
*contributed to <<tag gapt>> migration to Scala 2.11
*--maintain a [[Github fork|]] of gapt compatible with Scala IDE-- done and pushed upstream
*see also : <<tag done>>
!What we are doing right now
Focus is currently on selecting and using a "state of the art" parser for ATP purposes.
*parser based on parser combinators with the PackratParsers extension has been studied
*[[Xtext|]] framework/tool was also considered
*Finally, the current study focuses on parboiled2: a Macro-Based PEG Parser
In this context, Debasish Ghosh's book [[DSLs in Action |]] and [[Eugen Labun's thesis|]] have proved invaluable
*Phase 2 (reloaded) :
**Parsing study and Scala port of [[Handbook of Practical Logic and Automated Reasoning|]]([[started|]], @@WIP@@)
See also : [[Roadmap]]
* a kind of standard for higher-order expressions:
* old prover9 xml format is just lots of boilerplate
* even HLK doesn't make much sense
* if you look at the "proof language" of Coq, you will see that there is almost no language. It is almost just functional programming...

=> <<tag TPTP>> is the cornerstone of the ATP world
*[[Metamath|]]
**The precise technical specification of the language is given in Section 4.1 (p. 92) of the [[Metamath book|]] and is about 4 pages long. A simple example is given on p. 35
**proofs developed using Metamath are fully detailed; this entails that every step can be verified by the readers
**resulting proofs are much longer than informal proofs or even formal developments in more advanced proof assistants
*[[Mizar Language|]]
**The Mizar language is close enough to the mathematical vernacular that mathematicians can read and understand Mizar articles almost immediately
**it is formal enough that proofs can automatically be checked
*[[Isar|]]
**The Isar subsystem is tightly integrated into the Isabelle/Pure meta-logic implementation
**Compared to existing declarative theorem proving systems (like Mizar), Isar avoids several shortcomings
*[[TPTP language]]
![[Oh my god. The monad laws are the *category laws* in the kleisli category. How did I not see this until now|]] (Paul Phillips 11/18/2015)
![[indeed.  type classes are a very limited mode of use of modules.|]] (Robert Harper 05/2014)
![[I've never used my own union/negation types in real code. On every occasion it initially seemed justified, a type class turned out better|]] (Miles Sabin 05/2014)
![[Actors are overly nondeterministic|]] (Paul Chiusano 09/2013)
scalaz-stream can already express lots of the use cases that people might otherwise express with actors
!These books are recommended for first reading
*@@The best introductory book is French: [[Introduction à la logique : Théorie de la démonstration]]@@
*[[Language, proof, and logic|]]
** Starts with a great review of logic and goes on to discuss proofs of soundness and Gödel's incompleteness theorems.
** Textbook and software package, intended for use in undergraduate level logic courses.
** Appropriate for students without much exposure to math or computer science
** Full review [[here|]] page 377 
*[[First-Order Logic and Automated Theorem Proving|]]
** All the proofs are here
** Examples in Prolog
*[[Handbook of Practical Logic and Automated Reasoning|]] 
** Automated theorem proving methods are explained with reference to actual concrete implementations
** No higher-order logic at all
** Not an easy read for those not already familiar with classical logic and functional programming
** Full review [[here|]] page 279
*[[DSLs in Action|]]
** Great book on how to design, implement and use domain specific languages. 
** Very useful if you have to write your parser with Scala parser combinators.
** Very good book. Thanks to the author, Debasish Ghosh.
*[[Homotopy Type Theory: Univalent Foundations of Mathematics|]]
**The authors have learnt the lesson of Russell and Whitehead, and use notation deftly to avoid losing the reader in a morass of symbols.
**The introductory sections are short and to the point, giving definitions and justification concisely and clearly.
**The language is approachable and not at all high-falutin.
**@@That said, learning type theory from scratch starting from page one of the HoTT book is probably not a good idea.@@
**@@Consider [[15-819 Homotopy Type Theory]] course as an indispensable companion to the book.@@
!These books are not for first reading
*[[Basic proof theory|]]
**Require a moderately advanced knowledge of proof theory
**Useful because the authors discuss quite a wide variety of logical systems
**Full review first edition :
**Review second edition : [[here|]] page 280
*[[Handbook of proof theory|]]
**This volume contains articles covering a broad spectrum of proof theory, with an emphasis on its mathematical aspects.
**Full review :
!Tasks to complete
*Phase 1 : Initialization
** --Select the appropriate Content Management System-- ([[done|Content Management System]])
** --Select the appropriate programming language-- ([[done|Scala]])
** --Select additional <<tag tools>>-- (done)
** --Select parsing technique-- ([[done|Parser combinators vs others]])
*Phase 2 (reloaded) :
**Parsing study and Scala port of [[Handbook of Practical Logic and Automated Reasoning|]]([[started|]], @@WIP@@)
**Write a parser for <<tag TPTP>> using parboiled2: a Macro-Based PEG Parser
**Compare [[shapeless|]], [[Idris|]] and  [[Agda|]] in HoTT context
*Phase 3 : Make [[HED|]] (hoper edei deixai) a state of the art theorem prover (not yet started)
!Never ending tasks
* Identify resources (books, articles, theses, blogs, etc.) that are available ([[started|Library]])
* Identify people/organisations also working on ATP ([[started|Who's who]])
* Select authoritative <<tag book>> on each subject and make a summary (<<tag summary [started]>>)
* Use the selected [[CMS|Content Management System]] to consolidate knowledge ([[started|Etat De L'Art]])
* Contribute to <<tag gapt>> (started)
* --maintain a [[Github fork|]] of gapt compatible with Scala IDE-- (done and committed upstream)
!Aborted tasks
*Phase 2 : Write a parser/editor for <<tag TPTP>> using --Scala Parser Combinator-- Xtext  ([[abandoned|TPTP Editor]])
**Construct Xtext grammar for TPTP (started, abandoned)
***Convert tptp.g from antlrV2 to antlrV3 (started, abandoned)
***Convert Antlr grammar to Xtext (abandoned)
***Create scopes for type resolution and challenges faced due to the TPTP’s language semantics (abandoned)
***Customize the appearance and contents of the Outline View (abandoned)
***Customize the editor: syntax highlighting, Javadoc-like documentation and hyperlinking (abandoned)
***Customize content-assist beyond the language syntax (abandoned)
***Validate and test the language’s semantics (abandoned)--
!Recent news about SAT
!The international SAT Competitions web page
Extract from [[Michal Jan Moskal PhD Thesis : Satisfiability Modulo Software|]]
//There is certain overlap between what is referred to as SMT solvers and first-order Automated Theorem Proving (ATP) systems. Both check satisfiability (or equivalently validity) of first-order formulas. However, unlike in SMT, the logic considered in ATP systems is usually just plain first-order logic with or without equality, but always without any additional theories. On the other hand, SMT solvers usually accept only ground (quantifier-free) problems, whereas ATP systems use reasoning methods like resolution or superposition, which are complete also in presence of non-ground formulas. Still, there are SMT solvers supporting quantifiers, and there are ATP solvers handling some background theories. The crucial differences are thus in the application areas and search methods.
SMT solvers evolved from propositional SAT solvers to support more efficient encoding of specific (hardware or software) verification problems. ATP systems are built to move forward automated deduction, in particular of mathematical theorems. In fact the TPTP library contains a number of open mathematical problems. Thus, as a rule of thumb, SMT solvers are good at huge, shallow problems (usually from decidable classes), whereas ATP systems are good at small, deep problems (where the general classes of problems are never decidable). As a result, the constraints on search methods are different.
ATP systems usually do some sort of saturation search, where new formulas are derived from the input. Such process is guaranteed to finish if the formula is unsatisfiable but not in the opposite case.
SMT solvers try to build a model for the input formula, step by step. This is done in hope that the model pointing a verification failure will likely be small, and if there is no model, we will be able to quickly move through such small models and discharge them. The user expects an answer either pointing to a candidate model, or the information that there is none. Both answers need to be given quickly. As a consequence, when SMT solvers need to deal with quantified formulas some heuristic is employed. This sacrifices completeness for the "common"-case performance. From automated deduction point of view, this does not seem very elegant. However, verification tools are very often incomplete for several other reasons, having to do with imprecise or restricted encoding of the verified system. Thus, one additional source of incompleteness does not look so bad anymore, especially when one can trade it for speeding up the interactive feedback loop.//
!The Satisfiability Modulo Theories Library (SMT-LIB)
!Competition (SMT-COMP)
!Constraint solver in Scala @@(wip)@@ (This page also contains first-class resources)
Why Scala ?

This extract from [[Daniel Spiewak's blog|]] is a good summary of the answer to this question. 

Scala can be the salvation for the program prover.  Its hybrid functional / object-oriented nature lends itself beautifully to the expression of highly mathematical concepts of intense precision.  Theoreticians have long suspected this, but the research simply has not been there to back it up.  After all, most PhDs write all their proofs on a blackboard, making use of a character set extension to ASCII.  Fortunately for the world, that is no longer an issue.

The answer is to turn the problem on its ear (as it were) and eschew mathematical expressions altogether.  Instead of the developer expressing intent in terms of epsilon, delta and gamma-transitions, a simple framework in Scala will infer intent just based on the input code.  All of the rules will be built dynamically using an internal DSL, without any need to mess about in low-level character encodings.  Scala is the perfect language for this effort.  Not only does its flexible syntax allow for powerful expressiveness, but it even supports UTF-8 string literals!

See also : [[Scala vs others]] 
!Alternative libraries
The standard XML package (aka Scala-XML) is outdated and beyond fixing (see section below).
And this will be the same for Scala 2.10 :!topic/scala-user/Dz95uJLqzhA%5B1-25%5D
There are alternative XML libraries :
*[[Anti-XML|]] (no build available for Scala 2.9.2)
Unfortunately none of these libraries could be used as a drop-in replacement.
!Standard Scala-XML is in very bad shape.
Working with XML offline is not straightforward with Scala-XML  
* (the tweak from Daniel Sobral doesn't work anymore with recent JDK and OpenJDK)
*Some also concluded "XML catalogs are not yet ready for prime time" and implemented their own XML catalogs : see the comment in (TestXMLCatalogAndValidate.scala)
I managed to make the whole thing work for gapt : 
! Problems in Scala plugin
* no hover for Javadoc/Scaladoc : [[ticket 1000255|]]
* --no "Add Import'" function like in Java (but quick fix work)--
* unused imports are never flagged [feature available in IntelliJ IDEA]
* no maximum line width in Scala formatter configuration
* searching for references of a Scala object in a project (as in Java) is not yet implemented [feature available in IntelliJ IDEA Find usages]
* open call hierarchy doesn't work : [[ticket 1000360|]]
* --no code analysis : [[ticket 1000629|]] [feature available in IntelliJ IDEA Inspect code]-- The [[Scalastyle|]] Eclipse plugin works well.
! Problems in m2eclipse
* adding a module in a Maven multi-module project never adds the corresponding Scala project, even with "update project configuration"
! Problems with specs2
* The first method to execute a specs2 specification [[documented here|]] doesn't always work, even with the [[trick|]]. The author notes : //for undetermined reasons the trick above might not even work! In that case your best chance is to put explicitly the following annotation on your specification class ...// @@<= not really a problem since the second method works.@@
* {{{@RunWith(classOf[JUnitRunner])}}} needs to be at the top, so no case class before it, for example. (Not a big problem)
!How to debug
*the log parser is very useful (see above)

Example with :
{{{
  lazy val expression: Parser[Expression] = log(product ~ "+" ~ expression)("product+expression") ^^ {
    case left ~ "+" ~ right =>
      Add(left, right)
  } | log(product)("product")

  lazy val product: Parser[Expression] = log(atom ~ "*" ~ product)("atom*product") ^^ {
    case left ~ "*" ~ right =>
      Mul(left, right)
  } | log(atom)("atom")

  lazy val atom: Parser[Expression] = "(" ~> log(expression)("expression") <~ ")" | log(constant)("constant") | log(variable)("variable")

  lazy val constant = log(decimalNumber)("decimalNumber") ^^ { s => Const(s.toInt) }

  lazy val variable = log(ident)("ident") ^^ { s => Var(s.toString) }
}}}

!Migrate code from Scala standard parser combinator to packrat
*replace "def" with "lazy val"
*extend xxxParsers with PackratParsers, i.e. mix PackratParsers into the parser class/object
*replace "Parser[T]" with "PackratParser[T]"
A complete example of conversion can be found here :
A benchmark before and after the migration : see [[Packrat Efficiency]]
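The migration steps above can be sketched on a tiny grammar (the AST and names below are illustrative, not taken from a real codebase):

```scala
import scala.util.parsing.combinator.{PackratParsers, RegexParsers}

sealed trait Expr
case class Num(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr

// After migration: PackratParsers is mixed in, "def" became "lazy val",
// and Parser[T] became PackratParser[T]. A bonus of packrat parsing is
// that direct left recursion works, so "+" can be made left-associative.
object ExprParser extends RegexParsers with PackratParsers {
  lazy val expr: PackratParser[Expr] =
    expr ~ ("+" ~> num) ^^ { case l ~ r => Add(l, r) } | num
  lazy val num: PackratParser[Expr] = """\d+""".r ^^ { s => Num(s.toInt) }

  def parse(in: String): Expr = parseAll(expr, in).get
}
```

Here {{{ExprParser.parse("1+2+3")}}} yields the left-associated tree {{{Add(Add(Num(1), Num(2)), Num(3))}}}, something the plain combinators cannot express with a left-recursive rule.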
* Chapter 1 · Scala - a blended language
**Object orientation can deal with composing nouns and functional programming can deal with composing verbs
**The essence of functional programming is delaying side effects as long as possible
**In addition to separating the concerns of how a variable behaves from the variable type, the placement of types on the right allows type inference to determine the type of the variables
**Implicits are a powerful feature and are mistrusted by some. The key to implicits in Scala is knowing how and when to use them
* Chapter 2 · The core rules
*Part 1 - Scala : the basics
**Why Scala?
***a pure object-oriented language should have the following characteristics:
****Encapsulation/information hiding.
****Polymorphism/dynamic binding.
****All predefined types are objects.
****All operations are performed by sending messages to objects.
****All user-defined types are objects.
***Scala has made some innovations on OOP space:
****Modular mixin composition
****Type abstraction
***Functional programming languages that support this style of programming provide at least some of the following features:
****Higher-order functions
****Lexical closures
****Pattern matching
****Single assignment
****Lazy evaluation
****Type inference
****Tail call optimization
****List comprehensions
****Monadic effects
***Because all values in Scala are objects, it follows that functions are objects too
***{{{List(1, 2, 3).map((x: Int) => x + 1)}}}
***{{{List(1, 2, 3).map(new Function1[Int, Int]{ def apply(x:Int): Int = x + 1})}}}
***Scala version 2.10 adds support for the Dynamic trait
**Getting started
***Scala Predef (part of the standard library) maps println to Console.println
***placeholder character (_) to assign a default value
***you can’t declare lazy var variables in Scala
***{{{val first :: rest = List(1, 2, 3)}}}
***passing a function isn't necessarily a closure
***Predef implicitly converts Array to scala.collection.mutable.ArrayOps
***{{{val afterDelete = newList.filterNot(_ == 3)}}}
***The special {{{:_*}}} tells the Scala compiler to pass the result of toArray as variable arguments to the Arrays.asList method
**OOP in Scala
***The class declaration looks different from the way you declare in Java or C#—you’re not only declaring the class, but also its primary constructor
***You can create classes without any class body
***val and var prefixes are optional, and when both of them are missing, they’re treated as private instance values
***To overload a constructor, name it this followed by the parameters. Constructor definition is similar to method definition except that you use the name this
***To add a setter, you have to suffix your setter method with _=
***When you don’t explicitly extend any class, by default that class extends the scala.AnyRef class.
***Scala provides syntactic sugar that allows you to use objects as function calls. Scala achieves this by translating these calls into the apply method
***The only things you can put in a package are classes, traits, and objects. But with the help of the package object, you can put any kind of definition in a package, such as a class
***abstract class can have constructor parameters, but traits can’t take any parameters. Both can take type parameters
***The difference between def and val is that val gets evaluated when an object is created, but def is evaluated every time a method is called
***One advantage of traits compared to module systems available in other languages is that the trait mixin is checked at compile time
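A few of the notes above (primary constructor in the class header, {{{this}}} overloads, {{{_=}}} setters, {{{apply}}} sugar) can be illustrated in one small sketch; all names here are invented for the example:

```scala
// The class header declares the class AND its primary constructor
class Person(val name: String, private var _age: Int) {
  // An overloaded (auxiliary) constructor is named `this`
  def this(name: String) = this(name, 0)

  def age: Int = _age
  // A setter is a method whose name is suffixed with _=
  def age_=(a: Int): Unit = { _age = a }
}

// `apply` lets an object be used as a function call
object Doubler {
  def apply(x: Int): Int = x * 2
}
```

With this, {{{val p = new Person("Ada"); p.age = 36}}} goes through the {{{age_=}}} setter, and {{{Doubler(21)}}} desugars to {{{Doubler.apply(21)}}}.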
*Part 2 - Working with Scala
**Concurrency programming in Scala
***Locks don’t compose. You can’t implement a higher-level, thread-safe behavior by composing smaller thread-safe operations.
***Three most popular trends in concurrency
****Software transactional memory (STM). This is typically implemented in a lock-free way and is composable.
****Dataflow concurrency
****Message-passing concurrency
***you can safely use mutable state inside an actor as long as it’s not shared.
***the recommended approach is to separate blocking actors from nonblocking actors by assigning different message dispatchers
***A common use case of Future is to perform some computation concurrently without needing the extra utility of an actor
***The most compelling feature of the Scala Future library is that it allows us to compose concurrent operations, which is hard to achieve with actors
***Future is an example of a monad. It implements map, flatMap, and filter operations, necessary ingredients of functional composition.
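The composition point above can be sketched with a for-comprehension over Futures (the workload here is made up for the example):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object FutureDemo {
  // A toy asynchronous computation
  def lengthOf(s: String): Future[Int] = Future(s.length)

  def combined(a: String, b: String): Future[Int] = {
    // Start both computations first so they can run concurrently;
    // the for-comprehension (map/flatMap) then combines their results.
    val fa = lengthOf(a)
    val fb = lengthOf(b)
    for (la <- fa; lb <- fb) yield la + lb
  }

  // Blocking only at the edge, to observe the result
  def run(a: String, b: String): Int = Await.result(combined(a, b), Duration.Inf)
}
```

Note that writing {{{for (la <- lengthOf(a); lb <- lengthOf(b)) ...}}} instead would sequence the two computations, since the second Future would only be created inside the first's flatMap.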
*map and filter are not defined on Array. Implicit conversions in Predef are used to provide such methods for arrays.
* Scala compiler : this is OK in Java but not valid in Scala :
{{{
System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg)
}}}
*The following programming languages have algebraic data types as a first-class notion. This criterion is mandatory.
**Clean : too confidential/exotic. Too little community.
**F# : seem OK but Micro$oft world is definitely not my cup of tea.
**Haskell : seem OK
**haXe : too confidential/exotic. Too little community.
**Hope : too confidential/exotic. Too little community.
**Mercury : too confidential/exotic. Too little community.
**Miranda : too confidential/exotic. Too little community.
**Nemerle : too confidential/exotic. Too little community.
**Objective Caml : seem OK
**Scala : seem OK
**Standard ML : superseded by Objective Caml 
**Visual Prolog : too confidential/exotic. Too little community.

* Objective Caml pros
** mixed paradigm
* Objective Caml cons
** poor IDE support
* Haskell pros
** cleanest syntax of all
** pure formal
* Haskell cons
** pure formal
** poor IDE support
* Scala pros
** mixed paradigm
** the fastest
** strong IDE support
* Scala cons
** syntax is as good as possible but far from the beauty of Haskell syntax 
The winner is : [[Scala]]. The only language of the three with strong IDE support. 
@@None of these languages explicitly supports dependent types; that's why [[Idris]] is now also considered@@
*ScalaTest pros
**usable Eclipse plugin
**Akka’s own test suite is written using ScalaTest
**used by Martin Odersky in Coursera
*ScalaTest cons
**--source repository on svn, ant build, ... (but this is going to change => [[see this thread|!topic/scalatest-users/cExfcfMgNcQ]])--
*Specs2 pros
**Eric Torreborre support is amazing
**parallel test execution proved to be useful for thread-safety checks  
**The Scala community seems to prefer Specs2 (1)
*Specs2 cons
**no Eclipse plugin; JUnit is needed and the result is far from perfect (no message for skipped tests and a "fragment not found" glitch)
**Slow compilation times :

ScalaTest wins

(1) but Twitter migrated from Specs to ScalaTest :
//we found that scalatest was typically a lot more conscientious about breaking API changes (we are still getting tests off of specs, as you can see), didn't require that you used a metric ton of implicits, which slow down compilation significantly, and is more responsive to bug reports. In general, scalatest is flexible, simple to reason about, and doesn't get in your way.//
Netty's Javadoc uses [[apiviz|]], a rewrite of [[umlgraph|]], to embed class diagrams in Javadoc
A similar feature for Scala was discussed [[here|]] and [[here|]] --but no implementation yet.--
Extract from :
{{{Type lattice visualization: Rather than trying to understand head-on how the typechecker works ;-) it might be useful to visualize first what the typechecker does for a given run (which type lattice it computes). At least one other component of the Scala compiler generates a DOT file to illustrate its operation (-Xgenerate-phase-graph). In IDE-speak, a type lattice is called a "UML Class Hierarchy". Actually, it need not be IDE-based: Scaladoc2 could be retrofitted to perform code-summarization as in Spinelli's UMLGraph.}}}
--Implementation is ongoing : see [[Scaladoc class diagrams part 1|]] and [[Scaladoc class diagrams part 2|]]--
@@''Implementation is done and the graph and implicit feature are available in Scala 2.10''@@
*scala-maven-plugin limitation
** no way to pass the -diagrams and -implicits arguments other than this way, and that is not compatible with the install goal
@@sbt uses a much better way@@ : 
{{{
scalacOptions in (Compile, doc) ++= Seq("-diagrams","-implicits")
}}}

*Scaladoc limitations --(@@TODO@@ check if fixed in last version)--
**--Scaladoc/Scaladoc2 : no Java like {{{"{@link ...}"}}} (VScalaDoc2 doesn't seem an option actually)-- @@"""Scaladoc uses [[fullyQualifiedName]] instead of {@link} """@@
--Project to follow :
*Resources :
**[[scaladoc2|]] source code :
**Advanced browsable Scala source code in HTML with :
**--Collaborative doc initiative :

a site about Homotopy Type Theory, Programming Language Theory and Automated Theorem Proving
{{right{<html> <a href=""><img style="position: absolute; top: 0; right: 0; border: 0;" src="" alt="Fork me on GitHub"></a>
</html>}}}[[Etat De L'Art]] 
.headerShadow .left, 
.headerShadow .right { 
     position: absolute; 
     top: 0; 
}

.headerShadow .left { 
     left: 0; 
}

.headerShadow .right { 
     right: 0; 
}

.headerForeground .left, 
.headerForeground .right { 
     display: none; 
}
; FORM ::= true //truth//
: | false //falsity//
: | FORM ∧ FORM //conjunction//
: | FORM ∨ FORM //disjunction//
: | FORM → FORM //implication//
: | ¬FORM //negation//
: | ∀VAR FORM //universal quantification//
: | ∃VAR FORM //existential quantification//
: | p(TERM, . . . , TERM ) where p ∈ Σp //atomic formulas or predicates//
; TERM ::= f (TERM, . . . , TERM ) where f ∈ Σf
: | VAR
; VAR ::= some countable set of names //variables//
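The grammar above maps naturally onto a Scala algebraic data type (a sketch; the constructor names are my own, not from any particular prover):

```scala
// Terms: f(t1, ..., tn) with f ∈ Σf, or a variable
sealed trait Term
case class Var(name: String) extends Term
case class Fn(name: String, args: List[Term]) extends Term

// Formulas, one constructor per production of FORM
sealed trait Form
case object True extends Form
case object False extends Form
case class And(l: Form, r: Form) extends Form
case class Or(l: Form, r: Form) extends Form
case class Imp(l: Form, r: Form) extends Form
case class Not(f: Form) extends Form
case class Forall(v: String, body: Form) extends Form
case class Exists(v: String, body: Form) extends Form
case class Pred(name: String, args: List[Term]) extends Form  // p ∈ Σp
```

For example, ∀x (p(x) → q(x)) becomes {{{Forall("x", Imp(Pred("p", List(Var("x"))), Pred("q", List(Var("x")))))}}}.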
[[Ivy|]] is a preprocessor and proof checker for resolution/paramodulation theorem provers.

Information about Ivy seems to be available in this non-free book : [[Computer-Aided Reasoning: ACL2 Case Studies|]]
Some information about the syntax of Ivy is available in this [[document|]] (see pages 18 & 19)
*each line is justified as an application of a rule to preceding lines
*the justification for initial steps is named ''input''
*''instantiate'' applies an explicit substitution to a clause
*''resolve'' applies binary resolution on identical atoms to a pair of clauses
*''paramod'' applies equality substitution on identical terms to a pair of clauses
*''flip'' swaps the arguments of an equality atom of a clause
*''propositional'' applies propositional simplification to a clause
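The justification rules above could be modelled in Scala roughly as follows (a sketch under my own naming; clause and substitution details are elided, and Int stands for a step number):

```scala
sealed trait Justification
case object Input extends Justification
// apply an explicit substitution to a clause
case class Instantiate(step: Int, subst: Map[String, String]) extends Justification
// binary resolution on identical atoms, applied to a pair of clauses
case class Resolve(left: Int, right: Int) extends Justification
// equality substitution on identical terms, applied to a pair of clauses
case class Paramod(from: Int, into: Int) extends Justification
// swap the arguments of an equality atom
case class Flip(step: Int) extends Justification
// propositional simplification
case class Propositional(step: Int) extends Justification
```

A proof checker would then walk the list of steps and verify that each justification really derives its clause from the referenced earlier steps.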
; FORM ::= true //truth//
: | false //falsity//
: | FORM ∧ FORM //conjunction//
: | FORM ∨ FORM //disjunction//
: | FORM → FORM //implication//
: | ¬FORM //negation//
: | p where p ∈ Σ //boolean variables//
Prover9/Mace4 BNF
David A. Wheeler, 2007-09-21

The following attempts to document prover9's syntax for formulas, in BNF format.  Note that symbols can be changed and added, including whether they are prefix, infix, and so on; this BNF only describes the default.
The formulas(sos), formulas(assumptions), and formulas(goals) lists each contain zero or more "top_level_formula"s.  This is based on the Aug 2007 version of prover9.  There may be errors.
A "formula" is essentially the 'type' of a built-in boolean, while "term" is essentially the 'type' of everything else.

top_level_formula ::= formula "."

formula ::= formula binary_operation formula | "-" formula |
            quantifier variable formula | "(" formula ")" |
            predicate | term equality_related_ops term |
            "$T" | "$F" |
            formula "#" attribute

equality_related_ops ::= "=" | "!="

binary_operation ::= "|" | "&" | "->" | "<-" | "<->"

quantifier ::= "all" | "exists"

predicate ::= predicate_name "(" term { "," term }* ")"

term ::= function_name { "(" term { "," term }* ")" } |
         term binary_term_operation term |
         prefix_term_operation term |
         term postfix_term_operation | list

binary_term_operation ::= "==" | "<" | "<=" | ">" | ">=" |
                          "+" | "*" | "@" | "/" | "\" | "^" | "v"
% Note: Lacks infix "-".

prefix_term_operation ::= "-"
% Note that prefix "-" can form either a formula or a term.

postfix_term_operation ::= "'"

list ::= "[]" |
         "[" term { "," term}* [":" term] "]"
% list notation is shorthand for $cons(...).

attribute ::= "label" "(" string ")" |
              "answer" "(" term ")" |
              "action" "(" term ")" |
              "bsub_hint_wt" "(" integer ")"
txtUserName: Olivier%20ROLAND
txt_GATracker_id: UA-4646079-1
txt_GATracker_track: 1%2C0%2C1%2C1%2C1%2C1%2C1
txt_GATracker_url_exclude: dropboxusercontent
@@Abandoned project@@ superseded by the
[[tptp-editor|]] project on GitHub
!Standalone editor
TPTP Editor is an application built on the Eclipse Rich Client Platform (RCP) and as such could be an Eclipse plugin and potentially also a standalone application.
Some examples of editors based on the same stack (Eclipse + Xtext) :
* <= also available as a standalone editor
* <= also available as a standalone editor
!Xtext grammar for TPTP
Andrei Tchaltsev has created an ANTLR grammar file for TPTP language version (see section ''//with ANTLR//'' in [[TPTP parser]])
This grammar is for ANTLR v2 but is a good starting point.
[[ANTLR v2 to v3 converter|]] was used to help the v2 => v3 conversion, but 12 residual errors needed to be [[manually|]] fixed :
{{{
java -cp ./antlr-runtime-3.0.1.jar:. v3me tptp.g
}}}
See [[here|]] some tips to convert Antlr grammar to Xtext 

The creation of the Xtext grammar from the ANTLR grammar did not prove to be a good idea. However, Xtext generates an ANTLR grammar (InternalX.g), and for debugging purposes the translation may prove useful. (see below)
!Debugging Xtext grammar
[[ANTLRWorks|]] or [[ANTLR IDE|]] can sometimes help to debug corner-case problems.

!Main site
[[TPTP technical report|]]
[[Valuable information in this old document|]]

*Four languages are defined in the TPTP SyntaxBNF : THF, TFF, FOF, and CNF
*[[TFF|]] (Typed First-order Form) : introduced in TPTP release v5.0.0, Thu Sep 16 10:38:26 EDT 2010
*MELIA accepts formulas in the TPTP FOF, TFF, and TFF-INT formats



!Naming convention
|>|>|>|>|>|DDD - Domain name abbreviation. The domain names and their abbreviations are listed in Section  The Domain Structure of the TPTP.|
||>|>|>|>|NNN - Abstract problem number. It is unique within the domain.|
|||>|>|>|F - Form. ^ for THF, _ for TFF without arithmetic, = for TFF with arithmetic, + for FOF, and - for CNF.|
||||>|>|V - Version number. It differentiates between different versions of the abstract problem, as discussed in Section Problem Versions.|
|||||>|SSS - Size parameter(s). These only occur for generated problems, and give the size parameter(s).|
||||||T - File name extension. p for problem files, g for generator files.|
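The convention can be decoded with a small Scala extractor (the regex below is my own reading of the table, not an official grammar):

```scala
// DDD NNN F V [.SSS]* . T  — e.g. "GRP001-1.p"
object TptpName {
  private val Pattern = """([A-Z]{3})(\d{3})([\^_=+-])(\d+)((?:\.\d+)*)\.([pg])""".r

  // Returns (domain, number, form, version) when the name follows the convention
  def unapply(s: String): Option[(String, String, Char, Int)] = s match {
    case Pattern(ddd, nnn, f, v, _, _) => Some((ddd, nnn, f.head, v.toInt))
    case _ => None
  }
}
```

For example, "GRP001-1.p" decodes to domain GRP, abstract problem 001, form '-' (CNF), version 1.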
!with parser combinator
The most advanced work on TPTP parsing in Scala is currently being done by Peter Baumgartner (NICTA) in the MELIA (GPLv3) prover. 
Source code is here :
The parser is in : TPTPTParser.scala

There are three problems with Peter's parser : 
* it is tightly coupled with the underlying technical structure (the parser depends on ALL the other classes of the prover). 
* MELIA accepts formulas in the TPTP FOF, TFF, and TFF-INT formats but 
** some TPTP input files need tptpFO2tme, which is not included in the MELIA bundle 
** some need prior conversion to the protein format to use .tme files as input files 
* it is awfully slow (despite notable improvements achieved with the packrat parser, see [[Packrat Efficiency]])
@@the first problem has been undervalued and is not really cleanly fixable with Parser Combinators@@   

!with ANTLR
Andrei Tchaltsev's highly reusable TPTP parser in Java using ANTLR
Several versions of this parser exist (a total mess due to the absence of release notes) 
*the [[official version|]] is not up to date. 
*in [[Alexandre Riazanov's converter|]], in the jar TptpParser.jar in the lib directory (the same jar is also available in the [[psoa-to-tptp|]] [[project|]]; see [[here|]])
*An earlier version of the binary seems to be available in the OMDoc (Open Mathematical Documents) project. See [[here|]]

The most up to date is the one in [[Geoff's Service Tools|]]

Very interesting, and it fixes in a way the traditional ANTLR problem (see below), but Xtext is probably a better alternative.

!with Xtext
Xtext allows true decoupling between the semantic model and the grammar rules.
Extract from  Debasish Ghosh's book [[DSLs in Action |]] :
//Parser generators like ANTLR are a big step forward toward developing external DSLs at a higher level of abstraction. But you still need to embed your target actions directly within the grammar rules. The EBNF rules and the logic for generating your semantic model are still not sufficiently decoupled. If you need to implement multiple semantic models for a single set of grammar rules, you might even end up duplicating the code for the parser itself; either that, or you’ll have to derive your semantic models from a common base abstraction that the grammar rules offer.//

|''Description:''|Allows Tiddlers to use [[TWiki|]] text formatting|
|''Author:''|Martin Budden (mjbudden (at) gmail (dot) com)|
|''Date:''|Nov 5, 2006|
|''Comments:''|Please make comments at|
|''License:''|[[Creative Commons Attribution-ShareAlike 2.5 License|]]|

|''Display unsupported TWiki variables''|<<option chkDisplayTWikiVariables>>|

This is the TWikiFormatterPlugin, which allows you to insert TWiki-formatted text into a TiddlyWiki.

The aim is not to fully emulate TWiki, but to allow you to work with TWiki content off-line and then resync the content with your TWiki later on, with the expectation that only minor edits will be required.

To use TWiki format in a Tiddler, tag the Tiddler with TWikiFormat or set the tiddler's {{{wikiformat}}} extended field to {{{twiki}}}.

Please report any defects you find at

There are (at least) the following known issues:
# Table code is incomplete.
## Table headings not yet supported.
# Anchors not yet supported.
# TWiki variables not supported


// Ensure that the TWikiFormatter Plugin is only installed once.
if(!version.extensions.TWikiFormatterPlugin) {
version.extensions.TWikiFormatterPlugin = {installed:true};

if(version.major < 2 || (version.major == 2 && version.minor < 1))
	{alertAndThrow('TWikiFormatterPlugin requires TiddlyWiki 2.1 or later.');}

if(config.options.chkDisplayTWikiVariables == undefined)
	{config.options.chkDisplayTWikiVariables = false;}

TWikiFormatter = {}; // 'namespace' for local functions

twDebug = function(out,str)

TWikiFormatter.Tiddler_changed = Tiddler.prototype.changed;
Tiddler.prototype.changed = function()
	if((this.fields.wikiformat==config.parsers.twikiFormatter.format) || this.isTagged(config.parsers.twikiFormatter.formatTag)) {
		this.links = [];
		var tiddlerLinkRegExp = /\[\[(.*?)(?:\]\[(?:.*?))?\]\]/mg;
		tiddlerLinkRegExp.lastIndex = 0;
		var match = tiddlerLinkRegExp.exec(this.text);
		while(match) {
			match = tiddlerLinkRegExp.exec(this.text);
	} else if(!this.isTagged('systemConfig')) {
	this.linksUpdated = true;

Tiddler.prototype.escapeLineBreaks = function()
	var r = this.text.escapeLineBreaks();
	if(this.isTagged(config.parsers.twikiFormatter.formatTag)) {
		r = r.replace(/\x20\x20\x20/mg,'\\b \\b');
		r = r.replace(/\x20\x20/mg,'\\b ');
	return r;

config.textPrimitives.twikiLink = '(?:' + 
		config.textPrimitives.upperLetter + '+' + config.textPrimitives.lowerLetter + '+' +
		config.textPrimitives.upperLetter + config.textPrimitives.anyLetter + '*)';

TWikiFormatter.setAttributesFromParams = function(e,p)
	var re = /\s*(.*?)=(?:(?:"(.*?)")|(?:'(.*?)')|((?:\w|%|#)*))/mg;
	var match = re.exec(p);
	while(match) {
		var s = match[1].unDash();
		if(s == 'bgcolor') {
			s = 'backgroundColor';
		try {
			if(match[2]) {
			} else if(match[3]) {
			} else {
		catch(ex) {}
		match = re.exec(p);

config.formatterHelpers.singleCharFormat = function(w)
	this.lookaheadRegExp.lastIndex = w.matchStart;
	var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
	if(lookaheadMatch && lookaheadMatch.index == w.matchStart && lookaheadMatch[0].substr(lookaheadMatch[0].length-2,1) != ' ') {
		w.nextMatch = this.lookaheadRegExp.lastIndex;
	} else {

config.formatterHelpers.doubleCharFormat = function(w)
	this.lookaheadRegExp.lastIndex = w.matchStart;
	var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
//twDebug(w.output,'lm0:'+lookaheadMatch[0]+' lm:'+lookaheadMatch[0].length);
	if(lookaheadMatch && lookaheadMatch.index == w.matchStart &&
			lookaheadMatch[0].substr(lookaheadMatch[0].length-3,1) != ' ') {
		var e = createTiddlyElement(w.output,this.element);
		w.nextMatch = this.lookaheadRegExp.lastIndex;
	} else {

config.twiki = {};
config.twiki.formatters = [
	name: 'twikiTable',
	match: '^\\|(?:[^\\n]*)\\|$',
	lookaheadRegExp: /^\|([^\n]*)\|$/mg,
	rowTermRegExp: /(\|$\n?)/mg,
	cellRegExp: /(?:\|([^\n\|]*)\|)|(\|$\n?)/mg,
	cellTermRegExp: /((?:\x20*)\|)/mg,
	handler: function(w)
		var table = createTiddlyElement(w.output,'table');
		var rowContainer = table;//createTiddlyElement(table,'tbody');
		var prevColumns = [];
		var rowCount = 0;
		w.nextMatch = w.matchStart;
		this.lookaheadRegExp.lastIndex = w.nextMatch;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		while(lookaheadMatch && lookaheadMatch.index == w.nextMatch) {
			var rowClass = (rowCount&1) ? 'TD.odd' : 'TD.even';
			if(rowCount==1) rowClass = 'TD.heading';
			if(rowCount==3) rowClass = 'TD.third';
			this.lookaheadRegExp.lastIndex = w.nextMatch;
			lookaheadMatch = this.lookaheadRegExp.exec(w.source);
	rowHandler: function(w,e,prevColumns)
		var col = 0;
		var colSpanCount = 1;
		var prevCell = null;
		this.cellRegExp.lastIndex = w.nextMatch;
		var cellMatch = this.cellRegExp.exec(w.source);
		while(cellMatch && cellMatch.index == w.nextMatch) {
			if(cellMatch[1] == '^') {
				// Rowspan
				var last = prevColumns[col];
				if(last) {
					last.element.setAttribute('rowSpan',last.rowSpanCount); // Needed for IE
					last.element.valign = 'center';
				w.nextMatch = this.cellRegExp.lastIndex-1;
			} else if(cellMatch[1] === '') {
				// Colspan
				w.nextMatch = this.cellRegExp.lastIndex-1;
			} else if(cellMatch[2]) {
				// End of row
				if(prevCell && colSpanCount > 1) {
					prevCell.setAttribute('colSpan',colSpanCount); // Needed for IE
				w.nextMatch = this.cellRegExp.lastIndex;
			} else {
				// Cell
				var spaceLeft = false;
				var chr = w.source.substr(w.nextMatch,1);
				while(chr == ' ') {
					spaceLeft = true;
					chr = w.source.substr(w.nextMatch,1);
				var cell = createTiddlyElement(e,'td');
				prevCell = cell;
				prevColumns[col] = {rowSpanCount:1, element:cell};
				if(colSpanCount > 1) {
					cell.setAttribute('colSpan',colSpanCount); // Needed for IE
					colSpanCount = 1;
				if(w.matchText.substr(w.matchText.length-2,1) == ' ') {
					// spaceRight
					cell.align = spaceLeft ? 'center' : 'left';
				} else if(spaceLeft) {
					cell.align = 'right';
			this.cellRegExp.lastIndex = w.nextMatch;
			cellMatch = this.cellRegExp.exec(w.source);

	name: 'twikiRule',
	match: '^---+$\\n?',
	handler: function(w)

//<h1><a name='TWiki_Text_Formatting'></a> TWiki Text Formatting </h1>
	name: 'twikiHeading',
	match: '^---[\\+#]{0,5}',
	lookaheadRegExp: /^---[\+#]{0,5}(?:!!)? ?(.*?)\n/mg,
	termRegExp: /(\n)/mg,
	handler: function(w)
		var h = createTiddlyElement(w.output,'h' + (w.matchLength-2));
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var a = createTiddlyElement(w.output,'a');
			var prefix = w.tiddler ? w.tiddler.title : '';
			var name = '#'+ prefix + lookaheadMatch[1];
			name = name.replace(/ /g,'_'); = name;
			w.nextMatch = this.lookaheadRegExp.lastIndex - lookaheadMatch[1].length - 1;

	name: 'twikiAnchor',
	match: '^#' + config.textPrimitives.wikiLink + '\\s',
	lookaheadRegExp: /^#(.*?)\s/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var a = createTiddlyElement(w.output,'a');
			var prefix = w.tiddler ? w.tiddler.title : '';
			var name = '#'+ prefix + lookaheadMatch[1];
			name = name.replace(/ /g,'_');
		 = name;
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiDefinitionList',
	match: '^   \\$ .+?:.+?\\n',
	lookaheadRegExp: /^   \$ (.+?):(.+?)\n/mg,
	termRegExp: /(\n)/mg,
	handler: function(w)
		var li = createTiddlyElement(w.output,'dl');
		w.nextMatch = w.matchStart;
		this.lookaheadRegExp.lastIndex = w.nextMatch;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		while(lookaheadMatch && lookaheadMatch.index == w.nextMatch) {
			w.nextMatch += 5;
			lookaheadMatch = this.lookaheadRegExp.exec(w.source);

	name: 'twikiList',
	match: '^(?:   )+(?:(?:\\*)|(?:[1AaIi](?:\\.)?)) ',
	lookaheadRegExp: /^(?:   )+(?:(\*)|(?:([1AaIi])(\.)?)) /mg,
	//termRegExp: /(\n\n|\n(?=(?:   )+[\\*1AaIi]))/mg,
	termRegExp: /(\n)/mg,
	handler: function(w)
		var stack = [w.output];
		var currLevel = 0;
		var currType = null;
		var listLevel, listType;
		var itemType = 'li';
		w.nextMatch = w.matchStart;
		this.lookaheadRegExp.lastIndex = w.nextMatch;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		while(lookaheadMatch && lookaheadMatch.index == w.nextMatch) {
			listType = 'ol';
			listLevel = (lookaheadMatch[0].length-(lookaheadMatch[3]?3:2))/3;
			var style = null;
			if(lookaheadMatch[1]=='*') {
				listType = 'ul';
			} else if(lookaheadMatch[2]=='1') {
				style = 'decimal';
			} else if(lookaheadMatch[2]=='A') {
				style = 'upper-alpha';
			} else if(lookaheadMatch[2]=='a') {
				style = 'lower-alpha';
			} else if(lookaheadMatch[2]=='I') {
				style = 'upper-roman';
			} else if(lookaheadMatch[2]=='i') {
				style = 'lower-roman';
			w.nextMatch += lookaheadMatch[0].length;
			if(listLevel > currLevel) {
				for(var i=currLevel; i<listLevel; i++) {
			} else if(listLevel < currLevel) {
				for(i=currLevel; i>listLevel; i--) {
			} else if(listLevel == currLevel && listType != currType) {
			currLevel = listLevel;
			currType = listType;
			var e = createTiddlyElement(stack[stack.length-1],itemType);
[config.browser.isIE ? 'list-style-type' : 'listStyleType'] = style;
			this.lookaheadRegExp.lastIndex = w.nextMatch;
			lookaheadMatch = this.lookaheadRegExp.exec(w.source);

	name: 'twikiNoAutoLink',
	match: '^\\s*<noautolink>',
	lookaheadRegExp: /\s*<noautolink>((?:.|\n)*?)<\/noautolink>/mg,
	termRegExp: /(<\/noautolink>)/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var autoLinkWikiWords = w.autoLinkWikiWords;
			w.autoLinkWikiWords = false;
			w.autoLinkWikiWords = autoLinkWikiWords;
			w.nextMatch = this.lookaheadRegExp.lastIndex;
		} else {

	name: 'macro',
	match: '<<',
	lookaheadRegExp: /<<([^>\s]+)(?:\s*)((?:[^>]|(?:>(?!>)))*)>>/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart && lookaheadMatch[1]) {
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiNotExplicitLink',
	match: '!\\[\\[',
	handler: function(w)

//<a href='/cgi-bin/view/Sandbox/WebHome#Sandbox_Web_Site_Tools'> Sandbox Web Site Tools </a>
//<a href='/cgi-bin/view/Sandbox/MeetingMinutes' class='twikiLink'>MeetingMinutes</a>
	name: 'twikiAnchorLink',
	match: '\\[\\[(?:'+ config.textPrimitives.twikiLink +')?#',
	lookaheadRegExp: /\[\[(.*?)?#(.*?)(?:\]\[(.*?))?\]\]/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var a = createTiddlyElement(w.output,'a');
			var prefix = w.tiddler ? w.tiddler.title : '';
			var href = lookaheadMatch[1] ? lookaheadMatch[1] : '';
			href += '#' + prefix + lookaheadMatch[2];
			href = href.replace(/ /g,'_');
			a.href = href;
			a.innerHTML = lookaheadMatch[3] ? lookaheadMatch[3] : lookaheadMatch[2];
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiExplicitLink',
	match: '\\[\\[',
	lookaheadRegExp: /\[\[(.*?)(?:\]\[(.*?))?\]\]/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var e = null;
			var link = lookaheadMatch[1];
			if (lookaheadMatch[2]) {
				// titled bracketed link
				var text = lookaheadMatch[2];
				e = config.formatterHelpers.isExternalLink(link) ? createExternalLink(w.output,link) : createTiddlyLink(w.output,link,false,null,w.isStatic,w.tiddler);
			} else {
				// simple bracketed link
				text = link;
				var s = text.indexOf(' ');
				if(s!=-1) {
					link = text.substring(0,s).trim();
					if(config.formatterHelpers.isExternalLink(link)) {
						e = createExternalLink(w.output,link);
						text = text.substring(s+1).trim();
					} else {
						e = createTiddlyLink(w.output,text,false,null,w.isStatic,w.tiddler);
				} else {
					e = createTiddlyLink(w.output,link,false,null,w.isStatic,w.tiddler);
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiNotWikiLink',
	match: '(?:!|<nop>)' + config.textPrimitives.wikiLink,
	handler: function(w)

	name: 'twikiWikiLink',
	match: config.textPrimitives.twikiLink,
	handler: function(w)
		if(w.matchStart > 0) {
			var preRegExp = new RegExp(config.textPrimitives.anyLetter,'mg');
			preRegExp.lastIndex = w.matchStart-1;
			var preMatch = preRegExp.exec(w.source);
			if(preMatch.index == w.matchStart-1) {
		if(w.autoLinkWikiWords == true || store.isShadowTiddler(w.matchText)) {
			var link = createTiddlyLink(w.output,w.matchText,false,null,w.isStatic,w.tiddler);
		} else {

	name: 'twikiUrlLink',
	match: config.textPrimitives.urlPattern,
	handler: function(w)

	name: 'twikiBoldByChar',
	match: '\\*(?!\\s)',
	lookaheadRegExp: /\*(?!\s)(?:.*?)(?!\s)\*(?=\W)/mg,
	termRegExp: /((?!\s)\*(?=\W))/mg,
	element: 'strong',
	handler: config.formatterHelpers.singleCharFormat

	name: 'twikiBoldTag',
	match: '<b>',
	termRegExp: /(<\/b>)/mg,
	element: 'b',
	handler: config.formatterHelpers.createElementAndWikify

	name: 'twikiBoldItalicByChar',
	match: '__(?!\\s)',
	lookaheadRegExp: /__(?!\s)(?:.*?)(?!\s)__(?=\W)/mg,
	termRegExp: /((?!\s)__(?=\W))/mg,
	element: 'strong',
	element2: 'em',
	handler: config.formatterHelpers.doubleCharFormat

	name: 'twikiItalicByChar',
	match: '_(?![\\s|_])',
	lookaheadRegExp: /_(?!\s)(?:.*?)(?!\s)_(?=\W)/mg,
	termRegExp: /((?!\s)_(?=\W))/mg,
	element: 'em',
	handler: config.formatterHelpers.singleCharFormat

	name: 'twikiBoldMonoSpacedByChar',
	match: '==(?!\\s)',
	lookaheadRegExp: /==(?!\s)(?:.*?)(?!\s)==(?=\W)/mg,
	termRegExp: /((?!\s)==(?=\W))/mg,
	element: 'strong',
	element2: 'code',
	handler: config.formatterHelpers.doubleCharFormat

	name: 'twikiMonoSpacedByChar',
	match: '=(?![\\s=])',
	lookaheadRegExp: /=(?!\s)(?:.*?)(?!\s)=(?!\w|\'|\")/mg,
	termRegExp: /((?!\s)=(?!\w|\'|\"))/mg,
	element: 'code',
	handler: config.formatterHelpers.singleCharFormat

	name: 'twikiPreByChar',
	match: '<pre>',
	lookaheadRegExp: /<pre>((?:.|\n)*?)<\/pre>/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiVerbatimByChar',
	match: '<verbatim>',
	lookaheadRegExp: /\<verbatim>((?:.|\n)*?)<\/verbatim>/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiParagraph',
	match: '\\n{2,}',
	handler: function(w)

	name: 'twikiNop',
	match: '<nop>',
	handler: function(w)

	name: 'twikiExplicitLineBreak',
	match: '%BR%|<br ?/?>|<BR ?/?>',
	handler: function(w)

	name: 'twikiColorByChar',
	termRegExp: /(%ENDCOLOR%)/mg,
	handler:  function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var e = createTiddlyElement(w.output,'span');
		 = lookaheadMatch[1];

	name: 'twikiVariable',
	match: '(?:!)?%(?:<nop>)?[A-Z]+(?:\\{.*?\\})?%',
	lookaheadRegExp: /(!)?%(<nop>)?([A-Z]+)(?:\{(.*?)\})?%/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			if(lookaheadMatch[1]) {
				// ! - escape variable
			} else if(lookaheadMatch[2]) {
				var text = w.matchText.replace(/<nop>/g,'');
			} else {
				// deal with variables by name here
				if(lookaheadMatch[3]=='BB') {
					createTiddlyElement(w.output,'span').innerHTML = '&bull;';
				} else if(config.options.chkDisplayTWikiVariables) {
					// just output the text of any variables that are not understood
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiHtmlEntitiesEncoding',
	match: '&#?[a-zA-Z0-9]{2,8};',
	handler: function(w)
		createTiddlyElement(w.output,'span').innerHTML = w.matchText;

	name: 'twikiComment',
	match: '<!\\-\\-',
	lookaheadRegExp: /<!\-\-((?:.|\n)*?)\-\->/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			w.nextMatch = this.lookaheadRegExp.lastIndex;

	name: 'twikiHtmlTag',
	match: "<(?:[a-zA-Z]{2,}|a)(?:\\s*(?:[a-zA-Z]*?=[\"']?[^>]*?[\"']?))*?>",
	lookaheadRegExp: /<([a-zA-Z]+)((?:\s+[a-zA-Z]*?=["']?[^>\/\"\']*?["']?)*?)?\s*(\/)?>/mg,
	handler: function(w)
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var e =createTiddlyElement(w.output,lookaheadMatch[1]);
			if(lookaheadMatch[2]) {
			if(lookaheadMatch[3]) {
				w.nextMatch = this.lookaheadRegExp.lastIndex;// empty tag
			} else {

config.parsers.twikiFormatter = new Formatter(config.twiki.formatters);
config.parsers.twikiFormatter.format = 'twiki';
config.parsers.twikiFormatter.formatTag = 'TWikiFormat';
} // end of 'install only once'
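Nearly every rule above follows the same lookahead-regexp dispatch idiom: anchor `lastIndex` at the position where the dispatcher matched, `exec` the lookahead pattern, and accept the match only if it begins exactly at `matchStart`, then advance `nextMatch` past it. The sketch below is standalone plain JavaScript for illustration; the `w` object and its `output` array are simplified stand-ins, not TiddlyWiki's real wikifier API.

```javascript
// Minimal sketch of the lookahead-dispatch pattern used by the formatter
// rules above (hypothetical rule object, no TiddlyWiki APIs).
var rule = {
	match: '<pre>',
	lookaheadRegExp: /<pre>((?:.|\n)*?)<\/pre>/mg,
	handler: function(w) {
		// Anchor the lookahead at the position where the dispatcher matched
		this.lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
		// Accept only if the full pattern starts exactly at matchStart
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			w.output.push(lookaheadMatch[1]);             // captured body
			w.nextMatch = this.lookaheadRegExp.lastIndex; // skip past </pre>
		}
	}
};

// '<pre>' starts at index 7 of this source string
var w = { source: 'before <pre>verbatim text</pre> after', matchStart: 7, output: [], nextMatch: 0 };
rule.handler.call(rule, w);
console.log(w.output[0]);  // 'verbatim text'
console.log(w.nextMatch);  // 31, just past '</pre>'
```

The `index == matchStart` guard is what lets a rule decline a match (e.g. an unterminated `<pre>`) and fall through to other rules.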
*<<tag atp>>- Automated Theorem Proving, what this site is all about
**<<tag intro>>- Getting started
**<<tag scala-atp>>- Scala port of [[Handbook of Practical Logic and Automated Reasoning|]]
**<<tag who>>- People and organizations working on ATP
**<<tag HoTT>>- Related to type theory
***<<tag type-system>>- Related to type system
***<<tag category-theory>>- Related to category theory
**<<tag book>>- Some books
***<<tag summary>>- Reading notes
**<<tag Algorithms>>- Algorithms related to Automated Theorem Proving
***<<tag explained>>- Algorithms explained
**<<tag Techniques>>
**<<tag SMT>>- Satisfiability Modulo Theories
**<<tag proof-assistants>>
**<<tag provers>>
***<<tag gapt>>- General Architecture for Proof Theory
***<<tag MELIA>>- Model Evolution Calculus with Equality and built-in linear integer arithmetic 
***<<tag princess>>- Theorem prover for Presburger arithmetic with uninterpreted predicates
***<<tag prover9>>- Automated theorem prover for first-order and equational logic
***<<tag Vampire>>- Fast first order logic theorem prover
***<<tag Maude>>- Theorem prover for order-sorted first-order logic without equality
**<<tag Uses>>- What has Automated Theorem Proving been really useful for? (for example, industrial uses of ATP)
**<<tag parsing>>- Related to parsing
**<<tag languages>>- Related to formal languages and focused on proof languages
***<<tag TPTP>>- TPTP language
**<<tag code-analysis>>- Source code analysis
**<<tag howto>>- various how-to
*<<tag tools>>- tools or frameworks drastically selected
**<<tag scala>>- related to Scala
**<<tag idris>>- related to Idris
**<<tag GUI>>- Graphical User Interface
**<<tag scala-ide>>- all about Scala IDE for Eclipse
**<<tag duel>>- duels
*<<tag pb>>- annoying things that need to be fixed
*<<tag done>>- what we've done so far
*<<tag wip>>- work in progress
*<<tag abandoned>>- abandoned projects
*<<tag obsolete>>- out-of-date, could be deleted anytime
*<<tag meta>>- describes the operation of this site
**<<tag editing>>- tips on editing and authoring notes, including all sorts of tools
**<<tag system>>- control how the site operates and is laid out
***<<tag systemConfig>>- this is a special tag, marking notes with code to be loaded at startup
***<<tag systemNotes>>- system control notes loaded at startup, used to control content
***<<tag plugins>>- code snippets enhancing functionality, and containing descriptions of what they do
***<<tag systemServer>>- tiddlers automatically created to record the details of their source server

Tags group collections of notes into sets; think of them as adjectives, directories, or folders. By selecting a tag, visible in the upper right of each note, you can jump to any note labeled with that tag, which makes navigation easy and supports a flexible hierarchy of tagged content. Each note should be labeled with its most appropriate, "lowest" leaf tags, more than one where appropriate. Click on a tag to see a popup menu of the notes carrying it. Alternatively, these folders and their contents can be selectively displayed in the "Contents" tab on the left.
@@This site is about [[Homotopy Type Theory]] & ATP. What did you expect?@@
---+ [[ Automated theorem proving]]

   * [[ automated reasoning]]

   * [[ mathematical proof / proving]]

   * [[ mathematical theorem]]

   * [[ computer program]]

   * [[  Decidability of the problem ]]
      * [[ propositional logic]]
      * [[ NP-complete]]
      * [[ first-order logic / first order predicate calculus]]
      * [[ Gödel's completeness theorem]]
      * [[ well-formed formula]]
      * [[ recursively enumerable]]
      * [[ first-order theory of the natural numbers]]
      * [[ Gödel's incompleteness theorem]]

   * [[  Related problems ]]
      * [[ proof verification]]
      * [[ primitive recursive function]]
      * [[ Interactive theorem proving / Interactive theorem provers]]
      * [[ model checking]]
      * [[ four color theorem]]
      * [[ non-surveyable proofs]]
      * [[ Connect Four]]

   * [[  Industrial uses ]]
      * [[ Pentium FDIV bug]]
      * [[ floating point unit]]
      * [[ AMD]]
      * [[ Intel]]

   * [[ First-order theorem proving]]
      * [[ First-order logic / First-order]]
      * [[ CADE ATP System Competition]]
      * [[ E equational theorem prover / E]]
      * [[ superposition calculus / purely equational calculus]]
      * [[ Technical University of Munich]]
      * [[ Otter theorem prover / Otter]]
      * [[ Argonne National Laboratory]]
      * [[ first-order resolution]]
      * [[ paramodulation]]
      * [[ Prover9 theorem prover / Prover9]]
      * [[ Mace4]]
      * [[ SETHEO]]
      * [[ model elimination]]
      * [[ Vampire theorem prover / Vampire]]
      * [[  Manchester University]]
      * [[  Andrei Voronkov]]
      * [[  Alexandre Riazanov]]

   * [[ Deontic theorem proving]]
      * [[ Deontic logic]]

   * [[  Popular techniques ]]
      * [[ First-order resolution]]
      * [[ unification]]
      * [[ Lean theorem prover / Lean theorem proving]]
      * [[ Model elimination]]
      * [[ Method of analytic tableaux]]
      * [[ Superposition calculus / Superposition]]
      * [[ rewriting]]
      * [[ Model checking]]
      * [[ Mathematical induction]]
      * [[ Binary decision diagram]]
      * [[ DPLL algorithm / DPLL]]
      * [[ Unification#Higher-order unification / Higher-order unification]]

   * [[  Available implementations ]]
      * [[ :Category:Theorem proving software systems]]
      * [[  Free software ]]
         * [[ ACL2]]
         * [[ Alt-Ergo]]
         * [[ Automath]]
         * [[ CVC (theorem prover) / CVC]]
         * [[ E equational theorem prover / E]]
         * [[ EQP]]
         * [[ Gandalf (theorem prover) / Gandalf]]
         * [[ Gödel-machines]]
         * [[ HOL theorem prover family / HOL]]
         * [[ HOL Light]]
         * [[ Isabelle (theorem prover) / Isabelle]]
         * [[ IsaPlanner]]
         * [[ Jape]]
         * [[ KED]]
         * [[ KeY]]
         * [[ KeYmaera]]
         * [[ LCF (theorem prover) / LCF]]
         * [[  Leo II]]
         * [[ LoTREC]]
         * [[ MetaPRL]]
         * [[ Matita]]
         * [[ NuPRL]]
         * [[ Otter (theorem prover) / Otter]]
         * [[ Paradox (theorem prover) / Paradox]]
         * [[ PhoX]]
         * [[ Prover9]]
         * [[ Prototype Verification System / PVS]]
         * [[  SAD]]
         * [[ SNARK theorem prover / SNARK]]
         * [[ SPASS]]
         * [[ Tau (theorem prover) / Tau]]
         * [[ Theorema]]
         * [[ Theorem Checker]]
      * [[  Proprietary software including Share-alike Non-commercial]]
         * [[ Acumen RuleManager]]
         * [[ Alligator (software) / Alligator]]
         * [[ CARINE]]
         * [[ KIV]]
         * [[ Mizar system / Mizar]]
         * [[ Prover Plug-In]]
         * [[ ProverBox]]
         * [[ Cyc#ResearchCyc / ResearchCyc]]
         * [[ Simplify (software) / Simplify]]
         * [[ SPARK (programming language)]]
         * [[ Spear modular arithmetic theorem prover]]
         * [[ Theorem Proving System]]
         * [[ Twelf]]
         * [[ Vampire theorem prover / Vampire/Vampyre]]
         * [[ Waldmeister]]

   * [[  Notable people ]]
      * [[ Leo Bachmair]]
      * [[ superposition calculus]]
      * [[ Woody Bledsoe]]
      * [[ artificial intelligence]]
      * [[ Robert S. Boyer]]
      * [[ Nqthm / Boyer-Moore theorem prover]]
      * [[ Herbrand Award]]
      * [[ Alan Bundy]]
      * [[ University of Edinburgh]]
      * [[ IJCAI Award for Research Excellence]]
      * [[ Donald E. Walker Distinguished Service Award]]
      * [[  William McCune]]
      * [[  Hubert Comon]]
      * [[ CNRS]]
      * [[ ENS Cachan]]
      * [[ Robert Constable]]
      * [[  Martin Davis]]
      * [[ DPLL algorithm]]
      * [[  Branden Fitelson]]
      * [[ Harald Ganzinger]]
      * [[  Michael Genesereth]]
      * [[ Stanford University]]
      * [[ Keith Goolsbey]]
      * [[ Cyc]]
      * [[ Michael J. C. Gordon]]
      * [[  Gerard Huet]]
      * [[ Robert Kowalski]]
      * [[ SLD resolution]]
      * [[ logic programming / logic programs]]
      * [[  Donald W. Loveland]]
      * [[ Metamath]]
      * [[ J Strother Moore]]
      * [[ Nqthm / Boyer-Moore theorem prover]]
      * [[ Robert Nieuwenhuis]]
      * [[ Tobias Nipkow]]
      * [[ Isabelle theorem prover / Isabelle]]
      * [[ Ross Overbeek]]
      * [[  The Fellowship for Interpretation of Genomes]]
      * [[ Lawrence C. Paulson]]
      * [[ University of Cambridge]]
      * [[ David A. Plaisted]]
      * [[ University of North Carolina at Chapel Hill]]
      * [[ Knuth-Bendix completion algorithm / completion]]
      * [[  John Rushby]]
      * [[ SRI International]]
      * [[ John Alan Robinson (Computer Scientist) / J. Alan Robinson]]
      * [[ Jürgen Schmidhuber]]
      * [[  Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements]]
      * [[ Stephan Schulz]]
      * [[ Natarajan Shankar]]
      * [[ Prototype Verification System / PVS]]
      * [[ Mark Stickel]]
      * [[ SRI]]
      * [[ Geoff Sutcliffe]]
      * [[  Dolph Ulrich]]
      * [[  Robert Veroff]]
      * [[  Andrei Voronkov]]
      * [[  Larry Wos]]
      * [[ Wen-Tsun Wu]]
      * [[ Wu's method]]

   * [[  See also ]]
      * [[ Symbolic computation]]
      * [[ Computer-aided proof]]
      * [[ Automated reasoning]]
      * [[ Formal verification]]
      * [[ Logic programming]]
      * [[ Proof checking]]
      * [[ Proof complexity]]
      * [[ Computer algebra system]]
      * [[ Program analysis (computer science)]]

   * [[  Notes ]]

   * [[  References ]]
*Google doesn't understand TW's syntax (including the hyperlink format), nor does it know how to link directly to the correct tiddler within a TiddlyWiki.
In no particular order
*[[Apache Maven|]]

This tiddler was automatically created to record the details of this server
In no particular order.
*Australian National University (Australia)
** College of Engineering & Computer Science
***[[Peter Baumgartner|]] 
*Intel Corporation (USA)
**Oregon Center of semiconductor research 
***[[John Harrison|]]
****ocaml-atp,hol-light,Handbook of Practical Logic and Automated Reasoning
*Vienna University of Technology : [[Theory and Logic Group|]] (Austria)
**Theory and Logic Group
***[[Alexander Leitsch|]]
***[[Bruno Woltzenlogel Paleo|]]
***[[Mikheil Rukhaia|]]
***[[Tomer Libal (shaolin)|]]
***[[Daniel Weller|]]
***[[Martin Riener|]]
***[[Giselle Machado Nogueira Reis|]]
***[[Stefan Hetzl|]]
****gapt, ASCOP
* University of Miami (USA)
**Department of Computer Science
***[[Geoff Sutcliffe|]]
*University of New Brunswick (Canada)
**Dr. Chris Baker’s laboratory
***[[Alexandre Riazanov|]]
*École Polytechnique (France)
**Laboratoire d'Informatique (LIX)
***[[Bruno Barras|]]
***[[Benjamin Werner|]]
*Radboud University of Nijmegen (Netherlands)
**Foundations group
***[[Carst Tankink|]]
*Technische Universität München (Germany)
**Institut für Informatik
***[[Markus (Makarius) Wenzel|]]
***[[Tobias Nipkow|]]
*Carnegie Mellon University (Pittsburgh)
**Department of Computer Science
***[[Sean McLaughlin|]]
*University of Cambridge (United Kingdom)
**Computer Laboratory
***[[Lawrence C. Paulson|]]
*Massachusetts Institute of Technology (USA)
**Computer Science and Artificial Intelligence Laboratory
***[[Adam Chlipala|]]
*University of Manchester (United Kingdom)
**Department of Computer Science
***[[Andrei Voronkov|]]
*Center for Scientific and Technological Research (ITC-IRST) (Trento, Italy)
**Automated Reasoning System Division (SRA)
***[[Andrei Tchaltsev (Andrey Chaltsev)|]]
****[[TPTP parser in Java with ANTLR|]]
*University of New Brunswick
**Dr. Chris Baker’s group
***[[Alexandre Riazanov (Alexander Ryazanov)|]]
****Vampire, [[TPTP parser in Java with ANTLR|]]
*University of New Mexico
**Computer Science Department
***[[William McCune|]]
****Otter,Mace4,Prover9,Ivy proof checker
*École Polytechnique Fédérale de Lausanne (EPFL) (Lausanne, Switzerland)
**Laboratory for Automated Reasoning and Analysis
***[[Philippe Suter|]]
See also [[Notable people|]]
* See Chapter 9 in [[Java Concurrency in Practice|]] book
!Answers to criticisms
!Recommended practice for actor creation in Scala
!gapt should return true (PUZ031-1 : TPTP v3.1.0. Released v1.0.0)
println(getRefutation("Animal(x) | -Wolf(x). Animal(x) | -Fox(x). Animal(x) | -Bird(x). Animal(x) | -Caterpillar(x). Animal(x) | -Snail(x). Wolf(a_wolf). Fox(a_fox). Bird(a_bird). Caterpillar(a_caterpillar). Snail(a_snail). Grain(a_grain). Plant(x) | -Grain(x). Eats(animal,plant) | Eats(animal,small_animal) | -Animal(animal) | -Plant(plant) | -Animal(small_animal) | -Plant(other_plant) | -Much_smaller(small_animal,animal) | -Eats(small_animal,other_plant). Much_smaller(catapillar,bird) | -Caterpillar(catapillar) | -Bird(bird). Much_smaller(snail,bird) | -Snail(snail) | -Bird(bird). Much_smaller(bird,fox) | -Bird(bird) | -Fox(fox). Much_smaller(fox,wolf) | -Fox(fox) | -Wolf(wolf). -Wolf(wolf) | -Fox(fox) | -Eats(wolf,fox). -Wolf(wolf) | -Grain(grain) | -Eats(wolf,grain). Eats(bird,catapillar) | -Bird(bird) | -Caterpillar(catapillar). -Bird(bird) | -Snail(snail) | -Eats(bird,snail). Plant(caterpillar_food_of(catapillar)) | -Caterpillar(catapillar). Eats(catapillar,caterpillar_food_of(catapillar)) | -Caterpillar(catapillar). Plant(snail_food_of(snail)) | -Snail(snail). Eats(snail,snail_food_of(snail)) | -Snail(snail). -Animal(animal) | -Animal(grain_eater) | -Grain(grain) | -Eats(animal,grain_eater) | -Eats(grain_eater,grain)."))
@@Input is wrong: see [[GAPT and TPTP notation]] (in the adaptation of the problem to gapt's "simple" format, variables were translated into constants)@@
Prover9 format :
TPTP format :
[[MELIA|]] is a theorem prover for the Model Evolution Calculus with Equality and built-in linear integer arithmetic.

*could be run conveniently from eclipse
*accepts fof, tff and tff-int TPTP input files like DAT001=1.p, GRP194+1
*cnf TPTP input files need tptpFO2tme, which is not included in the MELIA bundle
*others need prior conversion to PROTEIN format to use .tme files as input files
./tptp2X  -f protein /tmp/PUZ001-1.p
[[The Maude Resolution Theorem Prover|]] (RTP) is a refutationally complete resolution theorem prover for order-sorted first order logic without equality, and with support for associative and commutative function symbols.

*part of the [[Maude system|]]
*GPL2 License
*binaries available for Mac OS X and Linux
*provides benchmarks for Harrison's problems
*provides translation scripts for Harrison's and TPTP's benchmarks into RTP's format

MacBook-Pro-de-hack:maude hack$ ./maude.intelDarwin 
		   --- Welcome to Maude ---
	    Maude 2.6 built: Dec 10 2010 11:12:39
	    Copyright 1997-2010 SRI International
		   Fri Nov 11 23:32:37 2011
Maude> load ../rtp/rtp
> pr PRED-BASE .
> sort S .
>   op p :  -> Atom [metadata "1"] .
>   op r :  -> Atom [metadata "2"] .
>   op q :  -> Atom [metadata "3"] .
> endfm
> fmod TEST is
>   inc RTP .
> endfm
> red !((
>   (q -> p,r),
>   ((empty).AtomMagma -> q),
>   (p -> (empty).AtomMagma),
>   (r -> (empty).AtomMagma)
> )) .
reduce in TEST : !(((q -> p, r), (empty -> q), (p -> empty), (r -> empty))) .
rewrites: 1520 in 39ms cpu (42ms real) (38226 rewrites/second)
result State: [empty -> empty,5]
> q .
MacBook-Pro-de-hack:maude hack$ 
[[Princess|]] is a theorem prover for Presburger arithmetic with uninterpreted predicates.

make the parser
sudo port install hs-platform-cabal
cabal install bnfc
export PATH=$PATH:/$HOME/.cabal/bin
mkdir JLex
cd JLex
cd ..
jar cvf JLex/JLex.jar *.class
make parser-jar

make & run princess GUI
[[Prover9|]] is an automated theorem prover for first-order and equational logic. Prover9 is the successor of the [[Otter prover|]].

*A brief introduction :
*Prover9 and miscellaneous LADR applications are available in Debian distribution
aptitude install prover9 prover9-doc prover9-mace4 libladr-dev
*convert and solve TPTP problems
tptp_to_ladr < PUZ031-1.tptp | prover9

Note: [[Prover9 & co|]] also compiles --with just a glitch*-- on Mac OS X 10.6.8 with gcc 4.2.1

--@@*ladr_to_tptp segfaults@@-- <= the latest version here fixes this.
Eclipse Juno is known for performance regressions, so Indigo is recommended until the Scala IDE SDK for Kepler is available
For the adventurous, manual installation for Kepler is possible; see [[howto Scala IDE for Eclipse 4.3 (Kepler)]]
Simply install the "all in one" [[SDK of the Scala IDE for Eclipse|]]
!Post installation
*Eclipse/Preference/General/Workspace/ => set text file encoding to UTF-8
*ScalaTest plugin for Scala IDE
*m2e-egit from m2e market place (File/Import.../maven/Check out Maven Projects from SCM)
*@@Eclipse/Preference/Maven/ => set hide folders of physically nested modules@@
*@@Eclipse/Preference/Team/Git/Projects => set auto share projects located in a git repository@@
*Eclipse/Preference/Team/Git/Ignored Resources/ => add .cache,.classpath,.project,*.prefs,*/target/*
!Optimal memory configuration for Scala IDE
Modify this in eclipse.ini if you have more than 4 GB of memory
!Checkout a single-module Maven project to check everything is OK
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter in SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured, git-managed Scala Maven single-module project!@@
!Checkout a multi-module Maven project to check everything is OK
--@@Gapt currently doesn't work with Eclipse due to a change in m2eclipse-scala. This is fixed in the m2e branch@@--
--@@Gapt is not yet compatible with Scala 2.10. A fix is ongoing in the m2e-210 branch@@--
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter in SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured, git-managed Scala Maven multi-module project!@@
@@obsolete@@ See [[howto Scala IDE for Eclipse]]

Eclipse 3.7 is old and Eclipse 4.2 is now stable and usable with Scala IDE. It is recommended to use [[howto Scala IDE for Eclipse 4.2 (Juno)]]
*Eclipse IDE for Java Developers
*m2e from eclipse market place
*ScalaIDE -
*m2eclipse-scala -
*m2e-subversive from m2e market place
*m2e-egit from m2e market place
@@Scala IDE Version 2.1, Nightly is used because of the new Implicit Hyperlinking and Scala Debugger features.@@
!Post installation
*Eclipse/Preference/General/Workspace/ => set text file encoding to UTF-8
*@@Eclipse/Preference/Maven/ => set hide folders of physically nested modules@@
*@@Eclipse/Preference/Team/Git/Projects => set auto share projects located in a git repository@@
*Eclipse/Preference/Team/Git/Ignored Resources/ => add .cache,.classpath,.project,*.prefs,*/target/*
!Optimal memory configuration for Scala IDE
@@The days when the Scala IDE needed huge heap space to compile the gapt project are over@@

Default Eclipse settings are now sufficient, but these could be used if necessary:
Also most of the JVM optimizations are not necessary anymore. Details [[here|]].
*Compressed oops is supported and enabled by default in Java SE 6u23 and later. In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes.
*Escape analysis is supported and enabled by default in Java SE 6u23 and later.
!Checkout a Maven project to check everything is OK
@@Gapt currently doesn't work with Eclipse due to a change in m2eclipse-scala. This is fixed in the m2e branch@@
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter in SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured, git-managed Scala Maven multi-module project!@@

@@obsolete@@ See [[howto Scala IDE for Eclipse]]

*Eclipse IDE for Java Developers @@Juno SR1@@
*m2e from eclipse market place @@If not already installed@@
*ScalaIDE - or @@Scala 2.10 is now usable and recommended@@
*m2eclipse-scala -
*m2e-egit from m2e market place
*m2e-subversive from m2e market place
*Scala Worksheet - or
*ScalaTest plugin for Scala IDE - -- @@old update site not compatible anymore since Scala IDE 3.0@@
*Scalastyle plugin for Scala IDE - @@also works with Scala 2.10, but not for new features like string interpolation@@
!Post installation
*Eclipse/Preference/General/Workspace/ => set text file encoding to UTF-8
*@@Eclipse/Preference/Maven/ => set hide folders of physically nested modules@@
*@@Eclipse/Preference/Team/Git/Projects => set auto share projects located in a git repository@@
*Eclipse/Preference/Team/Git/Ignored Resources/ => add .cache,.classpath,.project,*.prefs,*/target/*
!Optimal memory configuration for Scala IDE
Modify this in eclipse.ini and use -Xmx1024m if you have less than 4 GB of memory
Also most of the JVM optimizations are not necessary anymore. Details [[here|]].
*Compressed oops is supported and enabled by default in Java SE 6u23 and later. In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes.
*Escape analysis is supported and enabled by default in Java SE 6u23 and later.
!Checkout a single-module Maven project to check everything is OK
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter in SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured, git-managed Scala Maven single-module project!@@
!Checkout a multi-module maven project to check everything is OK
@@Gapt currently doesn't work with Eclipse due to a change in m2eclipse-scala. This is fixed in the m2e branch@@
@@Gapt is not yet compatible with Scala 2.10. Fix is ongoing in the m2e-210 branch@@
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter the SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured, git-managed Scala Maven multi-module project!@@
*remove .m2/repository/.cache/m2e/ to fix Java heap problems during indexing and some with the Scala compiler
*Eclipse/project/clean may be needed after checkout (with Scala IDE nightly build)
*@@[[Download the Bundle of the Scala IDE for Eclipse|]]@@
*--m2eclipse-scala -
*--m2e-egit - (unselect Group items by category)--
#Go to /File/Import/Maven/Check out Maven Projects from SCM
#click on Find more SCM connectors in the m2e Marketplace
#install m2e-egit for git and m2e-subclipse or m2e-subversive for subversion (after restart you will be prompted to install an SVN connector; choose SVN Kit 1.8.3)
!Post installation
*--Eclipse/Preference/General/Workspace/ => set text file encoding to UTF-8--
*--@@Eclipse/Preference/Maven/ => set hide folders of physically nested modules@@--
*Eclipse/Preference/Team/Git/Ignored Resources/ => add .cache,.classpath,.project,*.prefs,*/target/*
!Optimal memory configuration for Scala IDE
Modify this in eclipse.ini if you have more than 4 GB of memory and a 64-bit OS. Expect trouble with big projects if you don't have both.
Also, most of the JVM optimization flags are not necessary anymore. Details [[here|]].
*Compressed oops is supported and enabled by default in Java SE 6u23 and later. In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes.
*Escape analysis is supported and enabled by default in Java SE 6u23 and later.
!Checkout a single-module maven project to check everything is OK
#Go to /File/Import/Maven/Check out Maven Projects from SCM 
#Choose git or svn and enter the SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured Scala Maven single-module project (possibly git-managed)!@@
!Checkout a multi-module maven project to check everything is OK
--Gapt currently doesn't work with Eclipse due to a change in m2eclipse-scala. This is fixed in the m2e branch--
--Gapt is not yet compatible with Scala 2.10. Fix is ongoing in the m2e-210 branch--
--Gapt is not yet compatible with Scala 2.11--
#/File/Import/Maven/Check out Maven Projects from SCM
#Choose git and enter the SCM URL
#Click on Finish and that's it: @@you automatically get a fully* configured, git-managed Scala Maven multi-module project!@@
--*You need to manually add /src/main/java to the build path in the treeviz module--
*@@[[Download Scala IDE|]]@@
*--m2eclipse-scala -
*--m2e-egit - (unselect Group items by category)--
#Go to /File/Import/Maven/Check out Maven Projects from SCM
#click on Find more SCM connectors in the m2e Marketplace
#install m2e-egit for git and m2e-subclipse or m2e-subversive for subversion (after restart you will be prompted to install an SVN connector; choose SVN Kit 1.8.3)
!Post installation
*--Eclipse/Preference/General/Workspace/ => set text file encoding to UTF-8--
*Eclipse/Preference/Maven/ => set hide folders of physically nested modules @@doesn't work but does not hurt either@@
*Eclipse/Preference/Team/Git/Ignored Resources/ => add .cache,.classpath,.project,*.prefs,*/target/*
!Optimal memory configuration for Scala IDE
Also, most of the JVM optimization flags are not necessary anymore. Details [[here|]].
*Compressed oops is supported and enabled by default in Java SE 6u23 and later. In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes.
*Escape analysis is supported and enabled by default in Java SE 6u23 and later.
!Checkout a maven project to check everything is OK
#Go to /File/Import/Maven/Check out Maven Projects from SCM 
#Choose git and enter the SCM URL
#Click on Finish and that's it: @@you automatically get a fully configured Scala Maven project (possibly git-managed)!@@
!Test also with an sbt-managed project like gapt to check everything is OK
Follow instructions [[here|]]
[[Vampire|]] is currently the best automatic theorem prover for first-order classical logic.
*very easy to use: vampire <TPTP filename>
*source code is available but the licence is very restrictive
*self contained binary available for MacOS, Linux and Windows
vampire_macosx ~/TPTP-v5.2.0/Problems/PUZ/PUZ031-1.p

@@vampire doesn't accept spaces in <fof_annotated> names (todo: check this against the TPTP BNF syntax)@@
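For illustration, a minimal TPTP input in fof syntax (the Socrates example; the formula names are made up here, and per the note above they must not contain spaces):

```text
fof(all_men_are_mortal, axiom, ![X]: (man(X) => mortal(X))).
fof(socrates_is_a_man, axiom, man(socrates)).
fof(socrates_is_mortal, conjecture, mortal(socrates)).
```

Saved as e.g. socrates.p, it can be passed to vampire as above.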

!Using the gapt framework
We want to prove: all men are mortal; Socrates is a man; therefore Socrates is mortal.
This is equivalent to: all X are Y; a is X; therefore a is Y.
The conjecture converts to Prover9 syntax in a natural way, using a non-clausal formula for the man-implies-mortal rule and a goal list for the conclusion:
formulas(assumptions).   % synonym for formulas(sos).
  man(x) -> mortal(x).   % open formula with free variable x
  man(Socrates).         % Socrates is a man
end_of_list.

formulas(goals).         % to be negated and placed in the sos list
  mortal(Socrates).      % the conclusion to prove
end_of_list.
The negation of this conjecture in Prover9 syntax, as a list of clauses:
formulas(sos).           % clauses to be placed in the sos list
  -man(x) | mortal(x).   % the Conjunctive Normal Form of man(x) -> mortal(x).
  man(Socrates).         % the Conjunctive Normal Form of man(Socrates).
  -mortal(Socrates).     % we assert the opposite of the query and then try to find a logical contradiction between that statement and the rest of the knowledge base
end_of_list.
We use sample code inspired by ProverTest.scala:
println(getRefutation("-Man(x) | Mortal(x). Man(socrates). -Mortal(socrates)."))
=> true
We've found a contradiction for the negation of our conjecture, thereby proving it to be true.
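The clause-level reasoning behind getRefutation can be sketched with a tiny ground resolution loop (illustrative Scala, not gapt's implementation; the clauses are the three Socrates clauses above, already instantiated):

```scala
// Minimal propositional ground resolution on the Socrates clause set.
// A clause is a set of literals; a literal is (name, polarity).
object SocratesRefutation extends App {
  type Lit = (String, Boolean)
  type Clause = Set[Lit]

  // CNF of the negated conjecture, instantiated at socrates:
  //   -Man(socrates) | Mortal(socrates),  Man(socrates),  -Mortal(socrates)
  val clauses: Set[Clause] = Set(
    Set(("Man", false), ("Mortal", true)),
    Set(("Man", true)),
    Set(("Mortal", false))
  )

  // Resolve two clauses on every complementary literal pair.
  def resolvents(c1: Clause, c2: Clause): Set[Clause] =
    for (l @ (n, p) <- c1 if c2.contains((n, !p)))
      yield (c1 - l) ++ (c2 - ((n, !p)))

  // Saturate until the empty clause appears (or nothing new is derived).
  def refute(cs: Set[Clause]): Boolean = {
    val derived = for (a <- cs; b <- cs; r <- resolvents(a, b)) yield r
    if (derived.contains(Set.empty)) true
    else if (derived.subsetOf(cs)) false
    else refute(cs ++ derived)
  }

  println(refute(clauses)) // the empty clause is derivable: contradiction found
}
```

Resolving the first clause with the second yields Mortal(socrates); resolving that with the third yields the empty clause, i.e. the contradiction.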

!Using the cli client
* --root pom.xml doesn't build cli; cli needs to be built manually with mvn install in the cli directory--
* launch cli with: java -jar cli/target/cli-1.0-SNAPSHOT-jar-with-dependencies.jar
scala> parse fol "P(f(x,c),y)"
res29: at.logic.language.fol.FOLFormula = P(f(x, c), y)
scala> parse hol "g(f(F(a:i):i):i,f(F(a:i):i):i):i"
res5: at.logic.language.hol.HOLExpression = g(f(F(a:i):i):i, f(F(a:i):i):i):i
scala> val v = huet("x(f(a:i):i):i", "f(x(a:i):i):i")
v: at.logic.utils.executionModels.ndStream.NDStream[at.logic.language.lambda.substitutions.Substitution[at.logic.language.hol.HOLExpression]] = at.logic.algorithms.unification.hol.huet.Huet$$anon$1@4513098
scala> normalizeSub(
<x -> (λ#5{1}.#5{1})>
* launch atp with: java -jar ./provers/atp/target/atp-1.1-Alpha-main.jar INPUTFILE
List of clauses in set:
0) :-P(a)
1) P(x), P(x):-P(f(x)), P(f(x))
2) P(f(f(a))):-
Enter index of first clause:

cd haskell-atp/src/ATP
ghci ../Main.lhs
:main resolution -f "(exists y. forall x. P(x, y)) ==> forall x. exists y. P(x, y)"
~/.cabal/bin/atp resolution -f "(exists y. forall x. P(x, y)) ==> forall x. exists y. P(x, y)"

Note: interpreted mode needs backslashes to be escaped whereas the compiled version does not. See the example below.

cd haskell-atp/src/ATP
ghci ../Main.lhs
:main form "(forall x. x < 2 ==> 2 * x <= 3) \\/ false"
~/.cabal/bin/atp form "(forall x. x < 2 ==> 2 * x <= 3) \/ false"
cd Ocaml
#use "";;
resolution <<(exists y. forall x. P(x, y)) ==> forall x. exists y. P(x, y)>>;;

Note: anything in quotations delimited with | gets passed to secondary_parser, while anything else in quotations gets passed to default_parser.
# <<(forall x. x < 2 ==> 2 * x <= 3) \/ false>>;;
- : fol formula = <<(forall x. x < 2 ==> 2 * x <= 3) \/ false>>
# <<|2 * x|>>;;
- : term = <<|2 * x|>>
From gapt's point of view:

*m2eclipse-subversive pros
**subversive is eclipse native project
*m2eclipse-subversive cons
**import and checkout of maven projects from SCM is not OK* for multi-projects like gapt
*m2eclipse-subclipse pros
**none from gapt point of view
*m2eclipse-subclipse cons
**import and checkout of maven projects from SCM is not OK* for multi-projects like gapt
**subclipse is not a native eclipse project

The winner is : [[m2eclipse-subversive|]]

(*) Support for maven multi-projects in eclipse currently lags far behind NetBeans or IntelliJ IDEA, but the eclipse Scala plugin is the most promising.
<<list filter [tag[-]]>>
***"""<<exists x y. forall z. (F(x,z) <=> F(z,y)) /\ (F(z,y) <=> F(z,z)) /\ (F(x,y) <=> F(y,x)) ==> (F(x,y) <=> F(x,z))>>"""
***∃ x y. ∀ z. (F(x,z) ⇔ F(z,y)) ∧ (F(z,y) ⇔ F(z,z)) ∧ (F(x,y) ⇔ F(y,x))  ⊃ (F(x,y) ⇔ F(x,z)) 
***"""<<p ==> q <=> ~q ==> ~p>>"""
***p ⊃ q ⇔ ¬ q ⊃ ¬ p
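The contraposition equivalence above is easy to confirm by brute force over the four truth assignments (illustrative Scala, not part of the OCaml/Haskell code):

```scala
// Check that (p ==> q) <=> (~q ==> ~p) holds under every assignment.
object Contraposition extends App {
  def imp(a: Boolean, b: Boolean): Boolean = !a || b
  val tautology =
    (for (p <- Seq(true, false); q <- Seq(true, false))
      yield imp(p, q) == imp(!q, !p)).forall(identity)
  println(tautology)
}
```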
# skolemize p1;;
- : fol formula = <<(~p \/ q) /\ (q \/ ~p) \/ (p /\ ~q) /\ ~q /\ p>>
:main skolem p1
Welcome to Haskell ATP!
(¬p ∨ q) ∧ (q ∨ ¬p) ∨ (p ∧ ¬q) ∧ ¬q ∧ p
# dnf <<(p <=> (~q /\ r))>>;;
- : fol formula = <<p /\ r /\ ~q \/ q /\ ~p \/ ~p /\ ~r>>
~/.cabal/bin/atp dnf -f "(p ⇔ (¬ q ∧ r))"
Welcome to Haskell ATP!
p ∧ r ∧ ¬q ∨ q ∧ ¬p ∨ ¬p ∧ ¬r
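The dnf result above can be cross-checked with a naive truth-table DNF, one conjunct per satisfying assignment (illustrative Scala; it produces the unsimplified four-disjunct form, of which the output above is a condensed equivalent):

```scala
// Naive truth-table DNF for p <=> (~q /\ r): one conjunct per satisfying row.
object TruthTableDnf extends App {
  val vars = Seq("p", "q", "r")
  def eval(m: Map[String, Boolean]): Boolean = m("p") == (!m("q") && m("r"))
  val rows = for {
    p <- Seq(true, false)
    q <- Seq(true, false)
    r <- Seq(true, false)
    m = Map("p" -> p, "q" -> q, "r" -> r)
    if eval(m)
  } yield vars.map(v => (if (m(v)) "" else "~") + v).mkString(" /\\ ")
  println(rows.mkString(" \\/ "))
}
```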
| Description | OCaml | Haskell | Scala |h
|Example formulas for testing the algorithms||TestFormulas.lhs||
|Binary Decision Diagram||Bdd.lhs||
|Nelson-Oppen combined decision procedure||Combining.lhs||
|Knuth-Bendix completion||Completion.lhs||
|Complex quantifier elimination||Complex.lhs,Poly.lhs||
|Simple congruence closure||Cong.lhs||
|Cooper's algorithm||Cooper.lhs||
|Procedures for decidable subsets of FOL||Decidable.lhs||
|Definitional CNF||DefCnf.lhs||
|Dense linear order||Dlo.lhs||
|Davis-Putnam-Loveland-Logemann procedures||Dp.lhs||
|Equality elimination||EqElim.lhs||
|FOL with equality||Equal.lhs||
|Some ATP examples||||
|First-order derived rules in LCF||||
|Basic stuff for FOL||Fol.lhs||
|Polymorphic type of formula||Formula.lhs,FormulaSyn.lhs|Formula.scala|
|Geometry theorem proving||Geom.lhs||
|Gröbner basis algorithm||Grobner.lhs,MPoly.lhs||
|Herbrand theorem||Herbrand.lhs||
|Tweak OCaml default state for ATP||||
|Init ATP example code||||
|Craig-Robinson interpolation theorem||Interpolation.lhs||
|Simple algebraic expressions||Intro.lhs,IntroSyn.lhs|Intro.scala,IntroSpec.scala|
|First order tableau procedure using LCF setup||||
|LCF-style basis for Tarski-style Hilbert system||||
|First order tableau procedure using LCF setup||||
|Misc library functions to set up a nice environment||Utils/Lib.lhs||
|Gödel's theorem and relatives||||
|Model elimination procedure (MESON version)||Meson.lhs||
|Term orderings||Order.lhs||
|Backchaining procedure for Horn clauses||Prolog.lhs||
|Some propositional formulas to test||PropExamples.lhs||
|Basic stuff for propositional logic||Prop.lhs||
|Introduction to quantifier elimination||Qelim.lhs||
|Real quantifier elimination (Cohen-Hörmander)||Real.lhs,Poly.lhs||
|Prenex and Skolem normal forms||Skolem.lhs||
|Illustration of Skolemizing a set of formulas||||
|Simple implementation of Stålmarck's algorithm||Stal.lhs||
|Optimized version of a Prawitz-like procedure||Tableaux.lhs||
|Goals, LCF-like tactics and Mizar-like proofs||||
|Unification for first order terms||Unif.lhs||
Dependent types are quite distinct from path-dependent types: see [[Dependent types vs. Path Dependent types|!topic/scala-types/5J71FpGl4JA]] or [[Dependent Types in Scala|]]
@@but@@ in combination with singleton types and implicit values [[they allow some support for dependent typing|]]
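A minimal sketch of Scala's path-dependent types (the names here are illustrative, not from gapt): each Graph value carries its own Node type, so nodes from different graphs cannot be mixed at compile time.

```scala
// Path-dependent types: the type g1.Node depends on the *value* g1.
object PathDependent extends App {
  class Graph {
    class Node
    def newNode: Node = new Node
  }
  val g1 = new Graph
  val g2 = new Graph
  val n: g1.Node = g1.newNode       // fine: a node of g1
  // val bad: g1.Node = g2.newNode  // rejected: g2.Node is a different type
  println(n.getClass.getSimpleName)
}
```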
From Eclipse's point of view:
*sbt pros
**quicker incremental build (probably irrelevant under eclipse)
**using scala as the build script language
**the majority of scala projects use it
*sbt cons
**after editing the build definition you should run "sbt clean-lib update eclipse" and refresh the project in eclipse
**no multiproject support (fixed in sbt >= 0.9)
*maven pros
**more standard in Java land
**auto-download of sources (and javadoc) for direct and transitive dependencies
**update via Project > Maven > Update Configuration
**compatible with many other tools
**solid eclipse support with m2eclipse
*maven cons
**verbose XML syntax
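As an illustration of that verbosity, a minimal pom.xml sketch for a Scala module (coordinates and version numbers are indicative only, assuming the scala-maven-plugin):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>example</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.10.4</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.1.6</version>
      </plugin>
    </plugins>
  </build>
</project>
```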

@@No winner@@
Using Maven and sbt together is the better option.
The idea is to drive the project with sbt and auto-generate a pom.xml with sbt make-pom, getting the best of both worlds.
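A minimal sketch of that workflow, assuming 0.12-era sbt syntax (settings separated by blank lines; names and versions are illustrative):

```scala
// build.sbt
name := "example"

organization := "org.example"

version := "1.0-SNAPSHOT"

scalaVersion := "2.10.4"
```

Running sbt make-pom then writes a pom for the project under target/, which m2eclipse can consume.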
!What are the programming languages with the best type system? Why?