## Logic and Nonlinearity

The crucial passages from this reconstructed conversation have already been cited over at the other place, but it’s important enough to pick over here, too. The maximally-compressed take-away: cybernetic processes are naturally registered as logical paradoxes (with consequent affinity between paradox and — dynamic — reality).

*[The] whole fabric of living things is not put together by logic … when you get circular trains of causation, as you always do in the living world, the use of logic will make you walk into paradoxes. Just take the thermostat, a simple sense organ, yes? […] If it’s on, it’s off; if it’s off, it’s on. If yes, then no; if no, then yes. …*

So the isomorphy between the most basic cybernetic control loop and classical logical paradoxes (for example) is exact. The significance of this is surely beyond need of defense.

Capra asks, alluding to the Epimenides Paradox, “Do thermostats lie?” To which Bateson replies:

*Yes-no-yes-no-yes-no. You see, the cybernetic equivalent of logic is oscillation*.
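Bateson's loop can be sketched as a toy simulation (everything here is invented for illustration): the temperature switches the heater, the heater drives the temperature, and the closed loop never settles.

```python
def simulate(threshold=20.0, steps=8, temp=20.0, heater=False):
    """Toy bang-bang thermostat: the heater drives the temperature and
    the temperature drives the heater (all parameters are invented)."""
    trace = []
    for _ in range(steps):
        heater = temp < threshold          # sense, then switch
        temp += 1.0 if heater else -1.0    # act: heat or cool one degree
        trace.append(heater)
    return trace

# The closed loop never settles: on implies off implies on ...
print(simulate())  # [False, True, False, True, False, True, False, True]
```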

It seems to me that something of vast importance was discovered here, and subsequently almost entirely lost.

(For anybody following the link, it’s worth noting that surgical extraction is in this case ‘steelmanning’. The retreat to ‘metaphor’ as a substitute for logical formalism is disastrously inadequate. The alternative that matters is not figurative language, but the circuit diagram, and recursive code.)

[…] Logic and Nonlinearity […]

Posted on May 2nd, 2015 at 2:20 pm

I’m not sure that logic can’t be used for crabs and porpoises, and butterflies and habit formation.

On the other note, instead of bang-bang control, one can use, for example, PID, one of most commonly used controllers in automation science, to make the temperature exactly the one desired.
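The contrast with bang-bang switching can be sketched with a textbook discrete PID loop (a hypothetical toy, not any particular controller; the gains and the plant model are invented):

```python
class PID:
    """Textbook discrete PID controller (a sketch; all numbers are illustrative)."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward 20 degrees; unlike bang-bang control,
# the output settles instead of flipping between extremes.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=20.0, dt=0.1)
temp = 15.0
for _ in range(200):
    power = pid.update(temp)
    temp += (power - 0.5 * (temp - 15.0)) * 0.1  # heat input minus losses
```

The integral term removes the steady-state error that a purely proportional controller would leave.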

Also ring oscillator – one can express logic states of some system, even an oscillating one, if one takes time into account, so the answer to the question “Is thermostat on?” would be “Yes between t1 and t2, no between t3 and t4, yes between…” but mayhaps I misunderstood something.

[Reply]

admin Reply:

May 2nd, 2015 at 2:56 pm

I think you have missed something. The oscillation isn’t just an empirical description of states across time, but a series of ‘paradoxical’ logical implications. It’s not ‘yes and then no’ but ‘yes and therefore no’ (exactly as with the Epimenides Paradox).

[Reply]
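The distinction can be sketched in a few lines (a toy illustration, no one's formalism): read statically, the rule is an unsatisfiable equation; read dynamically, the same rule is an update map.

```python
# Static reading: look for a state satisfying s == (not s). No Boolean does.
solutions = [s for s in (True, False) if s == (not s)]
assert solutions == []  # the "timeless" equation is a genuine paradox

# Dynamic reading: take the very same rule as an update, and it oscillates.
s, trace = True, []
for _ in range(4):
    trace.append(s)
    s = not s
assert trace == [True, False, True, False]
```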

Aeroguy Reply:

May 2nd, 2015 at 7:17 pm

There’s a reason the equations for the control systems for stabilizing inherently unstable systems depend on nonreal numbers.

[Reply]

Frog Do Reply:

May 2nd, 2015 at 7:22 pm

The real numbers and the complex numbers traditionally understood are exactly equally fictitious.

Orthodox Laissez-fairist Reply:

May 2nd, 2015 at 7:51 pm

I’m still not sure that I understand. Paradoxes cannot exist in reality, therefore oscillations; that is, oscillation and paradox are mutually exclusive. Only in ideal systems, and precisely in ideal systems, is paradox possible. In an ideal system paradox would be present; in real systems oscillations are present precisely because they’re real, i.e. non-ideal* (they have latency: it takes a non-zero amount of time to get a response at the output after a signal arrives at the input of said system, or, alternatively put, after the state at the input changes it takes a non-zero amount of time for the output state to change). In an ideal system there would be no latency, so there would be no oscillation but paradox: both states, at both the input and output, at the same time, all the time.

For example, look at the formula for the ring oscillator, F = 1/(2·N·T), and let T→0. You get F→∞. Now imagine a square wave with gradually increasing frequency. Eventually, as the frequency becomes infinite (i.e. latency becomes zero), you get a paradox: at every point in time the signal is both 1 and 0; but in this case, when you have both values at all times, there’s no oscillation, because oscillation is alternation between two different states (whereas in the case of paradox both states hold at the same time, so no alternation, and therefore no oscillation).
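The quoted formula can be checked directly (a trivial sketch; the stage count and delays are illustrative):

```python
def ring_oscillator_freq(n_stages, gate_delay):
    """The quoted formula F = 1 / (2 * N * T) for an N-stage ring oscillator."""
    return 1.0 / (2 * n_stages * gate_delay)

# As the per-gate delay T shrinks toward zero, the frequency diverges:
for t in (1e-9, 1e-12, 1e-15):
    print(ring_oscillator_freq(3, t))
```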

* – It is physically impossible for information to travel faster than c (speed of light in vacuum).

[Reply]

Jesse Reply:

May 2nd, 2015 at 10:27 pm

But it isn’t, in fact, “yes and therefore no” – rather it’s “yes at time-increment N, therefore no at time-increment N+1”, i.e. a dynamical evolution rule. A thermostat certainly doesn’t follow the rule “yes at time-increment N, therefore no at time-increment N”; that would be a genuine logical paradox.

[Reply]

admin Reply:

May 3rd, 2015 at 12:30 am

Thermostats don’t have time-sensors. You’re framing the dynamics with transcendent elements.

Anomaly UK Reply:

May 3rd, 2015 at 7:18 am

I made the same comment as Jesse below (missed this one, sorry)

A thermostat doesn’t sense time, but it exists in time. It is not a mathematical statement, it is an agent. The nature of an agent is that it acts, and the nature of an action is that its effects occur at a later time than the action itself.

If the thermostat closes a relay, it takes time for the current to overcome the induction of the coil, and time for the connector to accelerate under the resulting magnetic force.

In point of fact, a real thermostat almost certainly does have a time-sensor, precisely to damp overly high-frequency oscillations. That’s not required, because of the essential time-delay mechanisms, but it reduces wear on the actuators.
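A common chatter-damping mechanism in real thermostats, alongside any timing, is a hysteresis band; a minimal sketch of the idea (the band values are invented):

```python
def hysteresis_step(temp, heater, low=19.5, high=20.5):
    """Thermostat with a deadband: on below `low`, off above `high`,
    otherwise hold the previous state (band values are invented)."""
    if temp < low:
        return True
    if temp > high:
        return False
    return heater

# Inside the band the state is sticky, so small fluctuations don't chatter:
assert hysteresis_step(19.0, False) is True   # too cold: switch on
assert hysteresis_step(21.0, True) is False   # too warm: switch off
assert hysteresis_step(20.0, True) is True    # in band: hold
assert hysteresis_step(20.0, False) is False  # in band: hold
```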

I confess I find all this very, very silly.

Thales Reply:

May 4th, 2015 at 2:14 pm

*I confess I find all this very, very silly.*

The ancients had Saṃsāra.

We’re stuck worshiping…

relaxation oscillators?

(“Dude, why are you putting flowers on your turn signal?”

“You wouldn’t understand, heathen…”)

“The alternative that matters is not figurative language, but the circuit diagram, and recursive code.”

I don’t think that a code or circuit diagram could impose any form of logic onto dense fiction, like say a Raymond Carver short story. There is no systematic analysis of pure form that could derive meaning from literature such as this. And this poses a problem because the world, as Bateson states, is full of paradox, and a functioning “brain” would need to, for the purposes of linear motion, presume there is meaning in the data, and be able to structure “logical” perspectives and counter-perspectives beyond binaries. I don’t know if there is AI that can progress intelligibly through Carver; and if AI can’t do that, how can it navigate the “real-world”?

[Reply]

Posted on May 2nd, 2015 at 3:02 pm

David Mumford agrees:

“I would argue first of all that oscillations are a central part of every science plus engineering/economics/business (arguably excluding computer science) and one needs the basic tools for describing them — sines and cosines, all of trig of course, Euler’s formula e^ix = cos(x) + i·sin(x) and especially Fourier series. And, of course, modeling a system by the path of a state vector in some R^n, often with a PDE, is also ubiquitous. For example, surely all ecologists have studied the Lotka-Volterra equation (wolf and rabbit population cycles).”

[Reply]
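The Lotka-Volterra system mentioned here can be sketched with a crude forward-Euler integration (the coefficients, initial populations, and step sizes are invented for illustration):

```python
def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    """Forward-Euler integration of dx/dt = ax - bxy, dy/dt = dxy - gy."""
    trace = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt
        dpred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        trace.append((prey, pred))
    return trace

trace = lotka_volterra(10.0, 5.0)
# The populations cycle around the equilibrium (gamma/delta, alpha/beta) = (20, 10)
# rather than settling: oscillation, not convergence.
assert all(x > 0 and y > 0 for x, y in trace)
```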

Posted on May 2nd, 2015 at 3:18 pm

Metaphor is also the currency of poetry, which leads us to higher realms not limited by mere words and cold calculating logic.

[Reply]

Posted on May 2nd, 2015 at 4:19 pm

Makes me think of excluded middles and sampling constraints, sensu Pynchon.

[Reply]

Posted on May 2nd, 2015 at 5:08 pm

[…] Source: Outside In […]

Posted on May 2nd, 2015 at 5:54 pm

The reference I am thinking of is Brouwer’s assault on classical logic and the development of non-standard analysis, which also gives better ways of thinking about infinity. Historically, even he was saying this sort of thinking was “the standard” back in the day and had been forgotten. Seems the forgetting runs on a circuit too.

[Reply]

Posted on May 2nd, 2015 at 7:20 pm

Of course something was lost. What do you expect to happen when a discovery about the nature of reality has to be disseminated into the delusional culture of monotonic pseudoprogress, i.e., Liberalism (of which the Nrx is merely a hysterical symptom).

Time exists in order that the logic of indications can appear locally consistent. The truth (which has been known and conveniently evaded for quite some time now) is that the only completion of logical form can be found in self-contradiction (i.e., imaginary value). There is only the undistinguished state, the distinguished state, and the oscillating moment of the distinction itself, which cannot be grasped and thus retreats into a banal infinity.

Like the mental children they are, Nrxers fantasize about vast extradimensional power being unlocked by somehow using technology to linearly penetrate and control the nonlinear oscillation of logic “inside the infinite.” But there is nothing to be unlocked except the repressive power-fetish that causes Nick Land to delude himself (and others) with the notion that “what’s playing you might get to Level 2.”

You could already have been at Level 2 for a long time, if not for the blinkered Nrxer-consciousness in which resentful right-liberals have regressed into One-Dimensional Men.

A good place for Nrxers to start emancipating themselves from self-imposed mental slavery:

http://homepages.math.uic.edu/~kauffman/VirtualLogic.pdf

[Reply]

Artxell Knaphni Reply:

May 3rd, 2015 at 5:01 pm

The paradox is generated by confused specification, conflation of identity attribution, & scope.

The paradox resides in identity thinking.

Let “This statement is false” equal x.

So, if x is true, it’s false; & if false, it’s true.

The paradox intercedes because a blatant contradiction is sententially drawn out.

It rests on three assumptions, two of them classical: that “This statement” is an identity; that this identity is constant & unchanging; & that a singular identity cannot comprehend opposing attributes without losing its singularity.

x’s self-reference asserts its ‘own’ falsity.

But it does not specify why this is so: the assertion is axiomatic.

The paradox occurs because x is a statement, & statements are automatically considered to be truth-statements, due to conventions of normative use. Furthermore, truth-statements are usually about identities. Such truth-statements can be ‘verified’ through referential practices & considerations of coherence. All of these usually go beyond the identity under consideration. Such specifiable ‘truth’ is always derived from beyond the identity concerned, through relation to conditions & contexts.

But this is not the case with x.

x, is asserted as a truth-statement, about an identity.

But that identity is the statement x.

But the statement, x, contradicts the assertion of x as a truth-statement: the constative contradicts the performative.

The paradox arises because x, as a statement; a truth-statement; & an identity, are conflated.

x, actually, reduces to the statement: “Truth is false”, if identity is emphasised.

The error occurs because x, whether as an identity statement, or as an asserted ‘truth-statement’, does not include or specify the truth conditions that both statements tacitly assume. In search of truth-conditions, they both refer to each other, under the aegis of assumed identity, & the spell of axiomatic truth.

[Reply]

Artxell Knaphni Reply:

May 4th, 2015 at 10:13 am

The above comment is not a response to that of Joseph sans Brothers.

It is a response to the post “Logic and Nonlinearity”.

[Reply]

Yes. Metaphor is quite inadequate because it cannot escape from being language. I think Brassier discusses this, but in any case metaphor can at best approximately map the territory of experiential phenomena. It may even simulate, but it isn’t generative. Circuits are causally structured.

Anyway, great post. Makes me want to go back and read Bateson again.

[Reply]

Posted on May 3rd, 2015 at 4:08 am

Insurrealist introduced me to Girard’s transcendental syntax, which defines a “geometry of interaction”, if you are all interested. Has tons of links to HoTT, Brouwer, the BHK interpretation, etc.

[Reply]

Posted on May 3rd, 2015 at 6:13 am

As far as I can see this is terminological confusion resulting from eliding time-sequences.

LET A = A + 10

Mathematically absurd, but only because the mathematical language implies eternal identity, whereas the mutable-variable model assumes a time-sequence.

A_{n+1} = A_{n} + 10

is more correct, but the extra typing is implicit in the context.
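The same point in runnable form (a trivial sketch):

```python
# Mutable-variable reading: "=" is an update in time, not a timeless identity.
a = 0
a = a + 10   # step n:   a_{n+1} = a_n + 10
a = a + 10   # step n+1
assert a == 20

# Making the time index explicit recovers the mathematically honest form:
a_seq = [0]
for n in range(2):
    a_seq.append(a_seq[n] + 10)
assert a_seq == [0, 10, 20]
```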

[Reply]

Posted on May 3rd, 2015 at 6:52 am

“Use of logic will make you walk into paradoxes” – not necessarily true. It may depend on the level of sophistication we use in our thinking machine. Why not increase the number of logically thinking people? Take 9 lines of people, in parallel, with 100 people in each line. Everyone passes logical conclusions down their respective line to the next person, taking into consideration the intermediate conclusions from all the other 8 lines. How would some of them walk into a paradox when there is, each time, so much fresh input? Closer to real life than a thermostat.

[Reply]

Posted on May 3rd, 2015 at 4:56 pm

“The retreat to ‘metaphor’ as a substitute for logical formalism is disastrously inadequate. The alternative that matters is not figurative language, but the circuit diagram, and recursive code.”

That’s not an alternative due to the Curry-Howard correspondence. You’d be trading six for half a dozen. It *is* correct though, but trying to replace logic these days is something like giving an insurance policy for the Apocalypse. It’s entrenched enough in CS and math that if it fails definitively somehow we’re well beyond fucked. Proofs are possibly the most thoroughly examined objects in all mathematics. Also, Lawvere has already given a category theoretical explanation for these types of arguments (an easier exposition here).

“The original aim of this article was to demystify the incompleteness theorem of Godel and the truth-definition theory of Tarski by showing that both are consequences of some very simple algebra in the cartesian-closed setting. It was always hard for many to comprehend how Cantor’s mathematical theorem could be re-christened as a “paradox” by Russell and how Godel’s theorem could be so often declared to be the most significant result of the 20th century. There was always the suspicion among scientists that such extra-mathematical publicity movements concealed an agenda for re-establishing belief as a substitute for science. Now, one hundred years after Godel’s birth, the organized attempts to harness his great mathematical work to such an agenda have become explicit.”

To put it a bit more bluntly, from Le Point Aveugle:

“

Chapter 2

Incompleteness

2.1 Technical statement

2.1.1 The difficulty of the theorem

It is out of the question to enter into the technical arcana of Gödel’s theorem, this for several reasons:

(i) This result, indeed very easy, can be perceived, like the late paintings of Claude Monet, but from a certain distance. A close look only reveals fastidious details that one perhaps does not want to know.

(ii) There is no need either, since this theorem is a scientific cul-de-sac: in fact it exposes a way without exit. Since it is without exit, there is nothing to seek there, and it is of no use to be expert in Gödel’s theorem.

It is however important to know the general sense, the structure of the proof. Further, since the theorem is a genuine paradox(1), one is naturally tempted to try it, to pass it round — by the way the only way to understand it. The examination of various objections to the theorem, all of them wrong, requires more than a mere detailed knowledge of the proof. Rather than insisting on those tedious details which *hide the forest*, we shall spend time on objections, from the most ridiculous to the less stupid (none of them being eventually respectable).

(1)In the literal sense, exterior to the dogma

2.1.2 The diagonal argument

The argument consists, given functions g(z) and f(x, y), in constructing h(x) := g(f(x, x)); if by any chance h admits the form h(x) = f(x, a), one obtains h(a) = f(a, a) = g(f(a, a)), a fixed point of g, what is obviously unexpected. Depending on the context, various consequences will be drawn, most of them paradoxical.

1: Cantor’s paradox: It is the fact that there is no bijection between N and its power set. Indeed, if (Xn) enumerates the subsets of N and f(m, n) := 1 when m ∈ Xn, 0 otherwise, and g(0) = 1, g(1) = 0, then the diagonal construction yields a fixed point of g, contradiction.

2: Russell’s antinomy: The same story, N being replaced with the set of all sets. Integers become arbitrary sets so that f(x, y) = 1 when x ∈ y, and, with g as above, the fixed point becomes a = {x; x ∉ x}.

3: Fixed point of programs: If (fn) enumerates all programs sending N to N, and if g is one among the fn, then the previous construction yields a fixed point for g. Since most functions admit no fixed point, one concludes that the fixed point often corresponds to a diverging computation. Typically, starting with g(n) := n + 1, a = a + 1 = a + 2 = . . ., which shows that we are indeed dealing with partial functions.

4: Fixed point of λ-calculus: If M is a lambda-term and Ω := λx.M(x(x)), then Ω(Ω) is a fixed point of M. This version is to 3 what 2 (Russell) is to 1 (Cantor).

5: First incompleteness theorem: The fixed point of programs (3), but replacing the programming language with a formal theory. f(m, n) is the code of A_n[m̄] and g is non-provability, which makes the fixed point a formula saying “I am not provable”. Remark that the theorem also establishes that g(·) is not computable.

To this series, it is correct to add Richard’s paradox, which slightly antedates Gödel’s theorem: “the smallest integer not definable in less than 100 symbols”, which I just defined in much less than 100 symbols. One traditionally gets rid of Richard by saying that the word define is not well-defined, that the language should be made precise. Gödel’s theorem can be seen as a corrected version of Richard; by the way, Gödel explicitly refers to Richard.

2.1.3 Coding

This is traditionally the difficult part of the theorem, the one in which some do their best to mislead the neophyte, perhaps because they themselves do not grasp its general structure. What is this about? Nothing more than the numerisation of language, quite revolutionary an idea in 1931, but rather divulgated at the age of computers. And, by the way, there is a causal link; never forget Turing’s contribution to computer science, a contribution which mainly rests on a second reading of Gödel’s theorem; the fixed point of programs is nothing more than the celebrated algorithmic undecidability of the halting problem: no program is able to decide whether a program will eventually stop, and no way to pass around this prohibition.

This is a simplified version of the incompleteness theorem which loses very little, in contrast with Tarski’s version (see 2.D.1).

[…]”
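The diagonal schema h(x) := g(f(x, x)) from the quoted passage can be sketched concretely (the enumeration here is a toy, invented for illustration):

```python
def diagonal(enum, g):
    """Girard's schema h(x) := g(f(x, x)), with f(m, n) = enum(m)(n)."""
    return lambda x: g(enum(x)(x))

# A toy enumeration of 0/1 sequences, and the swap g(0) = 1, g(1) = 0:
rows = {0: (lambda n: 0), 1: (lambda n: n % 2), 2: (lambda n: 1)}
enum = lambda m: rows[m % 3]
g = lambda b: 1 - b
h = diagonal(enum, g)

# h disagrees with row m at position m, so h is not among the enumerated rows;
# if it were row a, then f(a, a) would be a fixed point of g, which g lacks.
assert all(h(m) != enum(m)(m) for m in range(3))
```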

@Sanguine – Try to see denotational semantics as a representation theory of proofs that generates algebraic invariants modulo cut-elimination. (And obviously this view makes it clearer what I’m trying to get at with homotopy, info geom, learning theory, minimal surfaces, hyperbolic geometry, etc. Political systems might not exactly be formal systems, yet if I said there’s a government out there that experimentally violated the second law of thermodynamics you wouldn’t believe me, with good reason!)

Anyways, proof nets (see 1, 2, 3) do give exactly the geometric and graph interpretation; transcendental syntax then provides the dialectical flavor.

On Transcendental syntax: a Kantian program for logic:

“[…]

The ambitions of the program are summarized by the following purpose: the development of a new format to represent logical proofs and types (quite in the lines of geometry of interaction) whose “normative” features (typing criteria, cut-elimination, completeness) are not to be searched for by reference to external structures, but find an explanation though geometrical criteria directly applicable to syntactical artifacts. This general point, which was sustained by a Kantian inspiration that we are trying to make explicit, has as corollaries at least two other ambitions, that we discuss subsequently: the use of transcendental syntax tools to exert a critical power towards existing syntaxes, by putting together philosophical justifications and purely technical ones and the refinement of actual logical systems, along with a finer understanding of their inner constitutive principles.

Ambitious being the program, ambitious is also the name of the program: transcendental is, for Kant, a form of knowledge which is not directed to the objects of ordinary science, but to “the mode of our cognition of these objects, so far as this mode of cognition is possible a priori” (Kant (2010)); the issue of transcendental enquiry is therefore to identify the place of the subject in the universe of his multifarious knowledges, a place which makes it possible for those knowledges to manifest an objective relation with the subject himself.

Kant aimed at justifying the objective claims of mathematicians and physicians of his time, and at finding means to charge the metaphysical debate, so devoid of the clarity of the formers, of being a collection of “dialectical illusions”; nevertheless, he wanted these results to be the outcome of an investigation on the respective forms of argument and judgement of the mathematician, the physician and the metaphysician; it was, after all, an enquiry entirely directed to the categorial (i.e. syntactical?) criteria constitutive of the debates characterizing the disciplines. Never, in his arguments, Kant mentions the objects of those disciplines: what he disputes is whether they can ever have a definite object.

It is more or less in this spirit that the attempt at unearthing the constraints hidden in the rules of logic aims at an explanation of how syntactical constraints are related to traditional logical results such as general cut-elimination theorems, completeness and incompleteness theorems; it is again in this spirit that enquiries within transcendental syntax should be performed without reference to those entities whose representability crucially relies on the use of what is charged with being justified; this constraint was indeed the one making a difference, for Kant, between the transcendental enterprise and metaphysics.

For instance, the geometrical criterion of proof-nets provides an apparently internal explanation of the constraints imposed by logic typability (i.e. sequent calculus): it seems indeed to show that introduction and elimination rules do not fall from the sky, but are just those which prevent disputes to fall into (vicious) circles.

As a concrete example of how the criterion works (see Girard (2011b)), we take proof nets for first order quantifiers (see Girard (1991)): this part of proof-net theory is the result of a geometrical reinterpretation of a genial trick adopted by Herbrand in the theorem which bears his name (see Herbrand (1930)), and which produces an apparently non circular explanation of quantifiers (on the contrary, as we mentioned above, the Tarskian explanation of quantified formulas like ∀x∃yA(x;y) is given in terms of quantifiers applied to the elements of the support of the models interpreting the formula). Herbrand’s trick uses the technique of unification (see Herbrand (1930)), an algorithm for the solution of systems of equations between functional terms. To formulas of the form ∀x∃yA(x;y) Herbrand associates the equation

x = f (y) (4)

which implicitly corresponds to the “Herbrandization” of the formula, i.e. its transformation into the formula ∀f∃xA(x; f (x)) (the latter being valid if and only if the former is).

Now, whereas the semantic refutation of the invalid formula ∀x∃yA(x,y) ⇒ ∃y∀xA(x,y) still makes use of quantifiers, Herbrand’s refutation does not: it is indeed obtained by considering the quantifier-free “Herbrandized” formula A(g(y),y) ⇒ A(x, f(x)) and then showing that the induced system of equations

x = g(y), y = f(x) (5)

has no solution. The translation in terms of proof-nets is the following: one considers an arbitrary proof-structure* representing an alleged proof of the formula; to any variable which is the eigenvariable of a “forall” quantifier a specific point of the net is associated; the system (5) is thus translated into the existence of a path in the net from the point associated to y to the one associated to x, as well as a path from the point associated to x to the one associated to y, i.e. a (vicious) cycle!

On the other hand, one easily sees that no equations are produced in the case of the logically valid formula ∃x∀yA(x,y) ⇒ ∀y∃xA(x,y) , to which a proof-net with no cycles can be naturally associated.”

*Proof-structures are the equivalent in proof-net theory of l-terms à la Curry, that is pure terms which do not necessarily correspond to correct proof-terms. Only those proof-structures which satisfy the correctness criterion (which have no vicious cycles) are called proof-nets and are sequentializable.

[…]

“Finally, the focus on dynamics may allow one to discover “subjective” redundancies in known formalisms: bureaucratic aspects of syntax, as one might say, are those that are irrelevant to the behaviour of proofs (i.e. their interaction through cut-elimination), that is, to their use (or their meaning, following Wittgenstein’s notorious equation – Wittgenstein (2009), §43 – ?). Proof-nets, for instance, actually perform a quotient on sequent calculus proofs, since different proofs can be associated to distinct sequentializations of the same net, thus erasing irrelevant information.

It must be said that, whereas for a restricted class of proof-nets (relative to multiplicative linear logic) this quotient has a clear and convincing status, in the general case (concerning additive and exponential linear logic) it is still not clear where to draw the line between “essential” and “inessential” sequential information; this problem, which constitutes one of the main lines of technical research in transcendental syntax, comprising the crucial theme of computational complexity (see Girard (2012)), really appears like (and is presented by Girard as) a sort of “transcendental” enquiry on the positioning of the subject and his “epicycles” (the “inessential” components of syntax) with respect to logic.

Here is how Girard describes the sense of working on transcendental syntax:

If we exclude divine revelations, the only possibility consists in making things interact with alter egos. In the case of pasta, one alters the recipe and sees whether it tastes the same. Typically, put the salt before boiling, you will notice no difference; push the cooking time to 15mn and you get glue. To sum up, restrictions are not out of a Holy Book, but out of use. And use is internal, i.e., homogeneous to the object. Girard (2003)

The challenges discussed so far seem to converge on what we can call, freely taking inspiration from Kant, a criticist stand on syntax: by this we mean a philosophical position, still to be delineated in detail, which on the one side refuses the “mirror effects” of semantical explanation, as we have exhaustively explained, and on the other does not resign to the conventionalist refusal of any explanation in logic. After all, the conclusions of the arguments of the preceding section, namely that logical analysis, as conducted within a given formal system, presupposes a synthetic explanation of the syntactical tools involved, and that these forms of explanation should not end up into a semantic “tail-chasing” (rules justified by “meta- rules”), constitute the main philosophical tenets of the criticist position.

However, since we believe that transcendental syntax, as a technical, though philosophically oriented, program, must be evaluated on the field of its technical perspicuity, only an honest assessment of its results (those already obtained and, most of all, those yet to come) will tell for the validity of this proposal.”

He does reduce predicates to propositions in his latest. *That* was interesting, to say the least!

As for dialectics:

“This notion of virtual proof can be made more precise. Indeed (assuming A η B, i.e., atomic axiom links), a switching of A induces a sort of proof of ∼A. This *proof* can be recursively written in sequent calculus, starting from its conclusion ⊢ ∼A. In the sequent ⊢ Γ, formed with negations of subformulas of A, one chooses a compound formula D*, so that

[Check the original (if you dare!) as I can’t get linear logic connectives into these comments]

Remark that there could be other splittings of the context, but that, in some sense, they are not needed: we are not producing *all* proofs of the negation, only *enough* of them**.

* – The choice of D only affects the sequent version of the proof, not its proof-net.

** – This remark anticipates upon the recent idea that tests are only a selection of virtual counter-proofs.

This construction stops when one reaches literals. Here, the only choice is to accept the sequents as *axioms*, which as the etymology suggests, is a purely arbitrary decision*. The correctness criterion can be rephrased as the convergence of the normalisation procedure between the proof of A and the *proofs* of ∼A induced by switching. The sequentialisation theorem says that there are enough *proofs* of ∼A to characterise the proofs of A.

The criterion can be restated in terms of *orthogonality* of permutations (*trip* formulation; with the graph formulation, orthogonality of partitions): (i) Let us list the literals of A as the finite set N := {0, . . . , N − 1} (N ≠ 0). (ii) Our concern is about permutations σ, τ, . . . ∈ S(N). These permutations may stand either for proofs of A (σ(i) = j in case of a link between i and j, hence σ(j) = i) or for switchings of A (σ(i) = j when, starting downwards from literal i, the next literal to be visited (thus, upwards) is j; those permutations are not symmetrical).

(iii) If we say that σ, τ are orthogonal, σ ⊥ τ, when στ is cyclic (i.e., the (στ)^n(0) all distinct for n ∈ N; τσ is thus cyclic too), then the correctness criterion can be written as:

σ is a correct proof of A if σ ⊥ τ for all τ ∈ G(A), where G(A) stands for the permutations arising from switchings of A.

(iv) By the way, cut-elimination ensures that negation matches orthogonality:

(G(A))^⊥⊥ = (G(∼A))^⊥ (1)

We have indeed reached the perfect example of contradictory foundations; the duality is between the stabilised sets (S(A))^⊥ and (S(∼A))^⊥⊥ that can be seen as dual sets of *proofs*. These contradictory foundations are not inconsistent, for the very reason that most of those permutations are only deontic (i.e., they do not prove A, they only forbid (≠ refute) ∼A). Inconsistency is easily avoided by requiring that a *real* proof should be a symmetric permutation (σ = σ^−1) such that σ(i) ≠ i for all i: the product of two such permutations cannot be cyclic**.

These contradictory foundations suggest an existentialist approach; instead of proceeding from the top (A with its frozen logical rules), we proceed from the bottom. We start with a sort of *id*, an unstructured magma of permutations.

These permutations are put in duality by:

σ ⊥ τ ⇔ στ cyclic (2)

A *superego*, i.e., a formula A thus appears a posteriori as a set of permutations equal to its biorthogonal (with the previous notations, A := G(A)^⊥):

A = A^⊥⊥ (3)

* – Modern Greek: axiomatikos means *officer*, i.e., the guy whose orders, whatever stupid they may be, are beyond discussion.

** – I am not claiming that this restriction actually characterises proofs, which would be wrong. I am just getting rid of an illiterate objection against Hegel.

This existentialist approach is the origin of Geometry of Interaction, an important milestone in transcendental syntax. But not the last word: existentialism corrects the essentialist arrogance, but bends the stick too far by neglecting the idea of law. Except in the limited case of multiplicatives, we shall never get any certainty, relative or not, in this way. The multiplicative case is special in the sense that the previous analysis yields an absolute certainty: apodictic is the right expression in that case. Let us now express normalisation in this case; one knows (e.g., the Principle of the Tortoise, see infra) that cut can be reduced to Modus Ponens:

[More math]

which could become:

ρ := Mτ(1 − στ)^(−1)M = M(1 − τσ)^(−1)τM (6)

provided the series (5) converges. Since its terms are of discrete norms 0 or 1, the only possible way is that:

στ is nilpotent (7)

Indeed, coming back to the correctness criterion, we see that τ is orthogonal to all elements of A⊗∼B; if θ ∈ S(B) ⊂ ∼B is a switching of B, then the product of σ and τ ∪ θ is cyclic, what forces the nilpotency of στ. If we implement σ, τ on Hilbert space, then (6) expresses the solution of a *feedback equation*:

given x ∈ C^M, find x′ = ρ(x) ∈ C^M and y, y′ ∈ C^{N∖M} such that:

τ(x + y) = x′ + y′ (8)

σ(y′) = y (9)

This equation expresses one of the deepest intuitions about implication and cut-elimination: implication is a connexion (typically, A ⊸ A is an extension cord) and cut is plugging; the feedback equation yields the outcome of the plugging, thus proposing an equivalent plugging-free connexion. Nilpotency expresses the absence of short-circuit.”
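The orthogonality test quoted above (σ ⊥ τ when στ is cyclic) is concrete enough to run; a minimal sketch (the example permutations are invented, standing in for an axiom-link pairing and a switching):

```python
def compose(sigma, tau):
    """Product sigma∘tau of permutations given as lists: i -> sigma[tau[i]]."""
    return [sigma[tau[i]] for i in range(len(tau))]

def is_cyclic(perm):
    """True when perm is a single N-cycle: the orbit of 0 visits every point."""
    seen, i = set(), 0
    while i not in seen:
        seen.add(i)
        i = perm[i]
    return len(seen) == len(perm)

sigma = [1, 0, 3, 2]   # a symmetric, fixed-point-free pairing (0 1)(2 3)
tau = [2, 3, 1, 0]     # a switching-style permutation
assert is_cyclic(compose(sigma, tau))        # sigma ⊥ tau: one 4-cycle
assert not is_cyclic(compose(sigma, sigma))  # sigma∘sigma = identity: not cyclic
```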

Point being not to understand the math but yeah, suffice to say, it’s been taken care of already, not forgotten.

As for what trees probably “use”:

“This book deals with the role of curvature, a neglected dimension, in guiding chemical, biochemical and cellular processes. The curved surfaces that concern us might be those traced out by the head groups of phospholipid molecules that spontaneously self-assemble to form membranes and other building blocks of biology. Or they can be the surfaces of proteins involved in catalysis. They are provided in abundance par excellence by inorganic chemistry. In biology these dynamic entities have a marvellous capacity for self-organisation and self-assembly which is beginning to be understood. They transform one shape to another under the influence of the forces of nature with an astonishing ease that allows them to manage resources, direct complex sequences of reactions, and arrange for delivery, all on time. Shape determines function, and the energetics of function dictates the optimal structure required. At least that is our thesis.”

Implacably pervasive catallaxy. *BUSINESS*, as usual.

[Reply]

Posted on May 3rd, 2015 at 8:10 pm

@ No need for time series even, just consider lambda calculus. You can alpha-rename “let a = a + 10 in e” to “let a’ = a + 10 in e[a/a’]”, which shows it is basically an issue of variable scopes (that is, two distinct variables which happen to have the same name). Furthermore, if I remember correctly, all sequential programs can be mechanically converted to lambda calculus with a bit of effort.
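The scoping point can be sketched directly (a toy illustration; Python's own shadowing plays the role of the alpha-renamed binder):

```python
# "let a = a + 10 in e": the `a` on the right and the bound `a` are different
# variables that happen to share a name. Renaming the binder makes this plain:
a = 5                # the outer a
a_prime = a + 10     # "let a' = a + 10 in ..." after alpha-renaming
assert a_prime == 15 and a == 5

# Function scope shows the same shadowing, with no time index needed:
def body(a):         # this parameter is a fresh bound variable, not the outer a
    return a * 2
assert body(a_prime) == 30 and a == 5
```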

[Reply]

Posted on May 6th, 2015 at 9:37 pm