De Morgan laws of linear logic

$\newcommand{\par}{\operatorname{\wp}}\newcommand{\with}{\operatorname{\&}}$ Well, you can prove the De Morgan identities using the one-sided sequent calculus, since a two-sided sequent $\;\Gamma \vdash \Delta\;$ is provable if and only if the sequent $\;\vdash \Gamma^\bot, \Delta\;$ is provable in the one-sided calculus.
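
As a quick sanity check on this translation: the two-sided identity sequent $\;A \vdash A\;$ corresponds to the one-sided sequent $\;\vdash A^\bot, A\;$, which is exactly the identity axiom of the one-sided calculus.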


$$\dfrac{\dfrac{\dfrac{}{\vdash A^\bot, A}\text{id}\quad\dfrac{}{\vdash B^\bot, B}\text{id}}{\vdash A^\bot, B^\bot, A\otimes B}\otimes}{\vdash A^\bot\par B^\bot, A\otimes B}\par\qquad\dfrac{\dfrac{\dfrac{}{\vdash A^\bot, A}\text{id}\quad\dfrac{}{\vdash B^\bot, B}\text{id}}{\vdash A^\bot\otimes B^\bot, A, B}\otimes}{\vdash A^\bot\otimes B^\bot, A\par B}\par$$


$$\dfrac{\dfrac{\dfrac{}{\vdash A, A^\bot}\text{id}}{\vdash A\oplus B, A^\bot}\oplus\quad\dfrac{\dfrac{}{\vdash B, B^\bot}\text{id}}{\vdash A\oplus B, B^\bot}\oplus}{\vdash A\oplus B, A^\bot\with B^\bot}\with\qquad\dfrac{\dfrac{\dfrac{}{\vdash A, A^\bot}\text{id}}{\vdash A, A^\bot\oplus B^\bot}\oplus\quad\dfrac{\dfrac{}{\vdash B, B^\bot}\text{id}}{\vdash B, A^\bot\oplus B^\bot}\oplus}{\vdash A\with B, A^\bot\oplus B^\bot}\with$$
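
Note that each of these one-sided sequents, read back through the two-sided correspondence, packages both halves of a De Morgan equivalence at once. Taking the first additive derivation and using the involutivity of linear negation ($C^{\bot\bot} = C$), the endsequent $\;\vdash A\oplus B, A^\bot\with B^\bot\;$ yields both

$$ (A \oplus B)^\bot \vdash A^\bot \with B^\bot \qquad\text{and}\qquad (A^\bot \with B^\bot)^\bot \vdash A \oplus B, $$

depending on which formula we take as $\Gamma^\bot$.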



This answers the last part of your question: is the proof you present valid or invalid, and if it is invalid, why? I also discuss the question you imply without quite asking, which is whether formulas on the same side of a sequent can commute. As I note below, I am still becoming comfortable with reasoning about linear logic myself, so please correct me if I've made any errors here.

Validity of the proof

I believe your proof is invalid, or at least incomplete. The proof in Graham Kemp's answer relies on the following rule:

A two-sided sequent

$$ \Gamma \vdash \Delta $$

is provable if and only if the sequent

$$ \vdash \Gamma^{\bot}, \Delta $$

is provable in the one sided calculus.

Your proof unfolds on the wrong side of the turnstile, and so fails this test. Assuming Graham Kemp has correctly stated the rule, it's a biconditional: the two-sided sequent (the required proof result) is provable exactly when we can derive the corresponding one-sided sequent with all formulas to the right of the turnstile. Yours are all on the left.

The why of the rule

To explain this in detail, we need to go over some of the basics of sequent calculus. I'm basing much of this on my understanding of sequent calculus in the context of classical logic, but I believe this argument follows for linear logic too. Indeed, it's only at the very end that the two really differ.

Basics of sequent calculus

Remember that in sequent calculus, the turnstile ($\vdash$) works like implication, and the formulas to the left and right of the turnstile are joined by conjunction and disjunction, respectively. In other words, the commas to the left of the turnstile mean "and," and the commas to the right of the turnstile mean "or."
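
For example, under this reading the sequent

$$ \mathrm{A}, \mathrm{B} \vdash \mathrm{C}, \mathrm{D} $$

can be glossed as "$\mathrm{A}$ and $\mathrm{B}$, taken together, imply $\mathrm{C}$ or $\mathrm{D}$."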

It follows that a sequent is valid iff at least one formula on the left does not hold, or at least one formula on the right holds.

I don't know for sure whether the forms of conjunction and disjunction here are additive, multiplicative, or classical. But for our purposes, I don't think it matters. Under whatever interpretation of $\wedge$ and $\vee$ applies, the sequent

$$ \mathrm{A}_1 \wedge \dots \wedge \mathrm{A}_n \vdash \mathrm{B}_1 \vee \dots \vee \mathrm{B}_m $$

is valid iff at least one $\mathrm{A}_i$ does not hold, or at least one $\mathrm{B}_j$ holds.
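
In the smallest nontrivial case, this says that $\;\mathrm{A} \vdash \mathrm{B}\;$ is valid iff $\mathrm{A}$ does not hold or $\mathrm{B}$ holds, which is the familiar truth condition for implication.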

The meanings of empty expressions

We can also infer from standard definitions of empty products and empty sums that the empty conjunction is always true, and the empty disjunction is always false. Therefore, for any $\mathrm{A}$, the statement

$$ \vdash \mathrm{A} $$

asserts that $\mathrm{A}$ always holds, while the statement

$$ \mathrm{A} \vdash $$

asserts that $\mathrm{A}$ never holds. What it means to "hold" may be different in this context than in classical logic, but the syntactic structure is the same.
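
Pushing this reading to its extreme, the completely empty sequent

$$ \vdash $$

would assert that the always-true empty conjunction implies the always-false empty disjunction, which is why the empty sequent should not be derivable in any sensible calculus.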

Conclusions about the proof

Collecting all this together, it becomes clear that your proof, though sound step by step, doesn't prove what you set out to prove. The sequent

$$ \mathrm{A}, \mathrm{A}^{\bot} \vdash$$

is provable because it says that $\mathrm{A}$ and $\mathrm{A}^{\bot}$ together entail the empty disjunction, i.e., falsity: they can never hold simultaneously. All of your subsequent calculations are perfectly valid, but they only show that the terms on the left inevitably fail to hold together.
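
In fact, if I apply Graham Kemp's rule to your sequent, with $\Gamma = \mathrm{A}, \mathrm{A}^{\bot}$ and $\Delta$ empty, and use the fact that linear negation is involutive ($\mathrm{A}^{\bot\bot} = \mathrm{A}$), the one-sided counterpart comes out as

$$ \vdash \mathrm{A}^{\bot}, \mathrm{A}, $$

which is just the identity axiom. So your sequent is certainly provable; it just isn't the De Morgan statement you were after.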

If we could show that either $ \mathrm{A} $ or $ \mathrm{A}^{\bot} $ is certainly true, then perhaps we could convert your proof into a proof by contradiction. And it may be possible to convert your proof into Graham Kemp's using some kind of duality or symmetry property that linear logic has. I'm not sure, because I too am still trying to grok some of these details. But as it stands, I don't think what you have is a complete proof.

This is a good moment to recall that the meaning of negation in linear logic isn't the same as in classical logic. So we don't have the law of the excluded middle in its usual form. We may have it in another more limited form, but unless we add something to your proof as it stands, it could still be that neither $ \mathrm{A} $ nor $ \mathrm{A}^{\bot} $ is certainly true, which is what would be required to move one of the terms to the other side of the turnstile.
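
For what it's worth, my understanding is that classical linear logic does validate a multiplicative form of excluded middle, since the identity axiom and the $\par$ rule give

$$ \dfrac{\dfrac{}{\vdash \mathrm{A}, \mathrm{A}^{\bot}}\text{id}}{\vdash \mathrm{A} \par \mathrm{A}^{\bot}}\par $$

while the additive form $\;\vdash \mathrm{A} \oplus \mathrm{A}^{\bot}\;$ is not derivable in general. Neither fact by itself tells us that $\mathrm{A}$ or $\mathrm{A}^{\bot}$ "certainly holds" in the sense your proof would need.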

The rules for commutation

Commutation is allowed on either side of the turnstile, as long as nothing crosses over it without justification. This would follow automatically if the terms in the sequents were joined by classical conjunction and disjunction. But even if that's not the case (as I increasingly suspect), the Stanford Encyclopedia of Philosophy page on linear logic confirms that order doesn't matter for formulas on either side of the turnstile:

"Here, the left and right side of sequents are multisets of formulas: thus, the order of formulas in these contexts does not matter but their multiplicity does matter."
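
In presentations where the contexts are sequences rather than multisets, the same freedom is made explicit by an exchange rule, something like

$$ \dfrac{\vdash \Gamma, \mathrm{A}, \mathrm{B}, \Delta}{\vdash \Gamma, \mathrm{B}, \mathrm{A}, \Delta}\ \text{exchange} $$

which lets adjacent formulas swap places wherever needed.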

So although I don't think your proof was valid, your assumptions about commutation were correct.