To this point, the object of our study has been a family of logics called propositional calculi; in particular, we have studied systems known as the intuitionist and the classical propositional calculi, IC and PC respectively. We now expand our study to include systems that are in a sense metatheoretical, that tell us things about our propositional logics, but which we shall study as object systems just as we did the calculi of the preceding sections.
In our study of propositional calculi, we have proved a number of important metatheorems, and in doing so, we have begun to put forward a 'theory of deduction'. This theory of deduction is informal, for it is stated in the unformalized language of mathematical English (this is in contrast to the objects of study, the propositional calculi themselves, which are indeed formalized, and formalized within the metalanguage of mathematical English). We may think of what we are about to do as a formalization of a part of our developing theory of deduction. We shall therefore define new systems, systems which will contain the notation of the propositional calculus and more besides, systems which may be thought of as formalizing part of the process of deducing propositional calculus statements from other propositional calculus statements. The principal way in which we shall use these systems is to provide decision procedures both for the propositional calculi we have been examining and for many of the modal systems we shall study; other points of metatheory may also be conveniently handled by them, however. Systems of the kind studied here were first developed in detail in Gentzen 1934; the rules we use are generally in the form of Ketonen 1944; our notation and terminology generally follows Kleene 1952.
Formulas, sequences, and sequents
As we have remarked, the systems of the present chapter will include all the symbols of the propositional calculus and the rules of formation for those symbols; a string of symbols will be called a wff here iff it is a wff in the standard propositional calculi. In addition to the symbols and rules of the propositional calculus, the present systems will have as part of their formal body of signs two additional symbols, the comma ',' and the arrow '®'. The comma will be used to form formal objects called 'sequences'; sequences may be defined as follows (note that we use upper case Greek letters as metalinguistic signs for sequences, just as we use lower case Greek for formulas):
(1) An individual wff constitutes a sequence;
(2) Where G and D are sequences, so too is
G, D.
For purposes of clause (1), we include among the wffs a very special one, the 'null' formula, the blank space. Thus a sequence is a string of zero or more formulas, separated (when there are more than one) by commas.
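Since the wffs here are just those of the propositional calculi, written in Polish notation, the formation rules are easy to mechanize. The following sketch is my own illustration, not part of the text: it checks well-formedness of a Polish-notation string, with the empty string standing in for the null formula and a Python list standing in for a sequence.

```python
# Connective arities for the Polish notation used throughout:
# C, K, A are binary, N is unary; lower-case letters are variables.
ARITY = {'C': 2, 'K': 2, 'A': 2, 'N': 1}

def wff_end(s, i=0):
    """Index just past the single wff beginning at position i of s;
    raises IndexError if the string is too short to complete it."""
    j = i + 1
    for _ in range(ARITY.get(s[i], 0)):
        j = wff_end(s, j)
    return j

def is_wff(s):
    """True iff s is exactly one well-formed formula."""
    if s == '':
        return True          # the 'null' formula of clause (1)
    try:
        return wff_end(s) == len(s) and all(
            ch in ARITY or ch.islower() for ch in s)
    except IndexError:
        return False

def is_sequence(formulas):
    """A sequence: zero or more wffs (here, a list of wff strings)."""
    return all(is_wff(f) and f != '' for f in formulas)
```

For example, `is_wff('CCpqp')` holds, while `'Cp'` (a conditional missing its consequent) is rejected.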
The arrow will be used to form formal objects from sequences; these objects will be called 'sequents'. Thus we refer to the systems of the present chapter as 'sequent logics'. The definition of a sequent is simple:
Where G and Q are sequences,
G ® Q
is a sequent.
The sequent G ® Q may be read 'G yields Q'. Where Q is a single formula and g_{1}, . . ., g_{n} are the formulas constituting G, this sequent might be interpreted as a formalization of the metalinguistic statement
g_{1}, . . ., g_{n} ⊢ q
where q is the one member of Q. Clearly, then, the simple ® Q might be interpreted as the statement 'Q is a theorem'. The notation of the sequent logics is a bit broader at this point, however, than is that of the metalanguage we have been using, for we have attached no sense, till now, to a metalinguistic expression which would have more formulas than one to the right of its sign of deducibility. In sequent logic we must supply an interpretation for this situation. It is clear that when we write G ® Q, we mean to assert that Q follows from the conjunction of all the formulas of G, of all the formulas to the left of the arrow; we may say that the sequence to the left of the arrow is interpreted conjunctively, and this agrees with our intuitive grasp of the set of formulas which may appear to the left of the metalinguistic sign ⊢. To maintain certain structural parallels between the left side of the arrow and the right, however, the sequence to the right will not be interpreted conjunctively, but rather disjunctively. G ® Q will be meant to assert that the disjunction of all the formulas of Q follows from the conjunction of all those of G. For convenience, from now on we shall refer to the sequence to the left of the arrow as the 'antecedent' and that to the right as the 'succedent' of the sequent.
There is another point of interpretation needed here which does not arise in the ordinary metalanguage of the PC. We have remarked that ® a could be read 'a is a theorem'; it states, in short, that a is deducible from the empty set of hypotheses. But in sequent logic, the sequence Q in the sequent G ® Q may be empty; how is the sequent G ® to be interpreted? Just as the maintenance of structural parallels requires that a nonnull sequence right of the arrow be interpreted differently, in general, from one left of it, so too will the null sequence right differ from the null sequence left. Where ® Q means that the disjunction of Q's formulas is a theorem, G ® may be taken to mean that the conjunction of G's formulas is contradictory. G ® may then be interpreted as the negation of the conjunction of the formulas of G. Another way of looking at it is to say that the empty sequence as antecedent is interpreted as a constant true proposition, and as succedent as a constant false proposition, the arrow always being a kind of implication.
We shall, in fact, have occasion to refer in a very specific manner to interpretations of sequents in ordinary propositional logic. For these purposes, the 'interpretation of the sequent G ® Q in the ordinary propositional calculus' will be the formula Cgq, where
(1) If G contains more formulas than one, g will be the conjunction of those formulas;
(2) If Q contains more formulas than one, q will be the disjunction of those formulas;
(3) If G or Q contains exactly one formula, g or q respectively will be that formula;
(4) If G is empty, g will be a constant true formula;
(5) If Q is empty, q will be a constant false formula.
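The five clauses above can be transcribed directly. The sketch below is my own illustration, not code from the text; in particular, `'Cpp'` and `'NCpp'` are hypothetical stand-ins for the constant true and constant false formulas of clauses (4) and (5).

```python
TRUE, FALSE = 'Cpp', 'NCpp'   # assumed stand-ins for the constants

def big_op(op, formulas, if_empty):
    """Combine a list of Polish-notation wffs with a binary connective."""
    if not formulas:                 # clauses (4) and (5)
        return if_empty
    out = formulas[0]                # clause (3)
    for f in formulas[1:]:           # clauses (1) and (2): left-nested
        out = op + out + f
    return out

def interpret(antecedent, succedent):
    """The interpretation Cgq of the sequent G (R) Q in the ordinary
    propositional calculus."""
    g = big_op('K', antecedent, TRUE)    # antecedent read conjunctively
    q = big_op('A', succedent, FALSE)    # succedent read disjunctively
    return 'C' + g + q
```

For instance, `interpret(['p', 'q'], ['r'])` gives `'CKpqr'`, and a sequent with empty antecedent, such as ® p, is interpreted as a conditional with the constant true formula as antecedent.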
The sequent is interpreted as showing a relationship of deducibility between its succedent and its antecedent; this is the deducibility which is formalized by the sequent logic. But there is a notion of deducibility which applies to the sequent logic itself, the notion of the deducibility of one sequent from another, or from a preceding set of sequents.
To express this notion of deducibility, we must move into the (unformalized) metalanguage in which we talk about sequent logics; in short, we shall state a number of rules of inference which will permit us to generate new sequents from old; the old sequents are the premisses and the new are the conclusions. We shall express the rules of inference by writing the premiss or premisses above and the conclusion below a horizontal line. Most of the rules we use are one-premiss rules, but a few of them employ two premisses. Proofs in the system will be constructed as strings of instances of the rules; each step except the last will provide in its conclusion a premiss for further steps, and premisses will be separated from their conclusions by horizontal lines. Since there are two-premiss rules, branchings will occur in many proofs, and indeed, the proof will fall into a tree form, with the final conclusion at the very bottom; this final conclusion may be referred to as the 'end sequent'. A continuous line of proof from the tip of one of the branches to the end sequent will be called a 'proof string'. The tips of the branches are our starting-points, and the top sequent in every branch will be an axiom of the system; the axioms are all defined by one axiom schema; where a is a wff, not including the null formula,
a ® a
is an axiom of the sequent logics we are discussing.
The rules of inference fall into two general classes. The function of one of these classes of rules is to introduce new symbols into sequents, specifically to introduce instances of the connectives of the propositional calculus; these rules are called the 'logical' rules of inference. The rules of the other set do not deal with specific logical symbols, but rather with 'housekeeping chores', in general with the manipulation of certain formulas within the sequents; these rules are called the 'structural' rules of inference. Four of the structural rules 'pair off', defining two 'structural functions' with one member of each functional pair affecting the antecedent and the other the succedent of the sequent. We display these rules in the following table.
rule           affecting antecedent         affecting succedent

weakening      G ® Q                        G ® Q
               -----------                  -----------
               a, G ® Q                     G ® Q, a

contraction    a, a, G ® Q                  G ® Q, a, a
               ------------                 ------------
               a, G ® Q                     G ® Q, a

(In the intuitionist system, weakening in the succedent is subject to the restriction that Q be empty.)
The weakening rules permit the addition of an arbitrary wff to either the antecedent or the succedent of a sequent; the contraction rules permit the dropping of duplicated formulas. An additional functional pair of rules is often stated in addition to weakening and contraction; these rules, commonly called 'interchange' or some similar name, permit the rearrangement of formulas in the antecedent and succedent. Rather than stating this pair of rules specifically, we shall simply consider a sequent standing in a proof to represent all sequents formable by simply permuting the formulas in the antecedent and in the succedent of the original sequent; thus, the location of a formula in a given antecedent or succedent will not affect the way it enters into the application of rules of the system. In addition to the rules in the table above, there is one more rule classified as structural, a rule about which we shall be hearing a considerable amount; it is called 'cut':
G ® Q, a        a, D ® L
------------------------  CUT
G, D ® Q, L
Cut differs from the other structural rules, of course, in that it has two premisses. It also differs from them and from all the other rules in that a formula appearing in a premiss (the formula a) fails to occur itself or as a subformula of another formula in the conclusion.
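The structural rules lend themselves to direct transcription. The sketch below is my own illustration, not the book's: a sequent is a pair of lists, and order within each side is ignored, as the interchange convention above licenses. The intuitionist restriction on succedent weakening is included on the strength of the later discussion of LIC.

```python
def weaken(sequent, a, side, intuitionist=False):
    """Add the wff a to one side of the sequent (ante, succ).
    Intuitionistically, succedent weakening is taken to require an
    empty succedent (it raises the count only from zero to one)."""
    ante, succ = sequent
    if side == 'antecedent':
        return (ante + [a], succ)
    if intuitionist and succ:
        raise ValueError('intuitionist succedent weakening needs an empty succedent')
    return (ante, succ + [a])

def contract(sequent, a, side):
    """Drop one of (at least) two occurrences of a duplicated wff a."""
    ante, succ = sequent
    target = ante if side == 'antecedent' else succ
    if target.count(a) < 2:
        raise ValueError('formula is not duplicated')
    target = list(target)
    target.remove(a)
    return (target, succ) if side == 'antecedent' else (ante, target)
```

For example, weakening (['p'], ['q']) with r in the antecedent yields (['p', 'r'], ['q']), and contracting a duplicated p undoes a prior weakening.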
We now move to the logical rules. As with the structural rules, the logical rules fall into functional pairs, one pair for each of the operators of the propositional calculus. One member of each pair governs the introduction of an operator into the antecedent of a sequent, and the other governs its introduction into the succedent.
Rule introduces:   Into Antecedent                        Into Succedent

C                  G ® Q°, a        b, D ® L              a, G ® Q, b
                   -------------------------  C®          ------------  ®C
                   Cab, G, D ® Q°, L                      G ® Q, Cab

                   with Q° = Q for the Classical System; empty otherwise

K                  a, b, G ® Q                            G ® Q, a        G ® Q, b
                   ------------  K®                       ------------------------  ®K
                   Kab, G ® Q                             G ® Q, Kab

A                  a, G ® Q        b, G ® Q               G ® Q, a°, b°
                   ------------------------  A®           --------------  ®A
                   Aab, G ® Q                             G ® Q, Aab

                   with one of a°, b° empty intuitionistically; both present classically

N                  G ® Q, a                               a, G ® Q
                   ----------  N®                         ----------  ®N
                   Na, G ® Q                              G ® Q, Na

                   with Q empty intuitionistically (in ®N)
These rules will be called 'the rule for C (or other connective involved) in antecedent (in succedent)' or the rule 'C-left (C-right)', etc. They will most commonly be designated in even briefer form by writing an arrow with the appropriate connective letter to its left or right as the rule being named happens to be, respectively, an antecedent or a succedent rule. The above rules, then, are C®, ®C, K®, ®K, A®, ®A, N®, ®N (notation due to Kleene 1952). It will be noticed that certain formulas in the premisses of the rules (both logical and structural) will pass through the application of the rule unchanged; these may be referred to as parametric formulas. The formulas explicitly shown in the premisses of the rules (a and b) will be called 'side formulas', while those specifically shown in the conclusions (those introduced) will be called 'main formulas' or 'principal formulas'.
Sequent logic and propositional calculus
Though we have listed above only one set of rules, it is evident that we have defined two separate systems of sequent logic. The specifications on the rules of weakening in the succedent, C®, ®A, and ®N are such that any proof employing only the intuitionistic versions of these rules will be a proof in the intuitionistic sequent logic, which we shall call LIC. A proof employing the classical versions of these rules is a proof in the classical sequent logic, called LPC. Our first task will be to show that LIC and LPC are equivalent, respectively, to IC and PC.
We cannot, of course, have equivalence here in the sense that every theorem of the one system will be a theorem of the other and vice versa, for IC and PC have as their theorems wffs, while the theorems of LIC and LPC are sequents. A notion of equivalence can, however, be formulated in a manner convenient for our purpose. We shall say that a sequent logic and an ordinary logic are equivalent iff the formula a is a thesis of the latter when and only when the sequent ® a is a theorem of the former. We state part of the required equivalence in the following:
*3.1  If a is a theorem of IC (PC), then ® a is a theorem of LIC (LPC). 
We start out by proving that when a is an axiom of IC, ® a is provable in LIC (and so in LPC) and furthermore, that axiom C3, CCCpqpp, is provable in LPC. We first prove axiom C2:
p ® p
---------  weakening
q, p ® p
---------  ®C
p ® Cqp
---------  ®C
® CpCqp
The end sequent of this proof corresponds to axiom C2 of IC. Actually, it should be evident that this end sequent also corresponds to all substitution instances of axiom C2. This will be the case with each sequent which we prove corresponding to an axiom; any substitution instance of the axiom may be proved in a manner exactly similar to the proof of the axiom. This will mean, of course, that we have in these systems the equivalent of the rule of substitution for variables. Note, by the way, that in the above proof we have placed the name of the rule employed at each step at an end of the horizontal line associated with that step. This will be our common way of justifying steps in these proofs. We turn to axiom C1 of IC:
                          q ® q        r ® r
                          ------------------  C®
             p ® p        Cqr, q ® r
             -----------------------  C®
p ® p        CpCqr, p, q ® r
----------------------------  C®
Cpq, p, CpCqr, p ® r
--------------------  contraction
Cpq, CpCqr, p ® r
-----------------  ®C
CpCqr, Cpq ® Cpr
----------------  ®C
CpCqr ® CCpqCpr
---------------  ®C
® CCpCqrCCpqCpr
The final sequent in this proof corresponds to axiom C1, Frege; again, all substitution instances of Frege are clearly derivable in parallel proofs. The reader should have no trouble constructing proofs for sequents corresponding to the rest of the axioms of IC; we shall here offer proofs for a couple more of them, however, as further examples of the use of the rules of inference.
p ® p        r ® r                q ® q        r ® r
------------------  C®            ------------------  C®
Cpr, p ® r                        Cqr, q ® r
---------------  weakening        ---------------  weakening
p, Cpr, Cqr ® r                   q, Cpr, Cqr ® r
------------------------------------------------  A®
Apq, Cpr, Cqr ® r
-----------------  ®C
Cpr, Cqr ® CApqr
Two more applications of the rule ®C will give an end sequent corresponding to axiom A3 (and, again, all its substitution instances). We present one more example of a proof in LIC of an IC axiom:
             p ® p
             --------  N®
p ® p        Np, p ®
--------------------  C®
CpNp, p, p ®
------------  contraction
CpNp, p ®
----------  ®N
CpNp ® Np
----------  ®C
® CCpNpNp
The end sequent of this proof corresponds to reductio ad absurdum, axiom N1. Proof of sequents corresponding to the remaining IC axioms is left to the reader. We may take it as shown, then, that if a is an axiom of IC, then ® a is a theorem of LIC; indeed, considering our arguments accompanying the above proofs, we may say that if a is any substitution instance of an axiom of IC, then ® a is provable in LIC.
Let us now assume that the sequents ® a and ® Cab are both provable in LIC; we have then (using for the first time the rule of cut)
         ® a        b ® b
         ----------------  C®
® Cab    Cab ® b
----------------  CUT
® b
Using cut, then, we are able to move from the assertion of ® a and ® Cab to the assertion of ® b. The system LIC then contains an analogue of the rule of detachment as a derived rule of inference. Since all substitution instances of sequents corresponding to axioms of IC are provable, this means that substitution for variables is also a derived rule of inference, and so we have shown that *3.1, as it applies to IC and LIC, holds; that is, that if a is a theorem of IC, then ® a will be a theorem of LIC.
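The derived detachment rule can be mirrored mechanically: given theses corresponding to ® Cab and ® a, the thesis ® b is recovered by splitting the Polish-notation conditional. The helper below is my own sketch, not the book's; it uses a small arity table for the Polish connectives.

```python
ARITY = {'C': 2, 'K': 2, 'A': 2, 'N': 1}

def wff_end(s, i=0):
    """Index just past the wff starting at position i of s."""
    j = i + 1
    for _ in range(ARITY.get(s[i], 0)):
        j = wff_end(s, j)
    return j

def detach(major, minor):
    """From theses corresponding to (R) Cab (major) and (R) a (minor),
    return the consequent b."""
    if major[0] != 'C':
        raise ValueError('major premiss is not a conditional')
    k = wff_end(major, 1)            # end of the antecedent a
    if major[1:k] != minor:
        raise ValueError('minor premiss does not match the antecedent')
    return major[k:]
```

For example, `detach('CCpqr', 'Cpq')` returns `'r'`.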
We now turn to the systems PC and LPC. The reader may easily satisfy himself that if a sequent is provable by the LIC versions of our rules, then it will be provable by the LPC versions. In addition, we will have certain proofs which are characteristic of LPC.
p ® p
-----------  weakening (classical)
p ® q, p
-----------  ®C
® Cpq, p        p ® p
---------------------  C® (classical)
CCpqp ® p, p
------------  contraction
CCpqp ® p
----------  ®C
® CCCpqpp
This last end sequent corresponds to Peirce, axiom C3. It is clear, then, that the broadening of the rules of LIC to those of LPC permits the proof in LPC of sequents corresponding to all theses of PC, and so if a is a PC thesis, ® a is a theorem of LPC, and *3.1 holds for PC and LPC.
There are other ways to obtain the above result; we show two of them to illustrate the use of the classical versions of the rules.
p ® p
--------  ®N (classical)
® p, Np
--------  N®
NNp ® p
--------  ®C
® CNNpp
This last sequent corresponds to strong double negation, which if added to IC yields PC; note that only the rule ®N is used here in its classical version.
p ® p
-----------  weakening (classical)
p ® q, p
-----------  ®C
® Cpq, p
-----------  ®A (classical)
® ACpqp
This last end sequent again corresponds to a formula which if added to IC extends it to PC; here only weakening and ®A in their classical forms are used. We thus see that there is a certain amount of redundancy in the rules we have offered for classical sequentlogic.
We now quickly state and prove a metatheorem which expresses the basic difference between the classical and the intuitionist sequentlogics:
*3.2  Where G ® Q is provable in LIC, Q is either empty or contains one formula only. 
We note that the succedent of an LIC axiom always has exactly one member; further, the only rules that increase the number of formulas in a succedent are weakening and ®N. But in LIC these rules can be used only to increase that number from zero to one. By induction on the number of steps needed to prove a sequent in LIC, then, *3.2 follows.
We now turn to the converse of *3.1, which is:
*3.3  If ® a is provable in LIC (LPC), then a is a thesis of IC (PC). 
We prove *3.3 by first proving a lemma which is actually more general than *3.3:
*3.4  If a is the formula which is the interpretation (as previously defined) in IC (PC) of the sequent G ® Q, then if G ® Q is a theorem of LIC (LPC), a is a thesis of IC (PC). 
First of all, an axiom of one of our sequent logics is always of form
a ® a
The interpretation of this sequent as earlier defined is the formula Caa, which is a thesis of both IC and PC. Now let S be a sequent of LIC or LPC; we shall then call i(S) the formula of IC or PC which is the interpretation of S. Consider any of the onepremiss rules of LIC (LPC), letting S_{1} be the premiss and S_{2} the conclusion of that rule. We shall see that in the case of each of these rules, the statement 'If i(S_{1}) then i(S_{2})' is a derived rule of inference in IC (PC). Similarly for the twopremiss rules; taking S_{1} and S_{2} as the premisses and S_{3} as the conclusion of any of these rules, we shall find that the statement 'If i(S_{1}) and i(S_{2}) then i(S_{3})' i is a derived rule of inference of IC (PC). The presence of these rules of inference in IC (PC) coupled with the fact that all sequentlogic axioms have their interpretations as theses of IC and PC will be sufficient to prove *3.4. We turn now to proof of these rules of inference in IC (PC).
The general form of i(S) where S is the premiss of the rule of weakening in the antecedent is Cgq for nonempty and C1q (with 1 as a constant true proposition) for empty antecedent. CKagg and Ca1 (both IC (PC) theses) along with syl justify the movement in IC (PC) respectively to CKagq and Caq, the forms of the interpretation of the conclusion of weakening in the antecedent in the nonempty and the empty antecedent cases. Thus there is in IC (PC) a rule of inference paralleling this sequent logic rule. Similarly for weakening in the succedent; the relevant i(S) forms here for S as premiss are Cgq and CgÆ (Æ = constant false proposition) for the nonempty and the empty succedent cases. By the theses CqAqa and CÆa we move to the proper conclusion forms, CgAqa and Cga.
Analogues of the rules of contraction in the antecedent and in the succedent are present in IC (PC) by virtue of SBC and the theses EpKpp and EpApp. The permission to consider all permutations of a sequence as equivalent to that sequence must also be considered here; this permission is justified by SBC and the theses EKpqKqp and EApqAqp.
The last structural rule is cut; for the system LIC, where S_{1} and S_{2} are the left and the right premisses of cut, by *3.2 the sequence Q must be empty, and so i(S_{1}) and i(S_{2}) are Cga and CKadl (or Cal for empty D). By SSI we move immediately from these formulas to CKgdl (or Cgl) as required for the interpretation of S_{3}, where S_{3} is the conclusion of cut. In LPC, we must consider the case where Q is nonempty; here i(S_{1}) is CgAqa, i(S_{2}) as before. In the classical PC, this i(S_{1}) is equivalent to CgCNqa and so to CKgNqa; this last formula with i(S_{2}) and SSI yields CKKgNqdl; classically, this easily transforms to CKgdAql, the interpretation of the conclusion of cut with nonempty G, D, Q, and L. The proof must be varied somewhat as different combinations of these sequences occur empty; details are left to the reader. We see, then, that the rule cut has an analogue in IC and PC.
Turning to the rule C®, we note that the interpretations i(S_{1}) and i(S_{2}) of the left and the right premisses (in LIC) are Cga and CKbgq, for nonempty G. In IC, modus ponens, CpCCpqq, and i(S_{1}) give CgCCabb, which comms and imports to CKCabgb. This with i(S_{2}) and SSI yields CKKCabggq; the iterated g in this formula is dropped, ultimately, by EpKpp, leaving us with the interpretation of the conclusion of this rule. Proof for the case in which G is empty is similar and is left to the reader, as is the proof for the rule in LPC.
Left to the reader too will be the proofs for the rest of the logical rules; we shall supply here, however, the key theorems of IC and PC employed in the proofs for each of these rules; working out the details of the proofs will then be simple. For ®C, use 2.45, CCKpqrCpCqr; for K®, the interpretations of the premiss and the conclusion will be the same, as they will for the classical ®A (for intuitionist ®A, use axioms A1 and A2, CpApq and CqApq); for ®K, use 2.37, CCpqCCprCpKqr; for A®, use axiom A3, CCprCCqrCApqr; for N®, use the equivalence ENpCpÆ and importation (a bit more detail is required in all these for the full classical cases); for the intuitionist ®N, use exportation and ENpCpÆ; for the classical ®N, use exportation and the 'definitional' equivalence ECpqANpq. We may now consider lemma *3.4 proved.
The proof of *3.3 follows immediately. If ® a is provable in LIC (LPC), then (with 1 as constant true) its interpretation C1a is an IC (PC) thesis by *3.4. But 1 is an IC (PC) thesis itself, and so by detachment we have a as an IC (PC) thesis.
From *3.1 and *3.3 comes the statement of equivalence of the ordinary propositional calculi and the sequent logics of this chapter:
*3.5  ® a is provable in LIC (LPC) iff a is an IC (PC) thesis. 
We now come to the metatheorem which is the heart and centre of the theory of sequent logics. Gentzen 1934 refers to it as his Hauptsatz, his principal assertion; Curry 1963 calls it the 'elimination theorem'; we shall follow Kleene 1952 in referring to it as the 'normal form theorem'.
This metatheorem is called the normal form theorem not because it has anything to do with a formula or sequent in some normal form, but because it states that any proof in our sequent logics may be put into a certain standard form. The normal form theorem will show, essentially, that any sequent provable in LIC or LPC will be derivable in a proof in which each succeeding step in a proof string will be no shorter than the step which precedes it. (An exception here is shortening of a sequent by contraction; we shall see later, however, that contraction as well as cut is redundant as a rule of inference.) As it turns out, every formula occurring in a sequent of a given proof string will occur as a subformula in every sequent below the sequent in which it initially occurred; this property of certain proofs in sequent logics may be called the 'subformula property'. The specific proofs always having the subformula property are those in which no application of the rule cut occurs; before going on to the normal form theorem, we state a metatheorem about the subformula property:
*3.6  If a formula occurs in a sequent S of a proof of LIC or LPC, then it will occur as a subformula of some formula in the end sequent of that proof, provided there is no application of the rule cut between S and the end sequent. 
Proof is immediate; it may be verified by examination of the rules other than cut.
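The subformula property becomes mechanically checkable once subformulas are enumerable. The helper below is an illustration of my own, not code from the text, using the Polish-notation conventions of this chapter (C, K, A binary; N unary; lower-case letters variables).

```python
ARITY = {'C': 2, 'K': 2, 'A': 2, 'N': 1}

def wff_end(s, i=0):
    """Index just past the wff starting at position i of s."""
    j = i + 1
    for _ in range(ARITY.get(s[i], 0)):
        j = wff_end(s, j)
    return j

def subformulas(s):
    """The set of subformulas of the wff s, s itself included."""
    out = {s}
    i = 1
    for _ in range(ARITY.get(s[0], 0)):   # walk over the immediate parts
        j = wff_end(s, i)
        out |= subformulas(s[i:j])
        i = j
    return out
```

For example, `subformulas('CNpq')` is the set {'CNpq', 'Np', 'p', 'q'}; checking *3.6 against a cut-free proof amounts to verifying that each formula of a sequent lies in the subformula closure of the end sequent.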
We now begin to move toward the statement and proof of the normal form theorem, which in the presence of *3.6 will permit us to assert that all theorems of LIC (LPC) are provable in proofs having the subformula property. It will be simplest to do so by slightly altering our systems; what we shall do is to replace the rule cut by a very similar rule which simplifies the requisite proofs but is exactly equivalent to cut in so far as the present matter is concerned. The new rule is called 'mix' and is as follows:
G ® Q        D ® L
------------------  MIX
G, D_{m} ® Q_{m}, L
m is a formula having one or more occurrences in each of D and Q, and is called the 'mix formula'. The sequences D_{m} and Q_{m} are like D and Q except for having all occurrences of m deleted. The system formed by replacing cut by mix in LIC (LPC) is called LIC' (LPC'). Now suppose that the rule cut is applied somewhere in an LIC (LPC) proof (using the notation of the original presentation of cut). If this cut is replaced by a mix with a, the formula eliminated by the cut, as mix formula, the conclusion will be G, D_{a} ® Q_{a}, L (with D_{a} and Q_{a} free of the formula a) rather than G, D ® Q, L. But if D or Q originally contained occurrences of a, these occurrences may now be restored in D_{a} and Q_{a} by appropriate applications of weakening, attaining the same result as did the cut. On the other hand, if a mix occurs in an LIC' (LPC') proof, the occurrences of the mix formula in D and Q in the premisses may be reduced to one in each of those sequences by proper applications of contraction, transforming the premisses to G ® Q_{m}, m and m, D_{m} ® L; an application of cut to these premisses yields the conclusion of mix as indicated above.
*3.7  A sequent is a theorem of LIC (LPC) iff it is a theorem of LIC' (LPC'). Furthermore, if a sequent is provable in LIC' (LPC') without the use of mix, then it is provable in LIC (LPC) without the use of cut, and if a sequent is provable in LIC (LPC) without the use of cut, then it is provable in LIC' (LPC') without the use of mix. 
This last sentence follows from the fact that the systems are identical except for the cut-mix distinction. We shall now state the normal form theorem for LIC and LPC; *3.7 then permits us to prove it by showing that the rule mix is redundant in LIC' and LPC'.
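The bookkeeping in the cut-mix argument can be made concrete. The sketch below is my own illustration, not the book's: it computes the two conclusions as list operations and counts the weakenings needed to recover the cut conclusion from the mix conclusion.

```python
def cut_conclusion(G, Q, D, L):
    """Cut on a:  G (R) Q, a  and  a, D (R) L  yield  G, D (R) Q, L
    (the cut formula is already absent from Q and D as written)."""
    return (G + D, Q + L)

def mix_conclusion(G, Q, D, L, m):
    """Mix on m:  G (R) Q  and  D (R) L  yield  G, D_m (R) Q_m, L,
    where D_m and Q_m have every occurrence of m deleted."""
    D_m = [x for x in D if x != m]
    Q_m = [x for x in Q if x != m]
    return (G + D_m, Q_m + L)

def weakenings_to_restore(Q, D, m):
    """Occurrences of m deleted by the mix; each needs one weakening
    to recover the corresponding cut conclusion."""
    return Q.count(m) + D.count(m)
```

So a cut whose surrounding sequences happen to contain further occurrences of the cut formula is simulated by a mix followed by exactly that many weakenings, as the argument for *3.7 describes.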
*3.8  If a sequent is provable in LIC (LPC) then there is a proof free of applications of the rule cut for that sequent in LIC (LPC). 
The proof of *3.8 involves a triple induction; the main induction is on the number of applications of mix occurring in an LIC' (LPC') proof. For this induction, we note that if a proof contains any applications at all of mix, then it contains one which is the 'highest' application of mix in the proof, that is, one which has above it no applications of mix. If we are able to eliminate applications of mix above which there are no applications of mix, then we can successively eliminate all applications of mix in a proof, working from the top of the proof tree to the bottom. This first induction, then, reduces the proof of *3.8 to a question of showing that a proof containing a single mix may be replaced by a proof with the same end sequent and containing no mixes.
It will be sufficient, then, to consider an LIC' (LPC') proof whose last step is an application of mix, and in which there is no other application of mix. We have already done one of our three inductions; the present case is then a double induction, first of all on a property of the mix in question called grade, and, within the induction on grade, on a property of the mix called rank. The grade of a mix is dependent on the mix formula m of that mix; specifically, it is equal to the number of occurrences of connectives (C, K, A, N) in the mix formula. We turn now to rank; to determine the left rank of a mix, examine each proof string which terminates in the left premiss of that mix, counting upwards from the left premiss (inclusive) the number of consecutive sequents in that proof string that contain the mix formula m in their succedents. After checking each such proof string, the highest such number is the left rank of the mix. The right rank is determined similarly, counting upwards from the right premiss the presence of m in antecedents. The rank itself is the sum of the left and the right ranks.
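Grade and rank can be made concrete; the following is an illustration of the definitions (my own sketch, not code from the text), with Polish notation making the grade a simple character count.

```python
def grade(mix_formula):
    """Number of connective occurrences (C, K, A, N) in the mix formula."""
    return sum(1 for ch in mix_formula if ch in 'CKAN')

def string_rank(sides, m):
    """Contribution of one proof string: the number of consecutive
    sides (succedents for left rank, antecedents for right rank),
    read upward from the premiss of the mix, that contain m."""
    n = 0
    for side in sides:
        if m not in side:
            break
        n += 1
    return n

def rank(left_strings, right_strings, m):
    """Left rank plus right rank: each is the highest contribution over
    all proof strings ending in the corresponding premiss of the mix."""
    left = max(string_rank(s, m) for s in left_strings)
    right = max(string_rank(s, m) for s in right_strings)
    return left + right
```

Thus the mix formula CCpqp has grade 2, and since each premiss itself contains the mix formula, each of the left and the right ranks is at least 1, so the rank is at least 2.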
G ® Q        D ® L
------------------  MIX
G, D_{m} ® Q_{m}, L
Suppose that m is in G (or that m is in L). Here we may begin with the right premiss (or, in the m-in-L case, the left premiss) alone, as follows:
D ® L
------------------  weakenings
G, D ® Q_{m}, L
------------------  contractions
G, D_{m} ® Q_{m}, L
In either case, then, the conclusion is derivable by weakening and contraction and so the mix may be eliminated.
In the cases we consider, we assume that this preliminary case is not in question. We turn now to the induction step of the induction on rank. Let us assume that whenever the rank of a mix with no mixes above it is ≤ j (with j ≥ 2, for each of the left and the right ranks must be at least 1) that mix is redundant. Consider now the cases in which the mix is of rank j + 1. Since j ≥ 2, at least one of the left or right ranks will be ≥ 2; if a side has a rank ≥ 2 the premiss (of mix) on that side cannot be an axiom, but must be the conclusion of a rule application, structural or logical. Look first at the case in which the left rank is ≥ 2, the left premiss of the mix is the conclusion of a single-premiss rule, and the mix formula is parametric (neither a principal nor a side formula) in that rule. Here the bottom of the proof is of form:
P ® S
------------------  (the rule)
G ® Q        D ® L
------------------  MIX
G, D_{m} ® Q_{m}, L
We replace this deduction by the following:
P ® S        D ® L
------------------  MIX
P, D_{m} ® S_{m}, L
The original rule, whatever it is, may now be applied to the conclusion of this mix to yield the sequent G, D_{m} ® Q_{m}, L. But the mix in this latter deduction is of rank j, and so falls into the scope of the induction hypothesis and may be eliminated. This applies both to the classical and the intuitionist systems, of course; note that the restrictions on the intuitionist rules ®N and weakening in the succedent cause us no problem, for neither of these rules can be involved here; else m would have to be the principal formula of the rule, which is excluded for this case.
If the right rank is ≥ 2 and the right premiss of the mix is again the conclusion of a single-premiss rule with m parametric, the treatment is just as above; here, of course, we form the new deduction by first mixing the premiss of the rule involved with the left premiss of the original mix, and then apply the rule to the conclusion of the mix. If the rule involved here is the intuitionist ®N or weakening in the succedent, we point out that by the restrictions on those rules, S (the succedent of the premiss of the rule involved) is empty and Q (the succedent of the left premiss of the original mix) is simply m, by *3.2. The end of the proof in question for these two rules is then:
             P ®
             --------  ®N or weakening
G ® m        D ® L
------------------  MIX
G, D_{m} ® L
This is replaced by:
G ® m        P ®
----------------  MIX
G, P_{m} ®
----------------  ®N or weakening
G, D_{m} ® L
This achieves the required results within the necessary restrictions.
Now let us consider the cases in which the left rank is ≥ 2, the rule from which the left premiss results is a single-premiss rule, and the mix formula is the principal formula of that rule. In this case, the rule must be a succedent rule, and the general form of the end of the original proof will be:
P ® S
------------------  (the rule)
G ® Q        D ® L
------------------  MIX
G, D_{m} ® Q_{m}, L
This may be replaced by:
P ® S        D ® L
------------------  MIX
P, D_{m} ® S_{m}, L
-------------------  (the rule, reintroducing m)
G, D_{m} ® S_{m}, m, L        D ® L
-----------------------------------  MIX
G, D_{m}, D_{m} ® S_{m}, L, L
The end sequent of this proof is easily converted by contraction to that of the original proof. The upper mix in the new proof is of rank j and so can be eliminated by the induction hypothesis. The lower mix may then be considered to have no mixes above it; we note that the left rank of this mix is precisely 1, while its right rank is the same as that of the original proof. Its total rank, then, is equal to or less than j, for the left rank of the original proof was ≥ 2. The lower mix then also falls into the scope of the induction hypothesis, and so is eliminable. Note that if the rule involved is contraction, execution of the upper mix alone will give us the desired result. Again, we have no trouble with the rules ®N and weakening on the right in their intuitionist forms; since the premisses of these rules must have empty succedents, the left rank of mixes whose left premisses are derived by these rules cannot be greater than one, while here it must be at least two. These rules are then excluded from consideration here.
The cases in which the right rank is ≥ 2, the rule involved is a single-premiss rule, and the mix formula is the principal formula of that rule, are treated in similar fashion to the cases above; here, of course, the rules involved are antecedent rules, and the replacing proof will start out by mixing the premiss of the rule involved with the left premiss of the original mix, applying the rule, and mixing the result of the application again with the left premiss of the original mix; contraction again will be given the briefer treatment indicated in the above paragraph.
We now turn to cases in which the left rank is ≥ 2, the rule involved is a single-premiss rule, and the mix formula is the same as a side formula of the relevant rule; here only the logical rules are involved. The general form of deduction here is:

        ⋮
    ----------------------- MIX
    Γ, Δ_μ → Θ_μ, Λ

Replace this deduction by:

    [proof figure]

The new mix again falls into the scope of the induction hypothesis and so is redundant. The treatment for the cases in which the right rank of the mix is ≥ 2 is obviously parallel.
We now have exhausted the cases in which the rule involved is a single-premiss rule; we turn to the cases in this induction on rank in which the relevant rule is two-premiss. The treatment is analogous to that for the one-premiss rules, but is a bit more complex because of the structure of the two-premiss rules. These rules are three: C→, in which the principal formula is in the antecedent of the conclusion and there is a side formula in the antecedent of one of the premisses and another in the succedent of the other premiss; →K, where principal and side formulas are all in succedents; and A→, where principal and side formulas are all in antecedents. We look first of all, as with the single-premiss rules, at the cases in which the left rank is ≥ 2 and the mix formula is parametric in the step leading to the left premiss. There are two subcases here: first, when the rule involved is the intuitionist C→, and second, when it is any other two-premiss rule. We examine the latter of these cases using the classical C→ as an example; the treatment for A→ and →K in both classical and intuitionist forms is identical. The proof in question will end:

    Γ → Θ, α      β, Γ → Θ
    ------------------------ C→
    Cαβ, Γ → Θ          Δ → Λ
    ----------------------------- MIX
    Cαβ, Γ, Δ_μ → Θ_μ, Λ

This may be replaced by:

    Γ → Θ, α    Δ → Λ          β, Γ → Θ    Δ → Λ
    ------------------ MIX     ------------------ MIX
    Γ, Δ_μ → Θ_μ, Λ, α         β, Γ, Δ_μ → Θ_μ, Λ
    ----------------------------------------------- C→
    Cαβ, Γ, Δ_μ → Θ_μ, Λ

As we remarked, the cases for →K and A→ are similar (note that the intuitionist →K is excluded from this case). For the intuitionist C→, we have the Θ of the left premiss empty, and so must slightly alter the replacing form:

    Γ → α                       β, Γ → Θ    Δ → Λ
    ------------ WEAKENING      ------------------ MIX
    Γ, Δ_μ → α                  β, Γ, Δ_μ → Θ_μ, Λ
    ----------------------------------------------- C→
    Cαβ, Γ, Δ_μ → Θ_μ, Λ

Θ in this case is precisely the formula μ, by the way. In all the above cases, we are able to replace the original proof by one in which the rank of the mix is j or lower, and so the mixes in these cases are eliminable. The situations in which the right premiss of the mix is by a two-premiss rule, the right rank is ≥ 2, and the mix formula is parametric in that two-premiss rule, are handled in analogous fashion; details are left to the reader; here, of course, C→ will not require special treatment, but all rules will be handled alike.
Next, examine the situations in which the left premiss is by a two-premiss rule, the left rank is ≥ 2, and the mix formula is the principal formula of the rule involved. Here the rule can only be →K, and the proof involved ends

        ⋮
    ------------------------- MIX
    Γ, Δ_Kαβ → Θ_Kαβ, Λ

Replace this by:

        ⋮
    ------------------------- MIX
    Γ, Δ_Kαβ, Δ_Kαβ → Θ_Kαβ, Θ_Kαβ, Λ, Λ

The conclusion of this proof is converted by contraction to the end sequent of the original proof. The upper two mixes will be of rank j and so are eliminable; the lower mix then becomes a mix with no mixes above it and with a left rank of exactly one and a right rank the same as that of the original proof; since the original proof had a left rank ≥ 2, the rank of the lower mix is ≤ j, and it too may be eliminated. The cases in which the right premiss of the mix is generated by a two-premiss rule, the right rank is ≥ 2, and the mix formula is the principal formula of that two-premiss rule, are treated in similar fashion; here, of course, the applicable rules are those with principal formula in the antecedent, C→ and A→. Details are left to the reader.
Finally, we look at the cases in which the mix formula is a side formula of the two-premiss rule involved. In the cases involving the left premiss of the mix, we shall be concerned with the rules that have side formulas in succedents, C→ and →K; for those involving the right premiss, the relevant rules are those with side formulas in the antecedent, C→ and A→. We shall specifically look only at a case in which the left premiss is the one under consideration, that is, where the left rank is ≥ 2; we shall use the rule →K as our example:

        ⋮
    ------------------------- MIX
    Γ, Δ_α → Θ_α, Λ, Kαβ

We have taken α here as the mix formula; the above proof conclusion is replaced by:

    [proof figure]

The mix in the replacing proof falls into the scope of the induction hypothesis and is eliminable. The procedure for the rules C→ and A→ is parallel. This completes all cases for the induction step in the induction on rank.
Our next task is to show that the base step in the induction on rank holds in two locations: the base step and the induction step of the induction on grade of mix. The base case for rank is, we shall recall, the situation in which rank = 2, with left and right rank both = 1; we have thus reduced the proof of the normal form theorem to cases in which a proof contains only one mix and that mix is of rank 2.
In this situation, the mix formula must occur in the succedent of the left premiss of the mix and in the antecedent of the right, and not in the succedent of a sequent immediately above the left premiss nor in the antecedent of a sequent immediately above the right premiss. Since this is so, each of the premisses of the mix in question must be either an axiom or a sequent resulting from the application of a rule in which the mix formula is the principal formula. Consider first the case in which one of the mix premisses is an axiom; we show here the case for the left premiss:

    α → α      Δ → Λ
    ------------------ MIX
    α, Δ_α → Λ

This is replaced by:

    Δ → Λ
        ⋮   WEAKENING
    α, Δ_α → Λ

Thus, whenever the left premiss of a mix is an axiom, that mix is eliminable. It is clear that the same is the case when the right premiss is an axiom.
The above may be considered a preliminary case in the induction on grade; another we shall consider is that in which the mix formula is introduced by a structural rule into the antecedent of the right or the succedent of the left premiss of the mix. It is clear that the rule introducing the mix formula must here be weakening; when the premiss involved is the left one of the mix, we have:

        ⋮
    ------------------ MIX
    Γ, Δ_μ → Θ, Λ

Here the mix is unnecessary, for this proof may be replaced by:

    Δ → Λ
        ⋮   WEAKENING
    Γ, Δ_μ → Θ, Λ

The above argument is easily extended to the case in which the right premiss of the mix is generated by a weakening introducing the mix formula.
We are now ready to move into the specific induction on grade of mix. Suppose, first of all, that the grade of a mix (of rank 2) is zero. The mix formula then contains no connectives, and is thus a propositional variable. It cannot then have been introduced into left or right premiss by a logical rule, but only by an application of weakening or by the fact that the premiss is an axiom. But by our two preliminary cases, the mix is then eliminable; the base case of the induction on grade is then established.
Suppose now that whenever the grade of a mix (of rank 2, of course) is ≤ k, that mix is redundant; consider the instances in which a mix's grade is k + 1. Here it is possible that the mix formula be introduced into one or the other premiss by weakening or that one or the other premiss is an axiom; again, by our preliminary cases, we can here eliminate the mix. Thus, we need consider only the cases in which both relevant instances of the mix formula have been introduced by a logical rule. Suppose that the mix formula is Cαβ; the proof with which we are concerned then ends:

    α, Γ → Θ, β            Δ → Λ, α      β, Δ → Λ
    ------------- →C       ----------------------- C→
    Γ → Θ, Cαβ             Cαβ, Δ → Λ
    ------------------------------------------ MIX
    Γ, Δ → Θ, Λ

This may be replaced by:

    Δ → Λ, α      α, Γ → Θ, β
    --------------------------- MIX
    Δ, Γ_α → Λ_α, Θ, β               β, Δ → Λ
    ------------------------------------------ MIX
    Δ, Γ_α, Δ_β → Λ_α, Θ_β, Λ

The end sequent of the replacing proof contains no formulas not in the end sequent of the original proof; by weakening and contraction, then, we may convert it to the end sequent of the original. But where the mix in the original (involving Cαβ) is of grade k + 1, those in the replacing proof (involving α and β respectively) are of grade ≤ k. By the induction hypothesis, then, the upper of these mixes is eliminable; once it is gone, the other mix becomes a mix with no mixes above it and falls into the scope of the induction hypothesis and may be eliminated.
The situations in which the mix formula is Kαβ, Aαβ, or Nα are similarly handled; the finding of specific replacing proofs with mixes of grade ≤ k is left as an exercise for the reader. The technique is simply to mix the premisses of the logical rules in the original proof in such a pattern as to eliminate all the side formulas of these sequents, and then to turn the final product of these mixes into the end sequent of the original proof by weakening and contraction. In each case, of course, the left premiss of the original (grade k + 1) mix is supplied by the succedent rule for the logical connective involved, and the right premiss by the antecedent rule for that connective.
We have now considered all cases, and the triple induction we have set up to prove the eliminability of mix is complete. Any sequent provable in LIC' (LPC'), then, is provable without the use of mix. By the earlier-established deductive equivalence of mix and cut, any sequent provable in LIC (LPC) is then provable without the use of the rule of cut, and *3.8 has been proved.
The rule of contraction and its elimination
We have shown that the rule cut is redundant in the sequent logics of this chapter. The import of this is that every provable sequent is derivable in a proof in which no formula, once it has made its appearance in a sequent of the proof, can 'disappear'. Each formula occurring in the proof at a particular point will occur at every lower point of the proof as a subformula of some formula belonging to that lower point. That this is a key feature in the use of sequent logic in decision procedures should be evident. But we are left with a rule which, while not so troublesome as cut in the establishment of a decision procedure, must be examined. This rule, while it does not totally delete formulas occurring in proved sequents, still does shorten sequents, unlike all other logical and structural rules except cut. The rule is contraction. If we are able to show that contraction, like cut, is eliminable, we shall then know our systems to be such that each provable sequent is provable in a proof whose successive sequents are of ever-increasing length; this, of course, will provide us with a decision procedure for the systems.
We shall approach the question of contraction elimination somewhat indirectly, by first investigating some features of the rules of these systems. Later we shall apply the results of these investigations to the question of contraction and its redundancy.
Suppose we have before us the proof of a sequent containing applications of a number of the rules of our system. In a given proof these applications come in a certain order. But just how important is that order? Given the initial premisses of the original proof, could the order in which the rules are applied be 'scrambled' (within certain limits) but still produce the same result? To put the question a bit more precisely, supposing the end sequent of the original proof to contain a compound formula (one beginning with a connective), can that proof be so rearranged as to permit the introduction of that compound formula at the final step of the proof? If the proof can be so rearranged, we shall say that the rule which introduces the compound formula in question is invertible. Actually, we shall restrict ourselves to an apparently narrower version of inversion than the above. Suppose a sequent results from an application of the rule X, and in turn each of the premisses of X resulted from an application of the rule R, with the principal formula in each such R-application the same. The premisses for these applications of the rule R may be called the 'initial premisses'. We then say that the rule R is 'simply invertible with respect to the rule X' if the original proof figure may be replaced by one in which the ultimate sequent is the same as that of the original proof, but results from an application of the rule R to premisses each of which resulted from an application of the rule X, the initial premisses here being the same as those of the original proof. The principal formula of the rule R in the original proof, of course, must be parametric in the application of X in that proof. We note that if a rule is simply invertible with respect to every other rule of its system, then clearly, it is invertible in that system in the sense originally mentioned. (For a detailed discussion of invertibility, see Curry 1963.)
It turns out that since our prime concern is the rule of contraction, we shall be able somewhat to limit the number of rules we must examine for invertibility. We shall consider a rule to be 'contraction relevant' iff the formula it introduces (its principal formula) is so positioned as to be able to be one of the side formulas for a later application of the rule of contraction. (Such side formulas for contraction, which are, of course, identical with the principal formula of the rule of contraction, will be called 'contracted constituents'. The earlier introduced of the contracted constituents will be the 'first contracted constituent', and the one introduced later on in the proof is the 'second contracted constituent'.) In the classical system LPC, all the logical rules are contraction relevant, as are both weakening rules. In the system LIC, however, only the logical antecedent rules and weakening in the antecedent are contraction relevant; by *3.2, we can never have more than one formula in the succedent of an LIC theorem, and so the question of contraction in the succedent can never arise there.
Suppose, now, that a sequent arises from the application of a rule X to a premiss (or premisses, if X is a two-premiss rule) generated by an application (applications, for two-premissed X; here the side and principal formulas for each of the applications generating the premisses must be respectively the same) of a rule R. Both X and R here are to be logical rules. We call the end sequent of this proof figure (that is, the conclusion of the rule X) Γ, φ, ψ. In using this method of naming a sequent we do not specify on which side of the sequent the indicated formulas stand, but only that they are present in the sequent; φ will be the principal formula of the rule R in our proof figure, and ψ the principal formula of X. The formulas Γ are those which are parametric through the application of both R and X. Let α and β be the side formulas for the application of R and let γ and δ be those for the rule X (if R is a negation rule, β is empty, and if X is a negation rule, δ is empty; the only other rule employing only one side formula is the intuitionist →A, which is not contraction relevant; since we are interested in the invertibility only of rules which are contraction relevant, and since the rule R is the one whose invertibility we are considering, →A can here occur only as the rule X; thus, if X is →A, δ will be empty). The initial premisses of our proof figure will take different forms depending on whether R is one- or two-premissed and whether X is one- or two-premissed. If both R and X are one-premiss rules, for example, the single initial premiss will have the form Γ, α, β, γ, δ (β or δ possibly empty, as earlier noted); if R is a one-premiss rule and X two-premissed, there are two initial premisses, Γ, α, β, γ and Γ, α, β, δ. The reader can easily work out for himself the other possible combinations of initial premisses.
Let us now consider our proof figure to be part of a proof in the classical LPC. Suppose both R and X to be single-premiss rules; the proof figure is then

    Γ, α, β, γ, δ
    -------------- R
    Γ, φ, γ, δ
    -------------- X
    Γ, φ, ψ

Simple examination of the single-premiss logical rules of LPC shows that in each possible case, the above figure may be altered to

    Γ, α, β, γ, δ
    -------------- X
    Γ, α, β, ψ
    -------------- R
    Γ, φ, ψ
This means that each of the single-premiss rules of LPC is simply invertible with respect to each of the other such rules. If R is a one-premiss rule and X is two-premissed, the figure ends thus:

    Γ, α, β, γ          Γ, α, β, δ
    ------------ R      ------------ R
    Γ, φ, γ             Γ, φ, δ
    -------------------------------- X
    Γ, φ, ψ

This figure may in each such case be replaced by

    Γ, α, β, γ      Γ, α, β, δ
    --------------------------- X
    Γ, α, β, ψ
    --------------------------- R
    Γ, φ, ψ
Thus each single-premiss LPC rule is invertible with respect to each two-premiss rule. The reader may easily satisfy himself that similar results obtain in the other two possible cases, when R is two-premissed and X one-premissed, and when both rules have two premisses. Thus every rule of LPC is simply invertible with respect to every rule (we are considering only logical rules here, of course).
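A concrete instance of the one-premiss-over-two-premiss pattern may make this clearer; the choice of R = →N and X = →K here is ours, for illustration (we display the figures in LaTeX):

```latex
% Original figure: ->N applied above each premiss of ->K...
\[
\dfrac{\dfrac{\alpha,\ \Gamma \to \Theta,\ \gamma}
             {\Gamma \to \Theta,\ N\alpha,\ \gamma}\ {\to}N
       \qquad
       \dfrac{\alpha,\ \Gamma \to \Theta,\ \delta}
             {\Gamma \to \Theta,\ N\alpha,\ \delta}\ {\to}N}
      {\Gamma \to \Theta,\ N\alpha,\ K\gamma\delta}\ {\to}K
\]
% ...is replaced by ->K applied first, then a single ->N, with the same
% initial premisses and the same end sequent:
\[
\dfrac{\dfrac{\alpha,\ \Gamma \to \Theta,\ \gamma
              \qquad
              \alpha,\ \Gamma \to \Theta,\ \delta}
             {\alpha,\ \Gamma \to \Theta,\ K\gamma\delta}\ {\to}K}
      {\Gamma \to \Theta,\ N\alpha,\ K\gamma\delta}\ {\to}N
\]
```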
The situation in LIC is not so simple. An examination of the contraction-relevant rules of LIC shows that so long as the rule R is one with both of its side formulas in the antecedent(s) of its premiss(es), the same reasoning applies as in the classical system. But when the rule has a side formula in the succedent of a premiss, trouble arises; suppose, for example, that R is the rule C→ and X is →C. Here the original proof figure takes the form

    Γ, γ → α      β, Γ, γ → δ
    --------------------------- C→
    Cαβ, Γ, γ → δ
    --------------------------- →C
    Cαβ, Γ → Cγδ

If this figure were to be inverted, the premisses required for the newly positioned C→ would have to be Γ → α and β, Γ → Cγδ. The latter of these sequents can indeed be derived from the right of the above-shown initial premisses by →C (rule X), but Γ → α cannot, in the general case, be derived from these initial premisses. In LIC, then, C→ fails to be simply invertible with respect to all the LIC rules. The reader may satisfy himself that the same is the case with the intuitionist N→. On the basis of the above, then, we may state the following:
*3.9  Each of the contraction-relevant logical rules of LIC and LPC is simply invertible with respect to each of the rules of LIC and LPC respectively; the LIC rules C→ and N→ are exceptions, however, in not being universally so invertible. 
We now move on to another theorem which we shall use in our work on contraction. Recall that every sequent of form α → α is an axiom of both LIC and LPC. The next metatheorem shows that we could get by with a less comprehensive set of axioms, namely, the subset of the present set in which α is only a propositional variable.
*3.10  Every sequent of form α → α (and so every provable sequent) in LIC and LPC is derivable in a proof in which all the axioms employed contain no propositional calculus connectives. Furthermore, if the rule for which either of the formulas α is principal formula is contraction relevant (except for the rules C→ and N→ in LIC), that rule may be used as the last step in the proof of α → α. 
The proof is by induction on the number of connectives in α. If none, the theorem holds. Suppose that the first sentence of the theorem holds when the number of connectives in α is no greater than some number k, and consider the cases in which the number of connectives is k + 1. α → α then takes one of the forms Cβγ → Cβγ, Kβγ → Kβγ, Aβγ → Aβγ, or Nβ → Nβ. The reader may easily establish for himself that each of these sequents is derivable in a proof figure beginning with the sequents β → β and/or γ → γ; that is, given these sequents, the logical rules and weakening permit the derivation of any form of α → α relevant to this case. But β → β and γ → γ both fall into the scope of the induction hypothesis, and so are derivable in proofs beginning with connective-free axioms; so too, then, is α → α, and the first sentence of the theorem stands proved. The second sentence follows from *3.9.
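For instance, the case in which α is Cβγ may be set out as follows (our rendering, using the intuitionist form of C→; the classical form differs only in parametric succedent formulas):

```latex
% Cβγ → Cβγ derived from the sequents β → β and γ → γ; weakening
% supplies the parametric β required by the second premiss of C→.
\[
\dfrac{\beta \to \beta
       \qquad
       \dfrac{\gamma \to \gamma}
             {\gamma,\ \beta \to \gamma}\ \text{WEAKENING}}
      {\dfrac{C\beta\gamma,\ \beta \to \gamma}
             {C\beta\gamma \to C\beta\gamma}\ {\to}C}\ \ C{\to}
\]
```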
Contraction elimination once again
Consider now a proof in LIC or LPC with the following features:
1.  The proof concludes with a contraction. 
2.  The proof above the contraction is contraction- and cut-free. 
3.  If the proof is in LIC, neither contracted constituent is introduced by C→ or N→. 
We note first that it will be sufficient to consider proofs in which the contraction follows immediately upon the step in which the second contracted constituent is introduced. Proofs in which both contracted constituents are carried as parameters for one or more steps before the contraction can always be replaced by proofs in which the second contracted constituent is eliminated by a contraction step (by contraction steps) immediately after the step (steps) in which it is introduced; this follows since the applicability of the rules of our systems is precisely the same in cases in which there is only one instance of a parametric formula on a given side of a premiss as in the cases in which there are more instances than one of that parametric formula on that side. (In the case of LIC, of course, the side involved can only be the antecedent side.)
Let the grade of a contraction be defined analogously to that of a mix: it is simply the number of connectives occurring in each of the contracted constituents. If we let the grade of the contraction in a proof as described above be zero, then neither of the contracted constituents can have been introduced by a logical rule; furthermore, only the first, at most, can have been introduced as part of an axiom. Thus, the second contracted constituent must have been introduced by the rule of weakening. The original proof may then be replaced by one in which this weakening step and the contraction are simply omitted, the rest of the proof being the same. The premiss of the weakening step in the original proof is then the conclusion of the new proof, and is precisely the same as the conclusion of the contraction step in the original proof, and the contraction may thus be eliminated.
Assume that the contraction in a proof as earlier described may be eliminated provided the grade of that contraction is no greater than the number k; consider the case in which that grade is k + 1. Here we shall use a subsidiary induction analogous to that in the proof of the eliminability of mix. Define the rank of a contraction as the largest number of steps immediately preceding the contraction and containing one or both of the contracted constituents. In our case only the step immediately preceding the contraction contains both of these formulas, so rank here is a measure of how many steps stand between the introduction of the first and that of the second contracted constituent. Clearly, the minimum possible rank of a contraction is two.
We then consider the case in which, with grade of k + 1, a contraction has rank of two, so that the introduction of the first contracted constituent immediately precedes that of the second. By their position in the contracted sequent and their main connective, the contracted constituents will be such that they might have been introduced by a specific one of the logical rules, the logical rule 'proper' to those constituents; by *3.10, we need not consider the case in which a contracted constituent is introduced as part of an axiom, since such an introduction may always be replaced by one in which the constituent is introduced by its proper logical rule. Two subcases will be considered; the first is that in which the logical rule proper to the constituents is a one-premiss rule. First of all, if in the original proof in this subcase one of the contracted constituents is introduced not by its proper logical rule but by weakening, the situation is trivial, and the treatment is much like that of the base case in the induction on grade; in the new proof figure, simply eliminate the weakening step and the contraction. Now suppose that both contracted constituents in this subcase are introduced by their proper logical rule. In what follows, R will be that rule, φ will be the contracted constituent, α and β will be the side formulas for R (β empty for a negation rule), and the sequents will be written without indicating on which side the various formulas stand (of course, two formulas with the same Greek letter designation will always be on the same side). The original proof will conclude:
    Γ, α, β, α, β
    -------------- R
    Γ, α, β, φ
    -------------- R
    Γ, φ, φ
    -------------- CONTRACTION
    Γ, φ

Replace this proof figure by one in which the top sequent is contracted to eliminate the extra occurrence of α, the resulting sequent is contracted to eliminate the extra β, and the resulting sequent (Γ, α, β) has R applied to it to produce the end sequent of the original proof. In the replacing proof figure, both contractions will be of grade ≤ k, and by the induction hypothesis may be eliminated.
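Displayed in the same fashion (sides suppressed, as in the text), the replacing figure just described is:

```latex
% Two contractions of grade <= k, then a single application of R.
\[
\dfrac{\dfrac{\dfrac{\Gamma,\ \alpha,\ \beta,\ \alpha,\ \beta}
                    {\Gamma,\ \alpha,\ \beta,\ \beta}\ \text{CONTRACTION}}
             {\Gamma,\ \alpha,\ \beta}\ \text{CONTRACTION}}
      {\Gamma,\ \varphi}\ R
\]
```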
The subcase in which R is a two-premiss rule is a little more complicated but is quite similar in treatment. Here the contracted constituent first introduced is introduced twice, once into the left and once into the right premiss of the application of R introducing the second contracted constituent. Note that if the second contracted constituent is introduced by weakening, or if both instances of the first are introduced by weakening, we may again simply drop both the weakening step(s) and the contraction. If both instances of the first contracted constituent are introduced by that constituent's proper rule R, then the initial premisses of the original figure ending the proof are Γ, α, α and Γ, α, β and Γ, α, β and Γ, β, β; we leave it to the reader to construct both the original proof figure and the replacing figure in which the contractions are of grade ≤ k. If one of the instances of the contracted constituent first introduced is by the proper rule R and the other by weakening, the premisses for R will be in the form of the first two listed above and that for weakening in the form Γ, β. The weakening introducing φ (the first contracted constituent) may be replaced by weakenings introducing α and β to produce Γ, α, β and Γ, β, β, making the treatment of this situation the same as that in which both instances of the first contracted constituent are introduced by the proper rule R; details are left to the reader. In the case where the grade of the contraction is k + 1, then, and the rank is 2, the contraction is eliminable.
Assume now that if the rank of a contraction such as we have been discussing is equal to j (the grade being k + 1), that contraction may be eliminated. Let the rank of the contraction now be j + 1. In this case, first of all, it is possible that the first contracted constituent was inserted by the rule of weakening into one or more of the places which give the contraction a rank of j + 1. Make the mode of insertion of that constituent uniform by replacing all such introductions by weakening with introductions by the proper logical rule. Do this simply by replacing the weakening step which inserts the first contracted constituent by a weakening step or steps which insert the side formulas required by R for the introduction of the contracted constituent. With premisses for R thus supplied, apply R and let the rest of the proof be as before. Clearly, the rank of the contraction in this new proof is still j + 1. We now have a proof with rank of j + 1 in which all 'highest' introductions of the first contracted constituent are by the rule proper to that constituent. Recall that we have limited ourselves in this discussion to contracted constituents introduced by (contraction-relevant) rules other than C→ and N→ in LIC. By *3.9, all such rules are simply invertible with respect to all rules of the system in which we are operating. By the makeup of our rules so far as parametric formulas required in premisses are concerned, two-premiss as well as one-premiss rules will always be in a position to be simply inverted; thus we replace the proof with contraction of rank j + 1 by one in which the steps introducing the 'highest' instances of the first contracted constituent are simply inverted. This proof's contraction has a rank of j, and so that contraction falls within the scope of the induction hypothesis and is eliminable.
(It is possible that the rule following one of these 'highest' introductions is not a logical rule, but is weakening. *3.9 discussed inversion with respect only to the logical rules, but the reader may easily verify that each of our rules here considered is also invertible with respect to weakening.)
We have now shown that any proof ending in a contraction and otherwise cut- and contraction-free, and such that the contracted constituents do not have C→ or N→ as their proper logical rule, is also such that the contraction is redundant. If a proof contains more such contractions than one, all such contractions may then be eliminated by starting at the top and working down. We are now in position to assert:
*3.11  The rule of contraction is redundant in LPC, and in LIC when (in the latter system) the contracted constituents do not have C→ or N→ as proper logical rules. 
We should like to extend the elimination of contraction to LIC as a whole; this may be done by a slight modification of the rules C→ and N→ for this system. We thus formulate a system, which may be called the 'modified LIC' and which is exactly like LIC except for the replacement of C→ and N→ respectively by:

    Cαβ, Γ → α      β, Γ → Θ
    -------------------------- C→
    Cαβ, Γ → Θ

and

    Nα, Γ → α
    ----------- N→
    Nα, Γ →
The additional formulas appearing in the premisses of these rules may be called 'quasi-principal formulas', after Curry 1963. Modified LIC and LIC are equivalent, for any proof ending in C→ or N→ in the latter system may be duplicated in the former simply by inserting quasi-principal constituents into the appropriate premisses by weakening, and any proof in the former system may be duplicated in LIC by following the C→ or N→ by a contraction of the principal formula. Also, a check of the relevant stages of the proof of *3.8 for the original LIC will show that that normal form theorem is provable for the modified system as well. Thus we have
*3.12  LIC and the modified LIC are equivalent. Furthermore, the normal form theorem is provable for the modified LIC. 
Unless there is danger of confusion, we shall refer to both the original and the modified LIC simply as LIC.
Suppose now that we have a proof in LIC which contains a contraction, the constituents of which have C→ or N→ as their proper rule. It again will be sufficient to consider the case in which the contraction follows immediately upon the introduction of the second contracted constituent. One possibility is that that formula was introduced by an instance of weakening; if this is the case, the contraction is eliminated simply by dropping the weakening and the contraction step. If it is introduced by its proper rule in the original LIC, the proof ends (we show the case for N→; that for C→ is similar):

    Nα, Γ → α
    ------------- N→
    Nα, Nα, Γ →
    ------------- CONTRACTION
    Nα, Γ →

In the modified LIC, this proof figure's initial premiss is precisely the premiss of the rule N→, which when applied gives the end sequent of the figure without the contraction. We thus conclude:
*3.13  The rule of contraction is redundant in LPC and (modified) LIC. 
Suppose now that we wish to decide whether or not a given sequent is a theorem of LPC or LIC. We may write down the sequent in question and then begin to construct a tree proof for it, working backwards from that sequent. By *3.8 and *3.13, we know that if the sequent is a theorem, there is a cutfree and contractionfree proof for it. Consider first the case in which the system in which we desire to decide our sequent is LPC. In this system, a cutfree and contractionfree proof is a proof of everincreasing length. That is, each conclusion of a step in that proof contains a greater total number of logical symbols (connectives and variables) than did the premiss of that step (or either of the premisses, for a twopremiss rule). We may attempt to reconstruct a proof for our sequent by beginning with it and working upwards, indicating applications of the LPC rules as we go and supplying the premisses necessary for these applications. Eventually in our attempt  since the premisses are always shorter than the conclusions  we reach points where no further rule applications can be indicated; these points are the tips of the branches of the proof tree we are attempting to reconstruct. If each branch in an attempted reconstruction is tipped with an axiom of our sequent logic, then our attempt has been successful, and the sequent in question is a theorem. If one or more branches (extended as far as possible) is tipped with a sequent which is not an axiom, the attempt is unsuccessful. Now one unsuccessful attempt does not, of course, prove that the sequent in question is not a theorem. But because of the everincreasing length property of LPC proofs, there is only a finite number of candidates for a successful proof reconstruction attempt for any LPC sequent. Each such candidate may in theory be attempted, and we can know when we have attempted them all. 
If none of the possible attempts yields a success, then the sequent is a non-theorem; if any of them succeeds, then the sequent is a theorem. Thus LPC is decidable.
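The backward search just described for LPC can be sketched in executable form. The following is a minimal illustration only, with several assumptions of our own: formulas are encoded as nested tuples in Polish notation (a propositional variable is a string, Nφ is ('N', φ), and Cφψ, Aφψ, Kφψ are three-element tuples), the rules are Ketonen-style invertible rules for N, C, A and K, and a sequent is taken as an axiom whenever antecedent and succedent share a formula. None of these encoding choices is the book's own notation.

```python
def provable(ante, succ):
    """Decide the sequent ante ® succ by reconstructing a proof backwards.

    Every rule's premisses contain fewer symbols than its conclusion,
    so the recursion must terminate, as in the argument for decidability.
    Since the rules used here are all invertible, it suffices to reduce
    the first non-atomic formula found; no backtracking is needed.
    """
    ante, succ = list(ante), list(succ)
    # Axiom check: some formula common to antecedent and succedent.
    if any(f in succ for f in ante):
        return True
    # Reduce a non-atomic formula in the antecedent.
    for i, f in enumerate(ante):
        if isinstance(f, tuple):
            rest = ante[:i] + ante[i + 1:]
            if f[0] == 'N':                      # N in antecedent
                return provable(rest, succ + [f[1]])
            if f[0] == 'K':                      # K in antecedent
                return provable(rest + [f[1], f[2]], succ)
            if f[0] == 'A':                      # A in antecedent: two premisses
                return (provable(rest + [f[1]], succ) and
                        provable(rest + [f[2]], succ))
            if f[0] == 'C':                      # C in antecedent: two premisses
                return (provable(rest, succ + [f[1]]) and
                        provable(rest + [f[2]], succ))
    # Reduce a non-atomic formula in the succedent.
    for i, f in enumerate(succ):
        if isinstance(f, tuple):
            rest = succ[:i] + succ[i + 1:]
            if f[0] == 'N':                      # N in succedent
                return provable(ante + [f[1]], rest)
            if f[0] == 'K':                      # K in succedent: two premisses
                return (provable(ante, rest + [f[1]]) and
                        provable(ante, rest + [f[2]]))
            if f[0] == 'A':                      # A in succedent
                return provable(ante, rest + [f[1], f[2]])
            if f[0] == 'C':                      # C in succedent
                return provable(ante + [f[1]], rest + [f[2]])
    # Only variables remain and the sequent is not an axiom:
    # a branch tip that is not an axiom, so this attempt fails.
    return False
```

Note that the succedent here may hold any number of formulas; it is exactly this freedom that the classical rules exploit, and that the intuitionist system restricts.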
The case for LIC is similar, though a little more complex. The complexity is introduced by the fact that C® and N® in that system (although they preserve the subformula property) do not preserve the ever-increasing sequent length property as it holds for LPC. However, if we attempt to reconstruct, from the bottom up, a (modified) LIC proof in which C® or N® is employed, we shall find that we cannot meaningfully use a C® or an N® to introduce the formula that will eventually be the quasi-principal formula for a lower C® or N® without somewhere between the two C®'s or N®'s using a rule which requires a shorter premiss (or premisses) than its conclusion; this means that a modified version of the increasing length property holds for modified LIC, and that LIC too is decidable. (Details are left to the reader. As a start, recall that the very first introduction of a formula beginning with an N or a C into a modified LIC proof must always be by weakening, or as part of an axiom.) We may now state the following:
*3.14  LPC and LIC are decidable. Furthermore, these systems provide decision procedures for PC and IC respectively. 
As a simple example of the use of these systems as decision procedures, consider the formula ApNp. A proof for the sequent corresponding to this formula is easily set down in LPC, but such a proof in LIC would require one of the sequents ® p, ® Np, or simply ® to be an axiom (the last sequent is the one with empty antecedent and succedent); none of these sequents is an axiom, of course, and so ® ApNp is not an LIC theorem, nor is ApNp an IC thesis.
A simple generalization of the above result with ApNp (whose details we leave to the reader) enables us to assert the following metatheorem, which we offer simply as an example of the metatheoretical results obtainable through the use of sequent logics:
*3.15  If the formula Aαβ is a thesis of IC, then either α is an IC thesis, or β is.