Made some changes at logic and started inductive inference and George Polya.
There are still things to change at logic.
As a discipline, logic is the study of methods of reasoning. While in the past (and often today in philosophical circles), this discipline was prescriptive (describing how one should reason), it is increasingly (and usually in mathematical circles) descriptive (describing how one does reason).
Could whoever wrote it explain what they meant? Seems odd to me.
Also I don’t think that category-theoretic logic should be there. Should it not appear in mathematical logic, or be a new page?
David, thanks for looking into this. These entries deserve to be improved on!
I think you should follow the usual practice: if you feel expert enough to decide what’s good and bad content (and here you do) you go ahead and improve the entry.
Should somebody really complain, and should your expert notion of improvement really conflict with somebody else’s and that somebody else does not end up agreeing that your edit is an improvement – if all that really happens, then we can still roll back.
After all, we don’t want the $n$Lab to emphasize content that is prior by editing time. We want to emphasize content that is prior by its value.
The thing I quote I could modify, but I really don’t know how best to organise the category-theoretic logic section. Presumably it’s covered elsewhere. But then should mathematical logic have a section on it?
I wrote that paragraph, so I just rewrote it to go with what you wrote before it.
What I most miss is what used to come after it, explaining the meaning of the word ‘logic’ as a count noun, so I put that back (slightly changed).
I really like the Idea-section now. Reads very well.
I also tried to make the following section nicer. I have renamed it to just “Mathematical logic” and then reworked it a little. But see if you can think of ways to improve this further.
Re #4, I wonder if it might be worth giving an instance or two of the descriptive study of reasoning rather than prescriptive. I’m not sure whether you mean situations where I study constructive logic, or a substructural logic, without saying that it should be adopted but just because I think it’s interesting, or situations where I study how people actually reason, such as the fast and frugal approach. Or maybe both.
Re #5, how about we just transfer that ’mathematical logic’ section over to mathematical logic, pointing out that there’s an nPOV on mathematical logic, as summarised by that section? As it stands I just want to send mathematical logic back to that section of logic, which seems a bit odd.
Further thought re #4:
A logic is a specific method of reasoning. There are several ways to formalise this as a mathematical object;
Some people nitpick and would prefer ’the use of a logic is a specific method of reasoning’, i.e., the logic codes the rules rather than judgements as to whether and how to employ the rules.
how about we just transfer that ’mathematical logic’ section over to mathematical logic
Yes, I’d be fine with that. For the moment, however, myself I won’t be touching the entries.
OK, so moved the category-theoretic logic to mathematical logic.
Thanks!
I keep thinking that eventually we need the keywords formal system and language to point somewhere. Currently I always have in the text some workaround like
… a formal system – a theory – such that …
But, myself, I can’t look into this right now.
As soon as you start on one entry, you end up working on a chain of other entries. abduction got me working on Charles Peirce, which got me needing indexed monoidal category, which is only a stub.
To me the examples at abduction look like “from $A$ and $A\Rightarrow B$ deduce $B$” (deduction), “from $A$ and $B$ deduce $A\Rightarrow B$” (induction) and “from $B$ and $A\Rightarrow B$ deduce $A$” (abduction). Is that correct?
Presumably deduction includes also the introduction and elimination rules of all other logical connectives and quantifiers, not just the elimination rule of implication as mentioned there. Are there versions of induction and abduction for those, and if so, what is the general rule distinguishing abduction from induction?
Good question, and one I’ll need to think about. I think Peirce was looking for examples from each variety of reasoning that he could make as comparable as possible. So the example may give a deceptive appearance if the cases are taken as core examples of each variety.
“from $B$ and $A\Rightarrow B$ deduce $A$” can’t be the whole story for abduction, as there may be $A_1, A_2, ...$ similarly placed. Somehow one takes one of these to be the ’best’ explanation. If there’s a fixed list of such $A_i$ though, one can often phrase things in Bayesian fashion, so the best explanation is one which is by far the most plausible. On the other hand, if we have a range of explanations involving valid deductions, then posterior probability will be proportional to prior.
I’ll think on.
Would perhaps a more modern perspective take ’abduction’ to refer to the entire process of Bayesian reasoning, with consequent probabilities assigned to each possible explanation, rather than insisting on there being a unique “best” explanation that we deduce (“abduce”?)?
If Popper had problems with inductive reasoning, you may imagine that abduction is even more controversial. For one thing, we’d have to have a good notion of explanation, and for that we might have to wade through the literature, e.g., Four decades of explanation. It’s clearly not just a matter of devising a statement which has the thing to be explained (explicandum) as a consequence. You want it to ’give the reason for’ the observation. This is notoriously difficult to do. There’s a growing literature now on the concept of ’explanatory proofs’ in mathematics.
In situations where you have a set of rival hypotheses explaining the observation, it may be possible to employ Bayesian reasoning. What to do, however, if you don’t think you have an exhaustive set is not so obvious. When astronomers in the nineteenth century tried to account for the anomalies in the position of Mercury’s perihelion, they tried out all manner of explanations: maybe there was a planet inside Mercury’s orbit, maybe there was a cloud of dust surrounding the sun, maybe the power in the inverse square law was (2 - $\epsilon$),… Assigning priors and changing these as evidence comes in is one thing, but it would have been wise to have reserved some of your prior for ’none of the above’.
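The “reserve some prior for ’none of the above’” point can be put in toy Bayesian form. A sketch in Python, with the rival hypotheses taken from the Mercury example above but with entirely made-up priors and likelihoods (the catch-all and all the numbers are my own illustration, not anything from the historical record):

```python
# Toy Bayesian update over rival explanations of Mercury's perihelion
# anomaly, with prior mass reserved for an unconceived explanation.
# All numbers are made up for illustration.
priors = {
    "inner planet (Vulcan)": 0.35,
    "dust cloud near the sun": 0.25,
    "inverse-square law with exponent 2 - eps": 0.20,
    "none of the above": 0.20,  # reserved for 'none of the above'
}

# Likelihood of the accumulated evidence under each hypothesis (made up);
# e.g. repeated searches found no planet inside Mercury's orbit.
likelihoods = {
    "inner planet (Vulcan)": 0.05,
    "dust cloud near the sun": 0.10,
    "inverse-square law with exponent 2 - eps": 0.15,
    "none of the above": 0.50,
}

# Bayes' rule: posterior proportional to prior times likelihood.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
Z = sum(unnorm.values())
posterior = {h: p / Z for h, p in unnorm.items()}

for h, p in sorted(posterior.items(), key=lambda hp: -hp[1]):
    print(f"{h}: {p:.3f}")
```

The point is only structural: the catch-all can end up with most of the posterior mass even though it predicts nothing specific, which is where (with hindsight) general relativity was hiding.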
I’m sorry, is that supposed to be an answer to my question? It just made me more confused about what ’abduction’ and ’induction’ are supposed to mean.
I want to move deduction to deductive reasoning and put the formal notion of deduction-as-subproof in deduction. (Some entries already link to that, although not in an interesting way yet.)
ETA: I did the move, but I haven’t written a new deduction.
I’m not sure whether you mean situations where I study constructive logic, or a substructural logic, without saying that it should be adopted but just because I think it’s interesting, or situations where I study how people actually reason, such as the fast and frugal approach.
I was thinking more of the former, which is more in the purview of the Lab, but the latter would also count. (Whether it’s well enough defined to be subject to mathematical study is another matter.)
Looking only at Peirce’s examples, I’d say that abduction has a good chance of working sometimes, but induction is silly.
Let me redo Mike’s paraphrases in #12 to use universal quantification as well as implication (although this still doesn’t capture the plural in ‘these beans’): Deduction derives $Q(a)$ from $\forall{x},\, P(x) \Rightarrow Q(x)$ and $P(a)$; induction derives $\forall{x},\, P(x) \Rightarrow Q(x)$ from $P(a)$ and $Q(a)$; abduction derives $P(a)$ from $\forall{x},\, P(x) \Rightarrow Q(x)$ and $Q(a)$.
In practice, the conclusion of abduction might well be what you want to conclude; but the conclusion of induction is really only useful if you also have $P(b)$ and then apply deduction to get $Q(b)$. And $Q(b)$, again, has a good chance of being right, even when $\forall x,\, P(x) \Rightarrow Q(x)$ is, strictly speaking, still wrong.
If induction and abduction are supposed to be covered by Bayesian reasoning (and deduction too, for that matter), then we see this again. It’s much easier to get a high probability of $Q(b)$ or $P(a)$ than of $\forall x,\, P(x) \Rightarrow Q(x)$.
Edit: fixed typos, some serious.
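The comparison above can be made quantitative with a toy version of Peirce’s bean bag (my own construction, not anything from Peirce): a bag of $N$ beans, a uniform prior on the number $W$ of white beans, and $k$ beans drawn without replacement, all of them white. The universal conclusion of induction ($W = N$) comes out much less probable than the singular prediction that the next bean drawn is white:

```python
from fractions import Fraction
from math import comb

N, k = 20, 5  # bag size and number of beans drawn (all observed white)

# Hypergeometric likelihood of drawing k white beans in k draws
# when the bag holds W white beans.
def likelihood(W):
    return Fraction(comb(W, k), comb(N, k))

prior = {W: Fraction(1, N + 1) for W in range(N + 1)}  # uniform prior on W
unnorm = {W: prior[W] * likelihood(W) for W in range(N + 1)}
Z = sum(unnorm.values())
posterior = {W: p / Z for W, p in unnorm.items()}

# Induction's universal conclusion: "all beans in this bag are white".
p_all_white = posterior[N]

# The singular prediction: "the next bean drawn is white".
p_next_white = sum(posterior[W] * Fraction(W - k, N - k)
                   for W in range(k, N + 1))

print(float(p_all_white))   # 2/7 ≈ 0.286
print(float(p_next_white))  # 6/7 ≈ 0.857
```

With $N = 20$ and $k = 5$ this gives exactly $2/7$ for the universal claim against $6/7$ for the next draw; the latter is Laplace’s rule of succession $(k+1)/(k+2)$, which also holds in this finite exchangeable setting.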
Once we move away from deduction, life inevitably gets messier, so much so that I seem to have confused Mike with my answer @ 16.
We’d never speak as bluntly as Peirce in his example of induction. Much would depend on background knowledge. Do you know already that bags tend to contain beans of the same colour? Do you know how many colours beans come in? Do you know if bags are filled randomly from some production line?
Then there are the quantitative factors, the key one being how many beans are in the sample, but also how many are in the bag. Perhaps you can only estimate the latter.
After all this, you would be looking to arrive at a degree of belief in the statement ’All beans in this bag are white’, or perhaps a distribution of degrees of belief over the proportion of beans which are white.
The best account of this kind of work I ever read is
Edwin Jaynes, Probability Theory: The Logic of Science, Cambridge University Press (2003). (web)
In fact it’s so good I’ll add it now to inductive reasoning.
I gave inductive reasoning the Philosophy-context badge. But maybe it should carry a different one (maybe in addition).
Yes, Jaynes is awesome.
But sometimes I wonder how much this sort of thing —Bayesian probability— really covers the classical philosophical concept of induction (and abduction for that matter). Bayesian reasoning is definitely the correct way to reason, but does it subsume all of the older stuff, or is that stuff just wrong?
I’ve read Jaynes too, and enjoyed it a lot (although I don’t agree with him about everything when he gets outside his domain of expertise).
I’m no longer sure what this discussion is about or whether we’re disagreeing about anything. (-: I’ve never encountered the word “abduction” before; I always thought that “deduction” referred to ordinary mathematical reasoning (from premise to conclusion) and “induction” (outside of a mathematical context) to the reverse (from conclusion to premise). Now I see that apparently depending on which premise one is reasoning backwards to, one calls it “induction” or “abduction”. But there doesn’t seem to be that much difference to me, from a modern Bayesian standpoint – even deduction is the special case of Bayesian reasoning when all probabilities are 0 or 1.
Now I see that apparently depending on which premise one is reasoning backwards to, one calls it “induction” or “abduction”.
But the ‘one’ here may only be Peirce.
By the classical concept of induction, do you mean anything more than Russell’s:
The principle we are examining may be called the principle of induction, and its two parts may be stated as follows:
(a) When a thing of a certain sort A has been found to be associated with a thing of a certain sort B, and has never been found dissociated from a thing of the sort B, the greater the number of cases in which A and B have been associated, the greater is the probability that they will be associated in a fresh case in which one of them is known to be present;
(b) Under the same circumstances, a sufficient number of cases of association will make the probability of a fresh association nearly a certainty, and will make it approach certainty without limit. (1912, 103)
So long as there’s no background knowledge, you can give that a Laplace sunrise solution, which is effectively a certain indifferent prior on the value of a binomial parameter. Of course, this is generally not the case, e.g., here you have extra info concerning the sun.
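For concreteness, the Laplace sunrise solution in a few lines of Python: a uniform (indifferent) prior on the binomial parameter gives probability $(s+1)/(n+2)$ of success on the next trial after $s$ successes in $n$ trials. This is the standard rule of succession, not anything specific to Russell’s text:

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform prior on the binomial
# parameter p, after observing s successes in n trials, the posterior
# probability of success on the next trial is (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return Fraction(successes + 1, trials + 2)

# Russell's clause (b): with an unbroken run of successes, the
# probability of a fresh association approaches certainty without
# limit, but never reaches it.
for n in (1, 10, 100, 1000):
    print(n, float(rule_of_succession(n, n)))
```

The printed probabilities climb towards 1 but stay strictly below it, which is exactly the shape of Russell’s clause (b).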
But what else is there in the classical concept?
As for abduction, it never had the same stability of sense as induction. It could be said that there’s an interesting strand in Whewell on consilience, so that a good explanation is one that explains a wide range of phenomena.
But the ’one’ here may only be Peirce.
But what a ’one’! The man cracked the string diagram notation for indexed monoidal categories in the 19th century!
More seriously, it’s true there’s a lot of quirkiness in Peirce. He links everything up to firstness, secondness and thirdness, and I’m fairly sure I remember that he has 1, 2 and 3 varieties of deduction, induction and abduction in some order.
So we could drop abduction, but then the closely resembling ’inference to the best explanation’ is around, and there are certainly some who don’t see that as reducible to deduction/induction. One important text on that is Peter Lipton’s Inference to the best explanation (hm, a chapter on Bayesian abduction). I remember him talking a lot about the ’loveliest’ explanation. In any case, as I said, the literature on explanation is enormous, and broader than deduction/induction.
Reasoning in mathematics depends on the closed world assumption (everything that is relevant is known). This is obviously not the case for everyday actual human reasoning, and various terms such as abduction have been given meanings in this situation. There are numerous attempts to formalize such information-poor reasoning. A few wikipedia links are Negation as Failure, Non-monotonic Logic, Autoepistemic Logic, Circumscription, Default Logic.
I haven’t followed this field for a while and don’t know how much it is still active or has produced results (as opposed to problems). But anyway, a discussion of types of reasoning or inference does need to mention the “closed world assumption”, which is so much a part of the bedrock of mathematics that it is assumed without mention.
Also in linguistics there are various related notions to deal with inference problems, one being Optimality theory which can be crudely regarded as the problem of finding the most consistent subset of a set of constraints that may conflict. Here the world IS closed but inconsistent because various defaults may be at odds with each other.
According to your link
The closed world assumption (CWA) is the presumption that what is not currently known to be true, is false.
That would seem a rather dangerous assumption in mathematics. Am I to take the Riemann Hypothesis as false? Sure I shouldn’t use it, but that’s a very different thing.
But anyway, outside the final reasoning represented in a research paper, surely there are all kinds of inductive and abductive moves happening. “It seems to me that this problem is analogous to that one, which I’ve just shown to be true. Then that makes the original problem more likely in my eyes, and it may well be solved if I can find the reason the second is true and generalise it via a common ground.” This is just the kind of thing Polya talks about in Mathematics and Plausible Reasoning.
Am I to take the Riemann Hypothesis as false? Sure I shouldn’t use it
Of course, many people do use it, as part of their everyday practice, to determine the scope and consequences of RH, and as research programs go that seems reasonable to me. (And to make a more trivial point, using a proposition one suspects is false, aiming for a proof by contradiction, is a common enough strategy.)
The closed world assumption (CWA) is the presumption that what is not currently known to be true, is false.
That would seem a rather dangerous assumption in mathematics. Am I to take the Riemann Hypothesis as false? Sure I shouldn’t use it, but that’s a very different thing.
Above, the “known” I think is supposed to mean “knowable”, along the lines of “known as an axiom” or deducible from axioms, which is basically talking about Negation as Failure. NaF is used a lot in Logic Programming when for some situations one can assume an effective proof procedure such that failure to find a proof “proves” something is false.
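A minimal sketch of negation as failure, as a toy Python proof search (my own example, with a hypothetical ’childless’ predicate; real logic programming systems are of course far more elaborate):

```python
# Toy negation as failure: "not P" succeeds exactly when the proof
# search for P fails against the closed fact base.
facts = {("parent", "alice", "bob")}

def provable(goal):
    # Base case: the goal is an explicit fact.
    if goal in facts:
        return True
    # Derived predicate: ("childless", x) holds if no ("parent", x, _)
    # can be proved -- failure to prove parenthood "proves" childlessness.
    if goal[0] == "childless":
        x = goal[1]
        return not any(f[0] == "parent" and f[1] == x for f in facts)
    return False

print(provable(("childless", "bob")))    # True: no recorded children
print(provable(("childless", "alice")))  # False: alice is a known parent
```

The closed-world flavour is visible in the first call: nothing in the fact base says bob has children, and that absence is itself taken as proof.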
The “closed world assumption” in general applies to say an example where given initial conditions one can accurately give the position of the Moon in space as a function of time - in a closed world that doesn’t have to worry about a large asteroid crashing into the Moon. I cannot conceive of an analogous situation in mathematics where one wants a closed world that doesn’t take into account the possibility of a meteor strike destroying an axiom.
This supposed “closed world assumption” doesn’t sound to me like any mathematics I’m familiar with.
I can kind of see why you want to say that mathematics is a closed world. We take our basic assumptions to be absolutely certain, while in real life this cannot be true indefinitely (no matter how well we know currently the system that we’re interested in) because something from outside that system (and I think that ‘system’, in the sense of a physical system, is better than ‘world’ here) could change things later.
But even in mathematics, something surprising may come along to make us question —and sometimes ultimately reject— our former assumptions. Whether or not you want to think of that as coming from outside, it does mean that we might well decide that something unprovable is true. (And in a way, we do this often by applying Gödel incompleteness, but you might not want to count any formal system for mathematics as the world of mathematics.)