1. Consider the offer: "If you give me a sound deductive argument that I'll give you $1000, then I'll give you $1000." It feels as if something has been risked in making the offer. But surely nothing has been risked, neither one's integrity nor one's money: a sound argument has true premises and a valid form, so its conclusion is true, and hence any sound argument that I'll give you $1000 would already guarantee that I give it.
Or is there really a risk that there is a sound argument for a contradiction, and hence for any conclusion?
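The step from "a sound argument for a contradiction" to "a sound argument for any conclusion" is the principle of explosion (ex falso quodlibet), and that much, at least, can be checked mechanically. A minimal sketch in Lean 4 (the proposition names `P` and `Q` are illustrative, not from the post):

```lean
-- Ex falso quodlibet: a contradiction entails any proposition Q.
-- `absurd` combines a proof of P with a proof of ¬P to yield anything.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right
```

So if there were a sound argument for a contradiction, a sound argument for the conclusion "I'll give you $1000" would follow at once, which is why the question of risk turns entirely on whether such an argument is possible.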
2. Suppose Fred is a super-smart being who, while very malicious, exhibits perfect integrity (never lies, never cheats, never breaks promises) and is a perfect judge of argument validity. Fred offers me the following deal: if he can find a valid, premise-free argument for a self-contradictory conclusion, he will torment me for eternity; otherwise, he'll give me $1. Should I go for the deal? Surely I should! But it seems too risky, doesn't it?
3. Suppose Kathy is a super-smart being who, while very malicious, exhibits perfect integrity and is omniscient about what is better than what for what persons or classes of persons. Kathy offers me the following deal: If horrible eternal pain is in every respect the best thing that could happen to anyone, then she will cause me to suffer horrible pain for eternity; otherwise, she'll give me $1. Shouldn't I go for this? After all, I either get a dollar, or I get that which is the best possible thing that could happen to anyone.
Do these cases show that we're not psychologically as sure of some things as we say we are? Or do they merely show that we're not very good at counterpossible reasoning or at the use of conditionals?
[The first version of this post had screwed-up formatting, and Larry Niven pointed that out in a comment. I deleted that version, and with it the said comment. My thanks to Larry!]