How to Prove (∀x)(A→B) ≡ ((∃x)A) → B Using First-Order Logic

In summary: if you are stuck on a first-order logic proof, you may be able to find a truth tree proof that you are familiar with and use it as a foundation for building the formal proof. Alternatively, you can look for a theorem that lets a quantifier move past a subformula in which the quantified variable does not occur free.
  • #1
programmer123
Hello,

I am having particular trouble with the below problem. We are using http://studygig.com/uploads/materials/math1090_mathematical_logi.pdf and we must prove this statement using Hilbert style or Equational style proof (first-order logic), but any proof of any type would point me in the right direction.

(∀x)(A→B) ≡ ((∃x)A) → B

So far I have tried to use Ping-Pong Theorem to prove that
1. (∀x)(A→B) → (((∃x)A) → B)
and
2. (((∃x)A) → B) → (∀x)(A→B)
but I can't prove either 1. or 2.

If anyone could provide insight, I would appreciate it.
 
  • #2
To be sure I understand the notation, you are trying to show that
"In all cases, if A is true, then B is true" is logically equivalent to "If there is a case where A is true, then B is true."
Using the contrapositive (can you assume the contrapositive at this point?):
for all x, not B implies not A; and not B implies not (there exists a case where A is true).
Looking at the right side, the negation of (∃x)A is: for all x, not A.
That seems like it should be in the right neighborhood.
 
  • #3
Hey RUber,

Thanks for the reply and you understood my notation perfectly.

I think we can assume the contrapositive. However, when you get to "for all x (not B implies not A)", I don't know how to make that equivalent to "(not B) implies for all x (not A)":

left side: ∀x(~B → ~A)
right side: ~B → ∀x(~A)

There is a theorem that lets the left side imply ∀x(~B) → ∀x(~A), but then there is no way to get that universal quantifier off of the ~B.

Sorry if this is hard to follow, this logic book seems a bit esoteric.
 
  • #4
How about using truth trees for predicate logic? You basically try to falsify your expression by finding a counterexample, i.e., by making the premises true and the conclusion false.
 
  • #5
Hey WWGD,

Thanks for the response.

Truth trees seem like an interesting concept from my quick Googling, but we weren't introduced to those yet in the text and I think that would be outside of the scope of a formal proof in first-order logic (or a Hilbert style proof).

Is there perhaps any way to translate a truth tree into a formal logic proof that you know of? If I could figure out a truth tree proof, maybe that could help me.
 
  • #6
Hi Programmer, a truth tree is actually just a way of trying to invalidate an argument: the idea is to assume the negation of the conclusion and see whether it, together with the premises, leads to a contradiction. So, e.g., if you are given ## A \rightarrow B ## to prove, you assume ## A ## and ## \lnot B ## and check whether a contradiction follows. Please give me till tomorrow; it is kind of late here and I need to get up a bit early. Will be back ASAP.
 
  • #7
From a logical perspective, if the cases that falsify the claim are equivalent, then the statements are equivalent.
##\forall x (A \to B) ## is false when?
Only when ##\exists x( A \text{ and } \text{ not } B ) .##
This is false exactly when, for all x, not B implies not A, i.e., A implies B.

The right side,##\exists x( A) \to B## is false when?
Only when ##\exists x( A) \text{ and } \text{ not } B.##
This is false when not B implies not A (for all x), and exists x( A) implies B

It looks like these are essentially the same.

If you look at this from a set theory perspective, for all x, A implies B means that A is a subset of B. This is really saying that A is only possible within the context of B being true. If B is false, then there is no possibility for A to be true. So the left side implies the right side.
Does the right side imply the left?
If there exists an x in A, then B must be true...Using the contrapositive, (not B ) then forall x (not A) , should bring you back to the same set description, indicating that no part of set A is outside of set B.
 
  • #8
Hey RUber,

I definitely see what you are saying, and if I could prove this using intuition or naive logic, it would be so much easier.

I just can't seem to translate that reasoning into the strict formal logic of only using inference rules and absolute theorems to derive one side from the other.

You essentially got it down to left side: NOT (∃x(A and not B)) , right side: NOT (∃x(A) and not B), but existential is around both terms for left side and only around one term for right side. That's the same dilemma as below where I have universal around both terms for left and universal around only one term for right.

left side:
∀x(A→B)

∀x(notA ∨ B)

right side:
∃x(A)→B

not(∀x(notA))→B

∀x(notA)∨B

Any idea where I can go from there?
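Since the two normal forms only differ in how far the quantifier reaches, a brute-force semantic check over a small finite domain can make the gap concrete. This is just an illustrative sketch, not part of the requested Hilbert-style proof; the domain, predicate encodings, and names are my own choices:

```python
from itertools import product

D = [0, 1]  # a small finite domain

def lhs(A, B):
    # ∀x(¬A ∨ B): the quantifier covers both disjuncts
    return all((not A(x)) or B(x) for x in D)

def rhs(A, B, x_free):
    # ∀x(¬A) ∨ B: the quantifier covers only ¬A; any x in B stays free
    return all(not A(x) for x in D) or B(x_free)

# Every unary predicate on D is just a pair of truth values.
preds = [lambda x, t=t: t[x] for t in product([False, True], repeat=len(D))]

# If B has no free x (it ignores its argument), the two forms always agree.
for A in preds:
    for b in (False, True):
        B = lambda x, b=b: b
        for x_free in D:
            assert lhs(A, B) == rhs(A, B, x_free)

# With a free x in B they can come apart, e.g. A: x=0, B: x=1, x := 1.
A, B = (lambda x: x == 0), (lambda x: x == 1)
print(lhs(A, B), rhs(A, B, 1))  # False True
```

So the two forms coincide whenever B does not depend on x, which hints at the side condition the thread eventually uncovers.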
 
  • #9
I see the trouble. I think this needs to be broken up into showing that one implies the other and vice versa; I don't think you'll get there with strict equivalences.
Does this work?
∀x(A→B) ⇔ ∀x(~B→~A) ⇒ (∀x(~B)→∀x(~A)) ⇔ ~B→ ∀x(~A) ⇔ ∃x(A)→B
and...
∃x(A)→B ⇒ A→B ⇔ ∀x(A→B)
Thus, since one implies two and two implies one, the biconditional is true.
 
  • #10
Wow, that is actually at least very close, but I still can't figure out a few of your steps.

∀x (~B)→∀x(~A) ⇔~B→ ∀x(~A)
is this a theorem that is true in general? I can't seem to find any reference in the text that allows for this removal of the universal quantifier (but I could be missing it).

∃x(A)→B ⇒ A→B
I guess this is similar to above. Are we allowed to remove the existential quantifier like that?

A →B ⇔ ∀x(A→B)
This looks like weak generalization, but we can only use weak generalization if x does not occur free in our assumptions, and our assumption is what we started with ∃x(A)→B, which could have a free x in B.

If you could shed any light on the above, that would be great. It could be a shortcoming of my own knowledge that I don't realize why these are allowed.
 
  • #11
programmer123 said:
Wow, that is actually at least very close, but I still can't figure out a few of your steps.

∀x (~B)→∀x(~A) ⇔~B→ ∀x(~A)
is this a theorem that is true in general? I can't seem to find any reference in the text that allows for this removal of universal quantifier (but I could be missing it)
You are right, maybe that should be a one-way conditional statement: ∀x (~B)→∀x(~A) ⇒ ~B→ ∀x(~A)
∃x(A)→B ⇒ A →B
I guess this is similar to above. Are we allowed to remove the existential quantifier like that?
If it is true that existence of any x in A implies B, then A would indicate the existence is true. That's why I needed to switch to the one-way method.
A →B ⇔ ∀x(A→B)
This looks like weak generalization, but we can only use weak generalization if x does not occur free in our assumptions, and our assumption is what we started with ∃x(A)→B, which could have a free x in B.
I suppose I would look at it this way,
∀x(A→B) is the same as A→B, isn't it? A free x in B doesn't affect the logic. B and not A is acceptable in all cases.
A implies ∃x(A), so if it is true that ∃x(A)→B, then A→B.
For all x, ( if A then ∃x(A) then B ).
 
  • #12
Hmm, I'm still not sure of the first two. They seem right, but I just can't find any support for them.

I suppose I would look at it this way, ∀x(A→B) is the same as A→B, isn't it? A free x in B doesn't affect the logic. B and not A is acceptable in all cases. A implies ∃x(A), so if it is true that ∃x(A)→B, then A→B. For all x, ( if A then ∃x(A) then B ).
Assume this interpretation: A is "x=0", B is "x=1", the domain is {0, 1}, and x is 1.
A→B is true, because A is false
∀x(A→B) is false because there is an x that will make A true and B false, namely x being 0.
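This interpretation can be checked mechanically. A quick sketch (the encoding of the formulas as Python predicates is my own):

```python
D = [0, 1]
A = lambda x: x == 0   # the formula "x = 0"
B = lambda x: x == 1   # the formula "x = 1"
x = 1                  # the value picked for the free variable

open_form   = (not A(x)) or B(x)                  # A→B evaluated at x = 1
closed_form = all((not A(v)) or B(v) for v in D)  # ∀x(A→B)

print(open_form, closed_form)  # True False
```

The open formula is true at x = 1 (vacuously, since A fails there), while the universally quantified sentence is false because x = 0 makes A true and B false.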
 
  • #13
So you interpret "A then B" as "there exists an x satisfying A such that B is true"?
If A then B implies the rule holds all the time, doesn't it?
 
  • #14
Hey RUber,

Yeah, it gets kind of confusing. The way our class is interpreting them is like this:

∃x(A→B) is true if at least one x in the domain satisfies A→B
A→B is true if the specific x we have just picked from the domain satisfies A→B
∀x(A→B) is true if for all x in the domain A→B is always true

So after a domain is picked, ∃x(A→B) and ∀x(A→B) are either true or false and can not change.

After a domain is picked, A→B is still not known as true or false until we also pick x (assuming x occurs free in A or B). And if we change the value of x, A→B can also change.
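That reading can be made concrete with a quick evaluation over a finite domain (an illustrative sketch; the particular open formula "x is even" and the domain are my own choices):

```python
D = [0, 1, 2]
phi = lambda x: x % 2 == 0   # some open formula φ(x), here "x is even"

# Once D is fixed, the quantified sentences have a fixed truth value:
exists_phi = any(phi(x) for x in D)  # ∃x φ
forall_phi = all(phi(x) for x in D)  # ∀x φ

# The open formula's value still depends on which x we pick:
print([phi(x) for x in D])           # [True, False, True]
print(exists_phi, forall_phi)        # True False
```

This is exactly the distinction above: the quantified versions are settled by the domain alone, while the open formula changes truth value as x varies.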
 
  • #15
Hey RUber, WWGD, and anyone else who was working on this problem. I actually just disproved this statement, and then found out it was indeed a typo by my teacher. Sorry if I caused anyone much pain over this problem.

To disprove that (∀x)(A → B) ≡ ((∃x)A) → B is an absolute theorem, consider the following interpretation:

Let D be {0, 1}, let A be x=0, let B be x=0, and let the value x_D assigned to x be 1.

((∀x)(A → B) ≡ ((∃x)A) → B)_D

(∀x∈D)(x=0 → x=0) ≡ ((∃x∈D)x=0) → x_D=0

(∀x∈D)(x=0 → x=0) ≡ ((∃x∈D)x=0) → 1=0

The left side is true, but the right side has a true antecedent ((∃x∈D)x=0 holds at x=0) and a false consequent (1=0), so the right side is false and the two sides disagree.
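Checking this interpretation mechanically (a quick sketch with my own encoding; the variable x is assigned the value 1, as in the interpretation above):

```python
D = [0, 1]
A = lambda v: v == 0   # "x = 0"
B = lambda v: v == 0   # "x = 0" again -- note the free x in B
x = 1                  # the interpretation assigns x the value 1

left  = all((not A(v)) or B(v) for v in D)   # (∀x)(A→B): x bound everywhere
right = (not any(A(v) for v in D)) or B(x)   # ((∃x)A)→B: B keeps its free x
print(left, right)  # True False
```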
 
  • #16
It is assumed that B contains no free occurrences of x, right? Otherwise, the equivalence is not valid.
 
  • #17
Yeah, that was actually the typo from the instructor, it was supposed to say that x does not occur free in B.

The original problem actually did not specify that B contains no free occurrences of x so, as you said, the equivalence is not valid.
 
  • #18
programmer123 said:
Hello,

I am having particular trouble with the below problem. We are using http://studygig.com/uploads/materials/math1090_mathematical_logi.pdf and we must prove this statement using Hilbert style or Equational style proof (first-order logic), but any proof of any type would point me in the right direction.

(∀x)(A→B) ≡ ((∃x)A) → B

So far I have tried to use Ping-Pong Theorem to prove that
1. (∀x)(A→B) → (((∃x)A) → B)
and
2. (((∃x)A) → B) → (∀x)(A→B)
but I can't prove either 1. or 2.

If anyone could provide insight, I would appreciate it.

Okay, if there are no free occurrences of x in B, then some of the manipulations RUber did ought to do the trick. A couple more hints are...

To get 1), use the contrapositive and the fact that ∀x(F→G(x)) → (F→∀xG(x)) when x does not occur free in F.

To get 2), you'll need to know that (G→H)→((F→G)→(F→H)) and also that F→(G→H) ≡ G→(F→H). This last one can be very useful if G happens to be a theorem, so that you can apply MP.

This is the style of proof you would find in Hilbert and Ackermann's logic book.

programmer123 said:
but any proof of any type would point me in the right direction
This is a good attitude to have.
 
  • #19
Hey techmologist, thanks for those hints. I have since solved it, now that I know there are no free occurrences of x in B.
This is the style of proof you would find in Hilbert and Ackermann's logic book.
Interesting, thanks for the references. I am going to look at some other logic books to broaden my perspective, especially since no one logic system or even notation seems to be the standard.
 
  • #20
programmer123 said:
Hey techmologist, thanks for those hints and I have since solved it after knowing there are no free occurrences of x in B.

Cool. How did you do it?

Interesting, thanks for the references. I am going to look at some other logic books to broaden my perspective, especially since no one logic system or even notation seems to be the standard.

Yeah, that's a good idea. Depending on how much you like logic, you might try to prove the axioms of one system as theorems in another system, and vice versa. That can help you develop a feel for what is going on.
 
  • #21
techmologist said:
Cool. How did you do it?

We have a theorem that states: (∀x)(A'∨B') ≡ A'∨(∀x)B' if x dnof (does not occur free) in A'

So the equational style proof is:

((∃x)A)→B
⇔<def'n of ∃>
(¬(∀x)¬A)→B
⇔<abs thm + Leib>
¬(¬(∀x)¬A)∨B
⇔<abs thm + Leib>
((∀x)¬A)∨B
⇔<axiom>
B∨((∀x)¬A)
⇔<thm (∀x)(A'∨B') ≡ A'∨(∀x)B' if x dnof in A'>
(∀x)(B∨¬A)
⇔<abs thm + Strong Leib>
(∀x)(A→B)
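For what it's worth, the corrected statement can also be machine-checked. Here is a sketch in Lean 4 (my own formalization, not from the text); taking B : Prop, rather than a predicate on α, is precisely the side condition that x does not occur free in B:

```lean
theorem forall_imp_iff_exists_imp (α : Type) (A : α → Prop) (B : Prop) :
    (∀ x, A x → B) ↔ ((∃ x, A x) → B) :=
  Iff.intro
    (fun h hex => hex.elim h)    -- (∀x)(A→B) ⇒ ((∃x)A)→B
    (fun h x hx => h ⟨x, hx⟩)    -- ((∃x)A)→B ⇒ (∀x)(A→B)
```

The forward direction eliminates the existential with the universal hypothesis; the backward direction introduces a witness and applies the hypothesis to it.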
 
  • #22
programmer123 said:
We have a theorem that states: (∀x)(A'∨B') ≡ A'∨(∀x)B' if x dnof in A'

So the equational style proof is:

((∃x)A)→B
⇔<def'n of ∃>
(¬(∀x)¬A)→B
⇔<abs thm + Leib>
¬(¬(∀x)¬A)∨B
⇔<abs thm + Leib>
((∀x)¬A)∨B
⇔<axiom>
B∨((∀x)¬A)
⇔<thm (∀x)(A'∨B') ≡ A'∨(∀x)B' if x dnof in A'>
(∀x)(B∨¬A)
⇔<abs thm + Strong Leib>
(∀x)(A→B)
Nice. I like how you were able to prove it all in one sequence with equations. I don't know what all the abbreviations, like "Leib", stand for, but I follow what you did.

I broke it up into two steps, as you originally suggested. I used the weaker "one-way" version of that theorem you mention, with A' negated so that it turns into an implication. I also used a rule of inference that says: from A'→B'(x), derive A'→∀xB'(x) when x dnof in A'. I made the occurrence of x in a formula explicit, writing it as A(x); A(y) is that same formula with all the occurrences of x replaced by y.

1)

∀x(A(x)→B) ≡ ∀x(~B → ~A(x))
<logical equivalence of contrapositive>

∀x(~B → ~A(x)) → (~B →∀x(~A(x)))
<theorem mentioned above>

~B →∀x(~A(x)) ≡ (∃xA(x)) → B
<contrapositive, def of ∃>

2)

((∃xA(x)) → B) → ( (A(y) →∃xA(x)) → (A(y)→B) )
<propositional logic theorem, (G→H)→((F→G)→(F→H))>

(A(y) →∃xA(x)) → ( ((∃xA(x)) → B) → (A(y)→B) )
<prop. logic thm, F→(G→H) ≡ G→(F→H)>

((∃xA(x)) → B) → (A(y)→B)
<thm A(y) →∃xA(x), then Modus Ponens>

((∃xA(x)) → B) →∀y(A(y)→B)
<rule of inference>

Then you can replace that bound y with a bound x.

BTW, where do I find the logical not symbol? I don't see it in the list.
 
  • #23
BTW, Where do I find the logical not symbol? I don't see it in the list.
I just copied and pasted it from Google.

Nice solution, I can follow both parts. (G→H)→((F→G)→(F→H)) and F→(G→H) ≡ G→(F→H) are two very useful theorems now that I think about it.

I don't know what all the abbreviations, like "Leib", stand for
Leib (Leibniz) is a basic rule of inference in our logic. It allows us to apply a theorem to part of the formula and leave the rest as is. You did this in your proofs too, but probably the logic you're using allows this without any citing.
 
  • #24
programmer123 said:
I just copied and pasted it from google.

Okay, thanks.
Leib (Leibniz) is a basic rule of inference in our logic. It allows us to apply a theorem to part of the formula and leave the rest as is. You did this in your proofs too, but probably the logic you're using allows this without any citing.

You are right, I did not cite that as a rule of inference. In Hilbert and Ackermann's logic, it is a derived rule of inference. Their basic rule of inference is uniform replacement, where you have to replace all occurrences of one thing with another thing. So they show that if you have a formula A in which the subformula B appears, then if B' ≡ B, you can build up the formula A' ≡ A. Here A' is the formula obtained from A by replacing B with its logical equivalent. But yeah, I was just taking it for granted, and probably a few other results, too.
 
