
"mcs" — 2012/11/12 — 16:25 — page i — #1

Mathematics for Computer Science

revised Monday 12th November, 2012, 16:25

Eric Lehman

Google Inc.

F Thomson Leighton

Department of Mathematics

and the Computer Science and AI Laboratory,

Massachusetts Institute of Technology;

Akamai Technologies

Albert R Meyer

Department of Electrical Engineering and Computer Science

and the Computer Science and AI Laboratory,

Massachusetts Institute of Technology

Creative Commons © 2011, Eric Lehman, F Tom Leighton, Albert R Meyer.


Contents

I Proofs

Introduction

1 What is a Proof?
  1.1 Propositions
  1.2 Predicates
  1.3 The Axiomatic Method
  1.4 Our Axioms
  1.5 Proving an Implication
  1.6 Proving an "If and Only If"
  1.7 Proof by Cases
  1.8 Proof by Contradiction
  1.9 Good Proofs in Practice
  1.10 References

2 The Well Ordering Principle
  2.1 Well Ordering Proofs
  2.2 Template for Well Ordering Proofs
  2.3 Factoring into Primes
  2.4 Well Ordered Sets

3 Logical Formulas
  3.1 Propositions from Propositions
  3.2 Propositional Logic in Computer Programs
  3.3 Equivalence and Validity
  3.4 The Algebra of Propositions
  3.5 The SAT Problem
  3.6 Predicate Formulas

4 Mathematical Data Types
  4.1 Sets
  4.2 Sequences
  4.3 Functions
  4.4 Binary Relations
  4.5 Finite Cardinality

5 Induction
  5.1 Ordinary Induction
  5.2 Strong Induction
  5.3 Strong Induction vs. Induction vs. Well Ordering
  5.4 State Machines

6 Recursive Data Types
  6.1 Recursive Definitions and Structural Induction
  6.2 Strings of Matched Brackets
  6.3 Recursive Functions on Nonnegative Integers
  6.4 Arithmetic Expressions
  6.5 Induction in Computer Science

7 Infinite Sets
  7.1 Infinite Cardinality
  7.2 The Halting Problem
  7.3 The Logic of Sets
  7.4 Does All This Really Work?

II Structures

Introduction

8 Number Theory
  8.1 Divisibility
  8.2 The Greatest Common Divisor
  8.3 Prime Mysteries
  8.4 The Fundamental Theorem of Arithmetic
  8.5 Alan Turing
  8.6 Modular Arithmetic
  8.7 Remainder Arithmetic
  8.8 Turing's Code (Version 2.0)
  8.9 Multiplicative Inverses and Cancelling
  8.10 Euler's Theorem
  8.11 RSA Public Key Encryption
  8.12 What has SAT got to do with it?
  8.13 References

9 Directed Graphs & Partial Orders
  9.1 Digraphs & Vertex Degrees
  9.2 Adjacency Matrices
  9.3 Walk Relations
  9.4 Directed Acyclic Graphs & Partial Orders
  9.5 Weak Partial Orders
  9.6 Representing Partial Orders by Set Containment
  9.7 Path-Total Orders
  9.8 Product Orders
  9.9 Scheduling
  9.10 Equivalence Relations
  9.11 Summary of Relational Properties

10 Communication Networks
  10.1 Complete Binary Tree
  10.2 Routing Problems
  10.3 Network Diameter
  10.4 Switch Count
  10.5 Network Latency
  10.6 Congestion
  10.7 2-D Array
  10.8 Butterfly
  10.9 Benes Network

11 Simple Graphs
  11.1 Vertex Adjacency and Degrees
  11.2 Sexual Demographics in America
  11.3 Some Common Graphs
  11.4 Isomorphism
  11.5 Bipartite Graphs & Matchings
  11.6 The Stable Marriage Problem
  11.7 Coloring
  11.8 Simple Walks
  11.9 Connectivity
  11.10 Odd Cycles and 2-Colorability
  11.11 Forests & Trees
  11.12 References

12 Planar Graphs
  12.1 Drawing Graphs in the Plane
  12.2 Definitions of Planar Graphs
  12.3 Euler's Formula
  12.4 Bounding the Number of Edges in a Planar Graph
  12.5 Returning to K5 and K3,3
  12.6 Coloring Planar Graphs
  12.7 Classifying Polyhedra
  12.8 Another Characterization for Planar Graphs

III Counting

Introduction

13 Sums and Asymptotics
  13.1 The Value of an Annuity
  13.2 Sums of Powers
  13.3 Approximating Sums
  13.4 Hanging Out Over the Edge
  13.5 Products
  13.6 Double Trouble
  13.7 Asymptotic Notation

14 Cardinality Rules
  14.1 Counting One Thing by Counting Another
  14.2 Counting Sequences
  14.3 The Generalized Product Rule
  14.4 The Division Rule
  14.5 Counting Subsets
  14.6 Sequences with Repetitions
  14.7 Counting Practice: Poker Hands
  14.8 The Pigeonhole Principle
  14.9 Inclusion-Exclusion
  14.10 Combinatorial Proofs
  14.11 References

15 Generating Functions
  15.1 Infinite Series
  15.2 Counting with Generating Functions
  15.3 Partial Fractions
  15.4 Solving Linear Recurrences
  15.5 Formal Power Series
  15.6 References

IV Probability

Introduction

16 Events and Probability Spaces
  16.1 Let's Make a Deal
  16.2 The Four Step Method
  16.3 Strange Dice
  16.4 Set Theory and Probability

17 Conditional Probability
  17.1 Definition and Notation
  17.2 Why Tree Diagrams Work
  17.3 A Posteriori Probabilities
  17.4 The Law of Total Probability
  17.5 Independence
  17.6 Mutual Independence
  17.7 The Birthday Principle

18 Random Variables
  18.1 Random Variable Examples
  18.2 Independence
  18.3 Distribution Functions
  18.4 Great Expectations
  18.5 Linearity of Expectation

19 Deviation from the Mean
  19.1 Why the Mean?
  19.2 Markov's Theorem
  19.3 Chebyshev's Theorem
  19.4 Properties of Variance
  19.5 Estimation by Random Sampling
  19.6 Confidence versus Probability
  19.7 Sums of Random Variables
  19.8 Really Great Expectations

20 Random Walks
  20.1 Gambler's Ruin
  20.2 Random Walks on Graphs

V Recurrences

Introduction

21 Recurrences
  21.1 The Towers of Hanoi
  21.2 Merge Sort
  21.3 Linear Recurrences
  21.4 Divide-and-Conquer Recurrences
  21.5 A Feel for Recurrences

Bibliography

Glossary of Symbols

Index


I Proofs


Introduction

This text explains how to use mathematical models and methods to analyze problems that arise in computer science. Proofs play a central role in this work because the authors share a belief with most mathematicians that proofs are essential for genuine understanding. Proofs also play a growing role in computer science; they are used to certify that software and hardware will always behave correctly, something that no amount of testing can do.

Simply put, a proof is a method of establishing truth. Like beauty, "truth" sometimes depends on the eye of the beholder, and it should not be surprising that what constitutes a proof differs among fields. For example, in the judicial system, legal truth is decided by a jury based on the allowable evidence presented at trial. In the business world, authoritative truth is specified by a trusted person or organization, or maybe just your boss. In fields such as physics or biology, scientific truth¹ is confirmed by experiment. In statistics, probable truth is established by statistical analysis of sample data.

Philosophical proof involves careful exposition and persuasion typically based on a series of small, plausible arguments. The best example begins with "Cogito ergo sum," a Latin sentence that translates as "I think, therefore I am." This phrase comes from the beginning of a 17th century essay by the mathematician/philosopher, Rene Descartes, and it is one of the most famous quotes in the world: do a web search for it, and you will be flooded with hits.

Deducing your existence from the fact that you're thinking about your existence is a pretty cool and persuasive-sounding idea. However, with just a few more lines of argument in this vein, Descartes goes on to conclude that there is an infinitely beneficent God. Whether or not you believe in an infinitely beneficent God, you'll probably agree that any very short "proof" of God's infinite beneficence is bound to be far-fetched. So even in masterful hands, this approach is not reliable.

Mathematics has its own specific notion of "proof."

Definition. A mathematical proof of a proposition is a chain of logical deductions leading to the proposition from a base set of axioms.

The three key ideas in this definition are highlighted: proposition, logical deduction, and axiom. Chapter 1 examines these three ideas along with some basic ways of organizing proofs. Chapter 2 introduces the Well Ordering Principle, a basic method of proof; later, Chapter 5 introduces the closely related proof method of Induction.

If you're going to prove a proposition, you'd better have a precise understanding of what the proposition means. To avoid ambiguity and uncertain definitions in ordinary language, mathematicians use language very precisely, and they often express propositions using logical formulas; these are the subject of Chapter 3.

The first three Chapters assume the reader is familiar with a few mathematical concepts like sets and functions. Chapters 4 and 7 offer a more careful look at such mathematical data types, examining in particular properties and methods for proving things about infinite sets. Chapter 6 goes on to examine recursively defined data types.

¹Actually, only scientific falsehood can be demonstrated by an experiment — when the experiment fails to behave as predicted. But no amount of experiment can confirm that the next experiment won't fail. For this reason, scientists rarely speak of truth, but rather of theories that accurately predict past, and anticipated future, experiments.


1 What is a Proof?

1.1 Propositions

Definition. A proposition is a statement that is either true or false.

For example, both of the following statements are propositions. The first is true, and the second is false.

Proposition 1.1.1. 2 + 3 = 5.

Proposition 1.1.2. 1 + 1 = 3.

Being true or false doesn't sound like much of a limitation, but it does exclude statements such as, "Wherefore art thou Romeo?" and "Give me an A!" It also excludes statements whose truth varies with circumstance such as, "It's five o'clock," or "the stock market will rise tomorrow."

Unfortunately it is not always easy to decide if a proposition is true or false:

Proposition 1.1.3. For every nonnegative integer, n, the value of n² + n + 41 is prime.

(A prime is an integer greater than 1 that is not divisible by any other integer greater than 1. For example, 2, 3, 5, 7, 11 are the first five primes.) Let's try some numerical experimentation to check this proposition. Let¹

    p(n) ::= n² + n + 41.    (1.1)

We begin with p(0) = 41, which is prime; then

    p(1) = 43, p(2) = 47, p(3) = 53, ..., p(20) = 461

are each prime. Hmmm, starts to look like a plausible claim. In fact we can keep checking through n = 39 and confirm that p(39) = 1601 is prime.

But p(40) = 40² + 40 + 41 = 41 · 41, which is not prime. So it's not true that the expression is prime for all nonnegative integers. In fact, it's not hard to show that no polynomial with integer coefficients can map all nonnegative numbers into prime numbers, unless it's a constant (see Problem 1.6). The point is that in general, you can't check a claim about an infinite set by checking a finite set of its elements, no matter how large the finite set.

¹The symbol ::= means "equal by definition." It's always ok simply to write "=" instead of ::=, but reminding the reader that an equality holds by definition can be helpful.
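The numerical experiment above is easy to automate. Here is a minimal Python sketch; the helper names is_prime and p are ours, and the trial-division primality test is just the simplest choice:

```python
def is_prime(k: int) -> bool:
    """True iff k is greater than 1 and has no divisor between 2 and k - 1."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def p(n: int) -> int:
    # The polynomial from equation (1.1): p(n) ::= n^2 + n + 41.
    return n * n + n + 41

# p(0) through p(39) are all prime ...
all_prime_through_39 = all(is_prime(p(n)) for n in range(40))
# ... but p(40) = 41 * 41 is composite.
p40 = p(40)
```

Running this confirms the text: the first forty values are prime, yet p(40) factors as 41 · 41, so the finite evidence proved nothing about the infinite claim.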

By the way, propositions like this about all numbers or all items of some kind are so common that there is a special notation for them. With this notation, Proposition 1.1.3 would be

    ∀n ∈ ℕ. p(n) is prime.    (1.2)

Here the symbol ∀ is read "for all." The symbol ℕ stands for the set of nonnegative integers, namely, 0, 1, 2, 3, ... (ask your instructor for the complete list). The symbol "∈" is read as "is a member of," or "belongs to," or simply as "is in." The period after the ℕ is just a separator between phrases.

Here are two even more extreme examples:

Proposition 1.1.4 (Euler's Conjecture). The equation

    a⁴ + b⁴ + c⁴ = d⁴

has no solution when a, b, c, d are positive integers.

Euler (pronounced "oiler") conjectured this in 1769. But the proposition was proved false 218 years later by Noam Elkies at a liberal arts school up Mass Ave. The solution he found was a = 95800, b = 217519, c = 414560, d = 422481.

In logical notation, Euler's Conjecture could be written,

    ∀a ∈ ℤ⁺ ∀b ∈ ℤ⁺ ∀c ∈ ℤ⁺ ∀d ∈ ℤ⁺. a⁴ + b⁴ + c⁴ ≠ d⁴.

Here, ℤ⁺ is a symbol for the positive integers. Strings of ∀'s like this are usually abbreviated for easier reading:

    ∀a, b, c, d ∈ ℤ⁺. a⁴ + b⁴ + c⁴ ≠ d⁴.
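The counterexample quoted above is easy to verify, since Python integers are arbitrary precision and the 23-digit fourth powers are computed exactly:

```python
# Check the solution to a^4 + b^4 + c^4 = d^4 that refutes Euler's Conjecture.
a, b, c, d = 95800, 217519, 414560, 422481
euler_conjecture_fails = (a**4 + b**4 + c**4 == d**4)
```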

Proposition 1.1.5. 313(x³ + y³) = z³ has no solution when x, y, z ∈ ℤ⁺.

This proposition is also false, but the smallest counterexample has more than 1000 digits!

It's worth mentioning a couple of further famous propositions whose proofs were sought for centuries before finally being discovered:

Proposition 1.1.6 (Four Color Theorem). Every map can be colored with 4 colors so that adjacent² regions have different colors.

²Two regions are adjacent only when they share a boundary segment of positive length. They are not considered to be adjacent if their boundaries meet only at a few points.


Several incorrect proofs of this theorem have been published, including one that stood for 10 years in the late 19th century before its mistake was found. A laborious proof was finally found in 1976 by mathematicians Appel and Haken, who used a complex computer program to categorize the four-colorable maps; the program left a few thousand maps uncategorized, and these were checked by hand by Haken and his assistants — including his 15-year-old daughter. There was reason to doubt whether this was a legitimate proof: the proof was too big to be checked without a computer, and no one could guarantee that the computer calculated correctly, nor was anyone enthusiastic about exerting the effort to recheck the four-colorings of thousands of maps that were done by hand. Two decades later a mostly intelligible proof of the Four Color Theorem was found, though a computer is still needed to check four-colorability of several hundred special maps.³

Proposition 1.1.7 (Fermat's Last Theorem). There are no positive integers x, y, and z such that

    xⁿ + yⁿ = zⁿ

for some integer n > 2.

In a book he was reading around 1630, Fermat claimed to have a proof but not enough space in the margin to write it down. Over the years, it was proved to hold for all n up to 4,000,000, but we've seen that this shouldn't necessarily inspire confidence that it holds for all n; there is, after all, a clear resemblance between Fermat's Last Theorem and Euler's false Conjecture. Finally, in 1994, Andrew Wiles gave a proof, after seven years of working in secrecy and isolation in his attic. His proof did not fit in any margin.⁴

Finally, let's mention another simply stated proposition whose truth remains unknown.

Proposition 1.1.8 (Goldbach's Conjecture). Every even integer greater than 2 is the sum of two primes.

Goldbach's Conjecture dates back to 1742. It is known to hold for all numbers up to 10¹⁶, but to this day, no one knows whether it's true or false.
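Checking Goldbach's Conjecture up to a modest bound takes only a few lines of code. A Python sketch (the helper names are ours, and the bound 1000 is of course tiny compared with 10¹⁶):

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test, adequate for small k."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def goldbach_split(n: int):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 1000 should have such a split.
holds_through_1000 = all(goldbach_split(n) is not None
                         for n in range(4, 1001, 2))
```

As with Proposition 1.1.3, a clean run of this check is evidence, not a proof; the conjecture could still fail at some enormous even number.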

For a computer scientist, some of the most important things to prove are the correctness of programs and systems — whether a program or system does what it's supposed to. Programs are notoriously buggy, and there's a growing community of researchers and practitioners trying to find ways to prove program correctness. These efforts have been successful enough in the case of CPU chips that they are now routinely used by leading chip manufacturers to prove chip correctness and avoid mistakes like the notorious Intel division bug in the 1990's.

Developing mathematical methods to verify programs and systems remains an active research area. We'll illustrate some of these methods in Chapter 5.

³The story of the proof of the Four Color Theorem is told in a well-reviewed popular (non-technical) book: "Four Colors Suffice. How the Map Problem was Solved." Robin Wilson. Princeton Univ. Press, 2003, 276pp. ISBN 0-691-11533-8.

⁴In fact, Wiles' original proof was wrong, but he and several collaborators used his ideas to arrive at a correct proof a year later. This story is the subject of the popular book, Fermat's Enigma by Simon Singh, Walker & Company, November, 1997.

1.2 Predicates

A predicate is a proposition whose truth depends on the value of one or more variables.

Most of the propositions above were defined in terms of predicates. For example,

    "n is a perfect square"

is a predicate whose truth depends on the value of n. The predicate is true for n = 4 since four is a perfect square, but false for n = 5 since five is not a perfect square.

Like other propositions, predicates are often named with a letter. Furthermore, a function-like notation is used to denote a predicate supplied with specific variable values. For example, we might name our earlier predicate P:

    P(n) ::= "n is a perfect square".

So P(4) is true, and P(5) is false.

This notation for predicates is confusingly similar to ordinary function notation. If P is a predicate, then P(n) is either true or false, depending on the value of n. On the other hand, if p is an ordinary function, like n² + 1, then p(n) is a numerical quantity. Don't confuse these two!
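The contrast is concrete in code, where the predicate and the function return different types. A Python sketch (the names P and p follow the text; the implementations are ours):

```python
import math

def P(n: int) -> bool:
    """The predicate P(n) ::= "n is a perfect square" -- a truth value."""
    r = math.isqrt(n)          # integer square root, exact for ints
    return r * r == n

def p(n: int) -> int:
    """An ordinary function, like n^2 + 1 -- a numerical quantity."""
    return n * n + 1
```

So P(4) evaluates to True and P(5) to False, while p(5) evaluates to the number 26.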

1.3 The Axiomatic Method

The standard procedure for establishing truth in mathematics was invented by Euclid, a mathematician working in Alexandria, Egypt around 300 BC. His idea was to begin with five assumptions about geometry, which seemed undeniable based on direct experience. (For example, "There is a straight line segment between every pair of points.") Propositions like these that are simply accepted as true are called axioms.


Starting from these axioms, Euclid established the truth of many additional propositions by providing "proofs." A proof is a sequence of logical deductions from axioms and previously-proved statements that concludes with the proposition in question. You probably wrote many proofs in high school geometry class, and you'll see a lot more in this text.

There are several common terms for a proposition that has been proved. The different terms hint at the role of the proposition within a larger body of work.

• Important true propositions are called theorems.

• A lemma is a preliminary proposition useful for proving later propositions.

• A corollary is a proposition that follows in just a few logical steps from a theorem.

These definitions are not precise. In fact, sometimes a good lemma turns out to be far more important than the theorem it was originally used to prove.

Euclid's axiom-and-proof approach, now called the axiomatic method, remains the foundation for mathematics today. In fact, just a handful of axioms, called the axioms of Zermelo-Fraenkel with Choice (ZFC), together with a few logical deduction rules, appear to be sufficient to derive essentially all of mathematics. We'll examine these in Chapter 7.

1.4 Our Axioms

The ZFC axioms are important in studying and justifying the foundations of mathematics, but for practical purposes, they are much too primitive. Proving theorems in ZFC is a little like writing programs in byte code instead of a full-fledged programming language — by one reckoning, a formal proof in ZFC that 2 + 2 = 4 requires more than 20,000 steps! So instead of starting with ZFC, we're going to take a huge set of axioms as our foundation: we'll accept all familiar facts from high school math.

This will give us a quick launch, but you may find this imprecise specification of the axioms troubling at times. For example, in the midst of a proof, you may start to wonder, "Must I prove this little fact or can I take it as an axiom?" There really is no absolute answer, since what's reasonable to assume and what requires proof depends on the circumstances and the audience. A good general guideline is simply to be up front about what you're assuming.


1.4.1 Logical Deductions

Logical deductions, or inference rules, are used to prove new propositions using previously proved ones.

A fundamental inference rule is modus ponens. This rule says that a proof of P together with a proof that P IMPLIES Q is a proof of Q.

Inference rules are sometimes written in a funny notation. For example, modus ponens is written:

Rule.

    P,   P IMPLIES Q
    ----------------
           Q

When the statements above the line, called the antecedents, are proved, then we can consider the statement below the line, called the conclusion or consequent, to also be proved.

A key requirement of an inference rule is that it must be sound: an assignment of truth values to the letters, P, Q, ..., that makes all the antecedents true must also make the consequent true. So if we start off with true axioms and apply sound inference rules, everything we prove will also be true.

There are many other natural, sound inference rules, for example:

Rule.

    P IMPLIES Q,   Q IMPLIES R
    --------------------------
           P IMPLIES R

Rule.

    NOT(P) IMPLIES NOT(Q)
    ---------------------
         Q IMPLIES P

On the other hand,

Non-Rule.

    NOT(P) IMPLIES NOT(Q)
    ---------------------
         P IMPLIES Q

is not sound: if P is assigned T and Q is assigned F, then the antecedent is true and the consequent is not.

Note that a propositional inference rule is sound precisely when the conjunction (AND) of all its antecedents implies its consequent.

As with axioms, we will not be too formal about the set of legal inference rules. Each step in a proof should be clear and "logical"; in particular, you should state what previously proved facts are used to derive each new conclusion.
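Since a propositional rule is sound exactly when the AND of its antecedents implies its consequent, soundness can be checked mechanically by trying every truth assignment. A Python sketch (the helper names sound and implies are ours):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Truth table of IMPLIES: false only when p is true and q is false.
    return (not p) or q

def sound(antecedents, consequent) -> bool:
    """A rule is sound iff every assignment to P, Q making all
    antecedents true also makes the consequent true."""
    return all(
        implies(all(a(P, Q) for a in antecedents), consequent(P, Q))
        for P, Q in product([True, False], repeat=2)
    )

# Modus ponens:  P,  P IMPLIES Q  |-  Q        -- sound.
modus_ponens_sound = sound(
    [lambda P, Q: P, lambda P, Q: implies(P, Q)],
    lambda P, Q: Q,
)

# The Non-Rule:  NOT(P) IMPLIES NOT(Q)  |-  P IMPLIES Q   -- unsound:
# P = True, Q = False makes the antecedent true, the consequent false.
non_rule_sound = sound(
    [lambda P, Q: implies(not P, not Q)],
    lambda P, Q: implies(P, Q),
)
```

The check reproduces the text's verdicts: modus ponens passes every assignment, while the Non-Rule fails at P = T, Q = F.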


1.4.2 Patterns of Proof

In principle, a proof can be any sequence of logical deductions from axioms and previously proved statements that concludes with the proposition in question. This freedom in constructing a proof can seem overwhelming at first. How do you even start a proof?

Here's the good news: many proofs follow one of a handful of standard templates. Each proof has its own details, of course, but these templates at least provide you with an outline to fill in. We'll go through several of these standard patterns, pointing out the basic idea and common pitfalls and giving some examples. Many of these templates fit together; one may give you a top-level outline while others help you at the next level of detail. And we'll show you other, more sophisticated proof techniques later on.

The recipes below are very specific at times, telling you exactly which words to write down on your piece of paper. You're certainly free to say things your own way instead; we're just giving you something you could say so that you're never at a complete loss.

1.5 Proving an Implication

Propositions of the form "If P, then Q" are called implications. This implication is often rephrased as "P IMPLIES Q."

Here are some examples:

• (Quadratic Formula) If ax² + bx + c = 0 and a ≠ 0, then

    x = (−b ± √(b² − 4ac)) / 2a.

• (Goldbach's Conjecture 1.1.8 rephrased) If n is an even integer greater than 2, then n is a sum of two primes.

• If 0 ≤ x ≤ 2, then −x³ + 4x + 1 > 0.
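The Quadratic Formula in the first bullet is easy to spot-check numerically. A small Python sketch (the function name quadratic_roots and the sample coefficients are ours; it handles only the real-root case):

```python
import math

def quadratic_roots(a: float, b: float, c: float):
    """Roots of a*x^2 + b*x + c = 0 via the Quadratic Formula."""
    assert a != 0, "the formula requires a != 0"
    disc = b * b - 4 * a * c
    assert disc >= 0, "this sketch covers real roots only"
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3), so the roots should be 3 and 2.
x1, x2 = quadratic_roots(1, -5, 6)
```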

There are a couple of standard methods for proving an implication.

1.5.1 Method #1

In order to prove that P IMPLIES Q:

1. Write, "Assume P."

2. Show that Q logically follows.


Example

Theorem 1.5.1. If 0 ≤ x ≤ 2, then −x³ + 4x + 1 > 0.

Before we write a proof of this theorem, we have to do some scratchwork to figure out why it is true.

The inequality certainly holds for x = 0; then the left side is equal to 1 and 1 > 0. As x grows, the 4x term (which is positive) initially seems to have greater magnitude than −x³ (which is negative). For example, when x = 1, we have 4x = 4, but −x³ = −1 only. In fact, it looks like −x³ doesn't begin to dominate until x > 2. So it seems the −x³ + 4x part should be nonnegative for all x between 0 and 2, which would imply that −x³ + 4x + 1 is positive.

So far, so good. But we still have to replace all those "seems like" phrases with solid, logical arguments. We can get a better handle on the critical −x³ + 4x part by factoring it, which is not too hard:

    −x³ + 4x = x(2 − x)(2 + x).

Aha! For x between 0 and 2, all of the terms on the right side are nonnegative. And a product of nonnegative terms is also nonnegative. Let's organize this blizzard of observations into a clean proof.

Proof. Assume 0 ≤ x ≤ 2. Then x, 2 − x, and 2 + x are all nonnegative. Therefore, the product of these terms is also nonnegative. Adding 1 to this product gives a positive number, so:

    x(2 − x)(2 + x) + 1 > 0.

Multiplying out on the left side proves that

    −x³ + 4x + 1 > 0

as claimed. ■
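The factoring step and the conclusion can be spot-checked numerically on sample points in [0, 2]; such a check is evidence that the algebra is right, not a substitute for the proof. A Python sketch (all names are ours):

```python
def lhs(x: float) -> float:
    return -x**3 + 4*x

def rhs(x: float) -> float:
    # The factored form used in the proof.
    return x * (2 - x) * (2 + x)

# Sample points 0.00, 0.01, ..., 2.00.
samples = [i / 100 for i in range(201)]

# The two sides of the factoring identity agree (up to float rounding),
# and the theorem's inequality holds at every sample point.
identity_ok = all(abs(lhs(x) - rhs(x)) < 1e-9 for x in samples)
positive_ok = all(-x**3 + 4*x + 1 > 0 for x in samples)
```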

There are a couple points here that apply to all proofs:

• You'll often need to do some scratchwork while you're trying to figure out the logical steps of a proof. Your scratchwork can be as disorganized as you like — full of dead-ends, strange diagrams, obscene words, whatever. But keep your scratchwork separate from your final proof, which should be clear and concise.

• Proofs typically begin with the word "Proof" and end with some sort of delimiter like □ or "QED." The only purpose for these conventions is to clarify where proofs begin and end.


1.5.2 Method #2 - Prove the Contrapositive

An implication ("P IMPLIES Q") is logically equivalent to its contrapositive

    NOT(Q) IMPLIES NOT(P).

Proving one is as good as proving the other, and proving the contrapositive is sometimes easier than proving the original statement. If so, then you can proceed as follows:

1. Write, "We prove the contrapositive:" and then state the contrapositive.

2. Proceed as in Method #1.

Example

Theorem 1.5.2. If r is irrational, then √r is also irrational.

A number is rational when it equals a quotient of integers — that is, if it equals m/n for some integers m and n. If it's not rational, then it's called irrational. So we must show that if r is not a ratio of integers, then √r is also not a ratio of integers. That's pretty convoluted! We can eliminate both nots and make the proof straightforward by using the contrapositive instead.

Proof. We prove the contrapositive: if √r is rational, then r is rational.

Assume that √r is rational. Then there exist integers m and n such that:

    √r = m/n.

Squaring both sides gives:

    r = m²/n².

Since m² and n² are integers, r is also rational. ■

1.6 Proving an "If and Only If"

Many mathematical theorems assert that two statements are logically equivalent; that is, one holds if and only if the other does. Here is an example that has been known for several thousand years:

Two triangles have the same side lengths if and only if two side lengths and the angle between those sides are the same.

The phrase "if and only if" comes up so often that it is often abbreviated "iff."


1.6.1 Method #1: Prove Each Statement Implies the Other

The statement "P IFF Q" is equivalent to the two statements "P IMPLIES Q" and "Q IMPLIES P." So you can prove an "iff" by proving two implications:

1. Write, "We prove P implies Q and vice-versa."

2. Write, "First, we show P implies Q." Do this by one of the methods in Section 1.5.

3. Write, "Now, we show Q implies P." Again, do this by one of the methods in Section 1.5.

1.6.2 Method #2: Construct a Chain of Iffs

In order to prove that P is true iff Q is true:

1. Write, "We construct a chain of if-and-only-if implications."

2. Prove P is equivalent to a second statement which is equivalent to a third statement and so forth until you reach Q.

This method sometimes requires more ingenuity than the first, but the result can be a short, elegant proof.

Example

The standard deviation of a sequence of values x₁, x₂, ..., xₙ is defined to be:

    √( ((x₁ − μ)² + (x₂ − μ)² + ⋯ + (xₙ − μ)²) / n ),    (1.3)

where μ is the mean of the values:

    μ ::= (x₁ + x₂ + ⋯ + xₙ) / n.

Theorem 1.6.1. The standard deviation of a sequence of values x₁, ..., xₙ is zero iff all the values are equal to the mean.

For example, the standard deviation of test scores is zero if and only if everyone scored exactly the class average.

Proof. We construct a chain of "iff" implications, starting with the statement that the standard deviation (1.3) is zero:

    √( ((x₁ − μ)² + (x₂ − μ)² + ⋯ + (xₙ − μ)²) / n ) = 0.    (1.4)

Now since zero is the only number whose square root is zero, equation (1.4) holds iff

    (x₁ − μ)² + (x₂ − μ)² + ⋯ + (xₙ − μ)² = 0.    (1.5)

Squares of real numbers are always nonnegative, so every term on the left hand side of equation (1.5) is nonnegative. This means that (1.5) holds iff

    Every term on the left hand side of (1.5) is zero.    (1.6)

But a term (xᵢ − μ)² is zero iff xᵢ = μ, so (1.6) is true iff

    Every xᵢ equals the mean. ■
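Theorem 1.6.1 is easy to test numerically straight from definition (1.3). A Python sketch (the function name stddev and the sample data are ours):

```python
import math

def stddev(xs) -> float:
    """Standard deviation from the definition: sqrt of the mean
    squared deviation from the mean."""
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

zero_case = stddev([7, 7, 7, 7])    # every value equals the mean
nonzero_case = stddev([6, 7, 8])    # some values differ from the mean
```

As the theorem predicts, the first call returns exactly zero and the second returns a positive number.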

1.7 Proof by Cases

Breaking a complicated proof into cases and proving each case separately is a common, useful proof strategy. Here's an amusing example.

Let's agree that given any two people, either they have met or not. If every pair of people in a group has met, we'll call the group a club. If every pair of people in a group has not met, we'll call it a group of strangers.

Theorem. Every collection of 6 people includes a club of 3 people or a group of 3 strangers.

Proof. The proof is by case analysis⁵. Let x denote one of the six people. There are two cases:

1. Among the 5 other people besides x, at least 3 have met x.

2. Among the 5 other people, at least 3 have not met x.

Now, we have to be sure that at least one of these two cases must hold,⁶ but that's easy: we've split the 5 people into two groups, those who have shaken hands with x and those who have not, so one of the groups must have at least half the people.

Case 1: Suppose that at least 3 people did meet x.

This case splits into two subcases:

⁵Describing your approach at the outset helps orient the reader.

⁶Part of a case analysis argument is showing that you've covered all the cases. Often this is obvious, because the two cases are of the form "P" and "not P." However, the situation above is not stated quite so simply.


Case 1.1: No pair among those people met each other. Then these

people are a group of at least 3 strangers. So the Theorem holds in this

subcase.

Case 1.2: Some pair among those people have met each other. Then

that pair, together with x, form a club of 3 people. So the Theorem

holds in this subcase.

This implies that the Theorem holds in Case 1.

Case 2: Suppose that at least 3 people did not meet x.

This case also splits into two subcases:

Case 2.1: Every pair among those people met each other. Then these

people are a club of at least 3 people. So the Theorem holds in this

subcase.

Case 2.2: Some pair among those people have not met each other.

Then that pair, together with x, form a group of at least 3 strangers. So

the Theorem holds in this subcase.

This implies that the Theorem also holds in Case 2, and therefore holds in all cases.
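Because the Theorem quantifies over every possible pattern of acquaintance among six people, it can also be confirmed by exhaustive machine check. The sketch below is illustrative code, not from the text: it enumerates all 2^15 "met / not met" assignments on the 15 pairs and verifies that each contains a club of 3 or a group of 3 strangers, and that the same claim fails for 5 people.

```python
from itertools import combinations, product

def theorem_holds(n):
    """Brute-force check: does every way of deciding which pairs among n
    people have met contain 3 mutual acquaintances or 3 mutual strangers?"""
    pairs = list(combinations(range(n), 2))
    index = {p: i for i, p in enumerate(pairs)}
    # Each trio of people corresponds to three pair-indices (a triangle).
    triangles = [tuple(index[p] for p in combinations(trio, 2))
                 for trio in combinations(range(n), 3)]
    for met in product([False, True], repeat=len(pairs)):
        if not any(met[a] == met[b] == met[c] for a, b, c in triangles):
            return False  # a group with no club of 3 and no 3 strangers
    return True

print(theorem_holds(6))  # True: the Theorem's claim
print(theorem_holds(5))  # False: six people are actually necessary
```

The failure at n = 5 shows the Theorem is tight: seating 5 people in a circle where each has met only their two neighbors produces no club of 3 and no 3 strangers.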

1.8 Proof by Contradiction

In a proof by contradiction, or indirect proof, you show that if a proposition were

false, then some false fact would be true. Since a false fact by definition can't be

true, the proposition must be true.

Proof by contradiction is always a viable approach. However, as the name sug-

gests, indirect proofs can be a little convoluted, so direct proofs are generally prefer-

able when they are available.

Method: In order to prove a proposition P by contradiction:

1 . Write, "We use proof by contradiction."

2. Write, "Suppose P is false."

3. Deduce something known to be false (a logical contradiction).

4. Write, "This is a contradiction. Therefore, P must be true."

Example

Remember that a number is rational if it is equal to a ratio of integers. For example,

3.5 = 7/2 and 0.1111… = 1/9 are rational numbers. On the other hand, we'll

prove by contradiction that √2 is irrational.

Theorem 1.8.1. √2 is irrational.

Proof. We use proof by contradiction. Suppose the claim is false; that is, √2 is

rational. Then we can write √2 as a fraction n/d in lowest terms.

Squaring both sides gives 2 = n²/d² and so 2d² = n². This implies that n² is even,

and hence that n is a multiple of 2, since the square of an odd number is odd.

Therefore n² must be a multiple of 4. But since 2d² = n², we know 2d² is a

multiple of 4 and so d² is a multiple of 2. This implies that d is a multiple of 2.

So the numerator and denominator have 2 as a common factor, which contradicts

the fact that n/d is in lowest terms. Thus, √2 must be irrational. ■
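The contradiction argument rules out every fraction at once; a small empirical check can make the claim concrete. The sketch below is illustrative and not from the text: using exact rational arithmetic, it confirms that no fraction with denominator up to 1000 squares to exactly 2.

```python
from fractions import Fraction

def sqrt2_has_denominator_up_to(max_d):
    """Exact search: is there a fraction n/d with 1 <= d <= max_d
    and (n/d)^2 == 2?"""
    for d in range(1, max_d + 1):
        # If (n/d)^2 == 2 then n = d*sqrt(2) exactly, so only integers
        # next to the floating-point estimate could possibly work.
        n = round(d * 2 ** 0.5)
        if any(Fraction(n + k, d) ** 2 == 2 for k in (-1, 0, 1)):
            return True
    return False

print(sqrt2_has_denominator_up_to(1000))  # False
```

Of course, no finite search is a proof: the argument above is what guarantees the answer stays False for every denominator.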

1.9 Good Proofs in Practice

One purpose of a proof is to establish the truth of an assertion with absolute cer-

tainty. Mechanically checkable proofs of enormous length or complexity can ac-

complish this. But humanly intelligible proofs are the only ones that help someone

understand the subject. Mathematicians generally agree that important mathemati-

cal results can't be fully understood until their proofs are understood. That is why

proofs are an important part of the curriculum.

To be understandable and helpful, more is required of a proof than just logical

correctness: a good proof must also be clear. Correctness and clarity usually go

together; a well-written proof is more likely to be a correct proof, since mistakes

are harder to hide.

In practice, the notion of proof is a moving target. Proofs in a professional

research journal are generally unintelligible to all but a few experts who know all

the terminology and prior results used in the proof. Conversely, proofs in the first

weeks of a beginning course like 6.042 would be regarded as tediously long-winded

by a professional mathematician. In fact, what we accept as a good proof later in

the term will be different from what we consider good proofs in the first couple

of weeks of 6.042. But even so, we can offer some general tips on writing good

proofs:

State your game plan. A good proof begins by explaining the general line of rea-

soning, for example, "We use case analysis" or "We argue by contradiction."

Keep a linear flow. Sometimes proofs are written like mathematical mosaics, with

juicy tidbits of independent reasoning sprinkled throughout. This is not good.

The steps of an argument should follow one another in an intelligible order.

A proof is an essay, not a calculation. Many students initially write proofs the way

they compute integrals. The result is a long sequence of expressions without

explanation, making it very hard to follow. This is bad. A good proof usually

looks like an essay with some equations thrown in. Use complete sentences.

Avoid excessive symbolism. Your reader is probably good at understanding words,

but much less skilled at reading arcane mathematical symbols. Use words

where you reasonably can.

Revise and simplify. Your readers will be grateful.

Introduce notation thoughtfully. Sometimes an argument can be greatly simpli-

fied by introducing a variable, devising a special notation, or defining a new

term. But do this sparingly, since you're requiring the reader to remember

all that new stuff. And remember to actually define the meanings of new

variables, terms, or notations; don't just start using them!

Structure long proofs. Long programs are usually broken into a hierarchy of smaller

procedures. Long proofs are much the same. When your proof needs facts

that are easily stated, but not readily proved, those facts are best pulled out

as preliminary lemmas. Also, if you are repeating essentially the same argu-

ment over and over, try to capture that argument in a general lemma, which

you can cite repeatedly instead.

Be wary of the "obvious." When familiar or truly obvious facts are needed in a

proof, it's OK to label them as such and to not prove them. But remember

that what's obvious to you may not be — and typically is not — obvious to

your reader.

Most especially, don't use phrases like "clearly" or "obviously" in an attempt

to bully the reader into accepting something you're having trouble proving.

Also, go on the alert whenever you see one of these phrases in someone else's

proof.

Finish. At some point in a proof, you'll have established all the essential facts

you need. Resist the temptation to quit and leave the reader to draw the

"obvious" conclusion. Instead, tie everything together yourself and explain

why the original claim follows.

Creating a good proof is a lot like creating a beautiful work of art. In fact,

mathematicians often refer to really good proofs as being "elegant" or "beautiful."

It takes practice and experience to write proofs that merit such praise, but to

get you started in the right direction, we will provide templates for the most useful

proof techniques.

Throughout the text there are also examples of bogus proofs — arguments that

look like proofs but aren't. Sometimes a bogus proof can reach false conclusions

because of missteps or mistaken assumptions. More subtle bogus proofs reach

correct conclusions, but do so in improper ways, for example by circular reasoning,

by leaping to unjustified conclusions, or by saying that the hard part of the proof is

"left to the reader." Learning to spot the flaws in improper proofs will hone your

skills at seeing how each proof step follows logically from prior steps. It will also

enable you to spot flaws in your own proofs.

The analogy between good proofs and good programs extends beyond structure.

The same rigorous thinking needed for proofs is essential in the design of criti-

cal computer systems. When algorithms and protocols only "mostly work" due

to reliance on hand-waving arguments, the results can range from problematic to

catastrophic. An early example was the Therac 25, a machine that provided radia-

tion therapy to cancer victims, but occasionally killed them with massive overdoses

due to a software race condition. A more recent (August 2004) example involved a

single faulty command to a computer system used by United and American Airlines

that grounded the entire fleet of both companies — and all their passengers!

It is a certainty that we'll all one day be at the mercy of critical computer systems

designed by you and your classmates. So we really hope that you'll develop the

ability to formulate rock-solid logical arguments that a system actually does what

you think it does !

1.10 References

[9], [1], [32]

Problems for Section 1.1

Class Problems

Problem 1.1.

Identify exactly where the bugs are in each of the following bogus proofs.7

(a) Bogus Claim: 1/8 > 1/
