
Full Title: Argument: A Guide to Critical Thinking
Author(s): Perry Weddle
Publishing / Edition: McGraw-Hill Book Company, 1978
Purchase / Read: Borrow the eBook from the Internet Archive.

Content Review

Perry Weddle is a wonderful writer: thorough, yet clear. Even though this is a short textbook, at 188 pages in length (minus the Index), it is informationally dense. By this we mean that there are a large number of topics covered in a short amount of space, not that it is overly complex. It is well organized and fairly comprehensive, giving a good foundation in the kind of reasoning that everyone encounters regularly within day-to-day life (e.g.: deciphering news, advertising, political commentary, basic scientific studies, etc.). While it does touch upon the general use and understanding of statistics, it does not cover "formal" or mathematical logic.

Due to the age of the book (published in 1978!), some of the examples make reference to situations that might seem dated, such as Nixon's presidency. However, the information is timelessly relevant and quite useful. There is also a pleasant dash of humour here and there.

Let's explore the contents of each chapter. Click a link to jump to it...

Chapter 1 - The Realm of Reason
Chapter 2 - Fallacy
Chapter 3 - Language
Chapter 4 - Authority
Chapter 5 - Generality
Chapter 6 - Comparison
Chapter 7 - Cause

Chapter 1 - The Realm of Reason

This chapter covers the meaning of the term "argument" and what constitutes "good reasoning". In this context, "arguments" are not fights. An "argument" is a set of "premises" and a "conclusion". A "conclusion" is a claim about something, whereas a "premise" is a fact which supports it. Arguments can be "simple" (leading to a single conclusion), or "complex" (when the conclusion becomes a premise for another conclusion, a "line of reasoning").

Every argument has two aspects, an "external claim" and an "internal claim":

• The "external claim" is that the premises are true (i.e.: factual). If the premises are false, then the argument is "unsound".
• The "internal claim" is that the conclusion follows from the premises. If it does, then the argument is "valid". If it doesn't, then the argument is "invalid" (what we sometimes call a "non-sequitur").

If the premises are true and the argument is valid, then the argument is "sound". It is important to note that unsound and invalid arguments can sometimes contain accurate conclusions, but the argument itself did not establish them!

"Reasoning" is the ability to understand sound arguments. There are four elements to it:

1. The Argument Proper - What are the arguments themselves?
2. The Reasoner(s) - Who is formulating and exchanging arguments?
3. The Issue - What is the context for the arguments? What are the topics being discussed?
4. The Point - What is the purpose of the arguments? What useful fact(s) are established?

With all of this in mind, the author highlights seven characteristics of "good reasoning"...

• Characteristic #1: The arguments are "from fact".

We must always do our best to establish premises which are true. "Hypotheses" are assumptions and "probabilities" are likelihoods. Although we can form arguments from them, they are not facts. What are facts? Facts are often thought of as pieces of information that are verifiable through direct observation, measurement, replicable experiment, etc. [*see Related Resources section below]

• Characteristic #2: The premises are independent of the conclusion.

In other words, premises cannot depend on the conclusion (e.g.: someone who is known for lying cannot vouch for their own honesty).

• Characteristic #3: The arguments are relevant.

Just as a conclusion must follow from the premises in order to be considered valid, an argument has to address the specific issue under discussion. Some arguments might simply be irrelevant to the topic at hand.

• Characteristic #4: Premises must adequately support the conclusion.

Consider and summarize all available evidence. An argument can be inconclusive if necessary evidence is missing.

• Characteristic #5: Premises are usually more accessible than conclusions.

To put it another way, arguments serve to make things more known, more clear, or more accepted. Therefore, conclusions are often drawn from information that is already well known, clear, and/or accepted. There are exceptions to this characteristic though (e.g.: involved technical explanations for everyday phenomena).

• Characteristic #6: Arguments "go somewhere".

They have a point, some purpose that we are trying to accomplish by formulating them.

• Characteristic #7: Arguments are "open".

They can be corrected whenever necessary and the environment allows for multiple viewpoints to be shared freely.

Chapter 2 - Fallacy

A "fallacy" is a faulty argument. The author divides them into two general categories of his own naming:

1. "Oversimplification" - distorting facts by making them simpler than they actually are; neglecting relevant facts
2. "Smokescreen" - diversion of any kind, whether intentional or unintentional

The author then gives several examples within each category...

Oversimplification

Improper Questions (also sometimes called a "loaded question") - This is when a question has an underlying assumption contained within it. It is remedied by dividing up the question and countering any false assumptions. For example, the question "Have you stopped beating your wife?" implies that the person being questioned is a wife-beater, even if they say "No" in response. The only appropriate responses would be things like "I don't beat my wife and never have." or "I am not married and I would not beat my wife if I was." (assuming that these are honest answers).

False Dilemma - This is to be presented with a rigid "either-or" choice when other possibilities exist. It is remedied by altering or expanding the options.

Straw Man - This is to misrepresent another's position (e.g.: by taking only part of it or exaggerating aspects of it). It is remedied by addressing the whole of it directly and in ways that are balanced.

Stereotyping - This is to label something and to make assumptions about it based upon the associations that the label might evoke. Instead, examine things on a case-by-case basis, without prejudice.

Half-Truth - This is to withhold evidence (especially when it is contrary to one's belief or aims). It is remedied by honesty and/or being held accountable by others.

Black/White Thinking - This is to assume something is composed completely of opposites. It is remedied by challenging the assumption and showing where it doesn't hold true.

Appeal to Ignorance - This is to assume that since [blank] cannot be disproven, then it must be true. We have to be able to verify the claims that we make.

Begging the Question - This is when a conclusion is used to support itself. It is a type of "circular reasoning". It is remedied by making sure that the premises and conclusion are distinct from one another, and that the latter actually follows from the former.

Smokescreen

Ignoring the Issue - This is to avoid an issue by changing the subject, talking about it only vaguely ("in principle"), or homing in on a subissue. It is remedied by describing how the argument does not address the issue at hand.

The following bypass reason by heightening emotion. Whether or not they are considered a fallacy depends on how and why they are used...

Appeal to Pity - This is an attempt to induce sympathy.
Appeal to Vanity - This is to use flattery.
Appeal to Gregariousness - This is essentially "peer pressure".
Appeal to Popular Prejudice - This is to justify something through cultural proclivities or biases.
Appeal to Subconscious Motivation - This is to motivate through subconscious desires or instincts (e.g.: "subliminal messages").

Chapter 3 - Language

The author covers not only the structure of arguments, but also how they are worded. He focuses on two main categories, Clarity and Objectivity:

Clarity

Needless Details - Remove unnecessary detail and repetition.
Seek Simplicity - Use plain English.
Expose Structure - The order of presentation should complement the argument(s).

Objectivity

Connotation - Notice when words are "pejorative" (evoking negative responses), "honorific" (evoking positive responses), or "euphemistic" (acting as a substitute for a pejorative term).

Parasitizing Connotation - Keep in mind how connotation can be used to influence perception!

He then goes on to describe several related issues...

"Equivocation" is to change the meaning of a word mid-argument, sometimes with the intent to mislead. It can take on several different forms:

Equivocating on Relative Terms - This is to use relative terms (such as "more" or "less") to change how two things compare. What is being compared subtly shifts from one point of reference to another.

Equivocation "by Inertia" - This is to use the same word to refer to a changing condition or circumstance.

Equivocation by Name-Change - This is to call the same things by different names.

How are words "defined" (i.e.: how do we assign them meanings)? There are several different ways:

Lexical Definition - A word is defined by how it is commonly used within the language (i.e.: its dictionary meaning)

Stipulative Definition - A word is defined by its context within a specific work

Operational Definition - A word is defined by describing a process it refers to

Persuasive Definition - Disguising opinions as lexical definitions

Equivocations are sometimes connected to the definitions of words. The author also points out several common misconceptions about definitions:

• "Definitions must give the common and distinctive property of what they define."
• "Defining by example is no good."
• "Definition must never be circular."
• "Definition must precede understanding."
• "Definition is a form of discovery."

Chapter 4 - Authority

The author states that we should "nurture skills for distinguishing reliable authority from poor authority". He provides a few questions that we can ask ourselves to help determine this:

• "Is it a matter for authority?"
• "Is the authority expert on exactly this subject?"
• "Is the authority well recognized?" (e.g.: What are their Academic Training / Degrees, Awards / Grants, Publications / Articles, Invitations, Citations, Professional References, etc.?)

He also points out other sources that people might consider as an "authority":

Print
Experience
Tests and Seals
Tradition and Privilege

Sometimes authority is not accurate or relevant. Ultimately, we cannot accept or deny arguments merely because others accept or deny them. We have to assess them carefully.

Other than in the case of providing testimony, who someone is should not affect how we assess the content of their argument. Their arguments might still be legitimate! To attack a person in order to distract from their argument is "Ad Hominem", and to deny an argument because the one offering it is inconsistent or hypocritical is called "Tu Quoque".

The author provides some questions for analyzing the validity of statistics specifically:

• "Does it reflect reality?"
• "Is it complete?"
• "Could they have found that out?"
• "What method did they use?"
• "Does it compare the comparable?"
• "When were the measurements made?"
• "Is it appropriately precise?"

Finally, to wrap up the chapter, there are a few tips on issues related to statistics.

Tip #1: "Significant Figures" show the accuracy of measurements by the number of digits that are used within them. The more digits, the finer the measurement. Use this system whenever possible. Be wary of measurements that give a false sense of precision by simply tacking on more digits.
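To make the idea concrete, here is a minimal Python sketch (ours, not the book's) that rounds a value to a chosen number of significant figures; the helper name round_sig is hypothetical.

```python
def round_sig(value: float, figures: int) -> float:
    """Round 'value' to 'figures' significant figures."""
    if value == 0:
        return 0.0
    # Scientific-notation formatting keeps exactly 'figures' digits.
    return float(f"{value:.{figures - 1}e}")

print(round_sig(3.14159, 3))   # 3.14
print(round_sig(0.004273, 2))  # 0.0043
print(round_sig(186282.0, 3))  # 186000.0
```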

Tip #2: There are different ways of handling lists of data. For example...

• "(Arithmetic) Mean" - This is found by adding up all of the values and diving the result by the total number of values. This is also more generally known as an "Average".
• "Median" - This is found by arranging all of the values from least to greatest and finding the middle value.
• "Mode" - This is found by finding the value that occurs most frequently within a given range of values.
...etc.

The one that will give us the most meaningful information depends on the situation. Whenever we are presented with a quantity, we should try to understand how it was derived. For example, an average can seem to be telling us one thing, but could actually be indicating something else entirely.
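As a quick (and entirely hypothetical) illustration of how these can diverge, here is a Python snippet using the standard library's statistics module on a made-up list of salaries with one large outlier:

```python
from statistics import mean, median, mode

# Hypothetical yearly salaries at a small company (one very high outlier).
salaries = [30_000, 32_000, 32_000, 35_000, 38_000, 40_000, 250_000]

print(mean(salaries))    # ~65285.7 -- pulled upward by the single outlier
print(median(salaries))  # 35000    -- the middle value
print(mode(salaries))    # 32000    -- the most frequent value
```

Quoting the "average" salary here paints a much rosier picture than the median or mode, which is exactly the kind of thing to watch for.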

Tip #3: Pictorial representations of statistical data can be misleading. Be sure to pay close attention to the values that images are supposed to represent and how the axes of a graph are labelled!

Chapter 5 - Generality

"Generalizations" are statements that apply to all or none of something. They "distribute" a property over an entire group. A generalization can be either true or false. They are distinct from statements that summarize results and statements which elaborate on the consequences of a definition.

There are several ways of classifying generalizations, although these categories are not exhaustive or standard:

Definitional - This is when the generalization being made about a thing is part of what defines it as that thing. For example: "No U.S. senator's official age is under 25." In other words, the minimum age for holding the office guarantees that no senator is under 25. A specific age is a necessary requirement in this case.

Factual - This is when the generalization refers to a representative sample of some type of thing. For example: "No U.S. senator has a reported net income of less than $50,000." Having a particular income is not a condition for being "a U.S. senator", yet this characteristic might help to describe them.

Of these two categories, factual generalizations are usually the more common.

Some other categories are...

Hard - This is when a generalization applies to all or none of a group, without exception. It is challenged by finding a counterexample.

Soft - This is when a generalization applies to the typical [blank]. It is not intended to cover every case (e.g.: a "truism").

People sometimes draw inaccurate conclusions by mistaking a soft generalization for a hard one, and vice versa.

Further, there are several things that are often mistaken for generalizations:

Moral Principles - These are reasons why one would carry out a particular behavior. They are context dependent to some extent. To mistake them as generalizations is to reduce them to formulas that may not always be applicable.

Physical Laws - These are bits of factual information derived from the scientific method. They only make sense within a particular theoretical framework. To mistake them as generalizations is to ignore that framework.

Collective Statements - These are statements that relate a part to a whole in some manner. For example, averages are sometimes considered collective statements. To mistake them as generalizations may produce a fallacy. To be more specific...

The Fallacy of Division is to assume that a property of the whole also applies to every one of its parts. Inversely, The Fallacy of Composition assumes that a property of one part also applies to the entire whole.

In order to make inferences like this, they have to be backed up by reliable information. For example, a "Statistical Generalization" is to use a percentage or proportion of something to describe an entire class of that thing.

Now that we know what generalizations are and are not, the author demonstrates several forms that a generalization might take:

Head Count - "All A's are B (because we've assessed them all)."
Simple Projection - "No A's have been found that are non-B. Therefore, all A's are B."
Statistical Projection - "A certain proportion of observed A's are B. Therefore, the same proportion of all A's are B."

For groups that are changing rapidly or situations that are altered by measuring them, a head count might be impossible. In these cases, a projection would make more sense.

"Sampling" is to how statistical projections are formed.

The group that we are studying is called a "population". The proportion that we take from it is our "sample". A sample is "representative" when it captures the relevant characteristics of the entire population, and it is "biased" when it does not.

It can sometimes be tricky to get a sample which is truly representative of a population. There are many different techniques that can help reduce bias:

Randomizing - This is when each member of the population has an equal chance of being selected. The more random a sample, the more it is likely to reflect the whole population.

The odds of selection change if something is chosen and not put back into the population before the next round of selection. If the same thing can be reselected, it is called "sampling with replacement".

"The Gambler's Fallacy" is when a person assumes that the odds of something being chosen are changing when they are actually staying the same. In other words, sampling with replacement means that the odds of selecting a particular thing do not change!

Stratifying - If a population is not uniform throughout, a sample can become skewed towards the aspects of it that are most common. To remedy this situation, the population is "stratified" (i.e.: split into smaller groupings called "strata") to account for these different aspects. Bias is minimized by sampling randomly from each stratum in proportion to its share of the entire population.

Systematic (or Interval) Sampling - This is to choose things at regular intervals from a random starting point within a population. In order to avoid bias, take a random sample within one of these intervals. This keeps characteristics from repeatedly "slipping through the cracks" (i.e.: a "cyclical bias").

Cluster Sampling - This is to take samples in small groups, or "clusters", of a population. In order to avoid bias, randomly select the clusters and randomly sample within each cluster.

Quota Sampling - This is to take a certain number of samples, a "quota", from each stratum of a population. In order to avoid bias, randomly choose where the quotas are taken from within the population as a whole. Don't take them all from the same cluster.

There are other techniques for sampling, but the above are the ones that are covered by the author.
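As a rough sketch of one of these ideas, here is a proportional stratified sampling routine written with only Python's standard library. It is our own illustration under simple assumptions, not the author's method, and the data are made up:

```python
import random

def stratified_sample(population, key, sample_size):
    """Draw a sample whose strata appear in proportion to the population."""
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for members in strata.values():
        share = len(members) / len(population)        # stratum's share of the whole
        k = round(sample_size * share)                # seats the stratum gets in the sample
        sample.extend(random.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 70% city dwellers, 30% rural dwellers.
people = ([{"id": i, "area": "city"} for i in range(700)] +
          [{"id": i, "area": "rural"} for i in range(700, 1000)])
picked = stratified_sample(people, key=lambda p: p["area"], sample_size=50)
print(sum(p["area"] == "city" for p in picked), "city /",
      sum(p["area"] == "rural" for p in picked), "rural")   # 35 city / 15 rural
```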

The "sample size" is how large or small a sample is relative to the entire population. A large sample size does not necessarily mean that a statistical projection is more accurate. One can draw meaningful conclusions from a small sample size out of a relatively large population. How? [To be honest, the author is so concise when it comes to describing this aspect of statistics that it is a little hard to follow. We will do our best to try to render it useable...]

There are two things that we have to pay attention to:

Margin of Error - This is a range of values that describes how much a result drawn from a random sample may differ from the true value in the population as a whole. The wider the margin of error, the less precise the sample.

Confidence Level - This is a percentage that describes how likely it is that repeating the sampling process would produce a result within the margin of error. The higher the confidence level, the more consistent the statistic overall.

In combination, these give us a measure called "sampling error". A sampling error can help us to decide if a projection is reliable or not! It is referred to as an "error" because, in most situations, it is impossible for a statistical projection to be a perfect reflection of reality. However, we can determine how far off it probably is.

For example, you might come across a poll with a sampling error that is something like this:

95% +/- 4

This would mean that, 95% of the time, the reported values will fall within 4 percentage points of their true values. In this case, the "95%" is the confidence level and the "+/- 4" is the margin of error. For a given sample, the higher the confidence level, the wider the margin of error, so higher confidence levels lead to more conservative (i.e.: wider) estimates. A larger sample size can also make the margin of error smaller, but only up to a certain point.

Take projections that are based on comparisons of figures that are smaller than the margin of error with a grain of salt. To continue the above example, if a political candidate was ahead of another in the polls by two percentage points given the above sampling error, then it would be no better than a guess. In short, we cannot gauge the accuracy of statistical projections if the sampling error is unknown.
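The book does not give a formula, but for a simple poll the conventional margin of error for a proportion is z * sqrt(p * (1 - p) / n), where z is about 1.96 at a 95% confidence level. Here is a back-of-the-envelope Python sketch (ours, with made-up numbers):

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p with sample size n.
    z = 1.96 corresponds to a 95% confidence level."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% of 600 respondents favour candidate A.
moe = margin_of_error(0.52, 600)
print(f"+/- {moe * 100:.1f} percentage points")   # roughly +/- 4.0
```

With a margin of error of about four points, the two-point lead described above really would be indistinguishable from a tie.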

So long as they are not used to manipulate, polls and surveys are useful. We should ask ourselves, "Is the organization carrying it out reputable? Are they open about their methodology?" The author lists six things to look out for:

1. The Sponsor's & Surveyor's Name - Who is giving the survey and why?
2. The Sample Size & Sampling Error
3. The Date of Contact - When did the survey take place?
4. The Population Sampled & Method of Contact - Who is sampled and through what means (e.g.: in-person interview, phone, mail, etc.)?
5. The Degree of Non-Response - How many didn't reply?
6. The Exact Questions Asked - What was the content of the survey itself?

Chapter 6 - Comparison

Whenever we compare two things, we look for similarities and differences between them. "Analogy" is to point out a similarity, while an "analogical argument" is to draw a conclusion from that similarity. Though perhaps less common, the inverse is also possible: a "disanalogy" is to point out a difference, and a conclusion can be drawn from that difference as well.

The author addresses eight things that are helpful to keep in mind. The first three highlight the pieces of information within the analogical argument that we should be clear on:

1. "What is at issue?"
2. "What is being compared?"
3. "What are the real similarities and differences?"

...the next five are techniques for handling the analogical argument itself once those are clear:

4. Challenge half of the comparison
5. Challenge or support the analogy itself
6. Change the analogy
7. Extend the analogy
8. Challenge the generalization implied by the analogy

The author then covers a few different types of analogies based on the context in which they are made:

Historical Comparisons - This is to make a comparison between two different historical events.
Moral Comparisons - This is to compare the moral judgements made about two different circumstances.
Implicit Comparisons - This is whenever a comparison between two things is not explicitly stated.

Chapter 7 - Cause

This final chapter covers the concept of "causality" [and issues related to the "scientific method"].

We normally think of causality in terms of "A causes B". In other words, the latter arises as a result of the former. The result can then go on to be the cause of another result in a "causal chain".

However, the author focuses upon two underlying assumptions behind these kinds of statements:

1. The situation that the statement refers to is usually something that we are trying to control in some way.

2. That which is labelled as a "cause" can change depending on one's relation to that situation.

Therefore, causes can be either "immediate" (i.e.: primary, connected directly to that situation) or "proximate" (i.e.: secondary, indirectly contributing). Exactly how much something contributes (i.e.: its "statistical significance") can also change. Some complex situations are made up of multiple factors of equal or varying importance (i.e.: "contributory causes").

A "causal argument" has two aspects:

Congruence - This is a general connection made between two things. For example, "Whenever A happens, B also occurs." This is often stated as a "causal hypothesis" (e.g.: "If A, then B."). It is called a "hypothesis" because we haven't yet determined if it holds true, which leads us to...

Exclusion - This is when any other possible factors are eliminated. "If and only if A, then B."

"Correlation" is when congruence repeats (i.e.: a connection is made over and over again). We can describe correlation by how these factors change in relation to one another:

Direct - "A and B co-exist, rising and falling together."

Inverse - "As A rises or falls, B does the opposite."

...or by their rate of occurrence:

Perfect - "Every time A, also B."

Statistical - "Sometimes B when A."

To use the familiar phrase, "correlation does not imply causation". In other words, just because two things are correlated does not mean that one is necessarily the cause of the other. Exclusion is important! In order to find out the extent to which congruence holds, we need some way of testing it, an experiment. [This is essentially what the "scientific method" does!]
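As a concrete (and entirely hypothetical) illustration of direct versus inverse correlation, here is a short Python sketch using the standard library's correlation function (available in Python 3.10 and later); the data are invented:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

hours_studied = [1, 2, 3, 4, 5, 6]
exam_scores   = [52, 58, 61, 70, 74, 80]   # rises with hours studied -> direct
hours_of_tv   = [9, 8, 6, 5, 3, 2]         # falls as hours studied rise -> inverse

print(correlation(hours_studied, exam_scores))  # close to +1
print(correlation(hours_studied, hours_of_tv))  # close to -1
# A strong correlation alone does not show which (if either) causes the other.
```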

A "test subject" is what we do the experiment on. The aspects of it that we can change are called "variables". A "control" is an unchanging standard that we compare the test subject against in order to narrow down the factors involved and to isolate a cause. A "natural control" is when this standard occurs within Nature. Whenever we make a change in our test subject, we see how it differs from our control. We are looking for results that can be "replicated" (i.e.: consistently repeated). An experiment that accounts for changes within two or more variables at the same time is called a "factorial experiment".

How do we know that a statistical correlation is significant (i.e.: when two things are repeatedly connected beyond chance)? "Retrospective studies" compare data that has already been collected in order to find potential causes, while "prospective studies" look at incoming data with the intent of more carefully isolating those causes. We can also design experiments to do the same.

For example, sometimes people create a particular result by anticipating it (e.g.: "placebo effect"). To minimize this, the people involved in the experiment, the test subjects in this case, can be "blind" (i.e.: unaware of the expected result). If those administering the experiment to the test subjects are also unaware of the expected results, then it is called a "double blind". Experimenters must also watch out for "order effects" (i.e.: when the sequence in which something is done changes the results).

Related Resources

• "Fact From Opinion"
Weddle wrote several articles for the Informal Logic Newsletter. I find this one quite illuminating.

• QualiaSoup - Critical Thinking
This is a fantastic little video on the topic of Critical Thinking.