Polarize vb. to cause people to adopt extreme opposing positions (from thefreedictionary.com)
A web search reveals many different opinions about the SAT, most of them either harshly negative or staunchly supportive.
I’ve been teaching and analyzing the SAT for 28 years, and I strive to form opinions that are as free from personal bias as I can make them. And my opinion is the boring, uncontroversial, middle-of-the-road one – the SAT is neither an awful test nor a very well-developed one.
Here I will give my best answers to several questions about the quality of the SAT. Given the format of this blog, I will attempt to be informative, but by no means comprehensive.
Does the SAT do what it is designed to do?
The primary stated goal of the SAT is to predict grades in the freshman year. The second goal is to predict grades throughout undergraduate college. Various studies have been published, most of which indicate that there is a significant correlation between SAT scores and college grades, but that the SAT doesn’t provide much information beyond what high school grades do.
One of the first studies was sponsored by Educational Testing Service, which writes the SAT. ETS was unhappy with the results, but the authors released their findings, which were published in several articles and at least one book. One of the researchers reported that he had expected the results to support the usefulness of the SAT, and was surprised when they did not.
More recent studies have yielded similar conclusions, although some scholars have questioned their methodology. For example, check out this article in Pencil Nerd’s blog.
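The pattern those studies describe — a sizable raw correlation between SAT scores and college grades that largely disappears once high school grades are accounted for — can be illustrated with a partial correlation. The sketch below uses entirely synthetic, invented numbers (not real SAT research data); the coefficients are my assumptions chosen to mimic that pattern.

```python
# Illustrative sketch with SYNTHETIC data: a test score can correlate with
# college grades yet add little information beyond high school grades.
# All coefficients and numbers here are invented for illustration only.
import math
import random

random.seed(42)
N = 500

# Hypothetical standardized variables: high school GPA drives both the
# test score and college GPA; the test adds no independent signal here.
hs_gpa   = [random.gauss(0, 1) for _ in range(N)]
sat      = [0.8 * g + 0.6 * random.gauss(0, 1) for g in hs_gpa]
coll_gpa = [0.9 * g + 0.5 * random.gauss(0, 1) for g in hs_gpa]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r_sat_coll = pearson(sat, coll_gpa)   # raw correlation: clearly positive
r_hs_coll  = pearson(hs_gpa, coll_gpa)
r_sat_hs   = pearson(sat, hs_gpa)

# Partial correlation of SAT with college GPA, controlling for HS GPA:
# near zero here, because the SAT's predictive power flows through HS GPA.
partial = (r_sat_coll - r_sat_hs * r_hs_coll) / math.sqrt(
    (1 - r_sat_hs ** 2) * (1 - r_hs_coll ** 2))
```

In this toy setup the raw SAT–college correlation is "significant," yet the partial correlation is close to zero — exactly the shape of result the studies above report.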
It stands to reason that a grade average of 90 at one high school may not equate to a 90 at another. On the other hand, one would hope that most admissions officers have detailed data on just what different high schools’ grades mean. Even so, the SAT should serve to “level the playing field” by providing colleges with uniform scores.
In conclusion, the SAT probably does some of what it’s supposed to do, but not necessarily as well as it might.
Is the SAT a fair test?
Numerous studies have demonstrated that women, minorities, and residents of some states perform below average on some or all of the sections of the SAT. Groups such as the National Organization for Women and FairTest have claimed that the SAT is biased against these groups.
The obvious question is: are they blaming the messenger? In today’s hyper-politically-correct climate, many people are eager to point the finger at anyone nearby whenever any minority is at a disadvantage. But if men outscore women on the math SAT, does that mean the test is bad, or that men tend to outperform women on math, period? Perhaps women shy away from math because of gender biases. Perhaps something in our genetics predisposes men to like math more, or to do better at it (shudder – did I just suggest that people might be different?). What about minorities, such as blacks or Hispanics? Don’t they tend to have lower incomes than whites, and thus live in poorer districts with poorer schools? Should we blame a test for that?
Some years ago, when the SAT had analogy questions, FairTest pointed out an analogy for which the correct answer was oarsman:regatta. If you don’t know, a regatta is a series of boat races, and wealthy people who live in coastal areas tend to take part in them. Certainly, such people (or children in their families) would have an advantage on this question.
FairTest cited the regatta question as evidence of SAT bias. But I’m very familiar with the test, and I have seen very few questions that are biased in this fashion. How many SATs, each with well over 100 questions, did FairTest have to pore through before they gleefully discovered this question (and a couple of others of its ilk)? ETS and The College Board have been dealing with accusations of bias for years; you can bet that they now take painstaking steps to avoid “rich kid” questions.
That last sentence is my conclusion. The SAT writers bend over backwards to make the SAT politically correct, so that, if anything, it would tend to benefit minorities (the Reading section abounds with passages about feminists and abolitionists). It’s time to point the finger somewhere else.
Are the questions well written?
Yes! An impressive amount of time is devoted to developing, reviewing, and revising the questions. Then new questions are seeded into experimental sections on the SAT, so that thousands of students “review” them. Very few flawed questions make it to the actual SAT.
By contrast, I have seen a lot of practice tests in commercial prep books, and I can instantly recognize many flaws in all of them.
Are the question types good ones?
Again, my review is mixed here. The general types are okay, but the test developers deliberately make the questions tricky in various ways (trap answers, misleading wording, and the like).
The fact is that a single exam only a few hours long is not the optimal tool for measuring the college-readiness of all students. There is a huge range of abilities in the population of students who take the SAT, from those who hope to achieve a minimum score in order to be eligible for sports scholarships to those who need top scores for top schools. That means the test writers need to include questions that are aimed at all of these students – i.e. very easy to very difficult questions. The result is that there are very few questions directly aimed at any particular student.
Useful hard questions are particularly difficult to create. It is not enough that few students answer them correctly; to be meaningful it is vital that only the best students nail them. The test writers employ several strategies to help achieve that result, but those strategies are imperfect, since it is possible to analyze them and coach students accordingly.
Has the SAT improved, and what can be done to improve it?
The SAT has improved. Some flawed question types have been dumped. There are also fewer tricky questions and answers than there used to be. The addition of the Writing section was also an improvement.
The SAT could be improved by adding more questions such as those found on the ACT. The ACT is more of a test of learned knowledge; the SAT tests reasoning more. Why not test more of both?
Clearly, eliminating tricks and traps would be a positive step. Hard questions should require higher level thinking, rather than the ability to navigate a sea of tricks and traps.
The SAT tests the good ol’ “3 Rs” (reading, ’riting, and ’rithmetic). It couldn’t hurt to add some science and social studies to the mix.
I feel that the best way to improve the SAT would be to administer two separate tests on different days. The first test would assess the student’s general level of ability in each section. The second test would be tailored toward that ability. For example, if a student scored 4 out of 5 on the Math section of the preliminary test, she would then take a “Level 4” Math test afterwards, with only questions of difficulty 3-5.
Naturally, there would be some resistance to this proposal, since it would take up more time overall, and the first test would need to be graded before the second was administered. However, I think it makes good sense, and would even help reduce pressure on many students.
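The routing step in this two-stage proposal is simple enough to sketch. The snippet below assumes a 1–5 preliminary score per section and a band of score ± 1, matching the “scored 4, take a Level 4 test with difficulty 3–5” example above; the scale and band width are my illustrative assumptions, not part of any official design.

```python
# Hypothetical sketch of the two-stage testing proposal: a preliminary
# section score (assumed 1-5) routes the student to a second test drawn
# only from a narrow band of question difficulties (assumed score +/- 1).

def second_stage_band(prelim_score: int,
                      min_level: int = 1,
                      max_level: int = 5) -> range:
    """Return the difficulty levels used on the tailored second test."""
    if not min_level <= prelim_score <= max_level:
        raise ValueError("preliminary score out of range")
    lo = max(min_level, prelim_score - 1)  # clamp at the easiest level
    hi = min(max_level, prelim_score + 1)  # clamp at the hardest level
    return range(lo, hi + 1)

# The example from the text: a 4/5 on the preliminary Math section leads
# to a "Level 4" test containing only questions of difficulty 3-5.
band_for_level_4 = list(second_stage_band(4))
```

One design consequence worth noting: because each second-stage test spans only three adjacent difficulty levels, nearly every question is “aimed at” the student taking it — addressing the earlier complaint that a one-size-fits-all exam wastes most of its questions on any given student.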
Finally, The College Board has been working hard to prevent cheating on the SAT.