Sunday, January 22, 2012

Thinking About How to Rate Wine

Much has been written and many debates take place about how to rate wine. It seems now that the 100 point scale is seen as "old guard," that it has not been effective at communicating a wine's quality. There are of course other rating systems, and their effectiveness is also debatable. I don't want to spend time here summarizing the various arguments, and I don't have a definitive opinion on the best rating system for wine. But I do have some thoughts that I want to share.

I think that some wines are better than others. That might sound silly to say, but there are folks who think that endeavors in the world of art and craft cannot and should not be measured in an absolute sense. They point out that one person's Mozart is another's Black Sabbath, and that both are equally excellent to the individual beholder. And it is true that we each have our own preferences regarding things like paintings, film, music, wine, roast chicken, and so on. It's romantic to say that "the perfect wine is the one you drink with your lover at sunset in a cafe overlooking the ocean." But there is a difference between personal preference and objective quality, and this is the whole point of professional criticism. The critic is supposed to be able to put their personal preferences and experiences aside and evaluate based on a set of established criteria, and then tell the rest of us something definitive about objective quality. What I'm saying here is that DRC is better than Yellowtail. It is higher quality wine. There may be people who prefer the smell and taste of Yellowtail, or who cannot distinguish between the two, and those people are welcome to their preferences and should go forth in peace and be happy. But one is a better wine than the other, regardless of personal opinion or the cafe-at-sunset context.

If you agree that there is objective quality to wine, then you probably agree that there must be some way for a critic to measure a wine's quality and communicate this to the rest of us. This is the hard part.

Some things are easy to rate - things that can be expressed finitely in purely mathematical terms. If I wanted to know which brand of AA battery is the best on the market, I could find out the average number of minutes each one lasts, determine the average price of each brand, and create a statistic that tells me how many minutes-per-dollar-spent I can expect from each battery.
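A toy version of that calculation might look like the sketch below. The brands and numbers are entirely invented - they're only there to show the shape of the arithmetic.

```python
# Hypothetical AA battery brands; every number here is invented purely
# to illustrate the minutes-per-dollar arithmetic.
batteries = {
    "Brand A": {"avg_minutes": 420, "avg_price": 1.25},
    "Brand B": {"avg_minutes": 510, "avg_price": 1.75},
    "Brand C": {"avg_minutes": 380, "avg_price": 0.95},
}

for brand, stats in batteries.items():
    minutes_per_dollar = stats["avg_minutes"] / stats["avg_price"]
    print(f"{brand}: {minutes_per_dollar:.0f} minutes per dollar spent")
```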

Rarely is it this simple, however, even when things can be expressed purely in mathematical terms. Think about rating cars or schools or baseball hitters. How do we know which hitter is the best? Batting average is a start - some are higher than others, and there is a highest each year. But is the person with the highest batting average the best hitter? Is someone who hits 10 singles in 20 trips to the plate a better hitter than someone who hits 8 doubles in 20 trips to the plate? What about someone who hits only 5 singles in 20 trips to the plate, but those singles come at crucial points in the game and score runs for the team? It is possible to determine which hitter has the highest batting average or the most total bases in a season, but determining which is the best hitter requires more than statistics.
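To put rough numbers on that, here is a quick sketch using the two hypothetical hitters above. The only assumption beyond the paragraph itself is the standard scoring convention that a single is worth one total base and a double two.

```python
# The two hypothetical hitters from the paragraph above, 20 trips each.
hitters = {
    "10 singles": {"hits": 10, "total_bases": 10, "at_bats": 20},
    "8 doubles":  {"hits": 8,  "total_bases": 16, "at_bats": 20},
}

for label, s in hitters.items():
    average = s["hits"] / s["at_bats"]          # batting average
    slugging = s["total_bases"] / s["at_bats"]  # slugging percentage
    print(f"{label}: average {average:.3f}, slugging {slugging:.3f}")

# The singles hitter wins on average (0.500 vs 0.400); the doubles
# hitter wins on slugging (0.800 vs 0.500). The two measurements
# disagree about who is "best."
```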

Painting, film, cooking, making music, wine...those things don't easily lend themselves to measurement in mathematical terms. But we have inherited a system of wine criticism that attempts to impose a mathematical framework on wine evaluation. The 100 point scale requires us to accept the idea that it is possible to measure something about wine, to assign a numeric value to one or more of its traits and arrive at a finite conclusion. That there is an objective qualitative difference between a 93 and a 92 point wine. Perhaps there is, but I'd like to see the rubric used to arrive at such a conclusion - how are those points generated?

To me, it makes sense not to try to impose finite mathematical rating systems when the subject matter does not itself generate outputs that can be measured using numbers. Why not relieve ourselves of the burden of ordering wines in such tiny groups (87 points, 88 points, 89 points, etc.) and instead work within larger groups, accepting that there are no exact measurements for wine quality? I would prefer a system in which the professional wine critic tells me which wines are of the highest quality, which are of high quality, which are above average, and so on, without attempting to distinguish between wines within each group.

Which are the highest quality wines of Meursault? For me, it would be enough to read a critic who tells me (and I'm making this up) that Coche-Dury, Comte Lafon, Pierre Morey, and Roulot make the highest quality wines of Meursault; François Jobard, Pierre Matrot, and Pierre-Yves Colin-Morey make high quality wines, and so on. I also would like to read about which wines by Comte Lafon, for example, are the best. And I'm frustrated with the fact that Perrières gets 94 points, Charmes and Genevrières get 91-93 points, Gouttes d'Or gets 90-92 points, and Clos de la Barre gets 89-91 points. From that I understand that the critic rates the wines generally in that order (and every year, they all do), but I still don't understand the value of one point. Perrières is 94 points and Charmes is 93 points, so Perrières is one point better. But what generated that extra point? I accept the idea that Perrières might objectively be a better wine, but not the idea that the critic who awards the additional point experienced something in drinking the wine that can be measured and expressed by a 94 as opposed to a 93.

My guess is that Perrières, Charmes, and Genevrières are all highest quality wines. Perhaps we don't need to take it any further than that - they are all highest quality. There may in fact be some objective truth - one of them might be better than the others in a certain vintage, but it seems to me that the sensations the drinker experiences in coming to this conclusion are not quantifiable.
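If it helps to see the shape of such a system, here is a minimal sketch using my made-up Meursault grouping from above. The choice of unordered sets rather than ranked lists is deliberate: the critic commits to the groups, not to any ordering inside them.

```python
# Rating by broad group rather than by point. Producers sit in
# unordered sets, so the system refuses to rank wines within a tier.
# The groupings are the made-up ones from this post, not a real critic's.
meursault_tiers = {
    "highest quality": {"Coche-Dury", "Comte Lafon", "Pierre Morey", "Roulot"},
    "high quality": {"François Jobard", "Pierre Matrot", "Pierre-Yves Colin-Morey"},
}

def tier_of(producer):
    """Return the broad tier a producer belongs to, or None if unplaced."""
    for tier, producers in meursault_tiers.items():
        if producer in producers:
            return tier
    return None

print(tier_of("Roulot"))         # highest quality
print(tier_of("Pierre Matrot"))  # high quality
```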

How, then, should the professional critic explain the criteria for "highest quality," "high quality," and so forth? Sorry, but I'm asking questions and don't have answers. Here, though, is one that makes a lot of sense to me (from Peter Liem's ChampagneGuide.net):

* One star denotes a wine of particular quality and distinctiveness of character, one that stands out among its peers in some significant way.

** Two stars means that this wine is outstanding in its class, showing a marked quality, expression and refinement of character.

*** Three stars indicates a champagne of the highest class, demonstrating a completeness and expression of character that places it among the very finest wines within its context. Needless to say, these wines are uncommon.

This sort of system puts wines in large groups and requires me to do some thinking on my own, and I like that. Really, he's just telling me the groups of wines that he thinks are best - which are very good, which are good, and which are not as good - the rest is up to me. There are over 1,000 wines reviewed on Peter's site, and 61 of them are awarded three stars. I'm sure Peter could tell me his favorites among those 61, but he would laugh at the idea that there is one "best" wine within this three star group, that it is possible to construct a strict ordering of those 61 wines. That said, he could explain what it is about each of those 61 wines that merits it being in the three star group, and why each of the 251 two star wines is not in the three star group.

-----

I understand that my analysis here is incomplete, and I'm not trying to start an argument. I guess I'm just saying that in trying to impose a strict mathematical ordering on wine evaluation, we are barking up the wrong tree. If you have something thoughtful to say about this, I'd love to hear it. But spare us from rants about points and the evil culture of selling wine, and also from salt of the earth declarations about how beautiful the simplest country wine can be with fish just-plucked-from-the-sea. I'm starting with the notion that some wines are objectively better than others, and that there must be some way of measuring this. Just not the 100 point scale we've been using. How can this objective quality best be measured? And how should this measurement be communicated?

6 comments:

Darby From Vinodiversity said...

A nice try, Wine Guy, at sorting out what I reckon is an intractable mess.

The problem with all numerical wine rating scales is that they give a single dimension to a multidimensional concept. The higher the number, the more the implied precision, and hence the more removed from reality.

There is also more than one "100 Point Scale" - see my article at http://www.vinodiversity.com/wine-ratings.html about how some points are tastier than others. The 20 point scale, used widely in wine judging in Australia, is also not what it seems. About half the wines score half points - now, what does half a point taste like? Five point scales with stars or icons representing bottles or glasses are similarly afflicted with half points.
As for the objective aspects of wines - alcohol content, sugar and tannin content, acidity - these are easy to measure and report in isolation, but their effect on the subjective appreciation of wine needs to be interpreted in light of the interaction between them and the environment in which the wine is consumed.
I think we must have some sort of ratings, and the five point scale is what I favour, along with notes about what food to pair with the wine rather than silly notes about gooseberries.

King Krak, Oenomancer said...

"I think that some wines are better than others." Next you'll be telling me that some girl's mothers are bigger than other girl's mothers.

Douglas said...

In the 1980s, Robert Parker was my hero.

At the time, I was living in Iowa and my local wine shop (if you can call it that) was a branch of the Iowa Liquor Control Authority. As a wine novice, I was faced with a jumble of labels I didn't understand and a helter-skelter collection of wine that some official in the liquor bureaucracy thought Iowans should (or would) drink. Points saved my life.

At that time, of course, Robert Parker rarely rated a wine 90 points. His Wine Advocate guided me to some very good wines which I could afford. I even followed his advice and bought a couple of cases of 1982 Bordeaux. For this advice I will always be grateful.

Some thirty years later, the point system looks increasingly like a straitjacket, not a pair of angel's wings. If you think about it, no art lover, music devotee, or poetry reader tries to find a 100-point piece of art right off the bat. Many music lovers are probably like me. They hear Tchaikovsky's 1812 Overture (best consumed when young) and are impressed. This leads to other pieces of music and greater appreciation of classical music in general. And, of course, the realization that the 1812 Overture might not, after all, be the best piece of classical music. Time, thinking, and experience make the person a better listener, a connoisseur.

In the wine world, the point system can easily become confining, leading a wine lover down a narrow path. It is a path that can involve some amount of self-deception. I know, because I've been down this road.

After using a version of the 100-point scale for many years, I realized that the greatest defect of the system is not that it is too specific. No, the biggest problem is that it forces the taster to award points for components -- color, depth of fruit, bouquet, etc. -- and that these components (when added up) often yield the wrong score! It is like looking at a Rembrandt and evaluating subject matter, color, light, etc. and then coming up with a score of 84 points. Most raters who use the 100-point system will come to the conclusion that they are simply poor at rating wine. Maybe I should pay more attention to Parker, they will think. This is a sure road to self-deception, not self-knowledge, a road that (I hope) I have finally left.
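To illustrate the kind of additive arithmetic I mean, here is a sketch. The component names and point maxima are placeholders, not any critic's actual rubric; many versions of the 100-point scale start every wine at a base of 50.

```python
# A placeholder component rubric in the additive, 100-point style:
# start from a base of 50 and add points for each component.
component_scores = {
    "color and appearance": 4,   # out of 5
    "aroma and bouquet":    12,  # out of 15
    "flavor and finish":    14,  # out of 20
    "overall quality":      7,   # out of 10
}

score = 50 + sum(component_scores.values())
print(score)  # 87 -- the parts always add up, whether or not the total
              # matches what the wine actually tasted like
```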

I often wonder what Parker thinks after tasting a wine, rating it 90, and then realizing -- "This wine is boring!" Does the wine make it to publication with tepid praise, or is it simply chucked?

To all the younger wine drinkers out there -- Find a great wine shop (there are many in NYC), ignore the points, think for yourself, and rate the wines too if you like. But rate them using words, like "excellent" or "very good" and don't try to compete with those who think they know a 91+ from a 92.

P.S. To the Iowa Alcoholic Beverages Division (their current name): It wasn't all bad. I discovered Ridge Vineyards and bought Delas Hermitage for $6 on sale!

I also have some thoughts on ratings, if you'd like to take a look:

No Score! Some Wines are Better Left Alone

tueuboeuf said...

True points. I might add that I always wonder about physical differences - no two persons taste the same, no matter how professional they are. Apart from that, even between bottles of (very) good wines there can be a lot of difference, which can be either a joy or a disappointment. Another often overlooked thing is the drinkability factor: some wines may taste good at the first sip, but in the end they can be impossible to finish. Add to that 'food compatibility' and to me most rating systems fail. I do think that something like what Peter is using is the best solution.

José Luiz said...

Kermit Lynch made the best observation about that in the unmissable Adventures on the Wine Route: A Wine Buyer's Tour of France. Points really don't make sense, after all.

Dan Goudge said...

Really like your suggestions for broad tiers of wines. First, I think it makes way more sense to someone just beginning to learn about wine, as it spells out what the differences are between groups. But more importantly, in my eyes it forces professional critics to actually articulate what the difference is between wines. Nothing is more informative to a consumer than the description of the wine, as it allows them to make an informed decision about what flavours they prefer in wines. Plus, it can never be a bad thing when we demand the production of better, more articulate writing!

However, I must say your baseball comparison is a bit off. Since the general acceptance of Bill Jamesian analysis, there really has been the development of statistics that quite accurately identify the best hitter. In particular, OPS, which combines on-base numbers with power statistics and average, gives you a pretty complete picture of how each hitter contributes to the offense of the team. It still doesn't weight hitting based on how critical a hit is, but generally statistical analysis doesn't believe in that kind of "clutch-ness". That would seem closer to factoring in the context in which a wine was consumed, reducing the overall objectivity of the analysis.
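For concreteness, here is how OPS combines the two, applied to the post's hypothetical hitters. I'm assuming no walks, hit-by-pitches, or sacrifice flies, so on-base percentage reduces to batting average.

```python
def ops(hits, total_bases, at_bats, walks=0, hbp=0, sac_flies=0):
    """On-base plus slugging: OBP + SLG."""
    obp = (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)
    slg = total_bases / at_bats
    return obp + slg

# The post's hypothetical hitters, with no walks or sacrifices assumed:
print(f"10 singles in 20 at-bats: OPS {ops(10, 10, 20):.3f}")  # 1.000
print(f"8 doubles in 20 at-bats:  OPS {ops(8, 16, 20):.3f}")   # 1.200
```

By this combined measure, the doubles hitter comes out clearly ahead.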