I’m reprinting below an essay—a brief potted history of the graded record review—I wrote for the late, lamented Watt in 2017, since the site seems to have disappeared and because I still find people who have internalized the idea that the convention is some kind of unassailable constant. I particularly stand by the point that art that gets taken “seriously” (gallery shows, literature) is not subjected to the “consumer report” model of review because nobody assumes the point of the thing is to make money, which assumption applies to recorded music even more now than it did eight years ago.
On a not-unrelated topic, I reviewed Liz Pelly’s spectacular Spotify takedown Mood Machine for the Washington Post.
And if you were one of the folks who did want the folder-of-mp3s version of my Audio Yearbook it’s here.
The convention of assigning elementary school letter grades, star ratings, or best-of-ten subdivisions to music reviews is so ubiquitous now that it’s easy to forget that it wasn’t always this way. Star ratings in the travel, hotel, and restaurant industries are markers of luxury. Book reviews typically eschew them—with the exception of things like the “starred” Kirkus review as an indication of particular excellence—as do theater and the visual arts. What is the history of this claim to objective and quantifiable judgment, why in the array of cultural production is it so specific to popular music and film, and what purpose does it serve? Who does it serve?
The first publications to take rock criticism seriously, including Rolling Stone in its Jon Landau and Greil Marcus heyday, ran prose reviews without topline judgments. The practice of grading music seems to have originated in 1969 with Robert Christgau’s Consumer Guides in the Village Voice, as part of his professorial shtick as the self-styled “Dean of American Rock Critics.” Dave Marsh’s 1979 Rolling Stone Record Guide, influenced by Christgau and film critic Leonard Maltin, added a five-star rating system to its reviews. The success of Marsh’s compendium encouraged Rolling Stone publisher Jann Wenner to bring star ratings to the magazine itself in 1981, against the protests of its editors who, according to Wenner, “viewed it as reductive.” From there, the five-star metric quickly became standard practice for glossy music magazines in the Rolling Stone model.
In the late nineties, the rising music website Pitchfork introduced a new 10-point scale. This was then adopted by similar websites (and by Spin, which also tried Twitter-only capsule assessments before settling on ungraded reviews). The early Pitchfork homepage included a clarifying rubric on the side of the page (“10.0: Essential…7.0-7.4: Not brilliant, but nice enough…2.0-2.9: Heard worse, but still pretty bad…0.0-0.9: Breaks new ground for terrible”), which disappeared in the early 2000s. The ersatz objectivity of the decimal ratings, combined with a brief period of extraordinary influence for the site, created a cottage industry in exegesis and even mathematical analysis. Meanwhile, the Onion spinoff The A.V. Club took letter grades to an obsessive level, attempting to evaluate nearly every piece of cultural product according to both individual reviewers’ assessments and crowd-sourced consensus.
The rating/grading bottom line exists in the service of the commercial incentive, as the name of Christgau’s initial Consumer Guide makes explicit: A graded rating scale is simply a less crude version of thumbs up/thumbs down—or, to cut to the chase, buy/don’t buy. This explains why the practice is so much more prevalent in recorded music and film, the arts that have in modern times existed in the context of large commercial culture industries. But is it even useful? Or, perhaps more accurately, who does it actually serve?
Noisey’s Dan Ozzi (whose site eschews reviews) recently pointed out that “overtly negative reviews are becoming an increasingly rare occurrence,” noting that on the review-aggregating site Metacritic “[b]etween the years of 2013 and 2015, not a single record [was rated as bad]. Every album released in that three-year period averaged out as having either a good or mixed response from critics.” Ozzi points to “the current advertising-dependent, click-friendly state of music journalism [in which] online publications have become too indebted to artists,” obscuring the line between publicity and criticism.
(One doesn’t find rated reviews in general-interest publications of self-consciously high critical culture, which have a wider portfolio and lack the pressures that come with being entrenched in the music-publicity ecosystem. Tobias Carroll, discussing a recent anthology of music writing, pointed out that the newer entries “were first published in high-profile publications that cover a broader cultural spectrum…The New York Review of Books…GQ and The New Yorker,” rather than the alt-weeklies and online outlets commonly associated with “the nation’s most essential” popular music writing.)
The Guardian’s Michael Hann agreed with Ozzi, saying “Reviews, now, serve the music industry more than they serve readers. Their main purpose, so far as I can tell, is to provide star ratings for press advertisements and to enable artist managers to feel content their client is getting coverage.” And Christgau, on his personal website, explains that after 1990 he preferred to highlight records he considered “A”s or “high B plusses”—his rubric, unlike Pitchfork’s, remains public—since his column’s premise as a consumer guide made it pointless to feature anything he couldn’t recommend. “Consumers,” he observes, “were just looking for records to buy.”
When critiquing the negative effect of the commercial imperative on discourse around popular music, one can’t proceed without checking in with the ur-crank on the topic. Theodor Adorno, in his 1938 jeremiad “On The Fetish-Character in Music and the Regression of Listening,” referred to “the primitive question about liking or disliking,” which he called “inappropriate to the situation.” This, he argued, was not just because I like it/I don’t like it is a “regressive” and unsophisticated approach to art (Drew Millard in Noisey argues a version of this when he talks about separating his “fan brain” from his “critic brain”), but because it doesn’t matter when music serves “as an advertisement for commodities which one must acquire to be able to hear music.” In a modern context, these commodities are all items and services—from LPs to streaming subscriptions to festival tickets—tying the music-focused outlets closer to the labels, corporations, and publicists that primarily support them. Reviews that imply a buy/don’t buy or listen/don’t listen axis reinforce and entrench this commodification, as the question turns from “is it successful art?” to “is it worth your money and time?” In Adorno’s Marxist terms, the music grading system exemplifies “exchange value…tak[ing] over the function of use-value.” And the commercial focus feeds the churning fetish for novelty: “Regressive listening is always ready to degenerate into rage…directed primarily against everything which could disavow the modernity of being with-it and up-to-date…[T]he regressive listener would like to ridicule and destroy what yesterday they were intoxicated with.” Sound like any blogs you know?
So what is the critic’s job, if not to pass judgment? While there is a long literature defining the role of the arts critic, I’ll settle for quoting Millard paraphrasing David Hume: “A critic’s viewpoint should not serve a commercial purpose as much as it should help their reader make sense of what they’re listening to.” The graded review demeans the artist by assuming a teacher-student relationship in which artists are blithely praised or hand-slapped. Lou Reed once said of Christgau on a live album, “Could you imagine working for a year and you get a B+ from some asshole in the Village Voice?” (Christgau, for his part, “thanked Reed for pronouncing his name right and gave the album a C+,” according to Stereogum’s Tom Breihan.) But graded music reviews also diminish the artistry of the critic. As pleasurable as it can be to write (or read) a scorched-earth takedown or an angel-choir rave, the vast majority of cultural product falls in the middle range, rendering the gradations meaningless but the role of the critic in parsing its strengths and weaknesses even more useful. (I don’t mean here the holiday shopping season mania for ranked lists, a subset of the consumer-guide problem that dresses group-think in a fanciful idea of objectivity.) Let the writers do what they do best: write, subtly and subjectively.
The graded record review, in the absence of a full curve for reference, serves a consumer function, not a critical function—even (I’ve argued before) as rock-lineage music moves away from the consumerist imperative. In the guise of consumer service, it infantilizes prospective listeners by addressing them in the language of students, instead of as cultural actors with the ability to absorb a nuanced assessment. A rated review presents readers with a schoolchild’s formula for quality: a grade. It makes the analysis of art a battle fought on regressive, reductive terms, rather than on the wide, contested, and fruitfully excavated middle ground. It manages to be condescending to musician and listener alike, while constraining the critic to simply passing judgment, rather than exercising their own artistry and interpretive skills. It is arbitrary, meaningless, and outdated false objectivity, promoting consumerism in an industry past its commercial prime. It’s time for it to go.
Great piece, Franz, stylish and smart like all your stuff. Back in the day, I was sometimes editorially compelled to assign a rating to a review, which was frustrating but at least collaborative. Worse was when Metacritic assigned ratings, always ineptly. I think for Christgau the grade system allowed him to subdue judgement (or at least delete adjectives) and was a boon to his concision project, though I think its satiric element was lost pretty early on while its gadfly element lingered. I want to say Down Beat helped pioneer or at least popularize the star system in its record-review section and Blindfold Test. I still read the mag and find that younger Blindfold Participants are almost always collegial and self-censoring whereas in the past players would sometimes punch hard. I feel more and more estranged from ranking and list-making, though I used to carry around the second RS guide like a vade mecum. I'm not sure if I still believe this, but I used to think record collectors and baseball statisticians overlapped to some degree; at least for me, the hunger for numerical evaluation and order was probably some kind of anxiety response.