Wednesday, March 11, 2015

On the suit



The “Blurred Lines”/“Got to Give It Up” judgment is the latest in silly copyright cases (here’s a good musicological argument against it), but there is one detail that has been mostly ignored:


The $7.3 million award beats the previous record for the highest judgment in a copyright infringement suit.

A relevant counterpoint from my book (Chapter 7):


. . . where art intersects with commerce, progress traps occur too. In a capitalist system, artists, labels, technology companies, and other music professionals naturally seek to grow their profits. (“If you sell fifty million records one year and seventy the next year,” notes Jeff Gold, describing the expansion of Warner Music in the 1990s, then soon someone is going to ask “how are you going to sell eighty?”) As elites become more efficient at producing, marketing, and selling music, that increased efficiency stresses the system. Music becomes, as William Patry puts it, “a zero sum game, where the more people vie for the top, the fewer make it, but the rewards are disproportionately greater.” In the process, the thoughtful listener is left with a nagging feeling that just as we cannot understand music outside of recording, or our own thoughts about it outside of writing, it is now difficult to even conceive of it outside of money—outside of our transactional roles as producers, consumers, or both.  

My friends complain about modern pop all the time. I wish I could evaluate it in aesthetic terms. But I feel like I can’t even hear it. It sounds like money to me. I hear the money that went into the production. I hear the money that went into the promotion. I hear the money that is being exchanged every time it is performed. I hear the money that is expected as a kind of birthright. Lord help me, I can’t get past the money.

Call me crazy, but I think that’s a problem.

Friday, February 27, 2015

On the dress



(image by Alex Tarr)

The latest in silly Internet memes: 
. . . for the past half-day, people across social media have been arguing about whether a picture depicts a perfectly nice bodycon dress as blue with black lace fringe or white with gold lace fringe. And neither side will budge. 

And a relevant counterpoint from my book (Chapter 4):

The point is that while we readily admit there is no one way to understand a work of art, no one manner of perceiving—“all art is subjective” is one of the great clichés of aesthetic dialogue—we ignore the consequences of that statement: that there is, as far as perception goes, never a singular work to agree about in the first place. Instead, we cling to the reified idea of music [or any other artwork], using it, in the worst-case scenario, to police the responses of others, or else, more kindly, to prioritize the overlap in our perceptions—as with, for example, the concert protocol that calls for simultaneous group applause, and the impression of consensus it produces by eliding complexly differentiated responses into a symbolic burst of ostensible mass agreement. 

Alas, perception is fundamentally idiosyncratic—whether we’re talking about listening, looking, eating, touching, or smelling. The fact that “the dress” (as it has come to be known) has pushed us to argue about what is “really” there proves how uncomfortable we are with that basic truism.

Friday, January 30, 2015

Decomposition around the web


(image c/o BookPeople)


Since the book's release date on November 18:

* I wrote a piece for Huffington Post called “The Discomforts of Digital Music”—a plea for listeners to break out of the star system, which I think ultimately hurts us all. (A sample: “To me, the real gift of digital technology is not the feeding frenzy of infinite free music; it’s the possibility of fostering artistic communities that are viable precisely because they are intimate and idiosyncratic, and because they form spontaneously, through the unprecedented channels of communication to which we now have access. If such communities are allowed to derive from shared passion, shared passion itself will nurture economic justice.”)

* I did two book readings, one at Powell's in Portland (December 1), and one at Town Hall in Seattle (December 2). Both were great fun (though the Powell's event was better attended and sparked a longer discussion). 






* In advance of the Powell's event, writer Robert Ham did a nice piece on me for the Portland Mercury. It was great to meet and chat with him, and I appreciated his smart questions. (I should clarify for the record, though, that I haven't been 42 since 2011.)


* For the Seattle Weekly, Gavin Borchert did this preview of my talk at Seattle's Town Hall:





That's very flattering! (A small correction: the “demythologizing without demeaning” line comes from the book's introduction, not the afterword.)

* At the end of December, Decomposition made it onto Los Angeles Magazine's “Best ‘Little’ Music Books of 2014”—a welcome surprise, to say the least. Matthew Duertsen called it “refreshingly unstodgy”—refreshingly going against the grain of some of the more glib criticism the book has received.

* PhD candidate Madison Heyling's in-depth analysis of Decomposition for Music and Literature is probably one of the more detailed and thoughtful write-ups the book has yet received, and for that I'm very grateful. (I know how hard it is to be a grad student and do other intellectual work, so I truly appreciate the time this must have taken.)

* Ethan Iverson gave the book some love, both on the DoTheMath site (“covers an exceptionally wide turf; indeed, I can't think of reading a previous book that glosses jazz, classical, and pop in equal measure and with equal conviction”) and on Twitter:



As I remarked in my response to Ethan: that may be the first time anyone has called the book “fun”!


* * * * *

Given the book’s polemical nature, I have been pondering how to respond to the criticism that has emerged alongside the praise (sometimes from the same critic). I’ve been a little hesitant, honestly. Aside from the trouble it takes to formulate a response—I’d much rather spend that energy on new projects—doing so also runs the risk of seeming unseemly. After all, it’s a reader’s prerogative to read the way she reads. And a thoughtful writer always has to be comfortable with the possibility of miscommunication.

Still, there are things in the criticism that have been sticking in my craw, and that I feel I should address at least briefly. One of them is the idea that I use the pronoun “we” recklessly. Borchert, for instance, dings me for the line “We are convinced that the quality of a musical work cannot derive, even if only partially, from its context.” Heyling makes a similar point:

One of Decomposition’s troublesome aspects is that Durkin bases many of his arguments on a set of assumptions that he positions as universals about listening. For instance, he writes: “We have become accustomed to focusing on the end result of musical production as if that’s all there is to it.” Similarly, he pronounces: “There has been a great deal of anxiety about how we value music—but also what music means . . . and even what it is.” 

To a point, I understand these complaints. I certainly find it irritating when other writers overuse the “we” convention—one of my favorite recent non-fiction reads, Kathryn Schulz’s Being Wrong, is, in my opinion, marred by this same tic. And I can’t deny that the lines that Borchert and Heyling quote are in my book, and that they sound a little pompous taken out of context. 

Yet context is important. Consider: at the beginning of the book I write that the influence of authorship and authenticity—what Heyling assumes I have posited as a “universal”—“is by no means universal.” Later, in introducing the section on authorship, I say that “I don’t want to exaggerate the case here by suggesting that the rhetoric of genius”—the mode of speaking about authorship that I am critiquing—“is the only available mode for speaking about music in our culture.” And in laying out the history of authenticity (a worldview that argues against the importance of context, and thus is directly pertinent to the sentence Borchert cites), I argue that inauthenticity “is at least as important” as a cultural phenomenon, “whether we live with it as a hard, inescapable truth or intentionally turn to it as a source of postmodern nirvana.” 

I could cite other examples; this sort of qualification goes on throughout the book. I had assumed, perhaps too easily, that readers would take this framework into account whenever coming across my use of first person plural pronouns. 

But I will also admit that there are two other things going on here that complicate the discussion. The first is that I’m trying to make a distinction between musical discourses and musical experiences, even as I recognize that they are mutually influencing. (“Ultimately,” I write, “rather than defining music, I am interested in how we discuss whatever it is we think music is, as well as what that discussion obscures.”) And in terms of musical discourses, the challenge is that in many cases “we” actually does apply—in the same way that it applies when, say, a nation goes to war against the wishes of at least some of its citizens. In that sense, I certainly can say that “we are convinced that the quality of a musical work cannot derive, even if only partially, from its context.” Even if I don’t literally count myself as a part of that “we” any more than Borchert or Heyling do, I am still part of the culture that holds this as a discursive value. It is really only in terms of the category of musical experiences that the “we” doesn’t apply, because that is where perceptual individuation happens. 

Missing this distinction, Heyling makes an odd move, recognizing that I am “rather self-aware about [my] background and personal preferences,” but then asserting that I do “not seem to have fully allowed that those biases have colored the book’s premises.” Yet when I talk about music experientially, I certainly do correct for my biases. After all, I spend a good deal of the book empathetically exploring music and musical practices that I, as a listener, don’t particularly enjoy or understand—Milli Vanilli, for instance, or auto-tuning, or drone metal. And when I talk about music as a discursive practice, my own biases are irrelevant, because I am addressing what people say and write about music, not what they actually experience (which is inaccessible to me, and which may indeed be inexpressible).

The second reason this is difficult to discuss is that there’s a case to be made that perhaps the mania for authorship and authenticity is more widespread than any of us care to admit. Like the white middle-class Liberal who doesn’t want to believe she has any role in perpetuating racism, the academically informed music fan doesn’t want to believe she has any role in perpetuating essentialized ideas about art. And yet my argument is that the discursive practice runs deep, and is hard to override. (If I knew Borchert and Heyling better, I would be willing to bet I could find examples of its expression in their work, without too much trouble. Indeed, I often find myself unintentionally falling into this way of speaking and writing too.) In part that’s because the practice is extremely convenient, especially as culture gets more dense and complex. “It is much more elegant,” as I put it in the book, “to say that ‘Cotton Tail’ is Duke Ellington’s composition than it is to say ‘Cotton Tail’ was a messy palimpsest, composed by Ellington, Ben Webster, George Gershwin, some unknown musician who first used the rhythm changes, et al.” But in part it’s because it is habitual, and human beings are creatures of habit.

One final point about the Heyling piece and then I’ll be done critiquing the critics. She argues that my “bibliography makes it clear that [I have] not engaged with most of the influential musicological literature from the last thirty years, in spite of the book’s copious references to other scholarship from other fields.” She’s absolutely right that I don’t draw on Philip Bohlman, Katherine Bergeron, or Lawrence Kramer (the musicologists she cites).* I’m sure the book is weaker for it. For the record, however, here are some of the musicologists (or musicology-informed thinkers) I do draw on, most of whom have indeed published important work within the last thirty years: Richard Taruskin, Lydia Goehr, Carolyn Abbate, Theodore Gracyk, Susan McClary, Joseph Kerman, Jonathan Sterne, Christoph Wolff, Simon Frith, Joseph Horowitz, R. Murray Schafer, Christopher Small, Alex Ross . . .

Still and all: I am very grateful that readers and critics are engaging with the book. I look forward to further commentary.


* Heyling is wrong that I don’t cite Benjamin, however. I cite him twice.

Monday, November 17, 2014

Why don't you write me?

Hello. It has been a while since I have posted anything, and so I have a number of updates.

Tomorrow, my first book, Decomposition, will be officially published. I guess that’s a milestone for me. For those of you who don’t know, Decomposition began as a dissertation (I defended it way back in 2004)—and in 2009, in a fit of boredom, I began publishing pieces of it on this very blog. In that form it was soon discovered by the woman who would become my agent—the amazing Barbara Clark. One thing led to another, and here we are.

For what it’s worth, a lot of work went into revising the book from its dissertation version. Three years of work, in fact. I mention this only because one of the critiques that seems to be emerging (in the Amazon Vine reviews, at least) is that the book is difficult and academic. Which is not to say that Decomposition is not difficult and academic (fair warning, though your mileage may vary)—only that, if it is, it is probably a lot less difficult and academic than it used to be.

Still, I’m pleased that the Amazon Vine reviews are, on the whole, favorable—even some of those who struggled with the content gave it high marks, and at the moment there are two five-star reviews. All of which is certainly gratifying.

I simultaneously have two other book projects going on, each of which is occupying a good deal of my attention (one reason I haven’t been blogging much). First is another non-fiction book, which at the moment is just an idea, really . . . a set of notes and sketches. I have wanted for a while to do a Decomposition-type book (that is, a turn-conventional-wisdom-on-its-head-type book) about each of the three subjects my mother insists shouldn’t be discussed in public: religion, politics, and sex. So I have embarked on the first of these—an agnostic argument about religion, belief, epistemology, and ethics, informed by my years as a church organist here in Portland.

My other book project is much closer to completion. Actually, I thought I had completed it last Spring—it’s a novel I wrote for my daughter, about a tree that grows across the Cosmos and connects two worlds, and the cat who travels between them. Over the summer, I decided (based on some expert advice from Barbara) that it needed more work. I guess I hadn’t fully appreciated that it can take more than twelve months to write a first novel . . .

So I have been deep in revisions on this cat book through most of the Fall. It’s utterly different from any creative work I have yet undertaken—less beholden to the “real world” facts that guide non-fiction writing, but somehow more dependent on a clear internal logic than any of the instrumental music I write. Not that my instrumental music has no internal logic—though I’m sure not everyone thinks so!—just that that logic doesn’t need to be communicated to the audience as explicitly. With music, I can just sort of “feel” where a piece works, without necessarily having to articulate its structure to myself or anyone else. In a novel, you have to use these things called words . . .

So I’ll get back to it. I hope to post here more regularly in the weeks ahead—though I should say that I haven’t had much interest in the controversies that have dogged the jazz world over the last year (everything from the Sonny Rollins satire to that Mostly Other People Do the Killing album). There was a time when I would have been all over that stuff. But now the idea of engaging it feels so unproductive—even unhealthy. So I’ll probably post more broadly about music, or fiction, or writing in general, as I can.


Thanks for reading!

Monday, June 30, 2014

These are not the droids we were looking for



Astra Taylor, The People's Platform: Taking Back Power and Culture in the Digital Age

(Get it here.)

So far, digital culture has had a convoluted history. For most of the first decade of the twenty-first century, the lines in the sand seemed clear enough: on one side were the legacy content industries, exemplified by institutions like the RIAA and the MPAA, those infamous acronyms that fought tooth and nail to protect the idea that art and culture were private property. On the other was the freewheeling web, which promoted more democratic ideas about what it meant to create, to be an author, to be a cultural participant. (As I argue in Decomposition, my forthcoming book, these were not new ideas, but rather new articulations of old ones.) The point is that whichever side you chose, the choice itself seemed uncomplicated: either you were for the new way, or you were for the old one.

In the wake of Web 2.0 (when can we start calling it Web 3.0?), staking out a position in this battle is more problematic, subject to all kinds of uncomfortable intersections and realignments. In music, once upon a time, being for independent artists and the new technologies that were supposed to help them meant that you were against the legacy industry. To some extent the opposite was true as well. Today, arguing for the new technologies and their aesthetic affordances can easily be mistaken as a strike against art and culture, one that starves those things in the name of a vague idealism. Conversely, speaking out about the material realities that artists face can seem the worst kind of conservatism, a way of giving in to “the man,” or wagging one’s finger in a fit of haughty moralism. 

With this new complexity, a narrative of regret has emerged: a sense that we have been duped, or misled, or at least that the early promises of digital culture are not so easily realized as we once thought. (Lars Ulrich, as some music fans have grown fond of saying, was right after all.) Some of the most high-profile and articulate expressions of this new narrative have come from writers Robert Levine, Chris Ruen, and especially Nicholas Carr and Jaron Lanier. But in my opinion, the best one so far is Astra Taylor’s The People’s Platform: Taking Back Power and Culture in the Digital Age.

Taylor is a documentary filmmaker with a philosophy background; her film Examined Life, a series of discussions with contemporary philosophers, is recommended. The People’s Platform, her first book, contains precious little philosophy, but in my view it surpasses the other texts in this genre by considering the problem we face from within a progressive framework. Taylor’s is essentially a labor-based analysis, one that strives to get us to see artists’ work in terms of the larger economic structure, and especially in terms of the twenty-first century’s unprecedented upward concentration of wealth. Rightly decrying that trend, she urges us toward a sustainable culture, in which wealth is broadly invested (rather than hoarded) and work is nurtured (rather than depleted).  

All of which strongly suggests that the digital dilemma is a predictable outcome of free market capitalism, since the techno-utopian mega-corporations (Google, Facebook, Amazon, Apple, and so on) have seized upon that economic philosophy to achieve and justify their now-immense power. That analysis makes Lanier's proposal that users be compensated with nanopayments feel like an extension of the problem—as if the world is not commodified enough already—but Taylor offers a number of stronger, more left-leaning solutions, even as she admits all of them will require a degree of consciousness-raising: from the ethos of sustainability itself; to a revivified national arts policy that builds on the history of the NEA or Public Broadcasting or even the WPA; to the idea of regulating service providers and popular Internet platforms as public utilities; to increased subsidies and taxes to be paid by advertisers and technology companies; to inchoate micro-economies built around practices like crowdfunding or sites like Bandcamp. (Taylor doesn’t mention this last example explicitly, but it certainly qualifies.) These may be the best solutions we have at the moment, and though I am not convinced that we will see them realized in my lifetime, I believe they are worth fighting for.

At the same time I want to push back against the impression Taylor leaves that the problem of techno-utopian wealth concentration (and the concomitant impoverishment of the creative class) is ultimately a function of the philosophy of “free culture” and its expression in the everyday practices of audiences. Of course, the regret narrative is right to point out that that philosophy can be, and has been, used for problematic ends. The digital era has produced its own brand of charlatanism, and Taylor is right to expose it. (Though I should add that her analysis of Lawrence Lessig is incomplete; she praises Lessig's critique of copyright law, but then adds that he “ignores the problem of commercialism”—overlooking the fact that for the last seven years Lessig has been focused not on defending file-sharing kids but on getting money out of politics.)

Because of the philosophy of “free culture,” Taylor suggests, we now blithely assume

that traditional gatekeepers will crumble and middlemen will wither. The new orthodoxy envisions the Web as a kind of Robin Hood, stealing audience and influence away from the big and giving to the small. Networked technologies will put professionals and amateurs on an even playing field, or even give the latter an advantage. Artists and writers will thrive without institutional backing, able to reach their audiences directly. A golden age of sharing and collaboration will be ushered in, modeled on Wikipedia and open source software. 

Put that way, and considering the obscene sums of money now made by the winners in our winner-take-all system, “free culture” does sound naive and delusional—just as “All You Need Is Love” sounded naive and delusional by the end of the Sixties. But the fact that its concepts have been cynically appropriated in ads and business models and TED talks (is anything in our culture immune to such appropriation?) does not mean the concepts themselves are bad, or that they don’t remain an effective way to foster creativity, or that they will be useless for breaking out of our commercial stranglehold at some point in the future. Indeed, as I argue in Decomposition, it is not a question of whether collaboration, or sharing, or remixing, or sampling, or piracy, or whatever other forms the concepts take, should or should not occur, as the outcome of a single, simple moral decision. The history of art reveals that, whether we like it or not, these ideas have been crucial to creativity since long before the first computer was ever built. (“All art,” as Glenn Gould once said, “is really variation on some other art.”) Denying that history, even in the name of economic fairness, may create more problems than it solves—depending on what sort of culture we actually want.

Moreover, “free culture” (or, in my own, more broadly-defined term, decomposition) is still the best way of critiquing the real issue: the ideology of authorship and authenticity. (For what it's worth, here's how I define those terms in my book, and in terms of music: authorship is the idea that works are created by solitary individuals, and authenticity is the idea that there is a “singularly true, ideal experience of music that trumps all others, disregarding the variability of audience perception, and accessible only to those with ‘correct’ knowledge and ‘proper’ understanding.”) It’s still the best way, for instance, to undermine the celebrity worship that propels our interactions on the network. Taylor’s point that Web 2.0 has led to the consolidation of superstars as a class is instructive here; superstars are now both fewer and wealthier. In a world of aesthetic abundance, the veritable “celestial jukebox” we were promised, why should that be? I blame our willingness to believe in putatively objective hierarchies of quality—the individual god-like artists and reified expressions of music that we are all supposed to agree are among “the best that has been said and thought in the world” (to use Matthew Arnold’s famously narrow definition of culture). As belief systems, authorship and authenticity create powerful cultural cliques; they have a tendency to pull audiences toward an arbitrary center of gravity, to work against a more haphazard and chaotic process of taste formation, and to stomp down the so-called long tail in favor of a disproportionately large head.

But not only do authorship and authenticity corrupt our understanding of art—they also drive the free market system that makes the techno-utopian mega-corporations possible in the first place. They are crucial to private property (see John Locke). They are crucial to advertising (see trademark law). They are crucial to planned obsolescence (see the parade of new devices, each made possible by an updated slate of proprietary technology). They inform our understanding of the techno-utopian mega-corporations themselves (see the cult of Steve Jobs). So it is actually not that these corporations have truly embraced the “free culture” philosophy they benefit from, or done away with authorship and authenticity, as some purveyors of the regret narrative would have it. Instead, they have claimed authorship (this brand made this product!) and authenticity (it is objectively the best; buy it!) for themselves. What is Mark Zuckerberg now if not, legally, the “author” of a significant chunk of the Internet—a rights holder of the donated experiences and expressions of the enormous number of people who use his network? If “free culture” had really come to pass, that kind of ownership would be impossible.

Given this continuity, the danger of the regret narrative is its propensity for engendering a feeling of nostalgia—not a critique of the system itself, but a critique of the current manifestation of the system. Taylor is better at avoiding this trap than most, but even she betrays an occasional fondness for the more dubious aspects of what we have lost. She repeats, for instance, the problematic notion that one of the benefits of the old label ecosystem is that its pop hits functioned to “funnel revenues from more successful acts to less successful ones.” She calls that dynamic “cross-subsidies,” but to me, it sounds a lot like trickle-down economics: nice if you happen to be one of the fortunate few to get some of the windfall. More broadly, if free market capitalism is the problem, why would changing the elite beneficiaries of that system—subbing in the big record labels or movie studios for any of the major technology companies—be the solution?


What we have needed for a while now is a radical disruption of some sort. Perhaps the Internet was that disruption—or perhaps it merely made us aware of the ideas that will make that disruption possible. In either case, we should be careful of banishing those ideas to the dustbin of history, just because we have not yet been able to take full advantage of them.

Wednesday, March 26, 2014

What Music Writing Really Needs

Ted Gioia recently wrote a piece on the state of music journalism in 2014; it has been making the rounds, so you’ve probably seen it already.
I’ve admired Gioia’s work ever since I read his The Imperfect Art and West Coast Jazz, both of which I came across as a graduate student. West Coast Jazz in particular was an important book for me, given that I read it shortly after abandoning the east coast, and while I was trying to set myself up in Los Angeles. It was incredibly gratifying to hear someone say out loud (in my head, at least) that New York is not the only place where good jazz exists, or can exist.
Gioia’s latest essay--which bears the provocative title “Music Criticism Has Degenerated Into Lifestyle Reporting”--is more of a complaint than a deconstruction. In Gioia’s view (as if one couldn’t tell from the title), modern “music criticism” is failing both audiences and musicians. The problem is critics’ lack of what Gioia calls “technical knowledge”--by which he seems to mean, first, a direct discussion of how a performance is executed, preferably informed by the critic’s own experience as a musician; and second, references to things like “song structure, harmony, or arrangement techniques” (that is, expressions of music theory). 
Gioia never identifies which publications or writers he is reacting to, but we can guess--when he says “one can read through a stack of music magazines and never find any in-depth discussion of music,” he probably isn’t talking about The Wire. And it would be foolish to deny that the big glossy periodicals like Rolling Stone have descended pretty far into becoming, basically, fashion magazines--though that’s not news, I don’t think; it’s been happening for years.
Still, it’s not as if there is no meat on the bones here. And yet--something in Gioia’s article doesn’t quite ring true. In part, it’s that the strain of discourse he addresses is not and never has been “critical,” per se. What he is actually focusing on, I think, is the modern incarnation not so much of record or concert reviews, but of the cult of celebrity--the passionate adulation of stars that stretches all the way back to nineteenth century musicians like Niccolò Paganini and Jenny Lind and Franz Liszt. “Lifestyle reporting,” in this sense, is not the sudden blossoming of lowest-common-denominator excess, but a deeply-ingrained cultural habit, and one that has always served a different function than criticism, even when it has been informed by tendencies from high art (such as the hagiography of genius).
Which is not to say Gioia is wrong in claiming that there is a problem. I agree that something is missing from popular writing about music--or a lot of it, anyway. I’m just not sure that what’s missing is “technical knowledge.” Or maybe, more exactly, I’m not sure that what’s missing is technical knowledge only. After all, a harmonic progression, or a song structure, or a time feel, is never inherently meaningful. Each of these technical aspects of a musical work takes its significance from the way it is deployed in a culture--both from how it relates to the technical expressions of other musicians, and from how it is socially valued. The blues scale, for instance, could not be understood as an important detail of blues music--it would not be worth writing about in the first place--if it didn’t speak to something about the lived experience of the people who listened to and enjoyed the blues.
To put that another way: it’s not enough to decry the absence of theory in popular music discourse. The real problem is the inability, or the unwillingness, to connect theory and praxis. Go ahead and write about that blues scale if you like--or that harmonic progression, or that song structure, or that time signature--but if you do, make sure you follow through and make a connection to your readers’ daily lives. A critic’s job, if I may be so bold, should be to bridge the chasm between the abstract and the concrete--not to celebrate theory for its own sake. That was Harry Connick, Jr.’s mistake in bringing up--one is almost tempted to say brandishing--the subject of pentatonic scales on American Idol. He didn’t make much of an attempt to explain why he thought they were undesirable in that context. Indeed, if they were good enough to be what he called “classic go-tos” for R&B, gospel, and jazz musicians, why on earth should they be avoided by aspiring singers? And what is it about pentatonic scales that makes them so attractive in the first place? (See Bobby McFerrin for a much better example--though it’s not music criticism per se--of how one can connect a technical idea to a lived experience, with respect to exactly this question.) 

The same problem hampers Owen Pallett's valiant analysis of a Katy Perry song, in an essay that he offers in response to Gioia. Pallett remarks that Perry's


voice is the sun and the song is in orbit around it . . . The insistence of the tonic in the melody keeps your ears' eyes fixed on the destination, but the song never arrives there. Weightlessness is achieved. Great work, songwriters!
Huh? Delayed resolution is one of the oldest clichés of music analysis. But why should "weightlessness" be important for listeners? That's a vague concept masquerading as an insight. Why should we care?    

Perhaps I shouldn’t be so hard on HCJ or Pallett. In music, there is frustratingly little precedent for finding the connections between theory and praxis (or technical concerns and their social context). Someone, in one of the many social media threads on this (don’t remember who, sorry), pointed out that it is not unusual to see popular writing on film, or photography, or fine art, or television, make use of technical terms and concepts in order to drive an analysis home. That’s true, I think, but it’s not because visual culture is somehow fundamentally more accessible. The problem is that music--thanks in part to musicology’s historical obsession with scores, and performance departments’ (more understandable) obsession with professionalization--lacks a robust reception theory. The cluster of disciplines that deal primarily with visual culture (media studies, film studies, and the like) have simply had a huge head start when it comes to thinking about audience. (And whatever your feelings on academia, those disciplines have influenced the way their subjects are discussed in the broader culture.) 
Music, in contrast, is often discussed as if it happens in a vacuum. Ben Ratliff made the point recently in a blog post on his new project, a “book about listening to music” (which I am very much looking forward to). In talking about his research, Ratliff noted (the emphasis is mine): 
I have spent a lot of time with the books in my house that come to grips with listening as process and reaction and ritual, the real-time experience of it, how music might change our listening and how our listening might change music. I am always looking for books like this. There aren’t that many.

Sad but true. And crucially important. We know precious little about how we listen, or about the complex relationship between how we listen and what we consider “music” to be. And so we can talk or write about theory and technique until we’re blue in the face--I for one think even untrained readers can handle it!--but those things will never really be relevant, let alone be useful to a listener’s experience, until we begin to understand them in situ.

Wednesday, February 12, 2014

Ellington addendum

Thanks to Jason Crane for pointing me to this thought-provoking Ethan Iverson piece, following up on his interview with Terry Teachout and his reflective essay about Ellington.

Iverson is pretty critical of blogger and writer Maria Popova, who recently posted a brief but favorable review of Teachout’s Duke. In the main bit Iverson focuses on, Popova writes:
What Ellington did was simply follow the fundamental impetus of the creative spirit to combine and recombine old ideas into new ones. How he did it, however, was a failure of creative integrity. Attribution matters, however high up the genius food chain one may be.
Personally, I have a hard time responding to this, because I’m not sure what Popova means by that phrase “failure of creative integrity.” (Is she accusing Ellington of an ethical failure, I wonder?) And what is a “genius food chain”?
In any case, Iverson is right that “everyone who loves jazz has always known how important dozens of great musicians were to Duke Ellington and the sound of Ellington's music. No other bandleader has shined such a powerful light on so many of his team.” And so it’s odd that the rest of what he says in this piece restricts that light only to Ellington, criticizing a broadly collaborative interpretation of his career. Iverson begins with an analogy, citing (as an example of how artistic appropriation happens) the opening portion of Stravinsky’s Rite of Spring, noting that the composer borrowed the theme from “a book of folk melodies”—in that book, the tune was called “Tu, manu seserėlė.” After comparing scores for the opening of Rite and its source, Iverson claims that “the source could never have been a hit. But Stravinsky’s asymmetric rhythm and the astonishing orchestration (the highest notes of the bassoon) made it an earworm.”
Wait a minute. For something to qualify as a “folk tune,” isn’t it by definition already an earworm? In this case, didn’t “Tu, manu seserėlė” need to have some kind of life as a shared melody, handed down by ear for some period of time, before Stravinsky ever got his own ears on it? How else would it have ended up in a book of folk melodies in the first place?
Iverson never comes out and says it, but he seems to be giving credence to the idea (the wrong idea, I think) that the relationship between Stravinsky and the author(s) of “Tu, manu seserėlė”—like that between Ellington and his musicians—was fundamentally hierarchical. He credits Ellington with the divine act of bestowing immortality on the music—just as, in another analogy, “Bach's harmonizations immortalized chorale tunes from the Lutheran community” (he cites Matthew Guerrieri on this). Like the Stravinsky, that’s an interesting comparison, but I have to say, anecdotally: I play some of those chorale tunes every Sunday for a Lutheran congregation here in Portland, and very rarely do I have the pleasure of being asked to play them in their Bach harmonizations. Indeed, I wish I had more opportunities to do so—if I had my way, I’d be playing as much Bach as I could. But the truth is that at least some of those tunes are doing just fine without him, thank you very much—and that’s a kind of immortality too.
This notion of authorial hierarchy is an easy bias to adopt, because of our cultural habit of lionizing the composer. But like the desire to “definitively” separate the work of Ellington and Strayhorn, it clouds our understanding of collaboration, which is a much more pervasive phenomenon than we're comfortable admitting. It’s like saying the composer is Europe, and the other guys are the New World, providing the raw materials that are then harvested and mined and turned into art—and then celebrating that interpretation of the relationship. “I am absolutely convinced this is what Duke did with fragments from the early horn players as well,” Iverson writes, in his comparison of Ellington and Stravinsky. “None of them—not a single Duke horn player, ever!—has contributed a standard to the repertoire. It was the settings that made these fragments famous.” 
But that’s not quite fair. If the Ellington horn players were never known to have created standards outside of the band, that only proves that they depended on context just as much as their boss did, and that the context Ellington provided was a rare and inspirational thing for everyone, even when (as was sometimes the case) there were bad feelings involved. Ensembles like that don’t just drop out of the sky, and they require something more than simply putting a bunch of talented individuals together in the same room. 
More importantly, why are we so obsessed with saying that Ellington—undoubtedly a man of prodigious gifts—was more essential to the overall creative process than his bandmates? Especially in cases where he used a tune (or a melody, or a lick) that came from elsewhere? What view of the world does that interpretation protect? If the end result of a composition was greater than the sum of its parts, what justifies the impulse to single out the author of only one part?
I (respectfully) suggest that arguments that force a value choice between raw materials and work done to them are inevitably informed by their makers’ own creative strengths and weaknesses. No one can do everything well. I do well what I do well, and it’s easy for me to overlook the importance of what the other guy can do better than me. (I once had a literature professor who pointed out how tempting it is to talk shit about iambic pentameter until you try to write it yourself.) Coming up with a worthy tune—even coming up with a worthy tune fragment—is never as easy as it seems to the person who didn’t actually do it. It’s silly to dismiss that part of the process as if it’s literally nothing. Yet incorporating it into our conception of creativity requires wrapping our minds around a dynamic that is more complicated, profound, and beautiful than we have been taught to expect. And that may take a little work too.