Monday, November 17, 2014

Why don't you write me?

Hello. It has been a while since I have posted anything, and so I have a number of updates.

Tomorrow, my first book, Decomposition, will be officially published. I guess that’s a milestone for me. For those of you who don’t know, Decomposition began as a dissertation (I defended it way back in 2004)—and in 2009, in a fit of boredom, I began publishing pieces of it on this very blog. In that form it was soon discovered by the woman who would become my agent—the amazing Barbara Clark. One thing led to another, and here we are.

For what it’s worth, a lot of work went into revising the book from its dissertation version. Three years of work, in fact. I mention this only because one of the critiques that seems to be emerging (in the Amazon Vine reviews, at least) is that the book is difficult and academic. Which is not to say that Decomposition is not difficult and academic (fair warning, though your mileage may vary)—only that, if it is, it is probably a lot less difficult and academic than it used to be.

Still, I’m pleased that the Amazon Vine reviews are, on the whole, favorable—even some of those who struggled with the content gave it high marks, and at the moment there are two five-star reviews. All of which is certainly gratifying.

I simultaneously have two other book projects going on, each of which is occupying a good deal of my attention (one reason I haven’t been blogging much). First is another non-fiction book, which at the moment is just an idea, really . . . a set of notes and sketches. I have wanted for a while to do a Decomposition-type book (that is, a turn-conventional-wisdom-on-its-head-type book) about each of the three subjects my mother insists shouldn’t be discussed in public: religion, politics, and sex. So I have embarked on the first of these—an agnostic argument about religion, belief, epistemology, and ethics, informed by my years as a church organist here in Portland.

My other book project is much closer to completion. Actually, I thought I had completed it last Spring—it’s a novel I wrote for my daughter, about a tree that grows across the Cosmos and connects two worlds, and the cat who travels between them. Over the summer, I decided (based on some expert advice from Barbara) that it needed more work. I guess I hadn’t fully appreciated that it can take more than twelve months to write a first novel . . .

So I have been deep in revisions on this cat book through most of the Fall. It’s utterly different from any creative work I have yet undertaken—less beholden to the “real world” facts that guide non-fiction writing, but somehow more dependent on a clear internal logic than any of the instrumental music I write. Not that my instrumental music has no internal logic—though I’m sure not everyone thinks so!—just that that logic doesn’t need to be communicated to the audience as explicitly. With music, I can just sort of “feel” where a piece works, without necessarily having to articulate its structure to myself or anyone else. In a novel, you have to use these things called words . . .

So I’ll get back to it. I hope to post here more regularly in the weeks ahead—though I should say that I haven’t had much interest in the controversies that have dogged the jazz world over the last year (everything from the Sonny Rollins satire to that Mostly Other People Do the Killing album). There was a time when I would have been all over that stuff. But now the idea of engaging it feels so unproductive—even unhealthy. So I’ll probably post more broadly about music, or fiction, or writing in general, as I can.


Thanks for reading!

Monday, June 30, 2014

These are not the droids we were looking for



Astra Taylor
The People's Platform: 
Taking Back Power and Culture in the Digital Age

(Get it here.)

So far, digital culture has had a convoluted history. For most of the first decade of the twenty-first century, the lines in the sand seemed clear enough: on one side were the legacy content industries, exemplified by institutions like the RIAA and the MPAA, those infamous acronyms that fought tooth and nail to protect the idea that art and culture were private property. On the other was the freewheeling web, which promoted more democratic ideas about what it meant to create, to be an author, to be a cultural participant. (As I argue in Decomposition, my forthcoming book, these were not new ideas, but rather new articulations of old ones.) The point is that whichever side you chose, the choice itself seemed uncomplicated: either you were for the new way, or you were for the old one.

In the wake of Web 2.0 (when can we start calling it Web 3.0?), staking out a position in this battle is more problematic, subject to all kinds of uncomfortable intersections and realignments. In music, once upon a time, being for independent artists and the new technologies that were supposed to help them meant that you were against the legacy industry. To some extent the opposite was true as well. Today, arguing for the new technologies and their aesthetic affordances can easily be mistaken as a strike against art and culture, one that starves those things in the name of a vague idealism. Conversely, speaking out about the material realities that artists face can seem the worst kind of conservatism, a way of giving in to “the man,” or wagging one’s finger in a fit of haughty moralism. 

With this new complexity, a narrative of regret has emerged: a sense that we have been duped, or misled, or at least that the early promises of digital culture are not so easily realized as we once thought. (Lars Ulrich, as some music fans have grown fond of saying, was right after all.) Some of the most high-profile and articulate expressions of this new narrative have come from writers Robert Levine, Chris Ruen, and especially Nicholas Carr and Jaron Lanier. But in my opinion, the best one so far is Astra Taylor’s The People’s Platform: Taking Back Power and Culture in the Digital Age.

Taylor is a documentary filmmaker with a philosophy background; her film Examined Life, a series of discussions with contemporary philosophers, is recommended. The People’s Platform, her first book, contains precious little philosophy, but in my view it surpasses the other texts in this genre by considering the problem we face from within a progressive framework. Taylor’s is essentially a labor-based analysis, one that strives to get us to see artists’ work in terms of the larger economic structure, and especially in terms of the twenty-first century’s unprecedented upward concentration of wealth. Rightly decrying that trend, she urges us toward a sustainable culture, in which wealth is broadly invested (rather than hoarded) and work is nurtured (rather than depleted).  

All of which strongly suggests that the digital dilemma is a predictable outcome of free market capitalism, since the techno-utopian mega-corporations (Google, Facebook, Amazon, Apple, and so on) have seized upon that economic philosophy to achieve and justify their now-immense power. That analysis makes Lanier's proposal that users be compensated with nanopayments feel like an extension of the problem—as if the world is not commodified enough already—but Taylor offers a number of stronger, more left-leaning solutions, even as she admits all of them will require a degree of consciousness-raising: from the ethos of sustainability itself; to a revivified national arts policy that builds on the history of the NEA or Public Broadcasting or even the WPA; to the idea of regulating service providers and popular Internet platforms as public utilities; to increased subsidies and taxes to be paid by advertisers and technology companies; to inchoate micro-economies built around practices like crowdfunding or sites like Bandcamp. (Taylor doesn’t mention this last example explicitly, but it certainly qualifies.) These may be the best solutions we have at the moment, and though I am not convinced that we will see them realized in my lifetime, I believe they are worth fighting for.

At the same time I want to push back against the impression Taylor leaves that the problem of techno-utopian wealth concentration (and the concomitant impoverishment of the creative class) is ultimately a function of the philosophy of “free culture” and its expression in the everyday practices of audiences. Of course, the regret narrative is right to point out that that philosophy can be, and has been, used for problematic ends. The digital era has produced its own brand of charlatanism, and Taylor is right to expose it. (Though I should add that her analysis of Lawrence Lessig is incomplete; she praises Lessig's critique of copyright law, but then adds that he “ignores the problem of commercialism”—overlooking the fact that for the last seven years Lessig has been focused not on defending file-sharing kids but on getting money out of politics.)

Because of the philosophy of “free culture,” Taylor suggests, we now blithely assume

that traditional gatekeepers will crumble and middlemen will wither. The new orthodoxy envisions the Web as a kind of Robin Hood, stealing audience and influence away from the big and giving to the small. Networked technologies will put professionals and amateurs on an even playing field, or even give the latter an advantage. Artists and writers will thrive without institutional backing, able to reach their audiences directly. A golden age of sharing and collaboration will be ushered in, modeled on Wikipedia and open source software. 

Put that way, and considering the obscene sums of money now made by the winners in our winner-take-all system, “free culture” does sound naive and delusional—just as “All You Need Is Love” sounded naive and delusional by the end of the Sixties. But the fact that its concepts have been cynically appropriated in ads and business models and TED talks (is anything in our culture immune to such appropriation?) does not mean the concepts themselves are bad, or that they don’t remain an effective way to foster creativity, or that they will be useless for breaking out of our commercial stranglehold at some point in the future. Indeed, as I argue in Decomposition, it is not a question of whether collaboration, or sharing, or remixing, or sampling, or piracy, or whatever other forms the concepts take, should or should not occur, as the outcome of a single, simple moral decision. The history of art reveals that, whether we like it or not, these ideas have been crucial to creativity since long before the first computer was ever built. (“All art,” as Glenn Gould once said, “is really variation on some other art.”) Denying that history, even in the name of economic fairness, may create more problems than it solves—depending on what sort of culture we actually want.

Moreover, “free culture” (or, in my own, more broadly defined term, decomposition) is still the best way of critiquing the real issue: the ideology of authorship and authenticity. (For what it's worth, here's how I define those terms in my book, with respect to music: authorship is the idea that works are created by solitary individuals, and authenticity is the idea that there is a “singularly true, ideal experience of music that trumps all others, disregarding the variability of audience perception, and accessible only to those with ‘correct’ knowledge and ‘proper’ understanding.”) It’s still the best way, for instance, to undermine the celebrity worship that propels our interactions on the network. Taylor’s point that Web 2.0 has led to the consolidation of superstars as a class is instructive here; superstars are now both fewer and wealthier. In a world of aesthetic abundance, the veritable “celestial jukebox” we were promised, why should that be? I blame our willingness to believe in putatively objective hierarchies of quality—the individual god-like artists and reified expressions of music that we are all supposed to agree are among “the best that has been thought and said in the world” (to use Matthew Arnold’s famously narrow definition of culture). As belief systems, authorship and authenticity create powerful cultural cliques; they have a tendency to pull audiences toward an arbitrary center of gravity, to work against a more haphazard and chaotic process of taste formation, and to stomp down the so-called long tail in favor of a disproportionately large head.

But not only do authorship and authenticity corrupt our understanding of art—they also drive the free market system that makes the techno-utopian mega-corporations possible in the first place. They are crucial to private property (see John Locke). They are crucial to advertising (see trademark law). They are crucial to planned obsolescence (see the parade of new devices, each made possible by an updated slate of proprietary technology). They inform our understanding of the techno-utopian mega-corporations themselves (see the cult of Steve Jobs). So it is actually not that these corporations have truly embraced the “free culture” philosophy they benefit from, or done away with authorship and authenticity, as some purveyors of the regret narrative would have it. Instead, they have claimed authorship (this brand made this product!) and authenticity (it is objectively the best; buy it!) for themselves. What is Mark Zuckerberg now if not, legally, the “author” of a significant chunk of the Internet—a rights holder of the donated experiences and expressions of the enormous number of people who use his network? If “free culture” had really come to pass, that kind of ownership would be impossible.

Given this continuity, the danger of the regret narrative is its propensity for engendering a feeling of nostalgia—not a critique of the system itself, but a critique of the current manifestation of the system. Taylor is better at avoiding this trap than most, but even she betrays an occasional fondness for the more dubious aspects of what we have lost. She repeats, for instance, the problematic notion that one of the benefits of the old label ecosystem is that its pop hits functioned to “funnel revenues from more successful acts to less successful ones.” She calls that dynamic “cross-subsidies,” but to me, it sounds a lot like trickle-down economics: nice if you happen to be one of the fortunate few to get some of the windfall. More broadly, if free market capitalism is the problem, why would changing the elite beneficiaries of that system—subbing in the big record labels or movie studios for any of the major technology companies—be the solution?


What we have needed for a while now is a radical disruption of some sort. Perhaps the Internet was that disruption—or perhaps it merely made us aware of the ideas that will make that disruption possible. In either case, we should be careful of banishing those ideas to the dustbin of history, just because we have not yet been able to take full advantage of them.

Wednesday, March 26, 2014

What Music Writing Really Needs

Ted Gioia recently wrote a piece on the state of music journalism in 2014; it has been making the rounds, so you’ve probably seen it already.
I’ve admired Gioia’s work ever since I read his The Imperfect Art and West Coast Jazz, both of which I came across as a graduate student. West Coast Jazz in particular was an important book for me, given that I read it shortly after abandoning the east coast, and while I was trying to set myself up in Los Angeles. It was incredibly gratifying to hear someone say out loud (in my head, at least) that New York is not the only place where good jazz exists, or can exist.
Gioia’s latest essay--which bears the provocative title “Music Criticism Has Degenerated Into Lifestyle Reporting”--is more of a complaint than a deconstruction. In Gioia’s view (as if one couldn’t tell from the title), modern “music criticism” is failing both audiences and musicians. The problem is critics’ lack of what Gioia calls “technical knowledge”--by which he seems to mean, first, a direct discussion of how a performance is executed, preferably informed by the critic’s own experience as a musician; and second, references to things like “song structure, harmony, or arrangement techniques” (that is, expressions of music theory). 
Gioia never identifies which publications or writers he is reacting to, but we can guess--when he says “one can read through a stack of music magazines and never find any in-depth discussion of music,” he probably isn’t talking about The Wire. And it would be foolish to deny that the big glossy periodicals like Rolling Stone have descended pretty far into becoming, basically, fashion magazines--though that’s not news, I don’t think; it’s been happening for years.
Still, it’s not as if there is no meat on the bones here. And yet--something in Gioia’s article doesn’t quite ring true. In part, it’s that the strain of discourse he addresses is not and never has been “critical,” per se. What he is actually focusing on, I think, is the modern incarnation not so much of record or concert reviews, but of the cult of celebrity--the passionate adulation of stars that stretches all the way back to nineteenth-century musicians like Niccolò Paganini and Jenny Lind and Franz Liszt. “Lifestyle reporting,” in this sense, is not the sudden blossoming of lowest-common-denominator excess, but a deeply ingrained cultural habit, and one that has always served a different function than criticism, even when it has been informed by tendencies from high art (such as the hagiography of genius).
Which is not to say Gioia is wrong in claiming that there is a problem. I agree that something is missing from popular writing about music--or a lot of it, anyway. I’m just not sure that what’s missing is “technical knowledge.” Or maybe, more exactly, I’m not sure that what’s missing is technical knowledge only. After all, a harmonic progression, or a song structure, or a time feel, is never inherently meaningful. Each of these technical aspects of a musical work takes its significance from the way it is deployed in a culture--both from how it relates to the technical expressions of other musicians, and from how it is socially valued. The blues scale, for instance, could not be understood as an important detail of blues music--it would not be worth writing about in the first place--if it didn’t speak to something about the lived experience of the people who listened to and enjoyed the blues.
To put that another way: it’s not enough to decry the absence of theory in popular music discourse. The real problem is the inability, or the unwillingness, to connect theory and praxis. Go ahead and write about that blues scale if you like--or that harmonic progression, or that song structure, or that time signature--but if you do, make sure you follow through and make a connection to your readers’ daily lives. A critic’s job, if I may be so bold, should be to bridge the chasm between the abstract and the concrete--not to celebrate theory for its own sake. That was Harry Connick, Jr.’s mistake in bringing up--one is almost tempted to say brandishing--the subject of pentatonic scales on American Idol. He didn’t make much of an attempt to explain why he thought they were undesirable in that context. Indeed, if they were good enough to be what he called “classic go-tos” for R&B, gospel, and jazz musicians, why on earth should they be avoided by aspiring singers? And what is it about pentatonic scales that makes them so attractive in the first place? (See Bobby McFerrin for a much better example--though it’s not music criticism per se--of how one can connect a technical idea to a lived experience, with respect to exactly this question.) 

The same problem hampers Owen Pallett's valiant analysis of a Katy Perry song, in an essay that he offers in response to Gioia. Pallett remarks that Perry's


voice is the sun and the song is in orbit around it . . . The insistence of the tonic in the melody keeps your ears' eyes fixed on the destination, but the song never arrives there. Weightlessness is achieved. Great work, songwriters!
Huh? Delayed resolution is one of the oldest clichés of music analysis. But why should "weightlessness" be important for listeners? That's a vague concept masquerading as an insight. Why should we care?

Perhaps I shouldn’t be so hard on HCJ or Pallett. In music, there is frustratingly little precedent for finding the connections between theory and praxis (or technical concerns and their social context). Someone, in one of the many social media threads on this (don’t remember who, sorry), pointed out that it is not unusual to see popular writing on film, or photography, or fine art, or television, make use of technical terms and concepts in order to drive an analysis home. That’s true, I think, but it’s not because visual culture is somehow fundamentally more accessible. The problem is that music--thanks in part to musicology’s historical obsession with scores, and performance departments’ (more understandable) obsession with professionalization--lacks a robust reception theory. The cluster of disciplines that deal primarily with visual culture (media studies, film studies, and the like) have simply had a huge head start when it comes to thinking about audience. (And whatever your feelings on academia, those disciplines have influenced the way their subjects are discussed in the broader culture.) 
Music, in contrast, is often discussed as if it happens in a vacuum. Ben Ratliff made the point recently in a blog post on his new project, a “book about listening to music” (which I am very much looking forward to). In talking about his research, Ratliff noted (the emphasis is mine): 
I have spent a lot of time with the books in my house that come to grips with listening as process and reaction and ritual, the real-time experience of it, how music might change our listening and how our listening might change music. I am always looking for books like this. There aren’t that many.

Sad but true. And crucially important. We know precious little about how we listen, or about the complex relationship between how we listen and what we consider “music” to be. And so we can talk or write about theory and technique until we’re blue in the face--I for one think even untrained readers can handle it!--but those things will never really be relevant, let alone be useful to a listener’s experience, until we begin to understand them in situ.

Wednesday, February 12, 2014

Ellington addendum

Thanks to Jason Crane for pointing me to this thought-provoking Ethan Iverson piece, following up on his interview with Terry Teachout and his reflective essay about Ellington.

Iverson is pretty critical of blogger and writer Maria Popova, who recently posted a brief but favorable review of Teachout’s Duke. In the main bit Iverson focuses on, Popova writes:
What Ellington did was simply follow the fundamental impetus of the creative spirit to combine and recombine old ideas into new ones. How he did it, however, was a failure of creative integrity. Attribution matters, however high up the genius food chain one may be.
Personally, I have a hard time responding to this, because I’m not sure what Popova means by that phrase “failure of creative integrity.” (Is she accusing Ellington of an ethical failure, I wonder?) And what is a “genius food chain”?
In any case, Iverson is right that “everyone who loves jazz has always known how important dozens of great musicians were to Duke Ellington and the sound of Ellington's music. No other bandleader has shined such a powerful light on so many of his team.” And so it’s odd that the rest of what he says in this piece restricts that light only to Ellington, criticizing a broadly collaborative interpretation of his career. Iverson begins with an analogy, citing (as an example of how artistic appropriation happens) the opening portion of Stravinsky’s Rite of Spring, noting that the composer borrowed the theme from “a book of folk melodies”—in that book, the tune was called “Tu, manu seserėlė.” After comparing scores for the opening of Rite and its source, Iverson claims that “the source could never have been a hit. But Stravinsky’s asymmetric rhythm and the astonishing orchestration (the highest notes of the bassoon) made it an earworm.”
Wait a minute. For something to qualify as a “folk tune,” isn’t it by definition already an earworm? In this case, didn’t “Tu, manu seserėlė” need to have some kind of life as a shared melody, handed down by ear for some period of time, before Stravinsky ever got his own ears on it? How else would it have ended up in a book of folk melodies in the first place?
Iverson never comes out and says it, but he seems to be giving credence to the idea (the wrong idea, I think) that the relationship between Stravinsky and the author(s) of “Tu, manu seserÄ—lÄ—”—like that between Ellington and his musicians—was fundamentally hierarchical. He credits Ellington with the divine act of bestowing immortality on the music—just as, in another analogy, “Bach's harmonizations immortalized chorale tunes from the Lutheran community” (he cites Matthew Guerrieri on this). Like the Stravinsky, that’s an interesting comparison, but I have to say, anecdotally: I play some of those chorale tunes every Sunday for a Lutheran congregation here in Portland, and very rarely do I have the pleasure of being asked to play them in their Bach harmonizations. Indeed, I wish I had more opportunities to do so—if I had my way, I’d be playing as much Bach as I could. But the truth is that at least some of those tunes are doing just fine without him, thank you very much—and that’s a kind of immortality too.
This notion of authorial hierarchy is an easy bias to adopt, because of our cultural habit of lionizing the composer. But like the desire to “definitively” separate the work of Ellington and Strayhorn, it clouds our understanding of collaboration, which is a much more pervasive phenomenon than we're comfortable admitting. It’s like saying the composer is Europe, and the other guys are the New World, providing the raw materials that are then harvested and mined and turned into art—and then celebrating that interpretation of the relationship. “I am absolutely convinced this is what Duke did with fragments from the early horn players as well,” Iverson writes, in his comparison of Ellington and Stravinsky. “None of them—not a single Duke horn player, ever!—has contributed a standard to the repertoire. It was the settings that made these fragments famous.” 
But that’s not quite fair. If the Ellington horn players were never known to have created standards outside of the band, that only proves that they depended on context just as much as their boss did, and that the context Ellington provided was a rare and inspirational thing for everyone, even when (as was sometimes the case) there were bad feelings involved. Ensembles like that don’t just drop out of the sky, and they require something more than simply putting a bunch of talented individuals together in the same room. 
More importantly, why are we so obsessed with saying that Ellington—undoubtedly a man of prodigious gifts—was more essential to the overall creative process than his bandmates? Especially in cases where he used a tune (or a melody, or a lick) that came from elsewhere? What view of the world does that interpretation protect? If the end result of a composition was greater than the sum of its parts, what justifies the impulse to single out the author of only one part?
I (respectfully) suggest that arguments that force a value choice between raw materials and work done to them are inevitably informed by their makers’ own creative strengths and weaknesses. No one can do everything well. I do well what I do well, and it’s easy for me to overlook the importance of what the other guy can do better than me. (I once had a literature professor who pointed out how tempting it is to talk shit about iambic pentameter until you try to write it yourself.) Coming up with a worthy tune—even coming up with a worthy tune fragment—is never as easy as it seems to the person who didn’t actually do it. It’s silly to dismiss that part of the process as if it’s literally nothing. Yet incorporating it into our conception of creativity requires wrapping our minds around a dynamic that is more complicated, profound, and beautiful than we have been taught to expect. And that may take a little work too.

Friday, February 07, 2014

The realest thing to do

A week ago I could not have predicted that the 2014 Super Bowl would inspire so many interesting discussions about music. I have already mentioned the Bob Dylan thing. Apparently there is also a bit of an uproar over the notion that Bruno Mars performed the halftime show for exposure instead of a monetary fee. (In fairness, he isn't the first pop star to have done so.) That may not exactly suck for Bruno Mars, who probably got a ton of new fans as a result -- but it does suck for those of us farther down the musical food chain, who already have a hard time convincing presenters to pay us for what we do. One can imagine the strange rationalizing: well, if Bruno Mars can play for exposure, why can't you?

Still, the incident is a useful reminder that market capitalism is inherently exploitative. Within this system, there might be short-term solutions to the problem Mars highlighted (e.g., labor organizing, boycotts, etc.), but if you're truly interested in economic fairness for musicians, you should think about getting a different system altogether.

I also want to comment on a third discussion, about how members of the Red Hot Chili Peppers -- who appeared in tandem with Mars -- mimed the performance of their song "Give It Away." Bassist Flea has a thoughtful post on the subject. He explains how difficult it would have been, from a technical standpoint, to actually perform live at such an enormous venue, under the given time constraints. When the band, for whom performing is a "sacred thing," was offered the gig, they gave it a lot of consideration, and decided to do it as a once-in-a-lifetime opportunity -- a "wild trippy thing to do." Flea asks:

Could we have plugged [our instruments] in and avoided bumming people out who have expressed disappointment that the instrumental track was pre recorded? Of course easily we could have and this would be a non-issue. We thought it better to not pretend. It seemed like the realest thing to do in the circumstance. 

I like that. The "realest" thing -- implying that in the act of perception, reality is always a question of degree, and never as absolute as we assume, or as it inevitably feels. Riffing on Flea, one could not say of any performance that it was the real thing to do -- only that it seemed like the realest, given the context.

Every time one of these controversies comes along, I want to point out that the resulting outrage is never itself genuine. Once upon a time, one could perhaps claim to be legitimately astonished at the inauthenticity of a performance. (Perhaps.) But in 2014, it's not like audiences don't know, deep down, that some kind of fakery (or, to put that more generously, some kind of mediation) is an unavoidable component of both live and recorded music. We all have had our moments of disillusionment, whether your touchstone is one of the obvious ones -- Luciano Pavarotti (2006), say, or Ashlee Simpson (2004), or Milli Vanilli (1989), or Michael Jackson (1983) -- or something more local, like the pre-recorded soundtrack used at your kid's choir concert. More broadly, in the digital age, fakery is a way of life. We all partake in it. Your Facebook page? It's a construction. The photos on your iPhone? You have probably edited them to make them look cooler. 

For the person sitting at home watching the Chili Peppers on TV, what is the functional difference between the performance with plugged-in instruments and the performance without? Either way, the vibrating air that reaches your tympanic membrane had to originate with a human musician. Either way, it is highly processed, through the instrument, through the soundboard, through the football stadium, through the satellite broadcast, through your television speakers, through the cultural baggage of the Super Bowl, through your living room, through your physical hearing apparatus and subjectivity. How much more is it changed if the sound that registers in your awareness actually began in a studio the day before?

It's not enough to say "live music is better," because what people roughly mean by "live music" -- an almost mystical communion with the artist -- is unattainable with current technology. What's most interesting to me is the weird psychic dance we do in response to that truth. We lie to ourselves. Immersed in fakery, we are not naive; and yet we delight in pretending the fakery is not there, or that it is possible to eliminate -- as if we could plug directly into each other's aesthetic consciousness. We feign horror when that pretense is exposed, as it inevitably will be. Why?

Monday, February 03, 2014

Is there anything more stupid than stupidity?



Completely by chance, the only bit of the entire Super Bowl I happened to see was the Bob Dylan car commercial. I used to be a Dylan fan, but I found it instantly laughable and annoying. I did not bother watching the rest of the game, or the advertising blitz it was a platform for. I tried to content myself with a brief Facebook vent: "I walk in the room long enough to see Bob Dylan selling cars. I walk out."

But today, I find this defense of the commercial as laughable and annoying as the commercial itself:

Let’s be clear: Dylan’s greatest asset over the course of his long and still-going-strong career is precisely his willingness to disappoint and shock his fans and force them to reconsider their relationship to their singing savior. 

Come on. Aside from the fact that a commercial is not an album (does the former really have to be counted as part of Dylan's "career"?), and aside from the fact that Nick Gillespie's celebration of Dylan's "willingness to disappoint and shock" is really just a disingenuous swipe at Sixties liberalism, his argument is pretty weak on aesthetics too. Lauding an artist's desire to go against the grain for its own sake (Dylan "changes identities as often as most of us change our socks," and so on) is as tiresome as lauding an artist for being predictable -- indeed, it is a kind of predictability. Anyone who really cares about art (and has graduated from high school) knows that there is good art that lives up to expectations and is comforting, and there is bad art that is disappointing and shocking. It takes a special kind of speciousness to assume that only the opposite can be true.

My own reaction to the commercial had less to do with shock or disappointment -- no plaintive cries "oh no, they got Bob Dylan!" here -- than with an overwhelming sense of frustrated rage at the sheer tedium of what I was witnessing. Of course they got Bob Dylan. Someday they will get Springsteen. Hell, someday they may even get Woody. That's just the way Capitalism works. But why the fuck should I spend my valuable time on it? I'm not even in the market for a car.

Thursday, January 30, 2014

A meandering post on collaboration

“Certainty is as it were a tone of voice in which one declares how things are . . .”
(Wittgenstein)
Recent reads: two interviews with Terry Teachout, author of a controversial new Ellington biography. I’m about halfway through that book myself, and so far I’m not loving it, although of course the subject is interesting to me—not only because Ellington is a personal hero of mine (he’s the guy who got me interested in jazz in the first place), but also because he is a recurring theme in my own book, Decomposition (to be published later this year). 

Whatever brouhaha exists around Duke is not the first Teachout has prompted. A few years back, he was the guy behind the discussion that led to the “#jazzlives” hashtag. With Duke, the complaints seem to have to do with challenges to Ellington hagiography. As Teachout says in the interview with Darcy James Argue:

Some Ellington buffs hate my book. I have ample reason to know that. And I think the reason why some of them hate it is because — whether they fully understand this or not — they don't believe that he's a great enough man to stand up to an honest discussion of what he was like, both as a man and as an artist.

I respect the swagger here, and I’m all for challenging hagiography, but that comment really captures some of the things that irk me about Teachout’s work. Note for instance the insouciance with which he claims to be able to see into the souls of his critics. More vexing is the underlying assumption that the most important conversation we can have about art is the one in which we designate greatness—a term that I think invites trouble almost every time it is uttered. 

I am not an Ellington scholar, but I know enough about Ellington scholarship to have seen that trap before. Jazz fans, critics, biographers, and academics are often concerned with determining Ellington’s place in relation to already-assumed benchmarks of importance (usually, the so-called classical masters). Teachout summarizes, and then responds to, the most frequent version of this argument in his interview with Ethan Iverson:

Some people think that in order to take Duke Ellington seriously as a composer, we have to believe that he was successful as a composer of large-scale works.  The idea, I guess, is to push him up into the classical-music arena: he played in Carnegie Hall, therefore he's serious.  And that's completely wrong. Duke Ellington is serious because he is Duke Ellington.  [. . .] Jazz is a completely successful form of expression in and of itself, the same way the mystery novel is. 

Put simply, this is a disagreement about criteria. The classically-oriented people say that Ellington belongs in the western canon, on that canon’s terms. And the response Teachout describes—what could be called the “apples-and-oranges” argument—says that Ellington is great for reasons of his own. 

There’s a case to be made for both viewpoints, but neither is particularly interesting to me, because each requires getting into the viper pit of musical analysis, which typically addresses everything but its own subjectivity. I don’t need to be convinced of Ellington’s greatness; I already know his music saved my life. I’d rather look at the man’s biography in terms of process. The collaboration narratives that pervade it have so much more to offer than a circular exchange about whether or not he mastered extended form. Like that of no other popular twentieth-century figure I am aware of, Ellington’s story practically begs us to develop something musicology has always neglected: a compelling, robust theory of collaboration.

The endless anecdotes about how Ellington stole melodies from his sidemen, or put his name on pieces he didn’t work on, or handed off certain arranging tasks, are only controversial because of how little respect we have, culturally, for collaboration—which in turn is a function of how little we understand it. It is strangely both blasé and shocking, almost fifty years after “The Death of the Author,” to say that artists are, by virtue of the physiology of being alive, always collaborating in what they do. But what Ellington demonstrated so forcefully is that collaboration is not a special case: it is the norm. It is “composition.” It is how music happens, no matter the genre. We shouldn’t be reaching back to see how Ellington measures up to Beethoven (or whoever)—we should be looking for the truths that Ellington’s example teaches us about all music.

The problem is that we give lip service to the role of collaboration and then turn around and bury it under the same old stories about individual compositional genius. Consider the way Teachout handles the relationship between Ellington and composer Billy Strayhorn—perhaps the best-known modern musical collaboration narrative after Lennon/McCartney. The modern Ellington biographer’s problem is complicated by the fact that for a long time the notion of Ellington/Strayhorn has been skewed, so that Strayhorn was made to seem an extension of Ellington, a subordinate, a protégé. Teachout, to his credit, pushes back against that version of the story, drawing heavily on Something to Live For, Walter van de Leur’s book on Strayhorn, which emphasizes the younger man’s creative individuality.

And that's a good thing. Strayhorn definitely got short shrift in jazz history, and deserves to be more celebrated. And yet . . . “Ellington and Strayhorn, individual geniuses” is not that much more helpful a way of describing the creativity of these men. It's two steps forward and one step back. Even with their strengths, van de Leur’s book and Teachout’s gloss of it both inhibit the creation of a collaboration theory; their idea of composition is the same idea we have always had: music is aural stuff, written by individual composers. They approach their subjects as an archaeologist would—putting excessive faith in our ability to analyze and understand the objects that have been left behind, and ultimately indulging in a kind of aesthetic positivism that seems to have precious little connection to art as a lived experience.

Van de Leur’s argument, for instance, rests heavily on what could uncharitably be called handwriting analysis. At times his willingness to treat the provenance of the texts he examines as self-evident (just because they are handwritten) is maddening. How exactly can we be sure which squiggles or dots or lines came from which pen? The reader has no real way of knowing: unless I'm missing something, Something to Live For includes only two score facsimiles, and these are extremely short excerpts, tacked on in an appendix, almost as an afterthought. They are meant to illustrate the difference between Ellington’s and Strayhorn’s musical penmanship—but when I study the facsimiles myself, it seems there is a bit of interpretive magic going on. The Manhattan Murals excerpt is particularly confusing, because it supposedly includes a shift in penmanship—van de Leur writes that in bar nine “Strayhorn takes over to finish the eight-page manuscript.” To me, the handwriting in bar nine looks pretty much the same as the handwriting in bar ten. I’m not saying they are the same—and van de Leur spent a lot of time in the archives, so it’s quite possible he sees something I don’t. But it would be nice to know what that something is.

Yet the problem is bigger than this quibble about unarticulated methodology. I realize of course that at times the provenance of a piece of written music is signaled by a composer’s signature or mark. But even in an autograph score, a huge part of the composing process is invisible, and that’s the very part that Ellington’s career stressed over and over again: context. Even if we can definitively prove that a given squiggle/dot/line came from a given pen—how do we know what else was going on in the room at that moment, influencing or even determining that gesture? How do we know someone else was not on the other end of a phone (as Strayhorn often was), singing or playing an idea or suggestion? How do we know what had happened on the bandstand earlier that night—or the night before that? 

Most important of all: how do we know which of these things to include as part of the act we call “composition”? And why?

Interestingly, van de Leur is aware, I think, of the trouble he is flirting with here. In his introduction, he includes “the idea that a manuscript unequivocally reflects the composer’s intentions” as “one of the greatest fallacies of musicology”—only to dismiss that critique in the very next sentence (where he says that “even with these possible pitfalls, the autograph scores provide the most powerful tool in understanding Strayhorn’s music”) and leave it hanging there for the rest of the book. Later, he writes that Strayhorn and Ellington’s

musical partnership consisted of discussion, an exchange of musical ideas, and a quest for solutions to compositional problems, but not necessarily to joint compositions.

But that statement raises more questions than it answers. What is a “joint composition” if not the end result of a discussion, or the outcome of an exchange of musical ideas, or of a mutual questing for aesthetic solutions? Who knows how many discussions or conversations—even those that may have seemed inconsequential—resulted in trains of thought that ultimately led to a decision being made as the notes were inscribed on the paper? How many of those decisions even had to be conscious to “count”? And what is the mechanism by which we, decades after the fact, are able to claim access to knowledge about all of these things? 

These are the sorts of questions a theory of collaboration would at least get us thinking about. Instead, we’re left with critical hubris. Maybe that’s a strong way of putting it, but I don’t know how else to characterize Teachout’s comment to Argue, that van de Leur “went through all the manuscripts, all of them, and has identified with exquisite precision who wrote what”—or the similar statement he makes to Iverson, that the compositional distinctions between the two men have now been established "definitively for all time.”

“Exquisite precision”? “Definitively”? “For all time”? Even if Ellington and Strayhorn’s every movement had been observed, studied, and recorded, throughout their entire time together, such confidence would be misplaced. You’re a human being: are you that transparent? We’re obscuring the really important question: what is writing, anyway? You have an idea for how some notes go together, in a rhythmic or harmonic combination—but where did that idea come from?

Can you even say what an idea is? Is it a bunch of nerve impulses? Something more? Is it divine inspiration? Aliens? 

Okay, I’m exaggerating a little—but my point is that assertions like Teachout’s ride roughshod over the fact that composition is a process of the mind. And if you think we understand the mind with “exquisite precision,” or “definitively”—beyond what it feels like to have a mind, or beyond what we tell each other of what we think—more power to you. I think that’s quite a leap.

Wednesday, January 01, 2014

Prepare a face to meet the faces that you meet

Happy 2014.

On my to-do list for the new year: websites for my book, Decomposition, and for my writing in general.

To that end, some publicity photos. 

Go ahead and laugh. I am utterly uneasy in front of a camera. How brutally it documents the limitations of skin, body, mind, mortality . . . 

Any beauty in these images is solely attributable to the photographer, the amazing Sara Hertel.