Wednesday, November 18, 2015

Is This Thing On?

Today is the anniversary of the "on-sale" date for Decomposition. Also, this year I have blogged a total of three times. 

Is there a connection?

Part of the reason for my absence is that I took most of 2015 to revise my middle-grade novel. I finally finished it in September, and now I'm waiting to see if it will find a publisher.

In the meantime I'm trying to triangulate between the two writer-identities I have for some reason insisted on forging for myself: the lapsed academic who writes kooky things about music and culture, and the fantasy novelist.

I feel like there must be a connection there, but I haven't figured out what it is yet.

* * * * *

Things I'm listening to:

Chris Schlarb, Dropsy. (Chris and I have long shared an affinity for Zappa, and the melodies on this recording are some of the most Zappa-esque I've heard from him.)

And something I just received in the mail and can't wait to dive into: The Big Reveal, by my friends Hot Breakfast! (Heard an advance cut from this a while back, and it was gorgeous. I bet the rest of the album is too.)

I'm also returning to some musical projects of my own, of which more soon.

For what it’s worth: if I believed in things like “record of the year,” my vote might go to Sufjan Stevens’s Carrie & Lowell.

* * * * *

Via Alex Rodriguez, this JSTOR article about Miles Davis. I feel like this sort of thing can only come from people who don't think hard enough about what music is. Consider this (quoted) question:

How are we to account for such glaring defects in the performances of someone who is indisputably one of the most important musicians in the history of jazz?

The answer is easy: first, redefine the word “defect.” Next, stop obsessing about what is or is not “important.” If you love a piece of music, great. If not, also great.

* * * * *

A friend recently pointed out how quiet (socially awkward?) I am in real life, which reminded me why I write in the first place: it’s my main and preferred form of communication. With that in mind, I pledge to be back here more often from now on.

Wednesday, March 11, 2015

On the suit

The “Blurred Lines”/“Got to Give It Up” judgment is the latest in a line of silly copyright cases (here's a good musicological argument against it), but one detail has been mostly ignored:

The $7.3 million figure beats the previous record for the highest judgment in a copyright infringement suit.

A relevant counterpoint from my book (Chapter 7):

. . . where art intersects with commerce, progress traps occur too. In a capitalist system, artists, labels, technology companies, and other music professionals naturally seek to grow their profits. (“If you sell fifty million records one year and seventy the next year,” notes Jeff Gold, describing the expansion of Warner Music in the 1990s, then soon someone is going to ask “how are you going to sell eighty?”) As elites become more efficient at producing, marketing, and selling music, that increased efficiency stresses the system. Music becomes, as William Patry puts it, “a zero sum game, where the more people vie for the top, the fewer make it, but the rewards are disproportionately greater.” In the process, the thoughtful listener is left with a nagging feeling that just as we cannot understand music outside of recording, or our own thoughts about it outside of writing, it is now difficult to even conceive of it outside of money—outside of our transactional roles as producers, consumers, or both.  

My friends complain about modern pop all the time. I wish I could evaluate it in aesthetic terms. But I feel like I can’t even hear it. It sounds like money to me. I hear the money that went into the production. I hear the money that went into the promotion. I hear the money that is being exchanged every time it is performed. I hear the money that is expected as a kind of birthright. Lord help me, I can’t get past the money.

Call me crazy, but I think that’s a problem.

Friday, February 27, 2015

On the dress

(image by Alex Tarr)

The latest in silly Internet memes: 
. . . for the past half-day, people across social media have been arguing about whether a picture depicts a perfectly nice bodycon dress as blue with black lace fringe or white with gold lace fringe. And neither side will budge. 

And a relevant counterpoint from my book (Chapter 4):

The point is that while we readily admit there is no one way to understand a work of art, no one manner of perceiving—“all art is subjective” is one of the great clichés of aesthetic dialogue—we ignore the consequences of that statement: that there is, as far as perception goes, never a singular work to agree about in the first place. Instead, we cling to the reified idea of music [or any other artwork], using it, in the worst-case scenario, to police the responses of others, or else, more kindly, to prioritize the overlap in our perceptions—as with, for example, the concert protocol that calls for simultaneous group applause, and the impression of consensus it produces by eliding complexly differentiated responses into a symbolic burst of ostensible mass agreement. 

Alas, perception is fundamentally idiosyncratic—whether we’re talking about listening, looking, eating, touching, or smelling. The fact that “the dress” (as it has come to be known) has pushed us to argue about what is “really” there shows how uncomfortable we are with that basic truth.

Friday, January 30, 2015

Decomposition around the web

(image c/o BookPeople)

Since the book's release date on November 18:

* I wrote a piece for Huffington Post called “The Discomforts of Digital Music”—a plea for listeners to break out of the star system, which I think ultimately hurts us all. (A sample: “To me, the real gift of digital technology is not the feeding frenzy of infinite free music; it's the possibility of fostering artistic communities that are viable precisely because they are intimate and idiosyncratic, and because they form spontaneously, through the unprecedented channels of communication to which we now have access. If such communities are allowed to derive from shared passion, shared passion itself will nurture economic justice.”)

* I did two book readings, one at Powell's in Portland (December 1), and one at Town Hall in Seattle (December 2). Both were great fun (though the Powell's event was better attended and sparked a longer discussion). 

* In advance of the Powell's event, writer Robert Ham did a nice piece on me for the Portland Mercury. It was great to meet and chat with him, and I appreciated his smart questions. (I should clarify for the record, though, that I haven't been 42 since 2011.)

* For the Seattle Weekly, Gavin Borchert did this preview of my talk at Seattle's Town Hall.

That's very flattering! (A small correction: the “demythologizing without demeaning” line comes from the book's introduction, not the afterword.)

* At the end of December, Decomposition made it onto Los Angeles Magazine's “Best ‘Little’ Music Books of 2014”—a welcome surprise, to say the least. Matthew Duertsen called it “refreshingly unstodgy”—refreshingly going against the grain of some of the more glib criticism the book has received.

* PhD candidate Madison Heyling's in-depth analysis of Decomposition for Music and Literature is probably one of the most detailed and thoughtful write-ups the book has yet received, and for that I'm very grateful. (I know how hard it is to be a grad student and do other intellectual work, so I truly appreciate the time this must have taken.)

* Ethan Iverson gave the book some love, both on the DoTheMath site (“covers an exceptionally wide turf; indeed, I can't think of reading a previous book that glosses jazz, classical, and pop in equal measure and with equal conviction”) and on Twitter:

As I remarked in my response to Ethan: that may be the first time anyone has called the book 

* * * * *

Given the book’s polemic, I have been pondering how to respond to the criticism that has emerged alongside the praise (sometimes from the same critic). I've been a little hesitant, honestly. Aside from the effort it takes to formulate a response—effort I’d much rather spend on new projects—responding also runs the risk of seeming unseemly. After all, it’s a reader’s prerogative to read the way she reads. And a thoughtful writer always has to be comfortable with the possibility of miscommunication.

Still, there are things in the criticism that have been sticking in my craw, and that I feel I should address at least briefly. One of them is the idea that I use the pronoun “we” recklessly. Borchert, for instance, dings me for the line “We are convinced that the quality of a musical work cannot derive, even if only partially, from its context.” Heyling makes a similar point:

One of Decomposition’s troublesome aspects is that Durkin bases many of his arguments on a set of assumptions that he positions as universals about listening. For instance, he writes: “We have become accustomed to focusing on the end result of musical production as if that’s all there is to it.” Similarly, he pronounces: “There has been a great deal of anxiety about how we value music—but also what music means . . . and even what it is.” 

To a point, I understand these complaints. I certainly find it irritating when other writers overuse the “we” convention—one of my favorite recent non-fiction reads, Kathryn Schulz’s Being Wrong, is, in my opinion, marred by this same tic. And I can’t deny that the lines that Borchert and Heyling quote are in my book, and that they sound a little pompous taken out of context. 

Yet context is important. Consider: at the beginning of the book I write that the influence of authorship and authenticity—what Heyling assumes I have posited as a “universal”—“is by no means universal.” Later, in introducing the section on authorship, I say that “I don’t want to exaggerate the case here by suggesting that the rhetoric of genius”—the mode of speaking about authorship that I am critiquing—“is the only available mode for speaking about music in our culture.” And in laying out the history of authenticity (a worldview that argues against the importance of context, and thus is directly pertinent to the sentence Borchert cites), I argue that inauthenticity “is at least as important” as a cultural phenomenon, “whether we live with it as a hard, inescapable truth or intentionally turn to it as a source of postmodern nirvana.” 

I could cite other examples; this sort of qualification goes on throughout the book. I had assumed, perhaps too easily, that readers would take this framework into account whenever coming across my use of first person plural pronouns. 

But I will also admit that there are two other things going on here that complicate the discussion. The first is that I’m trying to make a distinction between musical discourses and musical experiences, even as I recognize that they are mutually influencing. (“Ultimately,” I write, “rather than defining music, I am interested in how we discuss whatever it is we think music is, as well as what that discussion obscures.”) And in terms of musical discourses, the challenge is that in many cases “we” actually does apply—in the same way that it applies when, say, a nation goes to war against the wishes of at least some of its citizens. In that sense, I certainly can say that “we are convinced that the quality of a musical work cannot derive, even if only partially, from its context.” Even if I don’t literally count myself as a part of that “we” any more than Borchert or Heyling do, I am still part of the culture that holds this as a discursive value. It is really only in terms of the category of musical experiences that the “we” doesn’t apply, because that is where perceptual individuation happens. 

Missing this distinction, Heyling makes an odd move, recognizing that I am “rather self-aware about [my] background and personal preferences,” but then asserting that I do “not seem to have fully allowed that those biases have colored the book’s premises.” Yet when I talk about music experientially, I certainly do correct for my biases. After all, I spend a good deal of the book empathetically exploring music and musical practices that I, as a listener, don’t particularly enjoy or understand—Milli Vanilli, for instance, or auto-tuning, or drone metal. And when I talk about music as a discursive practice, my own biases are irrelevant, because I am addressing what people say and write about music, not what they actually experience (which is inaccessible to me, and which may indeed be inexpressible).

The second reason this is difficult to discuss is that there’s a case to be made that perhaps the mania for authorship and authenticity is more widespread than any of us care to admit. Like the white middle-class liberal who doesn’t want to believe she has any role in perpetuating racism, the academically informed music fan doesn’t want to believe she has any role in perpetuating essentialized ideas about art. And yet my argument is that the discursive practice runs deep, and is hard to override. (If I knew Borchert and Heyling better, I would be willing to bet I could find examples of its expression in their work, without too much trouble. Indeed, I often find myself unintentionally falling into this way of speaking and writing too.) In part that’s because the practice is extremely convenient, especially as culture gets more dense and complex. “It is much more elegant,” as I put it in the book, “to say that ‘Cotton Tail’ is Duke Ellington’s composition than it is to say ‘Cotton Tail’ was a messy palimpsest, composed by Ellington, Ben Webster, George Gershwin, some unknown musician who first used the rhythm changes, et al.” But in part it’s because it is habitual, and human beings are creatures of habit.

One final point about the Heyling piece and then I’ll be done critiquing the critics. She argues that my “bibliography makes it clear that [I have] not engaged with most of the influential musicological literature from the last thirty years, in spite of the book’s copious references to other scholarship from other fields.” She’s absolutely right that I don’t draw on Philip Bohlman, Katherine Bergeron, or Lawrence Kramer (the musicologists she cites).* I’m sure the book is weaker for it. For the record, however, here are some of the musicologists (or musicology-informed thinkers) I do draw on, most of whom have indeed published important work within the last thirty years: Richard Taruskin, Lydia Goehr, Carolyn Abbate, Theodore Gracyk, Susan McClary, Joseph Kerman, Jonathan Sterne, Christoph Wolff, Simon Frith, Joseph Horowitz, R. Murray Schafer, Christopher Small, Alex Ross . . .

Still and all: I am very grateful that readers and critics are engaging with the book. I look forward to further commentary.

* Heyling is wrong that I don’t cite Benjamin, however. I cite him twice.

Monday, November 17, 2014

Why don't you write me?

Hello. It has been a while since I have posted anything, and so I have a number of updates.

Tomorrow, my first book, Decomposition, will be officially published. I guess that’s a milestone for me. For those of you who don’t know, Decomposition began as a dissertation (I defended it way back in 2004)—and in 2009, in a fit of boredom, I began publishing pieces of it on this very blog. In that form it was soon discovered by the woman who would become my agent—the amazing Barbara Clark. One thing led to another, and here we are.

For what it’s worth, a lot of work went into revising the book from its dissertation version. Three years of work, in fact. I mention this only because one of the critiques that seems to be emerging (in the Amazon Vine reviews, at least) is that the book is difficult and academic. Which is not to say that Decomposition is not difficult and academic (fair warning, though your mileage may vary)—only that, if it is, it is probably a lot less difficult and academic than it used to be.

Still, I’m pleased that the Amazon Vine reviews are, on the whole, favorable—even some of those who struggled with the content gave it high marks, and at the moment there are two five-star reviews. All of which is certainly gratifying.

I simultaneously have two other book projects going on, each of which is occupying a good deal of my attention (one reason I haven’t been blogging much). First is another non-fiction book, which at the moment is just an idea, really . . . a set of notes and sketches. I have wanted for a while to do a Decomposition-type book (that is, a turn-conventional-wisdom-on-its-head-type book) about each of the three subjects my mother insists shouldn’t be discussed in public: religion, politics, and sex. So I have embarked on the first of these—an agnostic argument about religion, belief, epistemology, and ethics, informed by my years as a church organist here in Portland.

My other book project is much closer to completion. Actually, I thought I had completed it last spring—it’s a novel I wrote for my daughter, about a tree that grows across the Cosmos and connects two worlds, and the cat who travels between them. Over the summer, I decided (based on some expert advice from Barbara) that it needed more work. I guess I hadn’t fully appreciated that it can take more than twelve months to write a first novel . . .

So I have been deep in revisions on this cat book through most of the fall. It’s utterly different from any creative work I have yet undertaken—less beholden to the “real world” facts that guide non-fiction writing, but somehow more dependent on a clear internal logic than any of the instrumental music I write. Not that my instrumental music has no internal logic—though I’m sure not everyone thinks so!—just that that logic doesn’t need to be communicated to the audience as explicitly. With music, I can just sort of “feel” where a piece works, without necessarily having to articulate its structure to myself or anyone else. In a novel, you have to use these things called words . . .

So I’ll get back to it. I hope to post here more regularly in the weeks ahead—though I should say that I haven’t had much interest in the controversies that have dogged the jazz world over the last year (everything from the Sonny Rollins satire to that Mostly Other People Do the Killing album). There was a time when I would have been all over that stuff. But now the idea of engaging it feels so unproductive—even unhealthy. So I’ll probably post more broadly about music, or fiction, or writing in general, as I can.

Thanks for reading!

Monday, June 30, 2014

These are not the droids we were looking for

Astra Taylor
The People's Platform: 
Taking Back Power and Culture in the Digital Age

(Get it here.)

So far, digital culture has had a convoluted history. For most of the first decade of the twenty-first century, the lines in the sand seemed clear enough: on one side were the legacy content industries, exemplified by institutions like the RIAA and the MPAA, those infamous acronyms that fought tooth and nail to protect the idea that art and culture were private property. On the other was the freewheeling web, which promoted more democratic ideas about what it meant to create, to be an author, to be a cultural participant. (As I argue in Decomposition, my forthcoming book, these were not new ideas, but rather new articulations of old ones.) The point is that whichever side you chose, the choice itself seemed uncomplicated: either you were for the new way, or you were for the old one.

In the wake of Web 2.0 (when can we start calling it Web 3.0?), staking out a position in this battle is more problematic, subject to all kinds of uncomfortable intersections and realignments. In music, once upon a time, being for independent artists and the new technologies that were supposed to help them meant that you were against the legacy industry. To some extent the opposite was true as well. Today, arguing for the new technologies and their aesthetic affordances can easily be mistaken for a strike against art and culture, one that starves those things in the name of a vague idealism. Conversely, speaking out about the material realities that artists face can seem the worst kind of conservatism, a way of giving in to “the man,” or wagging one’s finger in a fit of haughty moralism.

With this new complexity a narrative of regret has emerged: a sense that we have been duped, or misled, or at least that the early promises of digital culture are not so easily realized as we once thought. (Lars Ulrich, as some music fans have grown fond of saying, was right after all.) Some of the highest-profile and most articulate expressions of this new narrative have come from writers Robert Levine, Chris Ruen, and especially Nicholas Carr and Jaron Lanier. But in my opinion, the best one so far is Astra Taylor’s The People’s Platform: Taking Back Power and Culture in the Digital Age.

Taylor is a documentary filmmaker with a philosophy background; her film Examined Life, a series of discussions with contemporary philosophers, is recommended. The People’s Platform, her first book, contains precious little philosophy, but in my view it surpasses the other texts in this genre by considering the problem we face from within a progressive framework. Taylor’s is essentially a labor-based analysis, one that strives to get us to see artists’ work in terms of the larger economic structure, and especially in terms of the twenty-first century’s unprecedented upward concentration of wealth. Rightly decrying that trend, she urges us toward a sustainable culture, in which wealth is broadly invested (rather than hoarded) and work is nurtured (rather than depleted).  

All of which strongly suggests that the digital dilemma is a predictable outcome of free market capitalism, since the techno-utopian mega-corporations (Google, Facebook, Amazon, Apple, and so on) have seized upon that economic philosophy to achieve and justify their now-immense power. That analysis makes Lanier's proposal that users be compensated with nanopayments feel like an extension of the problem—as if the world is not commodified enough already—but Taylor offers a number of stronger, more left-leaning solutions, even as she admits all of them will require a degree of consciousness-raising: from the ethos of sustainability itself; to a revivified national arts policy that builds on the history of the NEA or Public Broadcasting or even the WPA; to the idea of regulating service providers and popular Internet platforms as public utilities; to increased subsidies and taxes to be paid by advertisers and technology companies; to inchoate micro-economies built around practices like crowdfunding or sites like Bandcamp. (Taylor doesn’t mention this last example explicitly, but it certainly qualifies.) These may be the best solutions we have at the moment, and though I am not convinced that we will see them realized in my lifetime, I believe they are worth fighting for.

At the same time I want to push back against the impression Taylor leaves that the problem of techno-utopian wealth concentration (and the concomitant impoverishment of the creative class) is ultimately a function of the philosophy of “free culture” and its expression in the everyday practices of audiences. Of course, the regret narrative is right to point out that that philosophy can be, and has been, used for problematic ends. The digital era has produced its own brand of charlatanism, and Taylor is right to expose it. (Though I should add that her analysis of Lawrence Lessig is incomplete; she praises Lessig's critique of copyright law, but then adds that he “ignores the problem of commercialism”—overlooking the fact that for the last seven years Lessig has been focused not on defending file-sharing kids but on getting money out of politics.)

Because of the philosophy of “free culture,” Taylor suggests, we now blithely assume

that traditional gatekeepers will crumble and middlemen will wither. The new orthodoxy envisions the Web as a kind of Robin Hood, stealing audience and influence away from the big and giving to the small. Networked technologies will put professionals and amateurs on an even playing field, or even give the latter an advantage. Artists and writers will thrive without institutional backing, able to reach their audiences directly. A golden age of sharing and collaboration will be ushered in, modeled on Wikipedia and open source software. 

Put that way, and considering the obscene sums of money now made by the winners in our winner-take-all system, “free culture” does sound naive and delusional—just as “All You Need Is Love” sounded naive and delusional by the end of the Sixties. But the fact that its concepts have been cynically appropriated in ads and business models and TED talks (is anything in our culture immune to such appropriation?) does not mean the concepts themselves are bad, or that they don’t remain an effective way to foster creativity, or that they will be useless for breaking out of our commercial stranglehold at some point in the future. Indeed, as I argue in Decomposition, it is not a question of whether collaboration, or sharing, or remixing, or sampling, or piracy, or whatever other forms the concepts take, should or should not occur, as the outcome of a single, simple moral decision. The history of art reveals that, whether we like it or not, these ideas have been crucial to creativity since long before the first computer was ever built. (“All art,” as Glenn Gould once said, “is really variation on some other art.”) Denying that history, even in the name of economic fairness, may create more problems than it solves—depending on what sort of culture we actually want.

Moreover, “free culture” (or, in my own, more broadly defined term, decomposition) is still the best way of critiquing the real issue: the ideology of authorship and authenticity. (For what it's worth, here's how I define those terms in my book, and in terms of music: authorship is the idea that works are created by solitary individuals, and authenticity is the idea that there is a “singularly true, ideal experience of music that trumps all others, disregarding the variability of audience perception, and accessible only to those with ‘correct’ knowledge and ‘proper’ understanding.”) It’s still the best way, for instance, to undermine the celebrity worship that propels our interactions on the network. Taylor’s point that Web 2.0 has led to the consolidation of superstars as a class is instructive here; superstars are now both fewer and wealthier. In a world of aesthetic abundance, the veritable “celestial jukebox” we were promised, why should that be? I blame our willingness to believe in putatively objective hierarchies of quality—the individual god-like artists and reified expressions of music that we are all supposed to agree are among “the best that has been said and thought in the world” (to use Matthew Arnold’s famously narrow definition of culture). As belief systems, authorship and authenticity create powerful cultural cliques; they have a tendency to pull audiences toward an arbitrary center of gravity, to work against a more haphazard and chaotic process of taste formation, and to stomp down the so-called long tail in favor of a disproportionately large head.

But not only do authorship and authenticity corrupt our understanding of art—they also drive the free market system that makes the techno-utopian mega-corporations possible in the first place. They are crucial to private property (see John Locke). They are crucial to advertising (see trademark law). They are crucial to planned obsolescence (see the parade of new devices, each made possible by an updated slate of proprietary technology). They inform our understanding of the techno-utopian mega-corporations themselves (see the cult of Steve Jobs). So it is actually not that these corporations have truly embraced the “free culture” philosophy they benefit from, or done away with authorship and authenticity, as some purveyors of the regret narrative would have it. Instead, they have claimed authorship (this brand made this product!) and authenticity (it is objectively the best; buy it!) for themselves. What is Mark Zuckerberg now if not, legally, the “author” of a significant chunk of the Internet—a rights holder of the donated experiences and expressions of the enormous number of people who use his network? If “free culture” had really come to pass, that kind of ownership would be impossible.

Given this continuity, the danger of the regret narrative is its propensity for engendering a feeling of nostalgia—not a critique of the system itself, but a critique of the current manifestation of the system. Taylor is better at avoiding this trap than most, but even she betrays an occasional fondness for the more dubious aspects of what we have lost. She repeats, for instance, the problematic notion that one of the benefits of the old label ecosystem is that its pop hits functioned to “funnel revenues from more successful acts to less successful ones.” She calls that dynamic “cross-subsidies,” but to me, it sounds a lot like trickle-down economics: nice if you happen to be one of the fortunate few to get some of the windfall. More broadly, if free market capitalism is the problem, why would changing the elite beneficiaries of that system—subbing in the big record labels or movie studios for any of the major technology companies—be the solution?

What we have needed for a while now is a radical disruption of some sort. Perhaps the Internet was that disruption—or perhaps it merely made us aware of the ideas that will make that disruption possible. In either case, we should be careful of banishing those ideas to the dustbin of history, just because we have not yet been able to take full advantage of them.