
Just a few minutes into my dissertation defense, a little over a year ago now, my outside reader pointed to the numerous places in my dissertation that called for interdisciplinary efforts to build a “better bridge” between the humanities and the sciences, a phrase I myself borrowed from scholars and scientists such as Thomas Metzinger, Antonio Damasio, and Keith Oatley. Openly acknowledging that he wanted to play devil’s advocate for a moment, he asked why I felt that obnoxiously hubristic scientists should be seen as having anything to contribute to work in the humanities in general, or in literary studies in particular. (I’ll note that this particular committee member is someone who does quite a bit of interdisciplinary work himself, so I immediately understood that this was a genuinely open question.) Following that opening question, the conversation went several different ways, as they are wont to do (and should) in these instances; again, the person asking was posing the question as an exercise, not putting it forward as a belief he held.

My response was that, beyond providing access to research and ideas outside the scope of humanities scholarship, I appreciate how those working in the sciences are usually prepared to admit that, after further consideration or in the light of new advances in research and technology, some of their previous assertions and ideas may no longer be viable. I very briefly explained that this is an attitude that humanities scholars would do well to adopt — follow hypotheses, conference and even publish arguments based on them, and then allow the community to help test them. The traditional model, by contrast — develop an intricate and exacting idea, then dedicate subsequent decades to expanding and defending that idea (often) blindly and at all costs — leads to overspecialization, isolation of knowledge, and a system in which so-called “original” work is all but impossible to produce, yet remains the standard used for hiring, promotion, and even job retention. (For more on this, look up Jonathan Gottschall’s excellent 2008 book, Literature, Science, and a New Humanities.)

The example I pointed to at that moment was Antonio Damasio’s acknowledgement, in the first pages of his magnificent 2010 book Self Comes to Mind, that he had recently “grown dissatisfied with my account of the problem [how the brain produces consciousness],” despite 30 years of work on the issue. He goes on to note that there are two particular issues he wishes to revisit — the origin of feelings and the mechanisms that work to construct the self — before noting that his book is simply an “attempt to discuss the current views,” while also identifying “what we still do not know but wish we did” (6). (For an in-depth review / overview of Self Comes to Mind, stop by Ginger Campbell’s always-insightful and fascinating Brain Science Podcast, which I’ve blogged about before, and check out her recent discussion of the book).

This was all brought back to my mind this morning, as my Twitter feed alerted me (via 3 Quarks Daily) to Daniel Dennett’s recent article over on Edge.org. From the very first sentence, Dennett makes it clear that he wishes to revisit a fundamental component of some of his earlier work and discuss the ways in which recent research more or less invalidates some of his original assumptions: “I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine” [homuncular functionalism]. In short, his original idea, following the McCulloch-Pitts “logical neuron concept,” was that neurons could be viewed as “switches,” turning off / on as required during computational processes. While this was a grand oversimplification even at the time, he acknowledges, recent research has revealed just how much of an oversimplification it was: “each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.” Dennett goes on to note that this suggests that the brain is not at all a sort of “well-organized hierarchical control system where everything is in order,” but rather “more like anarchy with some elements of democracy.” He further acknowledges that this enhanced understanding is made possible thanks to two aspects of the current climate in neuroscience research, namely massive quantities of useful data and “bright young people who have grown up with this stuff and for whom it’s just second nature to think in these quite abstract computational terms,” terms and ideas which their predecessors had to struggle to conceive of and visualize 30 years ago.

Dennett’s enthusiasm for the “happy” fate of his original concepts here — a fate shared by the ideas and concepts produced by others, which made his own possible in the first place — is boundless. He’s happy to abandon the old, rigid models, which he feels were still “worth trying for all [they were] worth.” After all, you “go for the low-hanging fruit first,” which in this instance was to assume that brains operated as very simple machines. When that model proves not to work, the trick is to understand why not, which, he notes, has started to become possible.

I won’t go into the details of Dennett’s revised ideas on the subject, other than to highlight his assertion (joining many, many other cognitive scientists) that increased understanding of brain structure continues to suggest strong links between brain evolution and cultural evolution; his next major project will be to “take another hard look at cultural evolution… [to facilitate] a proper scientific perspective on cultural change.” This perspective caught my eye in particular because it brought me full-circle back to my dissertation defense and certain lines of questioning. Anyone attempting to operate in the field of cognitive literary studies will likely tell you that they have encountered similar questions; some have had their work dismissed, ridiculed, or ignored precisely on the grounds that any literary scholar turning to the sciences for information, methodology, or both, must be likewise “arrogant” and “hubristic.” And to be fair, one need not look far to find scientists for whom those labels may fit quite nicely (Richard Dawkins, for all the vitality of his early work, springs to mind here for some reason). Yet the humanities are equally rife with folks whose work fits that same description, and in my estimation, these often wind up being the very people that young scholars are expected to look to as role models. The furor and excitement over Stephen Greenblatt’s wildly popular and even more wildly overreaching 2011 book The Swerve is a good example of this. Even as the book went on to win a Pulitzer, the National Book Award and, more recently, the MLA’s James Russell Lowell Prize for excellence in literary study, many scholars were left scratching their heads (Jeffrey Cohen provides an excellent analysis of the disparity between the book’s critical reception and critiques leveled against it by a variety of scholars and readers here).

Here’s what I would ask: that young scholars work to model the sort of behavior that Dennett and Damasio demonstrate in the instances I’ve highlighted here. In other words, conduct research and share results in the spirit of experimentation and collaboration, partnering with researchers and scholars in other fields — be it biology, anthropology, chemistry, computational science, neuroscience, music, dance, and so on — to reconceptualize and re-engineer the fields they work in. Even more importantly, humanities departments need to open themselves up to this new model as well, working simultaneously to create and sustain an atmosphere of experimentation and collaboration, while also convincing university administrations that the time is long past for abandoning the academic monograph as the sole standard of the importance and impact of one’s research. Last year, the department where I held my postdoc hosted a talk given by a seasoned editor from a storied and respectable academic publisher. Throughout the talk, he advocated for precisely the same thing, arguing that the current emphasis on pushing doctoral candidates in the humanities to produce dissertations that are ready for rapid transitions to monographs, particularly ones aimed at the elite (read: conservative) publishers, is killing the vitality of the humanities. Instead, he urged young scholars and dissertation directors alike to encourage creativity and experimentation, aimed at creating and sustaining broader conversations in which highly specialized monographs are but one of many possible outcomes. For me — at work on broader research that is developing a significant portion of my dissertation project into a manuscript — this blog is one of the ways I can take part in that conversation.

But I would also ask that we all be willing to let ourselves fail from time to time — publish a blog post or even a paper outlining a project that arrived at unexpected or possibly even seemingly unhelpful conclusions — and that we be allowed to fail. As Dennett points out, without decades of work attempting to prove the McCulloch-Pitts hypothesis and other similar ideas, the currently evolving concepts of the complexity and plasticity of the human brain would not be possible. Every once in a while, it’s actually useful to consider what failure means. We’re awfully quick to assume or suggest that failure equates to a waste of resources, or even signifies some sort of personal and inherent weakness in those undertaking the task to begin with. “Good” scholars do not ever fail, we are told; “bad” scholars fail and do not receive jobs, promotions, tenure, publication, and so forth. If a particularly strong idea invites a decade of challenges from hundreds or thousands of academics, I’d like to think the results will be far more valuable and stimulating than operating on the assumption that the same idea is infallible and therefore should not be challenged. As an example, for me, Greenblatt’s recent book is utterly baffling in its assertions and seemingly slipshod approach to historical research; but if that makes it a “failure” in the eyes of some scholars, who find the book’s claims a compelling reason to investigate not only the critical apparatus which produced it but also the assertions and methodology it displays, then it just might be one of the more important books of the year after all. Let’s start thinking about having the audacity to fail — to explore, to experiment, to challenge, and to experience the rush of excitement that comes from realizing that a failed hypothesis has actually opened the door to new possibilities.