Scholarly Labour & the Fantasy of Self-Fulfillment

by Sean Hannan

The work that is currently being done on “emptiness” by the University of Chicago Divinity School’s alumnus of the year, John Corrigan, should provoke serious reflection in any student of religion. His recent chapter on the rhetoric of emptiness as applied to issues of the body raises a number of questions concerning the ways that American Christians, especially, have imbued the seemingly material terminology of “empty” and “full” bodies with much-more-than-material heft. Hitting upon issues ranging from asceticism to mysticism to eroticism, Corrigan offers up a sober yet suggestive selection of sources that incorporate notions of both “emptiness” and “fullness” into discussions of Christian praxis that helped shape American (and not just American) history.

Yet the most salient aspect of “emptiness,” as Corrigan explores it, might be its role in rhetorically re-casting the economic alienation of the labourer in religious (or perhaps pseudo-religious) terms. For someone who studies religion—who in almost every case also happens to be someone who works on religion—this is indeed where the rubber hits the road with the greatest frictional force. Quite often, the work of the student—especially, but not exclusively, the graduate student—is framed not as labour in its purest sense, but rather as a kind of devotional practice. To be sure, referring to the scholarly life as a Weberian “vocation” has a long history; but the devotional tones surrounding postgraduate work in the humanities, social sciences, and especially religious studies can at times reach a fever pitch that would make Max Weber’s ears bleed.

On Corrigan’s account, the religious (or, once again, perhaps pseudo-religious) valorization of work ‘as if for its own sake’ has a long and varied history in Christian discourse. Yet Corrigan’s concerns are primarily modern, and so that is where our focus shall stay. Take, for example, this summary of Thomas Carlyle’s appraisal of the situation in the nineteenth century:

“The latest Gospel in this world is, Know thy work and do it.” For Carlyle, “a man perfects himself by working. . . . The blessed glow of Labour in him, is it not a purifying fire, wherein all poison is burnt up, and of sour smoke itself there is made bright blessed flame! . . . All true work is sacred.” (66)

In the wake of Carlyle and the contemporary Christians he casts as valorizing work for work’s sake, some had reason to pause and consider whether or not such ‘pure work’ deserved to be treated as an ultimate concern. How could the soot-covered, bedraggled workforce of the Gilded Age count as a manifestation of the glory of God? Would the intense extremes of industrialization lay waste to any fading fantasies of the ‘moral value’ of work? These questions rang truest, no doubt, alongside incendiary words like “Pinkerton” and “Haymarket.”

The sobering truth behind such questions continues to resonate in the concerns of labour today. And while it resonates most resoundingly in the fields of manufacturing and material industry, it does not fall silent when we turn to intellectual labour. As students of religion who also work on religion, it is incumbent upon many of us to apply Corrigan’s questions about the ‘spiritualization’ of labour to our own situation. To do so is by no means to diminish the intensity of the industrial exploitation that was the hallmark of the Gilded Age. Rather, it is to suggest that a collective identification of so many of us as labourers can, if executed properly, carve out a new path forward for us as we attempt to understand the dialectic of emptiness and fulfillment in our own working lives.

So how did this rhetoric of “emptiness” and “fulfillment” play out in earlier phases of the moralization of work? Here Corrigan has much to contribute. He argues that “in a Christian ethics that made gradual inroads into the workplace, fulfillment in work comes from losing oneself in work.” (69) Another generation might have cast our daily duties as a form of subtle self-sacrifice, chipping away at our hedonism in the name of the greater good. But this newer ethos implied that work, far from being a voluntary self-emptying, was in fact the medium of fulfillment itself.

Leisure empties; labour fulfills. A less ancient sentiment could hardly be found. Yet it remains remarkably resonant today, especially with those of us who have chosen to labour upon the fertile fields of religion. Laborare est orare: “to work is to pray.” (70) This is closer to the operative maxim in our era. Could it be the case that cultured leisure (otium), rather than bustling busywork (neg-otium), might more closely model the ideal medium of research? Perhaps—but that is ancient logic, ill-suited to the debates of today.

So what are these ‘debates of today’? One of the most common asks whether or not academic work counts as “labour” in the fullest, most impossibly robust sense of the term. This is a question that is not alien to graduate students in the field of religious studies, because it is a question that pertains to graduate students of all stripes. All the recent headway made by organizations like Graduate Students United (GSU) at the University of Chicago stands as a testament to this fact.

The core of the question is this: does the fact that graduate students find more-than-economic fulfillment in their work negate the labour-value of that work? In other words: is their work a form of solipsistic self-fulfillment or a display of self-sacrifice in the name of society? Can it not be both? Refraining from offering up some sort of definitive conclusion here, we should at least admit that the dialectic of emptiness-and-fulfillment continues to wield force in ongoing debates about the meaning of student work as labour.

A similar debate has arisen closer to my current academic home in Canada. Just this year, our provincial government passed a bill that (to oversimplify) transformed vague ‘faculty associations’ into full unions with the right to strike. This was less an act of itinerant ideology than it was an attempt to bring our provincial system into legal harmony with certain judgments made by our federal Supreme Court. The result, however, has proven somewhat ideologically explosive.

To some, this is an act of socialist subversion, forcibly transmuting friendly faculty associations into fierce foes of our administrative partners. To others, this is an act of governmental overreach, throwing under-prepared proto-unions into a do-or-die battle with the very administrative bodies tasked with cutting their budgets. To a precious few, this is a fairly neutral bill aimed at bringing a provincial law into harmony with federal standards, however many side-effects it may bring along with it.

Refraining once again from offering any silver-bullet answers, we can at least say that the debate about the nature of academic work lives on. When “we,” as faculty, teach and research, are we not engaged in labour? When “we,” as graduate students, teach and research, are we not likewise engaged in labour? Does this labour, furthermore, take the shape of ascetic self-sacrifice or that of personal fulfillment? Is there no way out of this emptiness-fulfillment dialectic, after all?

The pessimist in us may want to argue that the economic landscape of the academy today is little more than a perverse imitation of the “Gospel of Prosperity” that Corrigan so helpfully categorizes for us in his manuscript. (81) If you ‘do well’—if you are ‘good,’ according to certain circumscribed categories—you will continue to do well. If you placate the deity, it will reward you—not later, but now(-ish). If you ‘do poorly’—again, usually according to deeply obscured metrics—you will continue to do poorly. And who have you, the academic worker, to blame for this situation? “O man, who art thou that repliest against God?” (Rom. 9:20)

We seem to have reached an impasse. If academic work is self-fulfillment, it is its own reward. How then is it “work,” in the socially meaningful sense? If academic work is self-sacrifice, it is perhaps deserving of compensation, but at the same time it loses the seemingly distinctive character of its intellectual vocation. The work of the scholar is either pure self-fulfillment or pure self-emptying. Is there no via media?

Perhaps there could be. There could be if we were willing to dispense with the notion that the emptiness-fulfillment dialectic is one that can be overcome only through individual effort. Time and time again, the current vagaries of the academic job market have proven that the myth of individual exceptionality must be put to rest. It is no longer a question of who “self-fulfills” or “self-empties” to a degree sufficient to deserve just compensation for their labour. Any dream of fulfillment we might still harbour resides not in the atomized unit of our own individual identity as a commodity on the market, but in the collective labour in which we engage together.

We all work on religion together; to most scholars of religion, this is uncontroversial. We all work in the humanities together; to most scholars in the humanities, this too is uncontroversial. Let us all work together to ensure that the following statement becomes just as uncontroversial: we all work together—full stop.

The Travails of Trying to Go Digital

by Sean Hannan

When I took up the position of Assistant Professor in the Humanities here at MacEwan University, my job description included a mandate to engage with the digital humanities. The nature of this engagement was open-ended, both delightfully and terrifyingly so. It could mean intimately interweaving cutting-edge technology into a research project. Or it could mean focusing on how best to supply students with the tools they’ll need to make sense of digital information for purposes both scholarly and economic.

Regardless of the shape it took, this engagement clearly had to have an effect in the classroom. Though scholars at my institution are keen to maintain active research profiles, our main purpose is and has always been to teach students. (It’s a radical mission, I know!) Since our teaching doesn’t take place in a vacuum, this means we have to account for the latest developments in how students go about learning in the first place. Given the prevalence of technology in pretty much everything we do these days, I shouldn’t have to say much more about how embedded student learning is in the world of websites, apps, and other modern miscellanea.

One mantra often heard from the mouths of digital humanists is that students best ‘learn by doing.’ The practical effect of this mantra is to turn most instructors’ attention in the direction of assignment design. If we can design assignments that encourage students to engage with digital resources of their own accord, then we can actually combine our own pedagogical goals with skills already taking shape for most students. Instead of just hauling out your laptop and slapping some PowerPoint slides up via a projector, in other words, you might even be able to get students to develop technological prowess by creating their own polished presentations (hopefully on a platform more adventurous than PowerPoint).

At the same time, as a former denizen of Swift Hall, I can’t help thinking that ‘learning by doing’ should not supplant learning by, well, thinking. We might even want to say that, in the greater scheme of things, thinking and doing are not so opposed. (This is another daring proposition, I know!) Many a Wednesday I stood in line in the Reynolds Club for my one-(American)-dollar milkshake, surrounded by students with shirts stating: “That’s all well and good in practice, but how does it work in theory?” That leaves a mark on your psyche. (Here I will keep silent regarding claims about where exactly it is that “fun comes to die.”)

The higher goal, then, would be to approach assignment design in a way that honours both the practical wisdom of ‘learning by doing’ and the stodgy-sounding-but-still-salutary ‘learning by thinking.’ While still in Hyde Park, however, I knew that I had to take the initiative if I wanted to get more involved with the former side of things. If left to my own devices, I might have just fashioned a shelter in some long-forgotten corner of the Reg and kept reading the Corpus Christianorum Series Latina editions of Augustine until I transcended any sense of time and space. Perhaps it would eventually become necessary for a search-and-rescue operation to come find me, beard down to the floor, subsisting mostly off of Ex Libris coffee of varying quality (depending on the year in which I got lost).

Luckily, things didn’t end up that way for me. Instead, I went out and tried to procure odd jobs that would help me better understand how to wed twenty-first century tech to the humanities in ways that weren’t utterly lame. In the summer of 2015, I was one of a stout cohort of Divinity School students to contribute to the University’s 125th Anniversary departmental histories project. While some of us were chosen for more exploratory missions (like engaging with molecular engineering), I was given the imposing task of helping to document the long history of the Divinity School itself.

Given that the Divinity School is, by most estimates, about a quarter of a century older than the University of Chicago itself, it should suffice to say that there was much to do. The job involved digitizing old documents (yellowing pages of course programs from 1895), building up databases (of alumni and faculty), and finally contributing to the creation of visual products. That last bit was ultimately completed by people far more skilled than I, of course.

Aiming to keep my momentum going after this summer job, I spent a good chunk of the 2015-2016 academic year working as an Institutional Data Intern with UChicagoGrad and the Office of the Executive Vice President. While the 125th Fellowship job taught me a lot about how to put the ‘digital’ in ‘digital humanities,’ this internship had me not just building up databases, but figuring out how to subject them to data-tuning and then actually put them to work strategically. This was less about connecting digital resources to traditional humanities research and more about burrowing as deep into the digital as my humanities-addled brain could tolerate.

Once I re-emerged from the data-tunnels to the bright, Platonic surface of humanistic truth-seeking, I realized that I would have to figure out how best to put all of this to work in service of both my research and my teaching. That’s what I’ve been trying to do here at MacEwan over the past few months. I began, like all responsible Canadians, with a grand and reverent caution. In my second-year Medieval European History course, as an initial trial balloon, I slowly transitioned my students from traditional reading responses to digital mapping assignments.

For ease of access, I had them use Google’s MyMaps to construct historical maps of twenty distinct locations relevant to our study. Of course, given that Google will instantly locate any current site, I mostly used former place-names, such as Constantinople for Istanbul or Königsberg for Kaliningrad. (The latter was surprisingly successful at stumping students!) MyMaps, while insanely simplistic compared to full-fledged GIS platforms or Stanford’s Orbis, does let you play around a little bit. You can add in images and textual descriptions for each locale, while also colour-coding location markers based on relevant criteria (e.g., you can make all of the members of the League of Cambrai purple—just because!).
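
For anyone curious about the mechanics, MyMaps can also import a spreadsheet and geocode a designated column, which is handy if an instructor wants to build a master list or answer key in advance. Here is a minimal sketch in Python (standard library only); the file name, column headings, and the third example row are my own illustrative choices, not part of the actual assignment.

```python
import csv

# Historical/modern place-name pairs of the sort used in the assignment.
# (Only the first two pairs appear in this post; the third is illustrative.)
places = [
    ("Constantinople", "Istanbul", "Byzantine capital on the Bosporus"),
    ("Königsberg", "Kaliningrad", "Prussian city, renamed in 1946"),
    ("Aquisgranum", "Aachen", "Charlemagne's favoured residence"),
]

# On import, MyMaps can geocode the modern names (which Google recognizes),
# while the historical names survive as marker labels.
with open("historical_places.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Historical Name", "Modern Name", "Description"])
    writer.writerows(places)
```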

This time around, however, I decided just to stick with the basics and see how it went. Perhaps students would struggle with the basic elements of accessing MyMaps, navigating its interface, or sending their finished products along to me (as a link rather than a file). In the end, all of these concerns were proved baseless. Prepare for a shocking revelation: students who were mostly born in the late ’90s have little trouble making use of Google programs like this. The most common bit of feedback I got was that the assignment was simply too easy.

To such feedback I say: challenge made; challenge accepted. For my History 101 class this semester, which covers everything ever up to 1500 CE, I will be rolling out not one, not two, but (wait for it) three distinct assignments based on digital tools. The first will be ‘Mapping 2.0,’ which will now be certain to award points on the basis of aesthetic presentation (images, descriptions, and whatever else they can come up with). The second, haunted by the spectre of my 125th Anniversary fellowship, will be a polished timeline constructed using one of the multiple tools made for just such a purpose; I’m currently leaning toward Timeline JS.
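
For those weighing the options: Timeline JS is usually driven by a Google Sheets template, but it can also be fed a JSON file. The sketch below is a guess at a minimal configuration, with field names following my reading of the published Knight Lab JSON format (check it against their documentation before relying on it); the two events are illustrative placeholders, not the actual assignment.

```python
import json

# Each Timeline JS event needs at least a start_date and some display text.
events = [
    {
        "start_date": {"year": 330},
        "text": {
            "headline": "Dedication of Constantinople",
            "text": "Constantine consecrates his new capital on the Bosporus.",
        },
    },
    {
        "start_date": {"year": 800},
        "text": {
            "headline": "Coronation of Charlemagne",
            "text": "Crowned emperor by Pope Leo III on Christmas Day.",
        },
    },
]

timeline = {
    "title": {"text": {"headline": "History 101: Everything to 1500 CE"}},
    "events": events,
}

with open("timeline.json", "w", encoding="utf-8") as f:
    json.dump(timeline, f, indent=2, ensure_ascii=False)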

The third and final project is, in a way, reminiscent of my institutional data internship, in that I will try to get a bit more technical and have students run analysis on selected textual fragments. Here, however, we run into a potential snag. In a pre-modern history course that is based on evidence provided by a textbook, students will encounter primary sources only by way of translation. An esteemed colleague of mine at Washington State University pointed out the obvious folly of having students run analysis on translated text. Without doubt, NVivo and Voyant can be valuable tools, and yet—might their value be diminished if we apply them not to Virgil but to some schlub’s rendering of his magisterial Latin into crudely modern English? (Apologies to my fellow schlubs out there.)
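
To make that worry concrete, consider the kind of raw term-frequency counting that tools like Voyant perform under the hood. Below is a minimal sketch in Python (standard library only); the two English lines are my own loose, hypothetical paraphrases of the Aeneid’s famous opening, not published translations.

```python
import re
from collections import Counter

def word_frequencies(text, top_n=5):
    """Crude Voyant-style term counting: lowercase, strip punctuation, tally."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words).most_common(top_n)

# Two loose (and hypothetical) renderings of the same Latin line.
translation_a = "I sing of arms and of the man who first came from the coasts of Troy"
translation_b = "Arms and the man I sing, who first from Troy's shores came forth"

# The 'results' differ even though the underlying source is identical:
# the analysis measures the translator's diction, not Virgil's.
print(word_frequencies(translation_a))
print(word_frequencies(translation_b))
```

Identical Latin, divergent ‘data.’ If nothing else, having students run two renderings side by side turns the translation problem itself into the object of analysis.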

Translation issues are not the only problems that plague the aspiring digital humanist. Sometimes there are subterranean roadblocks that emerge as if from nowhere. In my senior-level seminar this year, for example, I designed an entire assignment around the use of the Augustine’s Confessions app, a fantastic piece of software developed by a team at Villanova University.[1] Students would be encouraged to consult the app as they worked through the Confessions on their own, then write a review of the app that reflected on its pedagogical potential. At the end of the semester, we would all join together to present our findings at MacEwan’s common undergraduate research day, or CURD. (Note: we don’t actually call it that!)

Alas, after teaching our first session, as I checked my email one last time before bed, I saw a message from a student indicating trouble downloading the app. After rooting around a bit, I discovered the core of the issue: the app, being American in origin, could not be purchased from a Canadian account. Rest assured: the obvious workarounds popping up in your head right now have been tried and found wanting. Further workarounds are being sought as we speak. We have our top people working on it. In the meantime, we must wallow in the unexpected awareness that, even in this age of global-digital quasi-bliss, good ol’ national borders can still get in the way of a good assignment.

Postscript: After letting Noel Dolan and the rest of the Confessions app team know about the issue, they fixed everything immediately by adding it to the Canadian app store, thereby cementing Villanova’s status as my favourite school east of Lake Michigan.

[1] http://www1.villanova.edu/villanova/unit/MobileComputing/VillanovaMobileApps/augustine-s-confessions.html

Twilight of the Textbooks: Smashing Idols through Classroom Dialogue

Thinking back upon the halcyon days of my graduate study in Hyde Park, I dimly recall a formative remark made at one of our always-well-organized (and characteristically well-attended) Craft of Teaching meetings. Alright, in full disclosure: I only received my doctoral degree from the Divinity School this summer, and so I remember those pedagogical sessions better than you might think. Still, permit me to paraphrase rather than cite from eidetic memory.

One or two of our august faculty members had decided to join this particular meeting and grace us with the wisdom they had accumulated over the years. After some brief but helpful opening statements, most of the session consisted of a rather lively question period. We gave the British parliament a run for their money that day. The question that stands out to me still ran as follows: “Since graduate school trains us to focus as intently as possible on increasingly precise topics, how should we go about preparing to teach a course as broad as, say, Introduction to World Religions?”

The authoritative response came down swiftly: “Well, I would have serious reservations about the intellectual integrity and pedagogical purpose of such a course.” (Again, I’m paraphrasing; real speech too often lacks the aesthetic allure of alliteration.) The problem with this response is that it is not really much of a response. It informs the questioner of the respondent’s views on poor syllabus design, I suppose, but it doesn’t help the questioner out of their quandary. As new instructors, we often find ourselves put in the position of having to teach courses that are either pre-made or at least heavily conditioned by departmental expectations.

This is especially so for those who labour as adjuncts or under term contracts, but it is also broadly true for early-career academics of all stripes. It takes time to build up the institutional capital needed to reshape the curriculum (and hopefully not just in one’s own image). To walk in, pedagogical guns a-blazin’, and tell the sheriff how things are going to run now that the new kid’s in town—well, this is not usually advised as best practice by our career-advancement counsellors.

The most frustrating thing about that response, however, was not that it dodged the question and thereby missed the point. Far more frustrating was the fact that the response struck many of us in the room as correct. For a good number of doctoral students, especially those trained rigorously and exhaustively in their chosen fields, there’s much anxiety to be found in the transition from a firm grasp of a single topic to a diffuse survey of innumerable topics.

All of our alarm bells go off when we’re told we have to teach our classes on the basis of notions like “world history” or “the West” or, most alarming of all, “religion.” I remember too that, while I was still finishing up my dissertation, I went to interview for an adjuncting gig at another Chicago-area school. The job was to teach something like the “Intro to World Religion” bogeyman mentioned above. As I prepared for the interview, I racked my brain trying to go back over every critique I had ever read about the universalizing idea of “religion,” the integrity of the field of “religious studies,” and the reflective questions we need to ask if we are going to try to speak of global traditions that span countless times and places.

This was exactly the wrong approach to take, at least if my goal was to land the gig. Let it suffice to say that I did not. Shortly after I arrived, the interviewer asked me point-blank: “So, which religions would you be teaching?” Slightly (or not-so-slightly) stunned, I managed to stammer out some of the overcooked reflections about the problem of ‘religion as such’ I had come up with in advance. The interviewer stopped me mid-sentence, reiterating the request more assertively: “Just tell me which ones you’re gonna teach.”

In the end, it turned out, the interviewer just wanted me to provide a Wikipedia-style listing of so-called ‘world religions’: Judaism, Christianity, Islam, Hinduism, Buddhism… Maybe Daoism or Shinto or Confucianism, if they’re lucky. Jainism would probably have been a bridge too far for this particular interviewer. Regardless, the underlying message bubbled to the top fairly quickly. Again, to paraphrase: “Just teach ‘em some religions and get out.”

Luckily, the position I now find myself in is much more welcoming than that. Still, my teaching duties demand from me a breadth that can remain a bit startling at times. As a member of a Humanities Department, I am surrounded by colleagues skilled in History, Classics, Philosophy, and a wide swath of languages. My degree says History of Christianity, but this semester I’m responsible for teaching everything from ancient Rome up to Shelley’s Frankenstein. Next year, I’ll be taking on my first proper ‘world history’ course, covering everything (well, not everything) from the time the aliens built their first pyramid (just kidding) up until the year 1500 CE.

Sticking to my grad-student guns may no longer cut it. One way or another, I will be teaching these courses. In some (not all) cases, I will be free to select a textbook from a pre-approved list of possibilities. Within an institutional context, of course, this freedom will not be absolute. Given these constraints, it is on me as a teacher to figure out how to do justice to the inherent diversity of these historical periods. And I’ll have to do so in a way that’s intellectually responsible, yet also accessible to the increasingly large number of students staring back at me.

In other words: ‘dialogue in the classroom’ isn’t just a nice aspiration. It’s a necessity. Rather than letting the textbook talk for itself (as if it could), we instructors have to make sure that we are speaking not just alongside the textbook, but oftentimes against it. Now, I don’t mean to say that every textbook is trash. Writing a textbook seems like an unpleasant and potentially unrewarding task, so I’m not out to vilify the good people who actually sit down and write these things. But the level of generality at which most textbooks operate lends itself to vagueness bordering on misinformation. Sometimes the Big Picture, however useful as an introductory image, risks turning into a dangerous idol.

Dialogue in the classroom is how we smash that idol or, at the very least, provide our students with a number of diverse idols which can then hash it out in some kind of apocalyptic Twilight of the Idols, culminating in Ragnarok-like fashion at the end of the semester. To make this call for dialogue more substantive, let me suggest a few concrete steps we can take to keep ourselves away from the pitfalls plaguing the uncritically taught survey course.

There are countless ways we could go about framing different kinds of dialogue, so I’ll limit myself to just three: critical dialogue, digital dialogue, and political dialogue. Critical dialogue means not being afraid to call out the textbook. Does your medieval history textbook, despite aiming for pluralism, put the contributions of Jews, Muslims, and women in a secondary place? Mine does! If yours does too, say so. Let the students know. Some of them might be picking up on that already, but not everyone will be.

Digital dialogue, meanwhile, is a tricky one. It is tough to go there without coming off as vapid (“Digital Humanities changes everything!”) or snide (“What does digital humanities even mean, anyway?”). But the secret strength of digital resources is that they allow students who otherwise wouldn’t be able to join the dialogue to do so. Students who learn visually can excel in online mapping assignments; those who struggle to speak in class can join the conversation in other ways. It doesn’t always have to be old-school, stand-and-deliver pedagogy.

This broadening of accessibility brings me to political dialogue. Given recent events, this kind of dialogue should be as intimidatingly relevant as ever. But by ‘political dialogue’ I don’t necessarily mean explicit debates about policy, however necessary those might be. Instead, I’d like to draw a parallel between dialogue in the public sphere and in-class discussion. In both cases, there is an increasing concern that we are losing touch with one another. Our online echo chambers echo loudly with the reminder that we are stuck in those very echo chambers. The same might be said for the academic echo chambers many of us inhabit while in grad school. As you finish up the dissertation, you might find yourself talking (mostly in your head) to people who know a lot of what you’re going to say before you even say it. That is not at all the case when you have fifty minutes to teach a room full of teenagers about, say, the Hundred Years’ War. Political dialogue in the classroom, then, might also have to mean fine-tuning your approach to fit the backgrounds and the vocabularies of your students. Not everyone had the privilege of inhabiting the echo chamber you just spent seven-plus years exhaustively exploring.

Please indulge me as I close with one more anecdote or, in this case, an anecdote within an anecdote. At lunch one fine Wednesday, I heard the historian of religion Bruce Lincoln recount his memories of an ongoing debate between two of his mentors, little-known scholars by the names of Mircea Eliade and Jonathan Z. Smith. This dispute, Lincoln told us, revolved around the question of which came first in cosmology and cosmogony: chaos or order. In Lincoln’s telling, his own academic formation took shape in the wake of hearing his two teachers engage in substantive debate about an intellectually precise question.

Admittedly, it is hard to model this kind of precision in large survey classes. But that is precisely why we need to encourage critical dialogue within these classes. Even (or especially) when you’re giving students the Big Picture, you shouldn’t let them think that learning stops there. As they turn to sell their textbooks back to the campus store, as so many do, let them see this not only as a financially necessary concession, but also as an emancipatory act of idol-smashing. At the very least, it’ll sound cooler that way.