Over tea one night, my friend James was relating an experiment he’d heard of in which a research scientist decided to find out if luck was genetic. The scientist polled groups of people, asking them who the luckiest person they knew was. After doing this for some time, he managed to narrow down a group of individuals that others considered in possession of a higher-than-normal luck ratio. Using this same process to find out who these people considered blessed, the scientist eventually ended up with a group of families that were inordinately ‘lucky’.
After thorough assessment of all the individuals involved, the scientist discovered that luck is inherited – though not on a genetic level, as such. Luck, he deduced, is about attitude. It’s about the willingness to take chances on yourself, to lean into your discomfort, and to stay positive. Any inherited tendency springs from learned behavior. A child observes a parent being willing to try new things, so the child adopts the same approach to life. Since opportunity is created and not gifted, those willing to take chances find themselves with more opportunities to mobilize upward.
This tied in closely with a lecture given by Gayatri Chakravorty Spivak I attended on a cold night in 2007 in Glasgow. Arguing for more attention and importance to be given to the humanities (she includes philosophy and literature under the same name), Spivak made a strong claim that the humanities train and enrich the mind, that they exercise the mind’s ability to think differently and create space for new opportunities. The humanities ‘support imaginative thinking’, and ‘give the mind its suppleness’ – a suppleness that characterizes civilization.
More than anything Spivak suggested, the idea that the humanities should supplement globalization and cosmopolitanism resonated the most with me. That the humanities resist globalization, that they cannot be condensed and reshaped to present one moral outlook, one way of being, is likely the greatest compliment to the category I’ve heard for some time. Of course, all artists with any political agenda know this. But in Spivak’s simple articulation of this fact, it suddenly occurred to me why humanities funding may so often be the first to be cut in national school systems.
The humanities teach individuals to think fully and independently. When we give over to a story, entertain a philosophical theory, or let a musical composition present new arrangements, our mind not only experiences these new perspectives, but the brain physically begins to create new neural pathways to encourage further exploration. Engaging with the humanities affects the human mind on both a theoretical and a physical level. Maybe this is why humanities funding is cut; why art is such a threat. Governments cannot survive, cannot manipulate and control the masses, if the masses are composed of free thinkers. The push on a governmental level for prosthetic technology is as much about creating one mind and control as it is about progress… But I digress; this was not really what I set out to write about in this post.
What had me thinking about Spivak’s presentation seven years ago is how her call-to-action might be being answered in a way that was (possibly) unpredictable at the time – through the Digital Humanities.
This thought comes in the wake of my foray into practical applications in network analysis and mapping in general. I noted in a previous post that I could see the possibilities for using these new technologies, and that there is room for pedagogical development as well as for research (in my world, that possibility is slightly hindered by my inability to get certain programs to run properly…but that’s a subject for another time). It seems like Spivak’s claim is being articulated well in the DH realm in the sense that these technological tools are helping us to see the world differently.
Even more so, though, the emphasis on the ‘digital’ – the fact that the communities who construct these innovations often operate on an international level while maintaining a distinct community of practice – might mean that these technological developments are beginning to support cosmopolitanism… Maybe.
It also might mean that the humanities are entering a new paradigm. I think of the luck study noted before. How might the learned behaviors of younger scholars from mentors engaged in DH influence scholarship of the future? Will these new scholars be predisposed to finding connections that we might not have seen through our conventional research approaches? Will our inclusion of these new technologies form neural pathways that significantly alter the way we find and interpret new information on old topics?
It’s exciting, to be sure. But a part of me remains cautious and slightly skeptical, as well. Mostly, my skepticism comes from an understanding of just how easy it is to see things that are not really there.
There has been a lot of talk in various circles about “Big Data” (particularly at TED). In fact, one TED talk that addresses my skepticism in the most entertaining way is “What we learned from 5 million books” by Jean-Baptiste Michel and Erez Lieberman Aiden, which includes many wonderful Google N-gram graphs.
The graphs they present are comical and engaging. At first, it’s hard not to get swept up in the excitement of seeing the presentation of information in a new form. It’s this excitement, I think, that gets in the way of our seeing what isn’t there. Whether we want to admit it or not, we have preconceived ideas about what will be found in research, and about what the data mean once they’ve been processed. We take a very unscientific approach to analyzing what is, in effect, a scientific method.
What the new technologies have not managed to implement, at least in my somewhat limited exposure, is the ability to manipulate variables. I can run an N-gram on a place name, for example. But I am unable to restrict the search by where that place name occurs – meaning I can’t omit the front matter from the search, so the data include text that was merely printed in that place. Nor can I separate occurrences of Cambridge near Boston from Cambridge, England where the disambiguating “Boston” or “England” does not appear. It also means that I cannot include data that is not in the public domain.
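The front-matter problem is easy to see in miniature. The tiny “corpus”, its field names, and the filtering toggle below are all hypothetical – a minimal sketch of how an imprint line can silently inflate a count for a place name if the search cannot exclude it:

```python
from collections import Counter

def ngrams(tokens, n=2):
    """Yield successive n-grams (as tuples) from a list of tokens."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

# Hypothetical miniature corpus: each text carries its own front matter,
# which a naive search counts right alongside the body text.
texts = [
    {"front_matter": "printed in cambridge england",
     "body": "the scholars of cambridge england debated at length"},
    {"front_matter": "printed in boston",
     "body": "she moved to cambridge to study"},
]

def count_bigrams(texts, include_front_matter=True):
    counts = Counter()
    for t in texts:
        tokens = []
        if include_front_matter:
            tokens += t["front_matter"].split()
        tokens += t["body"].split()
        counts.update(ngrams(tokens, 2))
    return counts

naive = count_bigrams(texts, include_front_matter=True)
filtered = count_bigrams(texts, include_front_matter=False)

print(naive[("cambridge", "england")])     # 2 – the imprint inflates the count
print(filtered[("cambridge", "england")])  # 1 – body text only
```

Half of the naive count comes from a printer’s imprint, not from anyone writing about the place – which is exactly the kind of variable the hosted N-gram tools don’t let us control for.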
This isn’t to say that the tools are not useful – just that we are not yet conditioned as researchers to think about the data scientifically: what they include and what they omit. Case in point: an assignment a professor recently gave me emphasizes running an N-gram on a particular term. There was an immense amount of excitement on the professor’s part about this as a research tool, but no caveat about the variables that need to be taken into account when running such a test.
I can run this N-gram – and to be sure, I will – and I could draw many conclusions about what it means. But would these conclusions be even remotely accurate? (I don’t think so.) And what if this were an actual research project that I intended to publish? What if someone else accepted my interpretation as fact and built on it further?
The Digital Humanities is in a wonderful position right now of being able to apply scientific reasoning methods to humanities subjects. But those laying the groundwork for future scholars should be mindful that taking a scientific approach to analyzing information requires the full implementation of the scientific method – not just using the tools, but interpreting the results without bias. We need to know what is in, what is out, and then look at the data to see if they tell us anything. We need to use these as thinking tools, understanding that their use might simply be to facilitate our thinking on a given topic rather than to produce a specific thing. The end result, as in science, will likely be more questions, not answers. And that is okay. That is where the novelty of DH lies, I think.