Exactly 8 months ago, I was sitting in an office with my boss, the Dean of Academic Research at a CUNY institution, looking at a network map. We were both in awe. The College’s librarian had provided us with two PDFs — one with a white background, one with black — of the same dataset mapped in vivid colors, swirling lines, and weighted names. The dataset was a list of faculty members and how frequently their work had been cited by other scholars. It was gorgeous. Exciting (we considered using it for a project’s book cover). And it meant nothing.
This visualization ignored the reality that some faculty members had produced little but were cited often; that some were the only active scholars in a corner of their discipline; that some were immensely productive but their work was diluted in a field of many. It was with this experience in mind that I approached my Gephi project.
Anyone who spends ten minutes with me realizes pretty quickly that my research interests revolve around rhetoric in science, particularly scientific controversies. This year’s PhD work in particular has been focused somewhat narrowly on the ozone controversy that started in the summer of 1974 and has only recently found some resolution. So when it was time for me to figure out what my intro dataset for Gephi was going to revolve around, I figured I would draw on the project that I am currently immersed in: the uptake of scientific genres and how they translate into the public sphere.
So in approaching my Gephi project, I drew on a larger research collection of texts that I’ve been gathering in Omeka. I knew that Molina and Rowland’s text (“Stratospheric Sink for Chlorofluoromethanes,” Nature 1974) was the seminal publication in ozone depletion research, and indeed started a new discipline in science: atmospheric chemistry. What I didn’t know was the rate at which this research was taken up, first by other scientists and second by the popular media. And I was curious who was citing whom in this uptake process. So, I mapped it.
Taking my collection of texts, I read each to see what the referents were. Sullivan (writing for the New York Times), for example, referred to Molina and Rowland, Cicerone et al., and McElroy and Wofsy in his piece. He also referred to an interview with the Manufacturing Chemists Association, though a copy of that interview has yet to be found. Aerosol Age, a trade journal for the aerosol industry, referred not only to Molina and Rowland, but to the media reports that had been circulating and the Natural Resources Defense Council’s press conference and appeal to the Environmental Protection Agency, Food and Drug Administration, and Consumer Product Safety Commission to enact a ban on CFC propellants within three years.
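The citation relationships above boil down to two plain tables: one of texts (nodes) and one of who-cites-whom pairs (edges). A minimal sketch of writing those tables as CSV files that Gephi’s spreadsheet importer can read (the Id values here are hypothetical shorthand I made up; the column names Id/Label and Source/Target are the ones the importer recognizes; the entries repeat the Sullivan example from the text):

```python
import csv

# Each text in the collection becomes a node; a citation from one
# text to another becomes a directed edge. Names drawn from the
# Sullivan example above; the short Ids are made up for illustration.
nodes = [
    ("sullivan_nyt", "Sullivan (New York Times)"),
    ("molina_rowland_1974", "Molina and Rowland (Nature, 1974)"),
    ("cicerone", "Cicerone et al."),
    ("mcelroy_wofsy", "McElroy and Wofsy"),
]

edges = [  # (citing text, cited text)
    ("sullivan_nyt", "molina_rowland_1974"),
    ("sullivan_nyt", "cicerone"),
    ("sullivan_nyt", "mcelroy_wofsy"),
]

with open("nodes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id", "Label"])  # headers Gephi expects for nodes
    writer.writerows(nodes)

with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])  # headers Gephi expects for edges
    writer.writerows(edges)
```

Keeping the dataset in this form means the map is reproducible: the same two files can be re-imported into a fresh Gephi workspace at any time.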
On and on I went with my dataset, putting texts into Gephi as nodes, making edges whenever there were connections, and here is (partly) what I got:
I say “partly” because, for what I think is the fourth time since I started playing with the software, it crashed on me — and, unfortunately, at a time when I had nearly finished uploading all the texts I have gathered to date. Even though I was saving as I went, none of my changes were preserved. So the above network map shows only a part of the collection. STILL — it does tell me something. It tells me who was taken up as a credible and accessible source at the time, and who was not (because some of these texts were published in the same week).
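One lesson from the crashes is that the edge list, not the Gephi file, is the real dataset: if it lives in plain text, the graph can always be rebuilt. And some simple questions, like how often each text is cited within the collection, need nothing beyond the standard library. A hedged sketch, with edge pairs repeating the examples given earlier in this post:

```python
from collections import Counter

# Each pair is (citing text, cited text), drawn from the examples above.
edges = [
    ("Sullivan (NYT)", "Molina and Rowland"),
    ("Sullivan (NYT)", "Cicerone et al."),
    ("Sullivan (NYT)", "McElroy and Wofsy"),
    ("Aerosol Age", "Molina and Rowland"),
    ("Aerosol Age", "NRDC press conference"),
]

# In-degree: how many texts in the collection cite each source.
cited = Counter(target for _, target in edges)
for text, count in cited.most_common():
    print(f"{text}: cited by {count} text(s) in the collection")
```

Even on this toy slice, the tally makes the point from the opening anecdote: a raw count (Molina and Rowland cited twice) says nothing on its own about which chains of uptake succeeded or failed.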
So, while in previous posts I have expressed frustration with some of the DH software out there, I was excited to see how a scientific approach works with this tool. If you ask the right software the right question, you just might get something interesting. With this small dataset, I was able to see things that I hadn’t picked up on when building my collection or reading the materials. For a start, McElroy and Wofsy, though cited as corroborating the chemical equations Molina and Rowland put forth and noted as having research of their own forthcoming in September 1974, never actually published anything, making them a potential example of a failed uptake chain.
Other instances showed very clear uptake chains: from Lovelock to Molina and Rowland, to the NRDC and the State of Oregon, to the National Academy of Sciences, and eventually (though not shown here) to the Montreal Protocol. While this is only one chain among many, the network map has given me a clear area for further research. Since I’m interested in genre uptake and translation, I now have a clear map of where that translation occurred and where it didn’t, saving me time, effort, and most certainly a lot of frustration.