My Semester Made Visible

Last week in RCDH, Collin challenged us to visualize our semester. He left the form and content up to us. As I tend to do, I thought it would be interesting to approach it at the level of composites, allowing me to examine different intensities through different lenses.

From Tim Baker and Chris Shier’s gifmelter, I started with a gif that became an inside joke between Jason, Lindsey, and me after I used it as a response in a course last semester. When the gif’s constraints (its rectangle, its perception based on timed loops) are broken, the image can be viewed differently. I think this particular iteration takes on new meaning at this scale (particularly poignant in its ability to be broken apart, and pretty representative of this semester):



A significant event for me this semester was presenting (for the first time) at 4Cs. The panel I presented on was well received, but it also projected outward: these are ideas that are very much unsettled, active. Threads from this text network have been pulled, followed, and have connected me to other people/texts/ideas.


Another significant event this semester was participating in THATCamp CNY. While I didn’t showcase/workshop anything, I was party to interesting tools, conversations, fruit trays, and distributed DH connections across the campus. I wanted to do something with my tweets to reconstruct the events of those two days and to account for the retweets and favorites that connected my own tweets and the tweets I was drawn to saving/responding to. This is rather messy, but it’s an attempt at a sort of network (this covers only the first of the two days and chronicles the movement of my tweets plus tweets I was mentioned in):





  • solid orange line: quoting speaker at THATCamp
  • solid black line: RT from someone in attendance
  • solid green: response to someone in attendance
  • solid purple: mentioned by someone in attendance (but not present)
  • dotted blue: favorited by someone not in attendance
  • long dashed black: favorited and RT by someone in attendance
  • dotted black: favorited by someone in attendance
  • black names: people present
  • blue names: people distant
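The legend above is essentially a typed edge list: each tweet interaction is an edge whose style encodes its type. A minimal sketch of how such a network could be represented (all names and interactions below are made up for illustration, not my actual tweets):

```python
# Toy sketch of the typed tweet network behind the legend: edges are
# (source, target, interaction) triples, and the legend's line styles
# correspond to the interaction types.
edges = [
    ("me", "speaker", "quote"),        # solid orange: quoting a speaker
    ("attendee_a", "me", "retweet"),   # solid black: RT from an attendee
    ("me", "attendee_b", "response"),  # solid green: response to an attendee
    ("distant_c", "me", "favorite"),   # dotted blue: favorite from afar
]

# Group the network by interaction type, as the legend does.
by_type = {}
for src, dst, kind in edges:
    by_type.setdefault(kind, []).append((src, dst))

print(by_type["retweet"])
```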

Out of 17 tweets from those two days, 12 were tweets I retweeted or favorited. It’s debatable how to measure the furthest reaching; the picture of @ahhitt and me with Otto the Orange had the most favorites and left the central NY area, but the response/comment I made to @ahhitt that evoked @s2ceball might be the furthest reaching, if she is still in Oslo on her Fulbright…

Not pictured: Instagram foods and activities and other odd things; readings; notebooks and doodles; calendar of work and due dates; schedule of sleeping/waking; the starts and stops of an exercise routine; how many/little miles I walked each day; the amount of muscle rub applied to my “computer neck” on a daily basis; how many times I responded “[sigh] goooooood” when asked how I was doing vs. some other response.


This week in RCDH, we are discussing Matthew L. Jockers’ Macroanalysis. My role is that of catalyst, so I will work to synthesize the posts of my peers.

Jason appreciated having to read methodology more closely, explaining that in (humanities?) scholarship, method is usually not as developed or attention-getting as the arguments and conclusions being put forth. He also remarked that he respected Jockers’ balanced approach to making a case for more distant reading methods without diminishing more commonly used close methods; as Jason explains, “he’s advocating for scholars to use all of the available tools at their disposal”. Jason also found himself thinking about what rhetoric can bring to DH instead of the other way around, which he admits is how he has been thinking about our readings: DH makes him think of the tools in these methods, while rhetoric makes him aware of the choices that go into creating, using, and analyzing these tools and their use. Jason ends by raising questions about the difference in what can be gained from distant reading a corpus of texts versus a single text, ultimately wondering what disservice we do to ourselves by seeing them as situated in different camps, caring for different matters.

Romeo appreciates Jockers’ readability and balanced concern for close and distant methods, which he highlights through Jockers’ definition of macroanalysis—that moving between micro and macro scales is most effective because these approaches inform each other. Romeo finds Jockers’ rationale for macroanalysis compelling, explaining that “He argues that macroanalysis is just another means for evidence gathering and that to embrace new approaches and methodologies to give way to new possibilities of analysis.” With attention to the size of the information/texts one is working with, Romeo condenses Jockers succinctly in describing how a macroanalytic (and computer-assisted) approach can reveal information a researcher would otherwise miss (beyond unassisted human ability).

Lindsey, like Romeo and Jason, appreciated how accessible and interesting Jockers’ writing about DH methods is, even seeing it as a text not just about macroanalysis but about caring for DH methods more broadly. She was struck by thinking about research methods; more specifically, she realized she has been approaching our DH texts from a composition frame of reference, which surprises her because she is more trained in rhetoric. She is now wondering how DH might intersect with her interests in visual analyses of bodies, performances, and sexuality, and what she might take from our course conversations “to conceive of a project that conducts a macro analysis of the rhetoric of our visual culture”. What she is now thinking about is what DH tools and methods there might be to study the visual—“how can a macro analysis account for the rhetorical study cultural artifacts?” Are DH methods limited to alphanumeric texts?

Despite the different points of engagement with Jockers’ text, it seems like my peers are engaging with methodology/methods more deeply, moving from reading to understand how they are done to wondering what they might do with them in their different areas of interest in rhetoric and composition—perhaps a move, too, from wondering what is possible/potential to imagining what is possible/potential as DH work.

and how

In attempting to grasp topic modeling, I found it helpful to cluster together the readings from Andrew Goldstone and Ted Underwood, Megan R. Brett’s introduction to topic modeling, and the reviews/responses from Andrew Perrin and Laura Nelson to the Poetics special issue on topic modeling. My understanding of topic modeling is that, within a large corpus of texts (huge, like 1000+), TM tools mine the texts by grouping words across the corpus into topics, or patterns of co-occurring words (a relationship of similarity). These topics are then examined by the researcher (who must know something about the corpus to be able to understand the topics found) and rendered in a fitting visualization that makes these topic relationships visible.
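To make that grouping idea concrete for myself, here is a toy sketch of the co-occurrence intuition behind topic modeling—emphatically not real LDA (actual TM tools like MALLET fit a probabilistic model over thousands of documents); this just counts which word pairs appear together in a handful of made-up mini-documents:

```python
from collections import Counter
from itertools import combinations

# Toy illustration of the co-occurrence idea behind topic modeling.
# Real tools fit probabilistic models; this sketch only counts which
# word pairs appear together in the same (tiny, invented) document.
docs = [
    "archive method corpus pattern",
    "corpus pattern topic model",
    "archive library collection context",
    "topic model corpus pattern",
]

pair_counts = Counter()
for doc in docs:
    words = sorted(set(doc.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# The most frequent pairs hint at "topics": clusters of co-occurring words.
top = pair_counts.most_common(3)
print(top)
```

Here the pair (“corpus”, “pattern”) surfaces as the strongest cluster because it recurs across documents—the same logic, at vastly larger scale and with actual statistics, is what lets a researcher read a topic as a pattern of co-occurring words.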

As I was reading, I noticed my questions seemed to echo questions I have had since I started reading about distant reading, text mining, and visualization a year ago. While I think my technical understanding of these methods is becoming more robust through engagement with work using such methods (and showing how the methods were used), the how is still obscure to me. I pulled quotes from these pieces (which are in ways at odds with one another in how they see TM as useful) that are helping me with the how:

Goldstone and Underwood: “The strictly linguistic character of this technique is a limitation as well as a strength: it’s not designed to reveal motivation or conflict” + “This technique can reveal shifts of emphasis that are more gradual and less conscious than the ones we tend to celebrate.”

TM, as advocated by Goldstone and Underwood, is rigorous enough that it should be considered evidence (a product of research) for making claims and raising questions within a discipline. Their account acknowledges the limitations of this method in that it loses meta-information within a text (does context still work at this scope? in these methods?), but affirms the method as a different way of seeing how knowledge has emerged within a discipline/disciplinary set of texts. The how might not otherwise have been visible.

Brett: “Topic modeling is not necessarily useful as evidence but it makes an excellent tool for discovery.”

Brett views TM as a part of research—making patterns visible to then pursue. I think this view of how TM is used, in comparison to that of Goldstone and Underwood, reflects a divide that emerges in differing scopes of text analysis. How this is determined I am still uncertain of, beyond varying engagements with the methods; but what I do notice is how differently discovery through these methods is valued.

Perrin: “But culture is not just language, language is not just text, and text is not just words. Since these methods actually analyze text (not language and not culture) we need to attend to the processes by which culture becomes language and language becomes text.”

Perrin critiques TM because it cannot account for the context of texts—texts (as a collection of words) read through these methods cannot make certain relationships visible. This doesn’t appear to be the argument that I once thought existed between the values/constraints/affordances of “close” and “distant” reading (time with one text read closely v. time with a group of texts read for patterns—we have read much that complicates this) but an assertion that while these methods may make certain patterns visible, they cannot make others. Or, they might give the illusion of patterns that are distorted. Other than making this disclaimer in a methods section, I wonder how else we might account for contexts—especially in large corpora.

Nelson: [topic modeling] “It definitely will not magically help us understand the black box of culture. It’s science, not magic, and any science takes work.”

Nelson takes Perrin to task, arguing that understanding texts is a way to help us understand society (the context that Perrin argues is lost in the texts that TM methods read). Nelson outlines what TM can and cannot do, making the usual disclaimers about understanding available options, methods as the best fit for questions/data, and the assumptions behind each method. What struck me in Nelson’s account (it should be noted that this is from sociology) was the description of how these methods are science.

And How: I’m left wondering about how we see relations/patterns and meaning; this sounds simple, but which relations have our attention makes for great variances in meaning. And while I think this is the point—the affordances and constraints of the methods—how are they (texts to topics, topics to relationships, relationships to patterns, patterns to visualizations, visualizations to relationships, relationships to meaning) considered?

microscopic, not myopic

This week’s readings on “microscopy” were absolutely captivating, providing an opportunity to engage with “scale”. My understanding of scale prior to these readings was more akin to an awareness of its existence and its influence on generically understood (based on my limited knowledge) matters of concern, such as: the size of the data set, how closely/distantly a text is being looked at, what can be made visible in terms of patterning, and different tools fitted to the size/task of the data set.

I collected scraps from three readings that engaged with scale in interesting ways (note: I might be discussing scale with less nuance than it deserves…): interface/language with range and “tuning-in-ability”.

Matthew Jockers and Julia Flanders “A Matter of Scale”: “We’ve tried to create an interface that supports that kind of shuttling between different levels of scale: seeing patterns, seeing outliers, zooming in and zooming out. The tools aren’t very good yet, but they’re getting better” (17).

Jockers and Flanders work to dissolve the binary between micro/macro data sets that typically get associated with distinct rendering/reading processes and products. In advocating for more discursive markup languages, this seems to be a call for tools that will allow for simultaneous micro/macro analysis, keeping both in sight.

“I think you and I must begin from this point of agreement and now work our way towards what the photographers call depth of field. How do we alter the f-stop and shutter speed so as to keep as much in focus as we can?” (17).

This imagining was absolutely engrossing and reminded me of the concept of mise-en-scene—as everything within the shot, from composition, to sets, to props, to actors, to costumes, to lighting—is taken in through the scene. If we had the ability to tune in to these different elements and their connections to the composition of the text, what might we see?

Elizabeth Losh’s “Nowcasting/Futurecasting: Big Data, Prognostication, and the Rhetorics of Scale”: “Because of the distance at which such large collections of cultural objects become legible, we are reduced to being passive spectators as the map grows progressively larger than the territory” (450).

Losh brought to mind questions of when (timing) we can engage with cultural objects, particularly those that are born digital. Traditionally, DH work has directed its attention to print text collections (and thus to the past), but there is (was?) a push (by whom/what?) toward using DH methods to study emerging (not yet) culture. I wondered, in reading this, whether futurecasting is essentially establishing patterns/trends that are in the process of happening, based on what has happened.

“Rather than aspire to a predictive humanities, the task of “nowcasting” rather than “futurecasting” may promise a methodology of more engaged research in which the cultural, political, literary, artistic, and material life of the present becomes the focus of attention” (447).

In terms of time, I am having trouble understanding the difference between “future” and “now”, perhaps based on how I am imagining “future” (I wonder what implications this has). It seems that both would be attending, or tuning-in, to happening culture more closely, instead of waiting for it to pass to collect and examine.

Geoffrey Rockwell’s “What is Text Analysis, Really?”: “Rather than developing tools based on principles of unity and coherence we should rethink our tools on a principle of research as disciplined play” (212).

Rockwell complicates the scale of the text we engage with by working to re-term the “concordance” as a hybrid text—instead of looking to establish unity across a text by looking for patterns of coherence, a hybrid tunes into a text created by the original text and user choice—a text that is neither original (of the author) nor of the concordance provoker. This shifts the scale of the text being engaged, allowing for different “play”, or methods of interaction.

Rockwell proposes a portal model for text analysis tools that “makes available a variety of server-based tools properly supported, documented, and adapted for use in the study of electronic texts”, a “virtual library” (215).
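The concordance Rockwell discusses is one of the oldest text-analysis tools, and it is easy to sketch its classic keyword-in-context (KWIC) form—a toy version of the hybrid text, where the excerpts around each hit of a chosen word compose a new text that is neither the author’s nor the reader’s (the sample text and window size below are my own invention for illustration):

```python
# Toy keyword-in-context (KWIC) concordance: for each occurrence of a
# keyword, show a window of surrounding words. The resulting list of
# excerpts is the "hybrid text" the user's choice of keyword provokes.
def kwic(text, keyword, window=3):
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower().strip(".,;:") == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{w}] {right}")
    return lines

text = ("The archive provides access to knowledge. Teaching the archive "
        "teaches a method for framing ideas anew.")
for line in kwic(text, "archive"):
    print(line)
```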

These are only a part of a much more complex and interesting imagining of scale, but I appreciate the ability to trouble the macro/micro binary (myopic vs. panoramic) as something more akin to focus stacking: the combination of multiple images taken at different focus distances to create a resulting image with a greater depth of field than any of the individual source images (from Wikipedia).


“Top left are the three source image slices at three focal depths. Top right are the contributions of each focal slice to the final “focus stacked” image (black is no contribution, white is full contribution). Bottom is the resulting focus stacked image with an extended depth of field.” – from Wikipedia “focus stacking”
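The mechanics of focus stacking can be sketched in miniature: per pixel, keep the value from whichever slice is locally sharpest. This toy version (tiny made-up grayscale “images” as lists of lists, and a deliberately crude sharpness measure) stands in for what real stacking software does with alignment and proper contrast detection:

```python
# Toy focus-stacking sketch: per pixel, keep the value from the slice
# with the highest local contrast (here, crudely, the absolute
# difference from a horizontal neighbor).
def sharpness(img, r, c):
    """Crude local-contrast measure at pixel (r, c)."""
    if c + 1 < len(img[r]):
        return abs(img[r][c] - img[r][c + 1])
    return abs(img[r][c] - img[r][c - 1])

def focus_stack(slices):
    rows, cols = len(slices[0]), len(slices[0][0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            best = max(slices, key=lambda img: sharpness(img, r, c))
            row.append(best[r][c])
        out.append(row)
    return out

# Two 2x3 slices: the first is "sharp" (high contrast) on the left,
# the second on the right; the stack keeps the sharper of each.
near = [[90, 10, 10], [90, 10, 10]]
far = [[50, 50, 80], [50, 50, 80]]
print(focus_stack([near, far]))
```

The stacked result draws its left column from the first slice and its right columns from the second—each region taken from whichever focal depth rendered it most crisply, which is the extended depth of field the caption above describes.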

If microscopy aims to view objects that cannot be seen with an unaided eye, having techniques for “disciplined play” (Rockwell) at varying scales allows for different, and perhaps simultaneous, focus: close looking that is far from myopic.

useful and useless archives

In class this week, my role is the antithesis (I almost put a shopped image of myself with an evil Spock beard. Almost.) on the topic of archives in DH. Based on the readings, I have created the following matters of concern:

If one of the goals of DH work is to expand the audience beyond the academic sphere to the public sphere, how can archives be used to accomplish this? If ownership, authorship, and interpretation are factors, and DH makes strides to incorporate its readers as collaborators/builders/authors, how can the “public” (i.e., those outside of an academic institution physically or conceptually) gain access? Considering: the location of the archive, the design of the archive as user-centered, permissions and data standards for collaborating, and the purpose of the archive project. Why should the public be interested? What are their stakes? What are they to gain?

Do we make archives? Do we make databases? Do we make collections? Do we make interfaces? Or do we make something similar, but just different enough? The conceptual usefulness of borrowing concepts comes with shortcomings and pitfalls. With DH projects often blending and borrowing from established disciplines under the coherence of DH, what might our work be complicating for the disciplines we borrow from?

How can DH address the concerns of trained archivists, as expressed by Kate Theimer in “Archives in Context and as Context”: “What concerns me is that in the broadening of ‘archives’ to extend to any digital collection of surrogates there is the potential for a loss of understanding and appreciation of the historical context that archives preserve in their collections, and the unique role that archives play as custodians of materials in this context”? Particularly when the conceptualization of the project and the design of the collection are often meant to expand as fluid, as ecology, as networks? If “archive” differs between DH and archivists, how does context hold up?

What are DH scholars doing to learn from archivists and librarians? How does a DH scholar interested in archives become an archivist-scholar?

What are the lifespans of DH archives? Who/what/where/how keeps them alive?

Are our archives just archives of our work?

Thinking of our readings made me recall Jeff and Jenny Rice’s 2012 CCCCs panel with Geoffrey Sirc, “Everyone Knows This is Nowhere: Writing in the Musical Age.”

Useless Archives: From Jeff Rice’s Useless Dylan

“The archives – in a university library, on a university website, in a university special collection – provides access to knowledge. Teaching the archives teaches a method for acquiring ideas, contextualizing ideas, and framing ideas anew. In the age of new media, archives, however, are institutional practices, and thus, they shape what is or is not important to a given research project. What about the peripheral items not featured in a given archive? How important are they to research?”

Do we view artifacts in an archive at a different scale? A different point of focus?

“How can I call “useless” what I value? Value is the wrong way to look at subject matter, whether musical, political, or some other item. Instead of value, I want the scraps, the outtakes, the speculation, the guesswork.  If I am to understand anything when I assemble a useless archive (and I do not promise any understanding), it is how the fragments of experience, when juxtaposed, allow insight previously prevented by what Vilem Flusser called the programmability of the political experience in the age of media”.

Can we make useless archives as serious scholarly pursuits? What are the potential consequences of the useless?

“The useless archive merely brings together. What I get from that combination, juxtaposition, linking, etc. depends. I may get a series of patterns. I may get a surface level “cute.” I may get affective response. I may get nothing. Whichever I get, that doesn’t mean that useless is “no good.” My archive is virtual. It is based on arrangement.”

How does arrangement get at/not get at selection—”archivists select materials for acquisition and accession” (Kate Theimer)?