microscopic, not myopic

This week’s readings on “microscopy” were absolutely captivating, providing an opportunity to engage with “scale.” Before these readings, my understanding of scale was more an awareness of its existence and of its influence on generically understood (given my limited knowledge) matters of concern: the size of a data set, how closely or distantly a text is being looked at, what can be made visible in terms of patterning, and which tools fit the size and task of the data set.

I collected scraps from three readings that engage with scale in interesting ways (note: I may be discussing scale without some of its nuance…): interface and language with range, and a kind of “tuning-in-ability.”

Matthew Jockers and Julia Flanders’s “A Matter of Scale”: “We’ve tried to create an interface that supports that kind of shuttling between different levels of scale: seeing patterns, seeing outliers, zooming in and zooming out. The tools aren’t very good yet, but they’re getting better” (17).

Jockers and Flanders work to dissolve the binary between micro and macro data sets, which typically get associated with distinct rendering/reading processes and products. Their advocacy for more discursive markup languages reads as a call for tools that allow simultaneous micro/macro analysis, keeping both in sight.

“I think you and I must begin from this point of agreement and now work our way towards what the photographers call depth of field. How do we alter the f-stop and shutter speed so as to keep as much in focus as we can?” (17).

This imagining was absolutely engrossing and reminded me of the concept of mise-en-scène: everything within the shot, from composition to sets, props, actors, costumes, and lighting, is taken in through the scene. If we had the ability to tune in to these different elements and their connections to the composition of a text, what might we see?

Elizabeth Losh’s “Nowcasting/Futurecasting: Big Data, Prognostication, and the Rhetorics of Scale”: “Because of the distance at which such large collections of cultural objects become legible, we are reduced to being passive spectators as the map grows progressively larger than the territory” (450).

Losh brought to mind questions of when (timing) we can engage with cultural objects, particularly those that are born digital. Traditionally, DH work has directed its attention to print text collections (and thus to the past), but there is (was?) a push (by whom or what?) toward using DH methods to study emerging (not yet) culture. I wondered, in reading this, whether futurecasting essentially means establishing patterns or trends that are in the process of happening, based on what has already happened.

“Rather than aspire to a predictive humanities, the task of “nowcasting” rather than “futurecasting” may promise a methodology of more engaged research in which the cultural, political, literary, artistic, and material life of the present becomes the focus of attention” (447).

In terms of time, I am having trouble understanding the difference between “future” and “now,” perhaps because of how I am imagining “future” (I wonder what implications this has). It seems that both would attend, or tune in, to happening culture more closely, instead of waiting for it to pass before collecting and examining it.

Geoffrey Rockwell’s “What is Text Analysis, Really?”: “Rather than developing tools based on principles of unity and coherence we should rethink our tools on a principle of research as disciplined play” (212).

Rockwell complicates the scale of the text we engage with by working to reterm the “concordance” as a hybrid text. Instead of looking to establish unity across a text by searching for patterns of coherence, a hybrid tunes into a text created by both the original text and user choice, a text that is neither original (of the author) nor of the concordance provoker. This shifts the scale of the text being engaged, allowing for different “play,” or methods of interaction.

Rockwell proposes a portal model for text analysis tools that “makes available a variety of server-based tools properly supported, documented, and adapted for use in the study of electronic texts”, a “virtual library” (215).

These scraps are only part of a much more complex and interesting imagining of scale, but I appreciate the ability to trouble the macro/micro binary (myopic vs. panoramic) as something more akin to focus stacking: combining multiple images taken at different focus distances to create a resulting image with a greater depth of field than any of the individual source images (from Wikipedia).

"Top left are the three source image slices at three focal depths. Top right are the contributions of each focal slice to the final "focus stacked" image (black is no contribution, white is full contribution). Bottom is the resulting focus stacked image with an extended depth of field." - from Wikipedia "focus stacking"

If microscopy aims to view objects that cannot be seen with the unaided eye, having techniques for “disciplined play” (Rockwell) at varying scales allows for different, and perhaps simultaneous, focus, enabling close looking that is far from myopic.