Popular and Public History Online

Before I dive into this week’s assignment, I’d like to revisit our discussion from last week, because I found a very interesting digital mapping project. Amnesty International’s Strike Tracker project uses sequences of aerial photos of Mosul to identify when and where airstrikes took place. They’re crowdsourcing the labor through a module they built that lets volunteers look through a series of photos in sequential order and identify when buildings were damaged. In theory, with enough volunteers, they could learn the time, location, and consequence of every bomb dropped on Mosul. It’s such an original and important application of the tools we discussed in class that I had to share it.

Now on to Steve Dietz’s essay, “Telling Stories: Procedural Authorship and Extracting Meaning from Museum Databases.” I would like to touch on his ideas about storytelling before moving on to the websites we were asked to look at. But first I should admit my suspicion of storytelling, which seems to me a kind of snake oil, even though the humanities as a whole seem so confident in its ability to engage audiences, encourage empathy, and convey information. It strikes me as a kind of magical thinking. Before I join the crowd in assuming that storytelling is the preferable way to convey information, I would like someone to explain to me, in concrete, scientific terms, the magic that allegedly happens when people hear stories. Are stories really more effective at conveying information than analysis? Or are stories more effective at conveying information to certain audiences because they simplify complex phenomena and focus our attention on only the most relevant information? Is storytelling by itself more effective than placing stories within an analytical framework? Is storytelling believed to be more effective because stories render information perspectival? Are there built-in cognitive structures in our brains that process stories more easily? These are just a few of the questions I have about how storytelling works and what it can and can’t do.

So when Dietz moves very quickly from wanting to make museum databases available and engaging for the public to offering storytelling as the solution, a flag goes up in my mind. I don’t even know how to begin assessing his arguments without knowing the details of specific collections, the audiences that might be interested in using them, their level of motivation to learn, and how they might use the information. For example, moving from an object-centered collection to one that gives stories more prominence makes sense, I suppose, whether you’re using “predetermined narratives” or “multi-vocal” or “hyper-linear” ones. But how could we possibly evaluate such a claim? And how can I evaluate the effectiveness of these websites in a way that is more objective than just relying on my gut instinct?

The best I can do is come up with some criteria and try to justify them. So I think a website that is effective at conveying the past should do all of the following:

  1. The home page should immediately convey the topic, scope, and goals of the website. If there is any confusion in a visitor’s mind about what the site is about after viewing the home page for 30 seconds, then that website has failed.
  2. The site should utilize multiple rhetorical means to convey information in a coherent, engaging, and accessible manner, such as narrative, analysis, description, explanation, images, graphs, charts, maps, video, and timelines.
  3. Information should be organized in an intuitive manner and should be easily navigable.

I don’t know how to get more specific than this unless I have a specific audience in mind. And of course, the big flaw in these criteria is that they don’t measure anything about the audience’s experience with a digital exhibit/database/whatever, so they can’t tell us whether or not a website has actually been effective.

I’ve said all this just to express my anxiety about deciding which website is effective based on rhetorical factors that I’m only guessing are effective. And the winner is . . . the Raid on Deerfield. At first look, the subject of the website is clear enough: the raid on Deerfield in 1704. Its scope and goals, however, aren’t. Otherwise the site is easily navigable, and it does a good job of using multiple rhetorical tools to convey the past. The website could certainly be updated: the voice-over introduction sounds robotic and the images are very low resolution. But the conception and organization of the website are excellent, and I have a good feeling that it could be a useful learning tool. If there is time in class, I’d be happy to discuss where I felt the other websites fell short.

Digital Mapping: Gettysburg

I would like to discuss the Smithsonian’s interactive map of the Battle of Gettysburg, by Anne Kelly Knowles. The Smithsonian’s online magazine boasts that visualizing the battlefield with digital technology can help us put ourselves in the commanders’ shoes, see what they saw, and better understand the decisions they made. Let me explain why I don’t think this map lives up to these claims.

First, I know enough about military tactics and strategy to know that I, in fact, know very little. Most of us know terms like “high ground” and “flank” from movies. But troop movement is a game of trade-offs, and “taking the high ground” or “flanking the enemy” are never just advantages without disadvantages. Things get more complicated when we factor in questions of terrain and the comparative capabilities of each side’s weapons. I know enough to know that these are complicated considerations, and for me to fully understand them, I would need a military expert to explain them to me.

For example, why is occupying high ground considered an advantage in all cases? With projectile weapons, aren’t there cases when the high ground is a disadvantageous position? And why was being spread out, in the case of the Confederate forces at Gettysburg, a disadvantage? Yes, the authors argue that Lee couldn’t receive information from his subordinates as quickly, but why? Why couldn’t they just use runners? And why didn’t the advantage of not presenting your enemy with a concentrated target outweigh any communication challenges posed by having your forces spread out?

Thus, simply looking at a map that displays the topography of the battlefield and identifies troop positions does little by itself to help me understand the tactical and strategic questions at hand. Not only are the lines on the map (and even the ground-view feature) not enough for me to understand the trade-offs that Generals Lee and Meade faced; the narrative that accompanies the points on the timeline also falls short in explaining these military factors.

This brings me to my second point: digital mapping, in the case of this project, was a tool that helped the researchers interpret battlefield events. It was not a useful tool for disseminating their findings to a lay audience. I have to take Knowles’s analysis of battlefield events at Gettysburg at face value, because I don’t know any better. When she asserts that the Union’s “compact position” conferred a “strategic advantage,” I have to take her word for it. I’m suspicious, partly because “strategy” refers to planning at a high level of command for an entire campaign, not a single battle (I suspect that what she meant was “tactical advantage”), but also because the site never explains why this position is advantageous, all things considered. Nonetheless, Knowles came to this position by reading her digitally generated maps alongside the primary sources. Regardless of whether or not she came to the correct conclusions, any understanding that I’m walking away with is coming from the timeline narration and not the visualization.

In sum, digital mapping, in the case of this project, appears to have been more useful to its researchers than to the audience it was intended for.

Digital Scholarship

Every single week of this semester has been a mad scramble for me (because I habitually overcommit myself to too many things), and every deadline has come down to the wire. As usual, I am just now, at the last minute, realizing that a digital copy of the Leary reading isn’t immediately available through the library (neither of the databases that offer the Journal of Victorian Culture has issues from 2005), and I’m not looking forward to going to the library in person on Tuesday before class to look at print editions.

Having digital copies of journal articles and books available to me has made my own studies and research more efficient and productive. Also, programs like Zotero help me manage my notes and readings in a way that I could never do with physical notebooks. And thank God for spell check, without which I never could have made it into graduate school.

I suspect that digital tools have changed the way most scholars do research in a similar way. Word processing programs, search engines, digital texts, and file management programs have increased the amount of information we can take in, organize, and synthesize. I can’t imagine anyone who would want to go back to the old days of card catalogue systems, typewriters with no easy way to delete large chunks of text, and print editions of academic journals.

I don’t really feel qualified to discuss the scholars who have made use of more complicated digital tools in their research. My understanding of those tools is too limited for me to say anything useful. I came to all things digital very late. I don’t think I owned a computer until 2005, and I still consider myself a novice with technology. So I don’t have very clear ideas about how else digital tools could help me with my research, largely because I still don’t really know what’s possible. But I do understand that the internet could give me access to a lay audience that I’m much more interested in engaging with than a traditional academic one.

I’m only interested in doing scholarship if it can be useful to ordinary people. To that end, I’ve been influenced by two different models of public scholarship: Gramsci’s organic intellectual and Chomsky’s public intellectual. What interests me most about Gramsci is his strategic thinking about how scholars can contribute to social change. Gramsci contrasts armed insurrection (which he likens to a war of maneuver) with a cultural and intellectual struggle to create a working-class hegemony (which he likens to a war of position). I think the internet is a very strategically useful place for a war of position.

Chomsky, on the other hand, combines a lot of anarchist ideas about anti-authoritarianism and “Cartesian common sense” into his notion of public intellectualism. I don’t see these two positions as conflicting. I don’t think I need to embrace a Leninist model of propaganda to think strategically about how scholarship participates in culture, and I don’t think that having rhetorical goals violates any principles of anti-authoritarianism or is overly persuasive (as opposed to demonstrative, which is what Chomsky would prefer).

Anyway, the internet is the place for public engagement, and to whatever extent possible I would like to have a useful presence on the web, guided by Gramscian and Chomskian ideas, using whatever digital tools I can figure out how to use.

Copyright

I found the readings for this week a bit abstract. I appreciated the historical overview of the development of copyright law and the shifting balance between private and public interests that drove these changes. However, I still don’t know what, in practice, current copyright law prevents me from doing. For all I know, I’m violating it every time I photocopy an article and distribute it as course material. So I’m finding it a bit hard to respond to how copyright law might pose a challenge for digital historians.

The most obvious lesson from the Lessig reading was that there is now a global technological infrastructure, most of which I don’t understand, that is tied up in this issue of copyright. Not knowing how to hack into JSTOR, I couldn’t violate copyright law even if I wanted to. Also, corporations that distribute creative content have lobbied to change copyright law at the expense of the interests of creators/authors and the public, the two parties the original copyright laws were intended to protect. I get what this means for authors: they make less money for their labor because they need these corporations, which control access to distribution networks. And I get what this means for the public: information and culture are increasingly a privilege, not a right. But I’m not really sure what this means for historians working in other roles, as archivists or curators.

I’m probably in violation of all kinds of copyright laws with my digital archive project. For the few journal articles that I linked to on my site, I asked the authors’ permission first. But now I’m unsure whether authors even have the power to grant that permission. And I never bothered with a property transfer deed when I conducted my oral history interviews; as long as I have a say, I won’t do them. Asking someone to sign a contract after a serious conversation that required a great deal of trust feels really inappropriate. Technically, I believe these interviews are their property, and I’m not sure if I’m allowed to “borrow” their property on my site the way you borrow a friend’s hedge clippers to trim your shrubs. There are probably answers out there, buried in legalese, but I’d rather proceed as I’ve been doing and hope that it all works out. Is this guerrilla archiving?

As in most areas of my life, the biggest challenge I face is money. I work too much and I’m paid too little, so I don’t have time to decipher legalese or hire a lawyer. Independent scholars, authors, and creators are at a huge disadvantage to institutions that own the infrastructure controlling access to copyrighted materials and have lawyers at their disposal. How can we even out this imbalance in power?

Digital Preservation

I’d like to focus on the challenge posed by the constantly changing hardware and programming languages used in digital preservation. This is something I think about a lot for my digital archive project. It’s hard to sink a lot of time and effort into building a digital collection on a WordPress site when I know nothing about coding, content management systems like WordPress, servers, hosting services, etc. But I’m moving forward, taking precautions as best I can, with the expectation that I’m going to need to start over from scratch several times. Using WordPress seems like a safe bet now for a digital novice like myself. But who knows what will happen five years down the line.

Without knowing what kinds of technological changes I should anticipate, my strategy has been to store my collection offline and on the Internet Archive, and then link each item to my website. Most of the materials that I want to include in my collection were born digital, like news articles and Amnesty International reports, so they can easily be moved from one page/site to another in case I need to build a new website from scratch. I’ve created a library with the Internet Archive where I’m collecting permalinks and uploading videos, PDFs, and images. I can then link the individual items from my library to my website. And if something happens to my website (if the hosting service goes bankrupt, say, or WordPress jacks up its subscription fees), the really crucial material should survive, and I should be able to link it to a new website.
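To make this concrete, here is a minimal sketch of what that workflow could look like in Python, using the internetarchive library. The identifier, file name, and metadata values are hypothetical placeholders, and the call assumes an archive.org account has already been configured on my machine; this is an illustration of the idea, not the exact script I use.

```python
# A minimal sketch of the offline-plus-Internet-Archive workflow described
# above, using the "internetarchive" library (pip install internetarchive).
# The identifier, file name, and metadata below are hypothetical placeholders,
# and credentials are assumed to be set up already (via `ia configure`).
from internetarchive import upload

ITEM_ID = "peoples-history-fallujah-example-report"  # hypothetical identifier

metadata = {
    "title": "Example report on Fallujah",   # placeholder title
    "mediatype": "texts",
    "creator": "Amnesty International",
    "subject": "Fallujah; Iraq",
}

# Upload the local copy of the item to the Internet Archive...
upload(ITEM_ID, files=["example_report.pdf"], metadata=metadata)

# ...then keep the resulting permalink, which is what gets embedded on the
# WordPress site. If the site ever has to be rebuilt, only these links need
# to be re-inserted; the items themselves stay where they are.
permalink = f"https://archive.org/details/{ITEM_ID}"
print(permalink)
```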

I also keep the same library offline using Zotero. So even if something happens with my IA library, the permalinks should still work and I should be able to upload the library again. I believe that Zotero and the IA have similar metadata fields, so I won’t be in danger of losing that either. 

My videos pose a much bigger challenge, however. I currently use the H.264 codec for my videos, which I believe is what YouTube uses. But I’ve read that H.265 (which is better for reasons I don’t fully understand) could soon become the new standard, and YouTube might adopt it. It is already getting very expensive for me to buy external hard drives large enough to store all of my videos, including the originals, the Adobe Premiere files, and the edited finals converted to H.264. I don’t know if I can realistically continue to store all these files from every interview I do so that I can convert them to new codecs as they come out. But I don’t know if I can just leave them up on YouTube either. How long will it be before older video formats are no longer supported?
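If re-encoding ever does become necessary, my understanding is that it could be scripted rather than done clip by clip. Below is a rough sketch, assuming ffmpeg is installed with the libx265 encoder; the file names and the quality setting are hypothetical and would need testing against my own footage.

```python
# A rough sketch of re-encoding an interview from H.264 to H.265.
# Assumes ffmpeg is installed with the libx265 encoder enabled; the file
# names and CRF value are hypothetical placeholders.
import subprocess

def reencode_to_h265(source: str, destination: str, crf: int = 28) -> None:
    """Transcode an H.264 video file to H.265, copying the audio unchanged."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", source,        # input file (H.264)
            "-c:v", "libx265",   # re-encode the video stream with H.265
            "-crf", str(crf),    # constant-rate-factor quality setting
            "-c:a", "copy",      # leave the audio stream as-is
            destination,
        ],
        check=True,
    )

reencode_to_h265("interview_01_h264.mp4", "interview_01_h265.mp4")
```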

Of course, the biggest challenge is my own tech-illiteracy and my inability to understand all the relevant factors and anticipate new trends in technology. 

Born Digital

For this week’s assignment, I’m trying to think about the authenticity, accessibility, and reliability of the materials in these digital archives, and of the archives themselves. As I understand the challenges that go along with digital collections, we’re mostly concerned with the reliability of the hardware (which could fail and lose data), the accessibility of the materials in the collection (either because there is too much information or because the proper technology isn’t available to everyone), and the authenticity of the materials themselves, whose origins often aren’t traceable or transparent. These are the concerns that stuck with me from this week’s readings.

That said, I don’t share the caution that Cohen and Rosenzweig seem to have towards digital history. I don’t think the problems that come with web-based preservation and research are fundamentally different from those of older ways of doing history.

For example, my first observation, after looking at the 9/11 digital archive, was that the inclusion of emails and anonymous submissions gave a forum to the most casual of impressions, memories, and reflections. The first email I looked at described a dream the person had, his interpretation of it, and how it related to 9/11. A good number of these materials might be more useful to literary scholars who want to say something about the cultural imagination than to historians. In this case, digital tools made it very easy to collect lots of material, even material that wasn’t exactly choice historical documentation. But I don’t think this is a fundamentally new problem with archives.

My second observation, in regard to the Hurricanes Katrina and Rita archive, is that the sheer volume of information can make it difficult to browse the materials or find what you’re looking for, especially if the materials are not organized by an intuitive schema. I found the Katrina and Rita archive very hard to use. With 8,462 items organized into four very big categories (“stories,” “images,” “oral histories,” and “video”), it’s not easy to browse or search for something specific. Here digital tools allowed the archivists to gather an incredible amount of material, but they failed to build an intuitive system that would allow visitors to easily browse, search, and retrieve items.

The April 16 Archive suffers from both of the problems I discussed above. It appears to be the most casual collection of materials ever, sorted only by several hundred tags. While this website is an extreme case, my impression is that these challenges are not specific to digital collections. As we saw at the UMass Special Collections last week, they collect a great many things, but they have an excellent system for dealing with the volume of material and making it easily searchable. To me, this is the most important feature of a digital collection: that the information is organized in an intuitive manner and made easily searchable. Doing this well will mitigate the challenges discussed above.

Born Digital

I have a very practical interest in digital archives and digitization projects. Besides my People’s History of Fallujah digital archive, which is a pretty straightforward collection of materials that were already on the web (I’m just gathering them in a single location), I also want to start a digitization project for the Carlo Danio Library in Grumento Nova, where I’m doing my other project, Memoria e Lingua Grumentina.

If you followed the link, then you’ve seen that this library is a hidden treasure. The translation on the webpage is a bit difficult to follow, but you get the picture. It’s a well-preserved library filled with books from the 17th century on, some in Latin, some in Vulgar Latin, some even in Italo-Romance languages other than Italian (like Grumentino). Scholars of the classics and of Italian history should be traveling from all over the world to visit this collection, but no one knows it exists. And it’s not easy to get to Grumento Nova (you need to figure out the chaotic southern Italian bus system to get there).

This is the paradox of southern Italy: it’s rich in natural resources and cultural heritage, and yet poor. Grumento Nova is in some ways better off, and in some ways worse off, than the rest of the south. The single largest problem Grumento Nova faces today is pollution from the Eni COVA oil extraction plant within its city limits. It’s the largest oil extraction site in all of Italy, and it’s ruining the entire Agri Valley area.

This library is one potential source of income for Grumento Nova. Combined with a tourism industry, it could help make Grumento less economically dependent on its petroleum resources. And I think the best way to use the library and start building an alternative economy in Grumento would be to digitize the collection and charge subscription fees to libraries around the world. An alternative approach would be to better advertise the contents of the library and hope that scholars come to use it. But my hunch is that more money could be made by selling online subscriptions. I’m not sure exactly how to predict how much money online subscriptions could bring in. This is really just a hunch. But it seems like common sense to me.

At this point I’m unsure of the benefits of using some sort of markup language to digitize the books. I think it will be hard enough to get a grant to bring a scanner to Grumento, let alone to find someone who will put in the labor of doing the markup (I’m definitely not doing it). I suspect that using a scanner with OCR technology will be sufficient for the vast majority of the books. However, I believe there might be a few handwritten manuscripts in the collection, too. For those, my intuition is that simple image scans will be sufficient.
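For the OCR step, something like the following minimal Python sketch is what I have in mind. It uses pytesseract, a wrapper around the open-source Tesseract engine, and assumes Tesseract is installed with Italian and Latin language data; the file names are hypothetical placeholders, and the handwritten manuscripts would simply be kept as image scans.

```python
# A minimal OCR sketch using pytesseract and Pillow. Assumes the Tesseract
# engine is installed with Italian ("ita") and Latin ("lat") language data.
# The file names below are hypothetical placeholders.
from PIL import Image
import pytesseract

def ocr_page(image_path: str, languages: str = "ita+lat") -> str:
    """Return the recognized text of one scanned page."""
    page = Image.open(image_path)
    return pytesseract.image_to_string(page, lang=languages)

# Run OCR on one scanned page and save the plain-text result alongside it.
text = ocr_page("carlo_danio_scan_0001.png")
with open("carlo_danio_scan_0001.txt", "w", encoding="utf-8") as out:
    out.write(text)
```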

Second, I’d like to discuss the Atlante Linguistico della Sicilia (the Linguistic Atlas of Sicily). This is a very interesting kind of digitization project, because language, in many ways, is intangible. It’s a system of signs based on social conventions. You can begin to document a language and create a digital record by recording analogue sound waves as audio files. These files can then be visualized by transcribing the sounds in a phonetic alphabet. But even this has difficulties. The perception of linguistic sounds is not straightforward; the languages we speak can shape the way we perceive sounds from another language. So another way that audio files can be visualized is with a spectrogram, which plots amplitude and frequency over time. Spectrograms can help us see what we can’t perceive by ear.
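As an illustration, here is a small Python sketch of that kind of visualization: a spectrogram of a single recording, with frequency plotted over time and the signal’s power shown as color. It assumes a mono WAV file, and the file name is a hypothetical placeholder.

```python
# A small sketch of the spectrogram visualization described above.
# Assumes a mono WAV file; the file name is a hypothetical placeholder.
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, samples = wavfile.read("grumentino_word_example.wav")

# frequencies in Hz, times in seconds, power at each frequency and time
frequencies, times, power = spectrogram(samples, fs=sample_rate)

plt.pcolormesh(times, frequencies, power, shading="gouraud")
plt.ylabel("Frequency (Hz)")
plt.xlabel("Time (s)")
plt.title("Spectrogram of one recorded word")
plt.show()
```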

And yet this kind of analysis still only goes so far. Transcriptions and spectrograms can really only tell you about an individual speaker, not the language itself, or the sociolinguistic context in which this language, other languages, and varieties of them exist together. That’s what I love about the Atlante Linguistico and its “geolinguistic” approach: it uses the traditional tools of language documentation and adds a geospatial dimension. The “carta sonora” (sound map) tab is an interesting feature of the site, because it uses a mapping program to show how the same word is pronounced differently in various Sicilian locales, using both transcription and audio files. There’s plenty of analysis to go with this on the site’s other pages, which paints a complex picture of the linguistic situation on the island, in which hundreds of distinct (though mutually intelligible) languages exist together in a single sociolinguistic environment. I think it’s a brilliant way of taking something as complicated, ephemeral, and intangible as spoken language and preserving it and making it available with digital tools.
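This is not how the Atlante itself is built (I don’t know what software it runs on), but a toy version of the “carta sonora” idea is easy to sketch in Python with the folium mapping library. The places below are real Sicilian cities with approximate coordinates, while the transcriptions and audio URLs are invented placeholders, not data from the Atlante.

```python
# A toy sketch of a "carta sonora": pinning different local pronunciations
# of the same word onto a map. Uses the folium library. The transcriptions
# and audio URLs are invented placeholders, not data from the Atlante.
import folium

variants = [
    # (place, latitude, longitude, phonetic transcription, audio URL)
    ("Palermo", 38.12, 13.36, "[ˈvuːtʃi]", "https://example.org/palermo.mp3"),
    ("Catania", 37.50, 15.09, "[ˈvuːʃi]",  "https://example.org/catania.mp3"),
    ("Trapani", 38.02, 12.51, "[ˈuːtʃi]",  "https://example.org/trapani.mp3"),
]

# Center the map roughly on Sicily.
sound_map = folium.Map(location=[37.6, 14.0], zoom_start=8)

for place, lat, lon, ipa, audio_url in variants:
    popup = f"{place}: {ipa}<br><a href='{audio_url}'>listen</a>"
    folium.Marker(location=[lat, lon], popup=popup).add_to(sound_map)

# Write the interactive map to an HTML file that can be opened in a browser.
sound_map.save("carta_sonora_sketch.html")
```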

Lastly, I’d like to discuss a personal website, managed by Dr. Phil Taylor, on the University of Leeds website. It’s called Phil Taylor’s Papers, and it’s a collection of articles, essays, doctrinal writings, and reports on the topic of information warfare and strategic communications. I wish more people were aware of how governments use information and the news media to advance their policy goals, so I’m glad to see someone collecting this information under one roof. However, it’s a very casual effort at archiving. It almost seems like Taylor just wanted all these resources in one place for the sake of organizing his own research materials. So I thought it might be worth discussing what went wrong here, when there was so much potential for this to be a useful public resource.

First, Taylor just cut and pasted the materials he liked onto webpages and organized the many, many links under menu tabs. It doesn’t seem like he logged any metadata at all, so titles and even keywords aren’t searchable. There are no permalinks to the original sources, and none of the hyperlinks in the original texts were preserved. And the themes according to which the materials are organized are really broad; one needs to understand the difference between PSYOP and strategic communications to understand what one is looking at.

There are over 1,000 items in this collection, and it would have been an enormously time-consuming effort for one man to catalogue each item, provide metadata and hyperlinks, and create an intuitive schema for organizing all the materials. As it is, it’s a great resource for researchers already familiar with the topic, but not much more.

Review: The National Security Archive and Grumentum

So I have two separate website projects that I’m working on: https://peopleshistoryfallujah.org, using WordPress, and http://memoriaelinguagrumentina.org/neatline/show/g, using Omeka. The first is a pretty straightforward digital archive, and the second is a mixed-methods public history and language documentation project focused on the Italian village of Grumento Nova.

The National Security Archive has served as a model for my People’s History of Fallujah project. The biggest attraction to the website is, of course, the information. The staff of this site has created a fantastic public resource through their work with Freedom of Information Act (FOIA) requests. Scholars and the general public alike can find valuable primary source material on a number of foreign policy issues, recent and past. 

Because the website is built around content, often just text, its organization and searchability seem to me to be its most important features. The design itself is minimalist, even though the home page is quite busy. There is a simple header image, then a menu bar, followed by a scrolling window of the top stories in the media. Below this, there is a left sidebar with the latest blog posts, a right sidebar with news items that feature the work of the Archive, and the central body of the home page, which lists the most recent additions to the archive in chronological order.

The home page could at first leave visitors feeling overwhelmed by the amount of information and its many different sources (the blog, archive postings, news media, and various projects). However, this is not a website built for casual browsing. If you have a clear idea of what you want to look for, searching the website is actually very simple and smooth. You can search by project (usually a country of interest or a topic, like torture), or you can search for specific documents. Searches can be refined by keywords and date of publication.

I have found searching by country to be very helpful. Many of the “postings” offer a collection of documents on a specific topic with some background context explained by the editorial staff. I’ve found several new documents that I didn’t even know existed by looking through these postings, and I’d like to use a similar approach for my People’s History of Fallujah. 

My other project, Memoria e Lingua Grumentina (“Memory and Language in Grumento Nova”), is much more challenging, and I haven’t found any website to serve as a good model. However, http://www.grumentum.net/index.php is a useful resource focusing on the village’s “beni culturali” (something like “cultural resources,” but with legal and economic connotations).

The graphic on the home page is simple, attractive, and intuitive. The vertical color bars stand for categories of “beni culturali,” and these colored bars appear again in the header image. I think it looks good, and when you click on “nature,” “archeology,” “history and traditions,” or “food and accommodations,” some snapshots open up on a reel of film. Overall, I like the design. And even though the graphic is redundant with the header menu, it’s a good portal into the website’s contents.

After this neat home page, however, all the information about the town is a bit scattered, with an over-reliance on text. The website functions like an e-book with hyperlinks, and each category of beni culturali is like a chapter. But there is so much text and so much information that I don’t think the website is useful for casual visitors. In my opinion the site would work much better if each section began with some more engaging material (like a video or some kind of graphic) and then offered links to more information. Organized as it is, the information will hold only the most interested visitors; others will likely move on.

American Museum of Natural History’s Explorer App

I chose the AMNH’s Explorer app because I wanted to see what kind of app a prominent museum with a large budget would produce. I’ve looked at a few apps from some smaller Italian museums that, in my opinion, are poorly designed and barely functional. It feels like some of these museums are creating apps as a kind of gimmick. So I was curious to see what the best kind of museum app looked like, and whether it could actually improve a visitor’s experience.

Perhaps the first thing to note about the Explorer app is that it immediately offers you a list of topics and exhibits to choose from, and then guides you from exhibit to exhibit using your phone’s GPS. Obviously I wasn’t able to test this out, but I have tried using paper maps in museums before, and they’re not always easy. If this app makes it easier for visitors to navigate the museum building, then this free app is already worth downloading.

Another feature worth noting is the alert you get when you come near an exhibit that you marked as being of interest. The app then offers a bunch of photos, fun facts, and audio clips related to the exhibit. This feature works brilliantly: the photos are all very good quality, the windows pop up without any lag, and it’s easy to navigate back to the home screen. However, I wonder if the photos and the extra information really add to the visitor’s experience at the exhibit. I can imagine the app being just as much of a distraction as an aid. I also wonder why these photos and the extra information weren’t added to the exhibit in the first place.

I was born in 1984 and I resisted making technology part of my life until like 2006. I still don’t like it when people fidget with their phone during dinner or during a conversation. So I think that I’m of a demographic that would be more content with the exhibit by itself. The photos, audio clips, and games that come with the app feel gimmicky to me. All I really want is the GPS navigator.

That said, I can also see the app being useful for a parent who is forcing their child to visit the museum. At least the kid can pretend he’s experiencing the museum by playing with the app while mom and dad look at the exhibit. Maybe the tree of life game or the guided tour with the cartoon bear could be fun for a kid. But if I have to strain my brain for a scenario in which all the functions of the app are useful, then maybe most of it is just for show.

As visually appealing and intuitive as the app is, I think its best feature is the GPS navigator. Everything else that comes with the app will likely not add anything significant to a visitor’s experience. This could well be my prejudice against technology speaking; however, I think people visit museums because they want an unmediated encounter with the past. An app can help, but it can also get in the way.