Thursday, May 11, 2017

Well my friends, it seems the semester has come to a close. It's gone by so quickly, and although I'm ready for a few months off, I have learned so much this semester over the course of my journey through the Digital Humanities.
Last week, Hailey and I presented our work for the semester to a class of our peers in Kean University's English and Writing Studies M.A. program. We gave a summary introduction to the field, and had the class play around with Voyant-- which we decided was the most user-friendly DH program to introduce to beginners. The class seemed to respond well to the presentation, and it was a great way to sum up a semester of work.
I'm grateful for the experience of taking this independent study. Going into the semester, I didn't know where I'd end up in my journey. I used the first two weeks to compile my reading list, which can be found here, and used that as a jumping-off point for the remainder of the semester. Navigating my reading list proved to be helpful throughout the semester, and I edited it quite a few times over the months as my needs changed. I learned a lot about the background behind the digital humanities, as well as the coding and technical work behind the programs.
The biggest turning point in the semester happened in March, when I found out about THATcampDC. Toward the end of February, I had hit a wall in my work. I knew a good deal about the field, but was unable to find ways to practically apply the information I was learning. I knew about all of the DH programs, but didn't know how to use them. As I've come to understand, the DH is such a new and technological field that the programs can have a pretty steep learning curve. However, when I attended THATcampDC, I received the help I desperately needed. I sat in three sessions in which I met individuals in the field who were able to guide me. It took a good bit of courage for me to venture out into this new field of academia, but I'm thankful that I had the opportunity to meet some of the key people in the field.
As for what's next between me and the DH, next semester I start my thesis. Although I still have some thinking to do, I think I am in the perfect position to build on this semester's work by walking people through the DH and explaining how it can be used by students. The field can be daunting to people who don't understand it, but students are the perfect group to introduce it to. Young adults have grown up in the Digital Age, and I think they'll be able to utilize technology in ways that will amaze the world. I am particularly passionate about young adult dystopian literature-- it is one of my favorite genres-- and I think it would be interesting to study a handful of works using DH methodologies, incorporating them into my thesis as examples of how these methods can be used to analyze literature and of the things computers can reveal. This way, I'll get to use DH methodologies, as well as talk about why the field is important.
The biggest thing I've learned this semester is how amazing the digital humanities are, and how vast the field is. Going into the semester, I knew it was interesting but I didn't know specifically how I would apply it to my work. As the months progressed, I realized that I needed to start small. You wouldn't jump into the deep end of a pool without learning how to swim, and I won't be attempting complex coding before first knowing how to use the basic programs.
All in all, I feel more secure in my understanding of the field, and I feel that I have a solid foundation for further learning, which was the goal I hoped to achieve this semester. Going out, I can say it's been an excellent experience.
Monday, April 24, 2017
Voyant
After navigating Gephi last week, this week's Digital Humanities Tool of the Week (DHTW?) is Voyant! Whereas Gephi is a more complicated, more intensive program, Voyant is readily accessible-- the perfect tool for those who want an easier introduction to the world of DH tools. Voyant is new to me as well, so I'm going to play around with it a bit and share my results.
You might know Voyant from seeing word clouds around the internet. These can be made with Voyant by inputting a text and adjusting settings to shape the words Voyant spits back. For example, in a work where the word "cat" appears a total of 200 times and the word "silly" appears 150 times, "cat" will be the biggest word in the cloud. This sounds silly, but isn't simplification the best?
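Just to make the counting concrete for myself, here's a tiny Python sketch of the idea behind a word cloud-- my own toy version, not anything Voyant actually runs: tally the words, then scale each word's display size to its frequency.

```python
from collections import Counter

def cloud_sizes(text, max_font=100, min_font=12):
    """Count word frequencies and scale them to font sizes for a toy word cloud."""
    words = [w.strip('.,!?";:').lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    top = counts.most_common(25)
    biggest = top[0][1]
    # Linear scaling: the most frequent word gets max_font; rarer words shrink toward min_font.
    return {word: min_font + (max_font - min_font) * count // biggest for word, count in top}

sample = "cat " * 200 + "silly " * 150 + "dog " * 40
print(cloud_sizes(sample))  # "cat" comes out biggest, just like in the example above
```

Voyant does something far more polished, but the core move is the same: count, then size.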
As I've covered in readings throughout the semester, DH is complicated. There's a lot of coding, numbers, and data involved, and this kind of work isn't suited to everyone's skill set. Sometimes it's helpful to have a preexisting program you can feed information into and take results from. Voyant fits this need.
Here are a few examples of word clouds, from this list I found on Buzzfeed.com:
Isn't it interesting to see what the frequency of words in a text can reveal about it?
This first image is a screengrab of everything that shows up on the screen when the corpus loads (I used a selection from Brave New World). Each individual box shows a different way of visualizing the data in the story:
So, there's a lot to unpack here. First and foremost, the word cloud:
You can do a lot to edit the word cloud, such as expanding it to include more words, reformatting it to take a different shape, and editing the font and colors of the words.
There are a lot of other visualizations that can be applied to the data set; another option that jumped out at me was "Bubbles":
"Bubbles" took a while to sort through the 8000 words of Brave New World that were input into the program, so it took a while to work, but it was interesting to see the results. Here's a screengrab of the program running:
Here's another example of something you can do with Voyant. This tool is simply called "Link," and for this example I used a pre-existing corpus within the Voyant program-- a selection of eight Jane Austen novels. I thought this corpus would be the perfect way to show how the "Link" tool works across a wider selection of works. Because screengrabbing capabilities are limited, let me explain: when you place your cursor on one of the words, pathways "link" it to other connected words. For example, the word "Mr." links to "said," "Mrs," "Knightley," "Weston," "Darcy," "Elton," and "Crawford" within the parameters of this data set.
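As far as I can tell, a view like this boils down to collocation counting: for each occurrence of a word, look at its neighbors within a small window and tally which words turn up nearby most often. Here's a rough Python sketch of that idea-- my own toy version, not Voyant's actual algorithm:

```python
from collections import Counter

def collocates(text, keyword, window=3, top=7):
    """Count which words appear within `window` words of each occurrence of `keyword`."""
    words = [w.strip('.,!?";:').lower() for w in text.split()]
    nearby = Counter()
    for i, w in enumerate(words):
        if w == keyword:
            neighbors = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            nearby.update(neighbors)
    return nearby.most_common(top)

sample = "Mr. Knightley said that Mrs. Weston and Mr. Darcy had already spoken to Mr. Elton."
print(collocates(sample, "mr"))  # the words most often "linked" to "mr" in this tiny sample
```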
For a fairly user-friendly program, there is so much that can be done with Voyant. If, like me, you're just getting into DH, I highly recommend playing around with it. It shows you some of the cool things that can be done with a DH tool, without the complicated addition of coding. Whether this is the extent of your travels in DH or just a stepping stone to learning more, it's worth perusing.
Sunday, April 23, 2017
DiRT Directory & Gephi
I believe I mentioned this in my introductory comments at the beginning of the semester, but I am obsessed with-- ahem, interested in-- dystopian literature, and plan on making it the topic of my thesis.
My interest in this field of literature is two-fold. First of all, dystopian worlds are particularly fascinating because they are the manifestation of people's fears of the unknown future-- usually this unknown future is filled with government control and thought-policing. These fears become all the more frightening when people start to recognize the doom of Orwell's 1984 encroaching on our own society. The second reason I am drawn to dystopian literature is because I have a great love of children's literature, and the dystopian genre has taken off in young adult and children's literature. It is interesting to me that children have always been a part of dystopian stories (for example, in 1984, children turn in their parents for wrong-think), but now they are becoming the main characters.
This is where DH comes in. I'm interested in finding programs that will help me pinpoint references to children and the theme of childhood in dystopian novels. To do this, I will need a program that takes the text I put into it, and spits out visualizations. This is where DiRT Directory comes in.
DiRT Directory is something I learned about at THATcampDC, and it has changed the course of my research. DiRT means "Digital Research Tools," and this website serves as a collection of resources that are organized by category. Each entry on the site has an about page where a synopsis of the tool is given, as well as the link to download the tool. Here's an example.
First, on the home page, you must decide the kind of tool you need for your work:
For my purposes, I looked up Visualization. This next image shows the further options that appear when a category is selected. For "Platform," I chose "Windows." For "Cost," I chose "Free."
Here are some of the programs listed in the results for "Visualization + Windows + Free." There are many more than are pictured; this is just a sampling.
Gephi and Weave stuck out to me as potentially helpful to my work, so I clicked on them both. Here's what the description pages look like:
DiRT Directory is an excellent resource because it offers links to a multitude of programs that can be used in a multitude of ways. I decided to download Gephi because it seemed like it would be helpful, and I had heard the name tossed around at THATcampDC. With the help of DiRT Directory, I was able to pinpoint a resource for my work-- something that, prior to this point, had been a challenge.
Although Gephi looks daunting, I was able to harness the power of the internet to find tutorials and examples of how to get the most out of the program. Gephi's website is pretty straightforward in explaining the goals and uses of the program, and it served as a helpful jumping-off point.

Here is a fantastic step-by-step tutorial that I found, which imports the text of Les Miserables in order to analyze connections between characters. This link was particularly helpful to me because, although Gephi can be used to visualize all kinds of data, this is the kind of data I will be working with.
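For my own notes, here's a rough Python sketch of the preprocessing a tutorial like that implies: scan the text sentence by sentence, count how often two character names appear together, and write the result out as an edge-list CSV (Source, Target, Weight) that Gephi's spreadsheet importer can read. The character names and file name below are placeholders of my own, not anything taken from the tutorial itself.

```python
import csv
import itertools
import re
from collections import Counter

# Placeholder character list -- swap in the names from whatever novel you're studying.
CHARACTERS = ["valjean", "javert", "cosette", "marius", "fantine"]

def cooccurrence_edges(text, characters):
    """Count how often pairs of characters are mentioned in the same sentence."""
    pairs = Counter()
    for sentence in re.split(r"[.!?]", text.lower()):
        present = [c for c in characters if c in sentence]
        for a, b in itertools.combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

def write_gephi_csv(pairs, path="edges.csv"):
    """Write an undirected, weighted edge list that Gephi's spreadsheet importer can read."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source", "Target", "Weight"])
        for (a, b), weight in pairs.items():
            writer.writerow([a, b, weight])

sample = "Valjean carried Cosette. Javert pursued Valjean. Marius loved Cosette."
write_gephi_csv(cooccurrence_edges(sample, CHARACTERS))
```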
Here is a slightly more complex tutorial which includes information about the coding behind the program.
This next link is also a tutorial, but I am including it to show the kinds of visualizations that can be achieved by Gephi.
"Visualizing Historical Networks" is a group of projects hosted by the Center for History and Economics at Harvard University, which utilized Gephi to "map the way people in the past interacted with each other and their surroundings." I encourage you to peruse the site, the work is fascinating!
Gephi appears to be an incredibly helpful tool, and I'm excited to play around with it in my own research!
Monday, April 17, 2017
"That" Point in the Semester: Regrouping
Friends, we have reached that point in the semester where the struggle sets in hard. There are 24 days until the end of the semester (not that I'm wearily counting), and I've hit a rough patch in my independent study. Excuses aside, here's the long and short of it-- I missed my blog post last week and it's time to regroup.
During the first half of the semester, I extensively studied the reasons behind the question "Why DH?" and I think I answered them pretty conclusively. As the second half of the semester rolled around, particularly after the amazing THATcampDC, I started seeking out methodologies that I could employ in my own work. Thankfully, I now have a list of tools and resources that should be helpful in the next step-- and it's time to take that step.
I missed my blog last week because I was desperately trying to reformat these last few weeks of the semester so as to get the most out of them in terms of DH practice. I'm going to be using some of the programs I've learned about in my thesis work next year, and I think it will be cool to play around with a few of them at the end of this semester.
Websites such as DiRT Directory and The Programming Historian have proven to house invaluable resources for learning about the kinds of tools that suit my interests. It was also recommended that I look at tools such as Voyant, OpenRefine, and Gephi. Now that I have a new focus, I'm excited to finish off this semester strong!
In conclusion, this is the first of two blog posts this week. It's time to put the theory where the work is (Is that a saying? I guess it is now.) and put some practice into this Intro to DH course. Additionally, Hailey Carone (another graduate student here at Kean) and I will be presenting our studies and practice in the digital humanities to some of our peers in the Writing Studies M.A. program on May 1st, which will be a fun culmination of the semester.
See you guys in a few days!
Monday, April 3, 2017
Virtual Stacks?
This week's chapter of A Companion to Digital Literary Studies is Chapter 29, "The Virtual Library" by G. Sayeed Choudhury and David Seaman. Choudhury and Seaman highlight the vast amount of resources that we now have at our disposal, thanks to the internet, so that anyone interested in learning is welcome to learn. Libraries now have access to digital copies of books that they might only have dreamed of holding in the past. We have scans of magazines and periodicals at our fingertips that date back to long before the internet was even a far-off dream. A student in a tiny school in western Pennsylvania can have access to a multitude of works that might totally shape their field of interest, or bring them into a new world. Stepping away from academia, children around the world need only find a computer and they can learn anything-- we are no longer bound by the physical.
This also ties in nicely to my readings on digital humanities because the people who make these books available are digital humanists. The books available online were once painstakingly typed, letter by letter, by a person who wanted to catalog them online for future learners. That is an incredibly selfless, and incredibly tedious, task to take on.
So, why do it? Why bother?
We bother because books are important and the internet is where the future is headed. If we want history recorded and accessible, if we want to make the most of the tools we have, it's our responsibility to keep the virtual library in existence as a living, breathing organism.
This chapter was written nine years ago, and it's interesting to see how much has changed since then. Choudhury and Seaman speak about the prevalence of online journals, which have only become more prevalent as the years have gone on. Now, scholars publish their articles online as a matter of course, and they certainly make their books available online as well. Much as we all love (and should preserve) traditional libraries, the traditional building simply cannot hold the vast amount of knowledge being produced every day by scholars with the world at their fingertips.
Freely available collections are another novel feature of the virtual library. Although copyright laws complicate matters, many works have become legally available online. Within a few clicks, anyone, anywhere in the world, can have the information they need. Even if a book is still under copyright and cannot be read online in full, we now have applications like Google Books which allow one to search, locate, and obtain the book in a matter of minutes.
The library tends to keep up with such developments and is a natural and willing partner with the humanities departments as they explore the possibilities such tools have for data mining and the display of results. Add to these software packages the blogs, wikis, and virtual communities that are being adopted, the digital tools for collaborative scholarship, for innovative ways of interrogating text, and for new teaching possibilities, and it is not difficult to see increasing potential for transformative change in the way that literary scholars research, publish, and teach.
Although I am fully in support of the idea of "the library as laboratory," I have learned that there is a specific way in which this should be achieved. Sitting in a session at THATcampDC with a group of librarians, I came to understand that just as the digital world is changing, the perception of the librarian's role must change as well. Librarians must be equipped with the skill set and support they need in order to move the library along with the times. Enough librarians must be hired to help in all aspects of the virtual library, and this requires a restructuring of skills and strengths.
It is no longer acceptable for the humanities to be considered "data poor," but it's up to the humanists to change this perception. As Choudhury and Seaman note, "the humanities are rich with content that is difficult to extract into digital form." This becomes all the more notable when you consider that the humanist's "data" has gone uncharted for most of history. We have generations of data to work with, and the sooner we start, the better.
As for literary studies, the "traditional" form of publishing — the monograph — influenced the way in which research has been conducted and conveyed. With new avenues for publishing, it is possible, even probable, that humanists will begin to explore new forms of research and dissemination.

The natural flow of life is going digital. Shouldn't we follow in its path?
Monday, March 27, 2017
Multiplying Knowledge
Now that THATcamp has passed, it's time to get back to my articles!
If you click this link to my syllabus, you'll see that I'm reaching the end of the track I laid out at the beginning of the semester. As I thought might happen, this independent study has taken me far beyond where I expected, and introduced me to people and resources I didn't know about at the start. At THATcampDC I learned about several resources that I may spend the rest of the semester exploring, and I think it would be very interesting to end with some hands-on work with the tools and methodologies I've learned about. If you're reading this and have any ideas for readings that would be beneficial for me to check out, please drop me a line in the comments!

This week, my reading selection is "'Knowledge Will Be Multiplied': Digital Literary Studies and Early Modern Literature" by Matthew Steggle.
In his chapter, Steggle defends the interpretation of data gathered through the use of DH methodologies. As I've summarized in previous posts, many scholars are wary of using digital tools in the English classroom, and this should not be the case. At one point, it may have been argued that obtaining electronic copies of literature was hard, inaccessible work, but at this point in time, websites such as Project Gutenberg exist and allow scholars to access a huge number of texts online. With some of the challenge gone, isn't it worth looking into the knowledge that could be mined through a new methodology?
DH could also bring a new kind of student into the English department. Steggle quotes Risa Bear: "I became interested in producing texts for internet distribution as an alternative to writing term papers." If we can keep students interested in literature and allow them to venture into new disciplines within the field, the study of English literature will only grow in strength.
It's also worth noting that DH is a community effort. In the case of electronic literature, scholars depend on one another to type up and format entire books, so that the typed files can be fed into programs for their own purposes. Steggle notes Bear's transcription of The Faerie Queene, completed in 1995, as one of many huge additions to the transcribed canon. Much of the academic world appears to be "every man for himself," and perhaps things don't have to be reduced to that. Perhaps DH can unify people and help us work together to meet our goals. I think it's worth a shot.
The Internet Shakespeare Editions (ISE) is a prime example of the good intentions that can lead to books being made available online. Its leader, Michael Best, defined the goal of its creation as follows:
to create a website with the aim of making scholarly, fully annotated texts of Shakespeare's plays freely available in a form native to the medium of the internet. A further mission was to make educational materials on Shakespeare available to teachers and students: using the global reach of the internet, I wanted to make every attempt to make my passion for Shakespeare contagious.

Another example of the good that can arise from this movement is the Interactive Shakespeare Experiment, which contains hotlinked annotations that appear in another window. The reader has the choice to click on these links as they appear, in order to read notes of criticism on the text.
The people who work on DH projects, especially those transcribing texts and compiling lists, do selfless and labor-intensive work which deserves to be recognized and hailed for the treasure that it is. In ensuring that documents are available online to the average scholar, they have opened up the academic world.
Toward the end of his article, Steggle speculates that blogs may be the next jump in the academic community. Perhaps this is a bit meta of me, but I think he may be onto something. Looking at the unexpected trajectory academia has taken, it's entirely possible that blogs will one day be used as tools, or mined to tell the future about the past. After the developments we've seen in academia since the dawn of DH, I wouldn't be surprised. All in all, the title of this chapter is perfect-- "Knowledge Will Be Multiplied." It certainly appears that this is the case!
THATcampDC
This past weekend, I had the privilege of attending THATcampDC at George Washington University in Washington, D.C. The weather was beautiful, the cherry blossoms were blooming, the conversations were stimulating, and I am so grateful I had the ability to meet so many interesting people who also were passionate about DH!
After meeting up in the morning and deciding on a schedule of sessions for the day, based on what people were interested in discussing, we ~75 attendees went off to pursue our respective interests. Excluding an hour set aside for "Dork Shorts," in which attendees were given 2.5 minutes to talk about their current passion projects, the day was broken into four hour-long sessions. Based on my interests, I attended sessions on using WordPress.com in academic settings and on DH training and support for librarians, as well as a much-needed session on DH tool sharing.
What impressed me more than anything about this day was how open people were to discussion and how willing they were to share their knowledge. As a total newcomer to DH, I walked in knowing the bare minimum but wanting, desperately, to learn. In the morning idea-generation session, I expressed the desire to learn about distant reading in particular and, although a session wasn't formed, two kind individuals offered to help me out and I was able to make valuable connections.
I left GWU with a list of tools to check out, as well as the email addresses of a few people who seem absolutely wonderful. It was a fantastic experience, and I would be excited to attend another THATcamp in the future!
Monday, March 20, 2017
Digital Humanities: "Tools to Think With"
This week I'll be turning back to look at what can be achieved through use of Digital Humanities methodologies, and the first article I'll be reading is "Stylistic Analysis and Authorship Studies" by Hugh Craig.
The concept of using technology to identify stylistic patterns in a corpus is fascinating-- and it would have been unthinkable in earlier eras, when, as Craig notes, "with no way of assembling and manipulating counts of word-variables, researchers in earlier periods assumed that very common words varied in frequency only in trivial ways among texts." Now, however, a computer can be programmed to detect intricacies that go unnoticed by human beings, and DH methodologies introduce a whole new world of possibilities to linguistic study. On this note, Craig defines computational stylistics as follows:
Computational Stylistics aims to find patterns in language that are linked to the processes of writing and reading, and thus to "style" in the wider sense, but are not demonstrable without computational methods.

In other classes in the Kean English and Writing Studies M.A. program, we have had extensive discussions on the tragedy that occurs when writing is limited to the English classroom, as well as the misfortune English departments suffer in being compartmentalized as "literature people." This chapter offers the perfect counter to that issue, because stylistics reaches well beyond literature: in the same breath that Craig explains how stylistics can be used to study associations in Shakespeare's plays, he notes that it also matters in courtrooms, where it helps settle questions of disputed authorship. Who says that English majors sit around analyzing literature all day?
That being said, I'm an English literature geek and analyzing literature is important to me. Craig exemplifies one potential use of stylistics by analyzing Shakespeare's plays in order to study associations and differences based on dialogue. I found this study fascinating, and I'm excited to learn more about DH methodologies because this is exactly how I'd like to tie the tools into my own work. I don't know if I'm communicating how cool I think this is--- this is SO COOL. Shakespeare lived roughly 450 years ago. His works have been analyzed a million different ways, and it's only now, in our lifetime, that we have the technology to produce entirely new scholarship. And this isn't just about Shakespeare-- as I mentioned earlier, we can use the computer to make so many things happen. This might just revolutionize the English department.
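To make "counts of word-variables" a little less abstract for myself, here's a toy Python sketch-- mine, not Craig's-- that compares how often a few very common function words occur, per thousand words, in two short samples:

```python
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "it", "was"]

def rates_per_thousand(text):
    """Relative frequency (occurrences per 1,000 words) of some common function words."""
    words = [w.strip('.,;:!?"').lower() for w in text.split()]
    counts = Counter(words)
    return {w: round(1000 * counts[w] / len(words), 1) for w in FUNCTION_WORDS}

sample_a = "It was the best of times, it was the worst of times."
sample_b = "The quick brown fox jumps over the lazy dog and runs to the river."
rates_a, rates_b = rates_per_thousand(sample_a), rates_per_thousand(sample_b)
for word in FUNCTION_WORDS:
    print(word, rates_a[word], rates_b[word])
```

Craig's point, as I understand it, is that patterns in exactly these unglamorous little words-- counted across whole corpora rather than two sentences-- turn out to carry stylistic signal.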
As is the case with everything, there is always room for error, especially considering that all of this work still comes down to human interpretation. Craig allows the consideration that "at best, [this methodology provides] a powerful new line of evidence in long-contested questions of style; at worst, an elaborate display of meaningless patterning, and an awkward mismatch between words and numbers and the aesthetic and the statistical." Something worth considering, but not something that should keep us from trying!
This all being said, every point has a counterpoint, so enter the challenger: Stanley Fish. Ah, Stanley Fish. Is it an English department discussion without Fish? I'd argue not. Craig mentions Fish in conjunction with a discussion of the challenges raised against stylistics. Scholars such as Fish see a fundamental flaw in considering only stylistics when analyzing a text, as he feels that this divorces meaning from the text:
[Fish believes that] when abstracted from this setting [stylistics] refer to nothing but themselves and so any further analysis of patterns within their use, or comparison with the use of others or in other texts, or relating of them to meaning, is entirely pointless. Fish does not deny that there may be formal features which can enable categorical classification such as authorship (and so he is less hostile to authorship studies using computational stylistics), but he insists that these features cannot offer any information of interest about the text – just as a fingerprint may efficiently identify an individual but reveal nothing about his or her personality.

Craig respectfully nods to Fish's genius as a humanist and scholar, and recommends Fish as crucial reading for anyone interested in stylistics, so that the young scholar is made aware of potential pitfalls along the way of research. However, he also notes that stylistics is not necessarily anti-humanist. Stylistic clues can reveal interesting facts about a text, but it's important that the researcher doesn't get caught up in the pieces of the puzzle and forget the big picture.
In general, stylistics can reveal trends and ideas worthy of study. Craig points out that "its methods allow general conclusions about the relationship between variables," and this isn't something that should be dismissed easily. Likewise, stylistics shouldn't be thought of as merely quantitative: as Craig argues throughout his chapter, qualitative insight can be derived from the numbers. I liked how he explained the field in this particular paragraph:
There is a strong instinct in human beings to reduce complexity and to simplify: this is a survival mechanism. Rules of thumb can save a great deal of time and effort. Stylistics is born of this instinct. What seems hard to explain in the individual case may be easier to understand when it is seen in a larger context. If these elements can be put in tabular form, then one can harness the power of statistics and of numerous visualizing and schematizing tools to help in the process of finding patterns.

Craig splits off in the latter part of the chapter to discuss humanities computing in relation to the question of authorship, making note of Shakespeare, Homer, and the Bible to show the kinds of things that may be clarified through computing. Any English major, whether they agree or not, can tell you about the debate over whether Shakespeare wrote every work attributed to his name. Although the methodology is not infallible, humanities computing may provide answers, or at least new ways of studying the question. Looking into such questions through a new lens is fascinating, and who knows what ideas may be derived from the new tools we have at hand!
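And here, equally hedged, is the attribution idea in its barest form: build a function-word profile for each candidate author and assign the disputed text to whichever profile it sits closest to. Real studies are far more careful about normalization, sample size, and statistics; this is just the skeleton, sketched by me rather than taken from Craig.

```python
from collections import Counter

MARKERS = ["the", "and", "of", "to", "a", "in", "is"]

def profile(text):
    """Relative frequencies of the marker words in a text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in MARKERS]

def distance(p, q):
    """Plain Euclidean distance between two frequency profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def attribute(disputed_text, candidates):
    """Return the candidate whose known writing has the nearest profile to the disputed text."""
    target = profile(disputed_text)
    return min(candidates, key=lambda name: distance(target, profile(candidates[name])))

# Placeholder samples standing in for large bodies of known writing.
candidates = {
    "Candidate A": "the sea was grey and the wind came in from the north all that morning",
    "Candidate B": "to know a thing is to name it and to name it is to hold it in the mind",
}
print(attribute("the rain came in from the sea and the grey morning held", candidates))
```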
--
The second chapter I've chosen for this week is "Print Scholarship and Digital Resources" by Claire Warwick. In this article, Warwick gives the best term I've yet found to describe the DH methodologies: "tools to think with." (I liked it enough to make it the title of this post!)
DH methodologies certainly are tools, and it's for this reason that scholars should not be afraid. One fear that permeates the literary world is that DH is going to knock out the old methods of study, and Warwick maintains that this is not, and will never be, the case. To support her argument, she points back to the 1990s, when people feared that the book was in danger of going extinct. I myself remember the mid-2000s, when e-books became popular and people, myself included, fretted that this was the end of paper books. In 2017, we see that this is not the case. The world is changing and chain booksellers are struggling, but the paperback is not going anywhere. In the same way, Warwick argues, we should not worry about the traditional methodologies.
The computer is a tool and should be used accordingly, but, sadly, many critics refuse to try. In a particularly interesting paragraph, Warwick documents how scholars are presented with the wonders that technology can produce, yet those wonders rarely speak to their own day-to-day practice, and she suggests that this may be due more to disinterest and misunderstanding than to fear. Rather than "apparent conservatism," it may be that "Users have been introduced to all sorts of interesting things that can be done with computer analysis or electronic resources, but very few of them have been asked what it is that they do, and want to keep doing, which is to study texts by reading them." If reading is still important, the technology should serve the reader, just as it serves the programmer and the coder; "computers are of most use when they complement what humans can do best." Computers cannot do everything, and programs are limited. As Warwick points out, a computer cannot recognize figurative use of language, and different people may interpret figurative language differently. This is where human interaction with the data becomes of utmost importance.
Throughout the rest of her article, Warwick continues to defend both DH and the importance of human interpretation, and she makes a compelling case for marrying the two together to move English departments into the future-- that is, if scholars are brave enough to take the plunge.
Friday, March 17, 2017
Text Markup and THATcamps
As I mentioned in last week's blog post, text markup is incredibly complicated and incredibly technical. Having a very basic understanding of coding, I understand the theory behind the work that goes into building markup tools; however, for my purposes, I believe that examples of practical application would be most helpful.
I found myself running into a wall this past week as I attempted to figure out the best way to learn more about TEI and text markup. I'm excited about these methodologies and I've already considered using them in my thesis next semester. However, understanding theory will only go so far-- I need practice. There are several articles that teach the theory but, when it comes to learning the programs that implement it, the field is pretty DIY. That all being said, to take a step toward solving my problem I've registered for a THATcamp taking place in Washington, D.C. on the weekend of March 25th, and I'm very excited to connect with others in the field. It'll be great to meet people interested in DH, and I'll certainly blog about my experience afterwards!
This blog is going to be two parts this week. Because I'm piggybacking off of my last two blogs, the reading material I have for this post is one main article and two resources, which have been helpful to me in understanding the building blocks of text markup.
First up, the article! My main reading for this week was “The Text Encoding Initiative and the Study of Literature” by James Cummings.
Cummings introduces his article with a brief history of the Text Encoding Initiative (TEI) by introducing some of the guidelines and sponsors that together make up the initiative. He states the chapter's thesis as follows:
This chapter will examine some of the history and theoretical and methodological assumptions embodied in the text-encoding framework recommended by the TEI. It is not intended to be a general introduction...nor is it exhaustive in its consideration of issues...This chapter includes a sampling of some of the history, a few of the issues, and some of the methodological assumptions...that the TEI makes.

It is still fascinating to me that the TEI is such a young endeavor. According to Cummings, it was formed at a conference at Vassar College in 1987, and very few of the principles established at that time have changed. This is exciting because the field is new and accessible-- the people who dive in are free to determine how the tools are used.
I've chosen this article because I feel that it's important not only to have a grasp of the technologies, but also to understand the history. The article includes technical language relating to different markup languages, SGML (Standard Generalized Markup Language) and XML (Extensible Markup Language), explains the history of these languages, and describes how they are used. I was interested in Cummings's explanation of the transition from GML (Generalized Markup Language), a noted "milestone system based on character flagging, enabling basic structural markup of electronic documents for display and printing," to SGML, which was "originally intended for the sharing of documents throughout large organizations." As time went on, SGML proved not universal enough, and XML was adopted and is still used today because of its flexible nature.
XML has been increasingly popular as a temporary storage format for web-based user interfaces. Its all-pervasive applicability has meant not only that there are numerous tools which read, write, transform, or otherwise manipulate XML as an application-, operating-system- and hardware-independent format, but also that there is as much training and support available.

Throughout the article, Cummings highlights the key points and goals of the TEI. The design goals section examines the standards set for the TEI to be as straightforward and accessible as possible for anyone interested in learning the text-encoding methodology. He also examines the community-centric nature of the TEI, and the emphasis on keeping the field open and collaborative. I'm excited to be coming into the academic world at this time: although DH has a distinct technological learning curve, I'd rather face that curve in a community setting than in the traditional closed-off world of academic hazing.
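Since XML is the TEI's current carrier format, here's a tiny illustration I put together: a few lines of TEI-flavored XML embedded in Python and read back with the standard library's XML parser. The element names follow common TEI conventions, but the fragment itself is my own toy example, not one from Cummings's article.

```python
import xml.etree.ElementTree as ET

# A toy TEI-style fragment: a paragraph with a date and a place name marked up.
tei_fragment = """
<div type="chapter">
  <p>The initiative was formed in <date when="1987">1987</date> at a conference
     held at <placeName>Vassar College</placeName>.</p>
</div>
"""

root = ET.fromstring(tei_fragment)
for elem in root.iter():
    if elem.tag in ("date", "placeName"):
        print(elem.tag, "->", elem.text, elem.attrib)
```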
Cummings also discusses the user-centric nature of the TEI. Due to the community-based nature of the field, it must deliver what users of all different disciplines need. This can be a challenge, but it also exemplifies the versatile nature of the beast. As I have explained, I'm interested in using text markup and the TEI in order to see what they can uncover about texts that have been close-read to death. In the field of literature, we all know close reading; we all know how to compare the elements of a book. I want to take this to the next level-- I want to see what technology can show me, and I want to learn how to use the programs.
Cummings explains that the TEI may have been influenced by New Criticism, a school of literary criticism with which I am quite familiar, and he argues that the TEI, instead of reacting against this structuralism, as many poststructuralists might desire, is in fact compatible with it: he describes "the TEI's assumptions of markup theory as basically structuralist in nature as it pairs record (the text) with interpretation (markup and metadata)." This is something I would like to delve into further, because I can understand both sides of the New Criticism comparison.
I highly suggest reading this article, as Cummings successfully accomplishes what his thesis statement proposes. I came away feeling as if I had learned the key points in the history of the TEI without being drowned in technical conversation. I am increasingly interested in learning to code, as I am amazed by the things we can achieve with computer programs.
If you are interested in the technical side, the Stanford University Digital Humanities department's website includes many helpful resources, particularly "Metadata and Text Markup," which further explains buzzwords and phrases in the field, and "Content Based Analysis," which explains more about text content mining.
Additionally, the TEI website has several helpful links that may take one down many rabbit holes. I got stuck for a long while going through project examples which use TEI encoding.
Tuesday, February 21, 2017
Data and Dimension- Pt. 2
Welcome back! If by some odd chance you've ended up here first, please click here to see part 1 of this blog post.
---
The second text that I'll be exploring this week is "Marking Texts of Many Dimensions" by Jerome McGann. Within the first few paragraphs of this article, I'm already lost in the meta nature of the text:
Consider the phrase "marked text"...How many recognize it as a redundancy? All text is marked text, as you may see by reflecting on the very text you are now reading. As you follow this conceptual exposition, watch the physical embodiments that shape the ideas and the process of thought. Do you see the typeface, do you recognize it? Does it mean anything to you, and if not, why not? Now scan away (as you keep reading) and take a quick measure of the general page layout: the font sizes, the characters per line, the lines per page, the leading, the headers, footers, margins. And there is so much more to be seen, registered, understood simply at the documentary level of your reading: paper, ink, book design, or the markup that controls not the documentary status of the text but its linguistic status. What would you be seeing and reading if I were addressing you in Chinese, Arabic, Hebrew – even Spanish or German? What would you be seeing and reading if this text had been printed, like Shakespeare's sonnets, in 1609?

Alright McGann, you have my attention.
This chapter in Blackwell's Companion is, much like "Databases," incredibly technical, although I expected this when I selected and paired the two together. As I mentioned above, DH is a technical field and I think it's important to have an introductory backbone that addresses how technical it can be.
Text markup involves the breakdown of language into words and even smaller units, in order to analyze the bits that work together to communicate ideas. This may sound a bit scientific, and don't be mistaken: it is. In fact, McGann compares it to physics.
Words can be usefully broken down into more primitive parts and therefore understood as constructs of a second or even higher order. The view is not unlike the one continually encountered by physicists who search out basic units of matter. Our analytic tradition inclines us to understand that forms of all kinds are "built up" from "smaller" and more primitive units, and hence to take the self-identity and integrity of these parts, and the whole that they comprise, for objective reality.

I might even compare it to chemistry, studying the molecules that make up compounds in order to understand why the compounds act as they do. How interesting, to compare that to words.
In text markup, the objective is to instruct the computer to identify the basic elements of natural language text, in order to understand the redundancy and ambiguity that are inherently tied to language. This is not unlike the example database Ramsay analyzes in the first article: he spends a good bit of time examining the redundancies caused by errors in the programming of his data. In this article, however, we are finally introduced to the TEI, or Text Encoding Initiative, which I've come to learn is a major player in DH methodologies. The TEI system is, according to McGann, "designed to 'disambiguate' entirely the materials to be encoded."
This is still very murky and confusing. Luckily, McGann backtracks a bit and explains what he calls "traditional textual devices," in order to later unpack the intricacies of TEI and SGML-- standard generalized markup language, the overarching framework from which markup languages derive. The power of traditional textual devices lies in their ability to make records of their progress and process those records without the use of technology.
A library processes traditional texts by treating them strictly as records. It saves things and makes them accessible. A poem, by contrast, processes textual records as a field of dynamic simulations. The one is a machine of memory and information, the other a machine of creation and reflection... Most texts – for instance, this chapter you are reading now – are fields that draw upon the influence of both of those polarities.

SGML, on the other hand, looks at texts through the scope of data and coding, and uses these tools to process and record-- although the use of the tools requires a humanist to curate the work. TEI, more specifically, can be programmed to focus directly on things that stand apart and to mark them as different, so the humanist can later come in and analyze the meanings that may be tied to them.
It's at this point I realize I'm going to need to dial it back and come back to this article. The syllabus that I've been compiling for this course is an ever-changing being and, after getting about halfway through this article, I see that I need to find a more basic explanation of TEI and SGML. Despite reading through the article several times, I feel completely lost-- which just means I need to learn more, and go down another rabbit hole.
McGann does pull my interest back when he applies markup to the poem, "The Innocence," by Robert Creeley. Although I am murky on how markup is done, McGann's six readings through the text showed the different elements that come to light through markup, which wouldn't be immediately obvious otherwise.
Of his choice of this poem, McGann explains:
I choose "The Innocence" because it illustrates what Creeley and others called "field poetics." As such, it is especially apt for clarifying the conception of the autopoietic model of textuality being offered here. "Composition by field" poetics has been much discussed, but for present purposes it suffices to say that it conceives poetry as a self-unfolding discourse. "The poem" is the "field" of action and energy generated in the poetic transaction of the field that the poem itself exhibits. "Composition by field", whose theoretical foundations may be usefully studied through Charles Olson's engagements with contemporary philosophy and science, comprised both a method for understanding (rethinking) the entire inheritance of poetry, and a program for contemporary and future poetic discourse (its writing and its reading).As I came to the end of this chapter, I found the appendices to be helpful in unraveling some of the more complex parts of the discussion but, overall, I think that I need to find a more basic reading that will start from square one of markup, so that I'll be able to build a stronger base understanding of the methodology. I knew I was in for a lot in this field of study, and the intricacies haven't scared me off yet!
As was the case with "Databases," I'm going to need some examples or practical application because the theory is quite dense, but it's incredible to know the things that can be accomplished when technology is married to the humanities. It certainly seems that the digital humanities use both sides of the brain, fusing logical and creative, to create something entirely new.
--
As a side note to this blog, it's going to be really funny when I read this later on, once I have a deeper understanding of DH, and see how much I'm struggling to unpack all of the theory.
Monday, February 20, 2017
Data and Dimension - Pt. 1
Going into the field of DH, one must be just as aware of the "Digital" side as of the "Humanities" side. The digital end of digital humanities is no small matter, and there's quite a bit of work that goes into tools for textual analysis. This week, I'll be walking through two articles from Blackwell's A Companion to Digital Humanities, both of which talk about the technological side.
A few semesters back, I decided to learn about coding. I took lessons at Codecademy and learned a little about HTML and CSS, and I loved it. It was amazing to learn a little about the "language" of computers, and I honestly wish I had had this opportunity when I was younger. Part of what draws me to DH is the opportunity to learn to use technology in a way that marries it to my first love, literature.
The reason I bring up coding is that it, much like databases, is so incredibly complicated. Having taken a few lessons on basic coding, I know a fraction of a fraction of what goes into making one single webpage. I get the same feeling from this reading on databases: they can be quite simple, but so much goes into making a truly responsive database.
To delve more into that point we have our first article, "Databases," by Stephen Ramsay. Databases have existed, in one form or another, for a long time and serve as a way to categorize and store data for easy retrieval. Computerized databases add another element to the mix: a need for systems that "facilitate interaction with multiple end users, provide platform-independent representations of data, and allow for dynamic insertion and deletion of information." Databases play a large role in the DH, as the compilation of data can aid in charting relationships and themes throughout a number of books, or fields of data. Although this, as previously discussed, may seem daunting to the humanist, it is actually quite an exciting addition to the field. As Ramsay notes:
The most exciting database work in humanities computing necessarily launches upon less certain territory. Where the business professional might seek to capture airline ticket sales or employee data, the humanist scholar seeks to capture historical events, meetings between characters, examples of dialectical formations, or editions of novels; where the accountant might express relations in terms like "has insurance" or "is the supervisor of", the humanist interposes the suggestive uncertainties of "was influenced by", "is simultaneous with", "resembles", "is derived from."

The first model of database discussed in this article is the relational model, which studies relationships between databanks-- or sets of data. The man who first proposed this model reasoned that a database "could be thought of as a set of propositions…and hence that all of the apparatus of formal logic could be directly applied to the problem of database access and related problems." Sounds logical to me!
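Just to get my head around "data as propositions," here's a toy illustration of my own (this is not from Ramsay's chapter): each tuple asserts a humanist-flavored fact, and a "query" is nothing more than a logical condition applied over the whole set.

```python
# A toy illustration of a database as a set of propositions.
# Each tuple asserts a fact; queries are logical conditions over those facts.

# Relation: influenced(a, b) -- "author a was influenced by author b"
influenced = {
    ("Toni Morrison", "William Faulkner"),
    ("Cormac McCarthy", "William Faulkner"),
    ("Zadie Smith", "E. M. Forster"),
}

# Relation: wrote(author, work)
wrote = {
    ("William Faulkner", "Absalom, Absalom!"),
    ("Toni Morrison", "Beloved"),
}

# "Selection": which authors were influenced by Faulkner?
faulkner_heirs = {a for (a, b) in influenced if b == "William Faulkner"}

# "Join": pair each influenced author with the works of their influence.
pairs = {(a, work) for (a, b) in influenced for (w, work) in wrote if w == b}

print(faulkner_heirs)
print(pairs)
```

In a real relational database the sets become tables and the conditions become SQL, but the underlying logic is the same.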
Databases are quite complicated entities, and I'm going to try to keep my secondhand explanation as simple as possible. Ramsay delves into the finer points of what goes into a database and, at its most simple, a database includes a system which can store and query data, as well as one that can answer simple questions by linking together stored data. Simple as these systems are, databases can get quite large, and that's when the algorithms start getting more complicated.
Ramsay goes into the different categorizations of data in his example of a database that stores information about current editions of American novels. By showing the problems that can arise from the most simplistic categorizations, he explains that there are other ways in which data can be categorized that are more complicated but yield better results. Further, there are different ways in which data can be compared, which complicates things even more. In his American-novels example, he talks about relating one author to many works (1:M), or many publishers to many works (M:M), and how the system would logically go about making the calculations needed to give a result.
Are you still with me? It's going to get much more technical. The next subject to be discussed is schema design. This is where we get into programming the database schema, which is created using Structured Query Language, or SQL. The best, least daunting way I can think to describe it is by saying that, in using SQL, the user is telling the machine what to do. The humanist tells the computer what they want it to do with the data they will be using. Even though it involves a lot of code, it's somehow less daunting if you think of it as giving directions. Below is a sketch of the kind of basic structure that will later be filled in with data.
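Here's my own rough stand-in for the kind of basic structure Ramsay describes-- the table and column names are my guesses, loosely modeled on his American-novels example, and I'm using Python's built-in sqlite3 module only so that the whole thing can be run in one go.

```python
# A rough stand-in (my guesses, not Ramsay's actual figure) for a basic
# schema: authors, works, publishers, and the tables that connect them.
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway in-memory database
cur = conn.cursor()

# One author can have many works: a 1:M relationship via author_id.
cur.execute("""
    CREATE TABLE authors (
        author_id     INTEGER PRIMARY KEY,
        last_name     VARCHAR(80),
        first_name    VARCHAR(80),
        year_of_birth INTEGER,
        year_of_death INTEGER
    )
""")

cur.execute("""
    CREATE TABLE works (
        work_id   INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(author_id),
        title     VARCHAR(80),
        pub_year  INTEGER
    )
""")

# Many publishers issue many works: an M:M relationship needs a linking table.
cur.execute("""
    CREATE TABLE publishers (
        publisher_id INTEGER PRIMARY KEY,
        name         VARCHAR(80),
        city         VARCHAR(80)
    )
""")

cur.execute("""
    CREATE TABLE editions (
        work_id      INTEGER REFERENCES works(work_id),
        publisher_id INTEGER REFERENCES publishers(publisher_id)
    )
""")

conn.commit()
```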
Ramsay explains this next part well, so I'm going to direct you to him for this bit:
Like most programming languages, SQL includes the notion of a datatype. Datatype declarations help the machine to use space more efficiently and also provide a layer of verification for when the actual data is entered (so that, for example, a user cannot enter character data into a date field). In this example, we have specified that the last_name, first_name, title, city, and name fields will contain character data of varying length (not to exceed 80 characters), and that the year_of_birth, year_of_death, and pub_year fields will contain integer data. Other possible datatypes include DATE (for day, month, and year data), TEXT (for large text blocks of undetermined length), and BOOLEAN (for true/false values). Most of these can be further specified to account for varying date formats, number bases, and so forth. PostgreSQL, in particular, supports a wide range of datatypes, including types for geometric shapes, Internet addresses, and binary strings.

There's a lot more technical discussion in this article that I struggle to explain in my own words, so I direct you to the text if you're interested in learning more about the programming side of SQL. What I've come to see is that it's incredibly interesting and incredibly precise. Much the same as coding, there is a fine art to speaking the language of the computer and communicating effectively. I'd be excited to take a lesson in this and have hands-on instruction. I'm hoping for a THATcamp in my area, as I think this would be the best opportunity to learn from others.
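And once a structure like that has been filled in, asking it a question comes down to a single SELECT statement. Here's another small, self-contained sketch of my own (the titles and dates are just a few real novels I picked for illustration):

```python
# A tiny follow-on sketch: insert a few rows, then ask the database a question.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE works (title VARCHAR(80), pub_year INTEGER)")
cur.executemany(
    "INSERT INTO works (title, pub_year) VALUES (?, ?)",
    [("The Scarlet Letter", 1850), ("Moby-Dick", 1851), ("Little Women", 1868)],
)

# "Which of these novels appeared before 1860?"
for title, year in cur.execute(
    "SELECT title, pub_year FROM works WHERE pub_year < 1860 ORDER BY pub_year"
):
    print(title, year)
```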
Ramsay broaches the discussion of data management in the last few paragraphs of his article. With great power comes great responsibility, so to speak, and anyone who has ever played around with HTML can tell you that the smallest error can throw off a large amount of work. The same is true of database management, and Ramsay suggests giving full access to very few people, for the sake of data and code security. After all, not many people need full access to an entire system. The less room for error, the better.
As we can see from this brief introduction to databases, there's a lot that goes into database programming, and there's certainly a learning curve that is not easily overcome. Luckily, there are a ton of resources to help the aspiring learner. Ramsay cites three of the most commonly used SQL tools-- MySQL, mSQL, and PostgreSQL-- as helpful options for those interested in using this methodology.
Because the readings this week are so dense, I'm going to split up the Great Wall of Text and direct you here for part two of this week's blog post!
Monday, February 13, 2017
Why Digital Humanities?
At last, I begin my first week of reading the articles I have compiled in my reading list-- a very exciting time.
Two of the readings this week will come from Blackwell's Companion to Digital Humanities, and the third is an article that I believe will be helpful in integrating DH into the English program curriculum. Without further ado, on to the first reading!
"The Digital Humanities and Humanities Computing"
Our first selection is written by Susan Schreibman, Ray Siemens, and John Unsworth, and it's a helpful introduction to the Blackwell companion, explaining some of the "hows" and "whys" behind the DH field. Although the field is immense, there is an overarching goal of using the technologies offered by the Digital Age to help researchers in their quest for knowledge.
The "Digital Age" (or "Information Age") has taken the world by storm, and scholars have decided that it's time to integrate new technologies into fields that have become tired and worn out after years of ceaseless analysis. However, this storm has been met with resistance by many because if there's one thing scholars like, it's time -honored tradition. The editors reflect on this in the following selection:
Thomas documents the intense methodological debates sparked by the introduction of computing in history, debates which computing ultimately lost (in the United States, at least), after which it took a generation for historians to reconsider the usefulness of the computer to their discipline. The rhetoric of revolution proved more predictive in other disciplines, though – for example, in philosophy and religion. Today, one hears less and less of it, perhaps because (as Ess notes) the revolution has succeeded: in almost all disciplines, the power of computers, and even their potential, no longer seem revolutionary at all.

Luckily for us all, the DH field has thrived and opened up countless new opportunities for study. The next obstacle to overcome is learning to use the various tools, and this is no small challenge. However daunting these tools might initially seem, the editors are quick to point out that the purpose of the DH is to weave the methodologies together with practical application. Much like traditional methodologies, the purpose is to use the tools at hand to discover new things about the field being explored. Although the tools and methodologies are important, the results are just as crucial.
The growing field of knowledge representation, which draws on the field of artificial intelligence and seeks to "produce models of human understanding that are tractable to computation" (Unsworth 2001), provides a lens through which we might understand such implications.

The editors and their cited sources conclude that the computational techniques and resulting data structures can have a great deal of impact on the way we interpret "human information." They close their introduction by dwelling on the powerful nature of the DH, positioning it next to other time-honored forms of inquiry and suggesting that, due to the power of the analytics at hand, it may prove itself to be more powerful than any we have seen.
"Literary Studies"
With such a grandiose introduction, I'm expecting greatness from this book, and from this field in general. Now, I'll move on to the next chapter, "Literary Studies," by Thomas Rommel, which will hopefully narrow down this broad field.
In the course of the article, Rommel discusses the wide range of opportunity that has been granted to the humanities by the introduction of technology into the realm of criticism. He details how, since its birth in the 1960s and 70s, electronic media has changed the nature of the classroom. Once upon a time, scholars and students alike were limited to a set number of textual sources-- relegated to however many they could feasibly access and read in order to draw conclusions. This is not to deny the great critical works that were born in this time, but the rise of the internet has granted us the birth of a new age:
The "many details", the complete sets of textual data of some few works of literature, suddenly became available to every scholar. It was no longer acceptable, as John Burrows pointed out, to ignore the potential of electronic media and to continue with textual criticism based on small sets of examples only, as was common usage in traditional literary criticism: "It is a truth not generally acknowledged that, in most discussions of works of English fiction, we proceed as if a third, two-fifths, a half of our material were not really there" (Burrows 1987).Another interesting fact that I came across is that, with the rise of technology, we have begun to move away from close reading, a methodology now firmly linked to traditional criticism. Close reading dictates that a scholar read a work (or works) thoroughly, in order to pull key ideas from the texts, in order to mine ideas. On the other hand, DH methodologies allow for interpretation of the text (or texts) as a whole, by way of surveying the entire corpus, regardless of length or number of works.
Comparative approaches spanning large literary corpora have become possible, and the proliferation of primary texts in electronic form has contributed significantly to the corpus of available digital texts. In order to be successful, literary computing needs to use techniques and procedures commonly associated with the natural sciences and fuse them with humanities research, thereby bringing into contact the Two Cultures: "What we need is a principal use of technology and criticism to form a new kind of literary study absolutely comfortable with scientific methods yet completely suffused with the values of the humanities" (Potter 1989).

As explained by Potter, cited in the above pull quote, the use of scientific (or technological) methods does not take away from the importance of the data. Much like a trip in the car does not take away from the experience of the vacation, the use of a computer does not negate the importance of the knowledge gathered. Further:
If a literary text carries meaning that can be detected by a method of close reading, then computer-assisted studies have to be seen as a practical extension of the theories of text that assume that "a" meaning, trapped in certain words and images and only waiting to be elicited by the informed reader, exists in literature. By focusing primarily on empirical textual data, computer studies of literature tend to treat text in a way that some literary critics see as a reapplication of dated theoretical models.
Within the text, Rommel cites another critic who brings the argument even closer to home by directly citing popular forms of literary criticism, and likening them to DH methodologies:

"One might argue that the computer is simply amplifying the critic's powers of perception and recall in concert with conventional perspectives. This is true, and some applications of the concept can be viewed as a lateral extension of Formalism, New Criticism, Structuralism, and so forth. (Smith 1989)"
One major way to view literary texts through this lens is to examine repeated structures and analyze the meaning of the results. Repeated structures could be characters, words, or phrases that are chosen from a work or series of works. For example, one could examine the appearance of the word "home" in relation to the appearance of female characters in a collection of texts of the Victorian era, and analyze what a correlation could mean.
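Just to sketch out how that might work in practice, here's a toy version of my own-- the passage and the character list are made-up stand-ins, not a real study:

```python
# A toy co-occurrence check: in how many sentences does "home" appear
# alongside the name of a female character?
import re

text = """Jane walked home through the rain. At home, Mrs Fairfax waited.
Rochester rode away from the house while Jane sat at home."""  # stand-in passage

female_characters = {"jane", "mrs fairfax"}
sentences = re.split(r"(?<=[.!?])\s+", text)

together = sum(
    1
    for s in sentences
    if "home" in s.lower() and any(name in s.lower() for name in female_characters)
)
print(f"'home' co-occurs with a female character in {together} of {len(sentences)} sentences")
```

A real study would run this kind of count over an entire corpus of Victorian novels, but the shape of the question is the same.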
Chapters, characters, locations, thematic units, etc., may thus be connected, parallels can be established, and a systematic study of textual properties, such as echoes, contributes substantially to the understanding of the intricate setup of (literary) texts.

All praise for technological analysis notwithstanding, Rommel notes that it's still important not to get so caught up in the tools that we place less importance on the results. The tools can only surface information that is already in existence; the rest of the work needs to be done by the human mind. The humanities do, after all, bear a lot of weight in the Digital Humanities. One might say that the order of the words tells the story of the field: "Digital" is the front-end, technological work that goes into an effort, and "Humanities" is the analysis that must be done in order to validate the technological side.
Once upon a time, scholars had excuses for not using technologies-- they were widely inaccessible. Prior to TEI (the Text Encoding Initiative, one such DH methodology), the programs on offer were complex and required much study to be understood. The endeavors were expensive, and required more than one person in order to be successful. Nowadays, we no longer have excuses for not using the technologies at hand, and yet these technologies are still only marginally discussed. The results of using technology as a tool for literary criticism are notable, so the remaining excuse is aversion to change. People are fearful of new technologies, choosing instead to stick to the path most traveled. Ironically enough, isn't that everything we're warned against in the world of literary criticism? Freud, Derrida, Fish, Bakhtin-- these people did not influence schools of thought by sitting around, saying and doing the things that people wanted to hear. They stirred things up, and I think it's time we do that with the technological advantages we have at our disposal.
"What Is Digital Humanities and What's It Doing in English Departments?"
My final reading for this week is an article called "What Is Digital Humanities and What's It Doing in English Departments?" by Matthew G. Kirschenbaum. When I found this article, it jumped out at me because it asks exactly the question that many have about the field of DH-- what is this strange idea doing in my English classroom? In fact, as we've seen thus far, it's because of this very question that DH is not in many English classrooms.
The popular alarm of the English classroom sounds a little something like, "Computers? Never! Not in this class!" and book enthusiasts are likely to chime in, "E-book? Never. It's just not like holding a real book."
While these concerns can be valid personal preferences, they reflect a scarily real amount of opposition to technological advancement that can be harmful to students of language and literature. Let's get candid for a moment here: we all know the field is saturated. We all know the starry-eyed Austen- or Hemingway-crazed undergrad who pursues his or her dream through grad school, only to graduate and have their bubble burst, left unemployed and desolate in the job search. Does that sound real? Perhaps I know one too many people who fit the bill.
There are always going to be positions opening up, people retiring after long and illustrious careers but, even so, if you know that the pool is large and the opportunities are few, wouldn't it be beneficial to differentiate in some way? Wouldn't it also be wise to see the world changing around us, and realize that this could open opportunities to jobs that traditional scholars can't fill? I give you, reader, the digital humanities.
Kirschenbaum argues that computers are not the enemy of the English department; in fact, they're one of its biggest opportunities. He discusses text analysis tools such as those we discussed above, and praises the networked connections that are born from a field that values interaction and working together to learn and grow. Because the DH is a newer field, people have more of a tendency to rely on one another to grow and teach, rather than the tired "sit and listen as I tell you everything you need to know" way of the past.
In more recent years, beyond the 2004 publication of the Blackwell Companion, associations and alliances have been formed which support the DH, along with the Digital Humanities Initiative, which created an official support system for the field, elevating it to a higher and more recognized standard. Additionally, Kirschenbaum included the following segment in his article:
Digital humanities was also (you may have heard) big news at the 2009 MLA Annual Convention in Philadelphia. On 28 December, midway through the convention, William Pannapacker, one of the Chronicle of Higher Education's officially appointed bloggers, wrote the following for the online "Brainstorm" section: "Amid all the doom and gloom of the 2009 MLA Convention, one field seems to be alive and well: the digital humanities. More than that: Among all the contending subfields, the digital humanities seem like the first 'next big thing' in a long time."

In the same way that they can be scary to those wary of change, the digital humanities are exciting to people who have been looking for a new way of channeling their love of literature in up-and-coming ways. The DH field brings an air of vitality to a beloved, but somewhat tired, world, and the more people who support it, the better!
In conclusion to his article, Kirschenbaum answers the question he poses in its title in six ways, which I will summarize here:
- The DH gives us new ways to process and analyze texts, the backbone of English departments.
- There's a powerful association between computers and composition which should be used to its fullest extent.
- We've been looking for a meeting point between technology and conventional editorial theory and methods, and here it is.
- Electronic literature (E-Lit) is an up and coming field that is bright, interesting, and diverse.
- English departments have long been open to new cultural developments and movements. Why should this be an exception?
- The explosion of interest in e-readers and text digitization has supported the development of text-mining tools and programs that are able to work with digital data.
In short, you ask, why DH? I ask you, why not?
Monday, February 6, 2017
Reading List Pt. 2
This week's blog is dedicated to part two of my reading list. This list is going to be updated throughout the week, as I need to find more resources to add but, first and foremost, if you're reading this and you're involved in the Digital Humanities, please feel free to contact me with resources and advice. In compiling this list, I've come to have a decent understanding of the theory behind the DH, and my next step is practical application. I'm looking for resources that explain the how behind the methodologies. I'm working on it, but I would love to talk with anyone in the field. Anything you can share with me is much appreciated!
That being said, on to the links.
The first few links are, once again, from Blackwell's A Companion to Digital Humanities, compiled by editors Susan Schreibman, Ray Siemens, and John Unsworth. These articles delve further into the technology involved in the DH field.
"Designing Sustainable Projects and Publications" by Daniel V. Pitti
"Conversion of Primary Sources" by Marilyn Deegan and Simon Tanner
"Text Tools" by John BradleyI've found the following articles to provide helpful supplementary information to the text, particularly the above chapters:
"Text and Data Mining and Fair Use in the United States"
"Text and Data Mining" by Maurizio Borghi
From what I've seen of A Companion to Digital Humanities, I like this book. It seems to have a good combination of introductory texts, helpful entry points into the field. My next step in this process is going to be unpacking the methodologies, which has proved to be a challenge and is the part of this process that I'm going to work on throughout the week.
Last week I mentioned Blackwell's A Companion to Digital Literary Studies, compiled by editors Susan Schreibman and Ray Siemens. I'm going to include a few articles from this compilation in my syllabus, ones that I've found to be personally interesting, as my interests in the DH are linked to literary analysis and I'm planning on using the DH in my MA thesis. However, I'm not going to explore these links until the latter part of the semester, as my primary focus is an intro to the field in general.
"Knowledge will be multiplied": Digital Literary Studies and Early Modern Literature by Matthew Steggle
"The Virtual Library" G. Sayeed Choudhury and David Seaman
(To Be Continued)
Here is the link to my working syllabus, which includes the readings I will be doing for each week. I've assigned myself two readings per week and I've tried to pair them based on subject matter. The syllabus is open for comments and feedback is welcome. This is an interactive field, and I would love to meet others who are also interested in exploring the DH.
Next week's post is going to include a summary and analysis of the first two articles that I have chosen. I'm excited to dive in!
Monday, January 30, 2017
Reading List Pt. 1
Over the course of my online searches, several writers have made reference to two texts which I believe will serve as excellent anchor texts throughout this semester. The first text is Blackwell's A Companion to Digital Humanities, compiled by editors Susan Schreibman, Ray Siemens, and John Unsworth. The second text is Blackwell's A Companion to Digital Literary Studies, compiled by editors Susan Schreibman and Ray Siemens.
A Companion to Digital Humanities seems to be hailed as a backbone to the field and is available online. From what I have read, it seems to be an excellent resource. The following chapters included information that I found to be informative as an introduction to DH, and ways in which it can be applied to the field of literature.
"The Digital Humanities and Humanities Computing: An Introduction" by Susan Schreibman, Ray Siemens, and John Unsworth
"Literary Studies" by Thomas Rommel
"Databases" by Stephen Ramsay
"Marking Texts of Many Dimensions" by Jerome McGann
"Text Encoding" by Allan H. Renear
"Electronic Texts: Audiences and Purposes" by Perry Willett
"Modeling: A Study in Words and Meanings" by Willard McCarty
"Stylistic Analysis and Authorship Studies" by Hugh Craig
"Preparation and Analysis of the Linguistic Corpora" by Nancy Ide
"Electronic Scholarly Editing" by Martha Nell Smith *possible inclusion
"Textual Analysis" by John Burrows
"Print Scholarship and Digital Resources" by Claire WarwickThere is quite a bit of exciting information to be mined from the aforementioned chapters, and I'm excited to go through them with a fine toothed comb!
The second text that I mentioned above, A Companion to Digital Literary Studies, seems to be an even better fit for the meeting point of my interests in literature and DH, although I will delve further into this book in a later post. I will likely use this source to draw ties between the two fields.
In addition to these two books, I have found the following articles:
"What Is Digital Humanities and What's It Doing In English Departments?" by Matthew G. Kirschenbaum
"An Introduction to Humanities Data Curation" by Julia Flanders and Trevor Muñoz
The next installment of my reading list will likely include chapters and articles about data curation, as that is the next part of DH that I am going to attempt to conquer. I'm excited to learn the tools of the trade!