
Visual Annotation & Commentary: Three Vernacular Tools

For a long time, scholarship largely focused on words and numbers.

  • Yes, art historians and theater scholars and radiographers thought a lot about images.
  • But today the visual dimension of knowledge increasingly leaves mere words and numbers in the shadows.
  • Chalk it up to the proliferation of screens–on our desks, on our walls, in our backpacks and pockets–or to whatever you like.

But it is in many ways a welcome change.

Many of us involved in scholarship and teaching spend a lot of time using images: gazing at them, thinking about them, writing about them; but also collecting, organizing, commenting on, and publishing them.

But how do we do this? Using what kinds of tools?

Those who manage large collections of images have specialized tools. And art historians and film scholars still write (lengthy) prose essays.

But using images to think about images has a special appeal. And tools for making and giving presentations, editing movies, and sharing photos are all easy enough to use that they are good candidates for vernacular scholarship: serious thinking that takes place in popular media.

When thoughtful people take up a medium, they think seriously about genres and forms.

  • Am I writing a novel or a tweet? A memoir or a lab report?
  • Am I drawing a landscape or a portrait? A wall-sized canvas or an ivory engraving?

And critical writing is no different–except that we who do critical writing could really spend more time thinking about genres, especially as we do and encourage critical writing on web pages, in viral videos, and as infographics.

Happily, some critical genres cut across media and can serve us well as we act critically in popular media: annotation and commentary are two crucial genres for critical analysis, and both of them lend themselves to visual media as well.

Both annotation and commentary bear a strong relationship to the text they comment on.

  • Annotation usually implies the presence of the text. An annotated edition bears the annotations right on or beside the text.
  • Commentary may stand apart from the text it comments upon, but “apart” is often not far.
    • My edition of Hamlet contains some commentary in footnotes, and other commentaries before and after the text itself.
    • DVD (and now Blu-Ray) commentary tracks yoke together a text and a commentary: the two are synchronized.

When we use simple tools to share visual material, and when we try to work critically with these media, what features of the tools are we using? How do we annotate and comment?

I wanted to explore these issues by putting the same dozen or twenty images into three different, readily accessible tools.

  • iMovie is a popular video editing tool which now costs about $15.
  • Google+ Photos is a service for sharing photo sets or ‘albums,’ with a few people or with the entire web.
  • PowerPoint is the ever-present presentation tool.
    • These files may be uploaded to Google Drive and published there.
    • You can also record a voiceover and publish the presentation and voiceover together as a movie. But I skipped this, because I used iMovie to accomplish the same result.

What I Did and Why.

  1. I’m an amateur photographer, and I adore Hollywood glamour portraits of the ’30s and ’40s. I have books full of them, and over time, I’ve collected 50 or 80 such images from the web. So that determined my topic: convenience.
  2. I had the files in Dropbox, but I uploaded them to Google+ Photos, since I could organize them in a sequence there. The uploading involved selection.
    • In this case, I intuitively put together images that seemed to me related.
    • I had some notion of comparing images of men and women, so that provided a sort of rule or principle.
    • But as I moved the images around, I found myself pairing them along the lines of similarity and contrast.
  3. As I browsed and sequenced the images, I started formulating my ideas about them.
    • The sequence turned out to involve shades of similarity.
    • I started with one that was highly emblematic of the whole: a kind of titular representation.
    • And then I arranged images of women, followed by men, with sub-similarities.
  4. I downloaded them all from Google+ Photos–simply because they were all in one place and neatly arranged.
  5. For iMovie, I dragged and dropped them onto the timeline. Once there, I composed some voiceover, which I recorded right in the software. I was then able to cut it into bits and slide it here and there to fit the images.

Affordances.

“Affordances” is the fancy word for the features of a tool that let you do certain things.

  • The weight of a hammer determines whether it can tack carpet or crush rocks. You could say that the ability to crush something is an “affordance” of a heavy hammer.
    • The idea is to get away from features and to wonder aloud about what they get you.

iMovie has specific ‘affordances’:

  • It lets you add a voiceover.
  • It lets you add titles over images and between them.
  • It has a ‘Ken Burns effect’ in which still images are zoomed or panned across, to keep some visual interest.
  • And you can choose different transitions between still images (or video clips).

What would I do with these?

  • The voiceover seemed perfect for commentary. I could use the auditory channel for commentary, since the visual channel was largely full of what was being commented on. It was a neat divide.
  • I decided to use the titles to spell out the main topics.
    • Sure, the topics were spoken aloud in the voiceover. But in some cases, I realized I had not recorded anything announcing the main topic.
    • So the titles became unifying themes that brought together multiple images, as well as the voiceover.
  • The Ken Burns effect is somewhat random in how it pans or zooms.
    • I decided that I could start in close on the visual element being described. Then I would zoom out to see the whole image.
    • So the pattern was to focus on a detail and then reveal its context. I did this with every single image. I decided consistency and repetition would make things easier on the viewer.
  • Finally, iMovie allows a transition that looks like un-focusing and re-focusing. It’s different than a ‘dissolve,’ in which one image slowly replaces another.
    • Since the context was cinematic, I thought the cross-focus transition fit nicely.
    • I used no other transition, as the images are from ‘classical’ Hollywood, and part of that classicism was parsimony: very few effects used carefully. So I wanted to match the material in this regard.

Link to visual commentary example created using iMovie

For PowerPoint, I went a bit further.

  • PowerPoint allows you to use simple, stock visual elements, like arrows.
  • You can record a voiceover, but I decided I had just done that: I would force myself to find a different pathway with PowerPoint.
  • The author can also create specific transitions: one image bumping another off to one side, etc.

I decided the visual logic of a video and that of a presentation were different.

  • A voice speaking to you over related images is very different than the same images presented without a voice.
  • So I decided I needed to structure my commentary more clearly.
    • Instead of a series of observations, I wanted to show consistency, repeated elements.
  • So I organized the images a bit differently.
    • And I tried to make very clear themes with sub-elements.
  • The images sat to one side–the right–and the themes and sub-themes were spelled out on the left.
    • First the viewer sees the image.
    • This way you get to see it with your own eyes.
    • The next slide spells out the theme and sub-themes: in this case, the effect the photo produces, and how it’s produced, the techniques.
  • Finally, I decided to use those simple stock visual elements:
    • I put arrows connecting the techniques to a specific place on each image.

To publish the presentation, I uploaded it to Google Drive.

  • Google Drive can then autoplay the presentation, and it offers the user a smaller set of transitions to choose from.
    • I chose a fairly slow pace, to give the viewer time to look and read.
    • By using a transition in which one image instantly replaces another, my themes and sub-themes suddenly appear, and so do the arrows.
      • There is an animation-like effect.

Watch the PowerPoint-made presentation in a separate window here.

Finally, for the Google+ Photos album, I used the ‘captions’ feature.

  • Each photo can have a bit of explanation about it.
  • So I elaborated on my voiceover text here. There’s a little more space, so I could add some extra detail.
  • The casual browser might read these or not. So I tried to write them to reward reading.

In short, for this tool, I was relying largely on sequence.

  • Google+ Photos does let you edit the images. I could have emphasized some visual characteristics. But I opted for restraint. Let the images speak for themselves, and let my voice be softer, less obtrusive.

Going to Picasaweb.google.com lets you find code to embed a slideshow. (Somehow Google+ users don’t rate access to this feature.)

Hollywood Publicity Portraits of the 1930s & 1940s

And there’s a more static embedded version.

Both draw on the original photo set.

–Edward R. O’Neill

Three Questions, One Answer

In a recent blog post, Doug McKee addresses three questions about how he should teach this fall.

1. Should I ban laptops in lecture?
2. Should I make discussion sections mandatory?
3. Should I cold-call students during lecture?
 
Basically: no, no and no–all for the same reasons.

1. Should you “ban” anything in lecture?
Or rather: were you to try, what would be the justification?
In teaching we do things for very few reasons.

a. Because they are inherent in the discipline and academic life. “We’re reading Durkheim because he helped to found the discipline.” “We’ll use APA style because that’s what professionals do.” “You must offer arguments, not opinions, because in our domain, opinions have no value.”

b. Because they are convenient. “We need to get all your papers at once so we can compare them and grade them before the next work is due.”

c. Because they adhere to university policies and laws. “No smoking in the back row.” “Grades are due on the 11th.” “No sexual harassment.”

d. Because they embody our values about human freedom and responsibility. “You must take up your own argumentative position.” “You may turn in the work late, but it will be marked down.” “Write about the one topic on the list that interests you most.” Pursue your freedom. Experiment. Explore. Fail. But take on the responsibility of existing and choosing.

(I can’t think of many other justifications for why we do this, that or the other in teaching.)

And all of these questions are open to reasoned debate–because that is one of our values.

Once you say “You will not open your laptops,” you are dictating. And you have lost. Now you are a cop, not a teacher.

Practically speaking, I know professors who have had good luck with the “three states”: put your laptops away and focus on this (discuss with a peer, whatever); open your laptops and do this specific task; leave your laptop open or put it away–I don’t care, just don’t distract your neighbor.

 
You can also play with the sequence. If you ask them to use their laptops and then to close them, the next act of opening may be more self-aware.




2. See “1” above.
 
a. What does “mandatory” mean? Again, from my perspective this is the wrong relation to the student.
 
We can mandate little in teaching. Rather, we reward and we punish. (Behavioral economics and game theory surely apply here–though I fear that those theories have no moral code embedded in them, and therefore they may be useful tools but they are not arbiters.)
 
Extrinsic rewards don’t motivate learning very well. So you can reward and punish for attending or not. But neither will help students learn.
 
Why not go the other way? “Go to section, or don’t. No points for it. Go if you value it. And we’ll try to make it valuable.” Ask every week how section could be better. Make it a discussion topic on the course web site. When you can’t decide in advance, make it a learning experience.
 
Hence…
 
b. One good principle in planning teaching is: treat all questions about teaching as something to be proven experimentally by teaching.


Reframe the issue as: What could I learn about making the section worth going to?
 
Survey students weekly–did you go or not, why? Ask the section leaders to experiment, to explore how best to meet the students’ needs. Maybe the first few weeks the sections would have different specific activities that students rated, and thereafter, students chose “which activity should we do today?” Make it their section. Meet their needs.

Or just put super-important things in section. Sell how great section will be, and then say “of course it’s totally optional.”
 
 
3. See “1” above.
 
a. They are coming to lecture to learn. Would you pick on someone for not having understood the material as well as someone else? That person needs more help, not public shaming.
 
b. I tried this once. I would never do it again.
 
I once put the students’ names on index cards. I shuffled them and picked one at random.
 
Once the index cards came out, students sat up straight in their chairs.
 
I called a name, and the student stammered and hemmed and hawed. Other students tried to rescue those I called on–defended them.
 
One student shot his hand up later, after not having known the answer to an earlier question, and after class explained to me: “I knew the answer, I just couldn’t think of it, and so I had to show you that I’d done the reading.”
 
And I thought: who am I? To make someone prove a point to me?
 
After that I brought out the index cards and put them on the desk. They were radioactive. Students would stare at them. If no one answered a question, I moved towards the cards, and a voice would ring out with something to say.
 
It wasn’t motivated by anything good, but I got good discussions. Not because I randomly called on students all the time as a policy, but because I made the point that we needed to discuss, and that I would do whatever it took to make that happen. They didn’t want to be called on, so they spoke up.
 
But I would never teach that way again.
 
 
4. Learning devolves on human agency.
 
Agency is the center of learning. Through learning, I become more capable, and I feel myself to be more and more of an agent, less and less of a passive, receptive entity and more and more myself. 
 
Humans become more capable by overcoming meaningful challenges in an increasing order of difficulty, a difficulty matched to their abilities. (It’s tragedy when someone is outstripped by the task he faces; tragedy defines common humanity by contrast.)
 
Anything that takes away from the agency of the learner is bad for learning.
 
Yes, we need rules and limits.
 
But when possible, all meaningful choices should be passed to the student.
 
To experience one’s humanity through the responsibility of choice, to embrace the possibility of failure, and to own one’s successes: this is the heart of education.
 
–Edward R. O’Neill, Ph.D.

Robin Ladouceur Moving to Yale Graduate School Deanship

I am equal parts excited and sad to announce that ITG’s Robin Ladouceur will be moving to a new position in the Yale Graduate School as Assistant Dean of Humanities and Social Sciences. Robin has worked in ITG for four years, supporting courses (primarily in the English Department) and managing our Instructional Innovation Internship program. She has recently helped advance mobile learning initiatives like our iPad loan program. Before coming to ITS, Robin worked at the Yale Center for Language Study and earned her Ph.D. at Yale in Russian Language and Literature.

Robin, thank you for your years of service, and we wish you all the best on your return engagement at HGS. On a personal level, we will miss you, but we’ll see you around campus and, as you’ve assured us, when Peeps Fest rolls around.


Large Horizontal Image Presentation

Cross-posted from my project journal site

Since the close of classes in May, I’ve found more time to work on getting into the weeds with my 絵巻物 (illustrated handscroll) project and have made some forward motion.

One of my best discoveries has been that Adobe Photoshop CS 5.1 will execute the image tiling needed to allow zooming, as happens in most of the typical large-image presentations that I’ve found online. (For some scroll examples, see my post at Digital Humanities Questions and Answers.) Though I’ve only done it with my proof-of-concept section of the scroll, it was not a horribly intensive or time-consuming procedure. Strictly speaking, what Adobe has done is to bundle Zoomify capabilities into Photoshop. Following the steps described in Adobe’s help documentation, the output is not only the image tiles for my TIFF…

New York Times series on Digital Humanities

The New York Times has just issued the first in a series of articles about “Humanities 2.0: Liberal Arts Meet the Data Revolution.”

The article quotes Tom Scheinfeldt, managing director of the Center for History and New Media at George Mason University. Tom will be speaking to Yale’s Digital Humanities Working Group this Thursday. The session is open to the Yale public. Please join us!

November 18
Tom Scheinfeldt, Assistant Director of the Center for History and New Media
4:00 – 5:00 pm
Whitney Humanities Center, room 208

Proxi vs. Automator

Having trouble with Apple Automator? Then try Proxi, a free workflow manager that can be downloaded from the Apple website. Automator is Apple’s built-in task workflow utility, but Proxi is an exceptional alternative. While Automator is useful for standard repetitive tasks, such as searching for folder items, moving them to another location, and editing their names, Proxi takes a slightly different approach to setting up workflows. Proxi allows users to link their tasks (similar to the ones in Automator) to “triggers,” such as waking the computer from sleep, launching or closing an application or file, and many more. Proxi can even recognize when the battery is low and prompt the user to close certain applications that are running. Unlike Automator, in which users must initiate their workflows and tasks, Proxi integrates workflow management into native computer processes and non-user-initiated actions.

Proxi main program window

Proxi must remain open for the triggers and workflows to be recognized, but once your workflow has been set up, the main window can be closed and Proxi can run silently in the background. Proxi even allows you to save blueprints of your “triggers” and “tasks” so that you can open them on another computer. The program is easy to learn and use, and you will even discover neat triggers that launch reminder messages on the screen.

A sample message that can be produced from one of the many "triggers." This one is displayed when adding a new folder.

Altogether, Proxi is definitely an application to add to your collection of utilities. Follow the link below to download it.

http://www.apple.com/downloads/macosx/productivity_tools/proxi.html

Making WordPress Accessible

While WordPress is a highly accessible platform right out of the box, it is up to administrators and theme designers to tweak and configure their sites to ensure accessibility for all users, including blind, deaf, or elderly users and anyone else who might have difficulty navigating the web.

This post outlines some resources that might prove useful in creating an accessible WordPress site, particularly with regard to sight impairments.

Visually impaired computer users generally use a screen reader, such as JAWS, which speaks a website’s content using a synthesized voice. Screen readers speak the content (headings, links, menus, blocks of text) in the order it appears in the code, so it is up to the theme editor to keep the HTML and PHP clean and ensure that a screen reader can logically process the website’s content.

Though a bit outdated, the Web Content Accessibility Guidelines 1.0 provide sound coding guidelines for maintaining an accessible website. In terms of WordPress themes, it is important to minimize the use of tables and to make sure graphics and videos have descriptive alternative (alt) text.
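To make that concrete, here is a minimal, hypothetical sketch of the kind of markup these guidelines point toward; the file name and description are invented for illustration, not taken from any real theme.

```html
<!-- A real heading plus an image with descriptive alternative text,
     both of which a screen reader can announce meaningfully. -->
<h2>Publicity portraits</h2>
<img src="portrait-1941.jpg"
     alt="Studio publicity portrait from 1941, lit from above against a dark background" />

<!-- Avoid tables used only for layout: screen readers announce them
     as data tables, which obscures the reading order. -->
```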

The WordPress Accessibility Codex also outlines useful accessibility guidelines unique to the WordPress platform.

Screen Reader resources

WebAnywhere is a free online screen reader that is useful for testing pages. It’s also worth trying out just to get a sense for the challenges of navigating the web by ear.

Fangs is a Firefox extension that simulates the JAWS reader. Rather than speaking the content, it outputs text showing how JAWS would read the page.


Useful WordPress Plugins

The WordPress Accessibility Widget is a useful widget that allows users to easily change the site’s font size. Its code can be easily tweaked to customize the available font sizes or to add additional size options.

The WordPress Access Key Widget is another useful widget that allows the administrator to easily set up access keys for their site. With this plugin, access keys can be assigned for existing pages under “Posts” on the admin interface.

One issue with access keys is that the keyboard commands vary by web browser. For example, Firefox on a Mac uses CTRL+access key, while Safari on a Mac uses CTRL+ALT+access key.
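As a hedged illustration, access keys are ordinary HTML attributes on links; the assignments below are my own invention, not necessarily the plugin’s defaults.

```html
<!-- Hypothetical access keys: "1" jumps to the home page, "2" to the posts page. -->
<a href="/" accesskey="1">Home</a>
<a href="/posts/" accesskey="2">Posts</a>
<!-- Firefox on a Mac reaches these with CTRL+1 / CTRL+2;
     Safari on a Mac uses CTRL+ALT+1 / CTRL+ALT+2. -->
```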

You can see these plugins in action at this test site.

Other Information

The Arjuna-X theme, which does not use tables, is highly accessible and might be worth testing and tweaking when building an accessible WordPress site.

Accessites.org and brucelawson.co.uk provide additional useful information and solutions for WordPress accessibility.

Learning iPhone app development

In this post, I will try to describe my experience learning how to develop apps for the iPhone and, for that matter, all other Apple devices running OS X.

1. What you need to know:

- Objective-C – an object-oriented extension of the C programming language. If you have a year of experience programming in another language, it should not be a great challenge, although if that language is not as low-level as C, you will want to become familiar with pointers and memory allocation. Programming in Objective-C 2.0 by Stephen Kochan is a good introduction to the language, though you will probably still need to look up some things on the internet. (A short sketch of the basics follows this list.)

- Cocoa API and the iPhone SDK – these allow you to build the GUI. This is how code written in Objective-C is made to respond to the touchscreen and look as nice as iPhone apps do. Depending on the project you are working on, you will probably want to learn how to access the internet and how to store information in SQLite databases. Sams Teach Yourself Cocoa Touch Programming is a somewhat good introduction to the topic. At this point, I would definitely recommend watching YouTube and blog tutorials, and referring to all the other resources and books you have at your disposal.
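To make the pointer-and-allocation point above concrete, here is a minimal sketch in pre-ARC Objective-C (the era of Kochan’s 2.0 edition); the data is invented, and the point is only the alloc/init/release pattern, not any particular app.

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Objects live on the heap and are always handled through pointers.
    NSMutableArray *entries = [[NSMutableArray alloc] init];
    [entries addObject:@"first entry"];
    [entries addObject:@"second entry"];

    NSString *first = [entries objectAtIndex:0];
    NSLog(@"First entry: %@", first);

    // Pre-ARC code releases what it allocates.
    [entries release];
    [pool drain];
    return 0;
}
```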

2. How to start:

I think it all boils down to the level of complexity of the app you want to create and the programming background that you have. For Xunzi, it helped me a lot to first familiarize myself with Objective-C and then move on to see how to manipulate the GUI to some extent; then learn how to work with SQLite (since that is the database system that OS X uses); then learn how to make the app get information from the internet.
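For the SQLite step, here is a hedged sketch of the plain C API, which is what you end up calling from Objective-C; the database file and table are hypothetical, and error handling is pared down.

```objc
#import <Foundation/Foundation.h>
#import <sqlite3.h>   // link against the SQLite library

// Hypothetical database and table, for illustration only.
void listEntries(void) {
    sqlite3 *db = NULL;
    if (sqlite3_open("entries.db", &db) != SQLITE_OK) {
        NSLog(@"Could not open database: %s", sqlite3_errmsg(db));
        sqlite3_close(db);
        return;
    }

    const char *sql = "SELECT term, definition FROM entries ORDER BY term;";
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            const unsigned char *term = sqlite3_column_text(stmt, 0);
            const unsigned char *def  = sqlite3_column_text(stmt, 1);
            NSLog(@"%s: %s", term, def);
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
}
```

On the device, the same calls work once the SQLite library is linked into the project and the database file is created or copied somewhere writable, such as the app’s Documents directory.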