Category Archives: Events

Interactive Crown Street Installation Opens

We're thrilled to recommend an installation opening this weekend, various parts of which we've worked on:

INTERACTIVE CROWN STREET

A "Pop-Up" Urban Research Field Office
@ 200 Crown Street
Friday, May 2 — Sunday, May 4

Opening Reception is Friday, May 2, at 6:00 pm. Events are scheduled, but participants may come and go as they please. Please spread the word: Interactive Crown Street is Free and Open to All!

Congratulations to Professor Elihu Rubin, to Florian Koenigsberger, and to the whole Interactive Crown Street crew!

If You See Something, Say Something.

I’m not thinking of suspicious packages.

Rather, I’m thinking about the standards and ethics of our profession: folks who support teaching and learning with technology.

In that regard, I saw several things at ELI 2014 which made me want to say something, and that something is basically: "What goes on here? What do we as a profession do? And why can we not have a connected discussion about that?"

1. I saw a keynote present facts that were blatantly wrong.

Okay. People make mistakes. Sure.

But this presentation pretended to give a ‘scientific’ basis to teaching and learning.

Should conference presentations perhaps be required to use footnotes?

One writing teacher I know asks exactly this of undergraduates. Students must provide a handout that includes:

(1) a short prose summary and
(2) a list of references.

Problem solved? Perhaps. But that wasn’t the only conspicuous absence of professional standards on display.

2. I saw a presentation arguing for a certain model of instruction that made no reference to other models, to any concept of learning, or to any existing ideas.

This was an argument in a vacuum.

If we wouldn’t permit undergrads to do it, should we do it ourselves?

This led me to a fear, which I now articulate. (See something, say something.)

Instructional technology as a profession seems to have no clear sense of standards of evidence; nor are such standards even really part of the debate.

Think about any other discipline. History. Physics. Kinesiology.

  • You know what counts as evidence.
  • But you debate why some evidence is more meaningful than other kinds.
  • There are different schools and approaches, and they’re forced to duke it out.
  • Some standards and references are shared (some widely, some narrowly), while others are up for grabs.

Why should learning technology not be the same?

Nor are such issues just about evidence.

3. A presentation ostensibly about program evaluation offered no goal for the program, no significant research, and numbers that were blatantly fudged.

Of course, if there is no goal, there can be no measuring. (Measure what?)

In this case I actually asked during the Q&A if there was any theory or concept or idea of learning driving the process. (I couldn’t ask about institutional goals, as the presenters had basically said “The Provost wanted it,” and it was clear no one after that point had even thought to tack on a goal as a fig leaf.)

The answer was: no, we don’t have instructional designers; we have Ph.D.’s. As if planning learning intentionally and being a scholar were somehow mutually exclusive.

It’s easy to understand this. In higher ed, the disciplines are the guardians of standards of knowledge.

  • The psychologists decide what psychology is.
  • The dance teachers decide whether dance is modern or ballet or rolling around on the floor.
  • The English professors decide what counts as literature and literary analysis.
  • Etc.

But it’s shocking to think that (for some at least) this excludes any role for thinking about teaching and learning, or even planning in its most basic sense.

All of which brought me to the terrible near-existential recognition of a central absence.

Instructional technology as a profession seems to have no shared framework for specifying goals and measuring results, and hence for justifying the value we create (ROI, potentially, but not only that).

  • What kinds of things can we accomplish when we use technology to support learning?
  • What is the size or scope of our interventions?
    • Are we just making it easier to turn in homework?
    • Are we publishing things that were harder to publish before, like lectures?
    • Are we solving psychological problems? Economic problems? Cultural problems?

Of course, some goals are easy to pick out: convenience, efficiency and effectiveness.

  1. At this point, convenience largely reduces to what I call x-shifting.
    • Just as the VCR allowed TV shows to be shifted in time and place, ever-smaller computers now allow content and experience to be shifted in time, place, and platform. These may not be the only forms of convenience, but they’re paramount.
  2. Efficiency is simply doing more with less.
    • We can promise this, but we mustn’t lie: a small-scale study I did at my prior institution showed what I think we all know. With any new technology, you must put in more time at first in order to save time later.
    • This points up a little-mentioned analogy, which really ought to be the core of what we do in learning technology: learning a new technology is itself a species of learning, hence a microcosm for learning-in-general. Helping people learn to use a new technology helps them to re-see with new eyes the phenomenon of learning.
  3. Effectiveness is where we lose all our bearings. Ideally, we’d like to make teaching more effective, for it to generate more learning. But how?
    • What are the drivers of learning? Where are the pedals and the steering wheel? We don’t have a good taxonomy.
      • Better motivation? Sure.
      • Good chunking for better cognitive processing? Okay.
      • Better sequencing of instruction? Absolutely.

But do we have a clear picture of the whole shape of such goals?

I fear not.

When I see something, I can say something.

Introduction to Contextual TEI, Day 1

Cross-posted from my own site.

I'm here in Providence (can't you see where I am?) for a three-day workshop at Brown on contextual encoding with TEI, run by the Women Writers Project and led by Julia Flanders and Syd Bauman. One of the first things I did when getting on board with digital humanities was to take part in the first iteration of THATCamp New England in 2010, and I'm glad I didn't really have any idea who I was there with, or I would have been horribly intimidated instead of just self-conscious. One of the other attendees was Julia Flanders, who, among other things, leads the Women Writers Project at Brown. What I learned about the WWP at THATCamp was impressive, but what I have since learned (tonight, if you must know) is that it is a self-sustaining project residing at Brown. As I also know more about sustaining university-level projects than I used to, I am even more impressed. However, I have also built up my knowledge of and abilities in digital humanities, so I'm more ready to approach problems at what I would consider, were I leading a workshop such as this, an appropriate level.

It's a strange situation for me, as I have worked with text encodings in one way or another since sometime in the mid-90s, when I was in publishing and worked with QuarkXPress, though I didn't entirely know at the time what I was doing and certainly didn't know about the global history of text encoding, let alone SGML, TEI, and XML. In my second stab at making it in the publishing world, I learned a bit more about that space. While I was at HarperCollins in the late 90s, we used an in-house encoding system we called the Text Markup System, though we were also phasing it out when I was laid off. Even so, I never really associated my work in TMS with the larger world of text encoding, not even with the HTML I was teaching myself on the side. Extend that situation roughly through the next several years, and you'll see that while I understand a lot of the basics of markup, and have even paid attention to some of the questions posed about TEI and to the limitations suggested by some critics, I still have a lot to learn during this workshop.

Today was, I expect, the strongest showing for awkwardness, as there was a good deal of scene-setting. We went through general notions of why we encode research objects and the basics of XML in the morning, then got into the basics of TEI in the early afternoon, with enough time in the late afternoon to work on our own documents. My attention was frequently, consciously divided, as much of the presentation was known material for me. Since I don't have a research project per se (that is, my text-based research projects are whatever faculty or students bring to me), I needed to choose a work that would be appropriate for a workshop on contextual encoding. With some advice from Yale postdoc Natalia Cecire, I settled on Émile Zola's Le Ventre de Paris, and I haven't regretted it. Among her many other helpful suggestions was Jean Toomer's Cane, a novel with the benefit of some site-specificity in the good ol' US of A, as well as multiple text formats for juicy encoding goodness. However, what I might call my research interests include continually examining digital humanities tools, practices, and constructions from a multilingual or plurilingual point of view, so I went with the Zola and grabbed the text file from Project Gutenberg. My recollection is of having read it years and years ago, but I can't recall with any further precision, so this process is also about getting reacquainted with the story.

After discovering and then applying Matthew Jockers' Python text-to-TEI formatter for Gutenberg content (I knew learning some Python would come in handy one day!), I dumped our friend Émile into le ventre de oXygen and spent some time figuring out what I care about in this text and how to encode it. Since we are dealing in context, I decided to start by marking up all specified locations and all people. So far, I've been able to geocode everything I've found, but I'm still at a fairly generic and introductory point in the text. Even so, while I say I've been able to geocode what I've found, it hasn't been entirely straightforward how then to encode it. For instance, there's an early mention of the Pont de Neuilly. Reading a little too closely (which is not to say doing a close reading, of course), I wasn't sure whether it was the bridge of the same name currently located in Neuilly-sur-Seine or some earlier bridge that may no longer exist. It also wasn't as simple to reference with a GeoNames page as Paris was. The latter got a placeName element with a ref attribute pointing to a GeoNames page URI, but for the former I had to bludgeon GeoNames into giving me an OpenStreetMap page based on the lat-long. I played around with something different for the rue de Longchamp, ending up with a nesting of place, location, and geo, with location having a placeName sibling containing "rue de Longchamp." In a very small way, it's an editorial decision to assert that Zola meant the intersection of the rue de Longchamp and the Avenue Charles de Gaulle, not least because Zola never met Charles de Gaulle. But that's what I'm hoping to get deeper into over these three days: these editorial decisions, how they can be made manifest through encoding choices, and how they can prove useful in scholarship for Yale researchers and student-researchers.
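To make those two patterns concrete, here is a minimal sketch of the markup I have in mind. The GeoNames and OpenStreetMap URIs and the coordinates below are illustrative stand-ins rather than my exact values:

  <!-- Simple case: a placeName whose ref attribute points at a GeoNames page -->
  <placeName ref="http://www.geonames.org/2988507/paris.html">Paris</placeName>

  <!-- Fallback: an OpenStreetMap page built from the lat-long; the coordinates
       are approximate, and note the &amp; escaping required inside an XML attribute -->
  <placeName ref="http://www.openstreetmap.org/?mlat=48.8858&amp;mlon=2.2603">Pont de Neuilly</placeName>

  <!-- Richer case: a place element in which placeName and location are siblings,
       with the coordinates carried in a geo element inside location -->
  <place>
    <placeName>rue de Longchamp</placeName>
    <location>
      <geo>48.8847 2.2603</geo>
    </location>
  </place>

The ref attribute keeps the assertion lightweight, a pointer out to someone else's authority file; the place/location/geo nesting, by contrast, bakes the coordinates, and with them the editorial decision, into the document itself.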