Category Archives: Academic Technology


Tools and Purposes and Other Fairy Tales.

The gift of instructional technology is tools:

  • little tools that do one or two things brilliantly,
  • big tools that do many powerful things quickly,
  • the constant innovation which makes what is hard one day just a click away the next.

And the bane of instructional technology is: tools.

  • Little tools that do a few things poorly,
  • big tools so big they are slow and cumbersome and suck up your time,
  • the constant innovation which takes away your sanity and causes us all to chase the delusion of endless “improvement” which is often only: the need to keep up and to seem to be improving.

Tools are wonderful. Tools are dreadful. When they are new and work, they are magic. When they age and break, they are worse than inert: they aggravate and infuriate; they are deader than the proverbial doornail. And it all happens very, very fast.

Tools are the how, not the why, mere means to ends, and therein lies the problem.

In higher education we are concerned primarily not with means but ends. The human being is the ultimate (earthly) end: her life and purpose and her ability to use her freedom to choose that purpose and to build that life however she sees fit in an understanding that emerges quickly or slowly, early or late, and sometimes even: just in the nick of time.

We subvert the entire meaning of our enterprise when we fixate upon means––tools, that is––and measure those tools only against other tools and not against the purposes towards which our mission points us.

But think about tools we must, for we are IT, and it’s what we do. And so we struggle endlessly against the tendency to focus on the how and to forget the why. It is a mental struggle. It is a moral struggle. Sometimes it almost seems like a physical struggle: a gripping in the pits of our stomachs and an itching and tingling in our legs. As long as we live and breathe tools, we will always be uneasy.

What is the prescription for this unease? How in higher ed can we focus away from the tool and towards the ends?

One way is to focus not on the tool but rather on the use case.

A use case is a term of art. It sounds fancy but it’s simple. A use case is a story. It’s a picture of some things a user does. It’s journalistic: like the “lede,” that first part of the news story that gives you the whole picture but also whets your appetite to know more.

Write a journalistic “lede” without the “how,” and you have a use case: the problem to be solved, the thing our users need to do, the reason that they come to us, their purpose, their ‘end.’

  • Who?
  • Does what?
  • When and where?
  • And why?
  • To achieve what?

Subject. Verb. Circumstances. Purpose. A use case is a sentence writ large, exploded into steps. It could almost be the panes of a comic.
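
The four parts named above can be sketched as a tiny data structure. This is only an illustration, not a real tool: the class, field names, and the example scenario are all invented for this sketch, in Python.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # The four parts of the "lede": subject, verb, circumstances, purpose.
    who: str
    does_what: str
    circumstances: str
    purpose: str

    def lede(self) -> str:
        """Collapse the use case back into its one-sentence form."""
        return f"{self.who} {self.does_what} {self.circumstances} {self.purpose}."

# A hypothetical use case, with no 'how' anywhere in it.
uc = UseCase(
    who="A first-year seminar instructor",
    does_what="shares a weekly video lecture",
    circumstances="before each class meeting",
    purpose="so discussion time goes to questions, not delivery",
)
print(uc.lede())
```

Notice that no tool appears in any field: the 'how' is deliberately left out, which is the whole point.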

And we are the ones who help to figure out the ‘how.’

For many use cases, I would like to argue that the ‘how’ should always be in three sizes.

Just as in the fairybook bears’ house, in IT-land solutions come in three sizes. Like the bear story, it’s a fairy tale: there aren’t really just three sizes. And they aren’t just sizes, they’re bundles of traits––ownership, complexity, flexibility, and more.

But three is a good number, because looking at and choosing amongst five or seven or ten things is harder. So we in higher ed IT do well to recommend tools in three sizes and kinds.

  1. A free and easy consumer service with just a few functions. It’s not meant for professional use but it’s adaptable for many purposes. It’s not hard to use, though finding all the tricks can take time. And we don’t own it.
    • Think Flickr for photos, YouTube for video, Dropbox for file sharing, SlideShare for publishing presentations, etc.
    • We don’t care that we don’t own it. We just need to make the proper warnings about where the data lives, who can access it, whether the data can be sucked out, our lack of control, etc.
  2. A free service whose functions are robust, numerous and flexible enough that it can be used for many purposes. It takes time to learn, but the learning curve is not steep. And we own and offer and support it, and that means it’s geared more towards the kinds of purposes our users have.
    • At Yale, think WordPress. Anyone can request a site. There are already-built resources. It can be used for courses, working groups, projects, etc. It can be public, private or community-only.
  3. A specialized service which we have licensed or built, which has a high degree of complexity. It can be used for many different purposes. You can use it a little or a lot. The learning curve is steep. Whether it’s someone else’s or not, we bought it and we provide it and so even if we don’t own it 100%, we get the blame when things go wrong.
    • Think a sophisticated digital asset management service, or even Adobe’s Creative Cloud suite, which we license and which (in aggregate) is off-the-charts in complexity.

As with many choices, it’s really a table. This one has one binary distinction and four scales.

| type | who owns it? | how many functions? | how complex? | number of purposes | learning curve? |
| --- | --- | --- | --- | --- | --- |
| simple, free & easy | someone else | few | simple | one or two | none or trivial |
| our un-fussy service | us | not too many | relatively simple | more than a few, less than a dozen | non-flat |
| “our” high-end service | us | a lot | complex | many, many | steep |

But tables are for nerds like me; a list is more human-readable. That is one of those distinctions we in IT-land often forget, because “I can understand it”––but then, I am not the user.

And unlike in the three bears’ house, in IT-land each of the three sizes is “just right” for somebody. Every user is a Goldilocks who deserves her chair and bed and porridge just the way she likes it.

  • People who come to us for simple functions can be directed to simple tools––even if we don’t own them.
    • And we need to have worked out the use cases well enough so that we can give a short ‘getting started’ document or demonstration.
    • We don’t need to know all the answers––as long as the client knows they are using someone else’s pipes.

Unlike many things in IT-land, the process doesn’t have 86 steps.

  • Write the use case, and identify the three choices.
  • Give your users a clear picture of the use case: who does what.
  • Help the users choose wisely, and help them get the right amount of support for each choice.
  • Advise your users appropriately of the advantages and pitfalls––learning curve, data ownership, privacy, security, longevity, etc.

If you can get the users to share their successes, then others will see what success looks like, and they too may come to recognize that one size seldom fits all, but there is often one size for each user that is “just right.”

––Edward R. O’Neill

If You See Something, Say Something.

I’m not thinking of suspicious packages.

Rather, I’m thinking about the standards and ethics of our profession: folks who support teaching and learning with technology.

In that regard, I saw several things at ELI 2014 which made me want to say something, and that something is basically: “What goes on here? What do we as a profession do? And why can we not have a connected discussion about that?”

1. I saw a keynote speaker present blatantly wrong facts.

Okay. People make mistakes. Sure.

But this presentation pretended to give a ‘scientific’ basis to teaching and learning.

Should conference presentations perhaps be required to use footnotes?

One writing teacher I know asks this of undergraduates. Students must give a handout that includes:

(1) a short prose summary and
(2) a list of references.

Problem solved? Perhaps. But that wasn’t the only conspicuous absence of professional standards on display.

2. I saw a presentation arguing for a certain model of instruction, but the presentation made no reference to other models, nor to any concepts of learning, nor to any existing ideas.

This was an argument in a vacuum.

If we wouldn’t permit undergrads to do it, should we do it ourselves?

This led me to a fear, which I now articulate. (See something, say something.)

Instructional technology as a profession seems to have no clear sense of standards of evidence––nor are these even really a part of the debate.

Think about any other discipline. History. Physics. Kinesiology.

  • You know what counts as evidence.
  • But you debate why some evidence is more meaningful than other kinds.
  • There are different schools and approaches, and they’re forced to duke it out.
  • Some standards and references are shared, some widely, some narrowly, while others are up for grabs.

Why should learning technology not be the same?

Nor are such issues just about evidence.

3. A presentation ostensibly about program evaluation offered no goal for the program, no significant research, and numbers that were blatantly fudged.

Of course, if there is no goal, there can be no measuring. (Measure what?)

In this case I actually asked during the Q&A if there was any theory or concept or idea of learning driving the process. (I couldn’t ask about institutional goals, as the presenters had basically said “The Provost wanted it,” and it was clear no one after that point had even thought to tack on a goal as a fig leaf.)

The answer was: no, we don’t have instructional designers; we have Ph.D.’s. As if planning learning intentionally and being a scholar are somehow mutually exclusive.

It’s easy to understand this. In higher ed, the disciplines are the guardians of standards of knowledge.

  • The psychologists decide what psychology is.
  • The dance teachers decide whether dance is modern or ballet or rolling around on the floor.
  • The English professors decide what counts as literature and literary analysis.
  • Etc.

But it’s shocking to think that (for some at least) this excludes any role for thinking about teaching and learning––or even planning in its most basic sense.

All of which brought me to the terrible near-existential recognition of a central absence.

Instructional technology as a profession seems to have no shared framework for specifying goals and measuring results––hence justifying the value we create (potentially but not only ROI).

  • What kinds of things can we accomplish when we use technology to support learning?
  • What is the size or scope of our interventions?
    • Are we just making it easier to turn in homework?
    • Are we publishing things that were harder to publish before––like lectures?
    • Are we solving psychological problems? Economic problems? Cultural problems?

Of course, some goals are easy to pick out: convenience, efficiency and effectiveness.

  1. At this point in time, convenience reduces largely to what I call x-shifting.
    • Just as the VCR allowed TV shows to be shifted in time and place, now increasingly-smaller computers allow content and experience to be shifted in time, place and platform. These may not be the only forms of convenience, but they’re paramount.
  2. Efficiency is simply doing more with less.
    • We can promise this––but we mustn’t lie: a small-scale study I did at my prior institution showed what I think we all know. With any new technology, you must put in more time at first in order to save time later.
    • This points up a little-mentioned analogy, which really ought to be the core of what we do in learning technology: learning a new technology is itself a species of learning, hence a microcosm for learning-in-general. Helping people learn to use a new technology helps them to re-see with new eyes the phenomenon of learning.
  3. Effectiveness is where we lose all our bearings. Ideally, we’d like to make teaching more effective, for it to generate more learning. But how?
    • What are the drivers of learning? Where are the pedals and the steering wheel? We don’t have a good taxonomy.
      • Better motivation? Sure.
      • Good chunking for better cognitive processing? Okay.
      • Better sequencing of instruction? Absolutely.

But do we have a clear picture of the whole shape of such goals?

I fear not.

When I see something, I can say something.

Recent NEH/DFG Digital Humanities Awards and the Future of Autonomous Projects

The NEH (National Endowment for the Humanities) and DFG (Deutsche Forschungsgemeinschaft) have announced another round of awards for their Bilateral Digital Humanities Program. The program provides support for projects that contribute to developing and implementing digital infrastructures and services for humanities research. They are awarded to collaborative projects between at least one partner based in the U.S. and one partner based in Germany.

This round’s awardees were largely focused on digitization projects, especially text encoding, a focus that seems indicative of the field of digital humanities generally, and in particular of work concerned with “ancient” languages and literatures. The goal of such projects is to create innovative (and hopefully better) ways to present texts in digital format. Part of the innovation is the ability to consider diachronic aspects of literature, especially variant traditions of ancient literature and the critical work associated with the text in question. Additionally, these projects provide ready access to literature that had previously been limited to a few (and generally quite expensive) volumes from a small group of publishers. The well-known and oft-mentioned Perseus Digital Library and the much less well-known Comprehensive Aramaic Lexicon Project provide numerous examples of the benefits of such work. I have used a number of similar projects, including the two mentioned here, during my young academic career, and I can attest to their great benefits.

There are, however, a few drawbacks that seem to accompany these projects. The most central recurring caveat I have experienced is that development seems to stop when the grant funding runs out. While it is certainly understandable that projects cannot continue to develop without funding, this problem is largely the result of the fact that these projects often stand on their own: they are not part of a larger collection to which they contribute. This autonomy creates an environment in which the innovative technology developed by each individual project seems to stagnate along with the project itself. The arrested development of these individual projects creates a considerable disparity between autonomous projects—especially those that focus on relatively obscure content—and projects that are either paid applications (e.g., Accordance Bible Software) or are developed in collaboration with large tech companies (e.g., the Dead Sea Scrolls Digital Project, a collaborative effort between the Israel Museum and Google). I am not criticizing these latter projects. On the contrary, I have used both of these example programs with great relish. Rather, I am lamenting the stagnation of many autonomous projects whose subject matter might be more obscure (relatively speaking, of course), but is vital for a number of scholars’ research.

As the process of text encoding becomes more standardized, it would be interesting to see the development of a digital library that could incorporate these autonomous projects into one central location. This may allow for the continued development of autonomous projects whose dwindling funding limits the participation of their original developers. To be sure, there are obstacles to such grand collaborative work, and, ironically, this sort of project may need to begin as an autonomous project. However, the recent launch of the Digital Public Library of America provides a substantial step toward the further development of a central digital library of various digital materials, and may itself be the very project I would like to see.

I congratulate the program awardees, and very much look forward to experiencing the results of their projects.

Whither RSS Reading?

Are you distressed over the announced demise of Google’s Reader service? Are you a regular reader of RSS? What will you be doing come July 1?

I’m not personally directly affected by this change, since I use Vienna as a desktop reader and Pulse on my mobile devices. (I don’t do a lot of RSS reading on mobile, so it doesn’t matter to me that Pulse won’t bulk-import my feed list from Vienna.) But by the same token, I am concerned about the future of RSS now that one of its major supporters has changed its mind. RSS is one of the high points of open standards online, highlighting what can be done with relatively simple programming. Without denigrating Aaron Swartz, the fact that the RSS 1.0 spec was partly authored by a 14-year-old (albeit a bright and creative 14-year-old) speaks to how elegant it is.
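
That elegance is easy to demonstrate. Here is a minimal sketch using only Python’s standard library; the feed contents, URLs, and function name are invented for illustration, but the structure—a channel containing items, each with a title and link—is the heart of the RSS 2.0 format.

```python
import xml.etree.ElementTree as ET

# A made-up, minimal RSS 2.0 feed: a channel holding items,
# each item carrying a title and a link. That's nearly the whole format.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Academic Technology</title>
    <item>
      <title>Tools and Purposes</title>
      <link>http://example.edu/tools-and-purposes</link>
    </item>
    <item>
      <title>Whither RSS Reading?</title>
      <link>http://example.edu/whither-rss</link>
    </item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Return the title of every item in an RSS 2.0 feed, in order."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # → ['Tools and Purposes', 'Whither RSS Reading?']
```

A feed reader is, at bottom, just this loop run over many feeds on a schedule—which is why so many independent readers could exist in the first place.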

We’re not zealots about Open here at ITG, but it is something we like to choose. Our biggest example is the use on Yale Academic Commons of WordPress and MediaWiki. Both are open-source pieces of software that anyone can install on their own computer and change at will. Both are also open in that anyone can play around with making changes to the code and can even request that the changes get incorporated into the final product. Even without touching the code, there are ways to be involved with the production of these tools that have proven immensely valuable for education. You can review beta versions, contribute documentation, or just tell other people about your experiences with them.

If you would like to try something out, put in a request for a WordPress site or talk to us about wikis. If you want to go beyond what we host here at Yale, we’d love to talk to you about that, too.