We’re glad to see Professor Elihu Rubin’s thoughtful use of technology in his pedagogy getting some notice. Late in the spring, Professor Rubin’s work on Interactive Crown Street caught some news, and a couple of weeks back (don’t ask us how we missed it) an item appeared in Yale News about his investigation with students into New Haven’s infrastructure. Professor Rubin and the students in the cross-listed Architecture and Political Science course created an online guide using Yale’s Academic Commons, an instance of WordPress created and managed by the Instructional Technology Group. Pam Patterson of ITG as well as Ed Kairiss and Edward O’Neill of Educational Technologies supported the course.
I have no monopoly on this insight.
But it’s very enjoyable to be reminded not only how much a very effective and hard-working person can do in a very small space, but also how much Julia Child has to teach us about teaching and learning.
Not long ago, I stumbled across a little half-page essay by Child in a cooking magazine. It’s a miracle of compression. In fewer than 650 words, Child:
- defines quiche,
- tantalizes us with a description of the dish,
- chronicles the dish’s culinary rise and fall,
- whets our appetite for the dish,
- inspires us with a story of how to make the dish more easily,
- gives a recipe for both the crust and the quiche.
All in eight paragraphs! How many of us could teach half as much in twice the space?
What’s going on here? How does Child work this magic?
To start, Julia begins with the result, the end-product: its taste and smell and pleasures. Quiche smells good, and it’s part of welcoming friends into your home. This is very motivating. You think: “I want to do that!”
Second, Julia makes the process easier by dropping unnecessary or more complicated steps. Her inspiring anecdote concerns an anxious neighbor who dreaded making the dough. Julia comforted the neighbor by advising her to skip that hard part: just use a store-bought crust. Who would be the wiser?
The effect was magical: the neighbor brimmed with excitement at her newfound ability.
We may call this “confidence,” but learning experts call it self-efficacy: the feeling of ‘I can do it!’ Apparently, it doesn’t necessarily come from experience. Some people just have more of that feeling. But “experiences of mastery” can boost that feeling of self-efficacy, hence preparing us for greater challenges.
That’s what Julia did with her anxious neighbor: made the task simpler so the neighbor could experience a success and feel more confident to approach a bigger challenge. Julia was quite the psychologist!
Cooking is hard. It’s a complex, multi-step, goal-oriented process. Changing the salt here changes the taste and texture there. Kneading more or less there, changing the temperature elsewhere: each of these changes the end result. So you might have to go through the same process many, many times to get each step just right; otherwise the end result may be inedible.
It’s difficult to do something without knowing where you’re headed. And too much challenge overwhelms us. So these two strategies work very well: (1) emphasize the end result, the goal, and (2) simplify the complex process, in part by reducing the steps.
So are these two clever methods something that applies only to cooking? Hell no!
There are simple names for these methods. The first is sometimes called reverse-engineering. You take the end result and you take it apart to see how the pieces fit together.
The second is an old educational method called (depending on the context) chunking or scaffolding.
- The idea of chunking is simple: the average person’s memory holds only so much information. So to retain a long string of info, break it into smaller bits. We do this all the time when we separate phone numbers into three- and four-digit chunks.
- The idea of scaffolding is simply to help the learner by building a supportive structure around him, like the scaffolds built around a building as it’s being assembled.
- You give someone pre-assembled bits of the work, so the work is easier, and then over time the learner is able to do the more complex task.
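The chunking idea above can even be sketched in code. This is just a toy illustration (the function name and the phone-number chunk sizes are my own, not any standard):

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Break a long string of digits into smaller, memory-friendly pieces,
    the way phone numbers are grouped into three- and four-digit chunks."""
    pieces, start = [], 0
    for size in sizes:
        pieces.append(digits[start:start + size])
        start += size
    return pieces

print(chunk("2035551234"))  # → ['203', '555', '1234']
```

Ten digits are hard to hold in mind; three short pieces are easy. That is the whole trick.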
If you teach or train, you can apply the same methods to any complex, multi-step, goal-oriented process.
- Say you want students to learn to write solid argumentative prose.
- You might start by asking them to take some apart. Identify the argument, the evidence, the reasoning. Reverse-engineer what good writing is.
- Then you might give the students the argument and have them support it with evidence and reasoning. Or give them the argument and a pile of evidence and have them pick which evidence supports and which undermines the argument.
- You can do the same thing with scientific experimentation.
- You might start by giving students a finished scientific paper supporting a conclusion based on hypothesis-testing. Working backwards, you can ask the students to explain why this particular experimental method was used, why another one would not have worked.
- Or you can give the students the experiment to run so they can collect the data, or give them the hypothesis and the data and ask them to analyze the data to see if it supports the hypothesis or not.
In short, Julia Child was certainly a miraculously gifted teacher. And like all gifted people, she worked tremendously hard. But her gifts and hard work follow underlying principles. And one of the inspiring things about Julia Child is how much we can learn from her about teaching. Namely:
Any complex, multi-step, goal-oriented process:
(1) can be practiced forwards or backwards––and should be––
(2) can be practiced from any step or ‘moment’ in the process to the next––and should be so practiced, e.g., by providing pre-fabricated materials for each step in the process and asking the learner to use them so as to lighten the burden of learning.
Learning is hard.
- Learning anything complex is harder.
- Learning a multi-step process is hard.
- Orienting all your thoughts and behavior towards one goal is hard.
- Doing them all together is very, very challenging.
Julia Child knew how to lower the difficulty level while keeping us stimulated by the excitement and challenge of a meaningful goal. And she did this the same way we should: by starting from the end, working backwards, and making the steps easier by simplifying them or practicing them separately.
––Edward R. O’Neill
cross-posted to/from blogspot
The gift of instructional technology is tools:
- little tools that do one or two things brilliantly,
- big tools that do many powerful things quickly,
- the constant innovation which makes what is hard one day just a click away the next.
And the bane of instructional technology is: tools.
- Little tools that do a few things poorly,
- big tools so big they are slow and cumbersome and suck up your time,
- the constant innovation which takes away your sanity and causes us all to chase the delusion of endless “improvement” which is often only: the need to keep up and to seem to be improving.
Tools are wonderful. Tools are dreadful. When they are new and work, they are magic. When they age and break, they are worse than inert: they aggravate and infuriate; they are deader than the proverbial doornail. And it all happens very, very fast.
Tools are the how, not the why, mere means to ends, and therein lies the problem.
In higher education we are concerned primarily not with means but ends. The human being is the ultimate (earthly) end: her life and purpose and her ability to use her freedom to choose that purpose and to build that life however she sees fit in an understanding that emerges quickly or slowly, early or late, and sometimes even: just in the nick of time.
We subvert the entire meaning of our enterprise when we fixate upon means––tools, that is––and measure those tools only against other tools and not against the purposes towards which our mission points us.
But think about tools we must, for we are IT, and it’s what we do. And so we struggle endlessly against the tendency to focus on the how and to forget the why. It is a mental struggle. It is a moral struggle. Sometimes it almost seems like a physical struggle: a gripping in the pits of our stomachs and an itching and tingling in our legs. As long as we live and breathe tools, we will always be uneasy.
What is the prescription for this unease? How in higher ed can we focus away from the tool and towards the ends?
One way is to focus not on the tool but rather on the use case.
A use case is a term of art. It sounds fancy but it’s simple. A use case is a story. It’s a picture of some things a user does. It’s journalistic: like the “lede,” that first part of the news story that gives you the whole picture but also whets your appetite to know more.
Write a journalistic “lede” without the “how,” and you have a use case: the problem to be solved, the thing our users need to do, the reason that they come to us, their purpose, their ‘end.’
- Who?
- Does what?
- When and where?
- And why?
- To achieve what?
Subject. Verb. Circumstances. Purpose. A use case is a sentence writ large, exploded into steps. It could almost be the panes of a comic.
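One way to picture that anatomy is as a tiny record. A minimal sketch (the field names and the example are my own illustration, not a formal standard):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Subject. Verb. Circumstances. Purpose: a sentence writ large."""
    subject: str        # who?
    verb: str           # does what?
    circumstances: str  # when and where?
    purpose: str        # to achieve what?

# A hypothetical use case, written as data rather than prose:
case = UseCase(
    subject="an instructor",
    verb="posts weekly readings",
    circumstances="before each seminar meeting",
    purpose="so students arrive prepared to discuss",
)
print(f"{case.subject} {case.verb} {case.circumstances}, {case.purpose}.")
```

Collapse the fields and you get the sentence back; explode the sentence and you get the panes of the comic.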
And we are the ones who help to figure out the ‘how.’
For many use cases, I would argue, the ‘how’ should come in three sizes.
Just as in the storybook bears’ house, in IT-land solutions come in three sizes. Like the bears’ story, it’s a fairy tale: there aren’t really just three sizes. And they aren’t just sizes; they’re bundles of traits: ownership, complexity, flexibility, and more.
But three is a good number, because looking at and choosing amongst five or seven or ten things is harder. So we in higher ed IT do well to recommend tools in three sizes and kinds.
- A free and easy consumer service with just a few functions. It’s not meant for professional use but it’s adaptable for many purposes. It’s not hard to use, though finding all the tricks can take time. And we don’t own it.
- Think Flickr for photos, YouTube for video, Dropbox for file sharing, SlideShare for publishing presentations, etc.
- We don’t care that we don’t own it. We just need to give the proper warnings about where the data lives, who has access to it, whether the data can be sucked out, our lack of control, etc.
- A free service which has functions robust, numerous, and flexible enough that it can be used for many purposes. It takes time to learn, but the learning curve is not steep. And we own and offer and support it, which means it’s geared more towards the kinds of purposes our users have.
- At Yale, think WordPress. Anyone can request a site. There are already-built resources. It can be used for courses, working groups, projects, etc. It can be public, private or community-only.
- A specialized service which we have licensed or built, which has a high degree of complexity. It can be used for many different purposes. You can use it a little or a lot. The learning curve is steep. Whether it’s someone else’s or not, we bought it and we provide it and so even if we don’t own it 100%, we get the blame when things go wrong.
- Think a sophisticated digital asset management service, or even Adobe’s Creative Cloud suite, which we license and which (in aggregate) is off-the-charts in complexity.
As with many choices, it’s really a table. This one has one binary distinction and four scales.
| type | who owns it? | how many functions? | how complex? | number of purposes | learning curve? |
| --- | --- | --- | --- | --- | --- |
| simple, free & easy | someone else | few | simple | one or two | none or trivial |
| our un-fussy service | us | not too many | relatively simple | more than a few, less than a dozen | non-flat |
| “our” high-end service | us | a lot | complex | many, many | steep |
But tables are for nerds like me; a list is more human-readable. This is one of those distinctions we in IT-land often forget: “I can understand it,” we think. But I am not the user.
And unlike in the three bears’ house, in IT-land each of the three sizes is “just right” for somebody. Every user is a Goldilocks who deserves her chair and bed and porridge just the way she likes it.
- People who come to us for simple functions can be directed to simple tools––even if we don’t own them.
- And we need to have worked out the use cases well enough so that we can give a short ‘getting started’ document or demonstration.
- We don’t need to know all the answers––as long as the client knows they are using someone else’s pipes.
Unlike many things in IT-land, the process doesn’t have 86 steps.
- Write the use case, and identify the three choices.
- Give your users a clear picture of the use case: who does what.
- Help the users choose wisely, and help them get the right amount of support for each choice.
- Advise your users appropriately of the advantages and pitfalls––learning curve, data ownership, privacy, security, longevity, etc.
If you can get the users to share their successes, then others will see what success looks like, and they too may come to recognize that one size seldom fits all, but there is often one size for each user that is “just right.”
––Edward R. O’Neill
I’m not thinking of suspicious packages.
Rather, I’m thinking about the standards and ethics of our profession: folks who support teaching and learning with technology.
In that regard, I saw several things at ELI 2014 which made me want to say something, and that something is basically: “What goes on here? What do we as a profession do? And why can we not have a connected discussion about that?”
1. I saw a keynote give blatantly wrong facts.
Okay. People make mistakes. Sure.
But this presentation pretended to give a ‘scientific’ basis to teaching and learning.
Should conference presentations perhaps be required to use footnotes?
One writing teacher I know asks this of undergraduates. Students must give a handout that includes:
(1) a short prose summary and
(2) a list of references.
Problem solved? Perhaps. But that wasn’t the only conspicuous absence of professional standards on display.
2. I saw a presentation arguing for a certain model of instruction, but the presentation made no reference to other models, nor to any concepts of learning, nor to any existing ideas.
This was an argument in a vacuum.
If we wouldn’t permit undergrads to do it, should we do it ourselves?
This led me to a fear, which I now articulate. (See something, say something.)
Instructional technology as a profession seems to have no clear sense of standards of evidence––nor are these even really a part of the debate.
Think about any other discipline. History. Physics. Kinesiology.
- You know what counts as evidence.
- But you debate why some evidence is more meaningful than other kinds.
- There are different schools and approaches, and they’re forced to duke it out.
- Some standards and references are shared, some widely, some narrowly, while others are up for grabs.
Why should learning technology not be the same?
Nor are such issues just about evidence.
3. A presentation ostensibly about program evaluation offered no goal for the program, no significant research, and numbers that were blatantly fudged.
Of course, if there is no goal, there can be no measuring. (Measure what?)
In this case I actually asked during the Q&A if there was any theory or concept or idea of learning driving the process. (I couldn’t ask about institutional goals, as the presenters had basically said “The Provost wanted it,” and it was clear no one after that point had even thought to tack on a goal as a fig leaf.)
The answer was: no, we don’t have instructional designers; we have Ph.D.’s. As if planning learning intentionally and being a scholar are somehow mutually exclusive.
It’s easy to understand this. In higher ed, the disciplines are the guardians of standards of knowledge.
- The psychologists decide what psychology is.
- The dance teachers decide whether dance is modern or ballet or rolling around on the floor.
- The English professors decide what counts as literature and literary analysis.
But it’s shocking to think that (for some at least) this excludes any role for thinking about teaching and learning––or even planning in its most basic sense.
All of which brought me to the terrible near-existential recognition of a central absence.
Instructional technology as a profession seems to have no shared framework for specifying goals and measuring results––hence justifying the value we create (potentially but not only ROI).
- What kinds of things can we accomplish when we use technology to support learning?
- What is the size or scope of our interventions?
- Are we just making it easier to turn in homework?
- Are we publishing things that were harder to publish before––like lectures?
- Are we solving psychological problems? Economic problems? Cultural problems?
Of course, some goals are easy to pick out: convenience, efficiency and effectiveness.
- At this point in time, convenience reduces largely to what I call x-shifting.
- Just as the VCR allowed TV shows to be shifted in time and place, now ever-smaller computers allow content and experience to be shifted in time, place, and platform. These may not be the only forms of convenience, but they’re paramount.
- Efficiency is simply doing more with less.
- We can promise this––but we mustn’t lie: a small-scale study I did at my prior institution showed what I think we all know. With any new technology, you must put in more time at first in order to save time later.
- This points up a little-mentioned analogy, which really ought to be the core of what we do in learning technology: learning a new technology is itself a species of learning, hence a microcosm for learning-in-general. Helping people learn to use a new technology helps them to re-see with new eyes the phenomenon of learning.
- Effectiveness is where we lose all our bearings. Ideally, we’d like to make teaching more effective, for it to generate more learning. But how?
- What are the drivers of learning? Where are the pedals and the steering wheel? We don’t have a good taxonomy.
- Better motivation? Sure.
- Good chunking for better cognitive processing? Okay.
- Better sequencing of instruction? Absolutely.
But do we have a clear picture of the whole shape of such goals?
I fear not.
When I see something, I can say something.
The NEH (National Endowment for the Humanities) and DFG (Deutsche Forschungsgemeinschaft) have announced another round of awards for their Bilateral Digital Humanities Program. The program provides support for projects that contribute to developing and implementing digital infrastructures and services for humanities research. They are awarded to collaborative projects between at least one partner based in the U.S. and one partner based in Germany.
This round’s awardees were largely focused on digitization projects, especially text encoding, which seems indicative of the general field of digital humanities, especially the part concerned with “ancient” languages and literatures. The goal of such projects is to create innovative (and hopefully better) ways to present texts in digital format. Part of the innovation is the ability to consider diachronic aspects of literature, especially variant traditions of ancient literature and the critical work associated with the text in question. Additionally, these projects provide ready access to literature that had previously been limited to a few (and generally quite expensive) volumes from a small group of publishers. The well-known and oft-mentioned Perseus Digital Library and the much less well-known Comprehensive Aramaic Lexicon Project provide numerous examples of the benefits of such projects. I have used a number of similar projects, including these two, during my young academic career, and I can attest to their great benefits.
There are, however, a few drawbacks that seem to accompany these projects. The most central recurring problem I have experienced is that development seems to stop when the grant funding runs out. While it is certainly understandable that projects cannot continue to develop without funding, the problem is largely a result of the fact that these projects often stand on their own, meaning they are not part of a larger collection to which they contribute. This autonomy creates an environment where the innovative technology developed by each individual project seems to stagnate with the project itself. The arrested development of these individual projects creates a considerable disparity between autonomous projects (especially those that focus on relatively obscure content) and projects that are either paid applications (e.g. Accordance Bible Software) or are developed in collaboration with large tech companies (e.g. the Dead Sea Scrolls Digital Project, a collaborative effort between the Israel Museum and Google). I am not criticizing these latter projects; on the contrary, I have used both of these example programs with great relish. Rather, I am lamenting the stagnation of many autonomous projects whose subject matter may be more obscure (relatively speaking, of course) but is vital for a number of scholars’ research.
As the process of text encoding becomes more standardized, it would be interesting to see the development of a digital library that could incorporate these autonomous projects into one central location. This may allow for the continued development of autonomous projects whose dwindling funding limits the participation of its original developers. To be sure, there are obstacles to such grand collaborative work, and, ironically, this sort of project may need to begin as an autonomous project. However, the recent launch of the Digital Public Library of America provides a substantial step toward the further development of a central digital library of various digital materials, and may itself be the very project I would like to see.
I congratulate the program awardees, and very much look forward to experiencing the results of their projects.
I’m not personally directly affected by this change, since I use Vienna as a desktop reader and Pulse on my mobile devices. (I don’t do a lot of RSS reading on mobile, so it doesn’t matter to me that Pulse won’t bulk-import my feed list from Vienna.) But by the same token, I am concerned about the future of RSS now that one of its major supporters has changed its mind. RSS is one of the high points of open standards online, highlighting what can be done with relatively simple programming. Without denigrating Aaron Swartz, the fact that the RSS 1.0 spec was partly authored by a 14-year-old (albeit a bright and creative 14-year-old) speaks to how elegant it is.
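To give a flavor of that simplicity: a feed is just a small, human-readable XML document. A minimal sketch using Python’s standard library (the feed below is an invented RSS 2.0-style snippet; real feeds carry more metadata):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0-style feed. The whole format is
# just a small, readable XML vocabulary: a channel holding items.
feed = """<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

# Parsing it takes a handful of lines -- no special libraries needed.
channel = ET.fromstring(feed).find("channel")
titles = [item.findtext("title") for item in channel.findall("item")]
print(titles)  # → ['First post', 'Second post']
```

That a reader can be bootstrapped from a dozen lines of standard-library code is part of why the format caught on.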
We’re not zealots about Open here at ITG, but it is something we like to choose. Our biggest example is the use on Yale Academic Commons of WordPress and MediaWiki. Both are open-source pieces of software that anyone can install on their own computer and change at will. Both are also open in that anyone can play around with making changes to the code and can even request that the changes get incorporated into the final product. Even without touching the code, there are ways to be involved with the production of these tools that have proven immensely valuable for education. You can review beta versions, contribute documentation, or just tell other people about your experiences with them.