Faculty Bulldog Days Review

It's all over but the reflection for the professors and for the CTL organizing staff, and I have finished sitting in on three classes during Faculty Bulldog Days* for spring 2015. Here are some thoughts about that.

Before I talk about the teaching, I can't thank enough the professors who volunteered to have someone come observe their class. We don't have a strong and pervasive culture of openness at Yale, so I thank the professors for standing up and making their teaching work more visible. In the same breath, I want to thank the students in the classes for having a stranger (two, in one of the classes I attended) in their midst. The largest of the class sessions I attended maxed out at 20 students, making interlopers noticeable. Naturally, the five-student class had discussed opening up beforehand, but even the others accommodated visitors seamlessly.

So what about that teaching? Because the sign-up form didn't have a box on it to check for "Yes, I would like any minor mistake or idiosyncrasy made in my class to be splashed across a low-traffic instructional technology blog", I'll only mention things I noticed and liked. (Try not to chafe too much at the vagaries, because even revealing the discipline of a class would pull back the curtain a little too much.)

  • A particularly nice technique I saw was using an un-articulated motif in the class but then at some point in the session raising the motif to a conscious level. If activating prior knowledge contributes to learning, working with this idea at varying scales of "prior" — even within one class — makes sense.
  • Another teacher, in an effort that seemed effective, very noticeably phased in participation over the course of the class. Students engaged in heavier lifting at the beginning, with the professor only nudging along; as the discussion got denser and more challenge-laden (in a good academic way, I thought), the professor increasingly helped portage.
  • In a final example, and at the risk of being banal, one teacher engaged very personally with the work under discussion. Fortunately, the work was comedic, so laughter demonstrated their** engagement, but that personal commitment can make the difference for some students.

Taking on affective filters is a fine line, of course: Are you giving students a glimpse into personal meaning or risking scaring them off something they don't connect with in the same way? My bias is for not hiding how you feel about what you're teaching, for not pretending that scholars hold absolutely everything at arm's-length. By the same token, of course, you have to model critical engagement with the topic and critical engagement with how you feel about it.

I pepper my thoughts with conditionals and hedging because this was drive-by observation. Some classes gave me prep work, some didn't. Even so, all the people involved in these classes had worked with and through scores of ideas, hundreds of pages of reading, and hours of lecture and/or discussion before I got there, context without which I can't form any strong conclusions. This highlights one of the difficulties in mounting this sort of event. While there's no explicit pressure to participate, the implicit social expectations don't go away. If you're an untenured faculty member teaching in front of a high-ranking administrator, who may come from a field with radically different teaching traditions, how do you keep it together? There's enough potential benefit (and actual benefit, for me) in this event that I hope we do it again, but I hope we never stop trying to make sure it's a scaffolding exercise for the participating faculty rather than an unrewarding chore.

* Honestly, I wish we'd called it something like Classroom Open House or Sharing Our Teaching, or similar, as I don't make the same associations with a prospective student event that I do with this. I do hope, though, that prospective faculty hires are indeed able to sit in on a class or three, and not just in their department of recruitment, during their visits here.

** Gender-obscuring pronouns. Live it, love it.

In an item in yesterday's Yale Daily News about Yik Yak, one professor is quoted as seeing potential there:

[Aleh] Tsyvinski said that as a professor, he rarely gets feedback during the term. He added that he wishes there were an anonymous board, similar to Yik Yak, dedicated to continuous feedback.

Alina Nevins wins Spot Award

Our very own Alina Nevins won a CIO Spot Award from ITS. From the website:

A Support Technician wrote on behalf of several members of the ITS Help Desk to express thanks to Academic Technologist Alina Nevins, who wrote high-quality knowledgebase articles supporting Classes*v2. “The articles are very well written and clearly spell out what we need to know,” he wrote. “I received a number of emails about Classes*v2 this morning. The articles made it very easy for me to provide information to the clients.”

From ITG

Alina not only wrote high-quality knowledgebase articles supporting Classes*v2; she has also stepped into the very large support shoes of our dear retired colleague, Gloria Hardman (and is doing an extraordinary job of it). Alina offered hands-on training to Help Desk staff on the use of V2, our first service with tier-one support at the help desk. She manages the V2 queue and our great staffer Jennifer Colafrancesco's work on the service, has mastered Drupal for course and educational technology services support, has just received a new "Senior Academic Technologist" title, and is an all-around great team member.

Thanks, Alina, and congrats from the crew!

ELI 2015 Conference Notes

Highlights from the ELI 2015 conference in Anaheim, CA (besides the 75-degree weather).

BlendKit - From my alma mater, the University of Central Florida, this MOOC/resource helps faculty and institutions create blended learning courses. From the website: The goal of the BlendKit Course is to provide assistance in designing and developing your blended learning course via a consideration of key issues related to blended learning and practical step-by-step guidance in helping you produce actual materials for your blended course (i.e., from design documents through creating content pages to peer review feedback at your own institution).

The Symbiotic Research Toolkit - A research toolkit for students, from Georgia University. The idea is that students don't always know how to use the internet as a resource for research. It might be a good resource in the CTL.


Not Everyone Gets a Trophy - Mark De Vinck, Dexter F. Baker Professor of Practice in Creativity, Lehigh University

Outcomes: Understand the importance of creativity as it relates to innovation, understand the value of hands-on learning, and learn how to teach failure without failing.

This faculty member runs a maker lab at the university and provides structured lessons to help students overcome failure, teaching them to be persistent and resilient in the face of failures. Each student keeps an inventor's notebook that documents their attempts and processes, as well as ideas arising from real hands-on work on problems. He's found the most useful boost for innovation is the creation of a safe space for students to explore all ideas and to approach obstacles as opportunities for learning. He claims that all that's needed for a maker space is a couch, a popcorn maker, and coffee. De Vinck talks about using systematic creativity (nicely defined by Mindfolio here) and the six hats of creativity, defined as follows (found on Wikipedia):

Six distinct directions are identified and assigned a color. The six directions are:

  • Managing Blue - what is the subject? what are we thinking about? what is the goal?
  • Information White - considering purely what information is available, what are the facts?
  • Emotions Red - intuitive or instinctive gut reactions or statements of emotional feeling (but not any justification)
  • Discernment Black - logic applied to identifying reasons to be cautious and conservative
  • Optimistic response Yellow - logic applied to identifying benefits, seeking harmony
  • Creativity Green - statements of provocation and investigation, seeing where a thought goes

Takeaways: I enjoyed this talk and the professor's enthusiasm. I wonder whether we could incorporate this type of problem solving into humanities courses. These innovative maker spaces work well in engineering and the other sciences, but what about in Public Humanities or Public Health? Perhaps any field where students work as a group to discover underlying concepts might benefit from a systematic approach to thinking innovatively.

harvardcrestLearning at Scale and the Science of Learning - Justin Reich, Richard L. Menschel HarvardX Research Fellow, Harvard University

Outcomes: Learn how to distinguish between participation data (which is abundant) and learning data (which is scarce), learn about a taxonomy of current research approaches ranging from fishing in the data exhaust to design research in the core, and understand the importance of randomized experiments (A/B testing) to advancing the science of learning.

At the last ELI conference, in New Orleans in 2014, MOOCs had a high profile; they were reaching a peak of interest and a flurry of activity. Now that we've got a body of work done, we seem to be entering a time of deep assessment of the outcomes. Reich has written a few white papers about his research on HarvardX (one found here). Just as there is diversity in learning experiences, there is also diversity of goals, and that diversity is central to understanding the enterprise. There is a difference between measures of activity and measures of learning: there's a lot of data about what people click on, but not about what goes on in their heads. The question is: What kinds of things are students doing that are helping learning outcomes?

Reich believes we should reboot MOOC research, and he offered suggestions for how we might do more research on the learning rather than the engagement (clicks).

Improving structures for assessment

  1. measure full range of competencies
  2. measure change in competency over time
  3. borrow validated assessments from elsewhere

MOOC research has the following options at this time:

  1. fishing in the exhaust (mining the click-tracking data)
  2. experiments in the periphery - domain-independent studies (they don't have anything to do with the discipline being taught, e.g., learning styles or course-design options), which means you can plop them into different domains (disciplines)
  3. design research in the core - helps explain how to get students past a barrier or threshold and how to help them learn a course's core concepts better

Takeaways: I thought this talk highlighted just how hard it is to create meaningful assessments of learning in an area that has such a diverse set of students. It would seem to me that assessment might be based on the goals of the groups who want information about how MOOCs are doing. Faculty would probably be interested in assessing whether students understand the core concepts of a course; administrators might be more interested in enrollment numbers and completion rates (perhaps the amount of clicking); and students are probably interested in ease of use and the quality of the material. This last group is probably the hardest to understand in terms of goals, since it's a broad group spanning many countries.


Frontiers of Open Data Science Research - Stephen Howe, Director of Technical Product Management-Analytics, McGraw-Hill Education.

Outcomes: Learn how data science is being applied to gain new insights about learner and instructor behavior.

The next generation of education technology will use open data to provide a foundation for learning analytics and adaptive learning. The new frameworks will give continuous, personalized feedback to help align curricula and assessments and to help students make course corrections in their learning. Using open data, educators can provide measurements and reports that will affect learning outcomes. There are three areas that can be explored:

  1. Description - baseline requirements
  2. Prediction - attention lost? off course? doesn't understand?
  3. Prescription - how to adjust; adaptive learning

Using adaptive learning environments that provide real-time feedback, students will have a pathway between what is known and what should be learned next. But how can we tap the power of adaptive products that are locked into proprietary software? Howe stresses the need for open architectures and standardized data output.

  • IMS Standards
  • LTI - interoperability for integration and identity
  • QTI - assessment interoperability
  • Caliper - a new standard for common data exchange (JSON)
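For a concrete sense of what "common data exchange (JSON)" means here, the sketch below builds a Caliper-style event as a Python dict. It is loosely modeled on the actor/action/object shape the standard describes; the specific field names, IDs, and URLs are illustrative assumptions, not taken from the talk or copied from the spec.

```python
import json

# A hedged sketch of a Caliper-style learning event: actor, action, and
# object expressed as a JSON-serializable structure. All identifiers and
# context URLs below are made-up placeholders for illustration.
event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",
    "@type": "NavigationEvent",
    "actor": {"@id": "https://example.edu/users/554433", "@type": "Person"},
    "action": "navigatedTo",
    "object": {
        "@id": "https://example.edu/courses/econ101/reading/1",
        "@type": "DigitalResource",
    },
    "eventTime": "2015-02-15T10:15:00.000Z",
}

# A "sensor" in a learning tool would serialize the event and send it
# to an analytics endpoint; any consumer speaking the same common
# language can then parse it.
payload = json.dumps(event, indent=2)
```

The point of a shared envelope like this is that every tool emits the same shape, so the analytics side never needs per-tool parsers.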

Howe shared a graphic of three main areas. The first is the source of learning events, which are converted into a common data language. From that common language, input APIs feed a learning-analytics storehouse where the data is sorted. From that storehouse, output APIs publish to products (phones, computers, dashboards, etc.). Howe claims you must start with the product that is trying to answer a question (the goal outcome), and then backwards-architect how you sort the data.
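Howe's three-stage graphic might be sketched in miniature like this; every function, field name, and value below is a made-up placeholder for illustration, not anything from McGraw-Hill's actual architecture.

```python
# A loose sketch of the flow described above: source-specific events are
# normalized into a common language, input APIs file them into an
# analytics storehouse, and output APIs answer a product's question.

def normalize(raw_event):
    """Translate a source-specific event into the common data language."""
    return {"actor": raw_event["user"],
            "action": raw_event["verb"],
            "object": raw_event["target"]}

store = []  # stands in for the learning-analytics storehouse

def input_api(common_event):
    """Input API: accept a normalized event into the storehouse."""
    store.append(common_event)

def output_api(action):
    """Output API: answer a product's question ("who did X?") by working
    backward from the goal outcome to the stored, sorted data."""
    return [e for e in store if e["action"] == action]

input_api(normalize({"user": "s1", "verb": "viewed", "target": "quiz-3"}))
viewed = output_api("viewed")  # a dashboard or phone app would render this
```

Starting from `output_api` and designing backward is the "backwards-architect" move Howe describes: the question a product must answer dictates how the store is organized.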

Takeaway: The crux of the biscuit is always the open and standardized data source. That's hard enough to do across a single institution, let alone across many. I don't believe we've done enough here at Yale to leverage product APIs and LTIs in our LMS, but I know it's in our sights and on our roadmaps. The future frontier looks bright.

Overall, I believe the conference themes that resonated throughout were learning analytics, hands-on learning assignments that give students the opportunity to fail and try again, and competency-based learning objectives. And did I mention it was really warm and sunny there?

Timeline + Map Web Tool Comparison

I prepared this brief for a pair-taught course on monasticism, in which the professors wanted to explore using chronological, locative, and narrative data from historical, ethnographic, archaeological, literary, and visual sources to facilitate sophisticated comparative analysis. In particular, they hoped students would make connections and distinctions between phenomena that were non-obviously juxtaposable. They wanted to present this data visually on a website, using both a timeline and a map, with some navigational latitude available to site visitors.

Best Options

I've bolded the most salient items for each option.

Neatline

Pros

  • Allows points, lines, and polygons for representing locative data.
  • Date ambiguity representation
  • Sophisticated object metadata
  • Baselayer choices
  • Active development, at an academic institution
  • Self-hosted

Cons

  • Nontrivial learning curve
  • Sophistication accompanied by sophisticated interface that can be distracting/annoying for students. Requires solid explanation and clearly defined metadata requirements
  • Interface impermanence
  • Standalone, no embedding

Representative Example: Ibn Jubayr

TimeMapper

Pros

  • Google spreadsheet for data store (with implication of using Google Form for student contributions)
  • Data decoupled from presentation
  • Wide range of media embedding
  • Responsive design
  • Embeddable in other sites, such as WordPress
  • Good with BCE dates
  • Simple setup and use

Cons

  • Interface impermanence
  • Limited customizability, though can be deployed to Heroku
  • Uncertain development, sponsored by not-for-profit

Representative Example: Panhellenic Competition at Delphi

Timemap.js

Pros

  • Multiple options for data, including both Google spreadsheet (with implication of using Google Form for student contributions) and local
  • Data potentially decoupled from presentation
  • Self-hosted
  • High level of GUI customizability
  • Baselayer choices (though more limited than Neatline)

Cons

  • Old code
  • Interface impermanence
  • Mobile interface unknown
  • Embeddable as an IFRAME only, usability unclear

Representative Examples: Google spreadsheet with additional arbitrary data points, Themed data

MyHistro

Pros

  • Entries commentable
  • Dedicated iOS application for mobile use
  • Clear data export to CSV, KML, and PDF
  • Embeddable
  • Easy to use points, lines, polygons
  • Variable placemark colors
  • Can 'play' the timeline like a slideshow
  • Semi-automatic semi-multilinguality

Cons

  • Tightly coupled data and presentation
  • Privileges linear reading, though not a requirement
  • Unsophisticated design
  • BCE dates don't seem to get calculated and stored accurately

Representative Example: Early Mesopotamia

Commonalities

In all cases, you'll have to identify distinct start and end dates rather than using century-level notation. The individual dates don't have to be more precise than a year. BCE dates are usually entered by prepending a minus sign to the year (e.g., -200 is 200 BCE).
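Since all of these tools want explicit start and end years, a tiny helper for converting century-level notation into the minus-sign convention can save arithmetic slips. This is a hypothetical convenience function, not a feature of any of the tools above.

```python
def century_to_years(century, bce=False):
    """Return (start_year, end_year) for a numbered century.

    The 3rd century CE runs from year 201 through 300; the 3rd century
    BCE runs from 300 BCE through 201 BCE, written -300 to -201 in the
    minus-sign convention these timeline tools expect.
    """
    start, end = (century - 1) * 100 + 1, century * 100
    if bce:
        start, end = -end, -start
    return start, end
```

So a student entering "3rd century BCE" would record a start date of -300 and an end date of -201.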

Additionally, it always needs to be said that at any moment, development on any of these might cease or changes in browsers and student browser usage might render the code unusable. Even on the options being actively developed, the development team might make a material change in the interface, altering substantially how it looks and works. Other technological or cultural changes can't be ruled out.

Other Options

Most other choices focus on locative storytelling, constraining a visitor to moving along a linear path:

Using Online Photo Albums for a Shot Breakdown Assignment


The online photo album is not a complicated tool. But it can be used to do more than share pictures of junior’s first trip to the zoo.
While prose writing is still a crucial skill for our students to acquire, they often need to analyze visual materials or to present analytical material in some visual form. Online photo albums are a simple way to do this. The photo album doesn’t do the work for you; you still need to know:

  • what you’re saying,
  • what the tool does,
  • what your options are, tool-wise and more basically.

A familiar assignment in film studies is the sequence analysis, or shot breakdown.

  • The student analyzes a short clip.
  • She makes a table representing formal features–e.g.,
    • how many close-ups vs. long shots,
    • how many still vs. moving-camera shots,
    • etc.
  • Then the student typically writes a prose essay summarizing the results.

An online photo album can make a good alternative or additional assignment.

  • A visual presentation can be an intermediary assignment leading towards a prose essay.
  • Or it can represent a visual summary, an expression of some of the same ideas in a different form.

Trailer Shot Breakdown Analysis
A photo album will not capture all the nuance of prose, but reducing one’s arguments to a few points, or focusing on developing only a small compass of ideas, can have a wonderfully clarifying effect. It’s like writing an abstract with pictures.
Ideas can be expressed in a photo set verbally or through the arrangement of images.

  • In the case of a verbal commentary or analysis, that can be put:
    • in the caption of each image;
    • or it can be inserted through explanatory slides created in Powerpoint or Keynote (or similar).
  • In the case of the visual elements, even changing the sequence of the images allows students to explore different ways of creating meaning. The images can be sorted:
    • sequentially, as when representing the shots of a scene in the order in which they appear in the film; or the images may be sorted
    • logically into types or kinds. In this case, the kinds represent the argument.

To show these options, I have grabbed still frames from the first two minutes or so of The Maltese Falcon.

  • This is a familiar movie, and the opening is (or used to be) used to demonstrate editing and its analysis in Bordwell and Thompson’s Film Art textbook.
  • Normally, a student would analyze a scene or other more-organic unit of a feature film. In this case, I’ve studied only a fragment, as that simplifies the demonstration.


Captions May Supply the Verbal Content.

This Google Plus Photo album adds captions to screenshots to call out important dimensions of the clip (like framing and duration), while other important elements of storytelling are previewed and narrated.

  • Clicking on an image brings up the image on one side of the screen and the caption on the other.
  • Adjusting the browser’s degree of zooming allows the caption to take up more space on the screen, thus making a good balance of image and analysis.
  • As an added bonus, a ‘backdoor’ lets you create an embeddable Flash-based slideshow you can put in Classes*v2, WordPress, etc.

    • The annotation obscures the image, but it’s an interesting option.
    • N.B. To do this, the photo album must be set to public.

Slides May Supply the Verbal Content.

The verbal analytical dimension may be added by generating text slides in Keynote or Powerpoint to create a ‘visual essay’ which analyzes the clips using brief text slides interspersed between frames representing the shots.

  • This example analyzes the shots in order.
    • The text slides preview key things to look for in the subsequent few slides.
      • This approach suggests the viewer will go back and forth to connect the argument and the evidence.
  • A different approach is to organize the shots by type or kind in some way.
    • This album counts how many shots of specific types and then labels and sorts them.
    • By labeling, sorting and re-arranging the elements one is analyzing, patterns emerge that may not have been apparent in the un-remixed text.
      • This could be called the basis of much analytical thinking: re-sorting experiences to compare them and find non-apparent patterns and resemblances.
    • What counts as a ‘type’ can be motivated by one’s curiosities or observations.
      • In this case, the symmetries and asymmetries of gender are revealed in neat ways by sorting the shots by various formal and other criteria: i.e., an idiomatic folksonomy rather than a scientific taxonomy.

In this case, the album with the captions was made first and then the photos copied to new albums.

  • Hence these images all bear captions, though in practice that would likely not happen.

An additional challenge is how to represent movement or duration.

  • Long takes or camera movement could be represented by multiple frame grabs.
  • The one-frame-per-shot ratio has the virtue of visualizing the shot count.
  • Whichever choice the author makes, clearly labeling the result helps orient the end-user.

–Edward R. O’Neill, Ph.D.

Visual Annotation & Commentary: Three Vernacular Tools

For a long time, scholarship largely focused on words and numbers.

  • Yes, art historians and theater scholars and radiographers thought a lot about images.
  • But today the visual dimension of knowledge increasingly leaves mere words and numbers in the shadows.
  • Chalk it up to the proliferation of screens–on our desks, on our walls, in our backpacks and pockets–or to whatever you like.

But it is in many ways a welcome change.

Many of us involved in scholarship and teaching spend a lot of time using images: gazing at them, thinking about them, writing about them; but also collecting, organizing, commenting and publishing them.

But how do we do this? Using what kinds of tools?

Those who manage large collections of images have specialized tools. And art historians and film scholars still write (lengthy) prose essays.

But using images to think about images has a special appeal. And tools for making and giving presentations, editing movies, and sharing photos are all easy enough to use to make them good candidates for vernacular scholarship: serious thinking that takes place in popular media.

When thoughtful people take up a medium, they think seriously about genres and forms.

  • Am I writing a novel or a tweet? A memoir or a lab report?
  • Am I drawing a landscape or a portrait? A wall-sized canvas or an ivory engraving?

And critical writing is no different–except that we who do critical writing could really spend more time thinking about genres, especially as we do and encourage critical writing on web pages and through viral videos and as info graphics.

Happily, some critical genres cut across media and can serve us well as we act critically in popular media: annotation and commentary are two crucial genres for critical analysis, and both of them lend themselves to visual media as well.

Both annotation and commentary bear a strong relationship to the text they comment on.

  • Annotation usually implies the presence of the text. An annotated edition is a manuscript that bears the annotations right on or beside the text.
  • Commentary may stand apart from the text it comments upon, but “apart” is often not far.
    • My edition of Hamlet contains some commentary in footnotes, and other commentaries before and after the text itself.
    • DVD (and now Blu-Ray) commentary tracks yoke together a text and a commentary: the two are synchronized.

When we use simple tools to share visual material, and when we try to work critically with these media, what features of the tools are we using? How do we annotate and comment?

I wanted to explore these issues by putting a dozen or twenty of the same images into three different readily accessible tools.

  • iMovie is a popular video editing tool which now costs about $15.
  • Google+ Photos is a service for sharing photo sets or ‘albums’ with a few people or the entire world-wide web.
  • Powerpoint is the ever-present presentation tool.
    • These files may be uploaded to Google Drive and published there.
    • You can also record a voiceover and publish the presentation and voiceover together as a movie. But I skipped this, because I used iMovie to accomplish the same results.

What I Did and Why.

  1. I’m an amateur photographer, and I adore Hollywood glamour portraits of the ’30s and ’40s. I have books full of them, and over time I’ve collected 50 or 80 such images from the web. So that determined my topic: convenience.
  2. I had the files in Dropbox, but I uploaded them to Google+ Photos, since I could organize them in a sequence there. The uploading involved selection.
    • In this case, I intuitively put together images that seemed to me related.
    • I had some notion of comparing images of men and women, so that provided a sort of rule or principle.
    • But as I moved the images around, I found myself pairing them along the lines of similarity and contrast.
  3. As I browsed and sequenced the images, I started formulating my ideas about them.
    • The sequence turned out to involve shades of similarity.
    • I started with one that was highly emblematic of the whole: a kind of titular representation.
    • And then I arranged images of women, followed by men, with sub-similarities.
  4. I downloaded them all from Google+ Photos–simply because they were all in one place and neatly arranged.
  5. For iMovie I drag-and-dropped them onto the timeline. Once there, I composed some voiceover, which I recorded right in the software. I was then able to cut it into bits and slide it here and there to fit the images.

Affordances.

“Affordances” is the fancy word for the features of a tool that let you do certain things.

  • The weight of a hammer determines whether it can tack carpet or crush rocks. You could say the ability to crush something heavy is an “affordance.”
    • The idea is to get away from features and to wonder aloud about what they get you.

iMovie has specific ‘affordances’:

  • It lets you add a voiceover.
  • It lets you add titles over images and between them.
  • It has a ‘Ken Burns effect’ in which still images are zoomed or panned across, to keep some visual interest.
  • And you can choose different transitions between still images (or video clips).

What would I do with these?

  • The voiceover seemed perfect for commentary. I could use the auditory channel for commentary, since the visual channel was largely full of what was being commented on. It was a neat divide.
  • I decided to use the titles to spell out the main topics.
    • Sure they were said out loud. But in some cases, I realized I had not recorded anything announcing the main topic.
    • So the titles became unifying themes that brought together multiple images, as well as the voiceover.
  • The Ken Burns effect is somewhat random in how it pans or zooms.
    • I decided that I could start in close on the visual element being described. Then I would zoom out to see the whole image.
    • So the pattern was to focus on a detail and then reveal its context. I did this with every single image. I decided consistency and repetition would make things easier on the viewer.
  • Finally, iMovie allows a transition that looks like un-focusing and re-focusing. It’s different than a ‘dissolve,’ in which one image slowly replaces another.
    • Since the context was cinematic, I thought the cross-focus transition fit nicely.
    • I used no other transition, as the images are from ‘classical’ Hollywood, and part of that classicism was parsimony: very few effects used carefully. So I wanted to match the material in this regard.

Link to visual commentary example created using iMovie

For Powerpoint, I went a bit further.

  • Powerpoint allows you to use simple, stock visual elements: like arrows.
  • You can record a voiceover, but I decided I had just done that: I would force myself to find a different pathway with Powerpoint.
  • The author can also create specific transitions: one image bumping another off to one side, etc.

I decided the visual logic of a video and a presentation were different.

  • A voice speaking to you over related images is very different than the same images presented without a voice.
  • So I decided I needed to structure my commentary more clearly.
    • Instead of a series of observations, I wanted to show consistency, repeated elements.
  • So I organized the images a bit differently.
    • And I tried to make very clear themes with sub-elements.
  • The images sat to one side–the right–and the themes and sub-themes were spelled out on the left.
    • First the viewer sees the image.
    • This way you get to see it with your own eyes.
    • The next slide spells out the theme and sub-themes: in this case, the effect the photo produces, and how it’s produced, the techniques.
  • Finally, I decided to use those simple stock visual elements:
    • I put arrows connecting the techniques to a specific place on each image.

To publish the presentation, I uploaded it to Google Drive.

  • Google Drive can then autoplay, and it lets the user choose a smaller number of transitions.
    • I chose a fairly slow pace, to give the viewer time to look and read.
    • By using a transition in which one image instantly replaces the next, my themes and sub-themes suddenly appear, and so do the arrows.
      • There is an animation-like effect.

Watch the Powerpoint-made presentation in a separate window here.

Finally, for the Google+ Photo album, I used the feature of ‘captions.’

  • Each photo can have a bit of explanation about it.
  • So I elaborated on my voiceover text here. There’s a little more space, so I could add some extra detail.
  • The casual browser might read these or not. So I tried to write them to reward reading.

In short, for this tool, I was relying largely on sequence.

  • Google+ Photos does let you edit the images. I could have emphasized some visual characteristics. But I opted for restraint. Let the images speak for themselves, and let my voice be softer, less obtrusive.

Going to Picasaweb.google.com lets you find code to embed a slideshow. (Somehow Google+ users don’t rate access to this feature.)

Hollywood Publicity Portraits of the 1930s & 1940s

And there’s a more static embedded version.

Both draw on the original photo set.

--Edward R. O'Neill

Three Questions, One Answer

In a recent blog post, Doug McKee addresses three questions about how he should teach this fall.

1. Should I ban laptops in lecture?
2. Should I make discussion sections mandatory?
3. Should I cold-call students during lecture?

Basically: no, no and no—all for the same reasons.

1. Should you "ban" anything in lecture?
Or rather: were you to try, what would be the justification?
In teaching we do things for very few reasons.

a. Because they are inherent in the discipline and academic life. "We're reading Durkheim because he helped to found the discipline." "We'll use APA style because that's what professionals do." "You must offer arguments, not opinions, because in our domain, opinions have no value."

b. Because they are convenient. "We need to get all your papers at once so we can compare them and grade them before the next work is due."

c. Because they adhere to university policies and laws. "No smoking in the back row." "Grades are due on the 11th." "No sexual harassment."

d. Because they embody our values about human freedom and responsibility. "You must take up your own argumentative position." "You may turn in the work late, but it will be marked down." "Write about the one topic on the list that interests you most." Pursue your freedom. Experiment. Explore. Fail. But take on the responsibility of existing and choosing.

(I can't think of many other justifications for why we do this, that or the other in teaching.)

And all of these questions are open to reasoned debate—because that is one of our values.

Once you say "You will not open your laptops," you are dictating. And you have lost. Now you are a cop, not a teacher.

Practically speaking, I know professors who have had good luck with the "three states": put your laptops away and focus on this (discuss with a peer, whatever); open your laptops and do this specific task; leave your laptop open or put it away—I don't care, just don't distract your neighbor.

You can also play with the sequence. If you ask them to use it, then to close it, the act of opening it may be more self-aware.

2. See "1" above.

a. What does "mandatory" mean? Again, from my perspective this is the wrong relation to the student.

We can mandate little in teaching. Rather, we reward and we punish. (Behavioral economics and game theory surely apply here, though I fear those theories have no moral code embedded in them; they may be useful tools, but they are not arbiters.)

Extrinsic rewards don't motivate learning very well. So you can reward and punish for attending or not. But neither will help students learn.

Why not go the other way? "Go to section, don't. No points for it. Go if you value it. And we'll try to make it valuable." Ask every week how section could be better. Make it a discussion topic in the web site. When you can't decide in advance, make it a learning experience.

Hence...

b. One good principle in planning teaching is: treat all questions about teaching as something to be proven experimentally by teaching.

Reframe the issue as: What could I learn about making the section worth going to?

Survey students weekly: did you go or not, and why? Ask the section leaders to experiment, to explore how best to meet the students' needs. Maybe the first few weeks the sections would have different specific activities that students rated, and thereafter, students chose "which activity should we do today?" Make it their section. Meet their needs.

Or just put super-important things in section. Sell how great section will be, and then say "of course it's totally optional."

3. See "1" above.

a. They are coming to lecture to learn. Would you pick on someone for not having understood the material as well as someone else? That person needs more help, not public shaming.

b. I tried this once. I would never do it again.

I once put the students' names on index cards. I shuffled them and picked one at random.

Once the index cards came out, students sat up straight in their chairs.

I called a name, and the student stammered and hemmed and hawed. Other students tried to rescue those I called on—defended them.

One student shot his hand up later, after not having known the answer to an earlier question, and after class explained to me: "I knew the answer, I just couldn't think of it, and so I had to show you that I'd done the reading."

And I thought: who am I? To make someone prove a point to me?

After that I brought out the index cards and put them on the desk. They were radioactive. Students would stare at them. If no one answered a question, I moved towards the cards, and a voice would ring out with something to say.

It wasn't motivated by anything good, but I got good discussions. Not because I randomly called on students all the time, as a policy, but because I made the point that we needed to discuss and that I would do what it took to make that happen. They didn't want that.

But I would never teach that way again.

4. Learning devolves on human agency.

Agency is the center of learning. Through learning, I become more capable, and I feel myself to be more and more of an agent, less and less of a passive, receptive entity and more and more myself.

Humans become more capable by overcoming meaningful challenges in an increasing order of difficulty, a difficulty matched to their abilities. (It's tragedy when someone is outstripped by the task he faces; tragedy defines common humanity by contrast.)

Anything that takes away from the agency of the learner is bad for learning.

Yes, we need rules and limits.

But when possible, all meaningful choices should be passed to the student.

To experience one's humanity through the responsibility of choice, to embrace the possibility of failure, and to own one's successes: this is the heart of education.

— Edward R. O'Neill, Ph.D.

ITG Helps with a Creative Classroom

We're glad to see Professor Elihu Rubin’s thoughtful use of technology in his pedagogy getting some notice. Late in the spring, Professor Rubin's work on Interactive Crown Street made some news, and a couple weeks back (don't ask us how we missed it) an item appeared in Yale News about his investigation with students into New Haven's infrastructure. Professor Rubin and the students in the cross-listed Architecture and Political Science course created an online guide using Yale's Academic Commons, an instance of WordPress set up and managed by the Instructional Technology Group. Pam Patterson of ITG as well as Ed Kairiss and Edward O'Neill of Educational Technologies supported the course.

DHSI 2014: Data Mining for Digital Humanists

Digital Humanities has become quite the buzzword of the academy in the last few years as the community recognizes the new areas of inquiry opened by this field and methodology. In order to explore this area further, I am attending the Digital Humanities Summer Institute (DHSI) at the University of Victoria. It has been quite the whirlwind! Over 600 people have congregated to learn, share and make together over a week. A wide range of courses is offered in areas such as TEI, GIS, networks, mapping, pedagogy, gaming and project management. I enrolled in Data Mining for Humanists.

The course has been exciting and intense. We are rapidly exploring data mining techniques such as Bayesian classification and support vector machines. The instructor has paired this with a crash course in probability that has been key to understanding the probabilistic approaches such as naive Bayes. The only drawback is that we aren’t programming along the way, which makes it difficult to move from the abstract to the hands-on. I hope we will work more closely with the scikit-learn Python package we were asked to install before attending, as actually working through some data will help solidify the concepts.
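To give a feel for the naive Bayes idea the course covers, here is a toy text classifier in plain Python. This is my own minimal sketch, not course material: the documents and labels are invented for illustration, and a real project would use scikit-learn's ready-made classifiers instead.

```python
import math
from collections import Counter, defaultdict

# Invented toy corpus: (document words, label).
train = [
    ("portrait glamour studio lighting".split(), "hollywood"),
    ("studio star glamour pose".split(), "hollywood"),
    ("syllabus lecture section exam".split(), "teaching"),
    ("lecture discussion exam grading".split(), "teaching"),
]

def fit(docs):
    """Count class and per-class word frequencies for multinomial naive Bayes."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(words, class_counts, word_counts, vocab):
    """Pick the class maximizing log P(class) + sum of log P(word|class),
    with Laplace (add-one) smoothing so unseen words don't zero things out."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

class_counts, word_counts, vocab = fit(train)
print(predict("glamour studio".split(), class_counts, word_counts, vocab))
```

The "naive" part is the assumption that words occur independently given the class; it is rarely true, but the classifier often works well anyway.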

On a side note, I began using IPython Notebook, which runs locally on your computer but is used through your browser. It allows you to easily edit and run code and plot the results. You can also share your notebooks easily. If you are using Python, I suggest exploring it!
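As a tiny illustration of that edit-and-run cycle, a notebook cell might look like the snippet below. The plotting lines are commented out so it also runs outside a notebook without matplotlib installed; inside a notebook, uncommenting them renders the chart inline under the cell.

```python
# A cell like this can be edited and re-run in place in IPython Notebook.
squares = [n ** 2 for n in range(10)]
print(squares)

# With matplotlib installed, two more lines render a plot inline:
# import matplotlib.pyplot as plt
# plt.plot(squares); plt.show()
```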