New phone, new laptop, and disoriented

I had the opportunity this past week to replace both my smartphone and my work laptop. One had to happen, the other was a scheduled replacement.


Both experiences were disorienting in multiple ways. Each required time I did not necessarily have in my schedule to acquire the new device and set it up in ways consistent with the old one. Because they are newer machines, they have some new options to consider. And during parts of the setup, I was temporarily without access to each device and to everything it provides access to.

Even so, the transition process went rather smoothly. Copying over contacts, apps, and files just took some time. I had to tweak a few settings, but the new devices now look and operate much like what I was used to before (with some nice upgrades).

These are not just machines. For many daily tasks, they are extensions of my self. They enable my work and embody my work. They are distributed cognition devices – extending my ability to think, reason, and write – and portals to interactions with people and systems. Having them altered or unavailable, even for a short time, shakes up my day.

Ultimately, I am glad to have the new devices. My daily activities are back on track. Almost all of the wrinkles of adjusting to new machines have been smoothed out. And I hope I do not have to do this again for a while.

Getting better data on how students use laptops in class: spy on them

Professors like to talk about how students use laptops in the classroom. Two recent studies shed new light on this issue, and they are unique in how the researchers obtained their data: they spied on students.

Still, there is one notable consistency that spans the literature on laptops in class: most researchers obtained their data by surveying students and professors.

The authors of two recent studies of laptops and classroom learning decided that relying on student and professor testimony would not do. They decided instead to spy on students.

In one study, a St. John’s University law professor hired research assistants to peek over students’ shoulders from the back of the lecture hall. In the other, a pair of University of Vermont business professors used computer spyware to monitor their students’ browsing activities during lectures.

The authors of both papers acknowledged that their respective studies had plenty of flaws (including possibly understating the extent of non-class use). But they also suggested that neither sweeping bans nor unalloyed permissions reflect the nuances of how laptops affect student behavior in class. And by contrasting data collected through surveys with data obtained through more sophisticated means, the Vermont professors also show why professors should be skeptical of previous studies that rely on self-reporting from students — which is to say, most of them.

While these studies might be useful for deciding how to handle the growing use of laptops in classrooms, the data collection itself is worth discussing. A few questions come to mind:

1. What discussions took place with an IRB? It seems that consent might have been a problem in the study using spyware on student computers, and this was reflected in the generalizability of the data: just 46% of students agreed to have the spyware on their computers. The other study could also run into issues if students were identifiable. (Just a thought: could a professor insist on spyware being on student computers if the students insisted on having a laptop in class?)

2. These studies get at the disparities between self-reported data and other forms of data collection. I would guess that students underestimate their distracted laptop use on self-reported surveys because they suspect that this is the answer they should give (social desirability bias). But the gap could also reveal how cognizant computer/Internet users are of how many windows and applications they actually cycle through.

3. Both of these studies are on a relatively small scale: one had 45 students, the other a little more than 1,000, though that data was “less precise” since it relied on research assistants sitting in the back monitoring students. Expanding the Vermont study and linking laptop use to learning outcomes would be even better: move beyond just talking about the classroom experience and look at the impact on learning itself. Why doesn’t someone do this on a larger scale and in multiple settings? Would it be too difficult to get past some of the IRB issues?

In looking at the comments on this story, it seems that better data on this topic would go a long way toward moving the discussion beyond anecdotal evidence.