Professors like to talk about how students use laptops in the classroom. Two recent studies shed new light on this issue, and what sets them apart is how they obtained their data.
There is one notable consistency that spans the literature on laptops in class: most researchers obtained their data by surveying students and professors.
The authors of these two studies decided that relying on student and professor testimony would not do. They decided instead to spy on students.
In one study, a St. John’s University law professor hired research assistants to peek over students’ shoulders from the back of the lecture hall. In the other, a pair of University of Vermont business professors used computer spyware to monitor their students’ browsing activities during lectures.
The authors of both papers acknowledged that their respective studies had plenty of flaws (including possibly understating the extent of non-class use). But they also suggested that neither sweeping bans nor unalloyed permissions reflect the nuances of how laptops affect student behavior in class. And by contrasting data collected through surveys with data obtained through more sophisticated means, the Vermont professors also show why professors should be skeptical of previous studies that rely on self-reporting from students — which is to say, most of them.
Beyond their usefulness for dealing with the growing use of laptops in classrooms, these studies raise interesting questions about the data itself. A few come to mind:
1. What discussions took place with an IRB? Consent seems to have been a sticking point in the spyware study: only 46% of students agreed to have the monitoring software on their computers, which limits the generalizability of the data. The observational study could also run into issues if students were identifiable. (Just a thought: could a professor insist on spyware as a condition of bringing a laptop to class?)
2. These studies get at the disparities between self-reported data and other forms of data collection. My guess is that students underreport their off-task laptop use on surveys because they suspect that is the answer they are supposed to give (social desirability bias). But the gap could also reveal how little computer/Internet users realize about how many windows and applications they actually cycle through. (A toy sketch of what this comparison might look like follows the list.)
3. Both of these studies are relatively small in scale: one had 45 students; the other had a little more than 1,000, but its data were "less precise" since they came from research assistants sitting in the back monitoring students. Expanding the Vermont approach and linking laptop use to outcomes on a larger scale would be even better: move beyond the classroom experience and look at the impact on learning outcomes. Why doesn't someone do this on a larger scale and in multiple settings? Would it be too difficult to get past the IRB issues?
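On point 2, the comparison is easy to picture in code. Here is a minimal sketch in Python, with entirely invented numbers (neither paper's dataset is reproduced here), of how pairing each student's survey answer with logged minutes would expose the under-reporting:

```python
# Toy illustration of the self-report gap in point 2. All numbers are
# invented; the real studies' data are not reproduced here.
from statistics import mean

# (self_reported_minutes, logged_minutes) of off-task use per student
students = [
    (10, 25), (5, 18), (0, 12), (15, 30), (8, 22),
    (20, 35), (2, 10), (12, 28), (6, 15), (9, 24),
]

self_reported = [s for s, _ in students]
logged = [l for _, l in students]
gaps = [l - s for s, l in students]  # positive = under-reported

print(f"Mean self-reported: {mean(self_reported):.1f} min")
print(f"Mean logged:        {mean(logged):.1f} min")
print(f"Mean gap (logged - reported): {mean(gaps):.1f} min")

# How many students reported less use than the logs captured -- a crude
# signal of social desirability bias in the survey answers.
under = sum(1 for g in gaps if g > 0)
print(f"Under-reported: {under} of {len(gaps)} students")
```

With real data, one would also want to check whether consent (the 46% in point 1) correlates with usage, since heavy off-task users may be the least likely to opt in.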
Looking at the comments on this story, it seems that better data on this topic would go a long way toward moving the discussion beyond anecdotal evidence.