Studies suggest texting in class is related to lower grades, GPA

Several studies in recent years have examined the link between students' in-class texting and Facebook use and their grades. The Chicago Tribune summarizes the studies:

In the past five years researchers have published the results of five surveys and experiments that link texting and Facebooking with lower academic performance. In 2011, researchers at California State University reported that students who received or sent a high number of text messages during a video recorded lecture scored worse on a quiz than those who received or sent few or no text messages.

In a 2012 study, a researcher at Lock Haven University in Pennsylvania surveyed 1,800 students about how often they Facebook, instant message, email, text, search online and talk on the phone in class. Among the results: 69 percent of students reported they had texted in class, and students who texted or used Facebook more frequently in class had lower overall semester GPAs. The author of that study, Reynol Junco, also co-wrote a study that linked texting and Facebooking during study time with lower GPAs…

“That I can be definitive about: That’s not working. If you’re going to search (online) during class, I don’t have any data telling you to stop. If you’re going to email during class, I don’t have any data to tell you to stop. But do not text or Facebook during class. Do not text or Facebook while you’re studying for your classes, because that’s another area where this is definitely a negative.”…

Junco’s evidence against texting and Facebooking is correlational, meaning that his studies show that students who Facebook or text more in class or while studying do worse academically, but not that the texting or Facebooking itself is causing the problem. It’s possible that, say, an easily distractible student is texting a lot and doing poorly in class, with the underlying cause of poor performance being the distractibility, not the texting. But Junco points to two other college studies in which researchers, not students, largely determined which students would do the most media multitasking in class.

In both the studies — the California State study and one published in 2012 in the journal Computers and Education — a form of media multitasking (texting or Facebooking) was linked to lower student performance.

It would be interesting to pair data like this with students' perceptions of whether they are doing better or worse in a class because they are texting or browsing. It would be one thing if students knew that texting was distracting and did it anyway, and quite another if they were so used to texting that they were unaware of its possible effects.

Let’s say future studies more clearly establish a causal link. Would colleges then move to banning cell phone use or Facebooking in class? Or is this one of those areas that would generate a lot of negative feedback from students who would want the freedom to do more of what they want in class?

Website of the day: gradeinflation.com

Perhaps it is finals week that piqued my interest in this particular website: gradeinflation.com. There is a lot of fascinating information on this site about college grading trends in recent decades. Yes, my own institution is represented on the site.

If this puts you in the grading spirit, you can try out The Grading Game app which one Wired reviewer liked:

I’m frankly surprised by how much I like The Grading Game. It is ultimately about grading papers and looking for spelling errors, but somehow the intense time limit, scoring mechanics and various modes wrapped around that seemingly bland premise make the game super addictive. And, as someone who does a great degree of text-editing, I suspect that this simple iPhone app is making me better at my job.

Not quite the same experience, but it is an attempt to put grading through the gamification process.

Most common college grade: A or A-

Here is some data on college grades and how the modal letter grade has risen to an A:

In 1960, the average undergraduate grade awarded in the College of Liberal Arts at the University of Minnesota was 2.27 on a four-point scale.  In other words, the average letter grade at the University of Minnesota in the early 1960s was about a C+, and that was consistent with average grades at other colleges and universities in that era.  In fact, that average grade of C+ (2.30-2.35 on a 4-point scale) had been pretty stable at America’s colleges going all the way back to the 1920s (see chart above from GradeInflation.com, a website maintained by Stuart Rojstaczer, a retired Duke University professor who has tirelessly crusaded for several decades against “grade inflation” at U.S. universities). By 2006, the average GPA at public universities in the U.S. had risen to 3.01 and at private universities to 3.30.  That means that the average GPA at public universities in 2006 was equivalent to a letter grade of B, and at private universities a B+, and it’s likely that grades and GPAs have continued to inflate over the last six years…
National studies and surveys suggest that college students now get more A’s than any other grade even though they spend less time studying. Cramer’s solution — to tack onto every transcript the percentage of students that also got that grade — has split the faculty and highlighted how tricky it can be to define, much less combat, grade inflation.”…
Last year, Professor Rojstaczer and co-author Christopher Healy published a research article in the Teachers College Record titled “Where A Is Ordinary: The Evolution of American College and University Grading, 1940–2009.” The main conclusion of the paper appears below (emphasis added), and is illustrated by the chart below showing the rising share of A letter grades over time at American colleges, from 15% in 1940 to 43% by 2008. Starting in about 1998, the letter grade A became the most common college grade.
“Conclusion: Across a wide range of schools, As represent 43% of all letter grades, an increase of 28 percentage points since 1960 and 12 percentage points since 1988. Ds and Fs total typically less than 10% of all letter grades. Private colleges and universities give, on average, significantly more As and Bs combined than public institutions with equal student selectivity. Southern schools grade more harshly than those in other regions, and science and engineering-focused schools grade more stringently than those emphasizing the liberal arts. It is likely that at many selective and highly selective schools, undergraduate GPAs are now so saturated at the high end that they have little use as a motivator of students and as an evaluation tool for graduate and professional schools and employers.”

This is quite an increase, particularly as more Americans started attending college in this period. What does this do in the long run for credentialism, the idea that employers and others can gauge the competence, skills, and work ethic of people by knowing whether they have a college degree? Are employers and students looking for other ways to differentiate among graduates?

Seeing the data by discipline (and not just broad categories) would be particularly fascinating.

Something to note about grade data: good grades can only pull the average up so far, since they top out at 4.0. The rising average is therefore partly due to more good grades being handed out, but also partly due to fewer bad grades being assigned; with the mean already near 3.0, a D or F drags it down much further than an A lifts it up. Note the last chart: about 78% of grades are either As or Bs, suggesting that students have to work at earning grades below that.
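To make the arithmetic concrete, here is a minimal sketch in Python using a made-up grade distribution (the shares below are assumptions for illustration, not figures from gradeinflation.com) showing how trimming the bottom of the distribution moves the mean more than shifting Bs to As does:

```python
# Hypothetical grade distributions on a 4.0 scale; shares are invented for illustration.
# Each pair is (grade points, share of all grades awarded).

def class_mean(distribution):
    """Weighted mean GPA for a list of (points, share) pairs."""
    return sum(points * share for points, share in distribution)

# Baseline: a spread of grades that still includes some Ds and Fs.
baseline  = [(4.0, 0.25), (3.0, 0.35), (2.0, 0.25), (1.0, 0.10), (0.0, 0.05)]

# Scenario 1: shift 10% of grades from B to A (more good grades).
more_as   = [(4.0, 0.35), (3.0, 0.25), (2.0, 0.25), (1.0, 0.10), (0.0, 0.05)]

# Scenario 2: move the Ds and Fs up to Cs instead (fewer bad grades).
fewer_dfs = [(4.0, 0.25), (3.0, 0.35), (2.0, 0.40), (1.0, 0.00), (0.0, 0.00)]

print(round(class_mean(baseline), 2))   # 2.65
print(round(class_mean(more_as), 2))    # 2.75 -- a 0.10 bump from extra As
print(round(class_mean(fewer_dfs), 2))  # 2.85 -- a 0.20 bump from dropping Ds and Fs
```

Turning an F into a C adds two full grade points per student while turning a B into an A adds only one, which is why the disappearance of low grades does so much of the work in a rising average.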

h/t Instapundit

An example of statistics in action: measuring faculty performance by the grades students receive in subsequent courses

Assessment, whether of student or faculty outcomes, is a great area in which to find examples of statistics. This example comes from a discussion of assessing faculty by looking at how students do in subsequent courses:

[A]lmost no colleges systematically analyze students’ performance across course sequences.

That may be a lost opportunity. If colleges looked carefully at students’ performance in (for example) Calculus II courses, some scholars say, they could harvest vital information about the Calculus I sections where the students were originally trained. Which Calculus I instructors are strongest? Which kinds of homework and classroom design are most effective? Are some professors inflating grades?

Analyzing subsequent-course preparedness “is going to give you a much, much more-reliable signal of quality than traditional course-evaluation forms,” says Bruce A. Weinberg, an associate professor of economics at Ohio State University who recently scrutinized more than 14,000 students’ performance across course sequences in his department.

Other scholars, however, contend that it is not so easy to play this game. In practice, they say, course-sequence data are almost impossible to analyze. Dozens of confounding variables can cloud the picture. If the best-prepared students in a Spanish II course come from the Spanish I section that met at 8 a.m., is that because that section had the best instructor, or is it because the kind of student who is willing to wake up at dawn is also the kind of student who is likely to be academically strong?

It sounds like the grade data needed for this sort of analysis would not be difficult to gather. The hard part is making sure the analysis accounts for all of the potentially relevant factors, the "confounding variables," that could influence student performance.
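As a rough illustration of what such an analysis might look like, here is a minimal sketch in Python using pandas and statsmodels; the data file and column names (prior GPA, SAT math score, section meeting time) are hypothetical stand-ins for the kinds of confounders the article mentions, not details drawn from the studies themselves:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student who completed the Calc I -> Calc II sequence.
# The file name and column names are assumptions for illustration.
df = pd.read_csv("calc_sequence.csv")

# Regress the Calc II grade on which Calc I instructor the student had,
# controlling for a few observable confounders: prior GPA, SAT math score,
# and the meeting time of the Calc I section (the 8 a.m. problem).
model = smf.ols(
    "calc2_grade ~ C(calc1_instructor) + prior_gpa + sat_math + C(calc1_meeting_time)",
    data=df,
).fit()

# Each instructor coefficient estimates how that instructor's students fare in the
# follow-on course relative to a baseline instructor, after adjusting for the controls.
# Anything unmeasured (motivation, who self-selects into early sections) can still bias it.
print(model.summary())
```

Even a model like this only adjusts for what is measured, which is why the limited-choice setting described next is so attractive for this kind of question.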

One way to limit these issues is to limit student choice regarding sections and instructors. Interestingly, the article cites studies done at the Air Force Academy, where students don't have many options in the Calculus I-II sequence. In summary, this setting means "the Air Force Academy [is] a beautifully sterile environment for studying course sequences."

Some interesting findings from both the Air Force Academy and Duke: students who took introductory or earlier classes that they considered more difficult or more stringently graded did better in subsequent courses.

Intentional grade inflation

A story in the NY Times describes how at least 10 law schools have deliberately made their grades more lenient. The reason? To have their students appear more attractive in a weak job market.

[Loyola Law School Los Angeles] is retroactively inflating its grades, tacking on 0.333 to every grade recorded in the last few years. The goal is to make its students look more attractive in a competitive job market.

In the last two years, at least 10 law schools have deliberately changed their grading systems to make them more lenient. These include law schools like New York University and Georgetown, as well as Golden Gate University and Tulane University, which just announced the change this month. Some recruiters at law firms keep track of these changes and consider them when interviewing, and some do not.

The article also discusses other interesting measures including abandoning traditional grades and paying students to take unpaid internships.