Grit. It’s the not-so-new thing in education that has nevertheless become a current watchword, shorthand for how much students persevere and stay on task. What exactly is it? Is it related to a student’s character? Can it be taught? If so, how? Should it be taught? Does it always produce positive results for students? Can it be measured in any meaningful way?

These questions have been part of the public education discussion for years (so much so that back in 2012 I published a post titled “Sick of grit already”). Yet there is no consensus on the big questions surrounding “grit.” That, however, is not stopping the U.S. government from collecting data from students about their individual “grit” levels. How? By asking them to rate themselves. But are students good judges of their own abilities in this regard?

[Ten concerns about the let’s-teach-them-grit fad]

The National Assessment of Educational Progress, better known as NAEP and long called the nation’s report card because it is the largest nationally representative and continuing assessment of America’s students, will start amassing data on students’ levels of “grit” in 2017. NAEP is the legal responsibility of the U.S. Commissioner of Education Statistics, who heads the National Center for Education Statistics in the U.S. Department of Education. The U.S. education secretary appoints the National Assessment Governing Board, which sets policy for NAEP independent of the department and works with contractors to create and administer the tests.

[Why teaching students to have ‘grit’ isn’t always a good thing]

NAEP tests national samples of students in grades 4 and 8 — and sometimes grade 12 — in reading and math every two years, and in history, science, civics and other subjects every few years. For decades, it has asked students on background surveys to self-report on various topics, including their reading habits and the time they spend watching television. Now it will add “grit” and “desire for learning” to the list. The agenda of a May meeting of the NAEP governing board’s Reporting and Dissemination Committee said in part:

R&D will have reviewed the core contextual modules three times before any are included in the 2017 NAEP operational administration. These proposed modules include the following: (1) socio-economic status; (2) technology use; (3) school climate; (4) grit; and (5) desire for learning. The Committee’s first review occurred in August 2014, as part of the board meeting. In reviewing the feedback from that session, the overall focus of the comments seemed to lie in ensuring that the questions are inclusive, accessible, and more positive.

According to Education Week:

The background survey will include five core areas—grit, desire for learning, school climate, technology use, and socioeconomic status—of which the first two focus on a student’s noncognitive skills, and the third looks at noncognitive factors in the school. These core areas would be part of the background survey for all NAEP test-takers. In addition, questions about other noncognitive factors, such as self-efficacy and personal achievement goals, may be included on questionnaires for specific subjects to create content-area measures, according to Jonas P. Bertling, ETS director for NAEP survey questionnaires.

Diane Ravitch, on her blog, offers this:

Will we someday know which states and cities have students with the most grit? And once we know, will officials create courses in how to improve grit?

I am reminded of a strange finding that emerged from international background questions two decades ago. Students were asked if they were good in math. Students in nations with the highest test scores said they were not very good in math; students in nations where test scores were middling thought they were really good at math.

What does it all mean? I don’t know, but it satisfies someone’s need for more data.

[Sick of grit]

It is worth noting a 2015 essay on this subject co-written by Angela L. Duckworth, the University of Pennsylvania researcher who popularized “grit.” Titled “Measurement Matters: Assessing Personal Qualities Other Than Cognitive Ability for Educational Purposes,” it says in part:

In recent years, scholars, practitioners, and the lay public have grown increasingly interested in measuring and changing attributes other than cognitive ability (Heckman & Kautz, 2014a; Levin, 2013; Naemi, Burrus, Kyllonen, & Roberts, 2012; Stecher & Hamilton, 2014; Tough, 2013; Willingham, 1985). These so-called noncognitive qualities are diverse and collectively facilitate goal-directed effort (e.g., grit, self-control, growth mind-set), healthy social relationships (e.g., gratitude, emotional intelligence, social belonging), and sound judgment and decision making (e.g., curiosity, open-mindedness). Longitudinal research has confirmed such qualities powerfully predict academic, economic, social, psychological, and physical well-being (Almlund, Duckworth, Heckman, & Kautz, 2011; Borghans, Duckworth, Heckman, & ter Weel, 2008; Farrington et al., 2012; J. Jackson, Connolly, Garrison, Levin, & Connolly, 2015; Moffitt et al., 2011; Naemi et al., 2012; Yeager & Walton, 2011).

We share this more expansive view of student competence and well-being, but we also believe that enthusiasm for these factors should be tempered with appreciation for the many limitations of currently available measures. In this essay, our claim is not that everything that counts can be counted or that everything that can be counted counts. Rather, we argue that the field urgently requires much greater clarity about how well, at present, it is able to count some of the things that count.

She and her co-author, David Scott Yeager, also note the limitations of asking students to rate their own levels of noncognitive factors:

Reference bias is apparent in the PISA (Program for International Student Assessment). Within-country analyses of the PISA show the expected positive association between self-reported conscientiousness and academic performance, but between-country analyses suggest that countries with higher conscientiousness ratings actually perform worse on math and reading tests (Kyllonen & Bertling, 2013). Norms for judging behavior can also vary across schools within the same country: Students attending middle schools with higher admissions standards and test scores rate themselves lower in self-control (Goldman, 2006; M. West, personal communication, March 17, 2015). Likewise, KIPP charter school students report spending more time on homework each night than students at matched control schools, and they earn higher standardized achievement test scores—but score no higher on self-report questionnaire items such as “Went to all of your classes prepared” (Tuttle et al., 2013). Dobbie and Fryer (2013) report a similar finding for graduates of the Harlem Children’s Zone charter school. There can even be reference bias among students in different grade levels within the same school. Seniors in one study rated themselves higher in grit than did juniors in the same high school, but the exact opposite pattern was obtained in performance tasks of persistence (Egalite, Mills, & Greene, 2014).

Now NAEP will add to the mountain range of data being collected on America’s students. To what end, exactly? Who knows.