GPA or SAT? Two Measures Are Better Than One 

At a time when only 41 percent of college students graduate in four years—and only 56 percent in five years—colleges and universities across the country are phasing out the only truly objective measure of academic excellence and student success in the application process: standardized tests.

Next month, for example, the University of North Carolina Board of Governors (BOG) is set to vote on a policy that would significantly diminish the role of test scores in the admissions process.

To do so, however, would be a blow against academic standards for the 16 UNC schools.

Currently, in order even to be considered for admission at any UNC institution, applicants must have a minimum GPA of 2.5 and an SAT score of 880 or an ACT score of 17. (Meeting the minimum standards does not guarantee students admission to any of the 16 UNC institutions.) Those standards, however, might be revised by a BOG vote.

The proposed revisions are subtle, but significant: instead of requiring GPA and test scores, the new policy would require a minimum GPA of 2.5 or an SAT score of 1010 (or ACT score of 19).

The proposed policy comes as a controversial pilot program nears its conclusion. In 2014, the board passed a resolution to establish a program to test whether students’ GPA was a better predictor of academic success than standardized test scores. Three UNC system schools participated in the pilot study: Elizabeth City State University (ECSU), Fayetteville State University (FSU), and North Carolina Central University (NCCU).

Students were admitted to the pilot program based on a sliding scale that weighted GPA more heavily than SAT scores, though test scores could not fall below 750. For each 10-point decline in SAT score below the standard minimum, a pilot student’s GPA had to be 0.1 higher to remain eligible. There are now five cohorts totaling almost 1,100 students; the last cohort will be admitted this fall.
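The sliding scale is simple arithmetic, and a short eligibility check makes it concrete. The sketch below is purely illustrative: the 880 SAT / 2.5 GPA baseline and the 750 floor come from the standards described in this article, but the function name, the rounding, and the exact step rule are my own assumptions, not the system’s actual policy language.

```python
def pilot_eligible(gpa: float, sat: int,
                   base_sat: int = 880, base_gpa: float = 2.5,
                   sat_floor: int = 750) -> bool:
    """Illustrative sketch of the pilot's sliding scale: for every
    10-point SAT shortfall below the 880 baseline, the GPA needed
    for eligibility rises by 0.1. Scores below 750 are ineligible
    regardless of GPA."""
    if sat < sat_floor:
        return False
    if sat >= base_sat:
        return gpa >= base_gpa
    shortfall_steps = (base_sat - sat) / 10   # number of 10-point drops
    required_gpa = round(base_gpa + 0.1 * shortfall_steps, 2)
    return gpa >= required_gpa

# Under this sketch, an SAT of 830 (50 points below the baseline)
# would require a GPA of at least 3.0 rather than 2.5.
```

The point of the trade-off is visible in the arithmetic: a lower test score is tolerated only when offset by a proportionally stronger high school record.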

The pilot was set to last three years, from fall 2015 through fall 2017, but in 2018 the board voted to extend it another three years (through fall 2020). Several board members opposed the extension, however. During a full board meeting on May 24, 2018, board member Steve Long expressed concern that the pilot program was a “whittling away of our minimum admission requirements.”

But system employees have assured board members that the program, and any resulting policy recommendations, are not intended to lower academic standards. Indeed, nearly two years later, at a January meeting of the Educational Planning, Policies, and Programs Committee, Kimberly van Noort, senior vice president for academic affairs and chief academic officer for the UNC System, emphasized that the new policy would not lower standards. The proposal, she said, is “simply a modification to allow more flexibility for the institutions.”

During the meeting, van Noort and her colleagues discussed the pilot’s findings—which were compiled in a report presented to the committee. According to the report, the academic outcomes of the pilot students were very similar to those of the non-pilot students. Both groups had similar GPAs of 3.2, had similar retention rates, and completed a similar number of credit hours.

On the face of it, the pilot’s findings seem to justify modifying the system’s minimum admissions requirements. Why require students to meet minimum standards for both GPA and test scores when the pilot students—who didn’t meet minimum testing requirements—did just as well as the rest of the student body?

Furthermore, the system is not (at the moment) proposing a “test-optional” policy. Every student who is eligible to apply based on his or her GPA must still submit his or her test scores. As committee chair Anna Nelson noted: “[The SAT or ACT] remains a tool in the admission officer’s tool belt” when making admissions decisions. The only difference is that students won’t have to meet a baseline test score to be eligible to apply. If there were any negative student outcomes in the future, administrators would be able to analyze whether there was any connection to low test scores.

Even so, several compelling reasons should stop the board from approving the policy recommendation.

Meeting a low bar doesn’t indicate success

The fact that the pilot students had academic outcomes similar to those of the rest of the student bodies at each university does not necessarily indicate success. That’s because the universities themselves are largely failing to graduate their students: each of the three universities that participated in the pilot has a dismal graduation rate.

In fact, between 2014 and 2015, ECSU and FSU’s graduation rates decreased: ECSU went from 20.6 percent in 2014 to 19.4 percent in 2015 while FSU went from 22.7 percent in 2014 to 21.4 percent in 2015. The graph below compares the pilot students’ graduation rates with non-pilot students at each school (a comparison of graduation rates was not included in the report presented to the board in January):

The fact that the pilot students performed similarly to the regular student bodies shouldn’t be a cause for celebration. If system officials wanted to measure the predictive power of GPA over test scores, why didn’t they conduct the pilot at higher-ranked UNC schools where the majority of students do graduate? Running the pilot at schools where graduation rates are already very low virtually guarantees that pilot students will not perform worse than an already failing student body.

The academic performance of the three schools points to a need for stricter standards, not more “flexible” ones. Indeed, the poor graduation rates suggest that admissions officers at those universities have not exercised enough discretion in ensuring that the students they admit are academically prepared. If the schools struggle to graduate students under the current minimum standards, why give them even more latitude to exercise their misjudgment?

The reality is that schools face a number of perverse incentives to boost enrollment. For one, more enrollment means more funding from the state. For another, increasing rural and low-income enrollment makes the schools appear more diverse, a label that schools will go to great lengths to attain.

The UNC system also puts a great deal of pressure on universities to meet lofty enrollment goals as part of its five-year strategic plan. For example, by 2021, FSU is expected to increase low-income enrollments by 11.2 percent. ECSU is expected to increase rural enrollments by 63.2 percent. Since low-income and minority students were heavily represented in the pilot group, a more flexible admissions policy would help the system meet its enrollment goals.

But the new policy—although it might make the schools look more diverse—would likely hurt the low-income and minority students they claim to serve. That’s because low-income and minority students often graduate at lower rates than the rest of a given student body. At ECSU, for example, lower-income Pell Grant students had a 2015 graduation rate of 17.2 percent, but non-Pell Grant students graduated at a rate of 28.3 percent. It is widely known that students who do not graduate from college struggle the most with student loan debt.

Some might argue that pressures to increase enrollment will be constrained by equally demanding goals to improve graduation rates. Every UNC school, for example, has an articulated five-year graduation rate it is expected to hit by 2022. As they enroll students, the schools will have to exercise prudence in whom they admit in order to meet those goals.

But simply having lofty graduation rate goals is not enough to hold institutions accountable for irresponsibly enrolling students. Given the schools’ track record of low graduation rates, it is difficult to imagine the future being any different. More importantly, if the institutions fail to meet their graduation rate goals, it is not clear whether they risk any penalty beyond a symbolic slap on the wrist.

Why use one predictor of success instead of two?

Many education leaders, including those in the UNC system, draw a false dichotomy between the SAT and GPA. They seem to think it’s an “either/or” question: Either the GPA is a better predictor of success or standardized tests are.

In a sense, what UNC system leaders say is true: If only one measure could be used to predict student success, evidence does suggest that the GPA is a more reliable measure than test scores.

But why would UNC system leaders want to limit themselves to only one measure? There is strong evidence that GPA and test scores together are an even better predictor of student success. The following two statements can both be true:

  1. GPA alone is a better predictor than SAT or ACT scores alone.
  2. GPA and SAT/ACT taken together is a better predictor than GPA alone.

Nevertheless, UNC policymakers seem to be focused on the first statement, not the second—except for a few board members such as Long. During the committee meeting discussion, Long observed:

The thing about standardized testing is that it is a check on grade inflation…some high schools are more rigorous than other high schools…why wouldn’t it be better to have something where we take into account both of those things?

Long’s argument is similar to one made by two professors of industrial-organizational psychology at the University of Minnesota, Nathan Kuncel and Paul Sackett. According to Kuncel and Sackett:

High-school and college grades are excellent measures…But we all know that a grade-point average of 3.5 doesn’t mean the same thing across schools or even for two students within a school. As high-school GPAs continue to go up because of grade inflation, having the common measure provided by admissions test scores is useful.

In an extended version of their essay, Kuncel and Sackett acknowledge that GPA is the best single predictor of student success, but they add: “Even better prediction is obtained by the combination of test scores and high school grade point average.” “Human behavior is notoriously difficult to forecast,” they write; “it would be strange for a single predictor to be the only one that matters. So it is also valuable to consider, whenever possible, how predictors combine in foretelling student success.”

Interestingly, the same year the pilot program was approved, the North Carolina State Board of Education made it easier for high school students to boost their grades: it voted to change the high school grading scale from a seven-point scale to a ten-point scale. After the change, for example, a student “who has a 91 in all his classes would have a 4.0 GPA on a 10-point scale but a 3.0 GPA on a seven-point scale.”
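The effect of that scale change on GPA is easy to illustrate. The sketch below uses the commonly reported cutoffs (old seven-point scale: A = 93–100, B = 85–92, C = 77–84, D = 70–76; new ten-point scale: A = 90–100, B = 80–89, and so on); the function name and exact cutoffs are illustrative assumptions, not the board’s official tables.

```python
def letter_points(grade: int, ten_point: bool) -> float:
    """Map a numeric course grade to GPA points under either scale.
    Cutoffs are the commonly reported ones and are illustrative,
    not the official state tables."""
    cutoffs = [90, 80, 70, 60] if ten_point else [93, 85, 77, 70]
    points = [4.0, 3.0, 2.0, 1.0]
    for cutoff, pts in zip(cutoffs, points):
        if grade >= cutoff:
            return pts
    return 0.0

# A 91 in every class: a 3.0 GPA under the seven-point scale,
# but a 4.0 under the ten-point scale.
```

The same numeric performance thus produces a full letter-grade jump in GPA, which is exactly the kind of inflation a common standardized measure can help detect.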

Some might still object to standardized tests because they believe the tests unfairly discriminate against low-income and minority students. But Sackett and Kuncel also debunk this claim. According to Sackett and Kuncel:

Standardized tests are not just proxy tests of wealth, and many students from less affluent backgrounds do brilliantly on them. But the class differences in skill development are real, and improving the K-12 talent pipeline would be a huge benefit to the country.

In sum, despite continuing claims that testing isn’t a fair measure of current abilities or future accomplishments, the overwhelming conclusion across decades of research is that tests are not biased against women and racial/ethnic minority group members in terms of their use in predicting subsequent academic performance.

Unaccountable Athletic Recruiting

One final reason the Board of Governors should vote against the proposed admissions policy is that it may allow athletics departments to act with less transparency.

In some cases, the UNC system allows athletic departments to recruit student-athletes even if they do not meet the system’s minimum admissions requirements. However, the number of student-athlete “exceptions” must be reported annually to the BOG. In 2017, for example, 3.5 percent of student-athletes were exempted from the minimum admission requirements.

But if the admissions requirements change, it will likely affect the number of student-athletes who are counted as “exceptions.” Under the new policy, a student-athlete might have very low test scores, but nevertheless “meet” the admissions requirement if he or she has a 2.5 GPA. The Martin Center asked van Noort whether the new admissions policy would affect how student-athlete exemptions are reported and she said: “Once we have the language for the new policy, we will then work through how the change will affect reporting. So yes, that will likely change, but that is to be determined.”

The worry, of course, is that athletics departments already struggle with prioritizing academic integrity; making them less accountable for admitting underprepared students is a step in the wrong direction.


UNC system officials consistently highlight their commitment to student success. And rightly so; they would be failing in their educational mission if the majority of students fail to learn, pass classes, and graduate. But the uncomfortable truth is: ECSU, FSU, and NCCU are severely struggling to fulfill that central mission. And rewriting a system-wide policy based on the performance of students at those schools is imprudent.

Some members of the board might be inclined to vote for the policy change because they want to give students a second chance—especially if students come from struggling school districts.

But, however well-intentioned board members and system staff might be, admitting underprepared students is not compassionate. Instead, it puts vulnerable students in an impossible situation: they will likely take on burdensome debt for a degree they are unlikely to finish, often for reasons beyond the university’s control.

It’s time that UNC officials realize that introducing more subjectivity into the admissions process is more likely to hurt students than help them.

Shannon Watkins is senior writer at the James G. Martin Center for Academic Renewal.