2017 Conference Blog-Monday, January 23

  • Welcome and Morning Keynote-How Admissions Decisions are Made and Why (Lucido & Baker Tew)
  • Panel Session I-History and Future of Nonacademic Factors in Admission Decisions (Hossler, Lucido & Chung)
  • Master Class I-Importance of Context in Admissions Decision Making (Bastedo)
  • Panel Session II-The Systematic Identification of Disadvantage in Educational Opportunity (Ballinger & Perfetto)
  • Master Class II-The Systematic Identification of Personal Attributes in Admissions (Rikoon & Wright)
  • Dinner and Keynote-Meeting Institutional and Student Needs in Admissions: Recipes for Selecting Qualified & Diverse Applicants (Payne)

Welcome and Morning Keynote: How Admissions Decisions Are Made and Why
Jerry Lucido, USC CERPP
Laurel Baker Tew, Viewpoint School

Lucido slides
Baker Tew Independent Schools Response slides

Dr. Lucido began by welcoming guests and thanking supporters and sponsors. He also reviewed the mission of the Center for Enrollment Research, Policy, and Practice (CERPP):

The Center for Enrollment Research, Policy, and Practice (CERPP) at the University of Southern California in Los Angeles analyzes enrollment issues through the critical perspectives of social science researchers, policymakers, and college and university practitioners. The center is rooted in the belief that the educational attainment goals of the nation will be more fully realized as college admission, student financial aid, and degree completion processes become better informed, more expertly practiced, and more equitable. The teaching, research, and service activities of the center are devoted to these ends. In a phrase, the center is devoted to the social benefit of college admission and enrollment policies and practices in the United States and internationally.

The Center is currently involved in research related to nonacademic factors, the test-optional movement, classifications for international schools, and "the rankings" project, a new way to give the power of the rankings back to the consumer. It provides teaching through the Leadership in Enrollment Management program and is currently developing a management program. From a service perspective, the Center engages in other activities, including this conference and the College Counseling Corps.

Next, Lucido introduced the agenda for his presentation, which included the following discussion points:

  • Admissions and the institutional mission
  • Where are admissions decisions made?
  • Admission policy
  • Admission practice
  • Legitimacy: the courts and public opinion

(He noted that while the Center originally focused on undergraduate admissions, more and more independent schools and graduate schools are also examining and applying these principles.)

Lucido continued by citing Sweezy v. New Hampshire (1957) and asked several key questions (see slide deck). He quoted B. Alden Thresher (1966), noting that admissions decisions occur outside of the admissions office, not within it.

Examining the term admission, we see its Latin roots: toward the mission, and how that mission is formulated. Mission also has to do with educational focus: what is the school trying to accomplish?

Participants were encouraged to think about their own missions and consider the following admissions models (see slide deck for additional information):

Eligibility-Based Model

  • Entitlement
  • Open Access

Performance-Based Model

  • Meritocracy
  • Character

Student Capacity to Benefit Models

  • Enhancement
  • Mobilization

Student Capacity to Contribute Models

  • Investment
  • Environmental/Institutional
  • Fiduciary

These models are useful prompts for reflection: Which of these constructs operate at your institution? Why? How can they lead to better alignment of admissions practices? What is your ability to move them further?

Lucido shifted gears to examine the elements of the admissions decision. He suggests that we must distinguish admission criteria from the evidence that those criteria are present. The criteria are academic achievement and personal characteristics. The evidence is the application information, such as transcripts, essays, personal statements, school and extracurricular records, and recommendations. He shared a host of decision elements, all of which play a role in decisions:

  • Coursework
  • Grades
  • Relative achievement among peers
  • Standardized testing
  • Time spent and achievements when not in class
  • Insights evident in essays/recommendation
  • Talent: Athletics, art, music, drama, etc.
  • Diversity
  • Legacy
  • Globalization
  • Fiduciary (full payers and net tuition revenue)
  • Influence/pressure points (board, donors, legislature, etc.)

As these criteria are evaluated in the admissions office, there must be equity and fairness. There are limitations to standard measures, as many other factors are in play. We must also ask: What is merit? Who is meritorious? How do we balance all of these questions and considerations to create the society that we want?

Lucido emphasized that admission policy is a reflection of institutional purpose and highlighted additional models (see slide deck for elaboration):

  • Open and Eligibility Models
  • Selectivity Models: it is very difficult to be transparent when we are balancing all of these factors.

Lucido then shifted from undergraduate, college-wide models to the program-based models used in graduate, professional, and medical school admissions. Here there is greater reliance on academic record, testing, faculty judgment, etc. Lucido notes that there is some evidence that academic credentials do not portend professional excellence.

Lucido concludes by pointing to constraints that are also in play as we work toward the mission, reminding us of the complexity of this work.

Laurel Baker Tew

Baker Tew took the stage with the aim of sharing the independent school response to how admissions decisions are made. Baker Tew came from working in admissions at USC and describes the transition into independent schools as "eye-opening." She suggests that independent schools are constantly looking to higher education (HE) for practices, but notes that HE could learn from independent schools!

Why do people go to independent schools? Who will teach your child, what will they be taught, how will they be taught, and who will their peers be? These are the key questions for higher education, and the same questions exist at independent or private schools. The independent school experience is highly relationship-based, and the relationship continues after admission is granted (unlike in higher education). The philosophy of admission is completely driven by the mission, and the criteria are highly differentiated depending on the type of school. For example, boarding schools have a very different set of criteria than an elementary school. They must also consider that students are "added along the way" (older students entering the school) and how they fit into the school culture.

What are the biggest differences between Independent Schools and Higher Education?

Baker Tew notes that independent schools never admit just students: they admit families who are going to be with them for a very long time. Thus, it is important to assess the fit of the family for the institution. Selectivity is highly idiosyncratic, dependent on the age of the child and the school's position in the marketplace (highly competitive or growing). The question at independent schools is more about "growing" the best students than "selecting" them.

Baker Tew notes that another key question relates to predicting success. How do you predict success for incoming kindergartners? What is their relative achievement? Who is meritorious? What does the child do outside of class? These are very difficult to assess with young children. Baker Tew pointed back to Weissbourd's opening presentation the previous evening and the importance of caring. Very young children can and do show their caring tendencies. They can lose these tendencies along the way, and our work is to develop them.

Baker Tew asks if independent schools are the canary in the coal mine for higher education. She describes the educational landscape as "not good": dropping birth rates, growing educational options and offerings, and fewer market levers to pull than HE has are among the challenges. Independent schools are already dealing with this decline and are doing the best they can to meet their missions and goals, giving higher education a system to observe in order to predict the change it can expect and how to deal with it.


Panel Session I: The Past, Present, and Future Use of Non-Academic Factors in Admission Decisions
Don Hossler, USC CERPP
Jerry Lucido, USC CERPP
Emily Chung, USC CERPP

Hossler Lucido Chung Slides

Lucido introduced the purpose of the session, noting that we want to look at the study and rationale, the literature review, the research questions, methods, findings, and implications for change related to the use of non-academic variables.

The use of non-academic variables is not new. In the early 1900s, nonacademic factors such as the ability to pay and notions of athleticism were already considered. At Yale, academic factors were the main criteria, but character and personality mattered. During the Civil Rights movement, using race and diversity as factors became a question and a debate. This brief history points to where we are heading: what is the contemporary context, and how are things being used? There is a trend toward reliance on standardized test scores as the sole criterion. There is a desire to enroll a more diverse student body and concern that standardized test scores don't capture a student's potential to be successful. The test-optional movement is growing, and there is a need for additional information to craft a class. These concepts undergird why this study was conducted.

The literature review began by looking at how scholars have thought about nonacademic factors. One of the most useful tools was the 21st Century Skills rubric from the National Research Council (2012), which covers cognitive, interpersonal/social, intrapersonal/emotional, and self-regulatory skills. They also looked at the work of Sedlacek, Conley, and Duckworth, and at how these factors are actually being used, classified, and talked about. Kyllonen's work (2005) provided the framework used for this study; it includes personality factors, affective competencies, performance factors, attitudinal constructs, and learning skills.

Next, Chung spoke to the research study. She pointed to the two research questions:

  1. Are nonacademic factors used in the institution’s admissions decisions?
  2. What is the relative importance of all of the factors used in the institution's admissions decisions?

This was a qualitative study with a purposeful sample of ten four-year undergraduate institutions. These institutions use nonacademic factors in admissions decisions and were chosen to represent a range in terms of selectivity, public/private status, size, and location. A structured interview protocol was followed, and anonymity encouraged greater candor. Two people from each institution were included: the senior enrollment officer and the senior admission officer. There was variability in how terms were used. "Grit," for example, could mean improvement of performance over time or being a first-generation college-goer.

Next, Hossler presented the findings of the study. In order of importance, they found that schools are using academic indicators, school and personal context, and nonacademic factors/constructs, including performance, attitudinal, and character constructs. They found some experimentation with instruments designed to measure creativity, locus of control, emotional intelligence, and emotional quotient. In some cases these were locally designed tools (within the institutions); in other places they were "off the shelf." They found that relatively little research was done by institutions to shape their class, regardless of selectivity. Said differently, institutions don't know if the nonacademic factors they are using work or produce the results that they are hoping for.

Lucido continued by sharing some observations made along the way as they conducted the study. He notes that we spend a lot of energy altering practices to fit demographic and societal trends, but those practices remain somewhat unexamined. Moving forward, we must continue the discourse, yield the "demographic dividend," look to advances in neuroscience that point to learning opportunities in character development, and implement more expert and equitable practice.

Lucido closed by pointing to what is needed: new measures to rebalance the equation, transparency, and a critical mass of practitioners working on these issues.


Master Class I: The Role of Context in Admission Decisions
Michael Bastedo

Bastedo slides

Bastedo opened by sharing the paradoxes that drew him into this work. He asked, what does it mean to do holistic admissions? He pointed to several quotes (see slide deck) suggesting that we are looking at applicants in the context from which they come. Bastedo surveyed over 300 admissions offices to learn what they mean when they say holistic admission. He found that about half meant "whole file": they read everything and consider everything; it isn't a formula. The next approach (20%) was "whole person": they want to get to know the whole person, grades and all. Approximately 30% said "whole context." Thus the idea of holistic review is not consistent.

Bastedo asked, what is the evidence of holistic review?

One idea was "maxing out" the curriculum. Did the student take all of the AP courses, or all of the challenging courses, that were available to them? The results, however, showed that maxing out was not a predictor of getting into a selective college; many students who did not "max out" are admitted to selective colleges. Additionally, the practice itself may not produce the effects that we are hoping for.

The next idea was standardized tests. The College Board suggests that tests should be one factor among many, but if you look at the research, standardized test scores are the strongest predictor of admission to highly selective colleges.

Why is there a disconnect between what we say and the research results? There is a need for organizational thinking that addresses how decisions are made and shaped in real-world contexts. Bastedo notes two primary biases that admissions officers are subject to:

  1. Anchoring Bias: the human tendency to consider arbitrary numerical values from the recent past when estimating future numerical values, particularly when those values are uncertain or ambiguous. This influences both expert and lay judgments. People adjust inadequately to anchors, particularly if the anchor is provided externally.
  2. Correspondence Bias: the tendency to attribute decisions to a person's dispositions rather than to the situation in which the decision occurs (also called the "fundamental attribution error"). With the right information, people properly account for situational information rather than relying on dispositional inferences, and thus make more accurate attributions.

Bastedo then moved into a discussion of "cognitive repairs." He pointed to the norming process: much of the definition in rubrics relies on a comparison to the pool, so to be able to score, you would have to see the entire pool. In some offices Bastedo observed a scoring process in which the initial score was based on GPA and SAT; in others, these were the last things to be reviewed. Even when readers were instructed NOT to preview the GPA and SAT, it was difficult, as the tendency was to review those first. In a sense, these readers were creating, or desired to create, their own anchors. Bastedo observed that paying attention to context, rather than just the raw credentials, was a big theme at one of the schools.

There were three primary cognitive repairs that Bastedo encountered.

  1. Language Monitoring: People would use loaded phrases, and admissions officers would "snap back" at them, providing different language that prevented a toxic atmosphere. For example, "bad grades" would be replaced with "low grades for us," "great essays" was restated as "helpful personal statement," "red flags" became "raising questions," and "a poor essay" became "missed opportunity." This includes the concept of "building," or looking at a file for ways to build it up, rather than "taking points away" or "pulling down."
  2. Reducing Cognitive Closure: People have a tendency to want to reduce the amount of information that they have to process. "Seizing and freezing" means that, at a certain point, people may believe they have enough information to make their decision, make it, and stop taking in more information. This relates to cognitive load: there is only so much information a person can take in, and working against the tendency to "blur together" files is difficult. Bastedo suggests that we may not give readers enough tools to deal with this.
  3. Error Correction: A second reader or committee review either validates or reverses the decision (one possible screening mechanism is sketched just after this list). One note of caution, however: a reader who becomes an "outlier" by making errors in these high-stakes decisions can be devastated by that designation. It is common not to hit normal distributions in the first couple of weeks, but people feel like they are supposed to be "normalized" from the start. For the reader, the anxiety related to error correction can lead to overcorrection and more errors.
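
How such outlier readers might be identified was not described in the session, so the following is a minimal, purely hypothetical Python sketch of one statistical screen; the 1-5 scoring scale, reader data, and two-standard-error threshold are all invented for illustration:

```python
import math
import statistics

def flag_outlier_readers(scores_by_reader, z_threshold=2.0):
    """Flag readers whose average file score deviates sharply from the pool.

    scores_by_reader: dict mapping reader id -> list of file scores (1-5).
    A reader is flagged when their mean score sits more than z_threshold
    standard errors away from the pooled mean of all scores.
    """
    all_scores = [s for scores in scores_by_reader.values() for s in scores]
    pool_mean = statistics.mean(all_scores)
    pool_sd = statistics.stdev(all_scores)
    flagged = set()
    for reader, scores in scores_by_reader.items():
        se = pool_sd / math.sqrt(len(scores))  # standard error of this reader's mean
        if abs(statistics.mean(scores) - pool_mean) / se > z_threshold:
            flagged.add(reader)
    return flagged

# Hypothetical data: reader C rates comparable files consistently lower.
readers = {
    "A": [3, 4, 3, 5, 4],
    "B": [4, 3, 4, 4, 3],
    "C": [1, 2, 1, 1, 2],
    "D": [3, 4, 4, 3, 4],
}
print(flag_outlier_readers(readers))  # -> {'C'}
```

As Bastedo cautions, how such a flag is communicated matters: the designation itself can drive the overcorrection he describes.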

Bastedo then asked, if cognitive biases are common in admissions decisions, could we demonstrate the biases in a randomized lab experiment?

Through the support of the National Science Foundation, 300+ admissions officers from selective colleges were recruited and asked to review simulated files "as usual." They read three simulated files (see slide deck for details) under two conditions: one with limited information and the other with detailed information. The results (slide 33 in the slide deck): lower-SES applicants were 13-14% more likely to be admitted if more contextual information was provided, even when all of the other information was the same. It made a substantial difference in the propensity to admit. It didn't make any difference how experienced the admissions officer was, how selective their institution was, or what the demographic background of the officer was.

Thus, a fairly simple intervention (more detailed information) changed the propensity to admit. Bastedo does point out that the participants of this study knew it was a simulation, so the "high stakes" anxiety issues were not in play. Nonetheless, the results point to a need for more detailed information to play a role in the decision-making process.

In Q&A, he noted that a simple way to provide more contextual information for admissions purposes would be for the College Board and ACT to provide test (SAT/ACT) scores in terms of the percentile achievement at the applicant’s high school and in terms of the applicant’s zip code.
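
He did not specify an implementation, but the statistic he describes is simply a within-group percentile. A minimal sketch under that assumption; the schools, applicants, and scores below are invented for illustration:

```python
from bisect import bisect_left
from collections import defaultdict

def contextual_percentiles(records):
    """Convert raw test scores into percentiles within each applicant's school.

    records: list of (school_id, applicant_id, score) tuples.
    Returns {applicant_id: percent of schoolmates scoring strictly below}.
    """
    by_school = defaultdict(list)
    for school, _, score in records:
        by_school[school].append(score)
    for scores in by_school.values():
        scores.sort()
    return {
        applicant: 100.0 * bisect_left(by_school[school], score) / len(by_school[school])
        for school, applicant, score in records
    }

records = [
    ("HS-1", "a1", 1100), ("HS-1", "a2", 1250), ("HS-1", "a3", 900),
    ("HS-2", "b1", 1100), ("HS-2", "b2", 1450), ("HS-2", "b3", 1500),
]
pct = contextual_percentiles(records)
# The same 1100 reads very differently in the two school contexts:
print(pct["a1"], pct["b1"])  # 33.33... vs 0.0
```

The same grouping logic would apply with zip codes in place of school ids.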


Panel Session II: The Systematic Identification of Disadvantage in Educational Opportunity
Phil Ballinger, University of Washington
Greg Perfetto, College Board

Ballinger slides
Perfetto slides

Phil Ballinger began with a discussion of the work being done at the University of Washington related to identifying disadvantage. The University of Washington conducts a full holistic review of students. This process began in 2005 and the goal was to create a community built around academic potential, broad backgrounds, and a degree of social engineering (or a hope that there are social effects that relate to the common good).

They created the Geo-Index, which is derived from a combination of geographic and high school variables in tandem with applicant-level socioeconomic factors (please see the slide deck for specifics). Ballinger emphasized that all of the information comes from the students (not the schools or the parents). Additionally, everything that has to do with admission policy must be approved by the faculty, so, importantly, the Geo-Index did not add new factors to the existing holistic review policy, but rather added more detail to current factors. They created geographical indexes and organized their applications into "buckets," which allowed readers to focus on applications that came from similar contexts. In this way, they can review academic records and the rest of the file in view of that context.
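
The actual Geo-Index formula was not shared, so the following is only a rough sketch of the general technique: standardize several contextual variables, combine them with weights, and bucket applicants by rank so readers see files from similar contexts together. The variable names, weights, and bucket count are assumptions, not the University of Washington's method:

```python
import statistics

def composite_index(rows, weights):
    """Combine contextual variables into one index by summing weighted z-scores.

    rows: list of dicts, one per applicant, sharing the same numeric keys.
    weights: dict mapping variable name -> weight in the composite.
    """
    index = [0.0] * len(rows)
    for var, weight in weights.items():
        values = [row[var] for row in rows]
        mu, sd = statistics.mean(values), statistics.stdev(values)
        for i, v in enumerate(values):
            index[i] += weight * (v - mu) / sd
    return index

def bucket(index, n_buckets=5):
    """Assign each applicant to a bucket by rank on the index."""
    order = sorted(range(len(index)), key=lambda i: index[i])
    buckets = [0] * len(index)
    for rank, i in enumerate(order):
        buckets[i] = rank * n_buckets // len(index)
    return buckets

applicants = [
    {"median_income": 42_000, "pct_free_lunch": 0.61, "parent_ba_rate": 0.18},
    {"median_income": 95_000, "pct_free_lunch": 0.08, "parent_ba_rate": 0.71},
    {"median_income": 58_000, "pct_free_lunch": 0.35, "parent_ba_rate": 0.40},
]
weights = {"median_income": 1.0, "pct_free_lunch": -1.0, "parent_ba_rate": 1.0}
print(bucket(composite_index(applicants, weights), n_buckets=3))  # -> [0, 2, 1]
```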

Challenges after the first year of use include the reality that the effects of the Geo-Index are difficult to measure; even so, this past year they admitted their most diverse class. Training, norming, and implementing the approach is difficult, and the faculty has expressed concern that the statistical methods behind the Geo-Index are not sufficiently vetted. Despite the challenges, they proposed a three-year implementation and are evaluating each step of the way.

Next, Dr. Greg Perfetto from the College Board continued the discussion with his presentation, Access, Adversity, and Context. Perfetto began by sharing the historical context for the work of the College Board (please see the slide deck for a brief review of the key activities the College Board has engaged in over the last three decades). In the Future Admissions Tools and Models Project, they learned about and described needs and priorities and determined that more best-practices research and tool development was needed. Among the four key focus areas the project described, environmental context was the one discussed in detail in this presentation.

The College Board envisioned an applicant-based contextual dashboard to support admissions offices in more systematically measuring and utilizing environmental context. Among other things, they wanted to create a race-neutral tool. They heard from many colleges that were interested in better understanding context but didn't have the time or resources, and that also wanted a collaborative, national effort.

The College Board then:

  1. Convened experts
  2. Developed a framework
  3. Prototyped a tool
  4. Conducted preliminary research

They defined educational context based on three dimensions:

  1. The High School Environment
  2. The Family Environment
  3. The Neighborhood Environment

Bringing this all together, they designed the prototype dashboard (see slide deck). It includes:

  • High School Demographics and Opportunity
  • SAT Scores in Context
  • Neighborhood Context (note "undermatch": students who could potentially have accessed a more rigorous opportunity but did not. This is not to be judged at the individual level, but as neighborhood trends emerge it is worthy of examination)
  • High School Level Adversity Percentiles
  • An overall adversity index that corresponds largely to what Ballinger had shared related to the Geo-Index

Perfetto asks: assuming this tool is valid, consistent, and systematic, what does it mean? At this point, Perfetto dove into the attributes of disadvantage, asking whether we find adversity where we would expect it. At a global level, the answer is yes. As they drilled down, they found that areas of disadvantage seem to match what people would expect in terms of geographic distribution. Perfetto continued to dissect the data from socioeconomic, diversity, and educational-outcomes perspectives. Please see the slide deck for these details.


Master Class II: The Systematic Identification of Personal Attributes in Admissions 
Sam Rikoon, ETS
Keith D. Wright, The Enrollment Management Association

Rikoon slides
Wright slides

Rikoon began with a definition of noncognitive skills (a designation he said he does not prefer, but as it is commonly used, he uses it as well). He notes that "noncognitive" is often taken to mean everything that is not targeted by standardized tests of academic ability, which is not a specific enough definition. He describes these variables instead as demonstrable personality, motivation, attitudinal, self-regulatory, and learning-approach constructs for which there are observable differences among people that are not measured by traditional tests. ETS has a history of noncognitive assessment research that began in the 1950s; elements such as drive, intellectual stamina, and conscientiousness were among the factors studied. ETS has had a dedicated center on noncognitive assessment since 2000.

One challenge is to recognize that traditional concepts like reliability and validity are not binary in nature. The reliability of an assessment can be expected to vary over the range of a noncognitive skill's expression. Validity is also a concern: what evidence validates the scores, and what outcomes do they relate to? There are also concerns about certain types of observed response patterns in high-stakes applications like admissions. Among other reasons, these patterns may be due to "socially desirable responding" on the part of students ("faking good"), a tendency to respond in the center or extreme ranges of a scale due to construct-irrelevant cultural differences, or the halo effect (i.e., raters providing uniformly positive or negative ratings of students based on a general impression).
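
The talk did not present detection code, but some of these patterns can be screened for directly in the response data. A toy Python sketch with invented heuristics and thresholds, assuming Likert responses coded 1-5:

```python
def flag_response_patterns(responses, scale_max=5):
    """Heuristically flag suspicious patterns in one Likert response vector.

    responses: list of integer answers on a 1..scale_max scale.
    Returns a list of pattern labels (empty if nothing stands out).
    """
    flags = []
    if len(set(responses)) == 1:
        flags.append("straight-lining")        # identical answer everywhere
    if all(r in (1, scale_max) for r in responses):
        flags.append("extreme responding")     # only scale endpoints used
    midpoint = (1 + scale_max) / 2
    if all(abs(r - midpoint) <= 0.5 for r in responses):
        flags.append("central tendency")       # hugging the scale midpoint
    if sum(responses) / len(responses) >= scale_max - 0.5:
        flags.append("possible 'faking good'")  # uniformly glowing self-report
    return flags

print(flag_response_patterns([3, 3, 3, 3, 3]))  # ['straight-lining', 'central tendency']
print(flag_response_patterns([5, 5, 5, 5, 4]))  # ["possible 'faking good'"]
```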

One way to address these issues is to use a mix of item types, including Likert scales, forced choice, situational judgment tests, performance tasks, biodata, anchoring vignettes, fluency measures, others' ratings, and game-based or conversation-based assessments.

Rikoon moved next to discuss how one might determine the quality of noncognitive assessment. There are four key areas:

  1. Development: Is the development of the tool supported by literature in the field? Is it ad hoc or organized?
  2. Evidence that Claims are Supported by Data: Are cognitive labs and pilot studies part of the process? Is the process implemented with fidelity? Are institutions basing claims on what they confirm with their own students, or on external studies?
  3. Sufficient Reliability and Validity for the Task(s) at Hand: Are scores sufficiently stable across subgroups? Are scoring rubrics available for performance tasks? Do different raters agree in their judgments?
  4. Use Multiple Sources of Information: Do sources agree or disagree?

Rikoon went on to describe the Personal Potential Index (PPI), a tool available for research purposes. The study he described simulated an admissions process and found that achievement gaps decreased and the admission of underrepresented minorities increased when noncognitive assessment criteria were taken into account. Applied to graduate admissions, the PPI predicted academic performance and added predictive value over standardized tests.

Rikoon concluded by noting that we have a good start but much more work and research needs to be done in the area of applying noncognitive assessment to admissions decisions.

Keith Wright next took the podium and began his presentation, Character Skills Assessment: Our Journey. He began by describing his own trajectory as an African American child growing up in an underserved school in Chicago. He described himself as taking "regular" classes and said he hadn't heard about standardized tests until two months before he had to take them. As someone with average standardized test scores but a good work ethic, he thanked the universities for digging deeper (taking the time to read his essay and learn who he was) and ultimately admitting him.

He then noted that we have standards for educational and psychological testing (from AERA, APA, and NCME). Do we take the time to consider the purposes of the tests we are using and how we apply these measures to all applicants? Important psychometric concepts include:

1. Reliability: Are we measuring consistently?
2. Validity: Are we measuring what we think we are measuring? There are three types:
   • Content Validity (committees)
   • Construct Validity (research)
   • Predictive Validity (research)
3. Equating: an empirical statistical procedure used to adjust for differences in form difficulty so that scores from different test forms have the same meaning and can be used interchangeably over time (a worked sketch follows this list).
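
Wright did not show the underlying math, but the simplest variant, linear (mean-sigma) equating, conveys the idea: map a form-X score onto the form-Y scale so that the two forms' means and standard deviations line up. The numbers here are invented for illustration:

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Mean-sigma linear equating: express a form-X score on the form-Y
    scale by matching the two forms' means and standard deviations."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Suppose form X ran harder (mean 70, sd 10) than form Y (mean 75, sd 12).
# A 70 on form X then carries the same meaning as a 75 on form Y:
print(linear_equate(70, mean_x=70, sd_x=10, mean_y=75, sd_y=12))  # 75.0
print(linear_equate(80, mean_x=70, sd_x=10, mean_y=75, sd_y=12))  # 87.0
```

Operational equating designs (common items, randomly equivalent groups) are more elaborate, but the goal is the same: scores that can be used interchangeably across forms.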

Wright continued by reviewing the assessment development cycle (please see slide deck) for the Character Skills Assessment (CSA). The first step is item writing by experts in the content that should be measured on the test. Next, items move through committee (peer) and vendor review. From there, they pre-test the items and conduct an item analysis; depending on the results, they either revise a question or move it to the next stage of the development cycle, assembly. The whole cycle is a robust, methodical, rigorous process.

Next, Wright asked why character measurement matters. He showed a graphic indicating that cognitive measures explain only 10-20% of first-year GPA; other factors make up the remaining 80-90%. (Read as variance explained, an R² of 0.10-0.20 corresponds to correlations of roughly 0.32 to 0.45.)

The vision of The Enrollment Management Association for the CSA was to provide schools with a more holistic profile of students. The CSA pairs cognitive measures with character: it identifies character skills that are important, builds a reliable and valid measurement tool to assess them, and is accompanied by an easily understandable score report. Please see the slide deck for the seven constructs and sample questions. The assessment will be rolled out in Fall 2017.


Dinner & Keynote Address: Meeting Institutional and Student Needs in Admissions: Recipes for Selecting Qualified and Diverse Applicants
David Payne, ETS 

Payne slides

David Payne began his session by asking: what are the ingredients for appropriate graduate school admissions? He notes that the conversation will focus on graduate admissions but is applicable to undergraduate admissions as well. Payne shared that ETS firmly stands behind the GRE; however, trends in graduate education suggest that there may be undue emphasis on GRE scores (note the article on slide 2 of the slide deck).

ETS stands behind the GRE and its utility. According to Payne, the GRE and UGPA (undergraduate GPA) are generalizably valid predictors of graduate grade point average, first-year graduate grade point average, comprehensive examination scores, publication citation counts, and faculty ratings. He notes that for decades ETS has been good at supporting faculty in how to use test scores and defend their use; in fact, ETS publishes a guide and notes that a cut-off score should NEVER be used as the only criterion. ETS also has a social mission and is concerned about social issues, so it emphasizes that the context of individuals should be taken into account as scores are evaluated. At the same time, Julie Posselt published a study showing that common graduate admission practice is a two-step process: a quantitative screen (with a GRE cut-score) followed by holistic file review. Only after applicants make it past that first screening are they given a more detailed evaluation with more factors considered. Thus the cut-score in practice excludes many potential students, despite the caution from ETS that the GRE score should not be used as the only criterion for denial of admission. There are other concerns as well, including general risk aversion, with where applicants went to undergrad being given more weight in the admissions process.

On January 4, 2016, the AAS (American Astronomical Society) shared a statement that recommended against using GRE and PGRE test scores for admission, but noted that if you are going to use them, you should use them in alignment with ETS guidelines. While the GRE and PGRE are actually valid predictors of graduate student success (see above), it’s important to note that ETS had itself highlighted that GRE and PGRE test scores should never be the sole criterion for admissions or fellowships.

What are the components for effective and impactful admissions policies that foster qualified and diverse applicants? What does the recipe involve?

  • ETS and the GRE program and board believe that we need to continue to engage with researchers
  • ETS and the GRE program and board have a commitment to conducting on-going research of their own
  • ETS, as a research organization, also invests in research each year, including examining the fairness of the admissions process
  • Continued engagement with those who use their scores
  • Seek and identify best practices

Payne pointed to the efforts of James Madison University as an example of a best practice: their process includes holistic file review plus targeted interventions that use GRE scores for developmental purposes. The Fisk-Vanderbilt Bridge program is another promising practice that ETS is looking at as it engages with the HE community.

Continuing with the recipe, Payne notes that there must be:

  • Continued investment in development of new assessment approaches
  • Continuing engagement with score users to identify and share best practices

Payne highlights that changing institutional practice is no easy task.

From here, Payne discussed the Personal Potential Index (discussed earlier). In 2001, the GRE board convened a group of researchers who were examining noncognitive skills to discuss ways of assessing them that are not coachable. Starting in July 2009, the PPI became available at no cost to students. Additionally, GRE funded a validity study but hit the challenge of not being able to find ten schools to participate. Digging into why, they found that faculty were very reluctant to change their admission processes to factor in noncognitive skills. Payne notes that ETS didn't give faculty enough guidance in how to use the information.

In summary, Payne notes that student selection is truly an art and a science. While there are exciting new trends, it is critical that we work collectively and adopt empirical approaches to investigate the impacts of innovations. He reminds us that institutional change is challenging and that there are powerful forces that serve to maintain the status quo. That said, this is important work, and ETS is committed to working in this area and partnering with educational communities for change.
