
How does a typical graduate program's admissions committee operate? (In the US --- I understand that in Europe admissions are often structured differently.)

My mental model at the moment is several professors sitting in a conference room with a stack of printed applications and a few piles: accept, maybe, and reject. Three or so professors will read an application and then make a decision. If the volume of applications is large enough, multiple meetings might be required. This is purely my imagination --- I don't know how accurate this model is.

Of course, I'm sure that how admissions are handled depends greatly on the school, program, and department. I'm interested in a few data points to get a rough idea of what the average is.

A few more specific related questions:

  • How much time does a typical committee spend on each application? I suspect there's a distribution. Obviously unqualified applicants will probably have their applications tossed into the reject pile pretty quickly. Qualified applicants' applications might receive more attention.

  • Do admissions committees look online for more information about an applicant? I've read some professors (in CS --- not my field) will look at a student's research/project website, but I have no idea how common this is.

1 Answer

I've been on the grad admissions committee at my university for 30 years, and have been Director of Graduate Studies in math on two different occasions, so I have a good sample of what goes on here.

First, in the last few years all our applications are electronic, so the admissions committee doesn't have to be in a little room with piles of paper any more. This also allows much more asynchronous appraisal of files, which spares us from having to find common meeting times.

Typically, a preliminary screening is done in a "distributed" sense, that is, obviously-not-qualified or misguided applicants are removed from the "active possibilities" queue. By this, I mean engineering or computer science students, or crackpots, who declare interest but have no documentable mathematics background, no letters of recommendation from mathematicians, no GRE subject test, no nothing. :)

Even after this initial filtering, we have many fewer funded spots (TA or RA positions) than approximately-qualified candidates. Even though we expect perhaps a 50-percent acceptance rate on our offers, the number of approximately-qualified candidates is still too large by a factor of 2 or 3 or more.

Candidates from small colleges in the U.S. typically have a very thin background, whether or not they've done special summer programs, so are at some initial disadvantage in comparison to candidates from Europe or Asia who have seen much more coursework, if only due to the structure of "college" there. This also precipitates discrepancies in GRE subject test scores. Nevertheless, we have found that this discrepancy is often very temporary, and after two years, or less, success in our program is almost completely uncorrelated with such things. Thus, many of the pretend-objective measures of prior success are of very limited value.

Further, the weight a letter of recommendation carries depends greatly on its author, and on that author's prior experience and basis for comparison.

So we are left trying to compare candidates from widely varying backgrounds...

Getting to the "final list" of offers thus consists partly of winnowing out candidates who are perhaps vaguely qualified but indicate no genuine interest in anything about our program, while putting onto the "definite" list candidates who seem to be focused and have a particular interest in the research program(s) of our faculty.

Documentably strong background and prior success is good, obviously, but, again, undergrad work, summer REUs, and such are a different enterprise than genuine graduate work and long-term research. Success in a highly structured environment gives very ambiguous indication about success in an unstructured environment. Success when there are unambiguous answers to questions that can be answered in a few hours is different from success in situations where the program can take months, and one is not quite sure what the question is.

So there is considerable discussion and re-reading of personal statements and letters of recommendation, trying to project into the future and a new environment, based on our prior experience. Not easily turned into an algorithm.

As to the specific questions: an obviously implausible applicant could be appraised in 5-10 minutes. At the other end, the final decisions about the better obviously-qualified applicants may involve repeated appraisal, so that the total time spent might be 5-10 person-hours in the end.

As to whether we look at on-line manifestations of applicants: this is a potentially delicate issue. The default approach is to absolutely not do so, and certainly not to look at Facebook or other social media. If the application gives a pointer to a "professional" web page, of course this is considered useful, and positive if that site gives evidence of good work by the applicant. But "snooping around" would be unethical and inappropriate. Even if there is a pointer to a site in the application, if the application is not otherwise plausible I don't think I'd look at the web site.