
Philosophy Admissions Survey: Gender and Minority Status

The analysis here and the explanation here are courtesy of my spouse, who has a statistical background in the social sciences. It's a little… dense. I'll provide some commentary at the bottom. We are both assuming that marking yes for minority status represents an applicant who is nonwhite (Ian, or anyone else who could shed light on this, let me know if that's not the case).

We wanted to see if philosophy PhD program acceptance is related to the gender and race of applicants. We obtained a sample of 49 individuals who had applied to at least one program each and heard back, and were willing to disclose their race, gender, and the responses they obtained. Our sample included 6 minority students, two of whom were women, and ten white women. We lumped all nonwhite people into a single group because there were not enough cases to reliably estimate the acceptance rate for each race individually. Ideally, we would want to test for an interaction between race and gender, to see whether the effect of race was different for men and women or, equivalently, whether the effect of gender was different for white and nonwhite students. However, with only two nonwhite women applying to programs, we could not reliably estimate this effect. Thus, we modeled only the main effects of gender and race, not their interaction. Our model assumes that race has the same effect regardless of gender and vice versa. The number of programs applied to by each individual varied from 1 to 20. We used binary logistic regression to measure the relationships among these variables, as we were modeling a probability (the acceptance rate). We tested the null hypothesis that program acceptance was unrelated to both race and gender against the alternative hypothesis that acceptance rates differed by race and/or gender.
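For anyone who wants to follow along at home, here is a minimal sketch of this kind of model in Python with statsmodels. This is not the software we actually used, and the data below are invented; the sketch only mirrors the model structure described above (admissions out of applications, with main effects for gender and minority status).

```python
# A minimal sketch (not the actual analysis code) of a grouped binomial
# logistic regression: admissions out of applications, with main effects
# for gender and minority status. The data below are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "female":   [0, 0, 1, 0, 1, 1],    # 1 = woman, 0 = man
    "minority": [0, 1, 0, 0, 1, 0],    # 1 = nonwhite, 0 = white
    "admits":   [2, 5, 3, 0, 4, 1],    # successful applications
    "apps":     [10, 9, 8, 6, 7, 5],   # total applications (1 to 20 in the post)
})

# statsmodels' Binomial family accepts a two-column response of
# (successes, failures) for grouped data like this.
endog = np.column_stack([df["admits"], df["apps"] - df["admits"]])
exog = sm.add_constant(df[["female", "minority"]])

model = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(model.summary())   # coefficients and Z tests for each predictor
```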
We started by building a logistic regression model using all 49 cases. The Hosmer-Lemeshow test for goodness of fit yielded a significant result, chisq(7) = 14.504, p = 0.043, indicating that the assumptions of the logistic regression model were not met. The logistic regression model assumes a linear relationship between the predictors (in this case race and gender) and the natural log of the odds of the outcome (program acceptance), with random errors following the binomial distribution. Thus, we will not even look at the results of this logistic regression, as they are not considered valid. One possible reason for this assumption being violated is the presence of unusual observations for which the model cannot make a good prediction. We detected four outliers:

| ID  | Minority? | Gender | Total Applications | Total Admissions | Standardized Residual |
|-----|-----------|--------|--------------------|------------------|-----------------------|
| 24  | No        | Male   | 18                 | 14               | 5.65                  |
| 14  | No        | Male   | 11                 | 7                | 3.23                  |
| 25  | No        | Female | 11                 | 8                | 2.57                  |
| 100 | No        | Male   | 20                 | 0                | -2.55                 |

These contained the case with the highest number of unsuccessful applications by far (#100) and the three cases with the highest numbers of admissions. To show you how unusual these observations are, we have included histograms of successful and unsuccessful applications.

 

[Two histograms: number of successful applications per applicant, and number of unsuccessful applications per applicant.]

 

As the graphs show, no other applicant has anywhere near 14 successful or 20 unsuccessful applications. Subjects 14 and 25 have unusually high numbers of successful applications too, but they are not as obviously separated from the rest. It's worth noting that the next highest number of successful applications (6) was obtained by #13, a minority student:

| ID | Minority? | Gender | Total Applications | Total Admissions | Standardized Residual |
|----|-----------|--------|--------------------|------------------|-----------------------|
| 13 | Yes       | Male   | 9                  | 6                | 1.07                  |

As we will see, white students tend to have somewhat lower acceptance rates than minority students. The fact that #14 and #25 are white makes their success more remarkable, and #13's less so. This explains why his residual is so much smaller. We removed the four outliers and built a new model.
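As a rough sketch of this screening-and-refitting step, here is one way to do it, continuing the hypothetical snippet above. Pearson residuals are shown as a stand-in for the standardized residuals in the tables (those additionally adjust for leverage).

```python
# A rough sketch, continuing the hypothetical snippet above. Pearson
# residuals stand in for the standardized residuals reported in the
# tables, which additionally adjust for leverage.
import numpy as np
from scipy import stats

resid = model.resid_pearson
flagged = np.abs(resid) > 2                  # cases the model fits badly
print(df[flagged])

# Refit without the flagged cases, then run the "all slopes are zero"
# test: G = 2 * (full log-likelihood - intercept-only log-likelihood),
# compared to a chi-square distribution with 2 degrees of freedom.
keep = ~flagged
full = sm.GLM(endog[keep], exog[keep], family=sm.families.Binomial()).fit()
null = sm.GLM(endog[keep], np.ones((keep.sum(), 1)),
              family=sm.families.Binomial()).fit()
G = 2 * (full.llf - null.llf)
print(f"G(2) = {G:.3f}, p = {stats.chi2.sf(G, df=2):.4f}")
```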

The Hosmer-Lemeshow test was now far from significance, chisq(7) = 6.7333, p = 0.457, indicating that the logistic regression model fits well. The test that all slopes are zero was highly significant, G(2) = 16.867, p < 0.001, indicating that the results observed were highly unlikely under the null hypothesis that program acceptance was unrelated to both race and gender. The Z test for the gender variable was significant at the .05 level, Z = 2.08, p = .038. The Z test for the race variable was highly significant, Z = 3.61, p < .001. In other words, the observed associations of race and gender with admission would be very unlikely if no such relationships existed in the population of individuals applying to philosophy PhD programs. We obtained the following regression equation:

Y = -1.44 + 0.66 * female + 1.56 * minority

This equation predicts the natural log of the odds of a given application being successful, given the gender and race of the applicant. The variable "female" takes a value of 1 for a woman and 0 for a man. The variable "minority" takes a value of 1 for a nonwhite person and 0 for a white person. For example, for a white woman applying to a program, the equation would yield a fitted value of Y = -1.44 + 0.66 * 1 + 1.56 * 0 = -0.78. To interpret this, we take the antilog of the fitted value to get odds of e^-0.78 = 0.4584. This is equivalent to a probability of 0.4584 / (1 + 0.4584) = 0.3143, or a 31% chance of success.

The positive coefficients for the two variables representing nonwhite race and female gender indicate that being female and belonging to a racial minority group are both associated with higher odds of acceptance to a given program. The intercept represents the natural log of the odds of success when both predictors equal zero, i.e. for a white male. In this case, the estimated odds of success are e^-1.44 = 0.2369, and the estimated probability is 0.2369 / (1 + 0.2369) = 0.1915, or 19%. The two coefficients represent the natural log of the multiplicative change in the odds of success associated with being female and nonwhite, respectively. Being female is associated with multiplying the odds of success by e^0.66 = 1.935, whereas being nonwhite is associated with multiplying the odds of success by e^1.56 = 4.759. Unfortunately, this has no straightforward interpretation in terms of probability: the change in probability depends on what the probability was in the first place.
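If you want to check the arithmetic yourself, here is a tiny sketch that converts the fitted log odds into a probability for each gender-by-race combination, using the rounded coefficients from the equation above:

```python
# A worked check of the arithmetic above, using the rounded coefficients
# from the regression equation in the post.
import math

b0, b_female, b_minority = -1.44, 0.66, 1.56

def p_accept(female, minority):
    log_odds = b0 + b_female * female + b_minority * minority
    odds = math.exp(log_odds)
    return odds / (1 + odds)          # convert odds to probability

for female in (0, 1):
    for minority in (0, 1):
        print(female, minority, round(p_accept(female, minority), 2))
# Prints roughly 0.19 (white men), 0.31 (white women), 0.53 (nonwhite men),
# and 0.69 (nonwhite women), matching the predicted rates in the table
# below up to rounding of the coefficients.
```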

We could use the regression equation to generate estimates of the probability of a successful application in each of our groups, but it makes more sense to use the observed probability in each group as our estimate. We really just built the logistic regression model so we could test the hypothesis that the group differences appeared by chance, and were able to reject that hypothesis. Because there are only six minority students total, and only two of them are female, the observed acceptance rate in each of these categories should probably not be considered representative of anything. By collapsing them into one group, we get a somewhat more reliable figure which should still be taken with a grain of salt. Here are the observed frequencies in each of these three groups:

| Group           | Students | Actual applications | Actual acceptances | Actual rate | Predicted rate |
|-----------------|----------|---------------------|--------------------|-------------|----------------|
| White men       | 33       | 286                 | 66                 | 0.23        | 0.19           |
| White women     | 10       | 67                  | 25                 | 0.39        | 0.31           |
| Nonwhite (all)  | 6        | 25                  | 14                 | 0.56        |                |
| Nonwhite men    | 4        |                     |                    |             | 0.53           |
| Nonwhite women  | 2        |                     |                    |             | 0.68           |
| Total           | 49       | 378                 | 105                | 0.28        |                |

The difference in acceptance rates among these three groups is large enough to have practical significance. All other things being equal, it seems like the nonwhite students who apply have a better chance of being accepted to a given program than white women, who in turn have better odds than white men. It would be tempting to compare the observed frequencies to the predicted ones as a test of how well our model works, but “predicting” the same cases we used to build our model doesn’t really mean anything.

The end result here is that being female and being an underrepresented minority are each associated with an increase in the odds of being admitted. However, this is a relatively small sample: only 49 people, only 12 women, only 6 minorities. As I discussed here, we have some good reasons to believe that this data might not be representative of the average student applying to philosophy programs. Further evidence of that: white males in this sample have a 19% chance of being accepted per application, which is roughly what you would expect if each spot drew only about five comparable applicants. We already know that any white male is competing with far more applicants than that for each spot, so we're clearly overstating the success rate here.

The success of women and minorities might also be explained by other factors not covered here: their GREs, GPAs, publication records, honors, recommendations, statements of purpose, and writing samples. Some of these factors are quantifiable (GPA, GRE), and we'd like to look a little deeper at those over the next few days. Some are not (statement of purpose, writing sample), although they are perhaps where the most meaningful differences lie.

I hope that no one will see these results and think that the higher rates of acceptance for minority and female students mean they're getting something they didn't earn. It is well documented that philosophy has a women problem. It has an even bigger minority problem (although it seems like that conversation is rarely heard, perhaps because there are so few voices having it). The discipline is well served if admissions committees are reviewing every application from female and minority applicants and making sure the best candidates are offered admission. After looking at some of the most successful candidates, it seems like the best people are the ones getting multiple offers, regardless of gender or race.

Some notes about the data set

After spending some time with the data, I'm starting to realize that it has a number of limitations. There are 100 lines in the data set, but unfortunately there are not 100 data points. There were 5 people who entered the survey but exited after the first question. This is not a large data set, and we have no way of knowing if it is representative of the population of graduate philosophy applicants (we in fact have some excellent reasons to believe that it is not: the response bias inherent in online surveys, and the fact that TGC attracts… a certain crowd).

There were several 'summary' questions added to the survey after it had opened. These questions asked how many programs in the top 20, top 50, and T-7 (for Master's programs) each applicant had applied and been accepted to. Some people used these questions in addition to reporting their school-by-school results, but some only reported their results in these questions. And because the questions were added late, some people did not have an opportunity to use them at all.

This poses a few problems for anyone interested in analyzing this data. Basically, there are two sources of data in the spreadsheet: the summary questions and the school-by-school reporting. They report related, but not identical, pieces of information. One or both is missing or incomplete for almost every person who took the survey. It might be possible to reconstruct the answers to the summary questions from the school-by-school data, but it would be time consuming, to say the least. The answers to the summary questions are also of limited use for continental students; many top continental schools are unranked or ranked low on the PGR.

The method I used to count acceptances, rejections, and waitlists in the post about traditions only captured the data contained in the school-by-school reporting. I want to be honest and forthcoming about that method and the flaws it has as a result. The people I identified as 'high achievers' were also only those captured by the school-by-school reporting (although since there was no statistical analysis, the observations I made are still valid; they just don't capture all the people who fit into that category. Consider it a limited sample). I only became aware of this problem because someone recognized that they hadn't been included in that post, although they had been extremely successful this application season.

All of this is to say that while I’ve found it interesting to look at the data more closely, it should all be taken with a grain of salt. The data has its limitations, just as I have mine (as I’ve said before, I am not a mathematician).

I am still very grateful that Ian was willing to take this on, and I think he did a great job. But if someone were to take up this mantle next year, I hope they would look to design a survey that could avoid these issues.

Philosophy Admissions Survey: High Achievers

We've all been warned that graduate admissions in philosophy are among the most competitive in academia: students apply widely and are lucky to be admitted to one or two programs. But we've also seen people who have been enormously successful, and I hope I'm not the only one who has wondered what made them so successful. The data we have now may or may not give us a complete view, but I wanted to look at the people who were the most prolific in their admissions. Despite their successes, no one who applied to more than one school had a 100% acceptance rate, but everyone I'm looking at here was admitted to at least 5 schools, often near the top of the PGR.

There are 7 people who meet this criterion:

| Applicant | # of admission offers | Gender | Minority status | Tradition |
|-----------|-----------------------|--------|-----------------|-----------|
| A         | 5                     | Male   | N               | C         |
| B         | 5                     | Female | N               | A         |
| C         | 5                     | Female | N               | A         |
| D         | 6                     | Male   | Y               | C         |
| E         | 7                     | Male   | N               | A         |
| F         | 8                     | Female | N               | A         |
| G         | 14                    | Male   | N               | A         |

Undergraduate Career:

All of these applicants went to a selective or moderately selective undergraduate institution. All had high overall GPAs (the lowest was in the 3.60-3.79 bracket, applicant B, and only one other fell outside the 3.90-4.00 bracket, applicant G), as well as strong philosophy GPAs. All were philosophy majors, and 5 had additional majors or minors. 5 of the 7 were applying as seniors in undergrad (or had completed their degree within the past 6 months). Only applicant C had a Master's degree, which was in philosophy. Applicant E had completed his degree within the past year or two, but had not spent any time away from the academic study of philosophy.

Applications:

All had high GRE verbal scores: six scored at about the 95th percentile, and one above the 90th. The quantitative and writing scores were more scattered. Applicant A had the lowest quantitative score, in the 50th-59th percentile, and the best quantitative scores were in the 90th-95th percentile (scored by applicants B and G). Writing scores were mostly in the 90th-95th or 95th-and-up categories; applicant B scored in the 70th-79th percentile.

Not a single applicant had a publication. Only two (B and G) had somewhat relevant work experience; the rest had no work experience.

All applicants had letters of recommendation predominantly from philosophy professors, including endowed chairs and tenured professors. Applicants B, D, E, and G all had two letters from endowed chairs of philosophy. Every applicant had at least one letter from a tenured professor; applicant E had a high of 4. Applicants A, B, C, and F had one or two letters from tenure-track faculty. No one had letters from adjunct faculty. Applicants G and E had letters from non-philosophers. Applicants A, C, and F had only 3 letter writers in total; the other applicants mostly had 5 letter writers, although applicant E had 7. It is unclear whether the applicants with more than 3 letter writers submitted more than 3 letters with each application, or whether they only used some of their letter writers for some of their schools.

Most applicants listed two or three areas of philosophical interest in their statements of purpose. Those areas are:

19th Century Continental Philosophy; 20th Century Continental Philosophy; Philosophy of Art; Decision, Rational Choice and Game Theory; Philosophical Logic; Political Philosophy; Ethics; Applied Ethics; Philosophy of Race; Ancient Philosophy; Early Modern Philosophy: 17th Century; Metaethics (metaphysics, epistemology & semantics of morality); Philosophy of Language; and History of Analytic Philosophy (incl. Wittgenstein). Ethics and political philosophy appeared three times each; philosophical logic appeared twice, but there was no other overlap of interests.

Writing samples generally covered one or more of the interests from the statement of purpose. The only exception was applicant G, whose writing sample was not on any of his stated interests.

Each applicant submitted a large number of applications, the fewest being 9 and the most being 18. Most applied across the PGR, often including unranked programs; this is especially true of the two continental candidates, A and D.

Here are their overall results:

| Applicant | Admitted | Denied | Waitlisted | Admitted or waitlisted in PGR top 20 | Admitted or waitlisted in PGR 21-50 |
|-----------|----------|--------|------------|--------------------------------------|-------------------------------------|
| A         | 5        | 5      | 2          | 3                                    | 4                                   |
| B         | 5        | 11     | 0          | 3                                    | 1                                   |
| C         | 5        | 4      | 0          | 3                                    | 0                                   |
| D         | 6        | 2      | 1          | 1                                    | 0                                   |
| E         | 7        | 1      | 3          | 9                                    | 1                                   |
| F         | 8        | 3      | 0          | 0                                    | 4                                   |
| G         | 14       | 3      | 1          | 13                                   | 2                                   |

Since there were relatively few international programs, I used the US PGR for the top 20 and 21-50, and simply assigned the rank from the global list to any international programs. (This was largely due to laziness on my part, but it wouldn’t really change the results too much).

After spending some time looking at this, I'm a little surprised. I'm sure every one of these applicants deserved their successes, but there's nothing here that stands out as the reason why. Each candidate has an obviously strong profile, but there are many people with equally strong profiles who were shut out, or at least did not have their pick of programs. This seems to support what many people believe: high GREs and GPAs get your profile looked at, but it's the letters, statement of purpose, and writing sample that get you admitted.

Philosophy Admissions Survey: Traditions

Thank you to Ian for sharing the data from the philosophy admissions survey! All the data came from here: http://faircloudblog.wordpress.com/philosophy-admissions-survey/

Edit 7/27: Please read here: https://wordpress.com/read/post/id/71818922/24/  about some of the issues with the data set generally, and the conclusions of this post. I’m leaving this post as is for now, but there might be an update in the future. 

First, I wanted to see how many people had been successful. I calculated a 'success rate' by finding the number of denials, acceptances, waitlists, and acceptances to other programs for each profile (edited to add: this ended up not working as well as I thought it originally had. I revised substantially in light of this). I then calculated the percentage of admittances out of total notifications for each profile*, and found the mean, for an average acceptance rate of 16.09%.
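For anyone who wants to replicate this, here is a minimal sketch of the calculation in Python; "survey.csv" and its column names are invented stand-ins for the actual spreadsheet:

```python
# A minimal sketch of the per-profile success-rate calculation described
# above. "survey.csv" and its column names are hypothetical stand-ins
# for the actual survey spreadsheet.
import pandas as pd

df = pd.read_csv("survey.csv")

# Total notifications per profile: denials, acceptances, waitlists, and
# acceptances to other programs.
notifications = df[["acceptances", "denials", "waitlists",
                    "other_acceptances"]].sum(axis=1)
df["success_rate"] = 100 * df["acceptances"] / notifications

print(df["success_rate"].mean())    # mean per-profile rate (16.09% in the post)
print(df["success_rate"].median())  # median (0% in this data set)
```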

This is high, all things considered. I think we all know the rate is much, much lower at top departments (Michigan said they had ~250 applicants for ~4 spots this year, for example). This figure includes departments both on and off the PGR. The median and the mode were 0%.

When I looked a little closer, it seemed like continental students were having slightly more success. Ian helpfully included a question about the tradition people were working in. The answer options were analytic, continental, "no such commitment is reflected", and of course the ever-popular leaving it blank. I broke the data down into three categories (combining blank with no commitment) to get these results:

| N  | Tradition                | Average success rate | Std. deviation |
|----|--------------------------|----------------------|----------------|
| 67 | Analytic                 | 16.89%               | 28.71          |
| 14 | Continental              | 24.80%               | 23.27          |
| 19 | "No such..."/none listed | 9.01%                | 19.97          |

So it does appear that continental students do better. However, a t-test showed the difference between analytic and continental acceptance rates is not statistically significant (p = .3010). The difference between continental and 'no such'/none was statistically significant (p = .0445),** although the difference between 'no such'/none and analytic was not (p = .3050). So, if you're choosing between continental and not taking a stand, perhaps you should go with continental!
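Here is a sketch of that pairwise comparison, reusing the hypothetical data frame from the snippet above and an assumed "tradition" column. The exact t-test variant used isn't specified above; Welch's, which doesn't assume equal variances, is shown:

```python
# A sketch of the pairwise comparison described above, reusing the
# hypothetical data frame from the earlier snippet. A "tradition" column
# is assumed; Welch's t-test (unequal variances) is shown, since the
# exact variant used in the post is not stated.
from scipy import stats

analytic = df.loc[df["tradition"] == "Analytic", "success_rate"]
continental = df.loc[df["tradition"] == "Continental", "success_rate"]

t, p = stats.ttest_ind(analytic, continental, equal_var=False)
print(f"t = {t:.3f}, p = {p:.4f}")
```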

I also calculated a 'positive outcomes' category based on acceptances, waitlists, and acceptances to alternatives. Since we started the survey before April 15th, there were likely some people who ended up getting off those waitlists. The average positive-outcomes rate was 24.32%.

So, some interesting things: 0% was again both the median and the mode, but multiple people had a 100% positive-outcomes rate. This does make me wonder how complete the data is, and whether people left off rejections for the sake of time or simplicity.

| Tradition                | Average positive-outcomes rate | Std. deviation |
|--------------------------|--------------------------------|----------------|
| Analytic                 | 24.51%                         | 35.60          |
| Continental              | 38.56%                         | 30.48          |
| "No such..."/none listed | 13.17%                         | 26.72          |

Again, continental candidates outperform analytic candidates by quite a bit. A quick t-test between the two does not show statistical significance, however (p = .1735).
If you have any questions or comments on the procedures I used, let me know! I’m a philosopher, not a statistician, so I likely made some mistakes here. I hope to look at a few other things, including gender breakdowns and more about what makes a ‘strong’ candidate, over the next few days!

*As Ian noted, a number of people who filled out the survey did not include any data about their results (i.e., they filled out the demographics questions but didn't say whether they were admitted or rejected). I left them in initially, although I might redo the analysis without them.

**When my spouse (who has a strong statistics background for social science research) looked at this, he said that because I'm comparing 3 groups across 3 tests, the threshold for statistical significance drops to one third of .05, or about .0167 (a Bonferroni correction). Therefore, this might not be a statistically significant difference. We did a one-way ANOVA, which gave a p-value of .118 (not statistically significant). I'm hoping he'll explain more about this to me so I can update/clarify this post further.
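A sketch of both steps in this footnote, again reusing the hypothetical data frame from the earlier snippets (the group labels are assumed):

```python
# A sketch of the footnote's two steps: a Bonferroni-adjusted significance
# threshold for three pairwise tests, and a one-way ANOVA across all three
# groups at once. Reuses the hypothetical data frame from earlier snippets;
# the tradition labels below are assumed.
from scipy import stats

alpha_adjusted = 0.05 / 3    # about .0167

groups = [df.loc[df["tradition"] == label, "success_rate"]
          for label in ("Analytic", "Continental", "None listed")]
F, p = stats.f_oneway(*groups)

print(f"Bonferroni-adjusted threshold: {alpha_adjusted:.4f}")
print(f"One-way ANOVA: F = {F:.3f}, p = {p:.4f}")
```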