Philosophy Admissions Survey: A deeper look at gender and minority status

 

Our original post about gender and minority status showed that minority students and women had higher rates of success. That analysis was based on 49 cases and looked at admissions to any program, not just the PGR top 50. A larger number of participants reported the results of their applications to PGR top-50 programs, but these were recorded in such a way that we had to code them manually before we could analyze them. Once we did, we were able to expand the analysis. The new models reported here, which are based on more observations, paint a different picture of the effect of minority status, at least.

————————————–

We wanted to know if race adds anything to the prediction of PhD program admission success. To do this, we compared a model using only the three predictors we had previously selected to one that also included minority status. If the model that includes minority status gives a significantly better prediction than the one that doesn't, this would tell us that PhD program admission rates differ between white and minority students with the same gender, undergrad GPA, and verbal GRE score. Four of the participants whose answers we used to build our three-predictor model didn't report minority status (cases 43, 64, 65, and 81). Obviously, we could only include cases that reported minority status when building the model that contained this variable, so we used the remaining cases. Since we can only meaningfully compare two models built on the same dataset, we also had to build a new three-predictor model using these cases. We created a dataset without these four cases, built the two models, and compared them using an analysis of deviance.

Here are the coefficients, with Wald Z statistics and p values, for the three-predictor model constructed using the new dataset:

 

Coefficients:
                   Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)       -17.72928     3.39103   -5.228  1.71e-07 ***
gender              0.32076     0.16915    1.896  0.057925 .
gpa                 1.45217     0.42464    3.420  0.000627 ***
gre_verbal_round    0.11182     0.03234    3.457  0.000545 ***


These are very close to the results we got when we used the full data set. There is still no evidence for lack of fit, chisq(70) = 66.46273, p = 0.5977475. The deviance test comparing this model to the null model is still strongly significant, chisq(3) = 43.52594, p = 1.902966e-09. Gender is now (just barely) not significant, but as discussed earlier, the Wald Z test used to measure the contribution of each predictor to the strength of prediction is not that sensitive. The important thing to note is that the coefficient estimates hardly changed at all, suggesting that the relationships we observed among the predictors and the outcome are fairly reliable, and don't depend much on which cases are included.

Here's our new regression equation: Y = -17.72928 + gender*0.32076 + gpa*1.45217 + gre_verbal*0.11182

We also built a model that included minority status. Here is a summary of the results:

                   Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)        -17.3338      3.4596   -5.010  5.43e-07 ***
gender               0.3356      0.1703    1.971  0.048737 *
gpa                  1.3977      0.4300    3.250  0.001154 **
gre_verbal_round     0.1101      0.0326    3.378  0.000731 ***
minority            -0.1725      0.2288   -0.754  0.451007

Other than the inclusion of minority status as a predictor, this model is pretty similar to the three-predictor model. The relationships we observed among the other three predictors and the outcome don't change very much when minority status is accounted for. The Wald test for the minority status variable is nowhere near significant. There is no evidence for lack of fit with this model, chisq(69) = 65.88326, p = 0.5841074. It definitely performs better than the null model, chisq(4) = 44.10541, p = 6.100333e-09. The p value is slightly higher than for the three-predictor model because there is one more free parameter (that's why there are 4 degrees of freedom for the chi square test instead of 3) and not much of a decrease in deviance (the value of chi squared is similar).

We compared the two models by testing the difference in residual deviance (the amount of variation in the outcome that each model fails to explain) using a chi square test, chisq(1) = 0.57947, p = 0.4465201. Adding another predictor to the model will always decrease the residual deviance a little bit, but the difference in this case was small and not statistically significant. To give you an idea of how small, the residual deviance of the simpler model was 66.463, and the difference was 0.57947. This indicates that the small improvement in prediction gained by adding minority status to the model probably occurred by chance. There is no evidence for a relationship between minority status and the odds of admission when controlling for gender, undergrad GPA and verbal GRE score.
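The deviance-difference test here has one degree of freedom, so its p value can be sanity-checked in a few lines of Python (a sketch, not the original analysis, which was done in R), using the fact that for 1 df, P(χ² > x) = erfc(√(x/2)):

```python
import math

def chi2_sf_1df(x):
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(x / 2.0))

# Residual deviances of the two models, from the text above:
# three-predictor model 66.46273, four-predictor (minority) model 65.88326.
delta_dev = 66.46273 - 65.88326   # = 0.57947
p = chi2_sf_1df(delta_dev)        # ≈ 0.4465, matching the reported p value
```

The same one-liner works for any nested-model comparison that differs by a single parameter.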

What’s especially interesting about this is that there is a relationship between minority status and program admissions when gender, undergrad GPA and verbal GRE score are not controlled for. We can demonstrate this by building a model that contains only minority status and comparing it to the no-predictor model. Here’s what that model would look like:

               Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)    -1.10192     0.08141  -13.535    <2e-16 ***
minority       -0.45622     0.21084   -2.164    0.0305 *

The Wald statistic for minority status is significant, and so is the overall model, chisq(1) = 5.002334, p = 0.02531316. Unfortunately, the deviance test detected a model violation, chisq(73) = 105.6568, p = 0.007471957. Nine outliers were present (cases 37, 14, 38, 33, 53, 10, 23, 18, and 44). I wasn't comfortable deleting that many observations, so instead I tested the marginal relationship between minority status and admission success using Pearson's chi square test of association. Nonminority applicants succeeded in 201 out of 602 applications (33.4%), whereas minority students succeeded in 32 out of 152 applications (21.1%). Being white was associated with a significantly higher rate of success, chisq(1) = 4.7407, p = 0.0295. However, we already know that this effect shrinks markedly and is no longer statistically significant once gender, undergrad GPA, and verbal GRE score are modeled.

Essentially, the decreased rates of admission for minority students are explained by their GPAs, verbal GRE scores, and gender. Since higher GPA and GRE scores have positive effects on admissions, the minority students who applied to PhD programs and submitted their results had weaker applications on these quantitative measures. There are many possible explanations for why this may be the case, and again, these results are based only on the data we have. When we looked at a smaller subset of students and expanded our measure of success to include admissions outside of the PGR top 50, we found that minority students had a higher likelihood of success.

 

We also wanted to know if gender had an effect on graduate school admissions, so we built a model that didn't include gender and compared it to the original three-predictor model. Here's what our new two-predictor model looks like:

                   Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)       -17.77351     3.03041   -5.865  4.49e-09 ***
gpa                 1.69816     0.40448    4.198  2.69e-05 ***
gre_verbal_round    0.10356     0.02781    3.724  0.000196 ***

The two-predictor model predicts significantly better than the null model, chisq(2) = 54.47855, p = 1.479562e-12, with no evidence for violation of the logit model according to the chi square deviance test, chisq(74) = 73.89527, p = 0.4487002. Now let’s compare this model to the original three-predictor model. Once again, we are testing the difference in deviance between the two models against the null hypothesis that gender is unrelated to application success in the population when undergrad gpa and verbal GRE are controlled for. We know that the deviance of the more complex model will be somewhat lower, but we want to see how much lower, and test this difference for significance using a chi square test.

Model 1: (success, applied) ~ gpa + gre_verbal_round

Model 2: (success, applied) ~ gender + gpa + gre_verbal_round

 

 

    Resid. Df  Resid. Dev  Df  Deviance
1          74      73.895
2          73      69.438   1     4.457


The chi square test for the difference in deviance between the two models is significant, chisq(1) = 4.457, p = 0.03475849. In other words, there is a statistically significant difference between the predictive power of the model that includes gender and the one that doesn't. Female applicants have greater odds of being admitted to a top-50 program than male applicants with the same undergraduate GPA and verbal GRE score. How much greater? We can get the ratio of female to male odds by antilogging the coefficient for gender, e^0.32076 = 1.378, meaning that female applicants have about 37.8% higher odds of succeeding with each application.

How much greater is the probability of success for a female applicant? That depends on the values of the other predictors. Let's look at the modal values for undergrad GPA and verbal GRE score in our sample. For GPA, the most common answer is 4, which actually denotes grade point averages ranging from 3.9 to 4. For verbal GRE, the most common answer is 99, which actually denotes a range of percentile scores from 96 to 99.

For a typical (in our sample, but probably not anywhere else) male candidate, the estimated log odds would be -17.72928 + 4*1.45217 + 99*0.11182 = -0.85042, corresponding to odds of e^-0.85042 = 0.4272, or a probability of success for each application of 0.4272/(0.4272 + 1) = 0.299 (29.9%). The 13 candidates who actually have this combination of predictor values submitted 123 applications, of which 45 were successful, for a rate of 36.6%. So far our model looks pretty good. For a female candidate with the same GPA and verbal GRE, the estimated log odds would be -17.72928 + 0.32076 + 4*1.45217 + 99*0.11182 = -0.52966, corresponding to odds of e^-0.52966 = 0.58881, or a probability of success for each application of 0.58881/(0.58881 + 1) = 0.3706 (37.1%). The five candidates who actually have this combination of predictor values submitted 40 applications, of which 24 were successful, for a rate of 60%. This is noticeably higher than our model predicts.


Philosophy Admissions Survey: What determines success?

So far, we’ve looked at the effects of tradition, gender, and minority status on admissions, as well as some of the most successful candidates. But what are the factors that most determine success for any given candidate? How much weight does each factor carry? What are the best ways to improve one’s applications, in order to increase the odds of success? This post will hopefully answer a few of those questions. The analysis here is again completed by my spouse (this time in R, if anyone was curious).

———————

We wanted to determine what factors are associated with admission to top-50 ranked philosophy PhD programs. We had data from 95 individuals who had submitted a total of 804 applications to philosophy PhD programs. We used the results of the philosophy admissions survey, extracting information about each candidate and coding it into variables for the purpose of building a statistical model.

In the end, the models here were built on 84 individuals. The total acceptances, wait-lists, and rejections were determined from the summary question at the end of the survey, for both PGR top-20 and 21-50 programs. 32 individuals took the survey before those questions were added; their results were coded by hand.

84 responses is a large enough sample to support some interesting conclusions about the population. However, the sample size does limit the number of variables that can be considered at one time. If we had tried to use the information from every question, it would have been impossible to distinguish the variables that were making a difference in admission rates from those that were not. It would also have reduced the sample size, since some people left at least a few questions blank. We had to eliminate some questions in order to get a better picture of what was making a difference. The variables considered are: gender, minority status, teaching/work experience, publications, graduate degrees in philosophy, undergraduate institution selectivity, GRE scores (all three sections), undergraduate overall GPA, and undergraduate major GPA.

The following variables can take values of zero or one:

gender: female (1) or male (0)

minority: participant is a minority (1) or not (0)

experience: participant has (1) or does not have (0) teaching experience

philgrad: participant has (1) or does not have (0) a graduate degree (generally a masters degree) in philosophy.

published: participant’s work has (1) or has not (0) been published in an academic philosophy journal of any kind

For some survey questions whose answers were of interest for this analysis, the survey asked participants to indicate which of several ranges they fell into. For example, does the participant’s undergraduate institution admit 0-25%, 26-50%, 51-75%, or 76-100% of candidates? We recoded these ordinal variables (meaning they consisted of ordered categories) into continuous variables (numbers that can take any value in a certain range) by using the value at the top of the range selected by the participant (e.g. 51-75% becomes 75%).
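The recode described above is mechanical; here is a minimal sketch in Python (the helper name is ours, purely for illustration) that replaces each range answer with the top of its range:

```python
# Hypothetical helper sketching the ordinal-to-continuous recode:
# each range answer is replaced by the upper end of the range.
def top_of_range(answer):
    """Map a range answer like '51-75%' to the number 75.0."""
    high = answer.rstrip("%").split("-")[1]
    return float(high)

top_of_range("51-75%")    # 75.0
top_of_range("76-100%")   # 100.0
```

The same helper works for the GRE percentile and GPA range questions, since all of them were reported as ordered intervals.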

There may be meaningful differences between participants from schools with respective admission rates of 51% and 75%. Unfortunately, this information is not in our model. This sort of procedure is generally frowned upon in the statistics world because ordinal data behaves differently than continuous data. We did this because continuous predictors require one parameter each, while ordinal predictors require one fewer than the number of levels. For example, an ordinal variable with four possible values (e.g. 0-25%, 26-50%, 51-75%, or 76-100%) would require three parameters. In regression models, more parameters means more things have to be estimated, and there are more chances to be wrong. In general, simpler models work better. We already had too many variables and not enough data, so we made this compromise. We treated the following variables in this fashion:

selectivity: percentage of applicants admitted at participant’s undergraduate institution

gre_verbal: verbal GRE percentile

gre_quant: quantitative GRE percentile

gre_writing: writing GRE percentile

gpa: overall undergraduate GPA

majgpa: undergraduate GPA for classes in participant’s major (usually, but not always, philosophy)

The online survey also asked participants which of three orientations (analytic, continental, none) they indicated in their applications. We turned this into two dichotomous variables:

analytic: participant's application indicated an analytic orientation (1) or did not (0)

continental: participant's application indicated a continental orientation (1) or did not (0)

Participants who selected "none" have a value of 0 for both of these variables.
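A minimal sketch of this dummy coding (the function name is ours, purely for illustration), with "none" as the reference level scoring 0 on both indicators:

```python
# Hypothetical sketch of the two-indicator coding described above.
def orientation_dummies(orientation):
    """Code a 3-level orientation answer as two 0/1 indicators."""
    analytic = 1 if orientation == "analytic" else 0
    continental = 1 if orientation == "continental" else 0
    return analytic, continental

orientation_dummies("analytic")     # (1, 0)
orientation_dummies("continental")  # (0, 1)
orientation_dummies("none")         # (0, 0)
```

Using two indicators for a three-level category is the standard trick: the reference level ("none") is absorbed into the intercept.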

For each individual, we knew the number of programs applied to and the number of successful applications (those which resulted in acceptance or being placed on the wait-list). Because we were modeling a binary outcome (you get into a program or you don’t), we used logistic regression, a form of statistical modeling which takes this into account.

First, we tried building our model with all our predictors on our entire data set. Below is a summary of this model. "Estimate" refers to the estimated coefficient in the logistic regression model, equal to the change in the natural log of the odds associated with a one-unit increase in the value of the predictor. Since the only possible values for some of these predictors are 0 and 1 (gender, for example), in those cases it refers to the difference in the natural log of the odds between the two levels of the variable. The important thing here is that positive coefficients mean an increase in the predictor (or a value of 1) is associated with an increase in the odds of application success. Std. Error is a measure of how reliable the estimate of the coefficient is: a high standard error means the true value could be very different from the one shown. This doesn't really matter for our purposes. z value and Pr(>|z|) refer to a Wald test, which compares the model we built to one that includes all the variables except the one being tested. A high value of z, which corresponds to a low value of p, means that the variable is contributing quite a bit to the predictive power of the model. A period or one or more asterisks to the right of the numbers indicates the level of statistical significance, as shown in the "Signif. codes" line below. If none of these are present, the contribution is not statistically significant.

Coefficients:

              Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)  -18.8         6.34       -2.97  0.00302 **
selectivity   -0.000827    0.00407    -0.20  0.84
minority      -0.487       0.324      -1.50  0.13
gender         0.155       0.223       0.70  0.49
gpa            0.753       0.733       1.03  0.30
majgpa         1.24        1.43        0.87  0.38
experience    -0.0292      0.243      -0.12  0.90
analytic      -0.141       0.443      -0.32  0.75
continental   -0.313       0.506      -0.62  0.54
philgrad      -0.100       0.232      -0.43  0.67
published     -0.0568      0.320      -0.18  0.86
gre_verbal     0.0968      0.0486      1.99  0.04629 *
gre_quant      0.00766     0.00789     0.97  0.33
gre_writing    0.000172    0.00902     0.02  0.98

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
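To make the coefficient interpretation concrete, here is a quick sketch (in Python, rather than the R used for the analysis) converting the gre_verbal estimate from the table above into an odds multiplier:

```python
import math

# exp(coefficient) gives the factor by which the odds of success are
# multiplied for a one-unit increase in the predictor.
coef_gre_verbal = 0.0968                        # estimate from the table above
odds_multiplier = math.exp(coef_gre_verbal)     # ≈ 1.102
percent_increase = (odds_multiplier - 1) * 100  # ≈ 10.2% higher odds per percentile point
```

So each additional verbal GRE percentile point is associated with roughly 10% higher odds of a successful application, holding the other predictors fixed.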

At this point, only one predictor (verbal GRE score) was significantly contributing to the prediction of the model, with a positive coefficient indicating that higher GRE scores are associated with higher odds of a successful application. I thought maybe there were a few unusual observations that were unduly influencing the model fit, so I examined the residuals, and found one outlier (participant #38, with a residual of -2.4754). I removed the outlier and tried building the model again, but got pretty much the same thing. I thought that maybe having too many predictors was introducing too much “noise,” so I tried cutting four that looked like obvious duds (selectivity, major gpa, teaching experience, gre writing score) and rebuilding the model using the full data set.


Now the Wald tests for gender, undergrad GPA, and verbal GRE score were all significant. I tried building a model using only these three predictors.

Coefficients:

             Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)    -17.74        3.03    -5.85  4.80e-09 ***
gender           0.35        0.17     2.13  0.033047 *
gpa              1.65        0.41     4.08  4.47e-05 ***
gre_verbal       0.10        0.03     3.72  0.000201 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

All 3 variables had significant Z tests.

Next, I performed a deviance chi square test to make sure the logistic regression model fit the data. The logistic regression model assumes a linear relationship between the predictors and the natural log of the odds of the outcome (application success), with random errors following the binomial distribution. This test compares the distribution of residuals (the differences between our predictions and the results) to what would be expected under this distribution, generating a chi squared statistic. A bigger difference means a higher value of chi squared and a lower p value. Traditionally, .05 is used as the cutoff for evidence of a violation of the logistic regression model. If a poor model fit (indicating violation of assumptions) were detected, that would invalidate our results. Incidentally, the chi square deviance test should only be used when there are multiple results for each combination of predictors (in this case, each applicant); otherwise, its results are not valid. With chisq(73) = 69.43829, p = 0.5964882, there was no evidence for lack of fit. The logistic regression model is appropriate.
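For readers without statistical software handy, a chi-square p value like this one can be approximated in a few lines using the Wilson-Hilferty cube-root normal approximation (our sketch, not the method used in the original analysis, which relied on R's distribution functions):

```python
import math

def chi2_sf_wilson_hilferty(x, df):
    """Approximate P(chi2_df > x) using the Wilson-Hilferty
    cube-root normal approximation to the chi-square distribution."""
    z = ((x / df) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * df))) / math.sqrt(2.0 / (9.0 * df))
    # Standard normal survival function via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# The deviance test from the text: chisq(73) = 69.43829
p = chi2_sf_wilson_hilferty(69.43829, 73)   # ≈ 0.596, close to the reported 0.5965
```

The approximation is quite accurate for moderate-to-large degrees of freedom, which is why it reproduces the reported p value to three decimal places here.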

Next, I performed an overall significance test for the model. I wanted to know if the predictions it generated were significantly better than just guessing. To answer this question, I compared the accuracy of our predictions for our sample to the level of accuracy that we would get by calculating the overall acceptance rate and using that as our guess for every participant’s applications. We calculated the deviance (a measure of the difference between what we observed and what we predicted) for both models, then tested the difference between the two deviance statistics for significance using a chi square test, chisq(3) = 58.93553, p = 9.922735e-13. The chisquare test for the overall model is highly significant, indicating that it performs much better than chance. I checked for outliers and found 2: Participants #23 (residual = 2.32834320) and 38 (residual = -2.54603528). I wanted to make sure that they weren’t having a large influence on the model, which can happen sometimes, so I tried removing them and rebuilding the model:

Coefficients:

             Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)    -17.50        3.03    -5.77  7.95e-09 ***
gender           0.38        0.17     2.29  0.022186 *
gpa              1.67        0.41     4.07  4.65e-05 ***
gre_verbal       0.10        0.03     3.62  0.000299 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The results were pretty much the same. The coefficient estimates and Wald statistics changed very little. There was still no evidence for lack of fit, chisq(71) = 57.40618, p = 0.8782012. The model still performed significantly better than chance, chisq(3) = 57.8712, p = 1.674629e-12. Removing the outliers doesn’t really help, so we can keep them in and use the original 3-predictor model.

The final regression equation is:

y = -17.50348 + gender*0.38489 + gpa*1.67461 + gre_verbal*0.10037

This equation predicts the log of the odds ratio for a given application being successfully admitted or wait-listed to one program, given the gender, overall GPA, and GRE verbal percentile of the applicant. The variable gender takes a value of 1 for a woman and 0 for a man. The GPA is based on a 4.0 scale, and the verbal GRE is the percentile score.

For example, for a male with a GPA of 3.95 and a verbal GRE percentile of 87 applying to a program, the equation would yield a fitted value of y = -17.50348 + 0*0.38489 + 3.95*1.67461 + 87*0.10037 = -2.1565 . To interpret this, we need to antilog the fitted value to get an odds ratio of e^(-2.1565) = 0.1157 . This is equivalent to a probability of 0.1157 / (1 + 0.1157 ) = .1037, or a 10% chance of success.
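The arithmetic above can be checked mechanically; a short sketch in Python:

```python
import math

# Plugging the worked example into the final regression equation:
# gender = 0 (male), gpa = 3.95, verbal GRE = 87th percentile.
y = -17.50348 + 0 * 0.38489 + 3.95 * 1.67461 + 87 * 0.10037
odds = math.exp(y)            # antilog of the fitted log odds
prob = odds / (1.0 + odds)    # ≈ 0.104, i.e. about a 10% chance of success
```

The same three lines convert any fitted log odds from this model into a probability.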

The intercept in this case gives the log odds when all the predictors are zero, meaning a male candidate with a GPA of 0 who scored in the 0th percentile on the verbal section of the GRE. The model was not built on any individuals in this range (someone with an overall GPA of 0.0 is not graduating college, and not pursuing graduate admissions in philosophy), so a prediction in this range is likely not accurate. Given that almost all the applicants who filled out the survey had GPAs over 3.0 and verbal GRE scores above the 80th percentile, the model should not be used to extrapolate outside of that range.

 

 

Philosophy Admissions Survey: Gender and Minority Status

The analysis and the explanation here are all courtesy of my spouse, who has a statistical background in the social sciences. It's a little… dense. I'll provide some commentary at the bottom. We are both assuming that marking yes for minority status represents an applicant who is nonwhite (Ian, or anyone else who could shed light on this, let me know if that's not the case).

We wanted to see if philosophy PhD program acceptance is related to the gender and race of applicants. We obtained a sample of 49 individuals who had applied to at least one program each and heard back, and were willing to disclose their race, gender, and the responses they obtained. Our sample included 6 minority students, two of whom were women, and ten white women. We lumped all nonwhite people into a single group because there were not enough cases to reliably estimate the acceptance rate for each race individually. Ideally, we would want to test for an interaction between race and gender to see if the effect of race was different for men and women, or, equivalently, the effect of gender was different for white and nonwhite students. However, with only two nonwhite women applying to programs, we could not reliably estimate this effect. Thus, we modeled only the main effects of gender and race, not their interaction. Our model assumes that race has the same effect regardless of gender and vice versa. The number of programs applied to by each individual varied from 1 to 20. We used binary logistic regression to measure the relationships among these variables, as we were modeling a probability (acceptance rate). We were testing the null hypothesis that program acceptance was unrelated to both race and gender, with the alternative hypothesis that acceptance rates were different for different races and/or genders.

We started by building a logistic regression model using all 49 cases. The Hosmer-Lemeshow test for goodness of fit yielded a significant result, chisq(7) = 14.504, p = 0.043, indicating that the assumptions of the logistic regression model were not met. The logistic regression model assumes a linear relationship between the predictors (in this case race and gender) and the natural log of the odds ratio for the outcome (program acceptance) with random errors following the binomial distribution. Thus, we will not even look at the results of this logistic regression, as they are not considered valid. One possible reason for this assumption being violated is the presence of unusual observations for which the model cannot make a good prediction. We detected four outliers:

ID   Minority?  Gender  Total Applications  Total Admissions  Standardized Residual
24   No         Male                    18                14                   5.65
14   No         Male                    11                 7                   3.23
25   No         Female                  11                 8                   2.57
100  No         Male                    20                 0                  -2.55

These included the case with by far the highest number of unsuccessful applications (#100) and the three cases with the highest numbers of admissions. To show how unusual these observations are, we have included histograms of successful and unsuccessful applications.

 

[Histograms: number of successful and number of unsuccessful applications per applicant]

 

As the graphs show, no other applicants have anywhere near 14 successful or 20 unsuccessful applications. Subjects 14 and 25 have unusually high numbers of successful applications, too, but not obviously separated from the rest. It’s worth noting that the next highest number of successful applications (6) was obtained by #13, a minority student.

ID   Minority?  Gender  Total Applications  Total Admissions  Standardized Residual
13   Yes        Male                     9                 6                   1.07

As we will see, white students tend to have somewhat lower acceptance rates than minority students. The fact that #14 and #25 are white makes their success more remarkable, and #13’s less so. This explains why his residual is so much smaller. We removed these cases and built a new model.

The Hosmer-Lemeshow test was now far from significance, chisq(7) = 6.7333, p = 0.457, indicating that the logistic regression model fits well. The test that all slopes are zero was highly significant, G(2) = 16.867, p < 0.001, indicating that the results observed were highly unlikely under the null hypothesis that program acceptance is unrelated to both race and gender. The Z test for the gender variable was significant at the .05 level, Z = 2.08, p = .038. The Z test for the race variable was highly significant, Z = 3.61, p < .001. This indicates that the observed associations of race and gender with admission would be highly unlikely if no such relationships existed in the population of individuals applying to philosophy PhD programs. We obtained the following regression equation:

Y = -1.44 + 0.66 * female + 1.56 * minority

This equation predicts the log of the odds ratio for a given application being successful given the gender and race of the applicant. The variable “female” takes a value of 1 for a woman and 0 for a man. The variable “minority” takes a value of 1 for a nonwhite person and 0 for a white person. For example, for a white woman applying to a program, the equation would yield a fitted value of Y = -1.44 + 0.66 * 1 + 1.56 * 0 = -.78. To interpret this, we need to antilog the fitted value to get an odds ratio of 0.4584. This is equivalent to a probability of 0.4584 / (1 + 0.4584) = .3143, or a 31% chance of success. The positive coefficients for the two variables representing nonwhite race and female gender indicate that being female and belonging to a racial minority group are both associated with higher odds of acceptance to a given program. The intercept represents the natural log of the odds of success when both predictors equal zero, i.e. for a white male. In this case, the estimated odds of success are e^-1.44 = 0.2369, and the estimated probability is 0.2369 / (1 + .2369) = 0.1915 or 19%. The two coefficients represent the natural log of the increase in the odds of success associated with being female and nonwhite, respectively. Being female is associated with multiplying the odds of success by e^.66 = 1.935, whereas being nonwhite is associated with multiplying the odds of success by e^1.56 = 4.759. Unfortunately, this has no straightforward interpretation in terms of probability. The change in probability depends on what the probability was in the first place.
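Each group's predicted rate falls straight out of the regression equation; here is a short sketch in Python (the analysis itself was done with statistical software):

```python
import math

def success_prob(female, minority):
    """Predicted probability from Y = -1.44 + 0.66*female + 1.56*minority."""
    y = -1.44 + 0.66 * female + 1.56 * minority
    odds = math.exp(y)          # antilog the fitted log odds
    return odds / (1.0 + odds)  # convert odds to a probability

success_prob(0, 0)   # white man:      ≈ 0.19
success_prob(1, 0)   # white woman:    ≈ 0.31
success_prob(0, 1)   # nonwhite man:   ≈ 0.53
success_prob(1, 1)   # nonwhite woman: ≈ 0.68
```

These four values are exactly the predicted rates reported for each group later in this post.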

We could use the regression equation to generate estimates of the probability of a successful application in each of our groups, but it makes more sense to use the observed probability in each group as our estimate. We really just built the logistic regression model so we could test the hypothesis that the group differences appeared by chance, and were able to reject that hypothesis. Because there are only six minority students total, and only two of them are female, the observed acceptance rate in each of these categories should probably not be considered representative of anything. By collapsing them into one group, we get a somewhat more reliable figure which should still be taken with a grain of salt. Here are the observed frequencies in each of these three groups:

              Students  Actual applications  Actual acceptances  Actual rate  Predicted rate
white men           33                  286                  66         0.23            0.19
white women         10                   67                  25         0.39            0.31
nonwhite             6                   25                  14         0.56
  male               4                                                                  0.53
  female             2                                                                  0.68
total               49                  378                 105         0.28

The difference in acceptance rates among these three groups is large enough to have practical significance. All other things being equal, it seems like the nonwhite students who apply have a better chance of being accepted to a given program than white women, who in turn have better odds than white men. It would be tempting to compare the observed frequencies to the predicted ones as a test of how well our model works, but “predicting” the same cases we used to build our model doesn’t really mean anything.

The end result here is that being female and being an underrepresented minority is associated with an increase in the odds of being admitted. However, this is a relatively small sample: only 49 people, only 12 women, only 6 minorities. As I discussed here, we have some good reasons to believe that this data might not be representative of the average student applying to philosophy programs. Further evidence of that: white males have a 19% chance of being accepted per application. We already know that any white male is competing with more than 4 other applicants for each spot, so we’re clearly overstating the success rate here.

The success of women and minority students might also be explained by factors not covered here: their GRE scores, GPAs, publication records, honors, recommendations, statements of purpose, and writing samples. Some of these factors are quantifiable (GPA, GRE), and we'd like to look a little deeper at those over the next few days. Some are not (statement of purpose, writing samples), although they are perhaps where the most meaningful differences lie.

I hope that no one will see these results and think that the higher rates of acceptance for minority and female students mean they're getting something they didn't earn. It is well documented that philosophy has a women problem. It has an even bigger minority problem (although that conversation is rarely heard, perhaps because there are so few voices having it). The discipline is well served if admissions committees are reviewing every application from female and minority applicants and making sure the best candidates are offered admission. After looking at some of the most successful candidates, it seems like the best people are the ones getting multiple offers, regardless of gender or race.

Some notes about the data set

After spending some time with the data, I'm starting to realize that it has a number of limitations. There are 100 lines in the data set, but unfortunately there are not 100 data points: 5 people entered the survey but exited after the first question. This is not a large data set, and we have no way of knowing whether it is representative of the population of graduate philosophy applicants (in fact, we have some excellent reasons to believe it is not: the response bias inherent in online surveys, and the fact that TGC attracts … a certain crowd).

There were several 'summary' questions added to the survey after it had opened. These questions asked how many programs in the top 20, top 50, and T-7 (for Masters' programs) each applicant had applied and been accepted to. Some people answered these questions in addition to reporting their school-by-school results, but some only reported their results in these questions. And since the questions were added late, some people did not have an opportunity to answer them at all.

This poses a few problems for anyone interested in analyzing this data. Basically, there are two sources of data in the spreadsheet: the summary questions and the school-by-school reporting. They report related, but not identical, pieces of information, and one or both is missing or incomplete for almost every person who took the survey. It might be possible to reconstruct the answers to the summary questions from the school-by-school data, but it would be time consuming, to say the least. The answers to the summary questions are also of limited use for continental students; many top continental schools are unlisted or ranked low on the PGR.

The method I used to count acceptances, rejections, and waitlists in the post about traditions only captured the data contained in the school-by-school reporting. I want to be honest and forthcoming about that method and the flaws it has as a result. The people I identified as 'high achievers' were likewise only those captured by the school-by-school reporting (although since there was no statistical analysis, the observations I made are still valid; they just don't capture all the people who fit into that category. Consider it a limited sample). I only became aware of this problem because someone recognized that they hadn't been included in that post, despite having been extremely successful this application season.

All of this is to say that while I’ve found it interesting to look at the data more closely, it should all be taken with a grain of salt. The data has its limitations, just as I have mine (as I’ve said before, I am not a mathematician).

I am still very grateful that Ian was willing to take this on, and I think he did a great job. But if someone were to take up this mantle next year, I hope they would look to design a survey that could avoid these issues.

Philosophy Admissions Survey: High Achievers

We've all been warned that graduate admissions in philosophy are among the most competitive in academia, where students apply widely and are lucky to be admitted to one or two programs. But we've also seen people who have been enormously successful, and I hope I'm not the only one who has wondered: what made them so successful? The data we have now may or may not give us a complete view, but I wanted to look at the people with the most prolific admissions records. No one who applied to more than one school had a 100% acceptance rate, but everyone I'm looking at here was admitted to at least 5 schools, often near the top of the PGR.

There are 7 people who meet this criterion.

| Applicant | # of admission offers | Gender | Minority status | Tradition |
|---|---|---|---|---|
| A | 5 | Male | N | C |
| B | 5 | Female | N | A |
| C | 5 | Female | N | A |
| D | 6 | Male | Y | C |
| E | 7 | Male | N | A |
| F | 8 | Female | N | A |
| G | 14 | Male | N | A |

Undergraduate Career:

All of these applicants attended a selective or moderately selective undergraduate institution. All had high overall GPAs (the lowest was in the 3.60-3.79 bracket, applicant B, and only one other fell outside the 3.90-4.00 bracket, applicant G) as well as strong philosophy GPAs. All were philosophy majors, and 5 had additional majors or minors. 5 of the 7 were applying as seniors in undergrad (or had completed their degree within the past 6 months). Only applicant C had a Masters degree, which was in philosophy; applicant E had completed his degree in the past year or two, but had not spent any time away from the academic study of philosophy.

Applications:

All had high verbal GRE scores: 6 were at or around the 95th percentile, and one was above the 90th. The quantitative and writing scores were more scattered. Applicant A had the lowest quantitative score, in the 50th-59th percentile, and the best quantitative scores were in the 90th-95th percentile (applicants B and G). Writing scores were mostly in the 90th-95th or 95th-and-up categories; applicant B scored in the 70th-79th percentile.

Not a single applicant had a publication. Only two (B and G) had somewhat relevant work experience; the rest had no work experience.

All applicants had letters of recommendation predominantly from philosophy professors, including endowed chairs and tenured professors. Applicants B, D, E, and G each had two letters from endowed chairs of philosophy. Every applicant had at least one letter from a tenured professor; applicant E had a high of 4. Applicants A, B, C, and F had one or two letters from tenure-track faculty. No one had letters from adjunct faculty. Applicants G and E had letters from non-philosophers. Applicants A, C, and F had only 3 letter writers in total; the other applicants mostly had 5, although applicant E had 7. It is unclear whether the applicants with more than 3 letter writers submitted more than 3 letters with each application, or only used some of their letter writers for some of their schools.

Most applicants listed two or three areas of philosophical interest in their statements of purpose. Those areas are:

19th Century Continental Philosophy; 20th Century Continental Philosophy; Philosophy of Art; Decision, Rational Choice and Game Theory; Philosophical Logic; Political Philosophy; Ethics; Applied Ethics; Philosophy of Race; Ancient Philosophy; Early Modern Philosophy: 17th Century; Metaethics (metaphysics, epistemology & semantics of morality); Philosophy of Language; and History of Analytic Philosophy (incl. Wittgenstein). Ethics and political philosophy appeared three times each; philosophical logic appeared twice; there was no other overlap of interests.

Writing samples generally covered one or more of the interests from the statement of purpose. The only exception was applicant G, whose writing sample was not on any of his stated interests.

Each applicant submitted a large number of applications, the fewest being 9 and the most 18. Most applied across the PGR, often including unranked programs; this was especially true of the two continental candidates, A and D.

Here are their overall results:

| Applicant | Admitted | Denied | Waitlisted | Admitted/waitlisted, PGR top 20 | Admitted/waitlisted, PGR 21-50 |
|---|---|---|---|---|---|
| A | 5 | 5 | 2 | 3 | 4 |
| B | 5 | 11 | 0 | 3 | 1 |
| C | 5 | 4 | 0 | 3 | 0 |
| D | 6 | 2 | 1 | 1 | 0 |
| E | 7 | 1 | 3 | 9 | 1 |
| F | 8 | 3 | 0 | 0 | 4 |
| G | 14 | 3 | 1 | 13 | 2 |

Since there were relatively few international programs, I used the US PGR for the top-20 and 21-50 groupings, and simply assigned each international program its rank from the global list. (This was largely due to laziness on my part, but it wouldn't really change the results much.)

After spending some time looking at this, I'm a little surprised. I'm sure every one of these applicants deserved their successes, but nothing here stands out as the reason why. Each candidate has an obviously strong profile, but there are many people with equally strong profiles who were shut out, or at least did not have their pick of programs. This seems to support what many people believe: high GREs and GPAs get your profile looked at, but it's the letters, statement of purpose, and writing sample that get you admitted.

Philosophy Admissions Survey: Traditions

Thank you to Ian for sharing the data from the philosophy admissions survey! All the data came from here: http://faircloudblog.wordpress.com/philosophy-admissions-survey/

Edit 7/27: Please read here: https://wordpress.com/read/post/id/71818922/24/  about some of the issues with the data set generally, and the conclusions of this post. I’m leaving this post as is for now, but there might be an update in the future. 

First, I wanted to see how many people had been successful. I calculated a 'success rate' by counting the denials, acceptances, waitlists, and acceptances to other programs for each profile (edited to add: this ended up not working as well as I originally thought, and I revised substantially in light of that). I calculated the percentage of admittances out of total notifications for each profile*, then took the mean, for an average acceptance rate of 16.09%.
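Concretely, the per-profile calculation looks like this; the counts below are invented for illustration, and note that the mean of per-profile rates is not the same as a single pooled acceptance rate.

```python
# 'Success rate' per profile = acceptances / total notifications,
# then averaged across profiles. Counts are made up for illustration.
profiles = [
    {"accepted": 1, "denied": 9},
    {"accepted": 0, "denied": 6},
    {"accepted": 3, "denied": 7},
]
rates = [100 * p["accepted"] / (p["accepted"] + p["denied"]) for p in profiles]
mean_rate = sum(rates) / len(rates)
print(round(mean_rate, 2))  # mean of 10%, 0%, 30% = 13.33
```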

This is high, all things considered. I think we all know it’s much, much lower at top departments (Michigan said they had ~250 applicants for ~4 spots this year, for example). This includes departments both on and off the PGR. The median and the mode were 0%.

When I looked a little closer, it seemed like continental students were having slightly more success. Ian helpfully included a question about the tradition people were working in. The answer options were analytic, continental, "no such commitment is reflected", and of course the ever-popular leaving it blank. I broke the data down into three categories (combining blank with no commitment) to get these results:

| Tradition | N | Average success rate (%) | Std deviation |
|---|---|---|---|
| Analytic | 67 | 16.89 | 28.71 |
| Continental | 14 | 24.80 | 23.27 |
| "No such..."/none listed | 19 | 9.01 | 19.97 |

So it does appear that continental students do better. However, a t-test showed the difference between analytic and continental acceptance rates is not statistically significant (p = .3010). The difference between continental and 'no such'/none listed was statistically significant (p = .0445)**, although the difference between 'no such'/none listed and analytic was not (p = .3050). So if you're deciding between continental and not taking a stand, perhaps you should go with continental!
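For anyone who wants to check these t-tests from the summary statistics alone, Welch's t statistic (a variant that does not assume equal variances; the post doesn't say which variant was actually used) can be computed from group means, standard deviations, and sizes:

```python
import math

# Welch's t statistic and approximate degrees of freedom, computed from
# group means, standard deviations, and sizes (no raw data needed).
def welch_t(m1, s1, n1, m2, s2, n2):
    se2 = s1**2 / n1 + s2**2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    return t, df

# Analytic (n=67) vs. continental (n=14), using the rounded summary stats:
t, df = welch_t(16.89, 28.71, 67, 24.80, 23.27, 14)
print(round(t, 2), round(df, 1))  # t of about -1.11 on about 22 df
```

A t of about -1.1 on roughly 22 degrees of freedom corresponds to a two-tailed p near .28, in the same neighborhood as the reported .3010.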

I also calculated a 'positive outcomes' rate based on acceptances, waitlists, and acceptances to alternative programs. Especially since the survey started before April 15th, some people likely ended up getting off those waitlists. The average positive-outcomes rate was 24.32%.

So, some interesting things: 0% was again both the median and the mode, but multiple people had a 100% positive-outcomes rate. This does make me wonder how complete the data is: did people leave off rejections for the sake of time or simplicity?

| Tradition | Average positive-outcomes rate (%) | Std deviation |
|---|---|---|
| Analytic | 24.51 | 35.60 |
| Continental | 38.56 | 30.48 |
| "No such..."/none listed | 13.17 | 26.72 |

Again, continental candidates outperform analytic candidates by quite a bit. A quick t-test between the two does not show statistical significance, however (p = .1735).

If you have any questions or comments on the procedures I used, let me know! I'm a philosopher, not a statistician, so I likely made some mistakes here. I hope to look at a few other things, including gender breakdowns and more about what makes a 'strong' candidate, over the next few days!

*As Ian noted, a number of people who filled out the survey did not include any data about their results (ie. they filled out the demographics questions, but didn’t say if they were admitted/rejected). I left them in initially, although I might redo it without them.

**When my spouse (who has a strong statistics background from social science research) looked at this, he said that because I'm comparing 3 groups across 3 tests, the threshold for statistical significance drops to 1/3 of .05, or about .0167. Therefore, this might not be a statistically significant difference. We did a one-way ANOVA, which gave a p-value of .118 (not statistically significant). I'm hoping he'll explain more of this to me so I can update/clarify this post further.
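The adjustment he's describing is the Bonferroni correction: with k comparisons, each individual test is held to a threshold of alpha/k rather than alpha. Applied to the three p-values reported above:

```python
# Bonferroni correction: with k pairwise comparisons, each individual
# test must clear alpha / k instead of alpha.
alpha, k = 0.05, 3
threshold = alpha / k  # about .0167

p_values = {
    "analytic vs continental": 0.3010,
    "continental vs none": 0.0445,
    "analytic vs none": 0.3050,
}
for pair, p in p_values.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{pair}: {verdict}")  # none of the three clears the threshold
```

Under this correction even the continental vs. 'no such'/none comparison (p = .0445) fails to reach significance, which agrees with the non-significant one-way ANOVA.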