Monthly Archives: August 2014

Philosophy Admissions Survey: A deeper look at gender and minority status

 

Our original post about gender and minority status showed that minority students and women had higher rates of success, based on 49 cases and looking at admissions to any program, not just the PGR top-50. A larger number of participants reported the results of their applications to PGR top-50 programs, but these were recorded in such a way that we had to code them manually before we could analyze them. After we did this, we were able to expand the analysis. The results of these new models (including those here), which are based on more observations, paint a different picture, at least with respect to the effect of minority status.

————————————–

We wanted to know if race adds anything to the prediction of PhD program admission success. To do this, we compared a model using only the three predictors we had previously selected to one that also included minority status. If the model that includes minority status gives a significantly better prediction than the one that doesn’t, this would tell us that PhD program admission rates differ between white and minority students with the same gender, undergrad GPA and verbal GRE score. Four of the participants whose answers we used to build our three-predictor model didn’t report minority status (cases 43, 64, 65, and 81). Obviously, we could only use cases that reported minority status to build the model containing this variable, so we used the remaining cases. Since we can only meaningfully compare two models built on the same dataset, we also had to build a new three-predictor model using these cases. We created a dataset that excluded the four incomplete cases, built the two models, and compared them using an analysis of deviance.
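In R, a comparison like this can be set up along the following lines. This is just a sketch with placeholder names (dat, success, applied, and so on), not the exact script used; the outcome is supplied to glm() as the number of successful and unsuccessful applications per participant.

# Sketch: `dat` has one row per participant; `success` is the number of
# successful applications and `applied` the number submitted.
dat_sub <- subset(dat, !is.na(minority))   # drop the four cases missing minority status

m3 <- glm(cbind(success, applied - success) ~ gender + gpa + gre_verbal_round,
          family = binomial, data = dat_sub)
m4 <- glm(cbind(success, applied - success) ~ gender + gpa + gre_verbal_round + minority,
          family = binomial, data = dat_sub)

anova(m3, m4, test = "Chisq")   # analysis of deviance: does minority status add anything?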

Here are the coefficients, with Wald Z statistics and probability values, for the three-predictor model constructed using the new dataset:

 

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)
(Intercept)      -17.72928    3.39103  -5.228 1.71e-07 ***
gender             0.32076    0.16915   1.896 0.057925 .
gpa                1.45217    0.42464   3.42  0.000627 ***
gre_verbal_round   0.11182    0.03234   3.457 0.000545 ***


These are very close to the results we got when we used the full data set. There is still no evidence for lack of fit, chisq(70) = 66.46273, p = 0.5977475. The deviance test comparing this model to the null model is still strongly significant, chisq(3) = 43.52594, p = 1.902966e-09. Gender is now (just barely) not significant, but as discussed earlier, the Wald Z test used to measure the contribution of each predictor to the strength of prediction is not that sensitive. The important thing to note is that the coefficient estimates hardly changed at all, suggesting that the relationships we observed among the predictors and the outcome are fairly reliable and don’t depend much on which cases are included. Here’s our new regression equation:

Y = -17.72928 + gender*0.32076 + gpa*1.45217 + gre_verbal*0.11182

We also built a model that included minority status. Here is a summary of the results:

                  Estimate Std. Error z value Pr(>|z|)
(Intercept)      -17.3338     3.4596  -5.01  5.43e-07 ***
gender             0.3356     0.1703   1.971 0.048737 *
gpa                1.3977     0.43     3.25  0.001154 **
gre_verbal_round   0.1101     0.0326   3.378 0.000731 ***
minority          -0.1725     0.2288  -0.754 0.451007

Other than the inclusion of minority status as a predictor, this model is pretty similar to the three-predictor model. The relationships we observed among the other three predictors and the outcome don’t change very much when minority status is accounted for. The Wald test for the minority status variable is nowhere near significant. There is no evidence for lack of fit with this model, chisq(69) = 65.88326, p = 0.5841074. It definitely performs better than the null model, chisq(4) = 44.10541, p = 6.100333e-09. The p value is slightly higher than for the three-predictor model because there is one more free parameter (which is why the chi square test has 4 degrees of freedom instead of 3) but not much of an additional decrease in deviance (the value of chi squared is similar).

We compared the two models by testing the difference in residual deviance (the amount of variation in the outcome that each model fails to explain) using a chi square test, chisq(1) = 0.57947, p = 0.4465201. Adding another predictor to the model will always decrease the residual deviance a little bit, but the difference in this case was small and not statistically significant. To give you an idea of how small, the residual deviance of the simpler model was 66.463, and the difference was 0.57947. This indicates that the small improvement in prediction gained by adding minority status to the model probably occurred by chance. There is no evidence for a relationship between minority status and the odds of admission when controlling for gender, undergrad GPA and verbal GRE score.
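For anyone following along in R, the same test can be run directly from the two residual deviances (again a sketch, reusing the placeholder model objects from above):

dev_diff <- deviance(m3) - deviance(m4)        # roughly 66.463 - 65.883 = 0.579
pchisq(dev_diff, df = 1, lower.tail = FALSE)   # about 0.45, nowhere near significant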

What’s especially interesting about this is that there is a relationship between minority status and program admissions when gender, undergrad GPA and verbal GRE score are not controlled for. We can demonstrate this by building a model that contains only minority status and comparing it to the no-predictor model. Here’s what that model would look like:

             Estimate Std. Error  z value Pr(>|z|)
(Intercept) -1.10192     0.08141 -13.535   < 2e-16 ***
minority    -0.45622     0.21084  -2.164    0.0305 *

The Wald statistic for minority status is significant, and so is the overall model, chisq(1) = 5.002334, p = 0.02531316. Unfortunately, the deviance test detected a model violation, chisq(73) = 105.6568, p = 0.007471957. Nine outliers were present (cases 37, 14, 38, 33, 53, 10, 23, 18, and 44). I wasn’t comfortable deleting that many observations, so instead I tested the marginal relationship between minority status and admission success using Pearson’s chi square test of association. Nonminority applicants succeeded in 201 out of 602 applications, whereas minority students succeeded in 32 out of 152 applications. Being white was associated with a significantly higher rate of success, chisq(1) = 4.7407, p = 0.0295. However, we already know that this effect shrinks markedly and is no longer statistically significant when gender, undergrad GPA and verbal GRE score are modeled.
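Here is a sketch of the Pearson test on the application-level counts quoted above. Note that depending on how the table is constructed (per application or per applicant) and whether a continuity correction is applied, the resulting statistic can differ from the value reported here.

tab <- matrix(c(201, 602 - 201,    # nonminority: successes, failures
                 32, 152 -  32),   # minority:    successes, failures
              nrow = 2, byrow = TRUE,
              dimnames = list(c("nonminority", "minority"), c("success", "failure")))
chisq.test(tab)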

Essentially, decreased rates of admissions for minority students are explained by their GPAs, verbal GRE scores, and gender. Since higher GPA and GRE scores have positive effects on admissions, the minority students who applied to PhD programs and submitted their results had weaker applications on these quantitative measures. There are many possible explanations for why this may be the case, and again, these results are based on the data we have. When we looked at a smaller subset of students and expanded our measure of success to include admissions outside of the PGR top-50, we found that minority students had a higher likelihood of success.

 

We also wanted to know if gender had an effect on graduate school admissions, so we built a model that didn’t include gender and compared it to the original three-predictor model. Here’s what our new two-predictor model looks like:

                  Estimate Std. Error z value Pr(>|z|)
(Intercept)      -17.77351    3.03041  -5.865 4.49e-09 ***
gpa                1.69816    0.40448   4.198 2.69e-05 ***
gre_verbal_round   0.10356    0.02781   3.724 0.000196 ***

The two-predictor model predicts significantly better than the null model, chisq(2) = 54.47855, p = 1.479562e-12, with no evidence for violation of the logit model according to the chi square deviance test, chisq(74) = 73.89527, p = 0.4487002. Now let’s compare this model to the original three-predictor model. Once again, we are testing the difference in deviance between the two models against the null hypothesis that gender is unrelated to application success in the population when undergrad gpa and verbal GRE are controlled for. We know that the deviance of the more complex model will be somewhat lower, but we want to see how much lower, and test this difference for significance using a chi square test.

Model 1: (success, applied) ~ gpa + gre_verbal_round

Model 2: (success, applied) ~ gender + gpa + gre_verbal_round

 

 

  Resid. Df Resid. Dev Df Deviance
1        74     73.895
2        73     69.438  1    4.457


How much greater is the probability of success for a female applicant? That depends on the values of the other predictors. Let’s look at the modal values for undergrad GPA and verbal GRE score in our sample. For GPA, the most common answer is 4, which actually denotes grade point averages ranging from 3.9 to 4. For verbal GRE, the most common answer is 99, which actually denotes a range of percentile scores from 96 to 99.

For a typical (in our sample, but probably not anywhere else) male candidate, the estimated log odds would be -17.72928 + 4*1.45217 + 99*0.11182 = -0.85042, corresponding to odds of e^-0.85042 = 0.4272, or a probability of success for each application of 0.4272/(0.4272+1) = 0.299 (or 29.9%). The 13 candidates who actually have this combination of predictor values submitted 123 applications, of which 45 were successful, for a rate of 36.6%. So far our model looks pretty good. For a female candidate with the same GPA and verbal GRE, the estimated log odds would be -17.72928 + 0.32076 + 4*1.45217 + 99*0.11182 = -0.52966, corresponding to odds of e^-0.52966 = 0.58881, or a probability of success for each application of 0.58881/(0.58881+1) = 0.371 (or 37.1%). The five candidates who actually have this combination of predictor values submitted 40 applications, of which 24 were successful, for a rate of 60%. This is considerably higher than our model predicts. The chi square test for the difference in deviance between the two models is significant, chisq(1) = 4.457, p = 0.03475849.

In other words, there is a statistically significant difference between the predictive power of the model that includes gender and the one that doesn’t. Female applicants have greater odds of being admitted to a top 50 program than male applicants with the same undergraduate GPA and verbal GRE score. How much greater? We can get the difference in odds between male and female applicants by antilogging the coefficient for gender, e^0.3356 = 1.399, meaning that female applicants have 39.9% higher odds of succeeding with each application. 
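These conversions are easy to reproduce in R, where plogis() maps log odds to probabilities (a sketch using the coefficient values quoted above):

exp(0.3356)   # about 1.40: the odds ratio for gender

# Predicted probability of success per application at the modal values
# (GPA code 4, verbal GRE percentile 99), from the three-predictor equation:
plogis(-17.72928 + 0*0.32076 + 4*1.45217 + 99*0.11182)   # male:   about 0.30
plogis(-17.72928 + 1*0.32076 + 4*1.45217 + 99*0.11182)   # female: about 0.37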

Philosophy Admissions Survey: What determines success?

So far, we’ve looked at the effects of tradition, gender, and minority status on admissions, as well as some of the most successful candidates. But what are the factors that most determine success for any given candidate? How much weight does each factor carry? What are the best ways to improve one’s applications, in order to increase the odds of success? This post will hopefully answer a few of those questions. The analysis here is again completed by my spouse (this time in R, if anyone was curious).

———————

We wanted to determine what factors are associated with admission to top 50 ranked philosophy PhD programs. We had data from 95 individuals who had applied to 804 philosophy PhD programs. We used the results of the philosophy admissions survey, extracting information about each candidate and coding it into variables for the purpose of building a statistical model.

In the end, the models here were built based on 84 individuals. The total acceptances, wait-lists, and rejections were determined from the summary question at the end of the survey, for both PGR top-20 and 21-50 programs. 32 individuals took the survey before those questions were added to the survey; their results were added by hand.

84 responses is a large enough sample size to make some interesting conclusions about the population. However, the sample size does limit the number of variables that can be considered at one time. If we had tried to use the information from every question, it would have been impossible to distinguish the variables that were making a difference in admissions rates from those that were not. It would also have reduced the sample size, since some people left at least a few questions blank. We had to eliminate some questions in order to get a better picture of what was making a difference. The variables considered are: gender, minority status, teaching/work experience, publications, graduate degrees in philosophy, undergraduate institution selectivity, GRE scores (all three sections), undergraduate overall GPA, and undergraduate major GPA.

The following variables can take values of zero or one:

gender: female (1) or male (0)

minority: participant is a minority (1) or not (0)

experience: participant has (1) or does not have (0) teaching experience

philgrad: participant has (1) or does not have (0) a graduate degree (generally a masters degree) in philosophy.

published: participant’s work has (1) or has not (0) been published in an academic philosophy journal of any kind

For some survey questions whose answers were of interest for this analysis, the survey asked participants to indicate which of several ranges they fell into. For example, does the participant’s undergraduate institution admit 0-25%, 26-50%, 51-75%, or 76-100% of candidates? We recoded these ordinal variables (meaning they consisted of ordered categories) into continuous variables (numbers that can take any value in a certain range) by using the value at the top of the range selected by the participant (e.g. 51-75% becomes 75%).

There may be meaningful differences between participants from schools with respective admission rates of 51% and 75%. Unfortunately, this information is not in our model. This sort of procedure is generally frowned upon in the statistics world because ordinal data behaves differently than continuous data. We did this because continuous predictors require one parameter each, while ordinal predictors require one fewer than the number of levels. For example, an ordinal variable with four possible values (e.g. 0-25%, 26-50%, 51-75%, or 76-100%) would require three parameters. In regression models, more parameters means more things have to be estimated, and there are more chances to be wrong. In general, simpler models work better. We already had too many variables and not enough data, so we made this compromise. We treated the following variables in this fashion:

selectivity: percentage of applicants admitted at participant’s undergraduate institution

gre_verbal: verbal GRE percentile

gre_quant: quantitative GRE percentile

gre_writing: writing GRE percentile

gpa: overall undergraduate GPA

majgpa: undergraduate GPA for classes in participant’s major (usually, but not always, philosophy)
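Here is a minimal sketch of this recoding step in R. The raw column name, level labels, and data frame are placeholders; the idea is simply to map each range label to the top value of that range.

top_of_range <- c("0-25%" = 25, "26-50%" = 50, "51-75%" = 75, "76-100%" = 100)
dat$selectivity <- unname(top_of_range[as.character(dat$selectivity_raw)])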

The online survey also asked participants which of 3 orientations (analytic, continental, none) they indicated in their applications. We turned this into two dichotomous variables:

analytic: participant’s application indicated an analytic orientation (1) or did not (0)

continental: participant’s application indicated a continental orientation (1) or did not (0)

Participants who selected “none” would have a value of 0 for both of these variables.

For each individual, we knew the number of programs applied to and the number of successful applications (those which resulted in acceptance or being placed on the wait-list). Because we were modeling a binary outcome (you get into a program or you don’t), we used logistic regression, a form of statistical modeling which takes this into account.
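In R, this corresponds to a binomial glm with the outcome supplied as successes and failures per applicant. The sketch below is not the actual script; the data frame name and the exact formula are assumptions based on the variable list above.

fit_full <- glm(cbind(success, applied - success) ~ selectivity + minority + gender +
                  gpa + majgpa + experience + analytic + continental + philgrad +
                  published + gre_verbal + gre_quant + gre_writing,
                family = binomial, data = dat)
summary(fit_full)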

First, we tried building our model with all our predictors on our entire data set. Below is a summary of this model.

“Estimate” refers to the estimated coefficient in the logistic regression model, equal to the change in the natural log of the odds ratio associated with a one-unit increase in the value of the predictor. Since the only possible values for some of these predictors are 0 and 1 (gender, for example), in these cases it refers to the difference in the natural log of the odds ratio between the two levels of the variable. The important thing here is that positive coefficients mean an increase in the predictor (or a value of 1) is associated with an increase in the odds of application success. Std. Error is a measure of how reliable the estimate of the coefficient is: a high standard error means the true value could actually be very different from the one shown. This doesn’t really matter for our purposes.

Z value and Pr(>|z|) refer to a Wald test, which compares the model we built to one that includes all the variables except the one being tested. A high value of z, which corresponds to a low value of p, means that the variable is contributing quite a bit to the predictive power of the model. A period or one or more asterisks to the right of the numbers indicates the level of statistical significance, as shown in the “Signif. codes” below the table. If none of these are present, the contribution is not statistically significant.

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.88e+01    6.34e+00   -2.97  0.00302 **
selectivity -8.27e-04    4.07e-03   -0.20     0.84
minority    -4.87e-01    3.24e-01   -1.50     0.13
gender       1.55e-01    2.23e-01    0.70     0.49
gpa          7.53e-01    7.33e-01    1.03     0.3
majgpa       1.24e+00    1.43e+00    0.87     0.38
experience  -2.92e-02    2.43e-01   -0.12     0.9
analytic    -1.41e-01    4.43e-01   -0.32     0.75
continental -3.13e-01    5.06e-01   -0.62     0.54
philgrad    -1.00e-01    2.32e-01   -0.43     0.67
published   -5.68e-02    3.20e-01   -0.18     0.86
gre_verbal   9.68e-02    4.86e-02    1.99  0.04629 *
gre_quant    7.66e-03    7.89e-03    0.97     0.33
gre_writing  1.72e-04    9.02e-03    0.02     0.98

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

At this point, only one predictor (verbal GRE score) was significantly contributing to the prediction of the model, with a positive coefficient indicating that higher GRE scores are associated with higher odds of a successful application. I thought maybe there were a few unusual observations that were unduly influencing the model fit, so I examined the residuals, and found one outlier (participant #38, with a residual of -2.4754). I removed the outlier and tried building the model again, but got pretty much the same thing. I thought that maybe having too many predictors was introducing too much “noise,” so I tried cutting four that looked like obvious duds (selectivity, major gpa, teaching experience, gre writing score) and rebuilding the model using the full data set.
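A sketch of those two steps, reusing the placeholder fit_full from above (the exact residual type used isn't specified, so rstandard() is an assumption):

resid_std <- rstandard(fit_full)   # standardized deviance residuals, one per applicant
which(abs(resid_std) > 2)          # flag unusually large residuals

# Drop the four weakest-looking predictors and refit on the full data set:
fit_reduced <- update(fit_full, . ~ . - selectivity - majgpa - experience - gre_writing)
summary(fit_reduced)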


In the rebuilt model, the Wald tests for gender, undergrad GPA and verbal GRE score were all significant. I tried building a model using only these 3 predictors.

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)    -17.74       3.03   -5.85 4.80e-09 ***
gender           0.35       0.17    2.13 0.033047 *
gpa              1.65       0.41    4.08 4.47e-05 ***
gre_verbal       0.1        0.03    3.72 0.000201 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

All 3 variables had significant Z tests.

Next, I performed a deviance chi square test to make sure the logistic regression model fit the data. The logistic regression model assumes a linear relationship between the predictors and the natural log of the odds ratio for the outcome (application success) with random errors following the binomial distribution; we used this same type of model here. This test compares the distribution of residuals (the differences between our predictions and the results) to what would be expected under this distribution, generating a chi squared statistic. A bigger difference means a higher value of chi squared and a lower p value. Traditionally, .05 is used as the cutoff for evidence of a violation of the logistic regression model. If a poor model fit (indicating violation of assumptions) was detected, that would invalidate our results. Incidentally, the chi square deviance test should only be used when there are multiple results for each combination of predictors (in this case, each applicant). Otherwise, its results are not valid. With chisq(73)=69.43829, p = 0.5964882, there was no evidence for lack of fit. The logistic regression model is appropriate.
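In R terms, this lack-of-fit check, and the overall significance test described in the next paragraph, both come down to chi square tail probabilities. A sketch with placeholder names, not the actual script:

fit_3 <- glm(cbind(success, applied - success) ~ gender + gpa + gre_verbal,
             family = binomial, data = dat)

# Deviance goodness-of-fit test: a large p value means no evidence of lack of fit
pchisq(deviance(fit_3), df = df.residual(fit_3), lower.tail = FALSE)

# Overall test against the null (intercept-only) model: drop in deviance on 3 df
dev_drop <- fit_3$null.deviance - deviance(fit_3)
pchisq(dev_drop, df = fit_3$df.null - df.residual(fit_3), lower.tail = FALSE)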

Next, I performed an overall significance test for the model. I wanted to know if the predictions it generated were significantly better than just guessing. To answer this question, I compared the accuracy of our predictions for our sample to the level of accuracy that we would get by calculating the overall acceptance rate and using that as our guess for every participant’s applications. We calculated the deviance (a measure of the difference between what we observed and what we predicted) for both models, then tested the difference between the two deviance statistics for significance using a chi square test, chisq(3) = 58.93553, p = 9.922735e-13. The chi square test for the overall model is highly significant, indicating that it performs much better than chance. I checked for outliers and found 2: participants #23 (residual = 2.32834320) and #38 (residual = -2.54603528). I wanted to make sure that they weren’t having a large influence on the model, which can happen sometimes, so I tried removing them and rebuilding the model:

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)    -17.5        3.03   -5.77 7.95e-09 ***
gender           0.38       0.17    2.29 0.022186 *
gpa              1.67       0.41    4.07 4.65e-05 ***
gre_verbal       0.1        0.03    3.62 0.000299 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The results were pretty much the same. The coefficient estimates and Wald statistics changed very little. There was still no evidence for lack of fit, chisq(71) = 57.40618, p = 0.8782012. The model still performed significantly better than chance, chisq(3) = 57.8712, p = 1.674629e-12. Removing the outliers doesn’t really help, so we can keep them in and use the original 3-predictor model.

The final regression equation is:

y = -17.50348 + gender*0.38489 + gpa*1.67461 + gre_verbal*0.10037

This equation predicts the log of the odds ratio for a given application being successfully admitted or wait-listed to one program, given the gender, overall GPA, and GRE verbal percentile of the applicant. The variable gender takes a value of 1 for a woman and 0 for a man. The GPA is based on a 4.0 scale, and the verbal GRE is the percentile score.

For example, for a male with a GPA of 3.95 and a verbal GRE percentile of 87 applying to a program, the equation would yield a fitted value of y = -17.50348 + 0*0.38489 + 3.95*1.67461 + 87*0.10037 = -2.1565. To interpret this, we need to antilog the fitted value to get an odds ratio of e^(-2.1565) = 0.1157. This is equivalent to a probability of 0.1157/(1 + 0.1157) = .1037, or a 10% chance of success.
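The same worked example in R, using plogis() to go from log odds to probability (coefficients taken from the final equation above):

log_odds <- -17.50348 + 0*0.38489 + 3.95*1.67461 + 87*0.10037
exp(log_odds)      # odds, about 0.116
plogis(log_odds)   # probability, about 0.104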

The intercept in this case is the predicted log odds when all the predictors are zero, meaning a male candidate with a GPA of 0 who scored in the 0th percentile on the verbal section of the GRE. The model was not built on any individuals anywhere near this range (someone with an overall GPA of 0.0 is not graduating from college, let alone pursuing graduate admissions in philosophy), so a prediction in this range would not be meaningful. Given that almost all the applicants who filled out the survey had GPAs over 3.0 and verbal GRE scores above the 80th percentile, the model should not be used to extrapolate outside of that range.