How a summer school made me even more confused

By Arjen de Wit / Reading Time: 4 minutes


Like every motivated PhD candidate, I'm happy to attend conferences and courses every once in a while. Thanks to a grant from the VU Graduate School of Social Sciences I was able to attend a four-week seminar on quantitative methods at the ICPSR Summer Program in Ann Arbor, Michigan. I hoped that this program would solve the causality issues in my research question. The opposite happened: it raised even more questions.

  1. You should use panel data! When using survey data, it is better to study the same respondents over time in order to test whether changes in X are followed by changes in Y. This makes you a bit more confident about causal relations. So I used data from the Giving in the Netherlands Panel Study (GINPS), in which 1,902 people were surveyed over multiple years. Participants report their donations to 17 of the largest charities in the Netherlands, like the World Nature Fund or the Salvation Army. From annual reports we know how much those organizations receive in government subsidies. This allows me to compute how subsidies to an organization in a certain year correlate with private donations in the following year.
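The GINPS data are not public, so here is a toy sketch of the basic idea on invented numbers: build an organization-year panel, lag the subsidy within each organization, and correlate last year's subsidy with this year's donations. Every name and value below is made up for illustration.

```python
import numpy as np
import pandas as pd

# Fabricated organization-year panel (hypothetical orgs, hypothetical years).
rng = np.random.default_rng(0)
orgs = ["org_a", "org_b", "org_c"]
years = range(2005, 2010)
panel = pd.DataFrame(
    [(o, y) for o in orgs for y in years], columns=["org", "year"]
)
panel["subsidy"] = rng.uniform(1, 10, len(panel))

# Lag the subsidy within each organization: last year's subsidy.
panel["subsidy_lag"] = panel.groupby("org")["subsidy"].shift(1)

# For the illustration, let donations respond negatively to last year's
# subsidy plus a little noise (a crowding-out pattern, purely assumed here).
panel["donations"] = (
    5 - 0.3 * panel["subsidy_lag"].fillna(0) + rng.normal(0, 0.1, len(panel))
)

# Correlate last year's subsidy with this year's donations;
# pandas drops the first-year rows, where the lag is undefined.
corr = panel["subsidy_lag"].corr(panel["donations"])
```

With real data the lag structure is the point: it ensures the subsidy is measured before the donation it is supposed to affect.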

But then the confusion came in. These are only 17 charities; would my results be the same if I excluded one of these organizations? Can we expect effects to be the same for international aid organizations as for charities working in health care?

  2. You should include fixed effects! Fixed effects account for variables that don't change over time, which allows you to look only at the effect of variables that do change over time. For example, some organizations receive both more subsidies and more donations just because they are bigger organizations. An analysis that includes fixed effects for organizations rules out the effect of organization size.
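A toy sketch of what organization fixed effects do, again on invented data: organization size drives both subsidies and donations, while subsidies have no real effect at all. Pooled across organizations the two look strongly correlated; after the "within" transformation (subtracting each organization's own mean, which is what fixed effects amount to), the spurious correlation disappears.

```python
import numpy as np
import pandas as pd

# Made-up data: size drives both subsidy and donations; subsidy itself
# has NO causal effect on donations.
rng = np.random.default_rng(1)
rows = []
for org, size in [("big", 10.0), ("mid", 5.0), ("small", 1.0)]:
    for year in range(10):
        subsidy = size + rng.normal(0, 0.5)   # bigger orgs get more subsidy
        donation = size + rng.normal(0, 0.5)  # ...and more donations
        rows.append((org, subsidy, donation))
df = pd.DataFrame(rows, columns=["org", "subsidy", "donation"])

# Pooled correlation: inflated, because size confounds both variables.
pooled = df["subsidy"].corr(df["donation"])

# Within transformation: demean by organization, removing stable traits
# like size. This is what organization fixed effects do.
within = df.groupby("org")[["subsidy", "donation"]].transform(
    lambda s: s - s.mean()
)
fe = within["subsidy"].corr(within["donation"])
```

Here `pooled` comes out close to 1, while `fe` hovers near 0: once size is swept out, there is nothing left to correlate.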

But there is the confusion again. Should I use fixed effects for individuals or for organizations? A person's gender or other individual characteristics can confound the effect, and so can an organization's size, sector or age. Or should I use fixed effects for each unique combination of individual and organization?

  3. You should do Tobit regression! Because most people don't donate to all 17 organizations in the sample, there are a lot of cases scoring 0 on the dependent variable. Linear regression is not appropriate in that case. Tobit regression, I was told, combines the likelihood of scoring higher than 0 and a linear model for the positive donations in one estimation.
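The Tobit likelihood is simple enough to sketch by hand: the normal density for the positive donations plus the normal CDF for the zeros, which are exactly the two pieces described above. Below is a minimal, hypothetical illustration fitted by maximum likelihood on simulated data (not the GINPS data).

```python
import numpy as np
from scipy import stats, optimize

# Simulated data: a latent donation propensity, censored at zero.
rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0, 1, n)
y_star = 0.5 + 1.0 * x + rng.normal(0, 1, n)  # latent propensity
y = np.maximum(y_star, 0)                     # observed: many zeros

def neg_loglik(params):
    """Negative log-likelihood of the Tobit model, censored at 0."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)                 # keep sigma positive
    mu = b0 + b1 * x
    ll_pos = stats.norm.logpdf(y, mu, sigma)  # density for donors
    ll_zero = stats.norm.logcdf(-mu / sigma)  # probability mass at zero
    return -np.sum(np.where(y > 0, ll_pos, ll_zero))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, _ = res.x
```

With 2,000 observations the estimates land close to the true intercept (0.5) and slope (1.0), even though more than a third of the outcomes are censored at zero.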

Confusion! Are the decision whether or not to donate and the decision how much to donate really the same thing? Are non-donors motivated by the same considerations as donors?

  4. You shouldn’t do Tobit with fixed effects! The ‘incidental parameters problem’ means that fixed effects can bias the estimation of a binary outcome (donating or not donating). In other words, Tobit and fixed effects are not always good friends.

So should I drop the Tobit? Or drop the fixed effects? Or is there another way to deal with this problem?

I went to the ICPSR summer school to get answers to the causality issues in my research question. Do higher government subsidies lead to lower charitable donations, is it the other way around, or is there another variable that causes both subsidies and donations? The summer school provided answers, but those answers confused me even further. More difficult methods come with more difficult problems, and that’s how researchers keep struggling with their analyses until they arrive at the best answers they can get.

Arjen de Wit is a PhD candidate at the Center for Philanthropic Studies, where his research concerns the question to what extent government support affects volunteering and charitable giving. He also works for ProDemos, House for Democracy and the Rule of Law, and writes for his personal blog.

Academic life is not a mad hazard – not that much

By Arjen de Wit / Reading Time: 4 minutes

This is the way I perceived the review process when I submitted a research proposal to NWO. I imagined a large brown table with about seven professors, old and grumpy, complaining loudly about the future of the social sciences. On the table lies a pile of top-quality proposals, the result of months of work by ambitious young scholars from around the Netherlands. The evil professors quickly skim the texts to look at the names of the supervisors and divide the grants among people from their own universities. They are finished within 10 minutes, after which they pour another cup of black coffee.

“Hence academic life is a mad hazard”, as Max Weber noted a century ago. Your chances depend on who you know and who your supervisor knows, not so much on what your qualities are.

Max Weber illustrated by Harald Groven

Inequality keeps growing. We have to fund our research with grants, and to successfully apply for grants you need to have received grants in the past. So we keep playing the gambling game, as David Passenier recently pointed out, without knowing what happens in the black box of the review process.

The selection process

My picture of this black box changed completely during a PhD course on proposal writing. Aart Liefbroer, professor of demography and an experienced reviewer of all kinds of applications, delivered a guest lecture on the NWO reviewing process.

He explained the reviewing process for VENI proposals, the NWO grants for research after PhD graduation.

  1. An internal committee makes a first selection of proposals, based on track record, quality, innovation, scientific impact and valorization.
  2. The selected proposals are sent to two or three external reviewers, in the Netherlands and abroad, who answer a large number of questions on different aspects of the quality, originality and feasibility of the research design.
  3. Applicants get the opportunity to write a 1,000-word rebuttal to the reviewers’ comments.
  4. Members of the selection committee write a pre-report based on the proposal, the reviews and the rebuttal. All committee members score each proposal, the scores are averaged and the committee makes a selection.
  5. The selected applicants are invited for an interview, in which they present their proposal and respond to questions from the committee.
  6. The committee once again grades each proposal, the scores are averaged and the committee decides on the final selection.

Are you still there? Quite a procedure, isn’t it?

The most intriguing message of Liefbroer’s talk, at least to me, was that the committee almost always reaches consensus. From the beginning it is quite clear which proposals are the best, and this picture doesn’t really change during the process. That was the moment I realized that there is probably too much time spent on proposal selection, not too little.

No doubt

Your proposal should be better than perfect, Liefbroer stressed. That’s the message I took away. Submitting proposals is not gambling, so don’t gamble. Do whatever you can to convince the reviewers and the committee. Make sure no aspect of the proposal, however small, raises doubts, because any doubt is a reason for rejection.

That’s what I do now in every application. When I think a line of argument is ‘good enough’, I rewrite it until it is perfect.

Of course committee members and reviewers have their preferences, and of course they are biased because they know some of the applicants. But I changed my mind about the evil old professors behind a table deciding my academic future in a split second and on questionable grounds. The reviewing process is quite reliable and thorough – maybe inefficiently thorough – and it’s up to you to make sure there aren’t any weaknesses in your proposal.

My research proposal wasn’t granted, by the way. And I still believe it’s not fair.
