‘Closeness to concepts’ is a phrase I have been using a lot recently when discussing papers and research proposals in social science, so much so that it is now finding its way into forthcoming publications. With that in mind, I thought I’d write out exactly what the phrase is about, as something of a reference point for this upcoming work.
In short: ‘Closeness to concepts’ is a progression of what we mean by ‘validity’ of our measurements when testing concepts and theory using empirical data (see Brewer 2000).
When we are working with empirical data, we of course seldom have the luxury of data that gets at exactly the concept or attitude we want to investigate. By and large, we use proxies or adapt the data available in order to produce measurements of whatever it is we seek to test. We do this for both independent and dependent variables.
For example, we might want to know the effect of a certain attitude on what type of party one might vote for. Surveys don’t ask ‘what type of party are you going to vote for?’, but we can recode reported voting intention according to our definition of party types (for example incumbent parties, populist parties, far-right parties, and so on) in order to test this.
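To make that recoding step concrete, here is a minimal Python sketch. The party names and the typology mapping are invented purely for illustration, not drawn from any real survey:

```python
# Hypothetical mapping from reported vote intention to our party typology.
# Party names and type labels are illustrative assumptions only.
PARTY_TYPE = {
    "Party A": "incumbent",
    "Party B": "populist",
    "Party C": "far right",
}

def recode_vote_intention(responses):
    """Map each reported vote intention onto the party typology;
    anything we have not classified falls into 'other'."""
    return [PARTY_TYPE.get(r, "other") for r in responses]
```

The substantive work here is not the code but the mapping itself: deciding which parties count as ‘populist’ or ‘far right’ is exactly where the connection to the concept is made or lost.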
While that is a (relatively) simple case of finding and manipulating existing data to connect with a concept, many studies must employ more complex methods, particularly when dealing with attitudes.
For example, what method would we employ to investigate xenophobia using survey data? Most surveys ask respondents to judge the impact of immigrants on the culture and economy of the host country. They also ask questions about a respondent’s willingness to have immigrants as neighbours. With this sort of data, how do we operationalise a test for xenophobia?
The simple answer is to code anyone past the midpoint of a scale, towards the normatively ‘negative’ end of any question regarding immigrants, as xenophobic. If Question A asked about the impact of immigrants on the national culture of Country X on a scale of 0 to 10, with 0 being very bad and 10 being very good, then anyone scoring 0 to 4 would be classed as ‘xenophobic’, with anyone elsewhere on the scale coming in as ‘non-xenophobic’. Simple? Yes. Correct? No.
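The naive coding is trivial to write down, which is part of why it is so tempting. A sketch, using the 0–10 scale from the example above:

```python
# The naive coding described above, for a 0-10 scale where 0 = very bad.
# Anyone below the midpoint is labelled 'xenophobic'; everyone else is not.
# This is the flawed approach the text goes on to criticise.
def naive_xenophobia_code(score, midpoint=5):
    return "xenophobic" if score < midpoint else "non-xenophobic"
```

One branch on one variable: nothing in it distinguishes fear of the unknown from prejudice, economic anxiety, or anything else that might push a respondent down the scale.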
It is in these circumstances that empirical investigations often disconnect from concepts. This is where validity breaks down. Too often theory is discussed at length in literature reviews and research designs, and concepts drawn out ready to be applied or tested, but then the data used does not quite properly connect to them. Rather than considering the nuances of how individual questions (and indeed response items) are framed, and exactly which element of the broader topic each question is getting at, researchers too readily select empirical data based loosely on topics or literatures without ever achieving real proximity to their concepts (a paper I recently reviewed did exactly this).
The same is also true of survey design, where questions written to get at a particular attitude or belief can end up measuring something else by blurring concepts together. A good example is a question asking respondents to judge immigration’s effect on the economy - is this purely a judgement about immigration, or does it capture as much about perceptions of the economy as it does about immigrants?
In our example above the concept we are aiming to get at is xenophobia. This is quite a specific concept; it is not racism (a deep-rooted prejudice), but something less sinister and altogether more difficult to find - fear of the unknown.
Rather than classify all individuals on the normatively negative side of a scale about attitudes towards immigrants as xenophobic, our research design in this case requires a more nuanced approach to finding something specific. We need a design which can isolate xenophobia without picking up genuine racial prejudice.
The exact answer to that question will depend entirely on the data available. But essentially, researching xenophobia will always require a multi-track approach: isolating respondents who hold negative attitudes towards immigrants but who also do not have much knowledge, understanding or experience of immigration and immigrants.
To isolate these effects we could operationalise ‘political knowledge’ variables, perhaps as a proxy for a respondent being ‘tuned in’ to the political world around them, and test for interaction effects between this and negative attitudes towards immigrants.
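As a sketch of what that preparation step might look like: an interaction term is simply the product of the two variables, added to each respondent’s record before fitting a model. The variable names here are my own, and I assume both measures have been rescaled to the 0–1 range:

```python
# Hypothetical sketch: constructing an interaction term between political
# knowledge and anti-immigrant attitude, ready for a regression model.
# Both variables are assumed to be pre-scaled to 0-1; names are invented.
def add_interaction(rows):
    for row in rows:
        row["knowledge_x_attitude"] = row["knowledge"] * row["anti_immigrant"]
    return rows
```

A significant coefficient on the interaction would tell us whether negative attitudes behave differently among the politically ‘tuned in’ than among those with little knowledge of the political world - the latter being closer to our concept.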
Or, better still, we could take a ‘domicile’-based approach. We could isolate respondents living in cities (much more likely to have contact with immigrants) and separate them from those dwelling in rural areas (unlikely to have high levels of contact with immigrants).
Considering our ‘closeness to concepts’, we should take rural respondents (less likely to have ever had immigrants as neighbours) indicating that they would not like immigrants for neighbours as far closer to the concept of xenophobia than urban dwellers (more likely to have had this experience) indicating the same. What we are doing here is hoping to split genuinely racist attitudes from those which are truly xenophobic, and so achieve ‘closeness to concepts’.
We should add as many layers of cross-sectionality into the picture as the data will allow - we could use education (to identify university graduates) and occupation (to identify those in professional classes) variables to further strip away those who are more likely to have had substantial contact with migrants. Anything we can do to tighten the link between our variable(s) and the concepts we are applying or investigating, we should do.
This also applies to testing models of voting behaviour, for example economic voting (see Lewis-Beck and Paldam 2000 and the associated special issue papers). How can we operationalise economic voters using survey data? Is it just a case of looking for voters with strong attitudes about the economy and seeing whether this correlates with voting against governments? Absolutely not - we should be using a multi-faceted approach linking blame, satisfaction, and relative deprivation to the likelihood of voting for incumbents. Wherever there is a theory or a concept, we have to think long and hard about the steps that take us from A to B and what data to use to get closest to that process.
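For illustration only, one way of combining those facets into a single flag might look like the sketch below. The variable names, scales, and thresholds are all assumptions of mine, not a tested operationalisation from the economic voting literature:

```python
# Illustrative sketch only: combining blame attribution, economic
# satisfaction, and relative deprivation into an 'economic voter' flag.
# All names and thresholds are invented for the sake of the example.
def economic_voter(resp):
    blames_government = resp["econ_blame"] == "government"
    dissatisfied = resp["econ_satisfaction"] <= 3       # 0-10 scale
    feels_deprived = resp["relative_deprivation"] >= 7  # 0-10 scale
    return blames_government and (dissatisfied or feels_deprived)
```

The point is not the particular thresholds but the structure: an economic voter, on this account, must attribute responsibility to the government as well as feel the economic grievance, because grievance without attribution gives no reason to punish the incumbent.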
The exact approach for achieving ‘closeness to concepts’ in each circumstance will depend on whatever data is at hand. But we should always be considering our theory and our concepts, their true nature and the processes and steps they go through, and how best to get truly close to them using what empirical evidence we have. Rather than treating validity as a binary measure of whether the data covers the concepts we are investigating or employing, we should always be finding ways to get as close to our concepts as possible.