“Such a fit exists, for example, when the research object is less context-rich than in qualitative research, i.e., when a reduced scope of a phenomenon is scrutinized, when quantification as such is possible”
This point is at the crux of the quantitative/qualitative methodological choice. When I first started writing papers over a decade ago, I sought the advice of our section head. Although employed as a language teacher, he held a PhD in biology and did research into octopuses in his non-class contact time. His instruction was extremely valuable to me, but on reflection now, it was completely positivistic. Yet he managed to convince me at the time that (post-)positivistic research was the only way to research language, and I had no idea that qualitative research was even a possibility until only a few weeks ago. As most of the readings we are doing now are written by social scientists and educators, it may be interesting to see the position of someone brought up in the hard sciences.
His argument ran like this. (I'll couch the terminology to match Wolfgang's, as that was a succinct way of putting the situation.) Sure, language is highly complex; there are many context-rich environments to contend with. But so is biology. What 'we' do, he said, is to identify, as narrowly as possible within any environment, a single phenomenon that is subject to change. That might be how light is received by a retinal ganglion cell. Theories exist to describe this process, to explain why it happens, and to predict how another, as yet untested, octopus will react to having light shone near its eye. (I'm sure I'm simplifying here, but I'm no biologist.) The theory is tested; the animal's reaction is recorded; and the conclusion suggests a strengthening or weakening of the theory. All highly deductive, a priori stuff so far.
In language education, the skill, he claimed, was to identify context-poor environments within context-rich ones and to test only where tests can be made. If the environment is highly rich, dozens or hundreds of studies are needed before any 'truth' may be established. He found articles in which people 'just talked' about situations highly frustrating: to his 'trained' eye, they were missing dozens of potential, and much more certain, sources of information. 'Little by little' was his motto, just as the hard sciences had proceeded over the past hundred years. Why should the soft sciences simply bypass the hard (pun intended) work and create methods that fail to establish theories?
If we take ethnography as an example, a researcher interviews people; from those interviews, narratives are discovered, and deep, rich stories evolve. Although there is no generalisability, readers can see 'truth' as it applies situationally to the ethnography.
Given time and a different mindset, it would theoretically be possible to isolate every point that is able to change and subject each one to quantitative study. Of course, this is highly impractical. But suppose just one point were approached in this manner, and hundreds of researchers did similar follow-up studies on that point and the others; a quantitative picture of the same situation would emerge over a long period. And perhaps, to play devil's advocate here, there might be more stability, more generalisability, and, most importantly, more theory-making possibilities in social science. Was Ian right? Are we looking for the easy way through?