EDEV_507 Week 7 Initial Post

Module 7's primary focus is the development of a research plan that will form the foundation of the actual doctoral thesis proposal. As a doctoral thesis is partly a vehicle for demonstrating the ability to conduct research, the next three weeks begin a serious look at the key ingredients in the toolkit for thesis development.

In this week’s post, I will address the pragmatic issue of sample size and the methodological issue of construct validity. These two issues are intimately linked, and errors or misunderstandings about the relationship between them would ultimately weaken any claims I wish to make about the research.

Ideally, I would have access to a large population of undergraduate students. From this population, I would randomly select as large a sample as time and resources allow. In doing so, the probability sampling procedure (Blaxter, Hughes, & Tight, 2006) would likely satisfy the quantitatively oriented audience the research wishes to address. However, being probabilistic, the sample needs to be large enough to allow parametric analyses (not necessarily at the data analysis stage, but in presenting the rationale behind the sample selection). Recommended figures vary, with estimates ranging from three to twenty times the number of variables if factor analysis methods are to be used (Mundfrom, Shaw, & Ke, 2005). As an example, one of the shortest paper-and-pencil instruments for investigating epistemic cognition, the Epistemic Belief Inventory (Schraw, Bendixen, & Dunkle, 2002), has 35 items that typically factor into five categories, leading to a minimum of 105 respondents. This figure allows no leeway, as it assumes the instrument’s validity is perfect; to provide for confidence levels and sampling error, many more respondents are needed (Cohen, Manion, & Morrison, 2011). In Japan, educational psychology is dominated by quantitative methodologies, and if I adopted a purely qualitative orientation (which would be acceptable for the thesis), any results based on non-probabilistic sampling would be treated with suspicion. At the very most, my target audience would consider my thesis grounds for further study.
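To make the arithmetic concrete, here is a minimal sketch of the items-to-respondents heuristic described above. The function name and default ratios are my own shorthand for the three-to-twenty-times range attributed to Mundfrom et al. (2005), applied to the 35-item EBI example:

```python
# Illustrative sketch of the items-to-respondents heuristic for factor analysis.
# The ratios reflect the 3x-20x range cited above (Mundfrom, Shaw, & Ke, 2005);
# the function name is my own shorthand, not from any of the cited sources.

def sample_size_range(n_items: int, low_ratio: int = 3, high_ratio: int = 20) -> tuple:
    """Return the (lenient, conservative) minimum respondent counts."""
    return n_items * low_ratio, n_items * high_ratio

# The Epistemic Belief Inventory has 35 items:
low, high = sample_size_range(35)
print(f"EBI (35 items): at least {low} respondents, up to {high} to be safe")
# EBI (35 items): at least 105 respondents, up to 700 to be safe
```

Even the lenient end of the range (105) already exceeds what many single-institution studies can recruit, which is precisely the constraint driving my methodological choices below.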

My audience is first and foremost the general education division in my university (although they don’t know it yet). They are concerned with overall academic, moral and educational development in the student body, but (to be frank within this closed community) they have little conception of how students develop over their time here. I do not wish to walk into their office and make unsubstantiated claims about how we can manage our programmes better. This thesis aims to be my proof of concept to them of how to improve our student services.

The second issue, construct validity, relates directly to sample size. I will likely conduct either a mixed-methods study or a purely exploratory qualitative one, for two reasons: it is unlikely that I will have access to as many respondents as I would like (or I am working on that assumption); and the question of how valid Western instruments for capturing epistemic cognition are in the Japanese context remains largely open (Hofer, 2010; B. Hofer, personal communication, October 28, 2016). However, I need to consider very carefully how I generate my sample, because any individual selected may not be representative of the general population (Blaxter et al., 2006). Following Yin (2003), a small unit of analysis does not mean that the general ideas and theories the research promotes are invalid: Yin’s distinction between statistical and analytic generalisation argues that even a single case can support claims about what exists. I will consider this more during the coming week.

Jim

Blaxter, L., Hughes, C., & Tight, M. (2006). How to research (3rd ed.). Maidenhead: Open University Press.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). Abingdon: Routledge.

Hofer, B. K. (2010). Personal epistemology, learning, and cultural context: Japan and the United States. In M. B. Baxter Magolda, E. G. Creamer, & P. S. Meszaros (Eds.), Development and assessment of self-authorship: Exploring the concept across cultures (pp. 133–148). Sterling, VA: Stylus Publishing.

Mundfrom, D. J., Shaw, D. G., & Ke, T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. http://doi.org/10.1207/s15327574ijt0502_4

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory (EBI). In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 261–276). Mahwah, NJ: Lawrence Erlbaum Associates.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
