EDEV_507 Week 8 Initial Post

Purpose

The field of epistemic cognition (EC) has been extensively researched since the 1960s (J. A. Greene, Sandoval, & Bråten, 2016). As of 2016, a number of foundational constructs have been established, including the ontology of epistemological stages (e.g. Hofer & Pintrich, 1997; King & Kitchener, 1994; Perry, 1970) and the existence of particular dimensions concerning the stability and certainty of knowledge, the role of authority, and the relationship of one’s positioning to knowledge (Baxter Magolda, 1992). Certain questions have also emerged: the role of targeted instruction in domain-specific areas (Hofer, 2006); how EC relates to neighbouring constructs, such as critical thinking (Felton & Kuhn, 2007) and personal self-authorship (Baxter Magolda, Creamer, & Meszaros, 2010); and how culture mediates the influence of EC dimensions and constructs (Chan & Elliott, 2004; Hofer, 2010).

As little has been studied in the Japanese context, there is a need to investigate—simply—the question of how well existing theory fits. It is hypothesised that the stages described by EC stage theories exist in Japan, but that the full range, for example Baxter Magolda’s (1992) sequence of Absolute, Transitional, Independent and Contextual thinkers, will not be present. This claim contains a contradiction that potentially introduces an internal (face) validity threat (Cohen, Manion, & Morrison, 2011). It is paradoxical, possibly nonsensical, to claim that a construct exists while simultaneously arguing that only a subset of that construct can be seen. This position, however, has precedent in the literature. Kuhn, Cheney and Weinstock (2000) show that over their four-year college career, participants’ improvement in critical thinking skills was a single standard deviation, which mapped onto EC development of a single stage. Oshima and his team (2006) charted Japanese teachers’ movement from one epistemic position to another.

Data collection methods

I will create a Likert-type survey instrument based on Schommer (Schommer-Aikins, 2004; Schraw, Bendixen, & Dunkle, 2002) and Chan and Elliott (Chan & Elliott, 2004). Quite possibly, this instrument will include items that attempt to capture factors that I suspect are Japan-specific. Together, these three item sources present many challenges.

Firstly, simply combining items from disparate instruments may introduce incompatibilities among the items. A classic example of this is Hofer and Pintrich’s (1997) attack on Schommer (1989) for including items in her instrument that targeted beliefs about learning, which, according to Hofer, are irrelevant to how one conceptualises one’s beliefs about knowledge. This debate is ongoing (J. Greene, Sandoval, & Bråten, 2016). Prior to piloting the instrument, I must consider precisely which constructs I am targeting and how, if at all, the items are appropriate to that task. Additionally, I must be aware of my inclusion and exclusion criteria for the constructs themselves. This brings me to the second challenge.

Secondly, although Cohen, Manion and Morrison (2011) appear confident in the need to investigate phenomena from the insider’s perspective, a question must be asked about this certainty. If EC dimensions are human, they will be relevant to all humanity and not just be a phenomenon limited to the Western context. A strictly emic approach, in Japan’s case, would mean that no study is possible at all, given that no Japanese researcher appears to have studied EC in Japan. Instead, there must, by necessity, be a degree of the etic, in that I will bring outside theories into the Japanese context. This proposal, however, introduces many potential issues, not least of all the possibility that I may interpret Japanese actions without due reference to the emic possibilities.

This point brings up a major question I need to address between now and the beginning of the thesis: within which field of study should EC in Japan be placed? Cohen, Manion and Morrison (2011) rely mainly upon psychology to address cross-cultural issues. Matsumoto and Yoo (2006), similarly, cite psychology sources only in their dissection of research bias in cross-cultural research. A telling example is their citation of Allik and McCrae’s (see also Allik & McCrae, 2004) “reverse causation” of how culture impacts on personality (Matsumoto & Yoo, 2006, p. 240). However, sociology and social psychology may also offer frameworks. Giddens’s (1984) structuration theory, mentioned briefly by Cohen, Manion and Morrison (2011) in the form of the double hermeneutic but left undiscussed, and symbolic interactionism (Blumer, 1969) both offer competing conceptions of how individuals are influenced by their environment.

There are other concerns, but space is limited, and perhaps we can discuss these during the week. For now, my data collection, as yet undecided in its exact form and content, needs to address these foundational issues. Does anyone have any thoughts on other issues that affect this? Also, I’m in Nagoya this weekend for our national conference, so I may not be able to respond in a timely fashion. Please forgive me.

Jim

Allik, J., & McCrae, R. R. (2004). Toward a Geography of Personality Traits: Patterns of Profiles across 36 Cultures. Journal of Cross-Cultural Psychology, 35(1), 13–28. http://doi.org/10.1177/0022022103260382

Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Baxter Magolda, M. B., Creamer, E. G., & Meszaros, P. S. (2010). Development and Assessment of Self-Authorship: Exploring the concept across cultures. Sterling, Virginia: Stylus Publishing, LLC.

Blumer, H. (1969). Symbolic Interactionism: Perspective and Method. Berkeley: University of California Press.

Chan, K., & Elliott, R. G. (2004). Epistemological beliefs across cultures: critique and analysis of beliefs structure studies. Educational Psychology, 24(2), 123–142. http://doi.org/10.1080/0144341032000160100

Cohen, L., Manion, L., & Morrison, K. (2011). Research Methods in Education (7th ed.). Abingdon: Routledge.

Felton, M. K., & Kuhn, D. (2007). “How Do I Know?” The Epistemological Roots of Critical Thinking. The Journal of Museum Education, 32(2), 101–110. http://doi.org/10.2307/40479581

Giddens, A. (1984). The Constitution of Society: Outline of the Theory of Structuration. Cambridge: Polity Press.

Greene, J. A., Sandoval, W. A., & Bråten, I. (2016). An Introduction to Epistemic Cognition. In Handbook of Epistemic Cognition (pp. 1–15). New York: Routledge. http://doi.org/10.4324/9781315795225

Greene, J., Sandoval, W., & Bråten, I. (2016). Handbook of Epistemic Cognition. New York: Routledge.

Hofer, B. K. (2006). Domain specificity of personal epistemology: Resolved questions, persistent issues, new models. International Journal of Educational Research, 45(1–2), 85–95. http://doi.org/10.1016/j.ijer.2006.08.006

Hofer, B. K. (2010). Personal epistemology, learning, and cultural context: Japan and the United States. In M. B. Baxter Magolda, E. G. Creamer, & P. S. Meszaros (Eds.), Development and Assessment of Self-Authorship: Exploring the concept across cultures (pp. 133–148). Sterling, Virginia: Stylus Publishing, LLC.

Hofer, B. K., & Pintrich, P. R. (1997). The Development of Epistemological Theories: Beliefs About Knowledge and Knowing and Their Relation to Learning. Review of Educational Research, 67(1), 88–140. http://doi.org/10.3102/00346543067001088

King, P. M., & Kitchener, K. (1994). Developing Reflective Judgment: Understanding and Promoting Intellectual Growth and Critical Thinking in Adolescents and Adults. San Francisco, CA: Jossey-Bass.

Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding. Cognitive Development, 15(3), 309–328. http://doi.org/10.1016/S0885-2014(00)00030-7

Matsumoto, D., & Yoo, S. H. (2006). Toward a New Generation of Cross-Cultural Research. Perspectives on Psychological Science, 1(3), 234–250.

Oshima, J., Horino, R., Oshima, R., Yamamoto, T., Inagaki, S., Takenaka, M., … Nakayama, H. (2006). Changing Teachers’ Epistemological Perspectives: A case study of teacher–researcher collaborative lesson studies in Japan. Teaching Education, 17(1), 43–57. http://doi.org/10.1080/10476210500527931

Perry, W. G. (1970). Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart, and Winston.

Schommer, M. A. (1989). The effects of beliefs about the nature of knowledge on comprehension. University of Illinois at Urbana-Champaign.

Schommer-Aikins, M. (2004). Explaining the Epistemological Belief System. Educational Psychologist, 39(1), 19–29. http://doi.org/10.1207/s15326985ep3901_3

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory (EBI). In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 261–276). Mahwah, New Jersey: Lawrence Erlbaum Associates, Inc.


EDEV_507 Week 7_5

I sense a similarity between your study of reasons for the high dropout rate amongst black students and my attempt to map the personal epistemic cognition (EC) of Japanese university students. In both cases, there has been a significant body of research in other contexts. I won’t list the EC literature, but a brief search into black student dropout rates returned many studies in the U.S. context (e.g. Carpenter & Ramirez, 2007; Guryan, 2004; Steele, 1992), in South Africa (e.g. Letseka & Maile, 2008) and in other geopolitical areas. A further distinction can be drawn between dropouts in secondary and tertiary schooling (e.g. Lee, Cornell, Gregory & Fan, 2011, who compared the dropout rates between black and white students at the secondary level).

In my own research, I am compiling a list of the constructs that the available survey instruments try to capture. I suspect that there are many equivalent instruments for researching black student dropout rates.

Rather than re-invent the wheel, so to speak (although if that has to be done, so be it), there may be much available research on which to base your reasoning and subsequent research design. In my case, I would dearly love the time (5 years), resources (software, technical skills in coding and interviewing, marksheet reader and marksheets for surveys, money for translation services, etc.), access (to students for surveys and interviews) and Japanese language ability to conduct a bottom-up, from-scratch phenomenology of EC. This is not possible. It seems more feasible for me to adapt or create a survey instrument based on existing ones, and then to use theoretical sampling of select cases (i.e. individuals) from the results for exploratory interviews aimed at theory building.

You mentioned a survey that found that 40% of black students in your institution “lag behind in graduation” (can this be defined more precisely?) (Amann, 2016). What other information was captured by that study? Was the survey set up to produce descriptive statistics or was there a set of factors underpinning the questions? Or did the survey attempt to uncover the processes through which black students ‘progress’ from being comfortable first-year entrants (note my assumption here) to failing students later on in their college career?

Your study will be a very useful one, and I truly hope that it allows you to gain tenure. Best wishes,

Jim

Aman, J. (2016, November 21). Re: Week 7 – Research planning and design: Establishing your approach. [Online discussion post]. Retrieved from https://my.ohecampus.com/lens/home?locale=en_us#

Carpenter, D. M., & Ramirez, A. (2007). More than one gap: Dropout rate gaps between and among Black, Hispanic, and White students. Journal of Advanced Academics, 19(1), 32-64.

Guryan, J. (2004). Desegregation and black dropout rates. The American Economic Review, 94(4), 919-943.

Lee, T., Cornell, D., Gregory, A., & Fan, X. (2011). High suspension schools and dropout rates for black and white students. Education and Treatment of Children, 34(2), 167-192.

Letseka, M., & Maile, S. (2008). High university drop-out rates: A threat to South Africa’s future. Pretoria: Human Sciences Research Council.

Steele, C. M. (1992). Race and the schooling of Black Americans. The Atlantic Monthly, 269(4), 68-78.


EDEV_507 Week 7_4

Continuing our discussion on the reliability of findings in a grounded study approach, I’d like to comment on the notion of subjectivity. Gasson’s (2004) metaphor of the hot stove (p. 85) inadvertently highlights a serious issue in generalisation: namely;

“If we put our hand on the stove and it is burned, we learn that hot stoves will burn us. But then it is through deduction from empirical evidence that we can identify and avoid hot stoves (this is the expected shape for a stove and it is turned on)” (p. 85)

and subsequently;

“Inductive analysis is treated as suspect because it introduces subjectivity into research and so the findings can be challenged, from a positivist perspective, as not measured from, but subjectively associated with the situation observed” (p. 85, italics in original).

To continue the metaphor, look at the conditions of the stove. Its status as ‘on’ is confirmed through a visual, not a tactile, stimulus. Yet, given the continuum from ‘just on—and still cold’ to ‘on for a while—and piping hot’, we recognise that there is no available epistemology in Gasson’s example that accounts for ‘on’ states that are still touchable. This seemingly trivial example helps place the role of subjectivity within the potential wider phenomenological frameworks available to the human experience. If all observers, i.e. human agents who have the same (or within a humanly measurable range of) perceptions, record any phenomenon with accurate measurements, then, arguably, the results should be similar.

This argument is known as Aumann’s agreement theorem (Aaronson, 2004; Aumann, 1976), which posits that;

“If two people have the same priors, and their posteriors for an event A are common knowledge, then these posteriors are equal” (Aumann, 1976, p. 1236).

In other words, if two individuals have the same knowledge (through experience, learning and cognitive abilities—an unlikely situation but one that offers a philosophical base from which comparisons become possible), and they know equally about a future possibility, for example the existence of aliens (in Aaronson’s [2004] example), then those two individuals will necessarily hold the same opinion about the likely existence of extra-terrestrial aliens.
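For reference, the theorem can be stated compactly. The notation below is my own paraphrase of Aumann’s (1976) formulation, assuming a shared finite probability space and a common prior, and is intended only as a sketch of the formal claim, not a quotation:

```latex
% My paraphrase of Aumann's (1976) agreement theorem, not a quotation.
% Agents 1 and 2 share a prior p on a finite state space \Omega; their private
% information is given by partitions \mathcal{P}_1 and \mathcal{P}_2 of \Omega.
% For an event A, the posterior of agent i at the true state \omega is
\[
  q_i \;=\; p\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i = 1, 2.
\]
% The theorem: if q_1 and q_2 are common knowledge at \omega, then
\[
  q_1 = q_2 .
\]
```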

It is trivial to state that all humans have different priors, and that this leads to different interpretations of sense data and to different opinions about future events. Where the issue becomes more interesting, and more relevant to grounded theory, is not in the possible differences between interpretations of data, but in how the researchers themselves differ.

“One cannot step twice into the same river, for the water into which you stepped has flowed on” (Heraclitus, cited in Bentz & Shapiro, 1998, p. 60).

Although I subscribe to the social constructionist worldview (Berger & Luckmann, 1967), and my sympathies currently lean towards critical realism (Bhaskar, 2008), I cannot shake the notion that much of constructed social fact is epiphenomenal, emanating from complex (not complicated) interactions at the intersection of perception and personal history. This, I believe, is not disputed. However, in my readings into social constructionism, especially symbolic interactionism, this intersection is rarely accounted for. Instead, divergences between interpretations of phenomena are passed over as a natural result of individual differences. Quite possibly, the thought of having to trace an individual’s experiential lived world back to birth while taking into account individual differences in psychological traits and attributes is seen as too complex to be worth the trouble. It is far easier to simply accept, as in Gasson (2004), that humans have different worldviews. However, this position is too facile.

All humans feel hungry after prolonged fasting. All humans get sleepy. All humans share so many common experiences, and our basic anatomy (including the brain’s mechanisms) is so similar, that the argument that our perceptions are unique is one that needs to be proved, not accepted.

It is unthinkable for a PhD—even less an EdD—to attempt to answer these questions. Yet, on the other side, reducing complexity to simplistic notions of difference also fails in its intellectual scope. This recognition is partly the reason that the Bentz and Shapiro (1998) text was assigned at the very beginning of this module: self-awareness is vital in research. But it only goes so far. I don’t think it is wise yet to discount the positivist* worldview in grounded theory research, although currently there are no frameworks for doing so; symbolic interactionism offers the closest model to date (Blumer, 1969; Mead, 1962).

Jim

*By ‘positivist’, I don’t mean quantitative methods per se. I mean that the ultimate understanding of our human world can be achieved by studying how epiphenomena emerge from stable levels, and by ontologising those levels further down at the social level.

Aaronson, S. (2004). The complexity of agreement. Proceedings of the Annual ACM Symposium on Theory of Computing. http://doi.org/10.1145/1060590.1060686

Aumann, R. J. (1976). Agreeing to disagree. The Annals of Statistics, 4(5), 1236–1239.

Bentz, V. M., & Shapiro, J. J. (1998). Mindful Inquiry in Social Research. Thousand Oaks: Sage Publications.

Berger, P. L., & Luckmann, T. (1967). The social construction of reality. New York: Doubleday.

Bhaskar, R. (2008). A realist theory of science. London and New York: Routledge.

Blumer, H. (1969). Symbolic Interactionism: Perspective and Method. Berkeley: University of California Press.

Gasson, S. (2004). Rigor in Grounded Theory Research: An Interpretive Perspective on Generating Theory From Qualitative Field Studies. In M. Whitman & A. Woszczynski (Eds.), The Handbook of Information Systems Research (pp. 79–102). Hershey, PA: Idea Group Publishing. http://doi.org/10.4018/978-1-59140-144-5.ch006

Mead, G. H. (1962). Mind, self, and society: From the standpoint of a social behaviourist. (C. W. Morris, Ed.). Chicago: University of Chicago Press.


EDEV_507 Week 7_3

Like you (W, I presume), I’m also in the process of learning about grounded theory (GT). I find the methodology to be intense, intellectually challenging and replete with potentially contradictory claims. Avoiding those contradictions is, as far as I can understand, a major task for the grounded theorist. Schroth (2013) notes that;

“A grounded theory is neither valid or invalid, but rather it has more or less fit, relevance, workability and modifiability” (Schroth, 2013).

Take “modifiability” as an example of a potential contradiction: this is the degree to which new data can result in the modification of the theory (Schroth, 2013). If a theory is stable, new data should not influence the theory significantly. Yet achieving that stability may not come easily. Charmaz (2006) discusses the work of Carolyn Ellis, whose grounded theoretical study entailed numerous visits to her research site, and;

“After a troubling revisit to the community three years following publication of her book, her subsequent reflections sparked new insights. Researchers with limited involvement in their respective fields probably would not have realised the limitations of their categories” (Charmaz, 2006, p. 181).

Ellis’ revision raises many questions, not least for novice researchers such as myself, about the stability of any findings I may derive from my base data. Ellis is, in Charmaz’ (2006) characterisation, an experienced researcher, one able to revisit and reflect on earlier research. Unfortunately, Charmaz did not expand on Ellis’ transformation, but I wonder how much Ellis—as a researcher—had changed during the three years after her publication, and how much that change had altered her perspective, which led to “new insights”. If her own conceptual systems had changed (possibly through post-publication discussions of her book and having to answer challenges to her methodology and findings), the new Ellis’ views of the original data would be different. In other words, it would be a different Ellis who analysed the old data, and not necessarily the “limitations of their categories” that were understood. After all, no two researchers will code the same data in the same way (Charmaz, 2006).

My own way of dealing with this problem is to critically and closely read the original phenomenological studies in personal epistemology. Currently, I am living with Baxter Magolda’s (1992) “Knowing and Reasoning in College”. Her analysis of the raw data is fascinating, and seeing how she demonstrates the underpinning of her theoretical categories through the raw data is illuminating. However, I can also pinpoint instances where the quotations from participants may have been forced into categories, and might support other interpretations if those categories were not already present.

Jim

Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage.

Schroth, S. T. (2013). Grounded theory. In Salem Press Encyclopedia. Salem Press.


EDEV_507 Week 7_2

“If you wonder about how stable your claims about existence are, would it make sense to merely investigate the existence and the very patterns and suggest in your section on further research for others to take generalizations forward?” (Amann, 2016).

This certainly is one option. I need to balance that possibility against the existing theory from the rest of the world, in which many categories (dimensions, scales and so on) have already been delineated. Thanks for your comment.

Jim 

Amann, W. (2016, November 21). Re: Week 7 – Research planning and design: Establishing your approach. [Online discussion post]. Retrieved from https://my.ohecampus.com/lens/home?locale=en_us#


EDEV_507 Week 7_1

Thank you for your comment. The issue of construct validity and sample size is made complex by the acceptability of single case research and Yin’s argument for analytic generalisation (Yin, 2008). Without such an argument, purely statistical, probabilistic calculations are well established in social research (e.g. see many of the chapters in Alasuutari, Bickman, & Brannen, 2008). When sample sizes are robust enough to be considered adequate representations of the target population, parametric statistics may be conducted, and even with smaller sizes, there are non-parametric tools available (Cohen, Manion, & Morrison, 2011; Flick, 2014). In these instances, a researcher is able to make claims about the evidence supporting a construct when the relevant statistics fall within a given level of confidence, sampling error and p-value. In other words, the issue of the existence of the construct is intimately tied to the strength of the evidence used to argue for it: and this strength, in parametric and non-parametric quantitative statistics, is ultimately related to the sample size.

However, for qualitative research, the notion of existence takes on a different meaning. The example of cancer may be an illustrative one. Without doubt (because I had it and know directly) non-Hodgkin’s lymphoma (NHL) exists (or, more accurately, the medico-biological phenomena that result in altered cellular biomechanisms and that carry the associated socio-medical label of NHL exist). The condition affects about 20 people in 100,000. If I were to take a random sample of 1,000 individuals and investigate the existence of that cancer, the expected number of patients in that sample would be just 0.2, and the chance of the sample containing even one patient would be less than 20%. Neither parametric nor non-parametric tools would be available to the researcher to argue for the existence of NHL. Similarly, if a quantitative research plan were set up to investigate the existence of a construct (i.e. NHL) in a random population, only very lucky samples would help the research.
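To make that arithmetic explicit, here is a minimal sketch in Python. It assumes only the rounded prevalence of 20 per 100,000 quoted above and a simple random sample of 1,000 people; the figures are illustrative, not epidemiological claims:

```python
# Back-of-the-envelope check, assuming a prevalence of 20 per 100,000
# and a simple random sample of 1,000 people (rounded, illustrative figures).

prevalence = 20 / 100_000      # probability that any one person has NHL
n = 1_000                      # sample size

expected_cases = n * prevalence                 # mean number of cases in the sample
p_at_least_one = 1 - (1 - prevalence) ** n      # probability the sample contains any case

print(f"Expected cases in sample: {expected_cases:.2f}")   # ~0.20
print(f"P(at least one case):     {p_at_least_one:.1%}")   # ~18.1%
```

Even in this generous scenario, the most likely outcome is a sample containing no cases at all, which is the point of the illustration.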

However, the existence of a single case of NHL is enough to show that NHL exists. The famous black swan example (Popper, 1959), and the associated notions of falsifiability and verifiability, point to a different kind of search, an investigation that gains validation upon the discovery of a single case. When a phenomenon can be argued to exist, a la Yin’s (2008) analytical generalisation, work can begin into understanding that phenomenon more accurately.

These examples do not discuss the concept of a construct directly. Constructs are hidden ideas that “have to be inferred from observable indicators” (Williams & Vogt, 2011). A wider question is that of interpretation. If an observable phenomenon is interpreted as pointing to an occluded phenomenon, how stable is that interpretation when only one instance can be observed? The black swan, for example, may be a single genetic mutation and not really sufficient evidence to disprove the white swan theory. Or it may point to the existence of a new type of swan. Sato (2016) notes the importance of theoretical sampling, the deliberate choice of further cases, for example other black swans, to improve the robustness of the theoretical category.

With these issues in mind, I progress to the task of attempting to delineate the possible constructs from the myriad of possibilities thrown up by the literature on epistemic cognition. But the question becomes: if I use a limited number of cases, how stable are the claims of existence and non-existence?

Jim

Alasuutari, P., Bickman, L., & Brannen, J. (2008). The SAGE Handbook of Social Research Methods. Los Angeles: Sage Publications.

Cohen, L., Manion, L., & Morrison, K. (2011). Research Methods in Education (7th ed.). Abingdon: Routledge.

Flick, U. (2014). The SAGE Handbook of Qualitative Data Analysis. Los Angeles: Sage Publications. http://doi.org/10.4135/9781446282243.n33

Popper, K. (1959). The Logic of Scientific Discovery. London and New York: Routledge.

Sato, H. (2016). Generalization Is Everything, or Is It?: Effectiveness of Case Study Research for Theory Construction. Annals of Business Administrative Science, 15(1), 49–58. http://doi.org/10.7880/abas.0151203a

Williams, M., & Vogt, W. P. (2011). Innovation in Social Research Methods. Los Angeles: Sage Publications.

Yin, R. K. (2008). Case study research: Design and methods. Thousand Oaks, CA: Sage Publications.


EDEV_507 Week 7 Initial Post

Module 7’s primary focus is the development of a research plan that will form the foundation for the one underpinning the actual doctoral thesis proposal. And as a doctoral thesis is partially a vehicle for demonstrating the ability to research, the next three weeks begin a serious look at the key ingredients in the toolkit for thesis development.


In this week’s post, I will address the pragmatic issue of sample size and the methodological issue of construct validity. These two issues are intimately linked, and errors or misunderstandings about the relationship between them will ultimately weaken any claims I wish to make about the research.

Ideally, I would have access to a large population of undergraduate students. From this population, I would randomly select as large a sample as time and resources allow. In doing so, the probability sampling procedure (Blaxter, Hughes, & Tight, 2006) would likely mitigate the concerns of the non-qualitatively oriented audience the research wishes to address. However, being probabilistic, the sample needs to be large enough to allow parametric analyses (not necessarily at the data analysis stage, but in presenting the rationale behind the sample selection). Figures vary regarding the required sample size, and estimates range between three and twenty times the number of variables if factor analysis methods are to be used (Mundfrom, Shaw, & Ke, 2005). As an example, one of the shortest paper-and-pen instruments for investigating epistemic cognition, the Epistemic Belief Inventory (Schraw, Bendixen, & Dunkle, 2002), has 35 items that typically factor into five categories, leading to a minimum of 105 respondents (see the sketch after this paragraph). This figure allows for no leeway, as it assumes that the instrument’s validity is perfect. To provide for confidence levels and sampling errors, many more respondents are needed (Cohen, Manion, & Morrison, 2011). In Japan, educational psychology is dominated by quantitative methodologies, and if I adopted a purely qualitative orientation (which would be acceptable for the thesis), any results based on non-probabilistic sampling would be treated with suspicion. At the very most, my target audience would consider my thesis grounds for further study.
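As a rough sketch of that rule-of-thumb arithmetic, the snippet below uses the 3x–20x multipliers and the 35-item count cited above; the 20% attrition cushion is my own hypothetical figure, not one drawn from the literature:

```python
# Rule-of-thumb respondent counts for factor analysis, using the 3x-20x
# multipliers cited above (Mundfrom, Shaw, & Ke, 2005) and the 35-item EBI.

items = 35
low_multiplier, high_multiplier = 3, 20

minimum_n = items * low_multiplier        # 105: bare minimum, no leeway
upper_n = items * high_multiplier         # 700: upper rule-of-thumb bound

# Hypothetical cushion for non-response and unusable forms (my own assumption).
attrition = 0.20
recruit_n = round(minimum_n / (1 - attrition))   # ~131 respondents to recruit

print(f"Minimum respondents:  {minimum_n}")
print(f"Upper rule-of-thumb:  {upper_n}")
print(f"Recruit with cushion: {recruit_n}")
```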

My audience is first and foremost the general education division in my university (although they don’t know it yet). They are concerned with overall academic, moral and educational development in the student body, but—to be frank within this closed community—they have little conception of how students develop over their time here. I do not wish to walk into their office and make unsubstantiated claims about how we can manage our programmes better. This thesis aims to be my proof of concept to them about how to improve our student services.

The second issue, construct validity, relates directly to sample size. I will likely conduct either a mixed-methods study or a purely exploratory qualitative one, for two reasons: it is unlikely that I will have access to as many respondents as I would like (or I am working on that assumption); and the question of how valid western instruments for capturing epistemic cognition are in the Japanese context remains largely unanswered (Hofer, 2010; B. Hofer, personal communication, October 28, 2016). However, I need to consider very carefully how I generate my sample, because any individual selected may not be representative of the general population (Blaxter et al., 2006). Following Yin (2008), a small unit of analysis does not mean that the general ideas and theories the research is promoting are invalid. Yin’s distinction between statistical and analytic generalisation argues for the acceptance of ontological existences using a single case. I will consider this more during the coming week.

Jim

Blaxter, L., Hughes, C., & Tight, M. (2006). How to Research (3rd ed.). Maidenhead: Open University Press.

Cohen, L., Manion, L., & Morrison, K. (2011). Research Methods in Education (7th ed.). Abingdon: Routledge.

Hofer, B. K. (2010). Personal epistemology, learning, and cultural context: Japan and the United States. In M. B. Baxter Magolda, E. G. Creamer, & P. S. Meszaros (Eds.), Development and Assessment of Self-Authorship: Exploring the concept across cultures (pp. 133–148). Sterling, Virginia: Stylus Publishing, LLC.

Mundfrom, D. J., Shaw, D. G., & Ke, T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5(2), 159–168. http://doi.org/10.1207/s15327574ijt0502_4

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory (EBI). In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 261–276). Mahwah, New Jersey: Lawrence Erlbaum Associates, Inc.

Yin, R. K. (2003). Case Study Research: Design and Methods. Thousand Oaks: Sage Publications.


EDEV_507 Week 6_5

I feel the force of your argument. That’s a good thing. I’m glad that you feel strongly about informing participants of research findings. Personally, I see no problem in doing so, either. My point, however, is that there is no necessary obligation to do so. Oliver (2010) points to some benefits to participants of being involved with a research project: the sense of being listened to, the notion that they are being helpful, the idea that the research may bring value to the participant’s community and so on.

We must remember that participants are not forced into the project. Or if they are, that is a serious breach of ethics (Cohen, Manion, & Morrison, 2011) and should be avoided at all costs. The issue is made more complex in the case of teacher-researchers because the power dynamics involved make covert coercion a potential threat. Leaving that glitch out of the discussion for the moment, I don’t necessarily believe that researchers ‘use’ participants any more than participants ‘use’ researchers. Sure, there is a disparity between the social levels of an educated researcher and a student, but this is only one example. Oliver (2010) also discusses the problems that arise when a younger researcher interviews senior members of an organisation.

To rephrase my point, perhaps I would say that it is unethical to deliberately hide one’s research from participants when disclosure is an option. This, I agree with. However, it is another thing to state that we have an ethical duty to provide participants with our findings. What would you do after finishing a 1,000-respondent study? Would you attempt to make sure that every one got a copy of your paper?

Jim

(Please forgive the lack of references. I’m on a long-distance bus without a reference manager. But all of the citations can be found in my earlier posts.)


EDEV_507 Week 6_4

I did the LAUR754 Ethical Issues for Practitioner-Researcher Masterclass in July. Doing the week-long course is to be recommended, and I expect that you will derive much benefit from it. (Ah! This reminded me of a question I had about the ethics of reproducing/drawing upon our earlier work. That question is now in the other discussion forum.)

One part of your post in particular drew my attention. You talk about the “moral and legal obligation” (Aman, 2016) concerning research dissemination, and you include both research participants and the general public in the scope of that dissemination. I wonder to what extent researchers are actually obligated to report their research. Oliver makes a surprising claim on this point;

“If it is intended that this [research report] will be an article in an academic journal, which is clearly in the public domain, there will be no difficulty” (2010, p. 65).

It is easily arguable that “the public domain” is not the same thing as expensive journal articles hidden behind paywalls and the general inaccessibility of academic publishing to the general public. Even researchers have great difficulty in accessing certain publications. The general public in most cases do not even know about those publications, and locating them is practically impossible. Furthermore, if the funding body for a research plan wishes to keep the final report within its own domain, can the researcher override that wish? Oliver (2010) claims not.

If I felt that I was under a moral obligation to disseminate my research to the public domain but could not (due to circumstances beyond my control), I would begin to feel uncomfortable. This creates an ethical contradiction: should the ethical duty (i.e. comfort) to conduct research result in ethical discomfort? Because of this contradiction, personally, I would not link the ethical action of research—necessarily—to its dissemination. These two issues would be better kept separate. What do you think?

Jim

Aman, J. (2016, November 16). RE: Week 6 – Ethical issues in research [Online discussion post]. Retrieved from https://my.ohecampus.com/lens/home?locale=en_us#

Oliver, P. (2010). The student’s guide to research ethics (2nd ed.). Maidenhead: Open University Press.


EDEV_507 Week 6_3

You bring up the messy topic of plagiarism. Although I fully accept the need for doctoral candidates to show the lineage of their thoughts, there is a cultural aspect of plagiarism that prevents a simple dismissal of texts that are seen to be copied. The ethical burden, ironically, may be on the western researcher to show why a non-western author has plagiarised rather than to simply reject the latter as unethical. (I have written about this topic on these forum boards before, in module 5 week 9, but I have adapted my writing to differ from that significantly, although the base content is completely derivative.)

At the undergraduate level in many Asian contexts, memorisation of standard answers is usual (Tran, 2013). Furthermore, a definition of an educated person in some areas is the demonstrated ability to quote vast quantities from canonical texts (Tweed & Lehman, 2002; Valiente, 2008). The notion of this elite, educated individual who shares common values derived from these texts is the basis of the Newmanian gentleman (Newman, 2011; Trowler, 1998). This nineteenth-century western ideal may be outmoded in Occidental research, but it remains both an ideal and a technique in other parts of the globe. A Chinese student of Pennycook’s (1996) once handed him an essay that was completely plagiarised from another source. Like many teachers, Pennycook held a consultation with the student and asked him about the text, expecting him not to know the information. Instead, the student recalled every detail from memory and produced a perfect recitation of the text. Although Pennycook used this incident to question the ownership of language and the right to dictate how language is used, the incident also serves as an example of how ethical issues do not fall into neat categories in cross-cultural discussions.

Jim

Newman, J. H. (2011). The Idea of a University Defined and Illustrated: In Nine Discourses Delivered to the Catholics of Dublin.

Pennycook, A. (1996). Borrowing Others’ Words: Text, Ownership, Memory, and Plagiarism. TESOL Quarterly, 30(2), 201–230.

Tran, T. T. (2013). Is the learning approach of students from the Confucian heritage culture problematic? Educational Research for Policy and Practice, 12(1), 57–65. http://doi.org/10.1007/s10671-012-9131-3

Trowler, P. R. (1998). Academics responding to change: New higher education frameworks and academic cultures. Buckingham: SRHE and Open University Press.

Tweed, R. G., & Lehman, D. R. (2002). Learning considered within a cultural context. Confucian and Socratic approaches. The American Psychologist, 57(2), 89–99. http://doi.org/10.1037/0003-066X.57.2.89

Valiente, C. (2008). Are students using the “wrong” style of learning?: A multicultural scrutiny for helping teachers to appreciate differences. Active Learning in Higher Education, 9(1), 73–91. http://doi.org/10.1177/1469787407086746
