EDEV_507 Week 9_4

Your decision to concentrate on either a grounded theory study or a phenomenological analysis seems practical given the limitations imposed by the EdD structure. In doing so, you can avoid the theoretical issues that potentially mar MMR studies (Bryman, 2009; Teddlie & Tashakkori, 2011). My question to you today centres on the quantitative notion of generalisability, which Rapley (2014) equates with transferability in qualitative studies.

I assume that you will interview a series of business leaders and use their interview transcripts as your data for analysis. Irrespective of the techniques involved in grounded theory (Charmaz, 2006), which may lead you towards a very robust interpretation of those data, how will you attempt to ensure the transferability of your findings if the sample is not sufficiently representative? Rapley (2014) asserts that transferability is “dependent upon the degree of similarity (fittingness) between two contexts” (p. 52). Will you pass the burden of proving transferability on to each reader, who will judge for themselves the degree of fit between your findings and their situation?

In the context of this week’s discussion (which you and I are entering into as an intellectual exercise), it may be useful to suggest that a short quantitative follow-up survey based on your qualitative findings could help support any claims of generalisability/transferability.

Jim

Bryman, A. (2009). Mixed methods in organizational research. In D. A. Buchanan & A. Bryman (Eds.), The SAGE handbook of organizational research methods (4th ed., pp. 516–531). Thousand Oaks, CA: Sage.

Rapley, T. (2014). Sampling strategies in qualitative research. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 49–63). Los Angeles: Sage.

Teddlie, C., & Tashakkori, A. (2011). Mixed methods research: Contemporary issues in an emerging field. In N. K. Denzin & Y. S. Lincoln, (Eds.), The SAGE handbook of qualitative research (pp. 285–299). Thousand Oaks, CA: Sage.


EDEV_507 Week 9_3

Thank you for your clear explanation of your understanding of MMR. I have a question for you, though. You say:

“Mixed methods research is not intrinsically superior to single method or single strategy research” (Alexandrou, 2016);

then, in the same paragraph, you add:

“As all research projects have limited resources, MMR can dilute the research effort in any area by spreading resources” (Alexandrou, 2016).

I agree with the first quotation fully. But in your mind, does your juxtaposition of these two apparently contradictory statements imply that MMR may, in fact, be more dangerous than single method research?

Furthermore, the second quotation may be based on a false assumption. If, as Cohen, Manion and Morrison (2011) frequently assert, the research question is the guiding light for a research design, is it possible that a project in danger of being diluted simply lacks a clearly articulated research question? Research must be feasible (Blaxter, Hughes, & Tight, 2006), and if a question requires an MMR approach, the timing, resources, technical skill and so on must be considered at the same time as how the question can be investigated. I fail to see how the effort can be diluted. Perhaps you can enlighten me?

Jim

Alexandrou, P. (2016, December 6). RE: Week 9 Using mixed methods [Online discussion post]. Retrieved from https://my.ohecampus.com/lens/home?locale=en_us#

Blaxter, L., Hughes, C., & Tight, M. (2006). How to Research (3rd ed.). Maidenhead: Open University Press.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). Abingdon: Routledge.


EDEV_507 Week 9_2

You make a solid argument for the use of mixed methods research (MMR). However, I found a few of the statements you made puzzling. Perhaps you could elaborate on their implications?

I’m not sure why it is “usually preferable to use more than one approach” (Alexandrou, 2016). If, as you rightly point out, the research question indicates the kinds of data to be collected, why is there the normative assertion that multiple approaches are “usually preferable”? Indeed, given the inherent complexity of managing MMR (Creswell, 2009) and the notion of parsimony that underpins good social theory (Neuman, 2014), the argument that MMR ‘should’ be avoided unless there is very good reason to use it seems more plausible to me.

Although Bryman (2009) listed ontological differences as a possible inhibitor of a successful MMR design, he neglected to describe what these may actually be. Fitzgerald and Cunningham (2002) discuss ontological stances in their exposition of five types of epistemological position that characterise modern scientific research into reading. The fundamental way that researchers view their world influences the types of forms they see as structuring that world. Accordingly, their data collection prioritises those ontological forms, and the research questions are framed by the prevailing epistemology surrounding that ontology (Fitzgerald & Cunningham, 2002). Figure 1 shows the five epistemological positions and the seven questions on epistemology each cluster attempts to answer. The point for this week’s discussion is that while MMR is potentially a valid option for many research questions, the failure to demonstrate any linkages between strands of epistemology weakens the MMR design and, by extension, any results derived from it. In light of such diverse ways of knowing, how would you justify an MMR design when the potential data collection and data analysis may be so divergent?


Figure 1. Fitzgerald and Cunningham’s five epistemological positions and seven epistemological questions

Jim

Bryman, A. (2009). Mixed methods in organizational research. In D. A. Buchanan & A. Bryman (Eds.), The SAGE handbook of organizational research methods (4th ed., pp. 516–531). Thousand Oaks, CA: Sage.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles: Sage.

Fitzgerald, J., & Cunningham, J. W. (2002). Mapping basic issues for identifying epistemological outlooks. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 209–228). Mahwah, NJ: Lawrence Erlbaum Associates.

Neuman, W. L. (2014). Social research methods: Qualitative and quantitative approaches (7th ed.). Harlow: Pearson Education.


EDEV_507 Week 9_1

It depends on what you mean by ‘macro’. Creswell (2009) uses the term ‘macro’ in two distinct ways. The first way derives from Neuman (2014) who, in accordance with most uses of ‘macro’ (e.g. see the entries in Denzin & Lincoln, 2005), defines it thus:

“Social theory focusing on the macro level of social life (e.g., social institutions, major sectors of society, entire societies, or world regions) and processes that occur over long durations (many years, multiple decades, or a century or longer)” (Neuman, 2014, p. 71).

This definition does not fit my outline of EC longitudinal research. The other use of ‘macro’ by Creswell refers to the first stage of a two-part design in which “Phase 1 was a quantitative study that looked at statistical relationships between teacher commitment and organizational antecedents and outcomes in elementary and middle schools” (Creswell, 2009, p. 221). Arguably, the analysis conducted on this Phase 1 data was more meso than macro, but leaving that distinction aside, Creswell considers macro to be related to the hypothetico-deductive model, which Teddlie and Tashakkori (2009) explain as a theoretical hypothesis that is tested through the collection of data.

Although my outline conflates a number of studies within an overarching theoretical theme, and so might be considered macro in the first sense, I had intended the second sense. This is because macro in the first sense has too wide a scope to be applicable to a localised, assumptive use of, for example, Perry (1970) as the theoretical base in a later study. That use has meso connotations if nothing else, but my point was that hypothetico-deductive models (or any design model based on earlier theory) must necessarily include the assumptions, data types, and ontologies of the base model. If the later study is QUAL or qual, it implicitly becomes a mixed methods one.

Jim

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles: Sage.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2005). The SAGE handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Neuman, W. L. (2014). Social research methods: Qualitative and quantitative approaches (7th ed.). Harlow: Pearson Education.

Perry, W. G. (1970). Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart, and Winston.

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage.


EDEV_507 Week 9 Initial Post

Epistemic cognition (EC) is a field with over fifty years of literature (Greene, Sandoval, & Bråten, 2016). The genesis, development and maturity of EC research may be regarded as a model for a longitudinal mixed methods research design, so I will briefly sketch out that history within the framework of mixed methods theory as described by Creswell (2009) before relating it to how and why I may adopt certain aspects of this longitudinal design in my own study.

Perry (1970) thought that students’ intellectual and moral development during their college careers might be a result of motivational prompts, that is, of students aligning their present with their future selves. In this way, Perry was a key link between the humanist school (e.g. Maslow, 1954) and the motivational determinists (e.g. Elliott & Dweck, 1988; Markus & Nurius, 1986). Perry’s initial motivational studies were quantitative in design and positivist in epistemology (Perry, 1970). Yet these initial forays into student development failed to convince Perry of their completeness. He embarked on a series of interviews, and the male-elite-white centredness of his exploratory phenomenology resulted in his method being replicated with females only (Belenky, Clinchy, Goldberger, & Tarule, 1987) and with mixed-gender groups (Baxter Magolda, 1992). Perry’s work is shown in Figure 1, where the initial importance of the confirmatory motivational studies gives way to the more dominant qualitative interviews.

 

Figure 1. Perry’s sequential explanatory design

The second phase in EC research centred on Schommer’s (1990) paper-and-pencil, positivist quantification of EC dimensions, which utilised a Likert-type questionnaire. (For completeness, note that Perry attempted a similar endeavour with proprietary questionnaires, for example his “Learning Environment Preferences Checklist”, but such instruments are not available to the general research public and cannot be included in this survey.) Schommer’s (1990) Epistemological Questionnaire (EQ) had sixty-three items and was reduced to thirty-two in Schraw, Bendixen and Dunkle’s (2002) Epistemic Beliefs Inventory (EBI).

Such studies are not technically mixed methods, according to the Greene classification system (cited in Bryman, 2009). However, recent work in EC has seen many studies use the quan –> QUAN pattern, especially in attempts to validate existing paper-and-pencil instruments in various educational or cultural settings and to examine how epistemic beliefs interact with other psychological phenomena. Bråten and Strømsø (2004) used a Norwegian translation of Schommer (1990) as the first component in a three-instrument trial that spanned two data collection periods (the other two instruments being Dweck’s Theories of Intelligence and Midgley’s Personal Goal Orientation). Bråten and Strømsø’s research design is implicitly quan –> QUAN because of their acceptance of the utility of the earlier instruments. Their data collection and analysis aimed to be developments of those questionnaires, and accordingly their results focused not on the validation of the instruments but on the zero-order correlations found between the various questionnaires’ items. A similar underlying design is found in Chan and Elliott (2004), who compared the EC beliefs of teachers from Hong Kong, North America and Taiwan using Schommer’s EQ. This time, the focus was on how cultural beliefs influenced EC. Figure 2 describes the implicit quan –> QUAN design in these studies.


Figure 2. Implicit quan–>QUAN Exploratory Design in EC research

The timing of the data collection components was explicitly stated in Perry (1970). Perry realised that the QUAN intentions of a purely motivational study could not answer his precise questions, and this intellectual desire placed the weighting on the QUAL sessions. There was no attempt to mix methods per se, and no EC research to date contains a qual –> QUAN design (except Zhang, 1995, whose doctoral dissertation followed that bold sequence). The later studies equally presuppose the veracity, assumptions and construct validity of the instruments they use.

All of these designs present ideas for possible robust studies of EC in the Japanese context. A Perry-esque developmental or expansionist (Bryman, 2009) design would be too time-consuming to be practical for a doctoral thesis (Blaxter, Hughes, & Tight, 2006), so the paper would necessarily be limited to a single paradigmatic type. My interest is two-fold: to test the validity of the EQ (Schommer, 1990) in the Japanese university context, and to introduce items that aim to capture culture-specific influences on knowledge beliefs. This interest suggests that an implicit quan –> QUAN design is needed. However, a level of complexity is introduced by any new items I include to cover the cultural aspects. In this case, the first-stage ‘quan’ also includes an element of QUAN –> qual as I interpret the numerical data and the statistical implications of those data.

Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Belenky, M. F., Clinchy, B. M., Goldberger, N. R., & Tarule, J. M. (1987). Women’s ways of knowing (10th anniversary ed.). New York: Basic Books.

Blaxter, L., Hughes, C., & Tight, M. (2006). How to Research (3rd ed.). Maidenhead: Open University Press.

Bråten, I., & Strømsø, H. I. (2004). Epistemological beliefs and implicit theories of intelligence as predictors of achievement goals. Contemporary Educational Psychology, 29, 371–388. http://doi.org/10.1016/j.cedpsych.2003.10.001

Bryman, A. (2009). Mixed methods in organizational research. In D. A. Buchanan & A. Bryman (Eds.), The SAGE handbook of organizational research methods (4th ed., pp. 516–531). Thousand Oaks, CA: Sage.

Chan, K., & Elliott, R. G. (2004). Epistemological beliefs across cultures: critique and analysis of beliefs structure studies. Educational Psychology, 24(2), 123–142. http://doi.org/10.1080/0144341032000160100

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles: Sage.

Elliott, E. S., & Dweck, C. S. (1988). Goals: an approach to motivation and achievement. Journal of Personality and Social Psychology, 54(1), 5–12. http://doi.org/10.1037/0022-3514.54.1.5

Greene, J., Sandoval, W., & Bråten, I. (2016). Handbook of Epistemic Cognition. New York: Routledge.

Markus, H., & Nurius, P. (1986). Possible selves. American Psychologist, 41(9), 954–969. http://doi.org/10.1037/0003-066X.41.9.954

Maslow, A. H. (1954). Motivation and personality. New York: Harper & Row.

Perry, W. G. (1970). Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart, and Winston.

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. http://doi.org/10.1037/0022-0663.82.3.498

Schraw, G., Bendixen, L. D., & Dunkle, M. E. (2002). Development and validation of the Epistemic Belief Inventory (EBI). In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 261–276). Mahwah, NJ: Lawrence Erlbaum Associates.

Zhang, L. F. (1995). The construction of a Chinese language cognitive development inventory and its use in a cross-cultural study of the Perry scheme (Doctoral dissertation). University of Iowa.


EDEV_507 Week 8_6

Here’s my take on the variable/category issue. As usual, much of the confusion arises from similar terminology being used in different ways.

In virtually all stats textbooks, a quantitative ‘variable’ is explained as something that can ‘vary’ and that is of interest in a research design. Variables fall into a number of types.

Nominal variables are those whose name (from the Latin ‘nomen’, name) defines the variable, such as ‘gender’ (leaving LGBTIQ aside for now), ‘university-educated or not’ and other binary or small categories of information. In these cases, there is no gradation between categories: someone is either ‘university-educated’ or they are not. These are also often called ‘categorical’ variables, but this use of ‘category’ is not the same as what José referred to.

Ordinal variables are those that can be ‘ordered’ in a line (from the Latin ‘ordo’, line or sequence). Ordinal variables have a hierarchy: some values are numerically bigger than others, but the distance between them is not known. A Likert scale, with options 1 to 5, is of this type; there is no way of knowing how different ‘1 to 2’ is from ‘4 to 5’. Understanding how the use of ordinal variables limits research findings is an important task. (For example, taking a mean of Likert responses is predicated on the distances between the options being equal. If, in reality, they are not, the results will be compromised.)

Continuous variables are ordered, like ordinal variables, but the distance between values is also known. My height is different from yours, and the degree of difference can be known to whatever precision the measuring instrument allows. Arguably, only continuous variables really exist, but for ease of research the other types are used routinely.
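As an aside, here is a minimal sketch in Python (with entirely made-up numbers, not data from any study mentioned here) of how the three variable types might be represented, and of the assumption that is quietly made when Likert responses are averaged:

```python
import pandas as pd

# Hypothetical respondents: one nominal, one ordinal, one continuous variable
df = pd.DataFrame({
    "university_educated": ["yes", "no", "yes", "yes"],   # nominal: named categories, no gradation
    "likert_satisfaction": [1, 2, 4, 5],                   # ordinal: ordered, distances unknown
    "height_cm": [162.5, 171.0, 180.2, 158.9],             # continuous: ordered, distances known
})

# Counting category membership is all that nominal data supports
print(df["university_educated"].value_counts())

# The median uses only rank order, so it suits ordinal (Likert) data
print(df["likert_satisfaction"].median())

# Taking a mean of Likert scores assumes the gap between 1 and 2
# equals the gap between 4 and 5 -- an interval-scale assumption
print(df["likert_satisfaction"].mean())

# Means and standard deviations are unproblematic for continuous data
print(df["height_cm"].mean(), df["height_cm"].std())
```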

The fundamental feature of quantitative research is the positivistic belief that human activity can be measured and translated into continuous variables (and the other types, but let’s leave that aside here). However, two major problems have to be dealt with. The first is understanding of, and access to, hidden activity; the second is that such activity often has no corporeal form. Can you measure my happiness? You might decide to define happiness as the width of a smile, or an increased heartbeat, or a particular neurological change in the brain, or with some other measurement. No one can feel another’s happiness. The existence of happiness can never be known directly, and this kind of notion is called a construct. Of course, the selection of any proxy measurement for happiness must contain assumptions: I can smile widely when I’m angry (to hide my disdain), and my heartbeat will be faster while I’m running away from a bull. To make the measuring instrument more precise, a single invisible activity may be defined as the result of a group of measurements: the width of the smile, the degree of heartbeat change and the neurological activity. Noting how variables change together is the basis of (multi)variate analysis, and combining variables into a single package is the basis of factor analysis.
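To make that last point concrete, here is a small sketch in Python (scikit-learn) using simulated data rather than any real instrument: three proxy measurements are generated from a single hidden ‘happiness’ variable, and a one-factor model is asked to recover it.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# An unobservable construct drives three observable proxy measurements
latent_happiness = rng.normal(size=200)
smile_width       = 1.0 * latent_happiness + rng.normal(scale=0.5, size=200)
heart_rate_change = 0.8 * latent_happiness + rng.normal(scale=0.7, size=200)
neural_signal     = 0.6 * latent_happiness + rng.normal(scale=0.9, size=200)

X = np.column_stack([smile_width, heart_rate_change, neural_signal])

# A one-factor model asks whether the proxies co-vary as if driven by one hidden variable
fa = FactorAnalysis(n_components=1, random_state=0)
factor_scores = fa.fit_transform(X)

print("Loadings (how strongly each proxy reflects the factor):", fa.components_.round(2))
print("Correlation between factor scores and the 'true' construct:",
      round(np.corrcoef(factor_scores.ravel(), latent_happiness)[0, 1], 2))
```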

In qualitative research, the purpose and function of variables don’t seem (I say seem because I’ve never actually done any qualitative research before) to be different. The processes and techniques are entirely distinct, though. Let’s say I want to measure the degree of happiness in a participant. I could just ask them, but I wouldn’t know how happy they were. So I could ask them again, this time adding a qualification like, ‘Compare your current happiness to the happiest you’ve ever been. How different is it?’ But then I wouldn’t know how much happier Participant 1 is than Participant 2. How could I tell the difference? Maybe the exact difference is not important, only that there is a difference. In all cases, though, I need to rely on participants telling me the truth, and I need tools and techniques for ascertaining the veracity of participant information. Notice that the questions kept getting longer until the degree of happiness was found.

This style of questioning presumes the existence of a hidden construct of happiness. But some qualitative research methods (grounded theory, for instance) do not begin by accepting the existence of the construct of happiness. Rather, information is collected on a more general theme, and through interpretation (coding, memos, constant comparison), the existence of hidden (well, constructs are always hidden) constructs becomes a possibility. The task then is to selectively limit the data collection until the construct is either shown or not. ‘Construct’ is what José means by ‘qualitative variable’ as it relates to qualitative research, although he used the terms ‘category’ and ‘theme’. Fundamentally, they are the same, and in many research designs, quantitative confirmatory research often follows qualitative exploratory research on this basis. But now we are getting on to mixed methods, and that’s for next week.

Jim


EDEV_507 Week 8_5

Thanks for the clarification. Actually, I had assumed that you would have collected both qual and quan information as you later mentioned. That bit didn’t worry me. (But José is right to make us clarify and articulate as many of our assumptions as possible. Just imagine the viva; how will you respond to similar questions? That image is how I try to conduct my work.)

My question was a little subtler than my phrasing suggested. Let me restate it in the light of your expanded information. Let’s take a simplified example, but one that may be plausible within your study.

Let’s say that you find in the quantitative data that 34% more black students drop out than white students. And you also find in a similar data set, over the same time period, that black students’ academic records (as seen in their GPAs) are 45% lower than whites’. At first glance, there seems to be a strong relationship between the two figures. Some researchers may actually stop at this point and argue that the dropout rate is a result of the low GPA. Of course, this interpretation is incomplete. We need to ask why the GPA is low. So we collect some qual data, that is, we interview some students and ask about their GPA. If we are not careful, we may receive some very biased reasons. For example, if we ask directly about GPA (e.g. by asking, ‘Why do you think your GPA is low?’), participants may talk about their lack of understanding of lecture material, or low self-esteem that inhibits their involvement in classes, or the fact that the study materials focus too much on white men’s achievements, thus undermining black students’ motivation, and so on. All of these reasons seem plausible, and I can imagine a report being generated on that basis that argues for a better understanding of black students’ experience of college.

However, let’s also say that the reality is that most black students (in this fictitious example) are poor and work much more than whites. They come to school tired, have less time to do homework and self-study, and their non-involvement in extracurricular activities means that they do not develop the degree of academic enculturation that white students have. These reasons would not have been captured in the first report.

The ‘biased reasons’ in the earlier paragraph refer to the kind of information retrieved by a simplistic and direct question. These reasons resulted in a study that was incomplete, as the previous paragraph demonstrated. Moreover, there may be even further reasons for the low GPA and the high dropout rate: for example, the university’s affirmative action policy allowed black students with lower high school achievement entry into higher education; or the physical location of the university (combined with the local cost of accommodation and transport) inadvertently favoured the local white population because of its proximity to a white neighbourhood, making black students poorer (due to costs), with all the associated problems that brings; or something else entirely.

However, my point is that all of this post has been generated from just two quantitative data points, the GPA and the dropout rate. A problem was assumed because of these figures, and on the basis of the belief that a problem actually existed, a qualitative study (i.e. the interview on the GPA) and a further set of data collection (i.e. the amount of homework, the time spent travelling to school, the amount of extracurricular activity) were undertaken. Yet all of this activity may have been based on an unsupportable assumption. This, to me, is one of the dangers of triangulation. How will you attempt to limit such biases?

Jim


EDEV_507 Week 8_3

Hi José,

Yes, I’m delighted with the weekend’s conference, but I’m a bit worried about the extra work that it has brought up. Thanks for responding to my question. I’d like to comment on this proposition.

“[T]he development and use of critical skills: people who are brought up and educated in contexts adverse to critical analysis tend to face greater challenges with developing or making use of their critical skills than those from social and political contexts open to critical judgment” (Reis Jorge, 2016).

Prima facie, and to many Western observers, this proposition seems accurate. It is a fascinating proposition on many levels, and it relates closely to any proposed cross-cultural study in EC. However, it contains a number of assumptions that are problematic. Most of the research and related commentary about the differences between Japanese and Western students’ abilities to think critically emanates from Western observers (Aoki, 2008). Scholars from the Far East hold an opposing view:

“A person who learns but doesn’t think is lost

A person who thinks but doesn’t learn is in great danger” (Confucius, cited in Aoki, 2008, p. 36)

Rohlen (cited in Aoki, 2008) argues that the first line is representative of much of Japanese schooling, whereas the second line is appropriate to schooling in the U.S. The basic argument is that the focus on rote memorisation in Confucian Heritage cultures (CHC), of which Japan is clearly a part (Phuong-Mai, Terlouw, & Pilot, 2005), de-emphasises deep thinking. The corollary is that American students learn those critical skills but fail to master enough content on which to apply them. Yet Confucius lived long before the founding of the U.S., and Rohlen’s interpretation must be bracketed as a modern creation of a false dichotomy. Confucius recognised that both types of learner can be present in the same place. The lines describe within-culture dimensions, not cross-cultural comparators. Yet the notion that CHC fail at critical analysis where Western cultures succeed is still common.

Allied to this misconception is the recognition (obvious once stated) that all governments aim to improve their educational output at the rhetorical, political and policy levels, and many include statements to the effect that critical thinking is important. Japan does this, too (Nemoto, 2009). However, Western observers of Japan have used what is a perfectly natural, Japan-internal sociological discussion as evidence that Japan lacks critical thinking skills (e.g. Cutts, 1997; McVeigh, 2002).

I’m sure it could be shown empirically that Japanese students are weaker at certain kinds of critical analysis than Western ones. Conversely, there are areas of cognitive activity in which Western students may be much weaker than their Japanese counterparts. The type of socialisation in Japan’s middle and high schools promotes group responsibility and appropriate action within a group (Aspinall, 2015). LeTendre (2000, cited in Aspinall, 2015) noticed that group organisation in Japan’s middle and high schools mirrored that in businesses. The individual’s sense of future self (Markus & Nurius, 1986) may be much clearer in Japan, yet the cost, from a Western view of critical thinking, may be the development of cognitive structures that allow the individual to flourish within the collective. Putting this simply, a silent Japanese person may be more likely to be working out how to co-ordinate their actions with the group to promote harmony and group responsibility than a loud (forgive the example) Scotsman who just wants to get things worked out (whatever that means). Tweed and Lehman (2002) compared the implications for education of a Socratic belief system with those of a Confucian one. One of their conclusions was that CHC tended to value pragmatic approaches more than Socratic cultures do. This is congruent with the notion that truth in CHC is more flexible.

All of the above, and much more, points to extreme difficulties in making cross-cultural claims about levels of critical engagement. The fundamental neurological structures are identical at birth, but brain plasticity allows individuals to develop particular neurological mechanisms within particular contexts (Carter, 2009), which seems to result in adults in different cultural contexts actually having different brain structures. Given that, how can differences be measured at maturity?

Jim

Aoki, K. (2008). Confucius vs. Socrates: The Impact of Educational Traditions of East and West in a Global Age. The International Journal of Learning, 14(11).

Aspinall, R. W. (2015). Society. In J. D. Babb (Ed.), The Sage handbook of modern Japanese studies (pp. 213–228). Sage Publications.

Carter, R. (2009). The human brain book: An illustrated guide to its structure, function, and disorders. London: Dorling Kindersley Limited (DK).

Cutts, R. L. (1997). An empire of schools: Japan’s universities and the moulding of a national power elite. Armonk: M. E. Sharpe.

Markus, H., & Nurius, P. (1986). Possible selves. American Psychologist, 41(9), 954–969. http://doi.org/10.1037/0003-066X.41.9.954

McVeigh, B. J. (2002). Japanese higher education as myth. Armonk: East Gate.

Nemoto, A. (2009). Galapagos or an isolated model of LIS educational development?: A consideration on Japanese LIS education in the international setting. In Symposium on Future Perspectives in Globalization of Library and Information Professional (pp. 1–12).

Phuong-Mai, N., Terlouw, C., & Pilot, A. (2005). Cooperative learning vs Confucian heritage culture’s collectivism: Confrontation to reveal some cultural conflicts and mismatch. Asia Europe Journal, 3(3), 403–419. http://doi.org/10.1007/s10308-005-0008-4

Reis Jorge, J. (2016, November 30). Re: Week 8 – Research planning and design: Methods and analysis [Online discussion post]. Retrieved from https://my.ohecampus.com/lens/home?locale=en_us#

Tweed, R. G., & Lehman, D. R. (2002). Learning considered within a cultural context. Confucian and Socratic approaches. The American Psychologist, 57(2), 89–99. http://doi.org/10.1037/0003-066X.57.2.89


EDEV_507 Week 8_2

Indeed, I am much more attuned to sociology than to either social psychology or psychology. In fact, I see epistemic cognition (EC) as a tool for understanding one aspect of the educational context in which I find myself. My initial interest was in personal knowledge management (KM) and in whether better-achieving students had more efficient or different systems of KM than others. I suspected that the particular values and educational systems at the macro level, i.e. in the classroom and the associated tests, impact significantly on the heuristics of KM and on those techniques of KM that are taken up by the successful student. Furthermore, I suspected that the techniques considered ‘good’ in Japan, or in any Confucian Heritage country, are not the same as those considered ‘good’ in the West. This is the basis of my sociological interest. However, to know that, a precise view of knowledge is needed, and that led me into EC.

Now, here is the question, and here is something that I would appreciate direct advice on. Without exception, all research into EC is psychological, either developmental psychology, educational psychology, or cognitive science. Would it be acceptable to attempt a thesis that completely breaks with the established field?

I suspect not, but adding some items into the questionnaire that try to capture the cultural, or social, factor is certainly on my radar.

I’d like to look at the concept of constructs and how they are operationalised as variables (to keep with this week’s theme). One outstanding feature of Confucianism is the strength of collectivism (Phuong-Mai, Terlouw, & Pilot, 2005). The operationalising of collectivism in EC cross-cultural studies seems to be missing; for example, Chan and Elliott (2000) follow a very typical pattern in Asian EC research of adopting a U.S. survey tool (in this case, Schommer, 1990) and simply noting the strength of those items that they judge to match a Confucian ideal. Chan and Elliott (2000, 2002, 2004) select Schommer’s (1990) “Omniscient Authority as one of the prominent factors” (Chan & Elliott, 2000, p. 232) and discuss it even though other factors scored higher values. I find this kind of analysis potentially interesting but ultimately limited. Schommer (1990, and by extension any study based on her survey) does not attempt to capture any other Confucian belief, nor does her sixty-three-item instrument include any mention of working with others. Individualism–collectivism is one of Hofstede’s main dimensions at the cultural level (Hofstede, Hofstede, & Minkov, 2010), and others have created tools to measure it at the individual level (Oishi, Schimmack, Diener, & Suh, 1998). A strikingly clear example can be found in Singelis (1994), who has twenty-four items (twelve for independent and twelve for interdependent self-construal) that directly attempt to operationalise the construct of collectivism/individualism; for example, item 6, “I will sacrifice my self-interest for the benefit of the group I am in” (p. 585).
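As an illustration of what operationalisation looks like at the scoring stage, here is a minimal Python sketch. The responses and the item groupings are hypothetical; they are not Singelis’s actual scoring key.

```python
import numpy as np

# Hypothetical responses from one participant to a 24-item, 7-point Likert instrument
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=24)   # values 1..7

# Illustrative split only: which items load on which subscale would come from the instrument's key
independent_items    = list(range(0, 12))    # e.g. items about uniqueness and self-direction
interdependent_items = list(range(12, 24))   # e.g. "I will sacrifice my self-interest for the benefit of the group I am in"

independent_score    = responses[independent_items].mean()
interdependent_score = responses[interdependent_items].mean()

print(f"Independent self-construal:    {independent_score:.2f}")
print(f"Interdependent self-construal: {interdependent_score:.2f}")
```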

Any instrument I create needs to address similar issues in similar ways. It is likely that I will select my variables based on the existing literature rather than ground them on intensive interviews. However, unlike other EC research, I will include more cultural and cross-cultural items.

On another note, I had a great time in Nagoya. Got very drunk. Met many great people. Had a good couple of presentations. Got elected to two officer positions. Had three textbook proposals accepted! (They will make me busy.) Came back and collapsed yesterday. Now, hopefully, I’m back to normal.

Jim

Chan, K., & Elliott, R. G. (2000). Exploratory study of epistemological beliefs of Hong Kong teacher education students: Resolving conceptual and empirical issues. Asia-Pacific Journal of Teacher Education, 28(3), 225–234.

Chan, K., & Elliott, R. G. (2002). Exploratory Study of Hong Kong Teacher Education Students’ Epistemological Beliefs: Cultural Perspectives and Implications on Beliefs Research. Contemporary Educational Psychology, 27, 392–414. http://doi.org/10.1006/ceps.2001.1102

Chan, K. W., & Elliott, R. G. (2004). Relational analysis of personal epistemology and conceptions about teaching and learning. Teaching and Teacher Education, 20(8), 817–831. http://doi.org/10.1016/j.tate.2004.09.002

Hofstede, G., Hofstede, G. J., & Minkov, M. (2010). Cultures and organisations: Software of the mind. New York: McGraw-Hill.

Oishi, S., Schimmack, U., Diener, E., & Suh, E. M. (1998). The measurement of values and individualism-collectivism. Personality and Social Psychology Bulletin, 24(11), 1177–1189.

Phuong-Mai, N., Terlouw, C., & Pilot, A. (2005). Cooperative learning vs Confucian heritage culture’s collectivism: Confrontation to reveal some cultural conflicts and mismatch. Asia Europe Journal, 3(3), 403–419. http://doi.org/10.1007/s10308-005-0008-4

Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. http://doi.org/10.1037/0022-0663.82.3.498

Singelis, T. M. (1994). The measurement of independent and interdependent self-construals. Personality and Social Psychology Bulletin, 20(5), 580–591.


EDEV_507 Week 8_1

My image of Qatar is of a hot, dry, arid country. Is there the infrastructure to deal with heavy rain? Still, given the precipitation, you ask some very good questions. I’ll respond to them on this very long bullet train journey home from the conference, where I was able to confirm that nothing in epistemic cognition (EC) is known in Japan (at least in the formulation I’ve often mentioned on these boards).

I think that I need to abandon any hope of using grounded theory (GT) in my research design. You are right; there are hypotheses in place. This is not necessarily inconsistent with Charmaz’s (2006) formulation of GT, as she recognises the implausibility of a researcher fully bracketing their theoretical knowledge of a field, and Corbin and Strauss (1990) allow forcing as a way of relating emergent GT theorising to existing theory. However, this kind of theorising is fundamentally different from that in a positivistic research plan, which places the hypotheses at the start and at the base (Cohen, Manion, & Morrison, 2011). I have come to my decision to forgo GT for two reasons. The first is that the seminal EC studies (e.g. Baxter Magolda, 1992; Belenky, Clinchy, & Goldberger, 1999; Perry, 1970) were all phenomenological and were conducted longitudinally. Their sense of credibility (Cohen et al., 2011) remains high, probably as a direct result of using the same research participants over the years, while synchronic snapshots based on paper-and-pencil survey instruments tend to produce lower internal reliability ratings (Hofer & Pintrich, 2002). But I cannot do a diachronic, longitudinal study at the EdD level. On reflection, I suspect that a paper-and-pencil snapshot would not be markedly different from a snapshot series of interviews in terms of the EC stages that would be described. The second reason is the practical problems that interviews in a foreign language bring. I would need to hire a translator to aid me either in the data collection itself or in the data analysis. Yet, without training that person, the internal reliability of the interviews would be in jeopardy (Yin, 2006).

Very briefly, I’d also like to address your contention that quantitative results need to be normally distributed. While this is true when parametric statistics are used, there are non-parametric tools that can be used when the assumptions of the parametric tests have not been met (Cohen et al., 2011).
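For example (a minimal sketch in Python/SciPy with made-up scores, not data from any study mentioned here), the Mann-Whitney U test can stand in for an independent-samples t-test when the normality assumption is doubtful:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Likert-style scores from two groups; clearly not normally distributed
group_a = rng.integers(1, 6, size=30)   # values 1..5
group_b = rng.integers(2, 7, size=30)   # values 2..6

# Parametric option: independent-samples t-test (assumes roughly normal, interval-level data)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric alternative: Mann-Whitney U works on ranks,
# so it needs neither normality nor interval-level measurement
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")
```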

By the way, why do you feel that Wang’s title is racist?

Best wishes from a flying bullet train,

Jim

Baxter Magolda, M. B. (1992). Knowing and reasoning in college: Gender-related patterns in students’ intellectual development. San Francisco: Jossey-Bass.

Belenky, M. F., Clinchy, B. M., & Goldberger, N. R. (1999). Women’s Ways of Knowing. New Directions for Student Services, 88(Winter), 17–27.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). Abingdon: Routledge.

Corbin, J. M., & Strauss, A. (1990). Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology, 13(1), 3–21. http://doi.org/10.1007/BF00988593

Hofer, B. K., & Pintrich, P. R. (Eds.). (2002). Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.

Perry, W. G. (1970). Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart, and Winston.

Yin, R. K. (2006). Mixed methods research: Are the methods genuinely integrated or merely parallel? Research in the Schools, 13(1), 41–48.
