690 research design definitions


 * || Characteristics of research designs ||

definitions of: descriptive, experimental, correlational, causal-comparative, and qualitative

from Fraenkel/Wallen website (ch.1)

__Types of Research__ __General Research Types__
 * Some of the most commonly used scientific research methodologies in education are experimental research, correlational research, causal-comparative research, survey research, content analysis research, qualitative research, and historical research.
 * Experimental research involves manipulating conditions and studying effects.
 * Correlational research involves studying relationships among variables within a single group, and frequently suggests the possibility of cause and effect.
 * Causal-comparative research involves comparing known groups who have had different experiences to determine possible causes or consequences of group membership.
 * Survey research involves describing the characteristics of a group by means of such instruments as interview schedules, questionnaires, and tests.
 * Ethnographic research concentrates on documenting or portraying the everyday experience of people using observation and interviews.
 * Ethnographic research is one form of qualitative research. Another common form of qualitative research involves case studies.
 * A case study is a detailed analysis of one or a few individuals.
 * Content analysis research involves the systematic analysis of communication.
 * Historical research involves studying some aspect of the past.
 * Action research is a type of research by practitioners designed to help improve their practice.
 * Each of the research methodologies described constitutes a different way of inquiring into reality and is thus a different tool to use in understanding what goes on in education.
 * Individual research methodologies can be classified into general research types. Descriptive studies describe a given state of affairs. Associational studies investigate relationships. Intervention studies assess the effects of a treatment or method on outcomes.
 * Quantitative and qualitative research methodologies are based on different assumptions about the purpose of research, the methods used by researchers, the kinds of studies undertaken, the researcher's role, and the degree to which generalization is possible.
 * Meta-analysis attempts to synthesize the results of all the individual studies on a given topic by statistical means.

descriptive from Marcie's slides (Descriptive)



from F/W website (glossary)
 * **Descriptive studies** || Research to describe existing conditions without analyzing relationships among variables ||


 * **Experimental research** || Research in which at least one independent variable is manipulated, other relevant variables are controlled, and the effect on one or more dependent variables is observed. ||


 * **Causal-comparative research** || Research to determine the cause for, or consequences of, existing differences in groups of individuals; also referred to as ex post facto research. ||


 * **Correlational research** || Research that involves collecting data in order to determine the degree to which a relationship exists between two or more variables. ||


 * **Qualitative research/study** || Research in which the investigator attempts to study naturally occurring phenomena in all their complexity. ||

Experimental section

From F/W ch. 13

experimental research is unique for 2 reasons: 1. only type of research that directly attempts to influence a variable 2. best for testing hypotheses about cause/effect relationships

look at effect of one independent variable on one or more dependent variables

independ. variable: aka experimental or treatment variable, it is manipulated ex. methods of instruction, types of assignments, learning materials, rewards to students, type of questions asked by teachers

depend. variable: aka criterion or outcome variable, it is the outcome of the study ex. achievement, interest in a subject, attention span, motivation, attitudes toward school

try something and systematically observe what happens

formal experiments have 2 basic conditions: 1. 2 or more conditions or methods are compared to assess the effect/s of particular conditions or treatments (indep. variable) 2. indep. variable is directly manipulated by the researcher

experimental group receives treatment of some sort; control group receives no treatment; OR comparison group receives a different treatment

control/comparison is important bc it shows if treatment has effect or if one treatment is better than the other

in ed. research researchers don't usually use a control (no treatment); they usually use a comparison group

some indep. variables can be manipulated, some cannot
manipulable: teaching method, learning activities, assignments given, materials used, counseling
not manipulable: gender, ethnicity, age, religion

indep. variable can be: 1. one form of variable vs. another 2. presence vs. absence 3. varying degrees

random assignment of subjects to groups: every individual has an equal chance of being assigned to any of the experimental or control conditions being compared
1. takes place before the experiment
2. is the process of assigning, not the result
3. groups are equivalent, differing only by chance, as long as the groups are large
4. intends to eliminate extraneous (aka additional) variables, both those the researcher is aware of and those he or she is not
different from random selection: every member of the population has an equal chance of being selected for the sample
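The selection/assignment distinction can be sketched in a few lines of Python (the student roster here is hypothetical):

```python
import random

random.seed(1)  # for reproducibility of this sketch

# Random SELECTION: every member of the population has an equal
# chance of being drawn into the sample.
population = [f"student_{i}" for i in range(100)]
sample = random.sample(population, 20)

# Random ASSIGNMENT: every sampled individual has an equal chance
# of landing in the experimental or the control condition.
shuffled = sample[:]
random.shuffle(shuffled)
experimental = shuffled[:10]
control = shuffled[10:]

print(len(experimental), len(control))  # → 10 10
```

Note that selection happens once against the population, while assignment happens afterward against the sample; both are needed for the strongest designs, but only assignment is required for a true experiment.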

minimizing threats:
1. randomization: assume that groups are equivalent
2. holding certain variables constant
3. building the variable into the design of the study
4. matching: pairs on variables of interest
5. using subjects as their own controls
6. using analysis of covariance: equates groups statistically on the basis of a pretest or other variables; posttest scores are then adjusted

quality of experiment depends on how well threats to internal validity are controlled

weak designs do not have built-in controls for threats

weak designs:
one-shot case study: single group exposed to treatment, absence of control, no way to know if treatment resulted in observation
one-group pretest-posttest design: single group is measured before and after a treatment (don't know if maturity, instrument decay, data-collecting bias, attitude, etc., play a factor)

static-group comparison design (aka nonequivalent control group design): 2 groups formed but not randomly formed, one group gets treatment, the other doesn't or gets a different treatment, observation of both groups occurs at same time

static-group pretest-posttest design: same as above but pretest is given to both groups

true experimental designs include subjects randomly assigned to treatment groups

randomized posttest only control group design: 2 groups, both randomly formed, one group gets treatment, other doesn't, both are posttested, control of threats is excellent, best design, as long as at least 40 subjects in each group

randomized pretest-posttest control group design: same as above but use pretest also

threat of pretest-treatment interaction: the pretest can alert subjects, so they do better or worse on the posttest; but the pretest lets researchers see whether the groups are equal, and if not equal, matching designs can be used

randomized Solomon 4-group design: to eliminate poss. effect of pretest. random assignment of subjects to 4 groups, 2 groups pretested and 2 not, one pretested group and one unpretested group are exposed to treatment, all 4 are posttested. strong design, but weak in that it needs a large sample to create 4 groups

random assigning with matching: pairs of individuals are matched to certain variables. then randomly assigned to experimental / control groups

mechanical matching: pairing 2 persons whose scores on a variable are similar. problems: can't match beyond 2 or 3 characteristics, need large sample, some will have no matches and have to be eliminated, making it less of a random sample

statistical matching: each subject is given a predicted score on the depend. variable, based on the correlation betw. the depend. variable and the variable to be matched. the difference betw. predicted and actual scores for each subject is used to compare experimental and control groups. when a pretest is used, the diff. betw. predicted and actual score is called a regressed gain score; preferred over a straightforward gain score (posttest minus pretest score) bc more reliable
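A minimal sketch of the regressed gain score idea, using hypothetical pretest/posttest scores and a least-squares prediction line (stdlib Python only):

```python
from statistics import mean

# Hypothetical pretest/posttest scores for a handful of subjects.
pre  = [40, 55, 62, 70, 48, 66]
post = [52, 60, 75, 78, 50, 80]

# Least-squares slope and intercept for predicting posttest from
# pretest (the prediction rests on the pre/post correlation).
mx, my = mean(pre), mean(post)
slope = sum((x - mx) * (y - my) for x, y in zip(pre, post)) / \
        sum((x - mx) ** 2 for x in pre)
intercept = my - slope * mx

for x, y in zip(pre, post):
    predicted = intercept + slope * x
    regressed_gain = y - predicted   # preferred: more reliable
    straight_gain = y - x            # posttest minus pretest
    print(f"pre={x:2d} post={y:2d} "
          f"regressed={regressed_gain:+.1f} straight={straight_gain:+d}")
```

The regressed gain compares each subject to what his or her pretest predicted, rather than to the raw pretest score, which is why it is less affected by measurement error in a single score.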

quasi-experimental designs: no random assignment

matching-only design: without randomization, subjects are matched on certain variables and then assigned to experimental/control groups; uses intact groups

counterbalanced designs: each group is exposed to all treatments but in dif. order, then posttested

time-series designs: similar to one-group pretest-posttest design except repeated measurements over a period of time before and after, more data collected on a single group, vulnerable to history/practice

factorial designs: modification of either the posttest-only control group or pretest-posttest control group designs (with or without random assignment) permitting the investigation of additional independ. variables; allows researcher to study the interaction of an independ. variable with one or more other variables (moderator variables: either treatment or subject-characteristic variables)
evaluating if an internal threat is likely: 1. What could affect? 2. What likelihood? 3. How can they be controlled? If they can't, acknowledge it.

researcher's control over experimental treatments: control of the what, who, when, and how of it

use of existing treatment groups: groups located by the researcher that are already receiving certain treatments. called causal-comparative or ex post facto studies; not considered experimental research, but legitimate

from Marcie's slides: Experimental



from F/W website:

__The Uniqueness of Experimental Research__ __Essential Characteristics of Experimental Research__ __Randomization__ __Control of Extraneous Variables__ __Weak Experimental Designs__ __True Experimental Designs__ __Matching__ __Quasi-Experimental Designs__ __Factorial Designs__
 * Experimental research is unique in that it is the only type of research that directly attempts to influence a particular variable, and it is the only type that, when used properly, can really test hypotheses about cause-and-effect relationships. Experimental designs are some of the strongest available for educational researchers to use in determining cause and effect.
 * Experiments differ from other types of research in two basic ways: comparison of treatments //and// the direct manipulation of one or more independent variables by the researcher.
 * Random assignment is an important ingredient in the best kinds of experiments. It means that every individual who is participating in the experiment has an equal chance of being assigned to any of the experimental or control conditions that are being compared.
 * The researcher in an experimental study has an opportunity to exercise far more control than in most other forms of research.
 * Some of the most common ways to control for the possibility of differential subject characteristics (in the various groups being compared) are randomization, holding certain variables constant, building the variable into the design, matching, using subjects as their own controls, and the statistical technique of ANCOVA.
 * Three weak designs that are occasionally used in experimental research are the one-shot case study design, the one-group pretest-posttest design, and the static-group design. They are considered weak because they do not have built in controls for threats to internal validity.
 * In a one-shot case study, a single group is exposed to a treatment or event, and its effects assessed.
 * In the one-group pretest-posttest design, a single group is measured or observed both before and after exposure to a treatment.
 * In the static-group comparison design, two intact groups receive different treatments.
 * Several stronger designs that are more commonly used are true experimental designs, matching designs, counterbalanced designs, time-series designs, and factorial designs. These designs do have at least some controls built into the design to control for threats to internal validity.
 * The randomized posttest-only control group design involves two groups formed by random assignment and receiving different treatments.
 * The randomized pretest-posttest control group design differs from the randomized posttest-only control group only in the use of a pretest.
 * The randomized Solomon four-group design involves random assignment of subjects to four groups, with two being pretested and two not.
 * To increase the likelihood that groups of subjects will be equivalent, pairs of subjects may be matched on certain variables. The members of the matched groups are then assigned to the experimental and control groups.
 * Matching may be either mechanical or statistical.
 * Mechanical matching is a process of pairing two persons whose scores on a particular variable are similar.
 * Two difficulties with mechanical matching are that it is very difficult to match on more than two or three variables, and that in order to match, some subjects must be eliminated from the study, since no matches can be found.
 * Statistical matching does not necessitate a loss of subjects.
 * In a counterbalanced design, all groups are exposed to all treatments, but in a different order.
 * A time-series design involves repeated measurements or observations over time, both before and after treatment.
 * Factorial designs extend the number of relationships that may be examined in an experimental study.

Causal-Comparative

from Marcie's slides:


from F/W ch. 16

causal-comparative research: determine the cause or consequences of differences that already exist betw. or among groups of individuals. a form of associational research. aka ex post facto (from Latin "after the fact") research. opposite of an experimental study, where the researcher creates differences in groups and compares performance

group diff. variable either can't be manipulated, like ethnicity, or can be but for some reason isn't, like teaching style.



from F/W website

__The Nature of Causal-Comparative Research__ __Causal-Comparative versus Correlational Research__ __Causal-Comparative versus Experimental Research__ __Steps in Causal-Comparative Research__ __Threats to Internal Validity in Causal-Comparative Research__ __Data Analysis in Causal-Comparative Studies__ __Associations Between Categorical Variables__
 * Causal-comparative research, like correlational research, seeks to identify associations among variables.
 * Causal-comparative research attempts to determine the cause or consequences of differences that already exist between or among groups of individuals.
 * The basic causal-comparative approach is to begin with a noted difference between two groups and then to look for possible causes for, or consequences of, this difference.
 * There are three types of causal-comparative research (exploration of effects, exploration of causes, exploration of consequences), which differ in their purposes and structure.
 * When an experiment would take a considerable length of time and be quite costly to conduct, a causal-comparative study is sometimes used as an alternative.
 * As in correlational studies, relationships can be identified in a causal-comparative study, but causation cannot be fully established.
 * The basic similarity between causal-comparative and correlational studies is that both seek to explore relationships among variables. When relationships are identified through causal-comparative research (or in correlational research), they often are studied at a later time by means of experimental research.
 * In experimental research, the group membership variable is manipulated; in causal-comparative research the group differences already exist.
 * The first step in formulating a problem in causal-comparative research is usually to identify and define the particular phenomena of interest, and then to consider possible causes for, or consequences of, these phenomena.
 * The important thing in selecting a sample for a causal-comparative study is to define carefully the characteristic to be studied and then to select groups that differ in this characteristic.
 * There are no limits to the kinds of instruments that can be used in a causal-comparative study.
 * The basic causal-comparative design involves selecting two groups that differ on a particular variable of interest and then comparing them on another variable or variables.
 * Two weaknesses in causal-comparative research are lack of randomization and inability to manipulate an independent variable.
 * A major threat to the internal validity of a causal-comparative study is the possibility of a subject selection bias. The chief procedures that a researcher can use to reduce this threat include matching subjects on a related variable or creating homogeneous subgroups, and the technique of statistical matching.
 * Other threats to internal validity in causal-comparative studies include location, instrumentation, and loss of subjects. In addition, type 3 studies are subject to implementation, history, maturation, attitude of subjects, regression, and testing threats.
 * The first step in a data analysis of a causal-comparative study is to construct frequency polygons.
 * Means and standard deviations are usually calculated if the variables involved are quantitative.
 * The most commonly used test in causal-comparative studies is a //t//-test for differences between means.
 * Analysis of covariance is particularly useful in causal-comparative studies.
 * The results of causal-comparative studies should always be interpreted with caution, because they do not prove cause and effect.
 * Both crossbreak tables and contingency coefficients can be used to investigate possible associations between categorical variables, although predictions from crossbreak tables are not precise. Fortunately, there are relatively few questions of interest in education that involve two categorical variables.
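Since the notes above name the //t//-test for differences between means as the workhorse analysis in causal-comparative studies, here is a minimal stdlib-Python sketch of the pooled-variance independent-samples t statistic, applied to hypothetical achievement scores for two pre-existing groups:

```python
from statistics import mean, variance

def t_statistic(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical achievement scores for two already-formed groups.
group_a = [72, 68, 75, 70, 74, 69]
group_b = [65, 63, 70, 62, 66, 64]

print(round(t_statistic(group_a, group_b), 2))  # → 3.89
```

The resulting t would then be compared against a t distribution with n_a + n_b − 2 degrees of freedom; and, as the notes stress, even a significant difference here would not prove cause and effect, because the groups were not formed by random assignment.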

Correlational

from Marcie's slides




from F/W website

__Purposes of Correlational Research__ __Complex Correlational Techniques__ __Basic Steps in Correlational Research__ __Correlation Coefficients and Their Meaning__ __Evaluating Threats to Internal Validity in Correlational Research__
 * Correlational studies are carried out either to help explain important human behaviors or to predict likely outcomes.
 * If a relationship of sufficient magnitude exists between two variables, it becomes possible to predict a score on either variable if a score on the other variable is known.
 * The variable that is used to make the prediction is called the predictor variable.
 * The variable about which the prediction is made is called the criterion variable.
 * Both scatterplots and regression lines are used in correlational studies to predict a score on a criterion variable.
 * A predicted score is never exact. As a result, researchers calculate an index of prediction error, which is known as the "standard error of estimate."
 * Multiple regression is a technique that enables a researcher to determine a correlation between a criterion variable and the best combination of two or more predictor variables.
 * The coefficient of multiple correlation (//R//) indicates the strength of the correlation between the combination of the predictor variables and the criterion variable.
 * The value of a prediction equation depends on whether it predicts successfully with a new group of individuals.
 * When the criterion variable is categorical rather than quantitative, discriminant function analysis (rather than multiple regression) must be used.
 * Factor analysis is a technique that allows a researcher to determine whether many variables can be described by a few factors.
 * Path analysis is a technique used to test a theory about the possibility of causal connections among three or more variables.
 * The basic steps in correlational research include, as in most research, selecting a problem, choosing a sample, selecting or developing instruments, determining procedures, collecting and analyzing data, and interpreting results.
 * The meaning of a given correlation coefficient depends on how it is applied.
 * Correlation coefficients below .35 show only a slight relationship between variables.
 * Correlations between .40 and .60 may have theoretical and/or practical value depending on the context.
 * Only when a correlation of .65 or higher is obtained can reasonably accurate predictions be made.
 * Correlations over .85 indicate a very strong relationship between the variables correlated.
 * Threats to the internal validity of correlational studies include subject characteristics, location, instrument decay, data collection, and testing.
 * Results of correlational studies must always be interpreted with caution, because they may suggest, but they cannot establish, causation.
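The coefficient and the standard error of estimate mentioned above can be computed from scratch with stdlib Python; the predictor/criterion data below (hours studied vs. exam score) are hypothetical:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical predictor (hours studied) and criterion (exam score).
hours  = [2, 4, 5, 7, 8, 10]
scores = [55, 60, 70, 72, 80, 85]

r = pearson_r(hours, scores)

# Standard error of estimate: the typical size of the prediction
# error when scores is predicted from hours via the regression line.
n = len(scores)
see = stdev(scores) * ((1 - r ** 2) * (n - 1) / (n - 2)) ** 0.5

print(f"r = {r:.2f}, SEE = {see:.2f}")
```

On this toy data r lands above .85 (a very strong relationship by the rough benchmarks above), so predictions of the criterion from the predictor would be reasonably accurate, with the SEE quantifying how far off a typical prediction is.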

Qualitative

from Marcie's study session:

grounded theory is qualitative: depends on researcher having a lot of expertise.

qualitative data gathering takes much longer than quantitative data gathering.

from F/W website (ch. 18 & 19)

ch. 18 __The Nature of Qualitative Research__ __Steps Involved in Qualitative Research__ __Approaches to Qualitative Research__ __Generalization in Qualitative Research__ __Ethics and Qualitative Research__ __Reconsidering Qualitative and Quantitative Research__
 * The term "qualitative research" refers to the //quality// of relationships, activities, situations, or materials.
 * The natural setting is a direct source of data and the researcher is a key part of the instrumentation process in qualitative research.
 * Qualitative data are collected mainly in the form of words or pictures and seldom involve numbers. Content analysis is a primary method of data analysis.
 * Qualitative researchers are especially interested in how things occur and particularly in the perspectives of the subjects of a study.
 * Qualitative researchers do not, usually, formulate a hypothesis beforehand and then seek to test it. Rather, they allow hypotheses to emerge as a study develops.
 * Qualitative and quantitative research differ in the philosophic assumptions which underlie the two approaches.
 * The steps involved in conducting a qualitative study are not as distinct as they are in quantitative studies. They often overlap and are sometimes even conducted concurrently.
 * All qualitative studies begin with a //foreshadowed problem,// the particular phenomenon the researcher is interested in investigating.
 * Researchers who engage in a qualitative study of some type usually select a purposive sample. Several types of purposive samples exist.
 * There is no treatment in a qualitative study, nor is there any manipulation of variables.
 * The collection of data in a qualitative study is ongoing.
 * Conclusions are drawn continuously throughout the course of a qualitative study.
 * A biographical study tells the story of the special events in the life of a single individual.
 * A researcher studies an individual's reactions to a particular phenomenon in a phenomenological study. He or she tries to describe an individual's experiences from the subject's perspective.
 * In a grounded theory study, a researcher develops a theory inductively from the data collected as part of the study.
 * A case study is a detailed study of one or (at most) a few individuals or other social units, such as a classroom, a school, or a neighborhood.
 * Generalizing is possible in qualitative research, but it is of a different type than that found in quantitative studies. It is more likely it will be done by interested practitioners.
 * The identities of all participants in a qualitative study should be protected, and they should be treated with respect.
 * Aspects of both qualitative and quantitative research often are used together in a study. Increased attention is being given to such mixed-methods studies.
 * Whether qualitative or quantitative research is the most appropriate boils down to what the researcher involved wants to find out.

Ch. 19 __Observer Roles__ __Participant versus Nonparticipant Observation__ __Observation Techniques__ __Observer Effect__ __Observer Bias__ __Sampling in Observational Studies__ __Interviewing__ __Reliability and Validity in Qualitative Research__
 * There are four roles that an observer can play in a qualitative research study, ranging from complete participant, to participant-as-observer, to observer-as-participant, to complete observer. The degree of involvement of the observer in the observed situation diminishes accordingly for each of these roles.
 * In participant observation studies, the researcher actually participates as an active member of the group in the situation or setting he or she is observing.
 * In nonparticipant observation studies, the researcher does not participate in an activity or situation but observes "from the sidelines."
 * The most common forms of nonparticipant observation studies include naturalistic observation and simulations.
 * A simulation is an artificially created situation in which subjects are asked to act out certain roles.
 * A coding scheme is a set of categories an observer uses to record a person or group's behaviors.
 * Even with a fixed coding scheme in mind, an observer must still choose what to observe.
 * A major problem in all observational research is that much that goes on may be missed.
 * The term "observer effect" refers to either the effect the presence of an observer can have on the behavior of the subjects or observer bias in the data reported. The use of audio- and videotapings is especially helpful in guarding against this effect.
 * For this reason, many researchers argue that the participants in a study should not be informed of the study's purpose until after the data have been collected.
 * Observer bias refers to the possibility that certain characteristics or ideas of observers may affect what they observe.
 * Researchers who engage in observation usually must choose a purposive sample.
 * A second major technique commonly used by qualitative researchers is in-depth interviewing.
 * The purpose of interviewing the participants in a qualitative study is not only to find out what they think or how they feel about something but also to provide a check on the researcher's observations.
 * Interviews may be structured, semistructured, informal, or retrospective.
 * The six types of questions asked by interviewers are background or demographic questions, knowledge questions, experience of behavior questions, opinion or values questions, feelings questions, and sensory questions.
 * Respect for the individual being interviewed is a paramount expectation in any proper interview.
 * Key actors are people in any group who are more informed about the culture and history of the group and who also are more articulate, than others.
 * A focus group interview is an interview with a small, fairly homogeneous group of people who respond to a series of questions asked by the interviewer.
 * The most effective characteristic of a good interviewer is a strong interest in people and in listening to what they have to say.
 * An important check on the validity and reliability of the researcher's interpretations in qualitative research is to compare one informant's description of something with another informant's description of the same thing.
 * Another, although more difficult, check on reliability/validity is to compare information on the same topic obtained from different sources or methods: triangulation.
 * Efforts to ensure reliability and validity include use of proper vocabulary, recording questions used as well as personal reactions, describing content, and documenting sources.

ethnographic

from Marcie's study session:


 * **Marcie describes it as:** ethnography: understand a setting as well as possible and be accepted by the group (not clandestinely); you couldn't be a complete participant; time consuming, can't see a group once and then say you know them; usually uses observation and interviews. a participant in the action, but not hidden: everyone knows you are

from F/W website: Ch. 21

__The Nature and Value of Ethnographic Research__ __Ethnographic Concepts__ __Topics That Lend Themselves Well to Ethnographic Research__ __Sampling in Ethnographic Research__ __The Use of Hypotheses in Ethnographic Research__ __Data Collection and Analysis in Ethnographic Research__ __Fieldwork__ __Advantages and Disadvantages of Ethnographic Research__
 * Ethnographic research is particularly appropriate for behaviors that are best understood by observing them within their natural settings.
 * The key techniques in all ethnographic studies are in-depth interviewing and highly detailed, almost continual, ongoing participant observation of a situation.
 * A key strength of ethnographic research is that it provides the researcher with a much more comprehensive perspective than do other forms of educational research.
 * Important concepts in ethnographic research include //culture, holistic outlook, contextualization, emic perspective,// and //multiple realities.//
 * Topics that lend themselves well to ethnographic research include those that defy simple quantification; those that can best be understood in a natural setting; those that involve studying individual or group activities over time; those that involve studying the roles that individuals play and the behaviors associated with those roles; those that involve studying the activities and behaviors of groups as a unit; and those that involve studying formal organizations in their totality.
 * The sample in ethnographic studies is almost always purposive.
 * The data obtained from ethnographic research samples rarely, if ever, permit generalization to a population.
 * Ethnographic researchers seldom formulate precise hypotheses ahead of time. Rather, they develop them as their study emerges.
 * The major means of data collection in ethnographic research are participant observation and detailed interviewing.
 * Researchers use a variety of instruments in ethnographic studies to collect data and to check validity. This is frequently referred to as "triangulation."
 * Analysis consists of continual reworking of data with emphasis on patterns, key events, and use of visual representations in addition to interviews and observations.
 * Field notes are the notes a researcher in an ethnographic study takes in the field.
 * They include both descriptive field notes (what he or she sees and hears) and reflective field notes (what he or she thinks about what has been observed).
 * Field jottings refer to quick notes about something the researcher wants to write more about later.
 * A field diary is a personal statement of the researcher's feelings and opinions about the people and situations he or she is observing.
 * A field log is a sort of running account of how the researcher plans to spend his or her time compared to how he or she actually spends it.
 * A key strength of ethnographic research is that it provides a much more comprehensive perspective than other forms of educational research. It lends itself well to topics that are not easily quantified. Also, it is particularly appropriate to studying behaviors best understood in their natural settings.
 * Like all research, ethnographic research also has its limitations. It is highly dependent on the particular researcher's observations. Furthermore, some observer bias is almost impossible to eliminate. Lastly, generalization is practically nonexistent.
