690: instrumentation

Distinguishing features (at a very high level) of surveys, tests, observations, and content analyses.

from F/W website

__What Are Data?__ __Instrumentation__ __Validity and Reliability__ __Objectivity and Usability__ __Ways to Classify Instruments__ __Types of Instruments__ __Types of Scores__
 * The term "data" refers to the kinds of information researchers obtain on the subjects of their research.
 * The term "instrumentation" refers to the entire process of collecting data in a research investigation.
 * An important consideration in the choice of an instrument to be used in a research investigation is validity: the extent to which results from it permit researchers to draw warranted conclusions about the characteristics of the individuals studied.
 * A reliable instrument is one that gives consistent results.
 * Whenever possible, researchers try to eliminate subjectivity from the judgments they make about the achievement, performance, or characteristics of subjects.
 * An important consideration for any researcher in choosing or designing an instrument is how easy the instrument will actually be to use.
 * Research instruments can be classified in many ways. Some of the more common are in terms of who provides the data, the method of data collection, who collects the data, and what kind of response they require from the subjects.
 * Research data are data obtained by directly or indirectly assessing the subjects of a study.
 * Self-report data are data provided by the subjects of a study themselves.
 * Informant data are data provided by other people about the subjects of a study.
 * Many types of researcher-completed instruments exist. Some of the more commonly used are rating scales, interview schedules, tally sheets, flowcharts, performance checklists, anecdotal records, and time-and-motion logs.
 * There are also many types of instruments that are completed by the subjects of a study rather than the researcher. Some of the more commonly used of this type are questionnaires; self-checklists; attitude scales; personality inventories; achievement, aptitude, and performance tests; projective devices; and sociometric devices.
 * The types of items or questions used in subject-completed instruments can take many forms, but they all can be classified as either selection or supply items. Examples of selection items include true-false items, multiple-choice items, matching items, and interpretive exercises. Examples of supply items include short-answer items and essay questions.
 * An excellent source for locating already available tests is the //ERIC Clearinghouse on Assessment and Evaluation.//
 * Unobtrusive measures require no intrusion into the normal course of affairs.
 * A raw score is the initial score obtained when using an instrument; a derived score is a raw score that has been translated into a more useful score on some type of standardized basis to aid in interpretation.
 * Age/grade equivalents are scores that indicate the typical age or grade associated with an individual raw score.
 * A percentile rank is the percentage of a specific group scoring at or below a given raw score.
 * A standard score is a mathematically derived score having comparable meaning on different instruments.
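The score types above can be made concrete with a small worked example. This is a minimal sketch using a hypothetical norm group of ten raw scores (the numbers are illustrative, not from any published test's norms): the percentile rank counts who scored at or below a raw score, and the z-score is one common standard score.

```python
# Hypothetical raw scores for a small norm group on some instrument.
scores = [55, 62, 68, 70, 71, 74, 75, 78, 81, 86]

def percentile_rank(raw, group):
    """Percentage of the group scoring at or below the given raw score."""
    at_or_below = sum(1 for s in group if s <= raw)
    return 100.0 * at_or_below / len(group)

def z_score(raw, group):
    """Standard score: distance from the group mean in SD units."""
    n = len(group)
    mean = sum(group) / n
    sd = (sum((s - mean) ** 2 for s in group) / n) ** 0.5  # population SD
    return (raw - mean) / sd

print(percentile_rank(75, scores))       # 70.0 -> 70% scored at or below 75
print(round(z_score(75, scores), 2))     # 0.35 -> about a third of an SD above the mean
```

Because a z-score is expressed in standard-deviation units, it has comparable meaning across different instruments, which is exactly why derived scores aid interpretation.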

from F/W ch. 7

An instrument is the device used to collect data. Key questions for instrumentation: Where will the data be collected? When will they be collected? How often? Who will collect them?

Validity: the instrument measures what it is supposed to measure. Reliability: the instrument gives consistent results. Objectivity: subjectivity is eliminated as much as possible.

Three sources for who provides the information: the researcher, the subjects themselves (directly), or others (informants).

Tips for instrument development: be clear about which instrument is needed; see if existing instruments are available; decide on a format for each instrument; make sure the questions are valid and align with the variable being measured; have colleagues and people similar to the subjects review it; conduct a small statistical item analysis with tryout data.
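The last tip, a small item analysis on tryout data, typically computes two numbers per item: a difficulty index p (proportion answering correctly) and a discrimination index D (how much better high scorers do than low scorers). A minimal sketch with hypothetical right/wrong tryout responses (the data and function names are illustrative):

```python
# Each row is one examinee's pattern on four items (1 = correct, 0 = wrong).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def difficulty(item):
    """Difficulty index p: proportion of examinees answering the item correctly."""
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def discrimination(item):
    """Discrimination index D: p among the top half (by total score) minus
    p among the bottom half. Positive D means high scorers do better."""
    ranked = sorted(responses, key=sum, reverse=True)
    half = len(ranked) // 2
    top, bottom = ranked[:half], ranked[half:]
    p = lambda group: sum(row[item] for row in group) / len(group)
    return p(top) - p(bottom)

for i in range(4):
    print(f"item {i}: p = {difficulty(i):.2f}, D = {discrimination(i):+.2f}")
```

Items with very high or very low p, or near-zero (or negative) D, are candidates for revision before the instrument is used in the actual study.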

Types of instruments: written-response instruments, e.g., objective tests (multiple choice, true-false, etc.); performance instruments, devices designed to measure a procedure or product.

Semantic differentials: measure attitude toward a particular concept by having respondents rate it between pairs of opposites (cold/hot, good/bad), with the concept stated above the scales, e.g., "working with other students in small groups."
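Scoring a semantic differential usually means averaging the 7-point items after reverse-keying, so that a higher score always means a more favorable attitude. A minimal sketch with hypothetical adjective pairs and ratings (not from F/W); here a mark of 1 sits at the left adjective and 7 at the right, so pairs listed positive-pole-first must be reversed:

```python
# Concept rated: "working with other students in small groups"
# Rating of 1 = nearest the left adjective, 7 = nearest the right adjective.
ratings = {"good/bad": 2, "pleasant/unpleasant": 1, "hot/cold": 3}
positive_first = {"good/bad", "pleasant/unpleasant", "hot/cold"}  # need reverse-keying

def attitude_score(ratings, positive_first, points=7):
    """Average item score after reverse-keying, so higher = more favorable."""
    keyed = [
        (points + 1 - r) if pair in positive_first else r
        for pair, r in ratings.items()
    ]
    return sum(keyed) / len(keyed)

print(attitude_score(ratings, positive_first))  # (6 + 7 + 5) / 3 = 6.0
```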

Personality or character inventories: measure traits and how individuals feel about themselves. Examples: Minnesota Multiphasic Personality Inventory, IPAT Anxiety Scale, Piers-Harris Children's Self-Concept Scale ("The Way I Feel About Myself"), and Kuder Preference Record.

Achievement (ability) tests: measure an individual's knowledge or skill in a given subject. California Achievement Test: reading, language, math. Stanford Achievement Test: language usage, word meaning, spelling, math computation, social studies, science. Others: CTBS, ITBS, MAT, STEP, GRE. When comparing instructional methods, achievement is the dependent variable.

General achievement tests (e.g., STEP, GRE) cover broad areas; there are also specific achievement tests for particular subjects.

Aptitude tests: measure general ability or intelligence, including abilities not taught in school; can serve as an independent or dependent variable, or as a control variable to hold abilities constant. Aptitude measures potential to achieve, though it does so by measuring present skills or abilities. Examples: California Test of Mental Maturity (CTMM), Otis-Lennon, Stanford-Binet; Wechsler scales: WISC-III for ages 6-16, WAIS-III for ages 16+.

Performance tests: like a typing test; the most objective. A performance checklist or rating scale is used when subjective judgment is required.



from F/W website Ch. 20

__What is Content Analysis?__ __Applications of Content Analysis__ __Categorization in Content Analysis__ __Steps Involved in Content Analysis__ __Coding Categories__ __Reliability and Validity as Applied to Content Analysis__ __Data Analysis__ __Advantages and Disadvantages of Content Analysis__
 * Content analysis is an analysis of the contents of a communication.
 * Content analysis is a technique that enables researchers to study human behavior in an indirect way by analyzing communications.
 * Content analysis has wide applicability in educational research.
 * Content analysis can give researchers insights into problems that they can test by more direct methods.
 * There are several reasons to do a content analysis: to obtain descriptive information of one kind or another; to analyze observational and interview data; to test hypotheses; to check other research findings; and/or to obtain information useful in dealing with educational problems.
 * Coding (categorizing) can be done using predetermined categories or using categories that emerge as the data are reviewed.
 * In doing a content analysis, researchers should always develop a rationale (a conceptual link) to explain how the data to be collected are related to their objectives.
 * Important terms should at some point be defined.
 * All of the sampling methods used in other kinds of educational research can be applied to content analysis. Purposive sampling, however, is the most commonly used.
 * The unit of analysis ― what specifically is to be analyzed ― should be specified before the researcher begins an analysis.
 * After defining what aspects of the content are to be analyzed, the researcher needs to formulate coding categories.
 * Developing emergent coding categories requires a high level of familiarity with content.
 * In doing a content analysis, a researcher can code either the manifest or the latent content of a communication, and sometimes both.
 * The manifest content of a communication refers to the specific, clear, surface contents: the words, pictures, images, and such that are easily categorized.
 * The latent content of a document refers to the meaning underlying what is contained in a communication.
 * Reliability in content analysis is commonly checked by comparing the results of two independent scorers (categorizers).
 * Validity can be checked by comparing data obtained from manifest content to that obtained from latent content.
 * A common way to interpret content analysis data is by using frequencies (i.e., the number of specific incidents found in the data) and proportion of particular occurrences to total occurrences.
 * Another method is to use coding to develop themes to facilitate synthesis.
 * Computer analysis is extremely useful in coding data once categories have been determined. It can also be useful at times in developing such categories.
 * Two major advantages of content analysis are that it is unobtrusive and it is comparatively easy to do.
 * The major disadvantages of content analysis are that it is limited to the analysis of communications and it is difficult to establish validity.