Important Characteristics of Measures
• Validity
• Reliability
• Objectivity
• Usability
Validity vs. Reliability
Validity= appropriateness, correctness, meaningfulness, and usefulness of the inferences made from the results of the instruments used in a study
Reliability= consistency of scores obtained, across individuals, administrations, and sets of items
Relationship Between Reliability and Validity
Suppose I have a faulty measuring tape and I use it to measure each student’s height. Every measurement is off, but consistently so: my tool is invalid, but it’s still reliable.
On the other hand, if I have a correctly printed measuring tape, my tool is both valid and reliable.
Something can be valid & reliable.
Something can be invalid but reliable.
But if something is unreliable, it is always invalid.
Types of Validity
• Content Validity - how well the instrument's items cover the content domain it is meant to measure
• Criterion Validity - how well scores on the instrument relate to an external criterion (another measure of performance)
–Predictive Validity - Ability of the measure to predict future performance
–Concurrent Validity - Ability of the measure to correlate with a criterion measured at the same time
• Convergent vs. Discriminant Validity
Convergent Validity - showing that one measure captures the same thing as another measure of the same construct
Discriminant Validity - showing that one measure captures something quite different from another measure
• Construct Validity - how well the instrument measures the underlying construct (trait) it claims to measure
• Internal Validity - How well is your study designed?
Threats to Internal Validity:
Subject characteristics
Mortality threat (attrition)
Location
Instrumentation
Data Collectors
Testing
History
Maturation
Attitude of subjects
Regression threat
Implementation
Ways That Threats to Internal Validity Can be Minimized:
a. Standardized study conditions - The "Bus Test" - If you walked out the door and got hit by a bus, someone else could pick up right where you left off with your research.
b. Obtain more information on individuals in the sample
c. Obtain more information about details of study
d. Choice of appropriate design
Reliability Checks
• Test-Retest (aka Stability) - The same test given to the same people at different times yields consistent scores
• Equivalent Forms - Multiple forms of the same test - If one individual takes both forms of the test, the scores should be highly correlated
• Internal Consistency
–Split-half - compare scores on one half of the items to scores on the other half to check that all items measure consistently. NEVER compare the first half to the last half of the items, because fatigue or failure to finish can greatly affect answers on the second half; instead, compare odd-numbered items to even-numbered items
–Kuder-Richardson (KR-20, KR-21)
–Cronbach’s Alpha
• Inter-Rater (Agreement)
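As a rough sketch of how an internal-consistency check could be computed, here is Cronbach's alpha on a small, made-up quiz data set (5 people, 4 right/wrong items). Alpha compares the sum of the individual item variances to the variance of the total scores; the data and variable names are hypothetical, not from the lecture:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of per-person item-score lists.

    item_scores[p][i] = person p's score on item i.
    """
    k = len(item_scores[0])                       # number of items
    totals = [sum(person) for person in item_scores]
    # variance of each item across people
    item_vars = [statistics.pvariance([p[i] for p in item_scores])
                 for i in range(k)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item quiz taken by 5 people (0 = wrong, 1 = right)
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 3))   # about 0.747 here
```

Values near 1 suggest the items hang together; low or negative values suggest the items are not measuring the same thing.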
Analyzing Data
Frequency Polygon - a line graph of a frequency distribution
Normal Distribution
Descriptive Statistics = describe a sample
Inferential Statistics = describe a sample, and are inferred to a larger (target) population
•Measures of Central Tendency:
–Mean = statistical average - the best, most stable measure of central tendency
–Median = middle score
–Mode = most frequent score
• Measures of Variability
–Range = highest score minus the lowest score
–Standard deviation = roughly, the average deviation of scores from the mean (the square root of the mean squared deviation)
–Standard error of measurement = range in which the “true score” is likely to fall
Standardized scores (or z-scores) = transform raw scores into standard deviation units on the normal distribution; z = (raw score – mean) / standard deviation
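The measures above are easy to compute with Python's standard statistics module; this sketch uses a made-up list of five test scores:

```python
import statistics

scores = [70, 75, 80, 85, 90]   # hypothetical raw scores

mean = statistics.mean(scores)            # 80
median = statistics.median(scores)        # 80 (middle score)
score_range = max(scores) - min(scores)   # 90 - 70 = 20
sd = statistics.pstdev(scores)            # population standard deviation

# z-score = (raw score - mean) / standard deviation
z_scores = [(x - mean) / sd for x in scores]
print(mean, round(sd, 2), [round(z, 2) for z in z_scores])
```

A z of 0 is exactly at the mean; the lowest and highest scores here land about 1.41 standard deviations below and above it.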
Correlational Data -- plotted on scatterplots
• Correlation Coefficients
–“r” can range from -1 to +1
–Negative correlation = as one variable increases, the other decreases (r is close to -1)
–Positive correlation = as one variable increases, the other also increases (r is close to +1)
–Zero correlation = no linear relationship between the two variables (the closer r is to 0, the weaker the correlation)
*You cannot infer causation from correlation.*
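A correlation coefficient can be computed directly from means and standard deviations. Here is a small sketch with invented study-hours and exam-score data (the strong positive r it produces still says nothing about causation):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    # sum of products of deviations, scaled by n and both SDs
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

# Hypothetical data: hours studied vs. exam score for five students
study_hours = [1, 2, 3, 4, 5]
exam_score  = [60, 65, 70, 80, 85]
print(round(pearson_r(study_hours, exam_score), 3))   # close to +1
```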
Hypothesis Testing
Null Hypothesis (H0) = set up to state that there is no effect
Alternative Hypothesis (H1) = set up to state that there is an effect
These two hypotheses must be:
• Mutually Exclusive - they can't overlap - either there is no effect or there is an effect
• Exhaustive
Test by using statistics to determine the probability that the result was due to chance:
• If the probability that the result was due to chance is greater than 5%, the null hypothesis cannot be rejected
• 5% level => alpha level => .05
So, a researcher wants the probability (p) that their results were due to chance to be less than 5% (0.05).
If p is < 0.05, there is a statistically significant effect.
If p is > 0.05, there is a non-significant effect.
If my null hypothesis is true, but I reject the null, that is a Type I Error.
If my null hypothesis is true, and I fail to reject the null, that is a correct decision.
If my null hypothesis is false, and I reject the null, that is a correct decision.
If my null hypothesis is false, and I fail to reject the null, that is a Type II Error.
Rejecting a false null hypothesis is the result I want! I will do anything I can to increase my POWER (the probability of correctly rejecting a false null) to get this result.
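One way to estimate the probability that an observed group difference is due to chance, without any distributional assumptions, is a permutation test: repeatedly relabel the data at random and see how often a difference at least as large appears. This is a sketch with invented treatment and control scores, not a method from the lecture:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-tailed p-value: the fraction of random relabelings that
    produce a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)       # random relabeling of all scores
        diff = abs(statistics.mean(pooled[:n_a]) -
                   statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical test scores for a treatment and a control group
treatment = [88, 92, 85, 91, 87]
control   = [78, 75, 80, 77, 82]
p = permutation_p_value(treatment, control)
print(p < 0.05)   # can we reject H0 at the .05 alpha level?
```

Because the two hypothetical groups barely overlap, very few relabelings reproduce the observed gap, so p comes out well below .05.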
Ways Researchers May try to Increase Likelihood of Rejecting Null Hypothesis:
• Increase sample size.
• Control for extraneous variables (confounds).
• Increase the strength of the treatment.
• Use a one-tailed test when justifiable.
How do you know how “big”an effect really is?
• Effect Sizes = an estimate of the magnitude of an effect between two groups or variables
–Cohen’s d - an estimate of effect size
–η2(eta-squared) or partial η2
–Coefficient of determination (R2)
Interpreting Cohen’s d:
Small: d ≈ .2 (may be statistically significant, but not practically significant)
Medium: d ≈ .5
Large: d ≈ .8
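Cohen's d can be computed as the difference between two group means divided by a pooled standard deviation. A sketch with hypothetical treatment and control scores (same invented numbers as a two-group comparison might use):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: mean difference in pooled-standard-deviation units."""
    n1, n2 = len(group1), len(group2)
    # sample variances of each group
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

treatment = [88, 92, 85, 91, 87]   # hypothetical scores
control   = [78, 75, 80, 77, 82]
print(round(cohens_d(treatment, control), 2))   # well above .8: a large effect
```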
NEXT WEEK
• Moving into different Research Designs
–Everybody read:
• Kavale article
• Rosen & Solomon article
–Starting with Meta-Analyses
• I’ll discuss the Kavale article
• Staci, Randy, and Katie will lead discussion on Rosen & Solomon article
• Initial Article Analyses are Due
–Use guidelines on Initial Analysis handout
–Consider “What you Know to Ask So Far”
–Turn in your review and a complete copy of the article you reviewed
1st Quiz is due Tuesday at midnight.
Wednesday, January 30, 2008
Wednesday, January 23, 2008
Sampling & Measurement
I. Sampling
A. Samples vs. Populations
B. Sampling Methods
1. Quantitative Methods
2. Qualitative Methods
C. Issues in Sampling
II. Measurement
A. Measurement, Evaluation, & Assessment
B. Types of Educational Measures
C. Interpreting Data
D. Evaluating Measures
Samples vs. Populations
• Sample= group of people participating in your study
• Population= group of people to whom you want to generalize your results
–Target Population - the population you are trying to represent with your research findings
–Accessible Population - the population you are actually able to get a sample from; may or may not be the same as your target population - the closer it matches your target population, the better
Two Types of Sampling
1. Probability Sampling= take a random selection of individuals from our population, such that each individual has a known (usually equal) chance of being selected for participation in the study. The simplest form is the Simple Random Sample (aka Straight Random Sample).
2. Non-Probability Sampling (aka Non-Random Sample)= individuals are selected from the population in such a way that not everyone has an equal chance of being selected for participation in the study. Not totally random.
Probability Sampling Methods:
1. Stratified Random Sampling= select subsets of the population to participate in the study in the same proportion as they appear in the population
e.g., 400 teachers in Salt Lake area schools, 275 are female and 125 are male
I decide to sample 40% of Salt Lake area teachers. My sample contains:
40% * 400 teachers = 160 total teachers in sample
40% * 275 female teachers = 110 females in sample
40% * 125 male teachers = 50 males in sample
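Proportional stratified sampling is straightforward to sketch in code. The numbers below mirror the example (275 female and 125 male teachers, 40% sampled), but the roster IDs and function name are made up:

```python
import random

def stratified_sample(strata, fraction, seed=0):
    """Draw `fraction` of each stratum at random (proportional allocation)."""
    rng = random.Random(seed)
    sample = []
    for group in strata.values():
        k = round(len(group) * fraction)   # same proportion from every stratum
        sample.extend(rng.sample(group, k))
    return sample

# Hypothetical roster: 275 female and 125 male teacher IDs
teachers = {
    "female": [f"F{i}" for i in range(275)],
    "male":   [f"M{i}" for i in range(125)],
}
sample = stratified_sample(teachers, 0.40)
print(len(sample))   # 160 = 110 females + 50 males
```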
2. Clustered random sample= select existing groups of participants instead of creating subgroups
e.g., Instead of randomly selecting individuals in correct proportions, I randomly select groups of individuals. So now I randomly select some schools in Salt Lake area district, and all teachers in those selected schools participate in my study. But, I must ensure that those groups selected are representative of my population as a whole.
3. Two-Stage Random Sampling= combines methods 1 and 2; in stage 1, existing groups are randomly selected; in stage 2, individuals from those groups are randomly selected
e.g., Instead of randomly selecting individuals in correct proportions, I randomly select groups of individuals, then randomly select individuals from those groups
Stage 1: I randomly select some schools in Salt Lake area district.
Stage 2: From each selected school, I randomly select a subset of teachers to participate in the study
*If you don't have a really good reason for controlling your sample, it's probably better to just do a simple random sample. You can't control for every characteristic, so it's often best just to be random.
Non-Probability Sampling Methods:
1. Systematic Sampling= every nth individual in a population is selected for participation in the study
e.g., I take an alphabetical list of all teachers in Salt Lake area schools, and select every 3rd individual from that list for participation in my study. Here, 3 is my sampling interval
sampling interval = population size / desired sample size
e.g., sampling interval = 400 teachers / 160 teachers (or 40%) = 2.5
sampling ratio = proportion of individuals in population selected for sample
e.g., sampling ratio = 160/400 = .4 or 40%
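The interval and ratio formulas can be checked in a few lines. The roster is hypothetical, and note that a fractional interval like 2.5 has to be rounded to a whole step in practice, so the realized sample size drifts from the target (e.g., every 3rd person from a list of 400 yields 134, not 160):

```python
population_size = 400
desired_sample_size = 160

sampling_interval = population_size / desired_sample_size   # 2.5
sampling_ratio = desired_sample_size / population_size      # 0.4, i.e., 40%

# Every-3rd-person selection, as in the alphabetical-list example:
roster = [f"teacher_{i}" for i in range(population_size)]   # hypothetical
sample = roster[::3]                                        # indices 0, 3, 6, ...
print(sampling_interval, sampling_ratio, len(sample))
```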
2. Convenience Sampling = select from a group of individuals who are conveniently available to be participants in your study
e.g., I go into schools at lunchtime and give surveys to those teachers who can be found in the teachers’ lounge
Potential Problem:
Sample is likely to be biased –are those teachers in the lounge at lunchtime likely to be different from those who aren’t?
This type of sampling should be avoided if possible.
3. Purposive Sampling= the researcher uses past knowledge or personal judgment to select a sample that he/she thinks is representative of the population
e.g., I decide to just give my survey to teachers who are also currently enrolled in the EDPS 6030, because I *think* they are representative of the population of Salt Lake area teachers
Potential problem: Researchers may be biased about what they believe is representative of a population, or they may be just plain wrong.
Be very cautious of this kind of sampling!
Sampling in Qualitative Research
• Purposive Sampling
• Case Analysis (aka Case Study)
–Typical - the prototype, the typical example
–Extreme - the unusually extreme example
–Critical - highlights the characteristics you want to study
• Maximum Variation - you are representing the extremes of your examples (some less than typical, some typical, some extreme)
• Snowball Sampling - you select some people for your sample, then ask them to recruit more participants, who in turn recruit others, etc.
Sampling and Validity
1. What size sample is appropriate?
Descriptive => 100 subjects
Correlational=> 50 subjects
Experimental => 30 subjects per group* (You will often see less than that.)
Causal-Comparative => 30 subjects per group*
But if groups are tightly controlled, less (e.g., 15 per group) may be OK.
2. How generalizable is the sample?
external validity= the results should be generalizable beyond the conditions of the individual study
a. Population generalizability= extent to which the sample represents the population of interest
b. Ecological generalizability= degree to which the results can be extended to other settings or conditions
What is Measurement?
• Measurement - the collection of data, the gathering of information
• Evaluation - making a decision based on the information
• Where does assessment fit in? - both measurement and evaluation are lumped together
What kind of scale is the measurement based on?
• Nominal - categorical (qualitative variables)
• Ordinal - rank order, no other information.
e.g., 1st, 2nd, and 3rd place, but no details about the distance between 1st and 2nd or between 2nd and 3rd
• Interval - we do know the distance between the results - there is NO absolute zero
• Ratio - we do know the distance between the results - there IS an absolute zero
Types of Educational Measures
• Cognitive vs. Non-Cognitive
cognitive - interested in the cognitive processes involved
non-cognitive - e.g., opinions - not cognitively based
• Commercial vs. Non-Commercial
commercial - developed by a company - tried and tested, standardized, generalized
non-commercial - developed by the researcher - tailored for your own needs
• Direct vs. Indirect
direct - getting our information directly from the participants
indirect - getting our information from somewhere besides the participants
Sample Cognitive Measures
• Standardized Tests
–Achievement Tests - tests things already learned
–Aptitude Tests - tests potential for future learning
• Behavioral Measures
–Naming Time
–Response Time
–Reading Time
• WPM (words per minute)
• Eye-tracking Measures
Non-Cognitive Measures
• Surveys & Questionnaires
• Observations
• Interviews
How is an individual’s score interpreted?
1. Norm-referenced instruments= an individual’s score is based on comparison with peers (e.g., percentile rank, age/grade equivalents, grading on a curve, etc.)
2. Criterion-referenced instruments= an individual’s score is based on some predetermined standard (e.g., raw score)
Interpreting Data
Different Ways to Present Scores:
1. Raw Score= number of items answered correctly, number of times behavior is tallied, etc.
2. Derived Scores= scores changed into a more meaningful unit
a. age/grade equivalent scores= for a given score, tell what age/grade score usually falls in
b. percentile rank= ranking of score compared to all other individuals who took the test
c. standard scores (aka Z Scores) = indicates how far scores are from a reference point; usually best to use in research - allows you to compare scores from two totally different scales
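The point of standard scores, comparing performance across totally different scales, can be illustrated with two hypothetical tests scored on very different ranges:

```python
import statistics

def z_score(raw, scores):
    """Distance of a raw score from the group mean, in SD units."""
    return (raw - statistics.mean(scores)) / statistics.pstdev(scores)

# Hypothetical: one student's results on two differently scaled tests
math_scores    = [60, 70, 80, 90, 100]   # student scored 90
reading_scores = [5, 10, 15, 20, 25]     # student scored 15

# z puts both on the same scale: this student is ~0.71 SD above the
# mean in math but exactly at the mean in reading.
print(round(z_score(90, math_scores), 2), z_score(15, reading_scores))
```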
Important Characteristics of Measures
• Objectivity
• Usability - Can I use it? Am I able to interpret the data I get from it?
• Validity - Does the measure actually measure what it's supposed to measure?
• Reliability - Do I get consistent measurements over time?
Wednesday, January 16, 2008
Research Questions and Variables and Hypotheses
I.What is a researchable question?
II. Characteristics of researchable questions
III. Research Variables
A. Quantitative vs. Categorical
B. Independent vs. Dependent
IV. Hypotheses
A. Quantitative vs. Qualitative
V. Identifying Research Articles
Research Problems vs. Research Questions
Research Problem: a problem to be solved, area of concern, general question, etc.
e.g. We want to increase use of technology in K-3 classrooms in Utah.
Research Question: a clarification of the research problem, which is the focus of the research and drives the methodology chosen
e.g. Does integration of technology into teaching in grades K-3 lead to higher standardized achievement test scores than traditional teaching methods alone?
Researchable Research Questions
•Where do they come from?
–Experimenter interests
–Application issues
–Replication issues
•Do they focus on product or process? Or neither?
•Are they researchable? Unresearchable?
Researchable vs. Un-Researchable Questions
Researchable Questions–contain empirical referents
Empirical Referent –something that can be observed and/or quantified in some way
e.g., The Pepsi Challenge –which soda do people prefer more? Coca-Cola or Pepsi?
Un-Researchable Questions–contain no empirical referents, involve value judgments
e.g., Should prayer be allowed in schools?
Essential Characteristics of Good Research Questions:
1. They are feasible.
2. They are clear.
a. Conceptual or Constitutive definition = all terms in the question must be well-defined and understood
ex. In the question "Does technology in K-3 schools improve standardized test scores over traditional teaching methods?", what counts as technology? What kind of K-3 schools? Which tests? What are "traditional" teaching methods? You must be very clear about these terms.
b. Operational definition = specify how the dependent variable will be measured
3. They are significant.
4. They are ethical.
a. Protect participants from harm.
b. Ensure confidentiality.
c. Should subjects be deceived?
Variables: Quantitative vs. Categorical
1. Quantitative Variables
a.Continuous
b.Discontinuous (Discrete)
2. Categorical Variables - not quantifiable - you can't attach a number to it
Can look for relationships among:
1. Two Quantitative Variables - ex. height and weight
2. Two Categorical Variables - ex. religion and political affiliation
3. A Quantitative and Categorical Variable - ex. age and occupation (unless they make age categorical)
Variables: Independent vs. Dependent
1. Independent Variable= the variable that affects the dependent variable; the presumed cause of the effect of interest - This is what you manipulate (or select) in the experiment.
a.Manipulated
b.Selected
2. Dependent Variable= dependent on the independent variable, or what is being measured
3. Extraneous variable (aka. confound) = uncontrolled factors affecting the dependent variable
*Dependent variables and extraneous variables are separate variables. A single variable can't, in theory, be both.
Quantitative Research Hypotheses
•They should be stated in declarative form.
•They should be based on facts/research/theory. - can't just be based on a hunch
•They should be testable.
•They should be clear and concise.
•If possible, they should be directional - Take a stand! Predict the effect is going to be in a certain direction.
Qualitative Research Questions
•They are written about a central phenomenon instead of a prediction.
•They should be:
–Not too general…not too specific
–Amenable to change as data collection progresses - you may start with one idea and one direction, but move in a new direction based on new things you learn from your research - be open to whatever you might discover
–Unbiased by the researcher’s assumptions or hoped-for findings
Identifying Research Articles
1. What type of source is it?
–Primary Source–original research article
–Secondary Source–reviews, summarizes, or discusses research conducted by others
–Tertiary Source–summary of a basic topic, rather than summaries of individual studies
2. Is it peer reviewed?
–Refereed journals
•Editors vs. Reviewers
•Blind Reviews
•Level of journal in field
–Non-refereed journals
Why peer review?
•Importance of verification before dissemination
–Once the media disseminates information, it is hard to undo the damage
•ex. The scientist arguing that autism is a result of the MMR vaccine never published his results in a scientific journal
•ex. Claim of first human baby clone was based only on the company’s statement
–Greater the significance of the finding, the more important it is to ensure that the finding is valid
Is peer review an insurance policy?
•Not exactly –some fraudulent (or incorrect) claims may still make it through to publication
ex. –The Korean scientist who fabricated data supporting the landmark 2004 claim that he had created the world's first stem cells from a cloned human embryo. His false data appeared in a peer-reviewed journal, but it was still fraudulent.
•Peer review is another source of information for:
–Funding Allocation
–Quality of Research / Publication in Scientific Journals
–Quality of Research Institutions (both on department and university levels)
–Policy Decisions
Where to find research articles:
Marriott Library - ERIC (EBSCO)
Make sure it has:
Method
Participants
Measure or Instruments
Procedures
I.What is a researchable question?
II. Characteristics of researchable questions
III. Research Variables
A. Quantitative vs. Categorical
B. Independent vs. Dependent
IV. Hypotheses
A. Quantitative vs. Qualitative
V. Identifying Research Articles
Research Problems vs. Research Questions
Research Problem:a problem to be solved, area of concern, general question, etc.
e.g. We want to increase use of technology in K-3 classrooms in Utah.
Research Question:a clarification of the research problem, which is the focus of the research and drives the methodology chosen
e.g. Does integration of technology into teaching in grades K-3 lead to higher standardized achievement test scores than traditional teaching methods alone?
Researchable Research Questions
•Where do they come from?
–Experimenter interests
–Application issues
–Replication issues
•Do they focus on product or process? Or neither?
•Are they researchable? Unresearchable?
Researchable vs. Un-Researchable Questions
Researchable Questions–contain empirical referents
Empirical Referent –something that can be observed and/or quantified in some way
e.g., The Pepsi Challenge –which soda do people prefer more? Coca-Cola or Pepsi?
Un-Researchable Questions–contain no empirical referents, involve value judgments
e.g., Should prayer be allowed in schools?
Essential Characteristics of Good Research Questions:
1. They are feasible.
2. They are clear.
a. Conceptual or Constitutive definition = all terms in the question must be well-defined and understood
ex. In the question, "Does technology in K-3 schools improve standardized test scores over traditional teaching methods?" What counts as technology? What kind of K-3 schools? Which tests? What are "traditional" teaching methods? - You must be very clear about these terms.
b. Operational definition = specify how the dependent variable will be measured
3. They are significant.
4. They are ethical.
a. Protect participants from harm.
b. Ensure confidentiality.
c. Should subjects be deceived?
Variables: Quantitative vs. Categorical
1. Quantitative Variables
a.Continuous
b.Discontinuous (Discrete)
2. Categorical Variables - not quantifiable - you can't attach a number to it
Can look for relationships among:
1. Two Quantitative Variables - ex. height and weight
2. Two Categorical Variables - ex. religion and political affiliation
3. A Quantitative and Categorical Variable - ex. age and occupation (unless they make age categorical)
Variables: Independent vs. Dependent
1. Independent Variable= variable that affects the dependent variable, or is the effect of interest - This is what you are manipulating in the experiment.
a.Manipulated
b.Selected
2. Dependent Variable= dependent on the independent variable, or what is being measured
3. Extraneous variable (aka. confound) = uncontrolled factors affecting the dependent variable
*Dependent variables and extraneous variables are separate variables. They can't, in theory, be both.
Quantitative Research Hypotheses
•They should be stated in declarative form.
•They should be based on facts/research/theory. - can't just be based on a hunch
•They should be testable.
•They should be clear and concise.
•If possible, they should be directional - Take a stand! Predict the effect is going to be in a certain direction.
Qualitative Research Questions
•They are written about a central phenomenon instead of a prediction.
•They should be:
–Not too general…not too specific
–Amenable to change as data collection progresses - you may start with one idea and one direction, but move in a new direction based on new things you learn from your research - be open to whatever you might discover
--Unbiased by the researcher’s assumptions or hoped findings
Identifying Research Articles
1. What type of source is it?
–Primary Source–original research article
–Secondary Source–reviews, summarizes, or discusses research conducted by others
–Tertiary Source–summary of a basic topic, rather than summaries of individual studies
2. Is it peer reviewed?
–Refereed journals
•Editors vs. Reviewers
•Blind Reviews
•Level of journal in field
–Non-refereed journals
Why peer review?
•Importance of verification before dissemination
–Once the media disseminates information, it is hard to undo the damage
•ex. Scientist arguing autism a result of MMR vaccine never published his results in a scientific journal
•ex. Claim of first human baby clone was based only on the company’s statement
–Greater the significance of the finding, the more important it is to ensure that the finding is valid
Is peer review an insurance policy?
•Not exactly –some fraudulent (or incorrect) claims may still make it through to publication
ex. –A Korean scientist fabricated data supporting the landmark 2004 claim that he had created the world's first stem cells from a cloned human embryo. - His false data appeared in a peer-reviewed journal, but it was still fraudulent.
•Peer review is another source of information for:
–Funding Allocation
–Quality of Research / Publication in Scientific Journals
–Quality of Research Institutions (both on department and university levels)
–Policy Decisions
Where to find research articles:
Marriott Library - ERIC (EBSCO)
Make sure it has:
Method
Participants
Measure or Instruments
Procedures
Wednesday, January 9, 2008
First Day of Class
Print out the quizzes and see if you can answer the questions using your notes, lecture, reading, etc. Don't work with others on the quiz itself, but feel free to take notes and discuss those questions beforehand, then take it online.
Next week meet in the MBH computer lab (rm. 108) for class.
I. Theme is technology in education
II.
III.
IV.
Why is research important?
textbooks don't know everything
programs of instruction - we keep changing programs based on new research
(i.e. phonics vs. whole language)
who conducts research?
No Child Left Behind Act - scientifically based evidence (very narrow)
How do we know what we think we know?
personal experiences
tradition - also known as tenacity
appeal to authority - ask an expert
a priori knowledge - a hunch, intuition
reason
inductive - specific to general - after we collect the data and try to interpret and apply it
deductive - general to specific - before we ever collect the data
Science as a Way of "Knowing"
Research = systematic process of gathering, interpreting, and reporting information
In this class we're going to focus on the methods and results of a research project (rather than just the intro. and discussion about it). Has a researcher really found what he says he has found? Did he really do what he says he did?
It's up to the consumer to figure out what the implications of a research article will be. It's up to the teacher, for example, to interpret the research and decide how it will affect their instruction in the classroom.
Basic vs. Applied Research
Action vs. Theoretical Research
Are some research findings more easily applied than others?
Qualitative vs. Quantitative Research
Quantitative -
positivist orientation (the truth is out there, and I can find it)
world is a reality made up of facts to be discovered
black and white; relationships between variables, cause and effect
detached and objective researchers
goal is to generalize beyond the experiment
Qualitative -
interpretivist/constructivist orientation (there are multiple realities and multiple truths)
world is made up of multiple realities that are socially constructed by individuals' views of a situation
understand situations and events from the viewpoints of the participants involved
researchers are immersed in the situations being studied
don't necessarily try to generalize beyond the situation because the research is so tied to the context
Perhaps the best research model is a mix between qualitative and quantitative
Characteristics of both qualitative and quantitative studies:
Research creates information that should be shared publicly. (Whether it's perfect or not, it should be shared - that's the point.)
Research findings should be replicable.
Research is used to refute or support claims - NOT to PROVE them. (You can't prove anything and close the book on it. Someone can always come along and refute your claims.)
Researchers should take care to control for errors and biases.
Research findings are limited in their generalizability.
Research should be analyzed and critiqued carefully.
The Scientific Method
1. Problem
2. Clarify
3. Determine the information needed and how to get it
4. Organize the information
5. Interpret the results
This is easier to apply to quantitative research than to qualitative research.
Method of Scientific Inquiry
Objectivity
Control of bias
Willingness to alter beliefs
Verification (through replication)
Induction (generalize beyond specific cases)
Precision (details: timing, order, subject, etc.)
Truth - always working toward it, but will never prove it
How does this apply to research on technology in education?
•Research on process - focus on understanding (psychology)
•Research on product - focus on application (education)