Wednesday, February 20, 2008

Experimental Design

Experimental Designs

What makes experimental designs so great? CONTROL!!!
• In experimental designs, researchers have control over:
–Independent Variables (what is manipulated; whatever you are comparing, such as traditional vs. online technology)
–Dependent Variables (what is being measured; how is it operationalized, and how is it being measured?)
–Extraneous Variables (all of the things the researcher can't control that still have an impact on the dependent variable, such as dog color blindness)

Internal Validity and Control
• History (some uncontrolled event, such as a fire alarm)
• Selection (how do you select your subjects?)
• Maturation (individuals change over time)
• Testing (does just giving a test somehow influence what the subjects think about the Independent Variable)
• Instrumentation (something could be wrong with the test)
• Treatment Replications
• Attrition (mortality, or losing subjects during the experiment)
• Regression to the Mean
• Diffusion of Treatment - (does the independent variable from one group bleed into another group? e.g., one group tells the other group what happens in the experiment, or they talk about it and share opinions and thoughts that skew the results)
• Experimenter Effects - (is the experimenter overly nice or overly harsh with subjects? Or, if your tester is a cute girl and your subjects are 13-year-old boys, the subjects will do whatever the tester wants)
How the behavior, personality, and looks of the experimenter affect how the subjects react to and participate in the experiment
• Subject Effects - in self-report data, the attitude of the subjects or the reason they are there can affect how they participate in the experiment. Why did they participate? Does it take a certain kind of person to actually participate, and does this affect your results?
–Social Desirability Bias - the participant alters their answers to paint the picture of themselves that they want

External Validity and Control
• So more control ≈ higher levels of internal validity...
–Note, this is what allows us to make cause/effect conclusions about experimental data
• But what happens to external validity? External validity lessens the more you control, because more control means you are getting farther and farther away from the real world
*It's like a see-saw: you have to find the right balance between external validity (less control) and internal validity (more control)

Important Terms
• Randomization
–Random Selection - How do you get the participants for your study in the first place?
–vs. Random Assignment - What do you do with the participants once you've already gotten them? How do you assign them within the experiment?
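A minimal sketch of the difference in Python, using a made-up sampling frame and made-up group sizes (all names and numbers here are hypothetical):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Random SELECTION: who gets into the study in the first place.
# (The 500-person "population" list is purely hypothetical.)
population = [f"person_{i}" for i in range(500)]
participants = random.sample(population, 40)

# Random ASSIGNMENT: what happens to participants once you have them.
random.shuffle(participants)
experimental_group = participants[:20]  # level A of the IV
control_group = participants[20:]       # level B of the IV

print(len(experimental_group), len(control_group))  # 20 20
```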

Within Subjects - every participant gets every level of the independent variable (ex. in a pretest/posttest design, every participant takes both the pretest and the posttest). This is always going to be better if you can make it work, because everyone participates in everything and there is no chance that different groups will have different results because of extraneous variables.
vs. Between Subjects Variables and Designs - one group gets level A of the independent variable, but a different group gets level B (ex. experimental vs. control group designs, or men vs. women designs). You run the risk that some extraneous variable exists in one group but not the other; there might be something fundamentally different between the groups that affects your results.
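A minimal sketch of how the two layouts look as data, using made-up participants and scores (the comments note the standard analysis for each layout):

```python
# WITHIN-SUBJECTS layout: every participant has a score under every level of the IV,
# so the comparison is analyzed with a paired (related-samples) test.
within_data = [
    {"participant": "p1", "pretest": 12, "posttest": 16},
    {"participant": "p2", "pretest": 15, "posttest": 18},
    {"participant": "p3", "pretest": 11, "posttest": 14},
]

# BETWEEN-SUBJECTS layout: each participant appears once, under exactly one level,
# so the comparison is analyzed with an independent-samples test.
between_data = [
    {"participant": "p1", "group": "experimental", "score": 16},
    {"participant": "p2", "group": "experimental", "score": 18},
    {"participant": "p3", "group": "control",      "score": 12},
    {"participant": "p4", "group": "control",      "score": 15},
]
```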

Controlling for Confounds
–Holding variables constant
–Building variables into design
–Matching
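A minimal sketch of one common way to implement matching, assuming a hypothetical matching variable (a pretest score): rank participants on that variable, form pairs of similar participants, then randomly assign one member of each pair to each group.

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical participants paired with a matching variable (a pretest score).
participants = [
    ("p1", 42), ("p2", 55), ("p3", 40), ("p4", 57),
    ("p5", 48), ("p6", 50), ("p7", 61), ("p8", 60),
]

# 1. Sort by the matching variable so adjacent participants are similar.
ranked = sorted(participants, key=lambda p: p[1])

# 2. Form matched pairs, then randomly assign one member of each pair to each group.
experimental, control = [], []
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)
    experimental.append(pair[0][0])
    control.append(pair[1][0])

print("experimental:", experimental)
print("control:     ", control)
```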

Pretest/Posttest Designs
• Single Group Posttest-Only Design
– Expose 1 group to the IV and then measure the DV (posttest) once
– Example: I decide to test whether using puppets to “read” books with young children helps them learn different sound combinations. – What’s the problem? You don't have anything to compare to, no baseline to start at. There is no way you can say the puppets affected anything if you don't know what the test results were before using the puppets. To make it even better, you should have a control group that takes the pre- and posttest without the puppets, to prove it was the puppets that made the difference in test scores and not just the fact that they took the test twice.

Single Group Pretest Posttest
• For a single group, give them a pretest (DV), then the IV, then a posttest (DV again)
• Example: in the “puppet experiment,” I give students a pre-reading PA quiz, then read to them with the puppet, then give them a post-reading PA quiz
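A minimal sketch of how this comparison is often analyzed, a paired-samples t-test on hypothetical pre/post quiz scores:

```python
from scipy import stats

# Hypothetical pre- and post-reading quiz scores for the same 6 children
# (the i-th value in each list belongs to the same child).
pre = [8, 11, 9, 12, 10, 7]
post = [11, 13, 10, 15, 12, 9]

# Within-subjects comparison: paired-samples t-test on the pre/post scores.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```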

Nonequivalent Groups Pretest Posttest: Quasi-Experimental
• Involves 2 groups (experimental and control); both get the pretest (DV), then only the experimental group is exposed to the IV, and both groups get the posttest (DV)
*Uses pre-existing (intact) groups rather than random assignment
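One common way to analyze this design is to compare gain scores (posttest minus pretest) across the two groups; a minimal sketch with hypothetical scores:

```python
from scipy import stats

# Hypothetical pre/post scores for two intact classrooms (not randomly assigned).
exp_pre,  exp_post  = [8, 11, 9, 12, 10], [12, 14, 11, 16, 13]   # puppet reading
ctrl_pre, ctrl_post = [9, 10, 8, 11, 12], [10, 11, 9, 12, 13]    # regular reading only

# Compute each child's gain (post - pre) and compare gains across groups.
exp_gain = [post - pre for pre, post in zip(exp_pre, exp_post)]
ctrl_gain = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

t_stat, p_value = stats.ttest_ind(exp_gain, ctrl_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```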

Between Subjects Designs = each group of subjects receives a different level of the IV
–Advantage: often more practical than within subjects designs
–Disadvantage: are differences due to groups? Or to the IV?
–Use of Matching

And what you’ve all been waiting for... “TRUE” Experimental Designs
• Randomized Groups Pretest Posttest Design
– Individuals are randomly assigned to either the experimental or the control group; both groups receive the pretest (DV), then only the experimental group is exposed to the IV, and then both groups receive the posttest (DV)
Random assignment gives control over group differences, and the pretest allows for a baseline measure in both groups
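A minimal sketch of one standard analysis for this design, an ANCOVA-style model that predicts the posttest from group membership while controlling for the pretest (the data and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: participants randomly assigned to groups, with pre/post scores.
df = pd.DataFrame({
    "group": ["exp"] * 5 + ["ctrl"] * 5,
    "pre":   [8, 11, 9, 12, 10, 9, 10, 8, 11, 12],
    "post":  [12, 14, 11, 16, 13, 10, 11, 9, 12, 13],
})

# Does group predict the posttest once the pretest (baseline) is controlled for?
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(model.summary())
```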

Experimental Designs with more than 1 Independent Variable (IV)
Factorial Designs = measure the impact of 2 or more IVs on the Dependent Variable (DV)
(ex. test whether puppets help develop reading skills, AND whether they help English Language Learners more)
–Advantages
• Allow researchers to determine whether effects are consistent across subject characteristics
• Allow researchers to assess interactions
–Disadvantages
• Can get messy if there are too many IVs, because every IV you add changes the "personality" of the experiment - NEVER HAVE MORE THAN 3 IVs, otherwise it gets too messy
Sample Between Subjects Factorial Design
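A minimal sketch of a sample 2x2 between-subjects factorial design, using the puppet / English Language Learner example with hypothetical scores and a two-way ANOVA (the statsmodels analysis is an illustration, not part of the original notes):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2x2 between-subjects factorial data:
# IV1 = puppet reading (yes/no), IV2 = English Language Learner status (ELL/non-ELL),
# DV = reading score. Two children per cell, purely for illustration.
df = pd.DataFrame({
    "puppet": ["yes", "yes", "yes", "yes", "no", "no", "no", "no"],
    "ell":    ["ELL", "ELL", "non", "non", "ELL", "ELL", "non", "non"],
    "score":  [14, 15, 16, 17, 9, 10, 13, 14],
})

# Two-way ANOVA: main effect of puppet, main effect of ELL status,
# and the puppet x ELL interaction (does the puppet help ELLs more?).
model = smf.ols("score ~ C(puppet) * C(ell)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```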

Analyses of Exp. Designs (t and F)
• t-test (t) –used to assess the impact of two levels of one IV on a single DV
• ANOVA (F) –used to assess impact of one or more IVs on a single DV
• MANOVA –used to assess impact of one or more IVs on multiple DVs
• ANCOVA –used to assess impact of one or more IVs on a single DV after removing effects of a variable that might be correlated to DV (e.g., age, gender, aptitude, achievement, etc.)
• MANCOVA –used to assess impact of one or more IVs on multiple DVs after removing effects of a variable that might be correlated to DV (e.g., age, gender, aptitude, achievement, etc.)
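As a minimal sketch, a one-way ANOVA (F) on hypothetical scores from three levels of a single IV; with only two levels, the t-test would apply instead:

```python
from scipy import stats

# Hypothetical DV scores under three levels of a single IV
# (e.g., no puppet, puppet once a week, puppet daily).
no_puppet = [9, 10, 8, 11]
weekly    = [12, 11, 13, 12]
daily     = [14, 15, 13, 16]

# One-way ANOVA (F): one IV with 3+ levels, one DV.
f_stat, p_value = stats.f_oneway(no_puppet, weekly, daily)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```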
