
    A peer-reviewed electronic journal.

Copyright is retained by the first or sole author, who grants right of first publication to Practical Assessment, Research & Evaluation. Permission is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms.

    Volume 19, Number 6, June 2014 ISSN 1531-7714

Quasi-Experiments in Schools: The Case for Historical Cohort Control Groups

    Tamara M. Walser, University of North Carolina Wilmington, NC

There is increased emphasis on using experimental and quasi-experimental methods to evaluate educational programs; however, educational evaluators and school leaders are often faced with challenges when implementing such designs in educational settings. Use of a historical cohort control group design provides a viable option for conducting quasi-experiments in school-based outcome evaluation. A cohort is a successive group that goes through some experience together, such as a grade level or a training program. A historical cohort comparison group is a cohort group selected from pre-treatment archival data and matched to a subsequent cohort currently receiving a treatment. Although prone to the same threats to study validity as any quasi-experiment, issues related to selection, history, and maturation can be particularly challenging. However, use of a historical cohort control group can reduce noncomparability of treatment and control conditions through local, focal matching. In addition, a historical cohort control group design can alleviate concerns about denying program access to students in order to form a control group, minimize resource requirements and disruption to school routines, and make use of archival data schools and school districts collect and find meaningful.

The current education research and evaluation climate favoring experimental and quasi-experimental studies has left educational program evaluators and school leaders with the task of implementing quality control group studies in school settings where the feasibility and appropriateness of such designs is often in question. The purpose of this article is to describe the use of a historical cohort control group as a viable option for conducting quasi-experimental outcome evaluations in schools; that is, the selection of a cohort control group from pre-treatment archival data matched to a group of students currently receiving an educational treatment (i.e., intervention).

This article includes background information describing the rationale for using a historical cohort control group, including challenges to implementing experimental and quasi-experimental designs in school-based evaluation studies; a description of historical cohort control group designs; and notable threats to the validity of study findings and other considerations when using a historical cohort control group design. The audience for this article is educational program evaluators and school leaders.

    Background

The rationale for using a historical cohort control group is based on the emphasis placed on using social science methods that address causal questions of program effectiveness, the related need for educational program evaluation credibility in the current accountability climate, and the need to conduct evaluations that are feasible and appropriate in the real world of school-based outcome evaluation. Under the Elementary and Secondary Education Act (1965), as reauthorized under the No Child Left Behind Act of 2001 (NCLB), scientifically based research, one of the four pillars of NCLB, is defined as experimental or quasi-experimental studies (U.S. Department of Education, 2005). Although NCLB is often associated with student testing, terms such as evidence-based decisions and scientifically-based research appear 111 times in the pages of the NCLB legislation (Wilde, 2004).

In addition to the call for scientifically based research under NCLB, the U.S. Department of Education's Office of Research and Improvement was reauthorized in 2002 with the Education Sciences Reform Act, and became the Institute of Education


Sciences. With this change came new parameters for the type of research funded by IES, emphasizing empirical methods using observation or experiment (Eisenhart & Towne, 2003). Thus, the U.S. Department of Education has enacted a priority in grant competitions favoring experimental studies or, when random assignment is not possible, quasi-experimental studies that include matched control groups, regression-discontinuity designs, or single-subject designs (Mills, 2008).

Further, the U.S. Department of Education's What Works Clearinghouse, which also began in 2002, emphasizes studies of causal effectiveness using student achievement test data (Eisenhart & Towne, 2003). Acceptable designs for the What Works Clearinghouse include high quality experiments, quasi-experiments, regression-discontinuity designs, and single subject research studies (U.S. Department of Education, 2008). Finally, the U.S. Department of Education's Institute of Education Sciences and the National Science Foundation jointly published Common Guidelines for Education Research and Development, in which impact studies (efficacy, effectiveness, and scale-up studies) require experimental or quasi-experimental designs (U.S. Department of Education & National Science Foundation, 2013).

What does this mean for educational evaluators and school leaders?

NCLB requires that educators use instructional programs and methods proven to be effective. Scientifically-based research must be used for program development, to ensure a scientific basis for the program, and for program evaluation, to determine program effectiveness (Slavin, 2003). For states, school districts, and schools, this means that instructional materials and methods adopted and used must have research-based evidence of effectiveness. Further, locally developed and/or implemented programs are held to the same standard (Schmitt & Whitsett, 2008). This also means that when conducting federally-funded outcome evaluations, often referred to as effectiveness or impact studies, NCLB guidelines favor experimental controls when possible (Mills, 2008) and the use of student achievement test data (Eisenhart & Towne, 2003).

What are some challenges to implementing control group studies in schools?

Control group studies may be experimental or quasi-experimental, with the latter being more common in educational program evaluation. Although experimental studies requiring random assignment of participants to either treatment or control conditions are important for responding to causal questions of program effectiveness, such studies have not been common in education (Boruch, 2007; Cook, 2003). Reasons commonly cited for the lack of use include not wanting to deny program access to some students for the sake of forming a control group and the resource requirements for implementing experiments (Boruch, 2007; Baughman, 2008), as well as concerns about disruption to school routines and the lack of value placed on experiments by educational program evaluators (Cook, 2003).

Quasi-experimental studies are similar to experiments in that they compare treatment to nontreatment conditions; however, participants are not randomly assigned to conditions. Instead, evaluators use a nonequivalent control group for comparison (Cook, 2003). Common quasi-experimental control group designs include the use of intact treatment groups (e.g., classrooms, schools) matched to control groups on demographic and other key variables. The lack of random assignment increases threats to the internal validity of study results, that is, the ability to attribute study results to the treatment and not some other source or sources. In particular, although students, classrooms, and/or schools may be matched on certain known and observable variables, they may differ on other unknown and/or unobservable variables in ways that differentially impact results. In addition, as with experiments, questions of the feasibility and appropriateness of quasi-experimental studies in schools present challenges to their use.

    Using a Historical Cohort Control Group

A cohort is a group of people who have similar demographic or statistical characteristics (dictionary.com, n.d.). In education, the term is often used to identify successive groups that go through a grade level, an educational program, or a training program. According to Shadish, Cook, and Campbell (2002):

[C]ohorts are particularly useful as control groups if (1) one cohort experiences a given intervention and earlier or later cohorts do not; (2) cohorts differ in only minor ways from their contiguous cohorts; (3) organizations insist that an intervention be given to everybody, thus precluding simultaneous controls and making possible only historical controls; and (4) an organization's archival records can be used for


constructing and then comparing cohorts (pp. 148-149).

If an earlier cohort does not receive a given treatment, it can serve as a historical cohort control group for a current group that is receiving the treatment. Thus, it is possible to conduct a quasi-experiment comparing the outcomes of a treatment group that is currently receiving a treatment to those of a historical cohort control group that did not receive the treatment. In research, the term cohort is also used to refer to any group that is repeatedly measured over time, as in a longitudinal or panel study; however, this is a different use of the term (Shadish et al., 2002).

Using a historical cohort control group is a viable option due to its feasibility and appropriateness in school settings, where, as mentioned previously, there are often challenges to implementing experimental or commonly used quasi-experimental designs. Students who are eligible for a program are not denied access so that a control group can be formed; and because control group data come from archival data sources, no new data collection is needed, decreasing the resources required to conduct the evaluation, as well as disruption to school routines. Due to NCLB, more data are available (Guillen-Woods, Kaiser, & Harrington, 2008), including student achievement test data favored under NCLB (Eisenhart & Towne, 2003) and longitudinal data (Azin & Resendez, 2008).

Finally, the goal of matching a control group to a treatment group is to achieve as much overlap of the two groups, or distributions, as possible in order to minimize initial nonequivalence. T. D. Cook and W. R. Shadish (personal communication, August 7, 2008) have suggested using a local, focal, nonequivalent intact control group to maximize overlap; that is, using a local group from the same site (e.g., school, school district) and matching on focal indicators that are related to the outcome. A historical cohort control group is an example of this.
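The comparison this design affords can be sketched in a few lines of code. Everything below is an invented illustration, not from the article: the school name, scores, and cohort sizes are made up, and a Welch t statistic merely stands in for whatever analysis an evaluator would actually run.

```python
# Hypothetical sketch of a historical cohort control group comparison.
# All data, names, and years below are invented for illustration.
from statistics import mean, stdev
import math

def welch_t(treatment, control):
    """Welch's t statistic for two independent samples."""
    m1, m2 = mean(treatment), mean(control)
    v1, v2 = stdev(treatment) ** 2, stdev(control) ** 2
    n1, n2 = len(treatment), len(control)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Archival records: (school, grade, year, reading_score).
archive = [("Lincoln", 1, 2012, s) for s in (61, 64, 58, 66, 63, 60, 65, 62)]

# Local, focal matching: same school and grade level, pre-treatment year.
historical_cohort = [score for school, grade, year, score in archive
                     if school == "Lincoln" and grade == 1 and year == 2012]

# Current (treatment) cohort scores on the same routine assessment.
treatment_cohort = [68, 71, 65, 73, 70, 66, 72, 69]

t = welch_t(treatment_cohort, historical_cohort)
print(f"Welch t = {t:.2f}")
```

Note that the control group costs nothing to assemble: it is a filter over records the district already holds.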

An example of a historical cohort control group design used in a school-based evaluation study is Stockard's (2011) evaluation of a direct instruction reading program in primary grades in three rural school districts. Using existing student characteristics data, Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment data, fourth grade state assessment data, and implementation data, Stockard was able to conduct several analyses comparing district, state, and national data. Results indicated that students in cohorts that received full exposure to the program, those who began the program in kindergarten and whose teachers implemented the program with fidelity, had higher scores than students in cohorts that received less exposure.

Al-Iryani, Basaleem, Al-Sakkaf, Crutzen, Kok, and van den Borne (2011) also used a historical cohort control group, in addition to a concurrent control group, in their evaluation study of a school-based peer intervention for HIV prevention among students. A survey to determine knowledge and attitudes related to HIV was administered concurrently to students who received the peer intervention and students who did not receive the intervention. In addition, a historical cohort control group was randomly selected from a sample of students from the same schools who completed the survey in 2005. Thus, survey results for the historical cohort control group were compared to those of the intervention group, in addition to comparisons between the intervention group and concurrent control group. Based on results, the authors concluded that the intervention improved knowledge on modes of transmission and prevention and decreased levels of stigma and discrimination.

Validity and Historical Cohort Control Group Designs

When conducting a quasi-experiment using a historical cohort control group, evaluators are faced with the same threats to the validity of study findings as they would be when implementing any quasi-experiment. Early conceptualizations of validity characterized it as covering two aspects, internal and external validity. Internal validity was generally defined as the ability to attribute observed differences in outcomes to the treatment under investigation. External validity was defined as the ability to generalize study outcomes to different contexts. Within each of these categories of validity were several specific validity threats, that is, issues that could decrease validity of study findings.

In their often-cited book on experimental and quasi-experimental design, Campbell and Stanley (1963) identify 12 common threats to internal and external validity and describe the level of these threats given different research design options. More recent conceptualizations of validity have expanded these categories of validity to include:

Statistical conclusion validity, which focuses on the validity of inferences given potential analysis


issues (e.g., low statistical power, restriction of range).

Internal validity, which focuses on the validity of inferences given potential issues that can influence outcomes instead of or in addition to a causal relationship between treatment and outcome (e.g., selection, history).

Construct validity, which focuses on the validity of inferences given potential issues related to the constructs that represent the specifics of the study (e.g., construct confounding, novelty and disruption effects).

External validity, which focuses on the validity of inferences given potential issues with the ability to generalize a causal relationship from one context (setting, persons) to another (e.g., interaction of the causal relationship with units, context-dependent mediation) (Shadish et al., 2002).

The following sections include a discussion of specific validity threats that are particularly relevant when using a historical cohort control group. As previously mentioned, a historical cohort control group design is prone to the same validity threats as any quasi-experimental design; thus, only those threats that educational evaluators and school leaders should be more cognizant of when using a historical cohort control group design are discussed. These notable threats fall into the categories of internal validity and construct validity. For a complete description of the 37 identified threats across the 4 categories of validity, see Shadish et al. (2002).

Threats to Internal Validity

Internal validity threats that are notable when using a historical cohort control group design include selection, history, maturation, regression, testing, and instrumentation. Each threat is discussed in the following sections.

Selection threats occur when observed differences may be due to differences in participants that existed prior to the study (Shadish et al., 2002). Selection poses a considerable threat to any quasi-experimental study, because there is no random assignment. Use of a historical cohort control group design has the potential to maximize overlap of treatment and control groups; that is, to minimize selection differences between groups that would pose threats to internal validity. Generally, cohorts are considered more similar to each other than most nonequivalent (nonrandomized) comparison groups: "The crucial assumption with cohorts is that selection differences are smaller between cohorts than would be the case between noncohort comparison groups" (Shadish et al., 2002, p. 149).

On the other hand, selection threats may increase

depending on when data were collected from the historical cohort. For example, in a school-based evaluation study, students in treatment and control cohort groups may have attended or currently attend the same school and may be from the same community, but depending on the time period in which the historical cohort control group data were collected, there could be differences in treatment and control cohort characteristics due to changes in the demographics of the school and community. Thus, it is important to understand the context within which a historical cohort control group design is used, to document changes in cohort characteristics that have occurred over time, and to consider a different design option if those changes greatly increase selection threats.
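One way to act on this advice, documenting whether cohort characteristics have drifted between the archival year and the treatment year, is to screen focal covariates for balance. The sketch below is an invented illustration: the free/reduced-lunch percentages are made up, and the 0.25 cutoff on the standardized mean difference is only a common rule of thumb, not a requirement from this article.

```python
# Hypothetical balance check between a treatment cohort and a
# historical cohort control group. All numbers are invented.
from statistics import mean, stdev
import math

def standardized_mean_difference(a, b):
    """Cohen's d-style SMD using the pooled standard deviation."""
    pooled_sd = math.sqrt((stdev(a) ** 2 + stdev(b) ** 2) / 2)
    return (mean(a) - mean(b)) / pooled_sd

# Percent of students eligible for free/reduced lunch, per classroom,
# in the treatment year versus the archival (historical cohort) year.
treatment_frl = [42.0, 45.0, 39.0, 44.0]
historical_frl = [41.0, 43.0, 40.0, 44.0]

smd = standardized_mean_difference(treatment_frl, historical_frl)

# Rule-of-thumb screen: |SMD| < 0.25 suggests acceptable balance.
print(f"SMD = {smd:.3f}, balanced = {abs(smd) < 0.25}")
```

A large imbalance on a focal covariate would be exactly the signal, per the paragraph above, to document the change and consider a different design option.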

History threats are problematic when other events that occurred during the time of treatment could have caused or impacted observed differences (Shadish et al., 2002). History threats are an issue when using a historical cohort control group design due to differences in the time of data collection. With concurrent treatment and control conditions, it is possible that any influences of history on outcomes would occur in both conditions. As stated in Shadish et al. (2002), "Only when a nonequivalent control group is added to the design and measured at exactly the same time points as the treatment cohorts can we hope to address history" (p. 151). Thus, the use of a historical cohort control group cannot address history threats.

Maturation threats are a concern when "naturally occurring changes over time could be confused with a treatment effect" (Shadish et al., 2002, p. 55). This is a common concern in studies of children who are developing cognitive skills as part of a normal progression. For example, first grade student gains in reading comprehension may be due to natural development instead of or in addition to a particular reading treatment. Maturation threats may be reduced by using a historical cohort control group design. Because cohort groups are, by definition, similar in characteristics and from the same location (e.g., cohorts in school-based evaluation studies would be the same age and/or


grade level), changes that occur in the treatment cohort group that could be attributed to maturation should have similarly occurred in the historical cohort control group; thus, any observed differences between the groups would not likely be the result of maturation. When using a historical cohort control group design, it is important that there is as much overlap in characteristics between the control cohort and treatment cohort as possible. Using local, focal matching, where cohorts are from the same location and are matched on characteristics focal to the study, supports this overlap (T. D. Cook & W. R. Shadish, personal communication, August 7, 2008).

Regression threats occur when participants are selected for a study due to extreme scores; they will often have less extreme scores on subsequent measures, and this shift in scores can be confused with a treatment effect (Shadish et al., 2002). In other words, if study participants are selected due to a deficit in some area, such as reading ability, subsequent outcomes related to reading ability may be higher due to a phenomenon known as regression to the mean. Regression threats may be less of a concern when using a historical cohort control group design, because cohorts are characterized as having similar characteristics and would be chosen as study participants based on local, focal criteria. Any observed differences due to regression would occur in both the historical cohort control group and the treatment cohort group.

Instrumentation threats occur when a measure changes over time, impacting study outcomes. In addition to actual changes in the measure, changes in the conditions within which the measure is administered can also impact observed differences. Instrumentation can be particularly problematic when using a historical cohort control group design, due to the time interval between data collection for the historical cohort control group and the treatment cohort. For example, those who administered measures for the historical cohort control group may be different from those who administer measures for the treatment cohort, which could result in differences in administration that impact scores. Further, even if the same people administer the measures for both groups, their skills in administration may improve or otherwise change over time. Thus, when using a historical cohort control group design, it is important to understand and document the test conditions for cohorts, noting discrepancies that may introduce instrumentation threats.

Threats to Construct Validity

Construct validity threats that are notable when using a historical cohort control group design include reactivity to the experimental situation, compensatory equalization, compensatory rivalry, resentful demoralization, and treatment diffusion. Each threat is discussed in the following sections.

Reactivity to the experimental situation occurs when participants' perceptions related to being in a study impact their behaviors and influence study outcomes (Shadish et al., 2002). For example, if a student knows she is in a study, she may work harder to do well on tests. In general, use of a historical cohort control group design can decrease such reactivity issues, because measures used for the historical cohort control group and treatment cohorts are measures that are routinely used as part of monitoring and evaluation. In the case of school-based evaluation studies, measures could include progress monitoring, benchmark, and annual assessments used by the school district to monitor student achievement. Thus, they would be seen by students as business as usual.

Compensatory equalization occurs when the treatment includes desired resources or services not afforded to the control group participants, resulting in actions by administrators or staff to provide compensatory resources or services to the control group, thus confounding study outcomes (Shadish et al., 2002). Using a historical cohort control group design can eliminate compensatory equalization threats, because the historical cohort and treatment cohorts are not concurrent. Thus, no compensatory resources or services would be provided to the historical cohort control group, nor would any be taken away from the treatment group in an effort to equalize.

Compensatory rivalry occurs when control group participants are motivated to show that they can do as well as those in the treatment group (Shadish et al., 2002). This could include, for example, control group teachers in a school-based evaluation study being more motivated to ensure their students demonstrate outcomes associated with the study, as well as the students themselves competing with students in the treatment group. Similar to compensatory equalization threats, compensatory rivalry threats are not an issue when using a historical cohort control group design. Because the control group is historical and only archival data from the group are used, historical cohort


    control group participants would not know they did notreceive a given treatment.

Resentful demoralization is the flip side of compensatory rivalry and occurs when control group participants are so demoralized by not being selected to receive the treatment that this negatively impacts their outcomes; thus, any observed differences in treatment and control conditions are influenced by these attitudes (Shadish et al., 2002). As with compensatory equalization and compensatory rivalry, use of a historical cohort control group can eliminate resentful demoralization threats.

Treatment diffusion occurs when the control group receives some or all of a treatment (Shadish et al., 2002). For example, in a school-based evaluation study, the same teacher may provide treatment and control group instruction; thus, the teacher may use some of the treatment strategies with the control group. Even if the treatment and control groups have different teachers, the control group teacher may hear about some of the treatment strategies and implement them. Of course, similar to the other construct validity threats previously discussed, treatment diffusion is not an issue when using a historical cohort control group, because treatment and control conditions are not concurrent and the control condition occurs prior to introduction and implementation of the treatment.

Additional Considerations

Perhaps the greatest benefit of using a historical cohort control group in terms of increasing the validity of study findings is the potential to maximize overlap of treatment and control groups; that is, to minimize selection differences between groups that would pose threats to internal validity. Generally, cohorts are considered more similar to each other than most nonequivalent (nonrandomized) comparison groups: "The crucial assumption with cohorts is that selection differences are smaller between cohorts than would be the case between noncohort comparison groups" (Shadish et al., 2002, p. 149). Regardless, it is critical for the evaluator to investigate validity threats, to understand study context, and to document and report issues with validity.

Another consideration is to think of research design as a layering process as opposed to a process of choosing one textbook design. Shadish et al. (2002) discuss adding design elements to strengthen validity and offer related suggestions throughout their text. Thus, using a historical cohort control group may be one element of a larger research design that could also include, for example, concurrent control groups. Another option to strengthen validity is the use of multiple historical cohort control groups; that is, using archival data from historical cohort control groups for as many years back as possible. Similarly, archival data for the treatment cohort could be used as multiple pretests.
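The multiple-historical-cohorts idea can be sketched with invented scores: if the current cohort outperforms every available archival cohort, an observed difference is harder to attribute to one unusual comparison year. The years, scores, and cohort sizes below are illustrative assumptions, not data from the article.

```python
# Hypothetical sketch: multiple historical cohort control groups,
# one per archival year. All data are invented for illustration.
from statistics import mean

# Archival scores for several pre-treatment cohorts, keyed by year.
historical_cohorts = {
    2010: [59, 62, 60, 64, 61],
    2011: [60, 63, 58, 65, 62],
    2012: [61, 64, 59, 66, 63],
}
treatment_cohort = [68, 71, 66, 72, 69]  # current (treatment) year

# Gap between the treatment cohort mean and each historical cohort mean.
# A gap that holds across every archival year is less plausibly a
# one-year fluke (a history threat) than a gap against a single cohort.
gaps = {year: mean(treatment_cohort) - mean(scores)
        for year, scores in historical_cohorts.items()}
print(gaps)
```

The same archival series, restricted to the treatment cohort's own earlier records, would serve as the multiple pretests mentioned above.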

Finally, standards for quality evaluation require more than addressing study validity. The Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011) provide guidance on the design, conduct, and evaluation of evaluations. Thirty standards are categorized as utility, propriety, feasibility, accuracy, and accountability standards. Thus, these standards should be considered in addition to addressing study validity.

    Conclusion

Using a historical cohort control group is a viable option for addressing causal questions of effectiveness in school-based outcome evaluation. Given the push for control group studies, as well as common challenges to implementing such studies in school settings, using a historical cohort control group (a) alleviates concerns about denying program access to students in order to form a control group, (b) minimizes resource requirements and disruption to school routines, (c) makes use of the archival data schools and school districts collect and find meaningful, and (d) can reduce initial noncomparability of treatment and control groups through local, focal matching. Although a historical cohort control group design is prone to the same threats to validity as any quasi-experimental study, use of the design may decrease threats such as selection, reactivity, compensatory equalization, compensatory rivalry, resentful demoralization, and treatment diffusion. In addition, using a historical cohort control group as a design element as part of a larger study can strengthen validity. Finally, for program evaluation, it is important to also use The Program Evaluation Standards (Yarbrough et al., 2011) in the design and conduct of evaluations.

    References

Al-Iryani, B., Basaleem, H., Al-Sakkaf, K., Crutzen, R., Kok, G., & van den Borne, B. (2011). Evaluation of a school-based HIV prevention intervention among Yemeni adolescents. BMC Public Health, 11(279), 1-10.


Azin, M., & Resendez, M. G. (2008). Measuring student progress: Changes and challenges under No Child Left Behind. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 71-84.

Boruch, R. (2007). Encouraging the flight of error: Ethical standards, evidence standards, and randomized trials. In G. Julnes & D. J. Rog (Eds.), Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation. New Directions for Evaluation, 113, 55-73.

Baughman, M. (2008). The influence of scientific research and evaluation on publishing educational curriculum. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 85-94.

Cook, T. D. (2003). Why have educational evaluators chosen not to do randomized experiments? The ANNALS of the American Academy of Political and Social Science, 589(1), 114-149.

Eisenhart, M., & Towne, L. (2003). Contestation and change in national policy on scientifically based education research. Educational Researcher, 32, 31-38.

Guillen-Woods, B. F., Kaiser, M. A., & Harrington, M. J. (2008). Responding to accountability requirements while promoting program improvement. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 59-70.

Mills, J. I. (2008). A legislative overview of No Child Left Behind. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 9-20.

Schmitt, L. N. T., & Whitsett, M. D. (2008). Using evaluation data to strike a balance between stakeholders and accountability systems. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 47-58.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference (2nd ed.). Boston: Houghton Mifflin Company.

Slavin, R. E. (2003). A reader's guide to scientifically-based research: Learning how to assess the validity of education research is vital for creating effective, sustained reform. Educational Leadership, 60(5), 12-16.

Stockard, J. (2011). Increasing reading skills in rural areas: An analysis of three school districts. Journal of Research in Rural Education, 26(8), 1-19.

U.S. Department of Education. (2005). Scientifically-based evaluation methods (RIN 1890-ZA00). Federal Register, 70(15), 3586-3589.

U.S. Department of Education. (2008, December). What Works Clearinghouse standards and procedures document (Version 2.0). Washington, DC: U.S. Department of Education, Institute of Education Sciences. Retrieved June 14, 2009, from http://ies.ed.gov/ncee/wwc/

Wilde, J. (2004, January). Definitions for the No Child Left Behind Act of 2001: Scientifically-based research. Washington, DC: National Clearinghouse for English Language Acquisition and Language Instruction Education Programs, The George Washington University. Retrieved April 10, 2008, from http://www.ncela.gwu.edu/resabout/Research_definitions.pdf

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.


    Citation:

Walser, Tamara M. (2014). Quasi-Experiments in Schools: The Case for Historical Cohort Control Groups. Practical Assessment, Research & Evaluation, 19(6). Available online:

    http://pareonline.net/getvn.asp?v=19&n=6

    Author:

Tamara M. Walser
Associate Professor and Director of Assessment and Evaluation
Watson College of Education
University of North Carolina Wilmington
601 S. College Road
Wilmington, NC 28403
Email: walsert [at] uncw.edu
