Thursday, March 10, 2011

Retrospective Experimental Designs – Week 1


 

Comment 1


In the textbook (de Vaus, chapter 3), the author describes various research designs. One design is the "retrospective experimental design".

The term "retrospective experimental design" seems unusual, in that I had thought a key attribute of the experimental design is that the researcher exercises control over the assignment of treatments to the experimental units through the process of randomization.

It would seem to me that any retrospective analysis could only be observational, as the researcher can in no way assign treatments to subjects.

In fact, the retrospective experimental design seems very similar to the cross-sectional or correlational design.

Also, de Vaus seems to suggest that surveys, because of their method of analysis and form of data, can be used to understand causal relationships or links, whether the design is observational or experimental.

One of the points I came away with from HMS771 Analysis of Variance (and other statistics units) is that an experimental design allows inference of causality, while an observational design allows only inference of association.

 

If I can use de Vaus' own work as an example (see chapter 18, Putting it into Practice: a research example), the link he demonstrates is more one of "association" than "causality":

In the paper the chapter refers to, "Gender Differences in Religion: A Test of the Structural Location Theory", he states "the results show that the lower rates of female labor force participation are the major cause of their greater religious commitment" (my emphasis).

In the discussion section at the end of the paper, though, he says, "the question remains as to why work force participation affects the religious orientation of females".

That seems to say, there is a correlation, but we still don't know what causes it.


 Comment 2


I agree with Graham.  I find it difficult to distinguish between a cross-sectional (correlational) survey design, and a retrospective experimental design.  Both designs appear to rely on the a posteriori selection of groups.  Grouping of the subjects is based on the level of exposure to the independent variable.  The only possible distinguishing feature mentioned for the "retrospective experimental design" is in the attempt to match the groups with respect to other independent variables, in order to remove confounding.

 

Comment 3


Hi guys,

Thank you for this very interesting thread. I have a couple of thoughts that I hope can contribute (sorry, I'm having trouble with weird formatting on this computer).

Reinier >> I find it difficult to distinguish between a cross-sectional (correlational) survey design, and a retrospective experimental design.

Graham >> The term "retrospective experimental design" seems unusual, in that I had thought a key attribute of the experimental design is that the researcher exercises control over the assignment of treatments to the experimental units through the process of randomization.

I think that's the difference between "classic experimental" and "retrospective experimental". In the classic experimental design, the researcher can randomly assign subjects and obtain a "real-time" baseline measurement to be compared to a "real-time" second measurement according to a predetermined timeline. In the retrospective experimental design, the subjects fall naturally into the experimental or control group based on their exposure to the intervention (meaning that group sizes cannot be predetermined), and while the second measurement is "real-time", the baseline measurement is based on the subject's current perception of what their previous experience/behaviour was. The point of using "retrospective experimental" appears to be to avoid having to keep track of subjects over time, or where you're researching the impact of interventions that have already happened and that you had no input in controlling (e.g. a skin cancer ad campaign).

Graham >> The term "retrospective experimental design" seems unusual, in that I had thought a key attribute of the experimental design is that the researcher exercises control over the assignment of treatments to the experimental units through the process of randomization.

To me it sounds like the "research design" refers to the method of obtaining information from the subjects and what kind of information is obtained, whereas "sampling" refers to the assignment of subjects to treatment groups, and these are two separate aspects of the global project. Does this sound right or am I hopelessly confused? lol.

Graham >> It would seem to me that any retrospective analysis could only be observational, as the researcher can in no way assign treatments to subjects.

If you adopt a strict definition of "experimental", that's true, but if you look at the information that is being obtained, the only difference between the two data points obtained is that the first (baseline) data point is collected at the same time as the second (comparison) data point. There are still two groups (experimental and control), and these groups still either received the intervention/treatment or did not, regardless of the fact that you don't get to randomly assign them. So while it's not a classical experimental design, I can see how they arrive at calling it retrospective experimental.

Graham >> an experimental design allows inference of causality

This is a great point - this raises the question as to whether there is a point at which data from a retrospective experimental research design can allow inference of causality (of course dependent on sample size, error rates, etc) as opposed to strictly allowing only association. Is this simply an issue of semantics (whereby "retrospective experimental" is simply a label not necessarily referring to its observational/experimental orientation), or is this a grey area? I suspect we may need to study the topic further to find the answer, eh? haha.

Graham >> That seems to say, there is a correlation, but we still don't know what causes it.

I read that as "work force participation is the major cause of greater religious commitment in females, but we have no idea why this is so".

I'd love to discuss these points further.

Cheers, Shaheen.

 

I too had difficulty with this, so I prepared a table summarising the different features of each design (attached for discussion and debate), and that made it a bit clearer to me. I hope it can help others too.


Summary of the Types of Research Designs

 

Shaheen Aumeer-Donovan

 

 

Classic Experimental

- Features: 2 groups (experimental & control); at least 2 data points for each group (before & after)

- Why it is used: premeasures are used to select participants, check support for a program, ensure group comparability and calibrate change measurement; interim and postmeasures are used to measure change, outcomes or impacts

- Issues: sometimes difficult to obtain a control group; ethical considerations preclude the introduction of negative interventions

Panel

- Features: 1 group (no control group); at least 2 data points (before & after)

- Why it is used: when control groups are difficult to obtain

- Issues: cannot conclude that change was due to the intervention

Quasi-Panel

- Features: 2 groups (no control group); data point 1 (before) obtained from group 1, and data point 2 (after) obtained from group 2

- Why it is used: to avoid keeping track of the same people over time

- Issues: same problems as panel, plus the samples cannot be matched, so an effect could be due to sampling error

Retrospective Panel

- Features: 1 group (no control group); data point 2 (after) assessed, and participants asked about data point 1 (before) retrospectively

- Why it is used: to avoid keeping track of the same people over time

- Issues: same problems as panel, plus the selective memory of subjects

Retrospective Experimental

- Features: 2 groups (experimental & control); data point 2 (after) assessed, and participants asked about data point 1 (before) retrospectively

- Why it is used: like the panel design, but addresses the lack of a control group

Cross-Sectional / Correlational

- Features: 2 groups (experimental & control); 1 data point (after) assessed only

- Issues: experimental and control groups may differ in other ways, and this is not accounted for with a baseline measurement

One Group Post-Test Only

- Features: 1 group (no control group); 1 data point (after) assessed only

- Why it is used: researcher has serious issues

- Issues: no empirical reference point for comparison / interpretation

 

Comment 4


Hi Shaheen

I think you're spot on when you ask whether it is simply an issue of semantics. It is an issue of semantics. From the reading I've undertaken for previous statistics units, the word "experimental" in statistics has a specific meaning, and part of that meaning is that the treatment has been assigned to the subjects randomly. If the treatment has not been assigned to sample subjects randomly, then it's not an experimental design, even if de Vaus uses that term. For de Vaus to use the word "experimental" when referring to a retrospective study is, in my opinion, a misuse of the word. Interestingly, when I do a Google search on "retrospective experimental design", it returns only approximately 200 results. For a Google search that's almost nothing, which confirms that the phrase is not in general use.

That's not to say an observational study is not a valid way to undertake research, but the conclusions are less authoritative.

cheers

 

Graham


Comment 5


 

Quote from 'Survey Methods in Social Investigation' by Moser, C.A., & Kalton, G. p.225

A common type of investigation is the retrospective or ex post facto (after the event) study; it is also often called an ex post facto experiment, but 'experiment' is avoided here because the design does not qualify as such by our definition. Such studies are of two kinds, neither of them involving actual manipulation of the experimental variable: in the first a comparison is made of two groups, which are equated as closely as possible by matching and adjustment, but where only one group has at some past time been exposed to the predictor variable. The hypothesis of interest is tested by comparing the incidence of the criterion variable in the two groups. The research looks from the past to the present, and is usually termed a cause-to-effect design.

In the second kind, two groups differing in the predictor variable, but as nearly as possible equated in other respects by matching and adjustment, are compared; the researcher then looks back for possible explanations of the difference. This is the effect-to-cause design.

Thus a prospective study, following the subjects forward in time, has much to commend it over a retrospective study. The retrospective study, however, has the advantage of speed. In medical studies, for example, the onset of a disease may follow many years after a cause. The researcher must then wait for this period to elapse before drawing conclusions from a prospective study, while with a retrospective study the results are available as soon as the data are collected and analysed. With a retrospective study one can hunt for the Y cases and compare them with a group of non-Y cases, determining the number of non-Y cases to suit the research requirements.

A stats student with whom I usually have end-of-term exams had done this unit in semester 2, 2010, and this book is listed as one of the 'supplementary readings' in the HMS77Z text notes 2010, which I'd purchased over the summer break.

Hope this helps

Kathleen

Comment 6


One of the books I have from another course (Exploring Research, 5th ed., Salkind, N.J.) doesn't refer to the retrospective experimental design specifically, but the author does talk about something he calls 'quasi-experimental design'. By this he means a design that is close to a true experiment, but because the groups are determined before the experiment by pre-existing attributes (e.g. sex, social class, nationality) rather than by random assignment, the design is not experimental. His conclusion is that you'd only use it when you have no other option, such as when your research topic would violate ethics if studied under true experimental conditions.

So your last post relating to randomisation is spot on, I think.


Comment 7


Quasi solves all ills sometimes, doesn't it? I didn't really have a particular comment on the definitional aspect of a retrospective experimental design, but this thread, in conjunction with doing the reading for this week, made me reflect on how often a less-than-perfect design is all that you have to work with, because, for example: (a) the client collected the data and now wants you to analyse it (e.g., a single-group pre- and post-test) and draw conclusions about the efficacy of an intervention; or (b) the client has specified the design and that is what you have to work with; or some other reason. It is something I always find challenging, because you have to walk a line between giving them what they want and educating them on the limitations.


Comment 8


Great discussion! Thanks to everybody for contributing to the topic.

I would just like to add some comments:

1. I agree that the question is an issue of semantics. Using the term 'retrospective experimental design' is a matter of classification and definition. In the classic experimental design (also called 'true experiment', or 'pretest-posttest group design with random assignment', or 'randomised controlled trial' etc.) three conditions should be satisfied:

1) Two groups: experimental and control;

2) Subjects are assigned to groups randomly;

3) Dependent variable is measured before and after intervention in both groups.
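As a toy illustration, these three conditions can be sketched in code; the scores, group sizes and treatment effect below are entirely invented, just to make the structure of the design concrete:

```python
import random

random.seed(1)

# Hypothetical subjects with a simulated pre-intervention score.
subjects = [{"id": i, "pre": random.gauss(50, 10)} for i in range(40)]

# Condition 1: two groups. Condition 2: subjects assigned randomly.
random.shuffle(subjects)
experimental, control = subjects[:20], subjects[20:]

# Condition 3: the dependent variable is measured again after the
# intervention in both groups; the made-up treatment adds ~5 points.
for s in experimental:
    s["post"] = s["pre"] + 5 + random.gauss(0, 2)
for s in control:
    s["post"] = s["pre"] + random.gauss(0, 2)

def mean_change(group):
    return sum(s["post"] - s["pre"] for s in group) / len(group)

# The design compares CHANGE between groups, not raw post scores.
print(round(mean_change(experimental) - mean_change(control), 1))
```

Because assignment was random, the difference in mean change between the two groups estimates the effect of the (fictional) intervention itself rather than of pre-existing group differences.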

Not all three conditions can always be satisfied. For example, if the second condition (random assignment) is not met, then the design is called 'quasi-experimental', or 'experimental design without random selection', or 'pretest-posttest group design without random assignment', or 'non-randomised controlled trial'.

'Retrospective experimental design' is quite often classified as 'quasi-experimental'. In the retrospective 'experimental' design we have two groups, no random assignment, and the third condition can be considered partially satisfied.

2. The difference between random assignment and random sampling

'Random sampling' is a method of selecting participants for a study. 'Random assignment' is the allocation of already-selected participants to treatment groups, and takes place after the participants have been selected for the study.
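A small sketch of this distinction; the population of subject IDs, the sample size and the group sizes are made up purely for illustration:

```python
import random

random.seed(42)

# Hypothetical population of 1000 subject IDs.
population = list(range(1000))

# Random sampling: selecting WHO takes part in the study.
sample = random.sample(population, 20)

# Random assignment: deciding, among those already selected, who
# receives the treatment and who serves as the control.
shuffled = random.sample(sample, len(sample))
treatment, control = shuffled[:10], shuffled[10:]

print(len(treatment), len(control))        # 10 10
print(set(treatment).isdisjoint(control))  # True
```

Random sampling supports generalising from the sample to the population; random assignment supports attributing group differences to the treatment. A study can have either, both, or neither.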

3. The difference between cross-sectional/correlational design and retrospective experimental design

Experimental and quasi-experimental designs compare groups in terms of different amounts of change. In a cross-sectional design the dependent variable is measured at one point in time. Therefore, the cross-sectional design measures difference between groups rather than change.

4. Causality and research designs.

Classic experimental design is the best way to establish causality because effects on the dependent variables are clearly attributable to the experimental manipulation. However, true experiments may still not give you a 100% conclusive result. There may be other causes of an outcome besides the manipulated variable. Experimental results may be effects of other extraneous variables, rather than effects of treatment.

Quasi-experimental designs do not, in general, allow you to establish causality. However, you may infer causality if the following conditions are satisfied:

1) The hypothesised cause and the hypothesised effect must covary;

2) The cause must precede the effect in time;

3) There are no alternative explanations of the differences in the effect.

 

 

 

 

 
