Our Focus: Causation, not Correlation

The Center for Causal Inference supports RAND researchers—and their clients—by applying methodological and statistical rigor to sometimes confounding questions. Here are some of our primary methods for inferring causality, with some examples and links.

Randomized Controlled Trials

Randomized controlled trials (RCTs) aim to reduce certain sources of bias when testing the effectiveness of a new treatment or policy action by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response.

RCTs may be the gold-standard methodology for determining causal effects, but they are rarely available when examining policy problems. And even when an RCT is available, it may not fully demonstrate cause and effect or explain why something worked (or didn't).
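The core comparison an RCT makes can be sketched in a few lines of Python. This is a toy illustration, not RAND's actual tooling; `outcome_fn` is a hypothetical callable standing in for whatever response the study measures.

```python
import random

def run_rct(subjects, outcome_fn, seed=0):
    """Randomly allocate subjects to a treatment and a control group,
    then compare the groups' mean measured responses.

    outcome_fn(subject, treated) is a hypothetical stand-in for the
    measured response of a subject under the given condition.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)  # random allocation is what removes selection bias
    half = len(shuffled) // 2
    treated, control = shuffled[:half], shuffled[half:]
    mean = lambda xs: sum(xs) / len(xs)
    # Difference in group means estimates the average treatment effect.
    return (mean([outcome_fn(s, True) for s in treated])
            - mean([outcome_fn(s, False) for s in control]))
```

With a response that is 2.0 higher under treatment for every subject, the estimated effect is exactly 2.0; with real data, randomization is what makes this simple difference in means an unbiased estimate.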

More research using Randomized Controlled Trials

Synthetic Control

Synthetic control is similar to propensity score methods in that it helps researchers balance the study groups and thus draw causal conclusions from observational studies. It involves constructing a weighted combination of control groups, against which the treatment group is compared. This method is applicable even when there is only one treated unit, a scenario not covered by propensity score methods.
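A minimal sketch of the weighting idea, assuming just two control units and a grid search over a single convex weight (real applications use many units and constrained optimization; all names here are illustrative):

```python
def synthetic_control(treated_pre, treated_post, controls_pre, controls_post,
                      steps=1000):
    """Toy synthetic control with two control units.

    Finds the convex weight w so that w * unit A + (1 - w) * unit B best
    tracks the treated unit's pre-treatment outcomes, then uses that
    weighted combination as the post-treatment counterfactual.
    """
    (a_pre, b_pre), (a_post, b_post) = controls_pre, controls_post
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        # Squared error between treated unit and synthetic unit, pre-treatment.
        err = sum((t - (w * a + (1 - w) * b)) ** 2
                  for t, a, b in zip(treated_pre, a_pre, b_pre))
        if err < best_err:
            best_w, best_err = w, err
    synth_post = [best_w * a + (1 - best_w) * b
                  for a, b in zip(a_post, b_post)]
    gaps = [t - s for t, s in zip(treated_post, synth_post)]
    return best_w, sum(gaps) / len(gaps)  # weight, estimated effect
```

Restricting the weights to a convex combination keeps the synthetic unit inside the range of the observed controls, which is what makes the comparison credible.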

Instrumental Variables

Instrumental variables may also be used to estimate causal relationships when RCTs are not feasible. The instrumental variable approach, which controls for unobserved sources of variability, is in a sense the mirror image of the propensity score method, which controls for observed variables. We help identify and use instrumental variables to control for confounding and measurement error in observational studies so we can draw appropriate causal inferences.
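With a single instrument, the simplest version of this idea is the Wald estimator: the covariance of instrument and outcome divided by the covariance of instrument and treatment. A rough sketch, for illustration only (applied work typically uses two-stage least squares with covariates):

```python
def iv_estimate(z, x, y):
    """Wald/IV estimator with one instrument z, treatment x, outcome y.

    Because a valid instrument moves x but affects y only through x,
    cov(z, y) / cov(z, x) recovers the causal effect of x on y even
    when x is confounded.
    """
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx
```

The estimator is only as good as the instrument: z must be correlated with x and affect y through no other channel.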


Difference-in-Differences

Difference-in-differences estimation is a statistical technique that is used after the fact to mimic an RCT using observational study data. In this case, we use longitudinal data from two groups to obtain an appropriate counterfactual to estimate a causal effect. We typically use this approach to estimate the effects of a specific intervention—such as passing a law, enacting a policy, or implementing a large-scale program—by comparing the changes in outcomes over time between a population that is affected by the intervention and a population that is not.
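The estimator itself is a double subtraction over group means; a sketch with illustrative names:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's change over time
    minus the control group's change over time.

    Subtracting the control group's change removes trends that would
    have occurred even without the intervention, leaving the effect
    attributable to the intervention (under the parallel-trends assumption).
    """
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))
```

For example, if the affected population's outcome rises from 10 to 17 while the unaffected population's rises from 8 to 11, the estimated effect is 7 - 3 = 4.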

Regression-Discontinuity Design

Regression-discontinuity design (RDD) allows us to determine whether a program is effective without assigning potentially vulnerable individuals to a "no-program" comparison group. Instead, treatment is assigned according to whether an individual falls above or below a cutoff on a measured variable, and outcomes are compared for individuals just on either side of that threshold. In fact, we encourage the use of RDD when we wish to target a program or treatment to those who most need or deserve it: for example, students with low test scores, or patients in need of an experimental treatment.
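A sharp-RDD sketch, assuming for illustration that units at or above the cutoff receive the program, and fitting a separate least-squares line on each side of the cutoff (applied work uses local polynomial methods and data-driven bandwidth selection; all names are illustrative):

```python
def rdd_effect(scores, outcomes, cutoff, bandwidth):
    """Sharp regression-discontinuity sketch.

    Fits a least-squares line on each side of the cutoff, using only
    observations within the bandwidth, and returns the jump between the
    two fitted values at the cutoff as the estimated treatment effect.
    """
    def fit_at_cutoff(points):
        xs, ys = zip(*points)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        return my + slope * (cutoff - mx)  # fitted outcome at the cutoff

    left = [(s, o) for s, o in zip(scores, outcomes)
            if cutoff - bandwidth <= s < cutoff]
    right = [(s, o) for s, o in zip(scores, outcomes)
             if cutoff <= s <= cutoff + bandwidth]
    return fit_at_cutoff(right) - fit_at_cutoff(left)
```

Because individuals just below and just above the cutoff are nearly identical, the jump at the threshold can be read as the program's causal effect for that population.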

Center Leadership