by Alan
The Affordable Care Act (ACA) of 2010 (aka "Obamacare") seeks to provide health-insurance coverage to an estimated 30 million Americans who were uninsured prior to the law's enactment. The two primary methods for doing so, enrollment for both of which opened on October 1 of this year (with coverage beginning in January 2014), are expanding eligibility for Medicaid, a long-existing government program for low-income individuals, and creating online marketplaces (or "exchanges") where people above the Medicaid income threshold can purchase private insurance (with federal subsidies available according to income level).
As I wrote on one of my other blogs back in August:
Starting next January [of 2014], individuals with income up to 133 percent of the poverty level will be eligible for Medicaid. The pre-ACA thresholds for Medicaid differ by state and by participant category (e.g., pregnant women, children, parents), but are sometimes as low as 50% of the poverty line.
As readers who have been following the Obamacare saga are no doubt aware, the U.S. Supreme Court ruled in 2012 that states could more readily opt out of the Medicaid expansion than the ACA had intended. Many states have indeed opted out, with governors' and legislatures' decisions heavily following party lines (here and here): Democratic-led states have accepted the funding to expand their Medicaid eligibility, and Republican-led states largely have not.
What struck me in reading about states' decisions on whether or not to accept the Medicaid expansion was the set of reasons cited by opponents for declining participation. Conservative arguments pertaining to an expanded federal-government role in health care and distaste in many states for increased spending on social programs (even though the federal government would largely cover the costs to states of expanding Medicaid) did not surprise me. What did surprise me was the claim that Medicaid is actually harmful to its participants and that people would be better off uninsured than with Medicaid.
Indeed, there are several published studies (many of which are compiled here) appearing to show Medicaid patients faring worse than their uninsured counterparts on health outcomes such as mortality, heart attacks, and timely diagnoses of serious illnesses. Perhaps the most widely publicized and discussed is a 2010 investigation (often referred to as the "University of Virginia study") by LaPar and colleagues entitled "Primary Payer Status Affects Mortality for Major Surgical Operations" (full-text).
In that article, which is based on a very large national data set containing statistics on surgical procedures such as hip replacement and coronary bypass, one finds that whereas uninsured patients had 74% greater odds of suffering in-hospital mortality than their privately insured counterparts, Medicaid patients had 97% greater odds of dying in the hospital than the private-insurance group (Table 6). This is, of course, a correlational or observational research design, linking the type of insurance a patient happened to have with his or her quality of surgical recovery. And with such a research design, patients will differ in many other ways than just their type of insurance coverage. Short of randomly assigning people to type of insurance (more on that later), we are always left with some degree of ambiguity in determining cause and effect.
In the Virginia surgical study, for example, the uninsured appeared to be relatively well-off, with roughly 60% in the two highest income categories (based on median incomes within the ZIP codes in which patients live): 31.1% in the $45,000-or-greater category and 27.8% in the $35,000-44,999 category (Table 2). In contrast, only a combined 31% of Medicaid patients lived in the two highest-income sets of ZIP codes, with Medicaid patients clustered mainly (41.3%) in the lowest quartile (less than $25,000). The statistical analysis controlled for patient income, but to the extent the ZIP-code-based measure did not capture all facets of patients' personal socioeconomic status, income may not have been as potent a control variable as one would want (the authors noted that education and nutrition were among the variables not included).
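To make the omitted-variable concern concrete, here is a minimal simulation (entirely invented data, not the Virginia study's) showing how an unmeasured socioeconomic disadvantage that raises both Medicaid enrollment and in-hospital mortality can produce an elevated Medicaid odds ratio even when Medicaid itself has no causal effect:

```python
# Illustrative simulation (invented data, NOT the Virginia study's): an
# unmeasured socioeconomic disadvantage raises both Medicaid enrollment and
# in-hospital mortality, inflating the Medicaid odds ratio even though
# Medicaid itself has no causal effect here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical latent disadvantage (e.g., poor nutrition, little education).
disadvantage = rng.normal(size=n)

# More disadvantaged patients are more likely to be on Medicaid...
medicaid = rng.binomial(1, 1 / (1 + np.exp(-(disadvantage - 1.0))))

# ...and more likely to die in hospital; the causal effect of Medicaid itself
# is zero (it does not appear in the mortality model).
p_death = 1 / (1 + np.exp(-(-3.0 + 0.8 * disadvantage)))
death = rng.binomial(1, p_death)

def medicaid_odds_ratio(controls):
    """Odds ratio for the Medicaid indicator from a logistic regression."""
    X = sm.add_constant(np.column_stack([medicaid] + controls))
    fit = sm.Logit(death, X).fit(disp=False)
    return np.exp(fit.params[1])

print("OR, no controls:        ", round(medicaid_odds_ratio([]), 2))              # well above 1
print("OR, controlling for SES:", round(medicaid_odds_ratio([disadvantage]), 2))  # close to 1
```

In the simulation, controlling for the disadvantage variable pulls the odds ratio back toward 1; the Virginia analysis could control only for the ZIP-code income proxy, which may not fully capture that kind of disadvantage.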
Interestingly, Avik Roy, a prominent Medicaid critic, acknowledges that unmeasured aspects associated with income may have played a role in the outcome of the Virginia study:
Another key element to consider is that many of the uninsured are not poor... These individuals are wealthy and/or healthy enough that they have decided to forego insurance. Though the Virginia study corrects for income status and other social factors, the fact that these patients are more capable of paying directly for their own care, at the prevailing rate, means that physicians are more willing to see them.
Another reason to be cautious about making a causal conclusion from the Virginia study that Medicaid is actively harmful to its subscribers is the lack of a well-established mechanism leading from Medicaid to poor health. As another Obamacare critic, Glenn Harlan Reynolds, acknowledges in a USA Today column, "Why Medicaid recipients do worse isn't entirely clear..." (Reynolds does suggest a few possible mechanisms, such as, "Uninsured patients probably go straight to the Emergency Room or to a free clinic, while Medicaid recipients may waste precious days, weeks, or months trying to navigate the bureaucracy." This particular conjecture turns out not to be supported, as Medicaid patients are especially likely to use the ER.)
Avik Roy, in his article cited above, also suggests potential mechanisms:
...the answer almost certainly begins with access to care. Medicaid’s extreme underpayment of doctors and hospitals leads fewer and fewer health-care providers to offer their services to Medicaid beneficiaries.
However, the evidence Roy cites for this proposition pertains to both Medicaid patients and the uninsured.
Into the debate charges Austin Frakt, part of an ensemble of bloggers at The Incidental Economist whose training (collectively) includes medicine, social sciences, and statistics. Frakt advocates the use of a statistical technique called instrumental variables (IV) to deal with the correlational nature of most Medicaid-related studies and the inevitable omission of potentially relevant control variables; in fact, he wrote a whole series of postings on Medicaid and instrumental variables. In one posting, Frakt writes:
There are observational studies that purport... that Medicaid coverage is worse or no better than being uninsured. One cannot draw such conclusions from such studies if they do not control for the unobservable factors that drive Medicaid enrollment. Causal inference requires appropriate techniques. Even a regression with lots of controls, even propensity score analysis, is insufficient in this area of study.
In another posting, Frakt writes:
Avik [Roy] dismisses IV as a “fudge factor,” casually and erroneously discrediting a vast amount of mainstream work by economists and several entire sub-disciplines. Since IV is a generalization of the concepts that underlie randomized controlled trials (differing in degree, but not in spirit, from purposeful randomization), and can be used to rehabilitate a trial with contaminated groups — a not infrequent occurrence – it is unwise to trivialize IV and what it can do.
According to Will Shadish and Tom Cook, research methodologists who write mainly in the areas of psychology, sociology, and program evaluation, "An instrumental variable is related to the treatment but not to the outcome except through its relationship on the treatment" (2009, p. 613). As suggested in some of Frakt's postings, for example, variation in states' Medicaid eligibility thresholds presumably would affect Medicaid enrollment, but would affect health only through Medicaid subscription. In a particularly useful posting, Frakt walks readers through a study of insurance and HIV treatment by Goldman et al., which used instrumental variables. [Here's another good explanation of instrumental variables, by David Kenny, which I forgot to include in my original posting.]
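To give a feel for how an instrumental variable works in this setting, here is a minimal sketch on simulated data (all numbers invented), with the instrument playing the role of a state-eligibility-threshold-type variable of the kind Frakt describes: it shifts Medicaid enrollment but is assumed to affect health only through enrollment.

```python
# A minimal instrumental-variable sketch on simulated data (all numbers
# invented). The "instrument" stands in for a state eligibility threshold:
# it shifts enrollment but affects health only through enrollment.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Unobserved sickness drives both enrollment and (worse) health.
sickness = rng.normal(size=n)

# Instrument: generosity of the state's eligibility rules, independent of
# individual sickness.
threshold = rng.normal(size=n)

# Enrollment depends on the instrument AND on the unobserved confounder
# (sicker people are more likely to end up on Medicaid).
enrolled = (0.8 * threshold + 0.8 * sickness + rng.normal(size=n) > 0).astype(float)

# True causal effect of enrollment on a health index: +1.0 (a benefit).
health = 1.0 * enrolled - 2.0 * sickness + rng.normal(size=n)

cov_dy = np.cov(enrolled, health)
cov_zy = np.cov(threshold, health)
cov_zd = np.cov(threshold, enrolled)

naive_ols = cov_dy[0, 1] / cov_dy[0, 0]    # biased by self-selection
iv_estimate = cov_zy[0, 1] / cov_zd[0, 1]  # cov(Z, Y) / cov(Z, D)

print(f"naive estimate: {naive_ols:+.2f}")    # looks harmful or near zero
print(f"IV estimate:    {iv_estimate:+.2f}")  # close to the true +1.0
```

The naive comparison is biased because sicker people select into Medicaid; the IV ratio recovers the true (beneficial) effect because the threshold is unrelated to individual sickness and influences health only by changing enrollment.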
Frakt summarizes his Medicaid-Instrumental Variable series with this conclusion:
...there is no credible evidence that Medicaid results in worse or equivalent health outcomes as being uninsured. That is Medicaid improves health. It certainly doesn’t improve health as much as private insurance, but the credible evidence to date–that using sound techniques that can control for the self-selection into the program–strongly suggests Medicaid is better for health than no insurance at all.
As hinted above, opportunities for random-assignment experiments on Medicaid effectiveness occasionally do exist. True experiments would appear to be a good check on the validity of instrumental-variable studies that purport to substitute for randomized studies. Because Oregon's Medicaid program had been over-subscribed, the state used a lottery to determine which eligible individuals were allowed to enroll in Medicaid and which were not. The random-assignment element replicates a traditional experiment, with the groups who were and were not admitted into Medicaid available for comparison of their health status over the following years. However, even a study that is random-assignment in principle can suffer from deficiencies, such as the incomplete take-up within conditions in Oregon.
Still, this past May, two-year follow-up results of the Oregon Medicaid experiment were reported. Though the health benefits of being on Medicaid (vs. no insurance) were modest, there were some differences: being on Medicaid led to a reduced probability of catastrophic health expenses, more frequent diagnosis of diabetes, and better treatment for depression. Various perspectives on the Oregon study are available here and here, as well as by searching The Incidental Economist for Oregon Medicaid (the bloggers there wrote a huge number of posts about the study). Also worth noting, briefly and in conclusion, are other quasi-experimental methods that have been used to study the effectiveness of Medicaid: difference-in-differences and regression-discontinuity designs.
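For readers who want to see the arithmetic, here is a back-of-the-envelope sketch of how a lottery with incomplete take-up, like Oregon's, can be analyzed. The rates below are invented purely for illustration; they are not the Oregon study's estimates.

```python
# Back-of-the-envelope sketch of analyzing an Oregon-style lottery with
# incomplete take-up. All rates are invented for illustration; they are
# NOT the Oregon study's estimates.

# Hypothetical rates of a bad outcome (say, catastrophic medical spending):
outcome_winners = 0.030  # among people who won the lottery (offered Medicaid)
outcome_losers = 0.045   # among people who did not win

# Hypothetical Medicaid enrollment rates in each lottery arm:
enroll_winners = 0.30
enroll_losers = 0.05

# Intention-to-treat effect: compare the groups as randomized, ignoring
# whether winners actually enrolled.
itt = outcome_winners - outcome_losers

# Wald / instrumental-variable estimate of the effect of actually enrolling:
# scale the ITT by the enrollment difference the lottery induced.
effect_of_enrollment = itt / (enroll_winners - enroll_losers)

print(f"ITT effect:           {itt:+.3f}")                  # -0.015
print(f"Effect of enrollment: {effect_of_enrollment:+.3f}")  # -0.060
```

The intention-to-treat contrast preserves the randomization; dividing by the enrollment gap converts it into an estimate of the effect of actually being on Medicaid for those whose enrollment the lottery changed.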
By now it should be clear that trying to infer whether Medicaid leaves its holders better off, worse off, or unchanged is a complex business. Still, there appears to be some common ground between analysts who, for the most part, interpret the Medicaid studies differently. Ultimately, Roy concludes his above-cited article with this acknowledgement:
There is, doubtless, a level of poverty at which Medicaid is better than nothing at all. But most people can afford to take on more responsibility for their own care, and indeed would be far better off doing so.
References
Shadish, W. R., & Cook, T. D. (2009). The renaissance of field experimentation in evaluating interventions. Annual Review of Psychology, 60, 607-629.
Monday, August 13, 2012
Causal Mediation Conference in Belgium
The Center for Statistics at Belgium's Ghent University will present the symposium "Causal Mediation Analysis" on January 28-29, 2013, at Het Pand, Gent, Belgium. Further information is available here. According to a notice on the event, "This meeting aims to bridge the gap between traditional mediation analysis building on the famous work of Baron and Kenny and state-of-the-art causal mediation analysis. The idea is to discuss recent developments between methodological researchers and to bring them to the wider research community in a non-technical way."
Sunday, March 25, 2012
Michael Nielsen Offers (Relatively) Accessible Explanation of Pearl's "Causal Calculus"
by Alan
Judea Pearl, a UCLA professor of computer science, is one of the world's leading thinkers -- if not the leading thinker -- on conceptual approaches to causal inference. He is the author of the book Causality and of numerous articles and presentations. He also operates the UCLA Causality Blog, a link to which appears in the left-hand column of the present page. On top of all this, Pearl recently garnered the Association for Computing Machinery (ACM) Turing Award for his contributions to artificial intelligence.
"Accessible" is not a word I would use to describe Pearl's writings, however. I have previously described the level of Pearl's writing as "quite frankly, well over my head." Heavy with logic symbols, Pearl's texts would, I suspect, challenge even many well-educated students of causality.
Fortunately for those of us seeking greater understanding of Pearl's ideas, Michael Nielsen has written an article trying to explain Pearl's "causal calculus" to a wider audience. I couldn't understand everything Nielsen wrote, but in relative terms, I found his exposition easier to grasp than Pearl's.
Fairly early on, Nielsen introduces the familiar example of smoking and lung cancer to discuss what conclusions can be drawn from correlational (observational) vs. randomized-controlled research designs (he seems to use the word "experimental" generically for any empirical investigation, specifying with terms such as "intervention" or "randomized controlled" when he means that participants are randomly assigned to conditions). Noting that human participants cannot ethically be randomly assigned to smoke cigarettes, Nielsen tantalizes the reader as follows:
We’ll see that even without doing a randomized controlled experiment it’s possible (with the aid of some reasonable assumptions) to infer what the outcome of a randomized controlled experiment would have been, using only relatively easily accessible experimental data, data that doesn’t require experimental intervention to force people to smoke or not, but which can be obtained from purely observational studies.
The main points I gleaned from Nielsen's piece were that (a) we can learn more than I previously thought simply from diagramming hypothetical causal relations between variables as in structural equation modeling or path analysis; and (b) one's conceptual model can be translated into conditional probability statements (i.e., given x, what is the probability of y) that potentially can be manipulated to answer causal questions without a randomized experiment. As Nielsen explains:
...Pearl had what turns out to be a very clever idea: to imagine a hypothetical world in which it really is possible to force someone to (for example) smoke, or not smoke. In particular, he introduced a conditional causal probability p(cancer|do(smoking)), which is the conditional probability of cancer in this hypothetical world. This should be read as the (causal conditional) probability of cancer given that we “do” smoking, i.e., someone has been forced to smoke in a (hypothetical) randomized experiment.
Now, at first sight this appears a rather useless thing to do. But what makes it a clever imaginative leap is that although it may be impossible or impractical to do a controlled experiment to determine p(cancer|do(smoking)), Pearl was able to establish a set of rules – a causal calculus – that such causal conditional probabilities should obey. And, by making use of this causal calculus, it turns out to sometimes be possible to infer the value of probabilities such as p(cancer|do(smoking)), even when a controlled, randomized experiment is impossible.
Returning to the lung-cancer example, it is theoretically possible that smoking leads directly to lung cancer or that an unobserved third variable causes both smoking and lung cancer (also, lung cancer may cause people to begin smoking, but that seems implausible). As Nielsen discusses, we can insert a fourth variable, namely particulate lung residue ("tar"), between smoking and lung cancer in the proposed causal sequence. This inclusion helps us partially break the connection between the hidden third variable and the other variables. Argues Nielsen: "But if the hidden causal factor is genetic, as the tobacco companies argued was the case, then it seems highly unlikely that the genetic factor caused tar in the lungs, except by the indirect route of causing those people to smoke."
Through manipulations such as the above: "the causal calculus lets us do something that seems almost miraculous: we can figure out the probability that someone would get cancer given that they are in the smoking group in a randomized controlled experiment, without needing to do the randomized controlled experiment. And this is true even though there may be a hidden causal factor underlying both smoking and cancer."
Ultimately, the manipulation of equations can lead to a formula to estimate the conditional probability of developing cancer given random assignment to a smoking condition, p(cancer|do(smoking)), as a function of "quantities which may be observed directly from experimental data, and which don’t require intervention to do a randomized, controlled experiment" (see Equation 5 in Nielsen's article). For any given problem, such non-intervention-based probabilities to plug into the equation may or may not be available.
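As a rough illustration of the kind of calculation involved (assuming Nielsen's Equation 5 amounts to the familiar "front-door" adjustment, with tar as the intermediate variable), here is a short sketch using an invented observational joint distribution:

```python
# Sketch of the "front-door" computation with tar as the intermediate variable:
#   P(cancer | do(smoking=s)) = sum_t P(t | s) * sum_s' P(cancer | t, s') * P(s')
# The joint distribution below is invented purely for illustration.
import itertools

# Invented observational joint distribution P(smoking, tar, cancer), indexed
# by (s, t, c) with 0/1 values; the probabilities sum to 1.
joint = {
    (0, 0, 0): 0.40, (0, 0, 1): 0.03, (0, 1, 0): 0.04, (0, 1, 1): 0.03,
    (1, 0, 0): 0.05, (1, 0, 1): 0.02, (1, 1, 0): 0.25, (1, 1, 1): 0.18,
}

def p(**fixed):
    """Probability that the named variables (s, t, c) take the given values."""
    total = 0.0
    for s, t, c in itertools.product((0, 1), repeat=3):
        values = {"s": s, "t": t, "c": c}
        if all(values[name] == v for name, v in fixed.items()):
            total += joint[(s, t, c)]
    return total

def p_cancer_do_smoking(s):
    """Front-door estimate of P(cancer=1 | do(smoking=s))."""
    result = 0.0
    for t in (0, 1):
        p_t_given_s = p(s=s, t=t) / p(s=s)
        adjustment = sum(p(s=s2, t=t, c=1) / p(s=s2, t=t) * p(s=s2)
                         for s2 in (0, 1))
        result += p_t_given_s * adjustment
    return result

print("P(cancer | do(smoke)):     ", round(p_cancer_do_smoking(1), 3))
print("P(cancer | do(not smoke)): ", round(p_cancer_do_smoking(0), 3))
```

Everything on the right-hand side is an ordinary observational probability; the "do" quantity on the left is recovered without any intervention, provided the assumed causal diagram (smoking causes tar, tar causes cancer, the hidden factor touches tar only through smoking) is correct.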
Nielsen concludes the article by exploring possible future directions in the study of causality. For those interested in causal inference without randomized-controlled studies, Nielsen's article is a must-read.
Tuesday, August 30, 2011
Judea Pearl and colleagues have launched the Journal of Causal Inference, to be published by the Berkeley Electronic Press.
Sunday, August 7, 2011
Causal-Inference References from SEMNET
by Alan
Over at the Structural Equation Modeling discussion listserv (SEMNET), participants lately have recommended several recent articles and resources on causal inference (and related topics) with non-experimental (correlational) research designs. For the benefit of the larger research community, I have listed these materials below.
I've looked over some of these articles and they seem to vary in the prior training assumed. Some would seem accessible for social scientists without elaborate mathematical training, whereas others refer extensively to more sophisticated math (e.g., matrix algebra). The Antonakis et al. piece, in particular, appears to provide a (mostly) non-technical overview.
Three specific topics are covered in many of the articles:
Omitted-variable bias (or specification error, more generally), which is all-important to causal inference, due to the "third-variable" issue.
Propensity-score modeling, which has already been discussed extensively on this blog (e.g., here, here, and here); a small illustrative sketch appears after this list.
The use of instrumental variables.
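As promised above, here is a minimal propensity-score sketch on simulated data with a single confounder, using inverse-probability weighting. Everything in it is invented for illustration; it is only meant to show the basic two-step logic.

```python
# A minimal propensity-score sketch on simulated data with one confounder,
# using inverse-probability weighting. Illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20_000

confounder = rng.normal(size=n)
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
# True treatment effect on the outcome is +2.0.
outcome = 2.0 * treated + 3.0 * confounder + rng.normal(size=n)

# Step 1: estimate each unit's propensity score with a logistic regression.
ps_model = sm.Logit(treated, sm.add_constant(confounder)).fit(disp=False)
ps = ps_model.predict(sm.add_constant(confounder))

# Step 2: weight each unit by the inverse probability of the treatment it
# actually received, then compare weighted group means.
weights = treated / ps + (1 - treated) / (1 - ps)

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
ipw = (np.average(outcome[treated == 1], weights=weights[treated == 1])
       - np.average(outcome[treated == 0], weights=weights[treated == 0]))

print(f"naive difference in means: {naive:.2f}")  # inflated by confounding
print(f"IPW estimate:              {ipw:.2f}")    # close to the true +2.0
```

Weighting makes the treated and untreated groups resemble each other on the confounder, so the weighted comparison recovers the true effect; of course, this works only for confounders that are actually measured and modeled.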
I am particularly impressed by the range of academic disciplines from which these articles arise. Thanks to those who contributed these items to SEMNET!
----------------------------------------------------------------------------------
Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims: A review and recommendations. The Leadership Quarterly, 21, 1086-1120.
Austin, P.C. (2011). A tutorial and case study in propensity score analysis: An application to estimating the effect of in-hospital smoking cessation counseling on mortality. Multivariate Behavioral Research, 46, 119-151.
Beguin, J., Pothier, D., & Côté, S.D. (2011). Deer browsing and soil disturbance induce cascading effects on plant communities: a multilevel path analysis. Ecological Applications, 21, 439–451.
Bollen, K.A., Kirby, J.B., Curran, P.J., Paxton, P.M., & Chen, F. (2007). Latent variable models under misspecification: Two-Stage Least Squares (2SLS) and Maximum Likelihood (ML) estimators. Sociological Methods & Research, 36, 48-86.
Bollen, K.A., & Bauer, D.J. (2004). Automating the selection of model-implied instrumental variables. Sociological Methods & Research, 32, 425-452.
Bollen, K.A., & Maydeu-Olivares, A. (2007). A polychoric instrumental variable (PIV) estimator for structural equation models with categorical variables. Psychometrika, 72, 309-326.
Clarke, K. (2005). The Phantom Menace: Omitted variable bias in econometric research. Conflict Management and Peace Science, 22, 341-352.
Clarke, K. (2009). Return of the Phantom Menace: Omitted variable bias in econometric research. Conflict Management and Peace Science, 26, 46-66.
Coffman, D.L. (2011). Estimating causal effects in mediation analysis using propensity scores. Structural Equation Modeling, 18, 357-369.
Freedman, D.A., Collier, D., Sekhon, J.S., & Stark, P.B. (Eds.). (2009). Statistical models and causal inference: A dialogue with the social sciences. Cambridge University Press. ISBN: 978-0521123909.
Frosch, C.A., & Johnson-Laird, P.N. (2011). Is everyday causation deterministic or probabilistic? Acta Psychologica, 137, 280-291.
Hancock, G. R., & Harring, J. R. (2011, May). Using phantom variables in structural equation modeling to assess model sensitivity to external misspecification. Paper presented at the Modern Modeling Methods conference, Storrs, CT. (Hancock webpage to request copy.)
Hoshino, T. (2008). A Bayesian propensity score adjustment for latent variable modeling and MCMC algorithm. Computational Statistics & Data Analysis, 52, 1413-1429.
Kirby, J.B., & Bollen, K.A. (2009). Using instrumental variable (IV) tests to evaluate model specification in latent variable structural equation models. Sociological Methodology, 39, 327–355. (Public copy)
Mahoney, J. (2008). Toward a unified theory of causality. Comparative Political Studies, 41, 412-436.
Markus, K.A. (2011). Mulaik on atomism, contraposition and causation. Quality and Quantity. Online First (subscription needed), http://www.springerlink.com/content/r754405614228w0v/
Markus, K.A. (2011). Real causes and ideal manipulations: Pearl's theory of causal inference from the point of view of psychological research methods. In P. McKay Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 240-269). Oxford, UK: Oxford University Press. (Errata)
Pearl, J. (2010). On a class of bias-amplifying variables that endanger effect estimates. Technical Report R-356. In P. Grunwald & P. Spirtes (Eds.), Proceedings of UAI, 417-424. Corvallis, OR: AUAI.
Pearl, J. (2011, August). The causal foundations of structural equation modeling. UCLA Cognitive Systems Laboratory, Technical Report (R-370), http://ftp.cs.ucla.edu/pub/stat_ser/r370.pdf. Chapter for R. H. Hoyle (Ed.), Handbook of structural equation modeling. New York: Guilford Press.
Shadish, W.R., & Steiner, P.M. (2010). A primer on propensity score analysis. Newborn & Infant Nursing Review, 10, 19-26.
Shipley, B. (2009). Confirmatory path analysis in a generalized multilevel context. Ecology, 90, 363-368.
Shipley, B. The Causal Toolbox: A collection of programs for testing or exploring causal relationships [website]. http://pages.usherbrooke.ca/jshipley/recherche/book.htm
Spector, P.E., & Brannick, M.T. (2011). Methodological urban legends: The misuse of statistical control variables. Organizational Research Methods, 14, 287-305.
Steiner, P.M., Cook, T.D., Shadish, W.R., & Clark, M.H. (2010). The importance of covariate selection in controlling for selection bias in observational studies. Psychological Methods, 15, 250-267.
Thoemmes, F.J., & Kim, E.S. (2011). A systematic review of propensity score methods in the social sciences. Multivariate Behavioral Research, 46, 90-118.
Sunday, December 19, 2010
Correlation, Causality, and Parenting Studies
Alan and Bo are both quoted in this new article from Brain, Child magazine. In addition to discussing substantive issues of causal inference, author Katy Read also probes the chain of diffusion of social-science research.
The chain begins, of course, with the investigators who conducted the research. Those who conduct correlational studies typically include a statement of limitations at the end of their articles, noting that the findings are open to alternative causal interpretations. In less-guarded moments, however, even research scientists will use phraseology that implies a preferred causal direction.
Universities, research institutes, and/or professional organizations may then issue press releases on a particular study. Ultimately, a research finding may make it into the media. At each reporting step removed from the (methodologically trained) scientific investigators, therefore, statements of caution regarding causality are less likely to appear.