Inference in epidemiologic studies is plagued by exposure misclassification. Several methods exist to correct for misclassification error. One approach is to use point estimates for the sensitivity (Sn) and specificity (Sp) of the tool used for exposure assessment. Unfortunately, we typically do not know the Sn and Sp with certainty. Bayesian methods for exposure misclassification correction allow us to model this uncertainty via distributions for Sn and Sp. These methods have been applied in epidemiologic literature, but are not considered a mainstream approach, especially in occupational epidemiology.

Here we illustrate an occupational epidemiology application of a Bayesian approach to correct for the differential misclassification error generated by estimating occupational exposures from job codes using a job exposure matrix (JEM).

We argue that analyses accounting for exposure misclassification should become more commonplace in the literature.

Heterogeneity of estimates of associations produced by epidemiologic analyses presents a challenge for the synthesis of evidence. One plausible cause of such heterogeneity is rooted in differences in the accuracy of exposure assessment, since misclassification of a binary exposure may influence both the location and the uncertainty of the effect estimate. If non-differential misclassification of exposure is ignored, the association between exposure and disease is expected (for an infinite sample size) to be attenuated towards the null. Such non-differential misclassification can reduce power and lead to missed associations as well as false positives (
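As a concrete illustration of this attenuation, the following sketch (with hypothetical prevalences and Sn/Sp values of our choosing, not estimates from any study) shows how non-differential misclassification of a binary exposure shrinks the observed odds ratio toward the null:

```python
def observed_prevalence(r, sn, sp):
    """Observed exposure prevalence implied by true prevalence r, sensitivity sn, specificity sp."""
    return r * sn + (1 - r) * (1 - sp)

def odds_ratio(p1, p0):
    """Odds ratio comparing exposure odds among cases (p1) to controls (p0)."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Hypothetical truth: 30% exposure prevalence among controls and a true OR of 2.0
r0, true_or = 0.30, 2.0
r1 = true_or * r0 / (1 - r0 + true_or * r0)  # implied true prevalence among cases

# Same imperfect Sn/Sp for cases and controls, i.e. non-differential misclassification
sn, sp = 0.7, 0.9
obs_or = odds_ratio(observed_prevalence(r1, sn, sp), observed_prevalence(r0, sn, sp))
print(round(obs_or, 2))  # 1.56: attenuated from the true OR of 2.0 toward the null
```

With perfect classification (Sn = Sp = 1) the same calculation returns the true OR of 2.0.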

Many methods have been proposed to account for misclassification in the context of retrospective case-control studies where misclassification parameters (sensitivity, Sn, and specificity, Sp) are not known exactly. Such approaches are motivated by the observation that misclassification-corrected odds ratios obtained using a particular point estimate of Sn and Sp can be highly sensitive to small differences between the actual and guessed values (

We apply Bayesian methods for correction of exposure misclassification in two population-based case-control studies of the association between occupational asthmagen (agents that can trigger asthma) exposure and autism spectrum disorder (ASD): the Study to Explore Early Development (SEED) conducted in the United States (

The first is the Study to Explore Early Development (SEED), a United States multi-site, case-control study designed to investigate risk factors, co-morbidities, and phenotypes of ASD (

The second case-control study is nested in the Danish Registers (

The specific details of the exposure assessment using asthma-specific JEM (

The goal of this analysis is to correct for exposure misclassification generated by using an asthma JEM to classify exposure, based on job codes, into a binary exposure indicator. We begin with correction for exposure misclassification in an individual-level analysis that includes adjustment for covariates in the SEED study. For purely illustrative (not inferential) purposes, we contrast this with a model in which we assume near-perfect exposure classification, as is typically done in occupational epidemiology studies (

In the second portion of the analysis, we apply misclassification correction using data from a two-by-two contingency table of maternal occupational asthmagen exposure by ASD case-control status from the Danish study, with Sn and Sp priors informed by the SEED study. (We do not conduct an individual-level analysis in the Danish study because of logistical challenges in access to individual-level data.) We set priors on Sn, Sp, and the odds ratio based on posterior distributions from the SEED model S_2 (model D_1). We also acknowledge that the performance of the JEM in SEED may have differed from that in the Danish analysis, and therefore conduct an additional analysis in which the priors on Sn and Sp derived from SEED have the same location as in model D_1 but greater variance, leading to model D_2.

Since we use individual level data for the SEED and only a contingency table as data for the Denmark Bayesian analyses, the model specifications are different. However, all Bayesian analyses that account for exposure misclassification or measurement error share common features. We must specify three models: (a) an exposure model, (b) a measurement (misclassification) model and (c) an outcome model, following Gustafson (

We express the typical implicit assumption of almost perfect classification of exposure by setting prior distributions of

We derived realistic priors on the Sn and Sp of the JEM from previous literature. We detail here the origin of these priors, which were initially elicited through expert opinion and then updated with data from two analyses. Liu et al (

Beach et al (2.5th percentiles, and 97.5th percentiles to obtain a best guess of the mode, 2.5th percentile, and 97.5th percentile for the Sn and Sp distributions for any asthmagen. The best guess of the 2.5th percentile, mode, and 97.5th percentile was 0.133, 0.381, and 0.728 for the Sn and 0.990, 0.992, and 0.994 for the Sp, respectively. We used the
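As a quick sanity check (our own sketch, not part of the original analysis), the mode of a Beta(a, b) distribution is (a − 1)/(a + b − 2), so one can verify that the Beta parameters used later in model S_2 reproduce the elicited best guesses:

```python
def beta_mode(a, b):
    """Mode of a Beta(a, b) distribution, defined for a, b > 1."""
    return (a - 1) / (a + b - 2)

# Sn prior used in model S_2: mode close to the elicited best guess of 0.381
print(round(beta_mode(3.6, 5.2), 3))   # 0.382

# Sp prior used in model S_2: mode close to the elicited best guess of 0.992
print(round(beta_mode(1000, 9.1), 3))  # 0.992
```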

We assumed a normally distributed uninformative prior with a mean of 0 and variance of 0.5 for all log-odds ratios in the outcome and measurement models, except for the intercept of the outcome model and the coefficient for sex. For these two coefficients, we set an uninformative normally distributed prior with a mean of 0 and a variance of 1. The

In the second series of Bayesian analyses, we update beliefs from the SEED study with a contingency table of data from a Denmark register study. Crude and adjusted odds ratios for the association between maternal asthmagen exposure and ASD in the Denmark register study are similar. We combine the prior on the true exposure prevalence among controls (r_{0}) and the odds ratio θ to induce a prior distribution on the true exposure prevalence among cases:

In the first model (model D_1), we assume that exposure assessment was the same in the two studies, so we set priors on sensitivity and specificity based on the posteriors from SEED, set a flat Beta(1,1) prior on r_{0}, and set an informative Gaussian prior with a mean of −0.34 and a variance of 0.33 on the log-odds ratio based on the posterior from our analysis of SEED.

The adjusted OR for maternal occupational asthmagen exposure comparing ASD cases to population controls in the SEED study was 1.39 (95% confidence interval (CI): 0.96 – 2.02), following a typical JEM exposure classification approach (

The posterior distribution of sensitivity was concentrated among smaller values compared to its prior, but priors and posteriors were similar for specificity (model S_2,

In the Danish case-control analysis, we found an inverse association between maternal asthmagen exposure and ASD using typical analytic approaches (crude OR: 0.92, 95% CI: 0.86–0.99) (

In this paper, we illustrate a Bayesian method for correcting for exposure misclassification in the context of two studies examining the association between maternal occupational asthmagen exposure and ASD in the children. Inferentially, our models suggest that there is no measurable association between maternal occupational asthmagen exposure around the time of pregnancy and ASD. This conclusion was consistent with and without exposure misclassification adjustment, although the effect size estimates and confidence limits did fluctuate. We illustrate that it is hard to predict how misclassification of exposure affects any specific analysis. We also argue that it is more difficult than commonly realized to be sure that such misclassification is non-differential with respect to the health outcome, even when exposure assessment is blind to the outcome. We describe the use of Bayesian tools that make correction for misclassification accessible to epidemiologists who collaborate with statisticians. We highlight the importance of setting defensible priors that capture knowledge that existed before the data in any given study were collected. This is particularly important when the data do not allow us to learn about all parameters of interest, as is typically the case with differential exposure misclassification. In doing so, we must take care to avoid over-confidence, as would arise from not considering a range of plausible priors. We consider these methodological matters in detail below.

Our results nonetheless illustrate key points regarding exposure misclassification in epidemiologic studies. First, we demonstrate here that the odds ratio estimate can move in unexpected ways when we allow for misclassification by the JEM to be differential. In the epidemiologic literature, authors often assert the belief that exposure misclassification is non-differential and thus that results are biased toward the null, and argue that reported associations are likely not “spurious” under this rationale. However, in practice it may be difficult to know the true extent of differential misclassification. In the SEED study, as in any case-control study, we may suspect differential recall for many self-reported variables because mothers who have a child with an ASD may recall the pregnancy differently than mothers of typically developing controls. However, mathematically, as discussed in

Second, we illustrate that even with relatively large sample sizes, posterior effect estimates are affected by misclassification bias. In our example, we observed a precise protective effect estimate of occupational asthmagen exposure on ASD risk in the Denmark study using typical analytic approaches. When we corrected for exposure misclassification based on prior evidence, the point estimate shifted in a more protective direction, but the credible intervals widened.

This illustrates the challenges faced in occupational epidemiology. The JEM we chose is among the best of its kind, yet its low sensitivity limits our ability to confidently identify new associations. The inability to improve our estimates and precision in the very large Danish study suggests that, without improving exposure measurements and assessment methods, we have perhaps reached the limit of what we can discover with tools like JEMs when the associations are weak. Though we focus on the JEM example here, this concern regarding exposure misclassification exists in any epidemiologic study where we dichotomize a continuous exposure measured with error.

One important caveat regarding these models is that we must be careful in setting prior distributions, especially given problems with non-identifiability. Recall that p_{i} = r_{i}*Sn + (1-r_{i})*(1-Sp), where p_{i} is the observed exposure prevalence and r_{i} is the true exposure prevalence; i=0 denotes controls and i=1 denotes cases. If we observe an exposure prevalence, p_{i}, of 0.21, and the specificity, Sp, is approximately 1, then there are two possible solutions for (r_{i}, Sn): (0.3, 0.7) and (0.7, 0.3). The priors will determine which solution is selected. Thus, if we place a prior on sensitivity that is concentrated on 0.3, the solution for the true exposure prevalence will converge upon 0.7. Since we have prior knowledge of the performance of the JEM but not of the true exposure prevalence in the selected samples, we place informative priors on the Sn and Sp instead of the true exposure prevalence. In our analysis, this particular identifiability issue only emerges when the specificity is close to one, but it illustrates that use of these models should be guided by knowledge. Prior knowledge may also be complicated by the fact that model parameters may not be directly transportable across different study populations, suggesting the importance of considering a few plausible priors.
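The two-solution ambiguity above can be checked directly; this small sketch simply evaluates the observed-prevalence formula at both candidate (r, Sn) pairs:

```python
def observed_prevalence(r, sn, sp):
    """p = r*Sn + (1-r)*(1-Sp): observed prevalence given true prevalence and Sn/Sp."""
    return r * sn + (1 - r) * (1 - sp)

# With Sp = 1, the pairs (r, Sn) = (0.3, 0.7) and (0.7, 0.3) are indistinguishable:
print(round(observed_prevalence(0.3, 0.7, 1.0), 2))  # 0.21
print(round(observed_prevalence(0.7, 0.3, 1.0), 2))  # 0.21
```

Only the priors can break this tie, which is why they must encode genuine knowledge about the JEM rather than about the unknown true prevalence.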

In epidemiologic studies we often present results as if there is no measurement error despite the fact that it exists. We illustrate the sensitivity of effect size estimates to exposure misclassification in a limited series of studies and show how Bayesian procedures can be readily applied to address exposure misclassification. These methods accommodate our uncertainty regarding the amount of misclassification. We illustrated these methods through use of WinBUGS and R, although there are alternative packages that can be used for Bayesian inference, including OpenBUGS (an open-source version of BUGS software) (

We thank Professor Paul Gustafson for reviewing our explanation of how “differential due to dichotomization” misclassification arises.

Funding:

ABS was funded by an Autism Speaks Dennis Weatherstone Predoctoral Fellowship (#8576) and by a training grant from the National Institute of Environmental Health Sciences (T32ES007018). The Study to Explore Early Development was funded by six cooperative agreements from the Centers for Disease Control and Prevention: Cooperative Agreement Number U10DD000180, Colorado Department of Public Health; Cooperative Agreement Number U10DD000181, Kaiser Foundation Research Institute (CA); Cooperative Agreement Number U10DD000182, University of Pennsylvania; Cooperative Agreement Number U10DD000183, Johns Hopkins University; Cooperative Agreement Number U10DD000184, University of North Carolina at Chapel Hill; and Cooperative Agreement Number U01DD000498, Michigan State University.

We illustrate the manner in which non-differential measurement error produced differential exposure misclassification below, following Section 6.1 of Gustafson (

Imagine a continuous exposure C observed as C* is dichotomized at a constant

For the moment, let us ignore the fact that we are interested in a binary outcome (such as case

If there is a relationship between Y and C (this does not have to be causal but can be due to uncontrolled confounding), then we can expect there also to exist a relationship between Y and C*. If this relationship is positive, values of both C and C* will be greater among cases than controls, i.e. both the true and observed distributions of exposure among cases will be centered on larger values than those among controls if we condition on the part of the distribution where C is above the threshold,

It is important to note that this process of
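A minimal simulation (our own sketch, with arbitrary distributional choices, not the models fitted in this paper) makes the mechanism concrete: the measurement error on the continuous scale is identical for cases and controls, yet after dichotomization the sensitivity of the binary surrogate differs by outcome status:

```python
import math
import random

random.seed(1)

# True continuous exposure C, error-prone surrogate C*, outcome Y depending on C.
# Both C and C* are dichotomized at 0; the error added to C* is the SAME for everyone
# (non-differential on the continuous scale).
n = 200_000
counts = {0: [0, 0], 1: [0, 0]}  # y -> [surrogate-positive among truly exposed, truly exposed]
for _ in range(n):
    c = random.gauss(0, 1)
    c_star = c + random.gauss(0, 0.5)                      # non-differential measurement error
    y = 1 if random.random() < 1 / (1 + math.exp(-1.5 * c)) else 0
    if c > 0:                                              # truly "exposed" after dichotomization
        counts[y][1] += 1
        if c_star > 0:
            counts[y][0] += 1

sn_controls = counts[0][0] / counts[0][1]
sn_cases = counts[1][0] / counts[1][1]
print(sn_controls, sn_cases)  # sensitivity is higher among cases
```

Because cases tend to have larger C, their truly exposed members sit further above the threshold, so the noisy C* is more likely to also exceed it: misclassification of the binary exposure is differential even though the underlying error is not.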

SEED: Model specification

For the SEED Bayesian misclassification correction, we specify three models: (a) an exposure model, (b) a measurement model and (c) an outcome model, following Gustafson. In the notation below, X* is the observed (misclassified) exposure; Sn_{0} is the sensitivity of X* among controls; Sn_{1} is the sensitivity of X* among ASD cases; Sp_{0} is the specificity of X* among controls; Sp_{1} is the specificity of X* among ASD cases; and Y represents being an ASD case (Y=1) or a control (Y=0).

Exposure model:

(b) Measurement model: differential exposure misclassification

(c) Outcome model:

The exposure model expresses the log odds of the true occupational asthmagen exposure conditional on the confounders in the model. The measurement model specifies the probability of observed (misclassified) occupational asthmagen exposure as a function of the probability of true occupational asthmagen exposure, the sensitivity, and the specificity, as well as ASD status. In the outcome model, we model the log odds of having a child with ASD as a function of the true maternal occupational asthmagen exposure status and the potential confounders. Confounders in our analysis included maternal age at child’s birth (continuous), parity (1, 2, >2), child’s sex, maternal race/ethnicity (white, black, Asian, Hispanic, multiracial or other), current maternal education (less than high school, high school, some college/trade school, bachelors, advanced degree), current total household income (<$30,000, $30,000–70,000, $70,000–110,000, >$110,000), maternal psychiatric condition history (yes, no), and active smoking during pregnancy (yes, no).

SEED: Model convergence and characterizing posteriors

We ran a complete case analysis with 463 ASD cases and 710 controls. Bayesian analysis was implemented in WinBUGS 1.4, with thinning of the accepted samples from the posterior distribution. We reviewed trace plots, autocorrelation plots, density plots, and Gelman plots to check for convergence. The WinBUGS code is included below. We summarize posterior distributions by their medians and 95% credible intervals (2.5th and 97.5th percentiles of the posterior distribution). Posterior distributions were also obtained for the Sn and the Sp of the JEM, and for the maternal occupational asthmagen exposure prevalence among controls.

Denmark: Model specification

In our contingency table, we have observed asthmagen exposure counts, X_{0} and X_{1}, out of N_{0} controls and N_{1} cases, respectively. We assume that the observed counts follow binomial distributions: X_{0} ~ Binomial(p_{0}, N_{0}) and X_{1} ~ Binomial(p_{1}, N_{1}), where p_{0} and p_{1} are the observed exposure prevalences. If r_{0} and r_{1} are the true exposure prevalences for controls and cases, respectively, then allowing for differential exposure misclassification we can calculate the true exposure prevalence among controls, r_{0} = (p_{0}+Sp_{0}−1)/(Sn_{0}+Sp_{0}−1), and the true prevalence among cases, r_{1} = (p_{1}+Sp_{1}−1)/(Sn_{1}+Sp_{1}−1). Over many MCMC iterations, we sample candidate values for Sn and Sp for cases and controls from prior distributions based on the SEED analyses and generate posterior distributions for corrected exposure prevalences, r_{0} and r_{1}. We combine r_{0} and the log odds ratio to induce a prior distribution on the true exposure prevalence among cases, r_{1} = (θ·r_{0})/(θ·r_{0}+1−r_{0}). The distributions for r_{0} and r_{1} are then reconciled with the observed exposure prevalences. Selected candidate values for sensitivity, specificity, and θ are retained for the posterior distributions if they are deemed plausible based on the likelihood of the data given the model and priors.
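The two algebraic pieces of this model, inverting the misclassification equation and inducing r_1 from r_0 and the odds ratio, can be sketched as follows (a round-trip check with hypothetical values, not the study's priors or data):

```python
def corrected_prevalence(p, sn, sp):
    """Invert p = r*Sn + (1 - r)*(1 - Sp) for the true prevalence r."""
    return (p + sp - 1) / (sn + sp - 1)

def induced_case_prevalence(r0, theta):
    """Prior on r1 induced by r0 and the odds ratio theta: theta*r0 / (theta*r0 + 1 - r0)."""
    return theta * r0 / (theta * r0 + 1 - r0)

# Forward-misclassify a hypothetical true prevalence, then correct it back
r0_true, sn, sp = 0.4, 0.7, 0.95
p0 = r0_true * sn + (1 - r0_true) * (1 - sp)        # observed prevalence, ~0.31
print(round(corrected_prevalence(p0, sn, sp), 3))   # recovers 0.4

# A true OR of 2.0 maps r0 = 0.4 to r1 ~ 0.571
print(round(induced_case_prevalence(0.4, 2.0), 3))
```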

Denmark: Model convergence and characterization of posteriors

The contingency table consisted of 5,876 exposed controls, 23,483 unexposed controls, 1,247 exposed cases, and 5,459 unexposed cases. We ran 200,000 iterations, removing the initial 10,000 iterations as a burn-in period, and selected every 100th iteration for inclusion in the posterior distribution in order to reduce autocorrelation. We generated posterior distributions for the odds ratio, sensitivity, specificity and exposure prevalence.
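The retention rule above can be sketched as follows (the exact offset of the first retained draw after burn-in is our assumption):

```python
# Drop the first 10,000 iterations as burn-in, then keep every 100th draw
kept = range(10_000, 200_000, 100)
print(len(kept))  # 1,900 retained draws form the posterior sample
```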

WinBugs code for Model S_1:

model {

for (i in 1:N) {

# Outcome model : includes the ‘true’ variable for asthmagen exposure

y[i] ~ dbern(pt[i])

logit(pt[i]) <- b0 + b1*astpregt[i] + b2*DR_AGEBIRTH_MX_C31[i] + b5*DR_PSYALL_MX[i] +

b6*dr_parity_2[i] + b7*dr_parity_3[i] + b8*dr_sexm0f1[i] + b9*dr_mrace_bla[i] + b10*dr_mrace_asi[i] + b11*dr_mrace_his[i] + b12*dr_mrace_oth[i] + b13*dr_medu_lhs[i] + b14*dr_medu_hs[i] + b15*dr_medu_sc[i] + b16*dr_medu_ad[i] + b17*dr_ti_1[i] +

b18*dr_ti_2[i] + b19*dr_ti_4[i] + b22*DR_ACTSMK_PREG[i]

# Measurement model

DR_ASTHMAGEN_PREG[i] ~ dbern(pm[i])

pm[i] <- SN0*(astpregt[i])*(1-y[i]) + (1-SP0)*(1-astpregt[i])*(1-y[i]) + SN1*(astpregt[i])*(y[i]) + (1-SP1)*(1-astpregt[i])*(y[i])

# Exposure model

astpregt[i] ~ dbern(prop[i])

logit(prop[i]) <- g1 + g2*DR_AGEBIRTH_MX_C31[i] + g5*DR_PSYALL_MX[i] +

g6*dr_parity_2[i] + g7*dr_parity_3[i] + g8*dr_sexm0f1[i] + g9*dr_mrace_bla[i] + g10*dr_mrace_asi[i] + g11*dr_mrace_his[i] +g12*dr_mrace_oth[i] + g13*dr_medu_lhs[i] + g14*dr_medu_hs[i] + g15*dr_medu_sc[i] + g16*dr_medu_ad[i] + g17*dr_ti_1[i] + g18*dr_ti_2[i] + g19*dr_ti_4[i] + g22*DR_ACTSMK_PREG[i]

}

# Calculate odds ratio

OR <- exp(b1)

# Calculate true prevalence of exposure among controls

r0 <- (p0+SP0-1)/(SN0+SP0-1)

# PRIORS (note: dnorm is parameterized as dnorm(mean, precision); precision 2 = variance 0.5)

b0 ~ dnorm(0,1)

b1 ~ dnorm(0,2)

b2 ~ dnorm(0,2)

b5 ~ dnorm(0,2)

b6 ~ dnorm(0,2)

b7 ~ dnorm(0,2)

b8 ~ dnorm(0,1)

b9 ~ dnorm(0,2)

b10 ~ dnorm(0,2)

b11 ~ dnorm(0,2)

b12 ~ dnorm(0,2)

b13 ~ dnorm(0,2)

b14 ~ dnorm(0,2)

b15 ~ dnorm(0,2)

b16 ~ dnorm(0,2)

b17 ~ dnorm(0,2)

b18 ~ dnorm(0,2)

b19 ~ dnorm(0,2)

b22 ~ dnorm(0,2)

g1 ~ dnorm(0,2)

g2 ~ dnorm(0,2)

g5 ~ dnorm(0,2)

g6 ~ dnorm(0,2)

g7 ~ dnorm(0,2)

g8 ~ dnorm(0,2)

g9 ~ dnorm(0,2)

g10 ~ dnorm(0,2)

g11 ~ dnorm(0,2)

g12 ~ dnorm(0,2)

g13 ~ dnorm(0,2)

g14 ~ dnorm(0,2)

g15 ~ dnorm(0,2)

g16 ~ dnorm(0,2)

g17 ~ dnorm(0,2)

g18 ~ dnorm(0,2)

g19 ~ dnorm(0,2)

g22 ~ dnorm(0,2)

# Near-perfect classification assumed: Sn and Sp priors concentrated near 1

SN0 ~ dbeta(1000,1)

SN1 ~ dbeta(1000,1)

SP0 ~ dbeta(1000,1)

SP1 ~ dbeta(1000,1)

}

WinBugs code for Model S_2:

model {

for (i in 1:N) {

# Outcome model : includes the ‘true’ variable for asthmagen exposure

y[i] ~ dbern(pt[i])

logit(pt[i]) <- b0 + b1*astpregt[i] + b2*DR_AGEBIRTH_MX_C31[i] + b5*DR_PSYALL_MX[i] + b6*dr_parity_2[i] + b7*dr_parity_3[i] + b8*dr_sexm0f1[i] + b9*dr_mrace_bla[i] + b10*dr_mrace_asi[i] + b11*dr_mrace_his[i] + b12*dr_mrace_oth[i] + b13*dr_medu_lhs[i] + b14*dr_medu_hs[i] + b15*dr_medu_sc[i] + b16*dr_medu_ad[i] + b17*dr_ti_1[i] + b18*dr_ti_2[i] + b19*dr_ti_4[i] + b22*DR_ACTSMK_PREG[i]

# Measurement model

DR_ASTHMAGEN_PREG[i] ~ dbern(pm[i])

pm[i] <- SN0*(astpregt[i])*(1-y[i]) + (1-SP0)*(1-astpregt[i])*(1-y[i]) + SN1*(astpregt[i])*(y[i]) + (1-SP1)*(1-astpregt[i])*(y[i])

# Exposure model

astpregt[i] ~ dbern(prop[i])

logit(prop[i]) <- g1 + g2*DR_AGEBIRTH_MX_C31[i] + g5*DR_PSYALL_MX[i] +

g6*dr_parity_2[i] + g7*dr_parity_3[i] + g8*dr_sexm0f1[i] + g9*dr_mrace_bla[i] + g10*dr_mrace_asi[i] + g11*dr_mrace_his[i] +g12*dr_mrace_oth[i] + g13*dr_medu_lhs[i] + g14*dr_medu_hs[i] + g15*dr_medu_sc[i] + g16*dr_medu_ad[i] + g17*dr_ti_1[i] + g18*dr_ti_2[i] + g19*dr_ti_4[i] + g22*DR_ACTSMK_PREG[i]

}

# Calculate odds ratio

OR <- exp(b1)

# Calculate true prevalence of exposure among controls

r0 <- (p0+SP0-1)/(SN0+SP0-1)

# PRIORS

b0 ~ dnorm(0,1)

b1 ~ dnorm(0,2)

b2 ~ dnorm(0,2)

b5 ~ dnorm(0,2)

b6 ~ dnorm(0,2)

b7 ~ dnorm(0,2)

b8 ~ dnorm(0,1)

b9 ~ dnorm(0,2)

b10 ~ dnorm(0,2)

b11 ~ dnorm(0,2)

b12 ~ dnorm(0,2)

b13 ~ dnorm(0,2)

b14 ~ dnorm(0,2)

b15 ~ dnorm(0,2)

b16 ~ dnorm(0,2)

b17 ~ dnorm(0,2)

b18 ~ dnorm(0,2)

b19 ~ dnorm(0,2)

b22 ~ dnorm(0,2)

g1 ~ dnorm(0,2)

g2 ~ dnorm(0,2)

g5 ~ dnorm(0,2)

g6 ~ dnorm(0,2)

g7 ~ dnorm(0,2)

g8 ~ dnorm(0,2)

g9 ~ dnorm(0,2)

g10 ~ dnorm(0,2)

g11 ~ dnorm(0,2)

g12 ~ dnorm(0,2)

g13 ~ dnorm(0,2)

g14 ~ dnorm(0,2)

g15 ~ dnorm(0,2)

g16 ~ dnorm(0,2)

g17 ~ dnorm(0,2)

g18 ~ dnorm(0,2)

g19 ~ dnorm(0,2)

g22 ~ dnorm(0,2)

# Realistic JEM performance: informative Sn and Sp priors derived from the literature

SN0 ~ dbeta(3.6,5.2)

SN1 ~ dbeta(3.6,5.2)

SP0 ~ dbeta(1000, 9.1)

SP1 ~ dbeta(1000, 9.1)

}

Priors for Model D_1:

data <- list(x0=5876, x1=1247, n0=29359, n1=6706, a.sn0=11.0, b.sn0=38.5,

a.sp0=718, b.sp0=8.2, a.sn1=8.3, b.sn1=17.2, a.sp1=718, b.sp1=8.2,

aa=1, bb=1, mu=-0.34, tau=3.0)

Priors for Model D_2:

data <- list(x0=5876, x1=1247, n0=29359, n1=6706, a.sn0=5.9, b.sn0=19.5,

a.sp0=46.6, b.sp0=3.4, a.sn1=4.2, b.sn1=8.1, a.sp1=46.6, b.sp1=3.4,

aa=1, bb=1, mu=-0.34, tau=3.0)

WinBugs code for Model D_1 and Model D_2:

model{

x0 ~ dbin(p0, n0)

x1 ~ dbin(p1, n1)

p0 <- r0*SN0 + (1-r0)*(1-SP0)

p1 <- r1*SN1 + (1-r1)*(1-SP1)

r0 ~ dbeta(aa,bb)

lor ~ dnorm(mu,tau)

SN0 ~ dbeta(a.sn0, b.sn0)

SN1 ~ dbeta(a.sn1, b.sn1)

SP0 ~ dbeta(a.sp0, b.sp0)

SP1 ~ dbeta(a.sp1, b.sp1)

OR <- exp(lor)

r1 <- (OR*r0)/(1-r0+OR*r0)

}

This paper demonstrates that exposure misclassification may result in false-positive conclusions.

This study is the first illustration in the epidemiologic literature of differential misclassification arising from non-differential measurement error.

This study implements Bayesian methods for correction of exposure misclassification to resolve heterogeneity of effect estimates.

This study demonstrates exposure misclassification correction using Bayesian modeling in a manner that is accessible to epidemiologists in general.

Illustration of prior and posterior distributions for analysis of SEED (Model S_2) and study nested in Denmark (Model D_1); posterior distributions from Model S_2 were used to set priors for Model D_1

Posterior medians and 95% credible intervals after allowing for differential exposure misclassification and learning from study to study about the association between exposure to asthmagens and autism spectrum disorders.

| Study/country | SEED/USA | SEED/USA | Denmark | Denmark |
|---|---|---|---|---|
| Prior on odds ratio | Flat | Flat | Informed by posterior of SEED analysis S_2 | Informed by posterior of SEED analysis S_2 |
| Assumption about exposure classification | Near-perfect | Realistic | Same as in SEED^{2} | Uncertain whether same as in SEED^{3} |
| Odds ratio | 1.37 (0.96–1.96)^{1} | 0.71 (0.23–2.42)^{1} | 0.64 (0.23–1.94) | 0.68 (0.23–1.97) |
| Sensitivity: controls | 0.999 (0.996–1.00) | 0.21 (0.15–0.36) | 0.26 (0.21–0.35) | 0.26 (0.20–0.41) |
| Sensitivity: cases | 0.999 (0.996–1.00) | 0.31 (0.21–0.54) | 0.27 (0.19–0.44) | 0.27 (0.19–0.52) |
| Specificity: controls | 0.999 (0.996–1.00) | 0.99 (0.98–1.00) | 0.99 (0.98–1.00) | 0.93 (0.83–0.98) |
| Specificity: cases | 0.999 (0.996–1.00) | 0.99 (0.98–1.00) | 0.99 (0.98–1.00) | 0.93 (0.85–0.98) |

^{1}: Adjusted for covariates as described in text for SEED.

^{2}: Priors on Sn, Sp and OR based on posterior distributions from SEED.

^{3}: Priors on Sn based on inflating variance of posterior distributions from SEED (see text); prior on OR based on posterior distribution from SEED as in model D_1.