<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Adjust P-values for Multiple Comparisons</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="R.css" />
</head><body>

<table width="100%" summary="page for p.adjust {stats}"><tr><td>p.adjust {stats}</td><td style="text-align: right;">R Documentation</td></tr></table>

<h2>Adjust P-values for Multiple Comparisons</h2>

<h3>Description</h3>

<p>Given a set of p-values, returns p-values adjusted using one of several methods.</p>

<h3>Usage</h3>

<pre>
p.adjust(p, method = p.adjust.methods, n = length(p))

p.adjust.methods
# c("holm", "hochberg", "hommel", "bonferroni", "BH", "BY",
#   "fdr", "none")
</pre>

<h3>Arguments</h3>

<table summary="R argblock">
<tr valign="top"><td><code>p</code></td>
<td>
<p>numeric vector of p-values (possibly with <code><a href="../../base/html/NA.html">NA</a></code>s). Any other <span style="font-family: Courier New, Courier; color: #666666;"><b>R</b></span> object is coerced by <code><a href="../../base/html/numeric.html">as.numeric</a></code>.</p>
</td></tr>
<tr valign="top"><td><code>method</code></td>
<td>
<p>correction method. Can be abbreviated.</p>
</td></tr>
<tr valign="top"><td><code>n</code></td>
<td>
<p>number of comparisons; must be at least <code>length(p)</code>. Only set this (to a non-default value) when you know what you are doing!</p>
</td></tr>
</table>

<h3>Details</h3>

<p>The adjustment methods include the Bonferroni correction (<code>"bonferroni"</code>), in which the p-values are multiplied by the number of comparisons. Less conservative corrections are also included: Holm (1979) (<code>"holm"</code>), Hochberg (1988) (<code>"hochberg"</code>), Hommel (1988) (<code>"hommel"</code>), Benjamini &amp; Hochberg (1995) (<code>"BH"</code> or its alias <code>"fdr"</code>), and Benjamini &amp; Yekutieli (2001) (<code>"BY"</code>). A pass-through option (<code>"none"</code>) is also included. The set of methods is contained in the <code>p.adjust.methods</code> vector for the benefit of functions that need to offer the method as an option and pass it on to <code>p.adjust</code>.
</p>

<p>The first four methods are designed to give strong control of the family-wise error rate. There seems to be no reason to use the unmodified Bonferroni correction, because it is dominated by Holm's method, which is also valid under arbitrary assumptions.
</p>

<p>Hochberg's and Hommel's methods are valid when the hypothesis tests are independent or when they are non-negatively associated (Sarkar, 1998; Sarkar and Chang, 1997). Hommel's method is more powerful than Hochberg's, but the difference is usually small and the Hochberg p-values are faster to compute.
</p>

<p>The <code>"BH"</code> (aka <code>"fdr"</code>) and <code>"BY"</code> methods of Benjamini, Hochberg, and Yekutieli control the false discovery rate, the expected proportion of false discoveries amongst the rejected hypotheses. The false discovery rate is a less stringent condition than the family-wise error rate, so these methods are more powerful than the others.
</p>

<p>Note that you can set <code>n</code> larger than <code>length(p)</code>, which means the unobserved p-values are assumed to be greater than all the observed p for the <code>"bonferroni"</code> and <code>"holm"</code> methods and equal to 1 for the other methods.
</p>
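<p>For illustration, the Bonferroni, Holm, and BH adjustments can be reproduced directly from their
textbook definitions. The following is only a sketch, using a small made-up vector of p-values;
use <code>p.adjust</code> itself in real code.
</p>

<pre>
p <- c(0.003, 0.012, 0.019, 0.047, 0.320)  # made-up p-values for illustration
n <- length(p)
o <- order(p)                              # here p happens to be sorted already

## Bonferroni: multiply each p-value by n, capped at 1
stopifnot(all.equal(pmin(1, n * p), p.adjust(p, "bonferroni")))

## Holm (step-down): i-th smallest p-value times (n - i + 1),
## made monotone with a cumulative maximum, then put back in input order
p.holm <- pmin(1, cummax((n - seq_len(n) + 1) * p[o]))[order(o)]
stopifnot(all.equal(p.holm, p.adjust(p, "holm")))

## BH (step-up): i-th smallest p-value times n/i,
## made monotone from the largest rank downwards
q <- n / seq_len(n) * p[o]
p.BH <- pmin(1, rev(cummin(rev(q))))[order(o)]
stopifnot(all.equal(p.BH, p.adjust(p, "BH")))
</pre>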
<h3>Value</h3>

<p>A numeric vector of corrected p-values (of the same length as <code>p</code>, with names copied from <code>p</code>).
</p>

<h3>References</h3>

<p>Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. <em>Journal of the Royal Statistical Society Series B</em>, <b>57</b>, 289&ndash;300. <a href="http://www.jstor.org/stable/2346101">http://www.jstor.org/stable/2346101</a>.
</p>

<p>Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. <em>Annals of Statistics</em>, <b>29</b>, 1165&ndash;1188. doi: <a href="https://doi.org/10.1214/aos/1013699998">10.1214/aos/1013699998</a>.
</p>

<p>Holm, S. (1979). A simple sequentially rejective multiple test procedure. <em>Scandinavian Journal of Statistics</em>, <b>6</b>, 65&ndash;70. <a href="http://www.jstor.org/stable/4615733">http://www.jstor.org/stable/4615733</a>.
</p>

<p>Hommel, G. (1988). A stagewise rejective multiple test procedure based on a modified Bonferroni test. <em>Biometrika</em>, <b>75</b>, 383&ndash;386. doi: <a href="https://doi.org/10.2307/2336190">10.2307/2336190</a>.
</p>

<p>Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. <em>Biometrika</em>, <b>75</b>, 800&ndash;803. doi: <a href="https://doi.org/10.2307/2336325">10.2307/2336325</a>.
</p>

<p>Shaffer, J. P. (1995). Multiple hypothesis testing. <em>Annual Review of Psychology</em>, <b>46</b>, 561&ndash;584. doi: <a href="https://doi.org/10.1146/annurev.ps.46.020195.003021">10.1146/annurev.ps.46.020195.003021</a>. (An excellent review of the area.)
</p>

<p>Sarkar, S. (1998). Some probability inequalities for ordered MTP2 random variables: a proof of the Simes conjecture. <em>Annals of Statistics</em>, <b>26</b>, 494&ndash;504. doi: <a href="https://doi.org/10.1214/aos/1028144846">10.1214/aos/1028144846</a>.
</p>

<p>Sarkar, S., and Chang, C. K. (1997). The Simes method for multiple hypothesis testing with positively dependent test statistics. <em>Journal of the American Statistical Association</em>, <b>92</b>, 1601&ndash;1608. doi: <a href="https://doi.org/10.2307/2965431">10.2307/2965431</a>.
</p>

<p>Wright, S. P. (1992). Adjusted P-values for simultaneous inference. <em>Biometrics</em>, <b>48</b>, 1005&ndash;1013. doi: <a href="https://doi.org/10.2307/2532694">10.2307/2532694</a>. (Explains the adjusted P-value approach.)
</p>

<h3>See Also</h3>

<p><code>pairwise.*</code> functions such as <code><a href="pairwise.t.test.html">pairwise.t.test</a></code>.
</p>
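<p>As a quick illustration of how these method names are passed on by higher-level functions, the
sketch below runs <code>pairwise.t.test</code> with a <code>p.adjust.method</code> chosen from
<code>p.adjust.methods</code>; the built-in <code>airquality</code> data are used purely as example input.
</p>

<pre>
## pairwise comparisons of monthly mean ozone, with BH-adjusted p-values
Month <- factor(airquality$Month, labels = month.abb[5:9])
pairwise.t.test(airquality$Ozone, Month, p.adjust.method = "BH")
</pre>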
<h3>Examples</h3>

<pre>
require(graphics)

set.seed(123)
x <- rnorm(50, mean = c(rep(0, 25), rep(3, 25)))
p <- 2*pnorm(sort(-abs(x)))

round(p, 3)
round(p.adjust(p), 3)
round(p.adjust(p, "BH"), 3)

## or all of them at once (dropping the "fdr" alias):
p.adjust.M <- p.adjust.methods[p.adjust.methods != "fdr"]
p.adj    <- sapply(p.adjust.M, function(meth) p.adjust(p, meth))
p.adj.60 <- sapply(p.adjust.M, function(meth) p.adjust(p, meth, n = 60))
stopifnot(identical(p.adj[,"none"], p), p.adj <= p.adj.60)
round(p.adj, 3)
## or a bit nicer:
noquote(apply(p.adj, 2, format.pval, digits = 3))

## and a graphic:
matplot(p, p.adj, ylab="p.adjust(p, meth)", type = "l", asp = 1, lty = 1:6,
        main = "P-value adjustments")
legend(0.7, 0.6, p.adjust.M, col = 1:6, lty = 1:6)

## Can work with NA's:
pN <- p; iN <- c(46, 47); pN[iN] <- NA
pN.a <- sapply(p.adjust.M, function(meth) p.adjust(pN, meth))
## The smallest 20 P-values all affected by the NA's:
round((pN.a / p.adj)[1:20, ], 4)
</pre>

<hr /><div style="text-align: center;">[Package <em>stats</em> version 3.6.0 <a href="00Index.html">Index</a>]</div>
</body></html>