
Improving IRIS: Please Join the Conversation

July 8, 2014

By Kacee Deener


Over the past few years, EPA has embraced a major new effort to enhance its Integrated Risk Information System (IRIS) Program to improve the scientific foundation of assessments, increase transparency, and improve productivity. IRIS is a human health assessment program that evaluates information on health effects that may result from exposure to environmental contaminants. Information from IRIS is used by EPA and others to support decisions to protect human health.

We think we’ve made terrific progress so far, and we were thrilled that the National Academies’ National Research Council (NRC) agrees. They spent the past two years reviewing IRIS, and in May 2014, they issued a report highlighting our progress and offering recommendations on keeping the progress moving forward (Assistant Administrator Lek Kadeli recently wrote about this on EPA Connect, the Agency’s leadership blog).

In their report, the NRC commended EPA for its substantive new approaches, continuing commitment to improving the process, and successes to date. They noted that the IRIS Program has moved forward steadily in planning for and implementing changes in each element of the assessment process. They also provided several recommendations which they said should be seen as building on the progress we’ve already made.

We are happy to announce that we are taking additional steps to improve the IRIS Program. In October, we will hold a public workshop to discuss specific recommendations from the NRC’s report, which fall under the three broad topics below. We invite you to provide early input by commenting on this blog post, which is the first in a new IRIS blog series geared toward generating online scientific discussion about issues relevant to the IRIS Program. We plan to use blog posts like this more in the future to get your input.

  • Topic 1 – Refining systematic review methodology, including methods to evaluate risk of bias. The NRC stated that EPA should continue to document and standardize its process for evaluating evidence and recommended that EPA develop tools for assessing risk of bias in human, animal, and mechanistic studies that are used as primary data sources. The NRC noted the limitations of available approaches for use with observational (nonrandomized) studies, and advocated exploring how methods developed for evaluating epidemiological studies differ in their application to controlled experimental in vivo and in vitro studies. They noted that these approaches will depend on the complexity and extent of data on a chemical and the resources available to EPA, and that additional methodological work might be needed to develop empirically supported evaluation criteria for animal or mechanistic studies.
  • Topic 2 – Advancing methodology to systematically evaluate and integrate evidence streams. The NRC stated that EPA should continue to improve its evidence-integration process incrementally, and to enhance its transparency. The committee provided several alternatives for organizing evidence of hazard potential and recommended that the IRIS Program should either continue with the guided-expert-judgment process for evaluating evidence, but make its application more transparent, or adopt a structured approach with rating recommendations. The committee also encouraged the IRIS Program to simultaneously expand its ability to perform quantitative modeling, specifically using Bayesian methods, to inform hazard identification.
  • Topic 3 – Combining quantitative results from multiple studies, presenting appropriate quantitative toxicity information, and advancing analyses and communication of uncertainty. The committee encouraged the IRIS Program to continue its shift toward the use of multiple studies for dose-response assessment, with increased attention to judging the relative merits of mechanistic, animal, and epidemiologic studies, and with the ultimate goal of developing formal methods for combining studies and deriving toxicity values in a transparent and replicable manner. The NRC stated that it is critical to consider systematic approaches to synthesizing and integrating the derivation of a range of toxicity values in light of variability and uncertainty. Integral to this latter goal is the NRC recommendation to develop methods to systematically conduct uncertainty analyses and to appropriately communicate uncertainty to the users of IRIS assessments. (A minimal illustration of combining study-level estimates in this spirit appears after this list.)
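
To make the kind of quantitative combination described under Topics 2 and 3 concrete, here is a minimal, purely illustrative sketch of Bayesian pooling: hypothetical study-level effect estimates and standard errors (not drawn from any IRIS assessment) are combined into a posterior distribution for a common underlying effect using a simple grid approximation. A real application would need a hierarchical model that accounts for between-study heterogeneity; the point here is only to show each study's uncertainty being carried into a combined estimate with a credible interval.

```python
import numpy as np

# Hypothetical study-level effect estimates (say, change in response per unit
# dose) and their standard errors. All values are invented for illustration.
effects = np.array([0.12, 0.20, 0.08, 0.15])
ses = np.array([0.05, 0.08, 0.04, 0.06])

# Grid approximation to the posterior for a common underlying effect theta,
# assuming a normal likelihood for each study and a vague N(0, 1) prior.
theta = np.linspace(-0.2, 0.5, 2001)
dtheta = theta[1] - theta[0]
log_post = -0.5 * theta**2  # log of the N(0, 1) prior, up to a constant
for e, s in zip(effects, ses):
    log_post = log_post - 0.5 * ((e - theta) / s) ** 2  # add each study's log-likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta  # normalize to a proper density on the grid

mean = (theta * post).sum() * dtheta
cdf = np.cumsum(post) * dtheta
lo, hi = theta[np.searchsorted(cdf, 0.025)], theta[np.searchsorted(cdf, 0.975)]
print(f"posterior mean {mean:.3f}, 95% credible interval ({lo:.3f}, {hi:.3f})")
```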

We’re interested in hearing your thoughts about the NRC recommendations above. For example, do you have ideas about how we should move forward to address the recommendations in these topic areas? Do you have scientific suggestions for the IRIS Program to consider related to these topics? Do you have suggestions for who we should ask to speak at the workshop? Please add your thoughts, ideas, and suggestions in the comments below and join the conversation!

About the Author: Kacee Deener is the Communications Director in EPA’s National Center for Environmental Assessment. She joined EPA 13 years ago and has a Master’s degree in Public Health.

Editor’s Note: Shortly after this post was originally published, we learned that some people were receiving an error message when attempting to comment. That problem has been fixed, and we apologize for the inconvenience! We would very much like to hear from anyone who has a comment to offer, so please do try again now. Many thanks!

Editor's Note: The opinions expressed here are those of the author. They do not reflect EPA policy, endorsement, or action, and EPA does not verify the accuracy or science of the contents of the blog.

Please share this post. However, please don't change the title or the content. If you do make changes, don't attribute the edited title or content to EPA or the author.

5 Responses
  1. Chuck Elkins
    July 14, 2014

    First, let me thank the IRIS staff for starting this blog. I hope it will be useful to everyone in advancing the scientific dialogue regarding IRIS.

    I have two thoughts about risk of bias that I would like to put on the table for discussion. I will make each of them a separate comment in the hope that the blog software will support a multi-party discussion; if so, separating my thoughts into discrete topics may facilitate such a discussion.

    TOPIC: RISK OF BIAS: CAN WE CHANGE THE NAME?
    Examining the risk of bias in individual studies is very important in hazard assessment, and it is a good addition to systematic review. However, before we get too committed to the term “risk of bias,” I want to suggest that while we keep doing the analysis, we change what we call it. “Risk of bias” may be a new term for some of us, but having sat through the National Toxicology Program’s webinars on systematic review last year, I now know that it is a term developed by clinical medicine practitioners in their worthy pursuit of systematic review. While we certainly owe our clinical colleagues a thank-you for their leadership in this respect, I believe we should do so without feeling obliged to adopt their terminology for use in our separate fields of toxicology and epidemiology.

    I realize that the recent National Research Council report on the IRIS program of May 2014 adopted the use of this term, but I don’t think its use is immutable. I believe the NRC is more interested in WHAT the IRIS program does than what it CALLS a particular function.

    Here’s why I think we should adopt a more generic term for the review of a study’s internal validity: it’s hard to live in the 21st century without reading in the newspaper, almost daily, about cases of “bias.” Legitimate concerns in our society about the fair treatment of people on the basis of race, gender, and sexual orientation have certainly brought the term “bias” to the forefront of our attention. The bias reported in the newspapers, and that we sometimes experience personally, may be intentional or conscious, or it may be unintentional or unconscious, but either way bias toward other individuals is seen by many, including myself, as an ethical or moral issue.

    Consequently, when I hear “risk of bias” in the hazard assessment or risk assessment arena, I have to remind myself that this use of the term “bias” is different from its use in daily life, and those unfamiliar with the term “risk of bias” may think it has something to do with moral turpitude.

    Certainly, there may be some genuinely unethical bias in the conduct of a toxicological or epidemiological study, resulting in fraud or in reporting less than the fully relevant results of a study. This kind of ethical bias should be identified and the study treated appropriately. But ascertaining the internal validity of a study goes far beyond any such ethical issues and addresses such matters as a study’s design, elements of which could have been instituted without any ethical lapse on the part of the investigator or funder but which, nevertheless, could seriously detract from the reliability of the study’s results. We obviously need to identify these elements as well in any review of the internal validity of a study, without unintentionally implying, through the use of the term “risk of bias,” that the investigator may be guilty of ethical misconduct.

    I fear that if we continue to use the term “risk of bias” as a substitute for a more neutral term such as “internal validity,” we will find the general public, as well as some investigators, misunderstanding what we are saying and concluding that some ethical lapse has occurred with regard to particular studies. This would be unnecessary “baggage” for our communication of hazard assessments to carry and could prove counterproductive.

    I THEREFORE RECOMMEND THAT EPA FORSWEAR THE USE OF THE TERM “RISK OF BIAS” AND USE INSTEAD A TERM SUCH AS “INTERNAL VALIDITY.” This change should be made now; it would be more difficult to bring about if EPA waits a year or two and by then has executed a number of “risk of bias” examinations of groups of studies.

    Ideally, the National Toxicology Program (NTP) should also follow suit by not adopting this term from the clinical investigators. In a meeting last year on systematic review I made this same recommendation, and at the time I thought NTP Director Linda Birnbaum expressed agreement with me about the need to change the terminology. Somehow this change did not get made within the pilot Systematic Review program currently being conducted by NTP, but it is not too late to change.

    What do you think? Do you think the term “risk of bias” has the potential to be seriously misunderstood by the “uninitiated”? Is “internal validity” a suitable substitute?

    • Vince Cogliano, Acting EPA IRIS Program Director
      July 31, 2014

      Thank you, Chuck, for getting the first discussion started. The term “risk of bias” came into the IRIS lexicon in April 2014 with the release of preliminary materials for the Inorganic Arsenic assessment and with the National Research Council’s “Review of the IRIS Process” released the following month. You’ve asked whether “risk of bias” has the potential to be misunderstood and whether “internal validity” might be a better term.

      “Internal validity” refers to the rigor of a study’s design and conduct so that it can credibly answer the research question it investigated. Internal validity encompasses risk of bias and some other aspects of study quality, though these concepts are not identical. “Internal validity” is also related to the complementary term “external validity,” which refers to how a study might apply, or be generalizable, to other questions. For IRIS, this means what a study implies about human environmental exposure. Some of the most critical hazard assessment questions involve external validity. Examples include how animal results apply to humans and how single-chemical experiments apply to complex exposure scenarios.

      Based on anecdotal observation, “risk of bias” does seem to be somewhat misunderstood so far, while “internal validity” and “external validity” have more of a history of use in the field of health hazard assessment. This might be because “risk of bias” is a relatively new term in our field, or it might signal a need for standardizing terms. Epidemiologists use the term “bias” without implying an ethical or moral dimension. Among the many biases they assess are selection bias, misclassification bias, recall bias, reporting bias, and confounding, which is a form of bias.

      We’d like to hear further perspectives on this open question of terminology. The discussion board is an appropriate forum and can serve as a starting point for continued discussion at the IRIS workshop on October 15-16. Let’s hear what others think!

      -Vince Cogliano, Acting IRIS Program Director

      (The opinions expressed here are those of the author. They do not reflect EPA policy, endorsement, or action, and EPA does not verify the accuracy or science of the contents of the blog.)

  2. Chuck Elkins
    July 14, 2014

    TOPIC: RISK OF BIAS: WHAT QUESTIONS SHOULD BE ASKED?

    Identifying the internal validity, or the “risk of bias,” of individual studies is, and has long been, an important aspect of any hazard assessment. However, any such analysis is no better than the questions that form its backbone. Recently, NCEA undertook a very serious and admirable “risk of bias” examination of more than 400 studies as an experiment within its arsenic assessment. Given that this evaluation is at present the most transparent and comprehensive ever undertaken by the IRIS program, it seems constructive to look in detail at the thoroughness of this examination and to suggest improvements.

    The first problem that I believe needs to be addressed is an ambiguity in terms:

    In its 2011 review of the formaldehyde IRIS assessment, the National Research Council dealt with the need to assess the quality of each study. In its latest report, the committee added extensive advice about addressing “risk of bias” as well.

    “The committee notes that assessing the quality of the study is not equivalent to assessing the risk of bias in the study. An assessment of study quality evaluates the extent to which the researchers conducted their research to the highest possible standards and how a study is reported. Risk of bias is related to the internal validity of a study and reflects study-design characteristics that can introduce a systematic error (or deviation from the true effect) that might affect the magnitude and even the direction of the apparent effect.”

    During the June bimonthly meeting on arsenic, NCEA staff led me to believe that they would prefer to use the term “internal validity” to refer to both the study-quality and the risk-of-bias examinations addressed in the NRC quotation above. This “internal” examination is to be distinguished from the examination of the “external validity” of a set of studies, another term that NCEA staff indicated they would like to use. I have no problem with this change of terms, especially because I do not like the term “risk of bias,” but I think our conversation needs to continue so that NCEA staff can be more definitive, in writing, about their definitions of “internal” and “external” validity, so that we are all on the same page.

    Given the NRC recommendation above about addressing both study quality (and design) and risk of bias, now combined in the term “internal validity” by NCEA staff, I expressed concern that the questions NCEA asked for the risk-of-bias examination of arsenic did not address some of the study-quality questions, such as study design (e.g., Good Laboratory Practices issues), that are necessary for a complete internal-validity examination.

    The questions that NCEA asked in the case of the arsenic assessment can be found on pages 1-50 and 1-89 in the following document: http://www.epa.gov/iris/publicmeeting/iris_bimonthly-jun2014/iA-preliminary_draft_materials.pdf

    Subsequent conversations with NCEA staff have led me to believe that, at least in the case of arsenic, the agency plans to ask additional questions beyond the risk-of-bias questions already asked of the studies it may use in the assessment. One of my concerns, if this proves to be the case, is that NCEA address these questions on a study-by-study basis rather than across a set of studies, because many of the as-yet-unasked questions can identify problems that are unique to a particular study and might not be identified if those questions are asked only of a group of studies.

    More importantly, I believe it would be very useful for NCEA to set a goal of developing a master set of study validity questions that it would then employ in determining both the internal and external validity of studies. These questions may need to vary significantly depending on the type of study; if so, then developing such a master set of questions will be no easy task. This may be a situation where NCEA will need to work on developing these questions over the span of several assessments. That approach may be the only practical way of developing this master list, but, if so, I believe it would be helpful for NCEA to identify this as an explicit and transparent project and invite the participation of the broader scientific community to suggest questions for NCEA’s consideration. (One possible representation of such a master set is sketched below.)
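
    To make this idea more tangible, here is a hypothetical sketch (in Python) of how a master question set might be represented and applied study-by-study so that each rating carries a published rationale. The domains, questions, ratings, and study name are all invented for illustration; this is not NCEA’s actual instrument.

    ```python
    from dataclasses import dataclass

    # A hypothetical master set of validity questions, grouped by domain.
    # Domains and questions are illustrative only, not NCEA's actual list.
    MASTER_QUESTIONS = {
        "internal validity": [
            "Was exposure assessed with a validated method?",
            "Were outcome assessors blinded to exposure status?",
            "Were likely confounders measured and adjusted for?",
        ],
        "study quality": [
            "Were Good Laboratory Practices (or equivalent) followed?",
            "Are all prespecified outcomes reported?",
        ],
        "external validity": [
            "Is the dosing regimen relevant to human environmental exposure?",
        ],
    }

    @dataclass
    class Answer:
        question: str
        rating: str     # e.g., "low concern", "some concern", "high concern"
        rationale: str  # free-text justification, published with the assessment

    def evaluate(study_id: str, answers: list) -> dict:
        """Bundle one study's ratings and rationales for transparent reporting."""
        return {"study": study_id, "answers": [vars(a) for a in answers]}

    # Applied study-by-study, every rating is traceable to a written rationale.
    record = evaluate("Hypothetical-Study-2012", [
        Answer(MASTER_QUESTIONS["internal validity"][0], "some concern",
               "Exposure inferred from residence location only."),
    ])
    print(record)
    ```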

    The need for a rigorous examination of the validity of studies is not a trivial matter. It has been my experience that many of the past disputes between stakeholders and NCEA over particular IRIS assessments have dealt with these very study-validity issues. The more that can be done to regularize this examination of individual studies and conduct it in a transparent fashion, the more likely it is that delays in the later stages of assessment development will be prevented.

    However, I would shy away from too much rigidity in the application of these questions, lest we develop a check-the-box approach to hazard assessment that ties the hands of all the scientists involved, thereby preventing the application of expert judgment and experience, not to mention common sense. A little flexibility is not antithetical to good hazard assessment, but for now, certainly more systematic rigor and transparency than we have today would be a good thing.

    Do others find the arsenic list of questions to be incomplete? What other questions do you think need to be asked? How can NCEA move forward constructively on this front? Is a master list a worthwhile goal, and if so, how does NCEA go about developing it?

  3. Ted Simon
    July 22, 2014

    Will AOPs overturn the LNT as the dominant paradigm in chemical risk assessment?

    When I arrived at the regional office of EPA in Atlanta in 1993, I knew nothing about risk assessment. My primary duty was to review site-specific risk assessments. I remain grateful to Julie Fitzpatrick, now in the Risk Assessment Forum, for showing me the ropes.

    I had been trained as a biologist and was well aware of the need for redundant mechanisms for maintaining homeostasis. Hence, I was surprised when, early on, I learned that EPA believed that as little as a single molecule of a carcinogen was sufficient to cause cancer.

    Wow, I thought, from what I know about biology and chemistry, that claim makes no sense! But with small children still at home, I needed the job and didn’t argue at the time.

    Since 2009, Dr. Edward Calabrese of UMass Amherst has published his ideas about how the linear no-threshold hypothesis (LNT) for dose-response came to be widely accepted, especially in the regulatory community. Reading these papers, I felt intellectually vindicated in my skepticism of the LNT. [1-5]

    The LNT was first put forward in 1978 by the Safe Drinking Water Committee of the National Academy—almost forty years ago and eons distant in terms of the progress of science and knowledge of mode of action. [6-9] It is appropriate that regulation lag behind science—but how much of a lag?

    Newer experiments have demonstrated a threshold for the effects of radiation on DNA. [10] There is ample experimental evidence that chemical carcinogens that damage DNA also exhibit dose thresholds, and these thresholds are likely due to DNA repair, cell cycle checkpoints, apoptosis, and other mechanisms. Redundancy of biological mechanisms ensures either that damaged cells do not survive or that the damage is fixed as a mutation by mis-repair. [11-16]
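
    For readers unfamiliar with the competing dose-response shapes at issue, the following small sketch (synthetic, invented data points; not an analysis of any cited study) fits a linear no-threshold model and a simple threshold (“hockey-stick”) model to the same data and compares the fits by AIC:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic dose-response data, invented for illustration only.
    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    resp = np.array([0.00, 0.01, 0.00, 0.02, 0.09, 0.21])  # excess response fraction

    def lnt(d, slope):
        # Linear no-threshold: excess risk rises from the first increment of dose.
        return slope * d

    def hockey_stick(d, slope, d0):
        # Threshold model: no excess risk below dose d0.
        return slope * np.clip(d - d0, 0.0, None)

    for name, model, p0 in [("LNT", lnt, [0.02]), ("threshold", hockey_stick, [0.03, 1.0])]:
        params, _ = curve_fit(model, dose, resp, p0=p0, maxfev=10000)
        rss = float(np.sum((resp - model(dose, *params)) ** 2))
        aic = len(dose) * np.log(rss / len(dose)) + 2 * len(params)  # least-squares AIC
        print(f"{name}: fitted params {np.round(params, 3)}, AIC {aic:.1f}")
    ```

    In this invented example the threshold model fits better, but that is baked into the data; the point is only to show how the two hypotheses can be compared quantitatively.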

    Evolutionarily successful organisms have developed redundant systems that provide both immediate capacity and fail-safe mechanisms to deal with many different stresses. [17-27]

    The wrong-headed notion that normal physiological processes reflect a pathological continuum toward cancer or other diseases and that exposure to a stressor will act in an additive fashion with these ongoing pathological processes has been used to justify the assumption of low dose linearity. [28-31] Statistical rather than biological arguments are used to attempt to explain away the occurrence of thresholds, and these arguments ignore the need of all organisms to maintain homeostasis. [32]

    A stronger experimental and regulatory focus on biological mechanisms would enable greater flexibility in the regulation of carcinogens without compromising human health. [33, 34]

    The LNT is incorrect for both radiation and chemical carcinogenesis, and its use has driven risk assessment practice for the past sixty years, resulting in unnecessary fear on the part of the general public and needless expenditure of resources to comply with regulations that may do more harm than good.

    More recently, advances in systems biology, high-throughput screening methods and chemical genomics suggest that the increased understanding of biological responses from these advances will be consistent with the assumption of thresholds and will also clarify the distinction between adaptive and adverse responses. Adverse outcome pathways (AOPs) incorporate details of mode of action into a framework that can potentially be used to understand the results of the newer testing strategies. [35-39]
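
    As a purely conceptual illustration of the AOP idea, the sketch below represents a pathway as an ordered chain from a molecular initiating event through key events to an adverse outcome. The example pathway is generic and invented, not an entry from any curated database:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Conceptual sketch of an adverse outcome pathway (AOP): an ordered chain
    # from a molecular initiating event (MIE) through key events to an outcome.
    @dataclass
    class AdverseOutcomePathway:
        molecular_initiating_event: str
        key_events: List[str] = field(default_factory=list)
        adverse_outcome: str = ""

        def describe(self) -> str:
            steps = [self.molecular_initiating_event, *self.key_events, self.adverse_outcome]
            return " -> ".join(steps)

    # Generic, invented example pathway for illustration only.
    aop = AdverseOutcomePathway(
        molecular_initiating_event="covalent binding to DNA",
        key_events=["mutation in a critical gene", "clonal expansion of mutated cells"],
        adverse_outcome="tumor formation",
    )
    print(aop.describe())
    ```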

    EPA has taken a leadership role, along with the OECD, in the development of AOPs and the curation of these pathways in a hyperlinked online database—think Wikipedia—called the AOP Wiki. For this focus on mode of action, EPA deserves much praise. Hopefully, these events signal a move away from the use of the LNT as a default choice for regulation.

    Ted Simon
    - author of ENVIRONMENTAL RISK ASSESSMENT: A TOXICOLOGICAL APPROACH, CRC Press

    1. Calabrese EJ. The road to linearity: why linearity at low doses became the basis for carcinogen risk assessment. Arch Toxicol. 2009, Mar;83(3):203-25.
    2. Calabrese EJ. Muller’s Nobel lecture on dose-response for ionizing radiation: ideology or science? Arch Toxicol. 2011, Jun 6;
    3. Calabrese EJ. Key studies used to support cancer risk assessment questioned. Environ Mol Mutagen. 2011, Jul 7;
    4. Calabrese EJ. Origin of the linearity no threshold (LNT) dose-response concept. Arch Toxicol. 2013, Sep;87(9):1621-33.
    5. Calabrese EJ. How the US National Academy of Sciences misled the world community on cancer risk assessment: new findings challenge historical foundations of the linear dose response. Arch Toxicol. 2013, Dec;87(12):2063-81.
    6. Meek ME, Boobis A, Cote I, Dellarco V, Fotakis G, Munn S, et al. New developments in the evolution and application of the WHO/IPCS framework on mode of action/species concordance analysis. J Appl Toxicol. 2013, Oct 10;34(1):1-18.
    7. Meek ME, Bolger M, Bus JS, Christopher J, Conolly RB, Lewis RJ, et al. A framework for fit-for-purpose dose response assessment. Regul Toxicol Pharmacol. 2013, Jul;66(2):234-40.
    8. Meek MEB, Palermo CM, Bachman AN, North CM, and Jeffrey Lewis R. Mode of action human relevance (species concordance) framework: Evolution of the Bradford Hill considerations and comparative analysis of weight of evidence. Journal of Applied Toxicology. 2014, Feb;34(6):595-606.
    9. Rhomberg L. Hypothesis-Based Weight of Evidence: An Approach to Assessing Causation and its Application to Regulatory Toxicology. Risk Anal. 2014, Apr 4;
    10. Olipitz W, Wiktor-Brown D, Shuga J, Pang B, McFaline J, Lonkar P, et al. Integrated Molecular Analysis Indicates Undetectable Change in DNA Damage in Mice after Continuous Irradiation at ~ 400-fold Natural Background Radiation. Environ Health Perspect. 2012, Aug;120(8):1130-6.
    11. Waddell WJ. Critique of dose response in carcinogenesis. Hum Exp Toxicol. 2006, Jul;25(7):413-36.
    12. Trosko JE. Induction of iPS cells and of cancer stem cells: the stem cell or reprogramming hypothesis of cancer? Anat Rec (Hoboken). 2014, Jan;297(1):161-73.
    13. Hanahan D, and Weinberg RA. The hallmarks of cancer. Cell. 2000, Jan 7;100(1):57-70.
    14. Hanahan D, and Weinberg RA. Hallmarks of cancer: the next generation. Cell. 2011, Mar 4;144(5):646-74.
    15. Rosenbluh J, Wang X, and Hahn WC. Genomic insights into WNT/β-catenin signaling. Trends Pharmacol Sci. 2014, Feb;35(2):103-9.
    16. Preston RJ. DNA reactivity as a mode of action and its relevance to cancer risk assessment. Toxicol Pathol. 2013, Feb;41(2):322-5.
    17. Pottenger LH, Becker RA, Moran EJ, and Swenberg JA. Workshop report: identifying key issues underpinning the selection of linear or non-linear dose-response extrapolation for human health risk assessment of systemic toxicants. Regul Toxicol Pharmacol. 2011, Apr;59(3):503-10.
    18. Swenberg JA, Barrow CS, Boreiko CJ, Heck HD, Levine RJ, Morgan KT, and Starr TB. Non-linear biological responses to formaldehyde and their implications for carcinogenic risk assessment. Carcinogenesis. 1983, Aug;4(8):945-52.
    19. Swenberg JA, Richardson FC, Boucheron JA, and Dyroff MC. Relationships between DNA adduct formation and carcinogenesis. Environ Health Perspect. 1985, Oct;62:177-83.
    20. Swenberg JA, and Fennell TR. DNA damage and repair in mouse liver. Arch Toxicol Suppl. 1987;10:162-71.
    21. Swenberg JA, Richardson FC, Boucheron JA, Deal FH, Belinsky SA, Charbonneau M, and Short BG. High- to low-dose extrapolation: critical determinants involved in the dose response of carcinogenic substances. Environ Health Perspect. 1987, Dec;76:57-63.
    22. Swenberg JA, La DK, Scheller NA, and Wu KY. Dose-response relationships for carcinogens. Toxicol Lett. 1995, Dec;82-83:751-6.
    23. Swenberg JA, Koc H, Upton PB, Georguieva N, Ranasinghe A, Walker VE, and Henderson R. Using DNA and hemoglobin adducts to improve the risk assessment of butadiene. Chem Biol Interact. 2001, Jun 6;135-136:387-403.
    24. Swenberg JA, Fryar-Tita E, Jeong Y-C, Boysen G, Starr T, Walker VE, and Albertini RJ. Biomarkers in toxicology and risk assessment: informing critical dose-response relationships. Chem Res Toxicol. 2008, Jan;21(1):253-65.
    25. Swenberg JA, Bordeerat NK, Boysen G, Carro S, Georgieva NI, Nakamura J, et al. 1,3-Butadiene: Biomarkers and application to risk assessment. Chem Biol Interact. 2011, Jun 6;192(1-2):150-4.
    26. Swenberg JA, Lu K, Moeller BC, Gao L, Upton PB, Nakamura J, and Starr TB. Endogenous versus exogenous DNA adducts: their role in carcinogenesis, epidemiology, and risk assessment. Toxicol Sci. 2011, Mar;120 Suppl 1:S130-45.
    27. Jarabek AM, Pottenger LH, Andrews LS, Casciano D, Embry MR, Kim JH, et al. Creating context for the use of DNA adduct data in cancer risk assessment: I. Data organization. Crit Rev Toxicol. 2009;39(8):659-78.
    28. Crump KS, Hoel DG, Langley CH, and Peto R. Fundamental carcinogenic processes and their implications for low dose risk assessment. Cancer Res. 1976, Sep;36(9 pt.1):2973-9.
    29. Crump KS, Chiu WA, and Subramaniam RP. Issues in using human variability distributions to estimate low-dose risk. Environ Health Perspect. 2010, Mar;118(3):387-93.
    30. Crump KS, Chen C, Chiu WA, Louis TA, Portier CJ, Subramaniam RP, and White PD. What role for biologically based dose-response models in estimating low-dose risk? Environ Health Perspect. 2010, May;118(5):585-8.
    31. Crump KS. Use of threshold and mode of action in risk assessment. Crit Rev Toxicol. 2011, Sep;41(8):637-50.
    32. Mayr E. The Growth of Biological Thought: Diversity, Evolution and Inheritance. Cambridge, MA: Harvard University Press; 1982.
    33. Conolly RB, Gaylor DW, and Lutz WK. Population variability in biological adaptive responses to DNA damage and the shapes of carcinogen dose-response curves. Toxicol Appl Pharmacol. 2005, Sep 9;207(2 Suppl):570-5.
    34. Lutz WK, Gaylor DW, Conolly RB, and Lutz RW. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in individual susceptibility. Toxicol Appl Pharmacol. 2005, Sep 9;207(2 Suppl):565-9.
    35. Ankley GT, Bennett RS, Erickson RJ, Hoff DJ, Hornung MW, Johnson RD, et al. Adverse outcome pathways: a conceptual framework to support ecotoxicology research and risk assessment. Environ Toxicol Chem. 2010, Mar;29(3):730-41.
    36. Krewski D, Westphal M, Andersen ME, Paoli GM, Chiu WA, Al-Zoughool M, et al. A Framework for the Next Generation of Risk Science. Environ Health Perspect. 2014, Apr 4;
    37. Vinken M. The adverse outcome pathway concept: a pragmatic tool in toxicology. Toxicology. 2013, Oct 10;312:158-65.
    38. Judson R, Houck K, Martin M, Knudsen T, Thomas RS, Sipes N, et al. In Vitro and Modelling Approaches to Risk Assessment from the U.S. Environmental Protection Agency ToxCast Programme. Basic Clin Pharmacol Toxicol. 2014, Mar 3;115(1):69-76.
    39. Kleinstreuer NC, Yang J, Berg EL, Knudsen TB, Richard AM, Martin MT, et al. Phenotypic screening of the ToxCast chemical library to classify toxic and therapeutic mechanisms. Nat Biotechnol. 2014, Jun;32(6):583-91.

  4. Chuck Elkins
    August 28, 2014

    Regarding Topic #2 for the October workshop (more transparent data integration processes):

    I believe that more transparent data integration is one of the most important recommendations the NRC has made, and it is an area that has received far less attention from NCEA than it needs since the NRC’s Chapter 7 recommendations were published.

    In its May 2014 report, the NRC suggested several approaches that NCEA might take to achieve more transparency in data integration. One was to continue with the current guided-expert-judgment process, but make its application more transparent. While the NRC suggested other, more complex alternatives, I believe that, at least in the short run, making the current process more transparent has the potential for the quickest improvements, not only for newly started assessments but also for many of the assessments in the pipeline.

    I want to suggest a way in which NCEA could bring considerably more transparency to the data integration step with only a small increase in allocated resources.

    Building on its increasingly successful “Step One” meetings early in the process, where important issues for resolution are identified and discussed, NCEA could identify, following the meeting, a set of key issues that it plans to resolve in the course of developing the assessment. NCEA could then (and here is the new part) have its staff develop discussion papers on each of these key issues during the development process.

    These issue papers, when completed in draft by the staff, could then be posted on the internet for reading and possible informal comment by the scientific community before NCEA staff and management arrive at a resolution that is then incorporated into the draft assessment.

    This may sound like a lot of extra work, but I don’t think that would actually be the case. NCEA is already committed to giving serious consideration and thought to the key issues identified in these early meetings. In addition, NCEA is organized so that no single person makes all the decisions for a draft assessment. Discussions of these key issues therefore already presumably take place among the NCEA professionals involved in drafting and management of the assessment. Having to commit to paper a discussion of the issue would bring a little more rigor to NCEA’s decision-making on these issues in these internal meetings than is provided by just oral discussions, but this step should not add much extra work to what already is, or should be, the hard work of thinking about and analyzing these issues.

    In addition, having the feedback of the larger scientific community at this drafting stage could be helpful to the staff. There would be NO NEED for the staff to respond to any comments they receive; they would just need to incorporate any useful insights into their own thinking and their subsequent decision-making about the draft assessment, which would then be put out for public comment.

    This approach would mean that, by the time the staff finishes the draft assessment for review, many of the key issues will have been vetted by the larger scientific community in an in-depth and TRANSPARENT manner. As a result, the draft assessment published for review is more likely than in the past to explain alternative interpretations and analyses of the data in a way that promotes a thorough review of the draft both by the scientific community as a whole and by the peer review panel in particular.

    This approach is one way to bring more transparency to this key data integration stage of the process relatively quickly without additional formal comment periods or much extra staff work. In fact, these issue papers might be incorporated, with little change, into the actual draft assessment itself.

    Whatever way NCEA finds to bring greater transparency and rigor to the data integration process, I hope that they give this step top priority. NCEA staff need to make these key assessment decisions in the full light of day because a lot of expert judgment will inevitably be involved and, as the NRC said in its May 2014 report on IRIS:

    “The history of subjectivity in science, the arts, and esthetics goes back a long way and still causes tension in a scientific discourse. The only tentative solution is to describe as accurately as possible the methods by which scientific and policy decisions are made, by whom, and with what expertise.” (page 26)

    In my view, the BEST transparency is transparency that brings value back to the IRIS staff in the form of possible comments while the staff is still wrestling with an issue. Transparency that only tells the public the basis for a decision after that decision has been made and published is important, but this kind of transparency brings little value to the IRIS staff and their work.
