Chapter 1

Variation, Variability, Batches and Bias in Microarray Experiments: An Introduction

Andreas Scherer

Microarray-based measurement of gene expression levels is a widely used technology in biological and medical research. The discussion around the impact of variability on the reproducibility of microarray data has captured the imagination of researchers ever since the invention of microarray technology in the mid-1990s. Variability has many sources of the most diverse kinds, and depending on how the experiment is performed it can manifest itself as a random factor or as a systematic factor, termed bias. Knowledge of the biological and medical as well as the practical background of a planned microarray experiment helps alleviate the impact of systematic sources of variability, but can hardly address random effects.

Chapter 2

Microarray Platforms and Aspects of Experimental Variation

John A Coller Jr

In the early 1990s, a number of technologies were developed that made it possible to investigate gene expression in a high-throughput fashion (Adams et al. 1991; Liang and Pardee 1992; Pease et al. 1994; Schena et al. 1995; Velculescu et al. 1995). Of these technologies, the DNA microarray has become the standard tool for investigating genome-wide gene expression levels and has revolutionized the way scientists investigate the genome. This chapter gives a brief overview of the microarray technologies in use today. It then walks through a typical experimental procedure, discussing the various sources of experimental variability and bias that may affect the data generated and the experimental results.

Chapter 3

Experimental Design

Peter Grass

The task of experimental design is to plan a study in as economical a way as possible and simultaneously to optimize the information content of the experimental data. Design concepts include the different types of variation: experimental error, biological variation, and systematic error, called bias. A batch effect refers to systematic error due to technical reasons. The conclusions drawn from the study results should be valid for an entire population, but a study is conducted on a sample with a limited number of experimental units (patients), from each of which several observational units (measurements) are taken under the same experimental condition. Measures to increase the precision and accuracy of an experiment include randomization, blocking, and replication. An increased sample size leads to greater statistical power. Blinding, crossing, choice of controls, and symmetry and simplicity of design are further means to increase precision.
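
As a simple illustration of the sample size/power relationship (not taken from the chapter; the effect size, error rate, and target power below are assumed values), the required group size for a two-group comparison can be sketched in Python with statsmodels:

    # Hypothetical illustration: per-group sample size for a two-sample t-test.
    # Effect size (Cohen's d), alpha, and target power are assumed values.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.8,  # assumed large effect
                                       alpha=0.05,       # two-sided error rate
                                       power=0.8)        # target power
    print(f"samples per group: {n_per_group:.1f}")       # about 25.5 per group

Lowering the detectable effect size or raising the target power in this sketch immediately increases the required sample size, which is the trade-off the design has to economize.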

Chapter 4

Batches and Blocks, Sample Pools and Subsamples in the Design and Analysis of Gene Expression Studies

Naomi Altman

Microarray experiments may have batch effects due to samples being collected or processed at different times, or by different labs or investigators. Linear mixed models that incorporate the known sources of variation in the experiment can be used as design and analysis tools to assess and minimize the impact of batch effects and to determine the effects of design decisions such as sample pooling or splitting.
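
A minimal sketch of such a model in Python (using statsmodels; the single-gene long-format layout and the column names are illustrative assumptions, not the chapter's notation) fits a fixed treatment effect with a random intercept per batch:

    # Minimal sketch: random-intercept mixed model for one gene.
    # Column names ('expr', 'treatment', 'batch') are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("one_gene_long_format.csv")  # hypothetical file

    # Fixed effect: treatment; random intercept: batch (processing group).
    model = smf.mixedlm("expr ~ treatment", data, groups=data["batch"])
    fit = model.fit()
    print(fit.summary())   # treatment estimate adjusted for batch
    print(fit.cov_re)      # estimated batch variance component

The estimated batch variance component is what makes it possible to compare design options: a pooling or splitting scheme that shrinks it relative to the residual variance buys power for the treatment contrast.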

Chapter 5

Aspects of Technical Bias

Martin Schumacher, Frank Staedtler, Wendell D Jones, and Andreas Scherer

Variation in microarray data can result from technical and biological sources. While the extent to which various factors contribute to this variation has been investigated extensively (Bakay et al. 2002; Boedigheimer et al. 2008; Eklund and Szallasi 2008; Fare et al. 2003; Han et al. 2004; Lusa et al. 2007; Novak et al. 2002; Zakharkin et al. 2005), the nature and extent of the signal intensity changes through which variation manifests itself in the data have not been a major focus of research. Using several real microarray data sets with known batch effects, we analyze and describe how technical variation translates into gene expression changes.

Chapter 6

Bioinformatic Strategies for cDNA-Microarray Data Processing

Jessica Fahlen, Mattias Landfors, Eva Freyhult, Max Bylesjö, Johan Trygg, Torgeir R Hvidsten, and Patrik Ryden

Pre-processing plays a vital role in cDNA-microarray data analysis. Without proper pre-processing it is likely that the biological conclusions will be misleading. However, there are many alternatives, and in order to choose a proper pre-processing procedure it is necessary to understand the effect of the different methods. This chapter discusses several pre-processing steps, including image analysis, background correction, normalization, and filtering. Spike-in data are used to illustrate how different procedures affect the ability to detect differentially expressed genes and to estimate their regulation. The results show that pre-processing has a major impact on both the experiment's sensitivity and its bias. However, general recommendations are hard to give, since pre-processing consists of several steps that are highly dependent on each other. Furthermore, it is likely that pre-processing has a major impact on downstream analyses, such as clustering and classification, and pre-processing methods should be developed and evaluated with this in mind.
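
To make the chain of steps concrete, here is a minimal numpy sketch of background correction, log transformation, quantile normalization, and intensity filtering; the thresholds and the genes-by-arrays matrix layout are illustrative assumptions, not the chapter's recommended settings:

    # Minimal pre-processing sketch on a genes x arrays intensity matrix.
    # The floor and filtering thresholds are illustrative assumptions.
    import numpy as np

    def preprocess(fg, bg, floor=1.0, min_intensity=2**6):
        signal = np.maximum(fg - bg, floor)   # background correction
        log_sig = np.log2(signal)             # log transformation

        # Quantile normalization: give every array the same distribution.
        ranks = log_sig.argsort(axis=0).argsort(axis=0)
        mean_quantiles = np.sort(log_sig, axis=0).mean(axis=1)
        normalized = mean_quantiles[ranks]

        # Filtering: drop genes whose signal never rises above background.
        keep = (signal.max(axis=1) >= min_intensity)
        return normalized[keep], keep

Because each step changes the input to the next, swapping any one method (for example, a different background correction) alters what the normalization and filtering steps see, which is exactly why the chapter argues the steps cannot be evaluated in isolation.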

Chapter 7

Batch Effect Estimation of Microarray Platforms with Analysis of Variance

Nysia I George and James J Chen

The vast amount of variation in microarray gene expression data hinders the ability to obtain meaningful and accurate analytical results and makes integrating data from independent studies very difficult. In this chapter, we assess the observed variability among microarray platforms through variance component analysis. We utilize data from the MicroArray Quality Control project to examine the variability in a study implemented at several laboratory sites and across five platforms. A two-component analysis of variance mixed model is used to partition the sources of variation in the five microarray platforms. The variance contribution of each source of systematic variation is estimated for each random effect in the experiment. We demonstrate similar inter-platform variability between many of the platforms. For the platform with the highest variation, we find a significant reduction in technical variability when the data are normalized using quantile normalization.
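
A simplified single-gene sketch of this kind of variance partitioning (using statsmodels; the column names are hypothetical, and the chapter's full model partitions additional components such as platform in the same way):

    # Sketch: estimate the between-site variance component for one gene
    # and express it as a proportion of total variance.
    # 'expr' and 'site' are hypothetical column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("one_gene_maqc_style.csv")   # hypothetical file

    fit = smf.mixedlm("expr ~ 1", data, groups=data["site"]).fit()
    var_site = float(fit.cov_re.iloc[0, 0])   # between-site variance
    var_resid = fit.scale                     # residual (within-site) variance
    prop_site = var_site / (var_site + var_resid)
    print(f"fraction of variance due to site: {prop_site:.2f}")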

Chapter 8

Variance due to Smooth Bias in Rat Liver and Kidney Baseline Gene Expression in a Large Multi-laboratory Data Set

Michael J Boedigheimer, Jeff W Chou, J Christopher Corton, Jennifer Fostel, Raegan O'Lone, P Scott Pine, John Quackenbush, Karol L Thompson, and Russell D Wolfinger

To characterize variability in baseline gene expression, the ILSI Health and Environmental Sciences Institute Technical Committee on the Application of Genomics in Mechanism Based Risk Assessment recently compiled a large data set from 536 Affymetrix arrays of rat liver and kidney samples from control groups in toxicogenomics studies. Institution was one of the most prominent sources of variability, which could be due to differences in how studies were performed or to systematic biases in the signal data. To assess the contribution of smooth bias to variance in the baseline expression data set, the robust multi-array average data were further processed by applying loess normalization, and the degree of smooth bias within each data set was characterized. Bias correction did not have a large effect on the results of analyses of the major sources of variance, but it did affect the identification of genes associated with certain study factors when significant smooth bias was present within the data set.
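
A minimal sketch of loess-based smooth-bias correction for one array against a reference (using statsmodels' lowess; the smoothing span and the construction of the reference as a median array are assumptions for illustration):

    # Sketch: remove a smooth intensity-dependent bias from one array by
    # loess-fitting M (array minus reference) against A (mean intensity).
    # frac=0.3 and the median reference are illustrative choices.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def loess_correct(array_log2, reference_log2, frac=0.3):
        m = array_log2 - reference_log2            # per-gene difference
        a = 0.5 * (array_log2 + reference_log2)    # per-gene mean intensity
        trend = lowess(m, a, frac=frac, return_sorted=False)
        return array_log2 - trend                  # subtract the smooth bias

    data = np.loadtxt("rma_signals.txt")   # hypothetical; RMA values are log2
    reference = np.median(data, axis=1)
    corrected = np.column_stack([loess_correct(data[:, j], reference)
                                 for j in range(data.shape[1])])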

Chapter 9

Microarray Gene Expression: The Effects of Varying Certain Measurement Conditions

Walter Liggett, Jean Lozach, Anne Bergstrom Lucas, Ron L Peterson, Marc L Salit, Danielle Thierry-Mieg, Jean Thierry-Mieg, and Russell D Wolfinger

This chapter explores measurements from an experiment with a batch effect induced by switching the mass of RNA analyzed from 400 ng to 200 ng. The experiment has as additional factors the RNA material (liver, kidney, and two mixtures) and the RNA source (six different animals). We show that normalization can partially correct the batch effect. On the basis of the normalized intensities, we compare, gene by gene, the size of the batch effect with the animal-to-animal variation. These comparisons show that the animal variation is larger or smaller than the batch effect depending on which gene is considered. We present gene-by-gene tests of the linearity of the microarray response. In addition, we present data analysis results that suggest further batch effects.
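
The gene-by-gene comparison can be sketched as follows (numpy only; the layout, with normalized log2 signals arranged as genes x animals matrices for each RNA mass, is a simplifying assumption, not the chapter's actual analysis):

    # Sketch: per gene, compare the batch shift (400 ng vs 200 ng means)
    # with animal-to-animal variation.
    import numpy as np

    x400 = np.loadtxt("mass400_log2.txt")   # hypothetical genes x animals
    x200 = np.loadtxt("mass200_log2.txt")   # hypothetical genes x animals

    def batch_vs_animal(x400, x200):
        # Per-gene batch effect: mean difference between the two RNA masses.
        batch_shift = x400.mean(axis=1) - x200.mean(axis=1)
        # Per-gene animal variation: pooled SD across animals within mass.
        pooled_sd = np.sqrt(0.5 * (x400.var(axis=1, ddof=1)
                                   + x200.var(axis=1, ddof=1)))
        return np.abs(batch_shift) / pooled_sd   # >1: batch effect dominates

    ratio = batch_vs_animal(x400, x200)
    print((ratio > 1).mean())   # fraction of genes where the batch effect wins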

Chapter 10

Adjusting Batch Effects in Microarray Experiments with Small Sample Size Using Empirical Bayes Methods

W Evan Johnson and Cheng Li

Nonbiological experimental variation, or batch effects, is commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers as it increases the statistical power to detect biological phenomena in studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes (at least 25) to implement. Because the majority of microarray studies are conducted with much smaller sample sizes, existing methods are not sufficient. We propose parametric and nonparametric empirical Bayes frameworks for adjusting data for batch effects that are robust to outliers in small sample sizes and perform comparably to existing methods for large samples.
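
A heavily simplified, location-only sketch of the parametric empirical Bayes idea (the full method also models batch scale effects; the variable names, the genes x samples layout, and the assumption that the data are already standardized are illustrative):

    # Simplified location-only empirical Bayes batch adjustment sketch.
    # Assumes x is standardized so gene-wise batch means estimate batch effects.
    import numpy as np

    def eb_adjust(x, batch):
        # x: genes x samples matrix; batch: per-sample labels (assumed layout).
        adjusted = x.copy()
        for b in np.unique(batch):
            cols = (batch == b)
            gamma_hat = x[:, cols].mean(axis=1)     # per-gene batch effect
            # Empirical Bayes shrinkage: pool information across genes by
            # pulling each estimate toward the across-gene prior mean.
            gamma_bar, tau2 = gamma_hat.mean(), gamma_hat.var()
            sigma2 = x[:, cols].var(axis=1) + 1e-8  # per-gene noise variance
            n = cols.sum()
            gamma_star = (n * tau2 * gamma_hat + sigma2 * gamma_bar) \
                         / (n * tau2 + sigma2)
            adjusted[:, cols] = x[:, cols] - gamma_star[:, None]
        return adjusted

The shrinkage is what makes the approach usable at small batch sizes: with few samples per batch, the per-gene estimates are noisy, and borrowing strength across genes stabilizes them.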

Chapter 11

Identical Reference Samples and Empirical Bayes Method for Cross-Batch Gene Expression Analysis

Wynn L Walker and Frank R Sharp

Nonbiological experimental error commonly occurs in microarray data collected in different batches. It is often impossible to compare groups of samples from independent experiments because batch effects confound true gene expression differences. Existing methods for adjusting for batch effects require that samples from all biological groups be represented in every batch. In this chapter we review an experimental design, along with a generalized empirical Bayes approach, that adjusts for cross-experimental batch effects and therefore allows direct comparisons of gene expression values between biological groups drawn from independent experiments. The feature of this experimental design that enables such comparisons is that identical replicate reference samples are included in each batch of every experiment. We discuss the clinical applications of our approach as well as its advantages relative to meta-analysis approaches and other strategies for batch adjustment.
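
The core of the reference-sample design can be sketched in a few lines (numpy; the matrix layout and the boolean mask flagging the reference replicates are illustrative assumptions, and the chapter's full method adds empirical Bayes shrinkage on top of this alignment):

    # Sketch: align batches using identical reference replicates run in
    # every batch. x: genes x samples; batch: labels; is_ref: boolean mask
    # marking the replicate reference samples (layout is an assumption).
    import numpy as np

    def reference_adjust(x, batch, is_ref):
        grand_ref = x[:, is_ref].mean(axis=1)   # overall reference profile
        adjusted = x.copy()
        for b in np.unique(batch):
            cols = (batch == b)
            batch_ref = x[:, cols & is_ref].mean(axis=1)
            # Shift every sample in the batch so its reference profile
            # matches the overall reference profile.
            adjusted[:, cols] -= (batch_ref - grand_ref)[:, None]
        return adjusted

Because the correction is anchored entirely on the replicated reference samples, no biological group needs to appear in more than one batch, which is what distinguishes this design from methods that require all groups in every batch.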

Chapter 12

Principal Variance Components Analysis: Estimating Batch Effects in Microarray Gene Expression Data

Jianying Li, Pierre R Bushel, Tzu-Ming Chu, and Russell D Wolfinger

Batch effects are present in microarray experiments due to poor experimental design or when data are combined from different studies. To assess the magnitude of batch effects, we present a novel hybrid approach known as principal variance components analysis (PVCA). The approach leverages the strengths of two popular data analysis methods, principal components analysis and variance components analysis, and integrates them into a novel algorithm. It can be used as a screening tool to determine the sources of variability and, using the eigenvalues associated with the corresponding eigenvectors as weights, to quantify the magnitude of each source of variability (including each batch effect) as a proportion of the total variance. Although PVCA is a generic approach for quantifying the proportion of variation attributable to each effect, it is a handy tool for estimating batch effects before and after batch normalization.
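
The algorithm can be sketched as follows (a simplified version using numpy and statsmodels with a single random batch effect; the full method fits several random effects and their interactions for each component):

    # Simplified PVCA sketch: PCA on samples, then a variance-component fit
    # per principal component, combined with eigenvalue weights.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def pvca_batch(x, batch, n_pcs=3):
        # x: genes x samples; batch: per-sample labels (layout assumed).
        centered = x - x.mean(axis=1, keepdims=True)
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        weights = (s[:n_pcs] ** 2) / (s ** 2).sum()   # variance explained

        props = []
        for k in range(n_pcs):
            df = pd.DataFrame({"pc": vt[k], "batch": batch})
            fit = smf.mixedlm("pc ~ 1", df, groups=df["batch"]).fit()
            v_b = float(fit.cov_re.iloc[0, 0])
            props.append(v_b / (v_b + fit.scale))     # batch share of this PC
        # Eigenvalue-weighted average: overall batch share of total variance.
        return float(np.dot(weights, props) / weights.sum())

Running such a screen before and after a batch normalization step shows directly how much of the total variance the batch effect absorbed and how much of it the correction removed.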

Chapter 13

Batch Profile Estimation, Correction, and Scoring

Tzu-Ming Chu, Wenjun Bao, Russell S Thomas, and Russell D Wolfinger

Batch effects increase the variation in expression data and hence reduce the statistical power for investigating biological effects. When the proportion of variation associated with the batch effect is high, it is desirable to remove the batch effect from the data. A robust batch effect removal method should be easily applicable to new batches. This chapter discusses a simple but robust grouped-batch-profile (GBP) normalization method that comprises three steps: batch profile estimation, correction, and scoring. Genes with similar expression patterns across batches are grouped. The method assumes the availability of control samples in each batch, and the batch profile of each group is estimated from them by analysis of variance. Batch correction and scoring are based on the estimated profiles. A mouse lung tumorigenicity data set is used to illustrate GBP normalization through cross-validation on 84 predictive models. On average, cross-validated predictive accuracy increases significantly after GBP normalization.
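
A stripped-down sketch of the profile-estimation and correction steps (numpy; quantile binning on mean control expression is a crude stand-in for the chapter's pattern-based gene grouping, and the layout is an assumption):

    # Stripped-down grouped-batch-profile sketch. Control samples in each
    # batch anchor the profile of every gene group.
    import numpy as np

    def gbp_correct(x, batch, is_control, n_groups=10):
        # x: genes x samples; is_control: boolean mask of control samples.
        ctrl_mean = x[:, is_control].mean(axis=1)
        edges = np.quantile(ctrl_mean, np.linspace(0, 1, n_groups + 1)[1:-1])
        groups = np.digitize(ctrl_mean, edges)   # crude gene grouping
        adjusted = x.copy()
        for b in np.unique(batch):
            cols = (batch == b)
            for g in np.unique(groups):
                rows = (groups == g)
                # Batch profile for this group: offset of its controls in
                # this batch relative to the overall control level.
                offset = (x[np.ix_(rows, cols & is_control)].mean()
                          - ctrl_mean[rows].mean())
                adjusted[np.ix_(rows, cols)] -= offset
        return adjusted

Because the profile is estimated from control samples alone, the same correction can be computed for any new batch that carries the controls, which is the "easily applicable to new batches" property the chapter emphasizes.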

Chapter 14

Visualization of Cross-Platform Microarray Normalization

Xuxin Liu, Joel Parker, Cheng Fan, Charles M Perou, and J S Marron

This chapter considers combining different microarray data sets, even across platforms. The larger sample sizes created in this way have the potential to increase statistical power generally. Distance weighted discrimination (DWD) has been shown to provide this improvement in some cases. We replicate earlier results indicating that DWD provides an effective approach to cross-platform batch adjustment, using both novel and conventional visualization methods. Improved statistical power from combining data is demonstrated for a new DWD-based hypothesis test. This result appears to contradict a number of earlier studies, which suggested that such data combination is not possible. The contradiction is resolved by understanding the differences between gene-by-gene analysis and the more complete and insightful multivariate approach of DWD.
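
The adjustment step can be sketched as follows (numpy; real DWD finds the separating direction by solving a second-order cone program, for which the difference-of-means direction below is only a crude stand-in):

    # Sketch of direction-based batch adjustment. The difference-of-means
    # direction is a crude stand-in for the DWD separating direction.
    import numpy as np

    def direction_adjust(x, batch):
        # x: genes x samples; batch: two-level labels (layout assumed).
        b0, b1 = np.unique(batch)
        w = x[:, batch == b1].mean(axis=1) - x[:, batch == b0].mean(axis=1)
        w /= np.linalg.norm(w)                      # unit batch direction
        adjusted = x.copy()
        for b in (b0, b1):
            cols = (batch == b)
            shift = w @ x[:, cols].mean(axis=1)     # batch mean projection
            adjusted[:, cols] -= shift * w[:, None] # remove along direction
        return adjusted

The key multivariate idea survives the simplification: each batch is translated as a whole along one direction, so gene-gene relationships within a batch are preserved rather than being rescaled gene by gene.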

Chapter 15

Toward Integration of Biological Noise: Aggregation Effect in Microarray Data Analysis

Lev Klebanov and Andreas Scherer

The aggregation effect in microarray data analysis distorts the correlations between gene expression levels and, in a sense, plays the role of technical noise. This aspect is especially important in network and association inference analyses. However, it is possible to construct statistical estimators that take aggregation into account and generate a 'clean' covariance of expression levels. Based on such an estimator, we provide a method to find gene pairs with essentially different correlation structures.

Chapter 16

Potential Sources of Spurious Associations and Batch Effects in Genome-Wide Association Studies

Huixiao Hong, Leming Shi, James C Fuscoe, Federico Goodsaid, Donna Mendrick, and Weida Tong

Genome-wide association studies (GWAS) use dense maps of single nucleotide polymorphisms (SNPs) that cover the entire human genome to search for genetic markers with different allele frequencies between cases and controls. Given the complexity of GWAS, it is not surprising that only a small portion of the associated SNPs in initial GWAS results were successfully replicated in the same populations. Each of the steps in a GWAS has the potential to generate spurious associations. In addition, there are batch effects in the genotyping experiments and in genotype calling that can cause both Type I and Type II errors. Decreasing or eliminating the various sources of spurious associations and batch effects is vital for reliably translating GWAS findings to clinical practice and personalized medicine. Here we review and discuss the variety of sources of spurious associations and batch effects in GWAS and provide possible solutions to these problems.

Chapter 17

Standard Operating Procedures in Clinical Gene Expression Biomarker Panel Development

Khurram Shahzad, Anshu Sinha, Farhana Latif, and Mario C Deng

The development of genomic biomarker panels in the context of personalized medicine aims to address biological variation (disease etiology, gender, etc.) while at the same time controlling technical variation (noise). Whether biomarker trials are undertaken by single clinical/laboratory units or in multicenter collaborations, technical noise can be addressed (though not completely eliminated) by the implementation of Standard Operating Procedures (SOPs). Once agreed upon by the study members, SOPs have to become obligatory. We illustrate the usefulness of SOPs by describing the development of a genomic biomarker panel for the absence of acute cardiac allograft rejection resulting from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) study. This biomarker panel is the first Food and Drug Administration-approved genomic biomarker test in the history of transplantation medicine.

Chapter 18

Data, Analysis, and Standardization

Gabriella Rustici, Andreas Scherer, and John Quackenbush

‘Reporting noise’ is generated when data and their metadata are described, stored, and exchanged. Such noise can be minimized by developing and adopting data reporting standards, which are fundamental to the effective interpretation, analysis and integration of large data sets derived from high-throughput studies. Equally crucial is the development of experimental standards such as quality metrics and a consensus on data analysis pipelines, to ensure that results can be trusted, especially in clinical settings. This chapter provides a review of the initiatives currently developing and disseminating computational and experimental standards in biomedical research.