The Design and Analysis of Computer Experiments

Thomas J. Santner, Brian J. Williams, William I. Notz

 

Publisher: Springer-Verlag, 2019
ISBN: 9781493988471, 446 pages
2nd edition
Format: PDF, OL
Copy protection: watermark
Price: 139,09 EUR

Table of Contents

Preface to the Second Edition  7
Preface to the First Edition  9
Contents  11

1 Physical Experiments and Computer Experiments  16
1.1 Introduction  16
1.2 Examples of Computer Simulator Models  18
1.3 Some Common Types of Computer Experiments  35
1.3.1 Homogeneous-Input Simulators  36
1.3.2 Mixed-Input Simulators  37
1.3.3 Multiple Outputs  39
1.4 Organization of the Remainder of the Book  40

2 Stochastic Process Models for Describing Computer Simulator Output  42
2.1 Introduction  42
2.2 Gaussian Process Models for Real-Valued Output  45
2.2.1 Introduction  45
2.2.2 Some Correlation Functions for GP Models  49
2.2.3 Using the Correlation Function to Specify a GP with Given Smoothness Properties  56
2.3 Increasing the Flexibility of the GP Model  58
2.3.1 Hierarchical GP Models  61
2.3.2 Other Nonstationary Models  63
2.4 Models for Output Having Mixed Qualitative and Quantitative Inputs  64
2.5 Models for Multivariate and Functional Simulator Output  72
2.5.1 Introduction  72
2.5.2 Modeling Multiple Outputs  74
2.5.3 Other Constructive Models  77
2.5.4 Models for Simulators Having Functional Output  78
2.6 Chapter Notes  80

3 Empirical Best Linear Unbiased Prediction of Computer Simulator Output  82
3.1 Introduction  82
3.2 BLUP and Minimum MSPE Predictors  83
3.2.1 Best Linear Unbiased Predictors  83
3.2.2 Best MSPE Predictors  85

3.2.3 Some Properties of ŷ(x_te)  90

3.3 Empirical Best Linear Unbiased Prediction of Univariate Simulator Output  91
3.3.1 Introduction  91
3.3.2 Maximum Likelihood EBLUPs  92
3.3.3 Restricted Maximum Likelihood EBLUPs  93
3.3.4 Cross-Validation EBLUPs  94
3.3.5 Posterior Mode EBLUPs  95
3.3.6 Examples  95
3.4 A Simulation Comparison of EBLUPs  99
3.4.1 Introduction  99
3.4.2 A Selective Review of Previous Studies  100
3.4.3 A Complementary Simulation Study of Prediction Accuracy and Prediction Interval Accuracy  103
3.4.3.1 Performance Measures  104
3.4.3.2 Function Test Beds  104
3.4.3.3 Prediction Simulations  106
3.4.4 Recommendations  110
3.5 EBLUP Prediction of Multivariate Simulator Output  110
3.5.1 Optimal Predictors for Multiple Outputs  111
3.5.2 Examples  113
3.6 Chapter Notes  122
3.6.1 Proof That (3.2.7) Is a BLUP  122
3.6.2 Derivation of Formula 3.2.8  124
3.6.3 Implementation Issues  124
3.6.4 Software for Computing EBLUPs  127
3.6.5 Alternatives to Kriging Metamodels and Other Topics  128
3.6.5.1 Alternatives to Kriging Metamodels  128
3.6.5.2 Testing the Covariance Structure  129

4 Bayesian Inference for Simulator Output  130
4.1 Introduction  130
4.2 Inference for Conjugate Bayesian Models  132

4.2.1 Posterior Inference for Model (4.1.1) When θ = β  132
4.2.1.1 Posterior Inference About β  134
4.2.1.2 Predictive Inference at a Single Test Input x_te  134
4.2.2 Posterior Inference for Model (4.1.1) When θ = (β, λ_Z)  138

4.3 Inference for Non-conjugate Bayesian Models  143
4.3.1 The Hierarchical Bayesian Model and Posterior  144
4.3.2 Predicting Failure Depths of Sheet Metal Pockets  147
4.4 Chapter Notes  151
4.4.1 Outline of the Proofs of Theorems 4.1 and 4.2  151
4.4.2 Eliciting Priors for Bayesian Regression  157
4.4.3 Alternative Sampling Algorithms  157
4.4.4 Software for Computing Bayesian Predictions  157

5 Space-Filling Designs for Computer Experiments  159
5.1 Introduction  159
5.1.1 Some Basic Principles of Experimental Design  159
5.1.2 Design Strategies for Computer Experiments  162
5.2 Designs Based on Methods for Selecting Random Samples  164
5.2.1 Designs Generated by Elementary Methods for Selecting Samples  165
5.2.2 Designs Generated by Latin Hypercube Sampling  166
5.2.3 Some Properties of Sampling-Based Designs  171
5.3 Latin Hypercube Designs with Additional Properties  174
5.3.1 Latin Hypercube Designs Whose Projections Are Space-Filling  174
5.3.2 Cascading, Nested, and Sliced Latin Hypercube Designs  178
5.3.3 Orthogonal Latin Hypercube Designs  181
5.3.4 Symmetric Latin Hypercube Designs  184
5.4 Designs Based on Measures of Distance  186
5.5 Distance-Based Designs for Non-rectangular Regions  195
5.6 Other Space-Filling Designs  198
5.6.1 Designs Obtained from Quasi-Random Sequences  198
5.6.2 Uniform Designs  200
5.7 Chapter Notes  205

5.7.1 Proof That T_L Is Unbiased and of the Second Part of Theorem 5.1  205

5.7.2 The Use of LHDs in a Regression Setting  210
5.7.3 Other Space-Filling Designs  211
5.7.4 Software for Constructing Space-Filling Designs  212
5.7.5 Online Catalogs of Designs  214

6 Some Criterion-Based Experimental Designs  215
6.1 Introduction  215
6.2 Designs Based on Entropy and Mean Squared Prediction Error Criterion  216
6.2.1 Maximum Entropy Designs  216
6.2.2 Mean Squared Prediction Error Designs  220
6.3 Designs Based on Optimization Criteria  226
6.3.1 Introduction  226
6.3.2 Heuristic Global Approximation  227
6.3.3 Mockus Criteria Optimization  228
6.3.4 Expected Improvement Algorithms for Optimization  230
6.3.4.1 Schonlau and Jones Expected Improvement Algorithm  230
6.3.4.2 Picheny Expected Quantile Improvement Algorithm  236
6.3.4.3 Williams Environmental Variable Mean Optimization  237
6.3.5 Constrained Global Optimization  239
6.3.6 Pareto Optimization  243
6.3.6.1 Basic Pareto Optimization Algorithm  245
6.4 Other Improvement Criterion-Based Designs  250
6.4.1 Introduction  250
6.4.2 Contour Estimation  251
6.4.3 Percentile Estimation  252
6.4.3.1 Approach 1: A Confidence Interval-Based Criterion  253
6.4.3.2 Approach 2: A Hypothesis Testing-Based Criterion  254
6.4.4 Global Fit  255
6.5 Chapter Notes  256
6.5.1 The Hypervolume Indicator for Approximations to Pareto Fronts  257
6.5.2 Other MSPE-Based Optimal Designs  258
6.5.3 Software for Constructing Criterion-Based Designs  259

7 Sensitivity Analysis and Variable Screening  261
7.1 Introduction  261
7.2 Classical Approaches to Sensitivity Analysis  263
7.2.1 Sensitivity Analysis Based on Scatterplots and Correlations  263
7.2.2 Sensitivity Analysis Based on Regression Modeling  263
7.3 Sensitivity Analysis Based on Elementary Effects  266
7.4 Global Sensitivity Analysis  273
7.4.1 Main Effect and Joint Effect Functions  273
7.4.2 A Functional ANOVA Decomposition  278
7.4.3 Global Sensitivity Indices  281
7.5 Estimating Effect Plots and Global Sensitivity Indices  288
7.5.1 Estimating Effect Plots  289
7.5.2 Estimating Global Sensitivity Indices  296
7.6 Variable Selection  300
7.7 Chapter Notes  305
7.7.1 Designing Computer Experiments for Sensitivity Analysis  305
7.7.2 Orthogonality of Sobol´ Terms  306
7.7.3 Weight Functions g(x) with Nonindependent Components  307
7.7.4 Designs for Estimating Elementary Effects  308
7.7.5 Variable Selection  308
7.7.6 Global Sensitivity Indices for Functional Output  308
7.7.7 Software  311

8 Calibration  312
8.1 Introduction  312
8.2 The Kennedy and O'Hagan Calibration Model  314
8.2.1 Introduction  314
8.2.2 The KOH Model  314
8.2.2.1 Alternative Views of Calibration Parameters  317
8.3 Calibration with Univariate Data  320

8.3.1 Bayesian Inference for the Calibration Parameter θ  321

8.3.2 Bayesian Inference for the Mean Response ζ(x) of the Physical System  321

8.3.3 Bayesian Inference for the Bias δ(x) and Calibrated Simulator E[Y^s(x, θ) | Y]  322

8.4 Calibration with Functional Data  333
8.4.1 The Simulation Data  335
8.4.2 The Experimental Data  340
8.4.3 Joint Statistical Models and Log Likelihood Functions  347
8.4.3.1 Joint Statistical Model That Allows Simulator Discrepancy  347
8.4.3.2 Joint Statistical Model Assuming No Simulator Discrepancy  355
8.5 Bayesian Analysis  359
8.5.1 Prior and Posterior Distributions  359
8.5.2 Prediction  371
8.5.2.1 Emulation of the Simulation Output Using Only Simulator Data  374
8.5.2.2 Emulation of the Calibrated Simulator Output Modeling the Simulator Bias  377
8.5.2.3 Emulation of the Calibrated Simulation Output Assuming No Simulator Bias  383
8.6 Chapter Notes  385
8.6.1 Special Cases of Functional Emulation and Prediction  385
8.6.2 Some Other Perspectives on Emulation and Calibration  387
8.6.3 Software for Calibration and Validation  391

A List of Notation  393
A.1 Abbreviations  393
A.2 Symbols  394

B Mathematical Facts  397
B.1 The Multivariate Normal Distribution  397
B.2 The Gamma Distribution  399
B.3 The Beta Distribution  400
B.4 The Non-central Student t Distribution  400
B.5 Some Results from Matrix Algebra  401

C An Overview of Selected Optimization Algorithms  404
C.1 Newton/Quasi-Newton Algorithms  405
C.2 Direct Search Algorithms  406
C.2.1 Nelder–Mead Simplex Algorithm  406
C.2.2 Generalized Pattern Search and Surrogate Management Framework Algorithms  407
C.2.3 DIRECT Algorithm  409
C.3 Genetic/Evolutionary Algorithms  409
C.3.1 Simulated Annealing  409
C.3.2 Particle Swarm Optimization  410

D An Introduction to Markov Chain Monte Carlo Algorithms  411

E A Primer on Constructing Quasi-Monte Carlo Sequences  415

References  417
Author Index  435
Subject Index  441