Parameter Estimation in Engineering and Science
APPENDIX D  SOME ESTIMATION PROGRAMS

Gauss method. SSQMIN: This program uses the Powell procedure and is discussed in reference D3.

REFERENCES

D1. Himmelblau, D. M., Process Analysis by Statistical Methods, John Wiley & Sons, Inc., New York, 1970.
D2. Bard, Y., Nonlinear Parameter Estimation, Academic Press, Inc., New York, 1974.
D3. Kuester, J. L., and Mize, J. H., Optimization Techniques with Fortran, McGraw-Hill Book Co., New York, 1973.
D4. Miller, J. R., On-Line Analysis for Social Scientists, MAC-TR-40, Project MAC, Massachusetts Institute of Technology, Cambridge, Mass., 1967.
D5. Hilsenrath, J., Ziegler, G., Messina, C. G., Walsh, P. J., and Herbold, R., OMNITAB, A Computer Program for Statistical and Numerical Analysis, Nat. Bur. of Std. Handbook 101, U.S. Government Printing Office, Washington, D.C., 1966; reissued Jan. 1968, with corrections.

INDEX

Abbott, G. L., 475, 480
Abramowitz, M., 77
Al-Araji, S., 263, 319
Analysis of covariance, 131
Analysis of variance, 130, 131, 175, 178
Aris, R., 414
Arkin, H., 78
Assumptions, Gauss-Markov, 134, 232
  standard, 134, 228, 229
  violation of, 185-204, 290-319, 393, 400, 401, 459, 460
Atkinson, A. C., 435, 438, 439, 474
Autocovariance, 59

Bacon, D. W., 319, 414
Badavas, P. C., 432, 474
Bard, Y., 4, 24, 335, 362, 364, 375, 386, 411, 414, 472, 475, 493, 494
Bayesian estimation, 97-101. See also Maximum a posteriori estimation
Bayes' theorem, 46, 47, 160, 164, 270
Beale, E. M. L., 414
Beck, J. V., 263, 319, 415, 474, 475, 494
Beveridge, G. S. G., 338, 414, 487
Bevington, P. R., 24
Beyer, W. H., 78, 319
Bias, 89
Bias error, 180
Bonacina, C., 474
Booth, G. W., 414
Box, G. E. P., 24, 114, 129, 162, 204, 229, 232, 319, 359, 363, 364, 369-376, 380, 386, 414, 415, 419, 432, 438, 439, 469, 470, 474, 475
Box, M. J., 4
Box-Kanemasu interpolation method, 362-371, 381, 494
Box-Muller transformation, 126
Brownlee, K. A., 204, 226
Bryson, A. E., Jr., 24
Burington, R. S., 78, 204, 319, 415
Butler, C. P., 475, 480

Cannon, J. R., 474
Carslaw, H. S., 474
Central limit theorem, 64, 67, 186
Chebyshev's inequality, 62
Chi-squared test, 268, 269
Cochran's theorem, 176
Coefficient of multiple determination, 173-175
Colored errors, see Errors, correlated
Colton, R. R., 78
Comini, G., 474
Computer programs, 493, 494
Confidence interval, 102-108, 290, 380, 381
  approximate, 380-386
  matrix formulation, 290
  mean, 102, 106
  points on regression line, 184
  standard deviation, 105
Confidence region, 300, 301, 380-386
  ill-determined, 385
  known error covariance matrix, 290-298
  likelihood ratio, 383, 385, 386
  matrix formulation, 290-301
  minimum, 419
  nonlinear, 378-386
  probabilities of, 294
  σ² unknown, 299-301
Consistency, 90, 186
Correlation coefficient, 56, 57
Correlation matrix, approximate, 379, 380
Cost, 125
Covariance, 56, 57
Covariance matrix, 120, 222
  autoregressive errors, 322
  least squares, 238-240, 489
  maximum a posteriori, 272, 489
  maximum likelihood, 259, 489
  minimum, 239
  parameters, 452
    approximate, 378, 379
  for predicted points, on MAP regression line, 489
    on ML regression line, 260, 489
    for OLS, 239, 489
Covariance matrix of errors, uncertainty of, 273, 274
Cramer-Rao or Cramer-Frechet-Rao lower bound, 91, 433
Cross covariance, 59

Daniel, C., 226, 319
Data, see Measurements
Davies, M., 376, 377, 414
Degrees of freedom, 73, 75, 76, 176
Density function, probability, 37
Dependence, linear, 22
Dependent events, 44
Design, experimental, see Experiments, optimal
Determinant, 215-217, 219
Deutsch, R., 5, 24, 319
Digital data acquisition, 2, 32, 419
Discrimination, 8, 464, 473
  based on information theory, 467-470
  likelihood ratio test, 472
  termination criteria, 470-473
Distribution, Bernoulli, 65
  binomial, 65
  bivariate, 39
  chi-squared, 73, 74 (table)
  conditional, 43
  exponential, 73
  F, 76, 77 (table)
  gamma, 72
  marginal, 40
  multivariate, 39
  noninformative prior, 98
  normal, 67, 70 (table), 154, 230
    multivariate, 71, 230, 231
  OLS estimator, 241
  OLS residual sum of squares, 241
  Poisson, 66
  posterior, 97
  prior, 97. See also Information, prior
  probability, 36
  t, 75, 76 (table)
  uniform, 67
  variance, 73
Distribution function, 37
Draper, N. R., 4, 204, 226, 229, 232, 319, 415

Efficiency, 91, 186
Eigenvalue, see Matrix, eigenvalues
Eisenhart, C., 320
Error function, 293
  complementary, 400, 445, 446, 450
Errors, additive, 118, 134, 228
  autoregressive, 191, 192, 229, 303-312, 314, 320-325, 460
    first order, 303
    moving average, 191
    second order, 320-325
    special cases, 305, 324, 325
  constant variance, 134, 228
    violation of, 188-190, 459, 460
  correlated, 190-192, 393, 400, 401, 460
    matrix analysis, 301-325
  cumulative, 303, 305, 306, 408
  measurement, 7, 132
    normal, 230
  moving average, 191, 312-314
  nonconstant variances, 260, 261
  process, 133
  standard assumptions, 134, 228, 229
  uncorrelated, 134, 228
  zero mean, 134, 228
    violation, 185, 186
Estimate, see Estimator; Estimation
Estimation, comparison of nonlinear methods, 371-377
  involving ordinary differential equations, 350-361
  nonlinear, 334-410
  optimal, see Experiments, optimal
  physical and statistical parameters, 315-319
  sequential, 275-289
    matrix inversion lemma, 391-393
    multiresponse, 387-393
    nonlinear, 387-410
  state, 6, 288, 289
  see also Gauss-Markov assumptions; Least squares estimation; Maximum a posteriori estimation; Maximum likelihood estimation
Estimation programs, linear, 493
  nonlinear, 493, 494
Estimator, 84
  properties of, 89-101
  table of, for simple models, 152, 153
  unbiased, 232
  see also Bayes estimation; Least squares estimation; Maximum a posteriori estimation; Maximum likelihood estimation
Event, 32, 33
  disjoint, 33, 34
  independent, 44
Expected value, 51, 55
Expected value matrix, 120, 222
Experiments, 32, 33
  factorial, 252-259
  optimal, 6, 7, 14, 18, 149, 419-463
    attainable region, 435, 436, 437
    constraints, 419, 420, 426, 427, 435-438, 451, 455, 457, 458
    criteria, 422, 432-434, 475-477
    equally-spaced measurements, 421, 422, 440-443, 444-446, 458, 459
    multiresponse cases, 434
    not all parameters of interest, 461-463, 477, 478
    one-parameter cases, 420-432
    operability region, 433, 436, 437
    same number of measurements as parameters, 434-440
    simplex, 435

Factorial design, 253, 255
Factors, 253
  coded, 254
  qualitative, 252
  quantitative, 252
Farnia, K., 415
Fedorov, 419, 432, 472, 474
Filter, 277
  Kalman, 289
Finite differences, 16, 334, 410, 411
Fisher, R. A., 78
F statistic, 76, 77, 176, 181, 242, 300, 301, 383, 386
F test, 242, 243, 244, 263, 387. See also Model building

Gain matrix, 277
Gallant, A. R., 370, 371, 380, 385, 414
Gauss, K. F., 24
Gauss estimator, 341
Gaussian distribution, see Distribution, normal
Gauss-Markov assumptions, 134, 232
Gauss-Markov estimation, 121, 489
  nonlinear, 346, 389
  sequential, 277
Gauss-Markov theorem, 232-234
Gauss method, 340-349
  modifications to, 363-378
Gauss-Newton method, see Gauss method
Ghosh, B. K., 472, 475
Goldfeld, S. M., 242, 319
Grashof number, 329
Graupe, D., 5, 24, 414
Graybill, F. A., 475, 477
Guttman, F., 414

Hald, A., 78
Hammersley, J. M., 129
Handscomb, D. C., 129
Hartley, H. O., 78, 364, 414
Heat transfer, coefficient, 246, 359
  conduction, 227, 263, 352, 400-410
    semi-infinite body, 400, 401, 445-453
  convection, 145, 236-238, 328, 329
  cooling billet, 243-247, 357-361, 397-399, 443, 444
  multiresponse data, 402-404
Heineken, F. G., 457
Henson, T. L., 414, 475
Herbold, R., 494
Hildebrand, F. B., 217, 248, 319
Hill, W. J., 469, 470, 475, 477
Hilsenrath, J., 494
Himmelblau, D. M., 319, 493, 494
Ho, Yu-Chi, 24
Hoerl, A. E., 287, 320
Homoskedasticity, see Errors, constant variance
Hunter, J. S., 4, 414
Hunter, W. G., 4, 232, 386, 415, 419, 435, 438, 439, 474, 475, 477
Hypothesis, null, 112, 177
  simple, 109
  testing, 108-113

Identifiability, 4, 13, 17, 19-23, 146, 228, 481-487
Identification, 5, 8
Ill-conditioned problem, 287, 335, 371, 379, 380, 382, 486, 487
Independence, 44
Independent variables, errorless, 134, 229
  errors in, 192-204
  nonstochastic, 134, 229
Information, for discrimination, 468
  prior, 97, 134, 229, 285
    subjective, 159, 162-165, 269
      estimation with, 272, 273
  theory of, 467-469, 476
Invariant embedding, 371

Jacobian, 220
Jaeger, J. C., 474
Jenkins, G. M., 24
Jenkins, R. J., 475, 480
Jones, A., 376, 414

Kanemasu, H., 363, 364, 369, 370-374, 414
Kennard, R. W., 287, 320
Klein, R. E., 474
Klimko, E. M., 474
Kline, S. J., 59, 77
Kmenta, J., 5, 24
Kreith, F., 24, 204
Kuester, J. L., 493, 494
Kullback, S., 468, 475

Lack of fit, 184. See also Sum of squares, lack of fit
Lagrange multiplier, 194
  method of, 192-194
Lapidus, L., 5, 414, 457, 474
Law of large numbers, 63
Least squares estimation, 2, 4, 10, 23, 120, 135-153, 489
  autoregressive errors, 306-308
  matrix form, 234-248
  ordinary, see Least squares estimation
  sequential, 277
  unbiased, 238
  weighted, 247, 248
Legendre, A. M., 24
Levenberg, K., 362, 368-370, 414
  method of, 368-370
    modified method, 370
Lewis, T. O., 24
Likelihood function, 230
Likelihood ratio tests, 112
Linear estimation, algebraic formulation, 130-204
  matrix formulation, 213-319
Linear model, interaction terms, 255
  matrix form, 225
Log likelihood function, 230
Lucas, H. L., 419, 432, 438, 439, 474, 476

McClintock, F. A., 77
McCormack, D. J., 432, 474
MAP estimates, see Maximum a posteriori estimation
Marquardt, D. W., 287, 320, 362, 370, 371, 414, 493
Marquardt method, 370, 371, 373
Matrices, 213-219
  product of, 214
Matrix, covariance, see Covariance matrix
  diagonal, 216
  eigenvalues, 218, 219, 287, 291, 292, 294-296, 476, 486
  gain, 277
  idempotent, 214, 240
  identity, 216
  inverse, 215-218, 327, 328
  inversion lemma, 277, 326, 327
  negative definite, 219
  negative semidefinite, 219
  nonsingular, 215
  null, 218
  partitioned, 218
    determinant, 218
    inverse, 218
  positive definite, 218, 219
  positive semidefinite, 219
  rectangular, 214
  square, 214
  symmetric, 214
  trace of, 219
  transpose, 215
Matrix calculus, 219-221
Matrix derivative, 219, 220
Maximum a posteriori estimation, 98, 122, 159-167, 208, 271, 333, 489
  matrix form, 269-274
  nonlinear, 346
  random parameters, 159
  sequential, 277, 284
  subjective prior information, 159
Maximum likelihood, covariance matrix of parameters, 259
  estimate of σ², 157, 158
Maximum likelihood estimation, 94, 122, 154-159, 259-269, 489
  autoregressive errors, 308-312
  matrix formulation, 259-269
  nonlinear, 389
  using prior information, 158
  sequential, 277
  sum of squares function, 230
May, D. C., Jr., 78, 204, 415
Mean, 86, 124
Measurements, continuous, 339
  expected value, 151
  multiresponse, 226-228, 231, 232
  predicted value, 151
  repeated, 167-173, 181, 258
  smoothed, 277
  see also Errors
Median, 85, 124, 188
Meeter, D. A., 414
Melsa, J. L., 5, 24
Mendel, J. M., 5, 24, 277, 320
Messina, C. G., 494
Miller, J. R., 493, 494
Minimum expected squared deviation estimation, 93
Minimum variance unbiased estimators, 92, 188, 232
Mize, J. H., 493, 494
Mode, 124
Model, 4, 117
  incorrect, 180, 181
  linear, algebraic, 8, 131, 225-228
    in parameters, 18
    restrictions, 132
  mechanistic, 359
  nonlinear in parameters, 13, 15, 16, 18, 19, 334, 342, 343, 347, 351, 352, 357, 358, 367, 372, 376, 381, 385, 397, 400, 406, 411-413
  probabilistic, 84
  simple linear, 130, 131
Model building, 178, 386, 387. See also Discrimination; F test
Monte Carlo, examples, 125, 317-319, 382, 400, 401
  methods, 125
Moody chart, 211
Muller, M. E., 129
Myers, G. E., 475, 478
Myers, R. H., 24, 319

Nahi, N. E., 432, 474
Newton-Gauss, see Gauss method
Normal density function, see Distribution, normal
Normal equations, 136, 235
Normality, standard assumption, 134, 229
  standard assumption, violation, 186-188
Nusselt number, 236-238

Observation, 32. See also Errors
Odell, P. L., 24
Ordinary least squares, see Least squares estimation
Outcome, 32, 33
Owen, D. B., 77

Parameters, constant, 229
  nonrandom, 134
  random, 134, 208, 229
  vector, 270
Parker, W. J., 475, 480
Parsimony, 4, 247, 257, 263, 361
Partial differential equation of conduction, optimal experiments, 444-459
Pearson, E. S., 78
Perlis, H. J., 432, 474
Peterson, T. I., 414
Polynomials, orthogonal, 248-252
Power, 114
Prandtl number, 236
Predicted values, 136
Prior, see Distribution, prior; Information, prior
Probability, 32, 33
Probabilities, conditional, 43
Property, 2
Pseudorandom numbers, 126

Quadratic form, 221
  expected value, 224
  matrix derivative of, 221
Quandt, R. E., 242, 319
Quasi-linearization, 371

Rabinowicz, E., 10, 24
Randomness, 29
Random numbers, 126, 147
Random variable, 32, 33
  continuous, 36
  discrete, 36
  functions of, 48
Regression analysis, 130, 131
Regression function, 131
Repeated data, see Measurements, repeated
Residuals, 11, 136, 288, 301, 302
  relative, 211
  signatures, 458
  sum of, 145
Reynolds number, 145, 211, 236-238
Rice, J. R., 319
Ridge analysis, 287
Ridge regression estimation, 287, 289
Ruedenberg, K., 364
Runs, experimental, 253
  number of, 303

Sage, A. P., 24
Sample path, 42
Sample space, 32, 33
  continuous, 35
  denumerably infinite, 34
  discrete, 34
  finite, 34
Saridis, G. W., 432, 474
Schechter, R. S., 338, 414, 487
Search, comparison of, 371
  direct, 337
  dynamic programming, 338
  exhaustive, 336, 337
  Fibonacci, 337
  Gauss, see Gauss method
  gradient, see Gauss method
  halving-doubling method, 375
  Hooke-Jeeves, 338
  linearization, see Gauss method
  random, 337
  simplex, 338
  trial and error approach, 15, 335, 336
Seinfeld, J. H., 4, 414, 457, 474, 475
Sensitivity, 14
Sensitivity coefficient, 4, 17, 18, 22, 228, 358, 406, 410-413, 446, 448-450, 453, 455, 481
  finite difference evaluation, 410, 411
  linear dependence, 349
Sensitivity equation, 19, 411-413
Sensitivity matrix, 225, 226, 340
Sequential estimation, multiresponse, 286
Sequential method, advantages, 283, 288, 289
Sequential optimization, 460, 461
Shannon, 476
Significance, level of, 112
Significant linear regression, 184
Smith, H., 24, 204, 226, 319, 415
Smith, K., 432, 474
Smooth values, 136
Splines, 252
Squared error loss estimators, 122
Standard deviation, 56, 137
Standard errors, estimated, 137
State variable, 2
Statistic, 84
Steepest descent, method of, 369
Stegun, I. A., 77
Stochastic approximation, 371
Studden, W. J., 474
Sufficiency, 50
Sufficient statistic, 93
Sum of squares, contours, 347, 348
  error, 173, 175, 178
  lack of fit, 178
  least squares, 10, 14
  maximum a posteriori, 270
  maximum likelihood residual, 267
  minimization for nonlinear models, 334-410
  pure error, 178
  regression, 173, 175
  residuals, 240, 241
  total, 173, 175
Swed, F. S., 320

Taylor series, matrix form, 338
Tiao, G. C., 114, 162, 204
Tsuchiya, H. M., 474

Unbiased estimator, for σ², 139, 141, 241
  matrix form, 263
Unbiasedness, 89
Uncertainty, measure of, 476
Union, 34

Van Fossen, G. J., Jr., 415, 457, 474, 475
Variance, 56, 57
  estimation of, 87
Variance-covariance matrix, see Covariance matrix
Variance error, 181
Variate, continuous, 31
  discrete, 31
Variation, coefficient of, 56
Vector, column, 213

Wald, A., 472, 475
Walsh, P. J., 494
Weighted least squares, sequential estimation, 277
Welty, J. R., 236, 319
Whitting, I. J., 376, 377, 414
Wolberg, J. R., 24
Wood, F. S., 226, 319

Yates, F., 78

Ziegler, G., 494