Applied Statistics for Engineers and Physical Scientists

Hardcover

Author: Johannes Ledolter

ISBN-10: 0136017983

ISBN-13: 9780136017981

Category: Engineering -> Statistical methods

This hugely anticipated revision has held true to its core strengths while bringing the book fully up to date with modern engineering statistics. Written by two leading statisticians, Applied Statistics for Engineers and Physical Scientists, Third Edition, provides the necessary bridge between basic statistical theory and interesting applications. Readers solve the same problems that engineers and scientists face and have the opportunity to analyze real data sets. Larger-scale projects are a unique feature of this book: they let readers analyze and interpret real data, while also encouraging them to conduct their own studies and compare approaches and results. The book assumes a calculus background.

Contents: Collection and Analysis of Information; Probability Models and Discrete Distributions; Continuous Probability Models; Statistical Inference: Sampling Distribution, Confidence Intervals, and Tests of Hypotheses; Statistical Process Control; Experiments with One Factor; Experiments with Two or More Factors; Regression Analysis.

For all readers interested in applied statistics.

Chapter 1: Collection and Analysis of Information
1.1 Introduction
1.1.1 Data Collection
1.1.2 Types of Data
1.1.3 The Study of Variability
1.1.4 Distributions
1.1.5 Importance of Variability (or Lack Thereof) for Quality and Productivity Improvement
1.2 Measurements Collected over Time
1.2.1 Time-Sequence Plots
1.2.2 Control Charts: A Special Case of Time-Sequence Plots
1.3 Data Display and Summary
1.3.1 Summary and Display of Measurement Data
1.3.2 Measures of Location
1.3.3 Measures of Variation
1.3.4 Exploratory Data Analysis: Stem-and-Leaf Displays and Box-and-Whisker Plots
1.3.5 Analysis of Categorical Data
1.4 Comparisons of Samples: The Importance of Stratification
1.4.1 Comparing Two Types of Wires
1.4.2 Comparing Lead Concentrations from Two Different Years
1.4.3 Number of Flaws for Three Different Products
1.4.4 Effects of Wind Direction on the Water Levels of Lake Neusiedl
1.5 Graphical Techniques, Correlation, and an Introduction to Least Squares
1.5.1 The Challenger Disaster
1.5.2 The Sample Correlation Coefficient as a Measure of Association in a Scatter Plot
1.5.3 Introduction to Least Squares
1.6 The Importance of Experimentation
1.6.1 Design of Experiments
1.6.2 Design of Experiments with Several Factors and the Determination of Optimum Conditions
1.7 Available Statistical Computer Software and the Visualization of Data
1.7.1 Computer Software
1.7.2 The Visualization of Data

Chapter 2: Probability Models and Discrete Distributions
2.1 Probability
2.1.1 The Laws of Probability
2.2 Conditional Probability and Independence
2.2.1 Conditional Probability
2.2.2 Independence
2.2.3 Bayes' Theorem
2.3 Random Variables and Expectations
2.3.1 Random Variables and Their Distributions
2.3.2 Expectations of Random Variables
2.4 The Binomial and Related Distributions
2.4.1 Bernoulli Trials
2.4.2 The Binomial Distribution
2.4.3 The Negative Binomial Distribution
2.4.4 The Hypergeometric Distribution
2.5 Poisson Distribution and Poisson Process
2.5.1 The Poisson Distribution
2.5.2 The Poisson Process
2.6 Multivariate Distributions
2.6.1 Joint, Marginal, and Conditional Distributions
2.6.2 Independence and Dependence of Random Variables
2.6.3 Expectations of Functions of Several Random Variables
2.6.4 Means and Variances of Linear Combinations of Random Variables
2.7 The Estimation of Parameters from Random Samples
2.7.1 Maximum Likelihood Estimation
2.7.2 Examples
2.7.3 Properties of Estimators

Chapter 3: Continuous Probability Models
3.1 Continuous Random Variables
3.1.1 Empirical Distributions
3.1.2 Distributions of Continuous Random Variables
3.2 The Normal Distribution
3.3 Other Useful Distributions
3.3.1 Weibull Distribution
3.3.2 Gompertz Distribution
3.3.3 Extreme Value Distribution
3.3.4 Gamma Distribution
3.3.5 Chi-Square Distribution
3.3.6 Lognormal Distribution
3.4 Simulation: Generating Random Variables
3.4.1 Motivation
3.4.2 Generating Discrete Random Variables
3.4.3 Generating Continuous Random Variables
3.5 Distributions of Two or More Continuous Random Variables
3.5.1 Joint, Marginal, and Conditional Distributions, and Mathematical Expectations
3.5.2 Propagation of Errors
3.6 Fitting and Checking Models
3.6.1 Estimation of Parameters
3.6.2 Checking for Normality
3.6.3 Checking Other Models through Quantile-Quantile Plots
3.7 Introduction to Reliability

Chapter 4: Statistical Inference: Sampling Distribution, Confidence Intervals, and Tests of Hypotheses
4.1 Sampling Distributions
4.1.1 Introduction and Motivation
4.1.2 Distribution of the Sample Mean X̄
4.1.3 The Central Limit Theorem
4.1.4 Normal Approximation of the Binomial Distribution
4.2 Confidence Intervals for Means
4.2.1 Determination of the Sample Size
4.2.2 Confidence Intervals for μ1 − μ2
4.3 Inferences from Small Samples and with Unknown Variances
4.3.1 Tolerance Limits
4.3.2 Confidence Intervals for μ1 − μ2
4.4 Other Confidence Intervals
4.4.1 Confidence Intervals for Variances
4.4.2 Confidence Intervals for Proportions
4.5 Tests of Characteristics of a Single Distribution
4.5.1 Introduction
4.5.2 Possible Errors and Operating Characteristic Curves
4.5.3 Tests of Hypotheses When the Sample Size Can Be Selected
4.5.4 Tests of Hypotheses When the Sample Size Is Fixed
4.6 Tests of Characteristics of Two Distributions
4.6.1 Comparing Two Independent Samples
4.6.2 Paired-Sample t-Test
4.6.3 Test of p1 = p2
4.6.4 Test of σ1² = σ2²
4.7 Certain Chi-Square Tests
4.7.1 Testing Hypotheses about Parameters in a Multinomial Distribution
4.7.2 Contingency Tables and Tests of Independence
4.7.3 Goodness-of-Fit Tests

Chapter 5: Statistical Process Control
5.1 Shewhart Control Charts
5.1.1 X̄-Charts and R-Charts
5.1.2 p-Charts and c-Charts
5.1.3 Other Control Charts
5.2 Process Capability Indices
5.2.1 Introduction
5.2.2 Process Capability Indices
5.2.3 Discussion of Process Capability Indices
5.3 Acceptance Sampling
5.4 Problem Solving
5.4.1 Introduction
5.4.2 Pareto Diagram
5.4.3 Diagnosis of Causes and Defects
5.4.4 Six Sigma Initiatives

Chapter 6: Experiments with One Factor
6.1 Completely Randomized One-Factor Experiments
6.1.1 Analysis-of-Variance Table
6.1.2 F-Test for Treatment Effects
6.1.3 Graphical Comparison of k Samples
6.2 Other Inferences in One-Factor Experiments
6.2.1 Reference Distribution for Treatment Averages
6.2.2 Confidence Intervals for a Particular Difference
6.2.3 Tukey's Multiple-Comparison Procedure
6.2.4 Model Checking
6.2.5 The Random-Effects Model
6.2.6 Computer Software
6.3 Randomized Complete Block Designs
6.3.1 Estimation of Parameters and ANOVA
6.3.2 Expected Mean Squares and Tests of Hypotheses
6.3.3 Increased Efficiency by Blocking
6.3.4 Follow-Up Tests
6.3.5 Diagnostic Checking
6.3.6 Computer Software
6.4 Designs with Two Blocking Variables: Latin Squares
6.4.1 Construction and Randomization of Latin Squares
6.4.2 Analysis of Data from a Latin Square

Chapter 7: Experiments with Two or More Factors
7.1 Two-Factor Factorial Designs
7.1.1 Graphics in the Analysis of Two-Factor Experiments
7.1.2 Special Case n = 1
7.1.3 Random Effects
7.1.4 Computer Software
7.2 Nested Factors and Hierarchical Designs
7.3 General Factorial and 2^k Factorial Experiments
7.3.1 2^k Factorial Experiments
7.3.2 Significance of Estimated Effects
7.4 2^(k−p) Fractional Factorial Experiments
7.4.1 Half Fractions of 2^k Factorial Experiments
7.4.2 Higher Fractions of 2^k Factorial Experiments
7.4.3 Computer Software

Chapter 8: Regression Analysis
8.1 The Simple Linear Regression Model
8.1.1 Estimation of Parameters
8.1.2 Residuals and Fitted Values
8.1.3 Sampling Distributions of β0 and β1
8.2 Inferences in the Regression Model
8.2.1 Coefficient of Determination
8.2.2 Analysis-of-Variance Table and F-Test
8.2.3 Confidence Intervals and Tests of Hypotheses for Regression Coefficients
8.3 The Adequacy of the Fitted Model
8.3.1 Residual Checks
8.3.2 Output from Computer Programs
8.3.3 The Importance of Scatter Plots in Regression
8.4 The Multiple Linear Regression Model
8.4.1 Estimation of the Regression Coefficients
8.4.2 Residuals, Fitted Values, and the Sum-of-Squares Decomposition
8.4.3 Inference in the Multiple Linear Regression Model
8.4.4 A Further Example: Formaldehyde Concentrations
8.5 More on Multiple Regression
8.5.1 Multicollinearity among the Explanatory Variables
8.5.2 Another Example of Multiple Regression
8.5.3 A Note on Computer Software
8.5.4 Nonlinear Regression
8.6 Response Surface Methods
8.6.1 The "Change One Variable at a Time" Approach
8.6.2 Method of Steepest Ascent
8.6.3 Designs for Fitting Second-Order Models: The 3^k Factorial and the Central Composite Design
8.6.4 Interpretation of the Second-Order Model
8.6.5 An Illustration