University Library Catalogue

PRINTED BOOKS
Author Nolan, Susan A.

Title Statistics for the behavioral sciences / Susan A. Nolan, Thomas E. Heinzen.

Published New York : Worth, [2008]
©2008

Copies

Location             Call No.          Status
UniM Giblin Eunson   300.15195 NOLA    AVAILABLE
Physical description 1 volume (various pagings) : illustrations (some colour) ; 27 cm
Bibliography Includes bibliographical references (pages R-1 - R-10) and index.
Contents Chapter 1 An Introduction to Statistics and Research Design: The Elements of Statistical Reasoning -- Two Branches of Statistics: Growing Our Knowledge about Human Behavior -- Descriptive Statistics: Organizing, Summarizing, and Communicating Numerical Information -- Inferential Statistics: Using Samples to Draw Conclusions about a Population -- Distinguishing Between a Sample and a Population -- Variables: Transforming Observations into Numbers -- Independent and Dependent Variables: The Main Ingredients of Statistical Thinking -- Putting Variables to Work: Independent, Dependent, and Confounding Variables -- Developing and Assessing Variables: The Reliability and Validity of Tests -- An Introduction to Hypothesis Testing: From Hunch to Hypothesis -- Types of Research Designs: Experiments, Non-Experiments, and Quasi-Experiments -- Experiments and Causality: Control the Confounding Variables -- Research Designs Other than Experiments: Non-Experiments and Quasi-Experiments -- One Goal, Two Strategies: Between-subjects Designs vs. Within-subjects Designs -- Curiosity, Joy, and the Art of Research Design -- Digging Deeper Into the Data: Variations on Standard Research Designs -- Outlier Analyses: Does the Exception Prove the Rule? -- Archival Studies: When the Data Already Exist -- Chapter 2 Descriptive Statistics: Organizing, Summarizing, and Graphing Individual Variables -- Organizing Our Data: A First Step in Identifying Patterns -- Distributions: Four Different Ways to Describe Just One Variable -- Applying Visual Depictions of Data: Generating Research Questions -- Central Tendency: Determining the Typical Score -- Need for Alternative Measures of Central Tendency: Bipolar Disorder -- Mean: The Arithmetic Average -- Median: The Middle Score -- Mode: The Most Common Score -- Effect of Outliers on Measures of Central Tendency -- An Early Lesson in Lying With Statistics: Which Central Tendency is "Best?" -- Measures of Variability: Everyone Can't Be "Typical" -- Range: From the Lowest to the Highest Score -- Variance: The First Step in Calculating Standard Deviation -- Standard Deviation: Variation from the Mean -- Shapes of Distributions: Applying the Tools of Descriptive Statistics -- Normal Distributions: The Silent Power Behind Statistics -- Skewed Distributions: When Our Data Are Not Symmetrical -- Bimodal and Multimodal Distributions: Identifying Distinctive Populations -- Kurtosis and Distributions: Tall and Skinny Versus Short and Wide -- Digging Deeper into the Data: Alternate Approaches to Descriptive Statistics -- Interquartile Range: An Alternative to the Range -- Statistics that Don't Focus on the Mean: Letting the Distribution Guide our Choice of Statistics -- Chapter 3 Visual Displays of Data: Graphs That Tell a Story -- Uses of Graphs: Clarifying Danger, Exposing Lies, and Gaining Insight -- Graphing in the Information Age: A Critical Skill -- "The Most Misleading Graph Ever Published": The Cost and Quality of Higher Education -- "The Best Statistical Graph Ever Created": Napoleon's Disastrous March to Moscow -- Common Types of Graphs: A Graph Designer's Building Blocks -- Scatterplots: Observing Every Data Point -- Line Graphs: Searching for Trends -- Bar Graphs: An Efficient Communicator -- Pictorial Graphs: Choosing Clarity over Cleverness -- Pie Charts: Are Pie Charts Passé?
-- How to Build a Graph: Dos and Don'ts -- APA Style: Graphing Guidelines for Psychologists -- Choosing the Type of Graph: Understanding Our Variables -- Limitations of Graphic Software: Who is Responsible for the Visual Display? -- Creating the Perfect Graph: General Guidelines -- Graphing Literacy: Learning to Lie Versus Creating Knowledge -- Lying with Statistics and Graphs: Eleven Sophisticated Techniques -- Future of Graphs: Breaking the Fourth Wall -- Uses and Misuses of Statistics: It's Not Just What You Draw, It's How You Draw It -- Digging Deeper into the Data: The Box Plot -- Chapter 4 Probabilities and Research: The Risks and Rewards of Scientific Sampling -- Samples and Their Populations: Why Statisticians Are Stingy! -- Decision Making: The Risks and Rewards of Sampling -- Random Sampling: An Equal Chance of Being Selected -- Variations on Random Sampling: Cluster Sampling and Stratified Sampling -- Convenience Sampling: Readily Available Participants -- Random Assignment: An Equal Chance of Being Assigned to a Condition -- Variations on Random Assignment: Block Design and Replication -- Sampling in the Behavioral Sciences: Why Sampling is Both an Art and a Science -- Neither Random Selection, Nor Random Assignment: A Study of Torture -- Random Assignment, But Not Random Selection: A Study of Expert Testimony -- Random Selection, But Not Random Assignment: A Study of Children's Literature -- Probability Theory: Distinguishing Between Mere Coincidence and Real Connections -- Coincidence and Probability: Why Healthy Skepticism Is Healthy -- Beyond Confirmation Biases: The Dangers of Groupthink -- Probability Theory: The Basics -- Expected Relative-Frequency Probability: The Probability of Statistics -- Independence and Probability: The Gambler's Fallacy -- Statistician Sleuths: The Case of Chicago's Cheating Teachers -- Statistics and Probability: The Logic of Inferential Statistics -- Dead Grandmothers: Using Probability to Make Decisions -- Consideration of Future Consequences: Developing Hypotheses -- Consideration of Future Consequences: Making a Decision about Our Hypotheses -- Type I and Type II Errors: Statistical Inferences Can Be Wrong -- Type I Errors: Sins of Commission -- Type II Errors: Sins of Omission -- Statistics in Everyday Life: Tying It All Together -- Case of Lush: Testimonial to a Moisturizer -- Understanding the Meaning of Proof: Statistical Literacy in Consumer Research -- Digging Deeper into the Data: The Shocking Prevalence of Type I Errors -- Estimating Type I Error in the Medical Literature -- Medical Findings and Our Own Confirmation Biases -- Chapter 5 Correlation: Quantifying the Relation between Two Variables -- Correlation: Assessing Associations between Variables -- Need for Standardization: Putting Two Different Variables on the Same Scale -- Z Score: Transforming Raw Scores into Standardized Scores -- Pearson Correlation Coefficient: Quantifying a Linear Association -- Everyday Correlation Reasoning: Asking Better Questions -- Calculation of the Pearson Correlation Coefficient: Harnessing the Power of z Scores -- Misleading Correlations: Considering the Stories behind the Numbers -- Correlation is Not Causation: Invisible Third Variables -- A Restricted Range: When the Values of One Variable Are Limited -- Effect of an Outlier: The Influence of a Single Data Point -- Reliability and Validity: A Correlation Coefficient Is Only as Good as Our Data -- Reliability and Validity: Correlation in Test Construction -- Correlation, Psychometrics, and a 
Super-Heated Job Market: Creating the Measures behind the Research -- Reliability: Using Correlation to Create a Consistent Test -- Validity: Using Correlation to Determine Whether We Are Measuring What We Intend to Measure -- Digging Deeper into the Data: Partial Correlation -- Chapter 6 Regression: Tools for Predicting Behavior -- Regression: Building on Correlation -- Difference between Regression and Correlation: Prediction Versus Relation -- Linear Regression: Calculating the Equation for a Line using z Scores Only -- Reversing the Formula: Transforming z Scores to Raw Scores -- Linear Regression: Calculating the Equation for a Line by Converting Raw Scores to z Scores -- Linear Regression: Calculating the Equation for a Line with Raw Scores -- Drawing Conclusions from a Regression Equation: Interpretation and Prediction -- Regression: Now Think Again (Realistically)! -- What Correlation Can Teach Us about Regression: Correlation Still Isn't Causation -- Regression to the Mean: The Patterns of Extreme Scores -- Effect Size for Regression: Proportionate Reduction in Error -- Multiple Regression: Predicting from More than One Variable -- Multiple Regression: Understanding the Equation -- Stepwise Multiple Regression and Hierarchical Multiple Regression: A Choice of Tactics -- Digging Deeper Into the Data: Structural Equation Modeling (SEM) -- Chapter 7 Power of Standardization: From Description to Inference -- Normal Curve:
It's Everywhere! -- Standardization, z Scores, and the Normal Curve: Discovering Reason behind the Randomness -- Standardization: Comparing z Scores -- Putting z Scores to Work: Transforming z Scores to Percentiles -- Central Limit Theorem: How Sampling Creates a Less Variable Distribution -- Creating a Distribution of Means: Understanding Why It Works -- Characteristics of the Distribution of Means: Understanding Why It's So Powerful --
How to Take Advantage of the Central Limit Theorem: Beginning With z Scores -- Creating Comparisons: Applying z Scores to a Distribution of Means -- Estimating Population Parameters from Sample Statistics: Connecting Back -- Digging Deeper into the Data: The History of the Normal Curve -- Chapter 8 Hypothesis Testing With z Tests: Making Fair Comparisons -- Versatile z Table: Raw Scores, z Scores, and Percentages -- From z Scores to Percentages: The Benefits of Standardization -- From Percentages to z Scores: The Benefits of Sketching the Normal Curve -- Z Table and Distributions of Means: The Benefits of Unbiased Comparisons -- Hypothesis Tests: An Introduction -- Assumptions: The Requirements to Conduct Analyses -- Six Steps of Hypothesis Testing -- Hypothesis Tests: The Single Sample z Test -- Z Test: When We Know the Population Mean and the Standard Deviation -- Z Test: The Six Steps of Hypothesis Testing -- Effect of Sample Size: A Means to Increase the Test Statistic -- Increasing Our Test Statistic through Sample Size: A Demonstration -- Effect of Increasing Sample Size: What's Going On -- Digging Deeper into the Data: What to Do with Dirty Data -- Chapter 9 Hypothesis Testing with t Tests: Making Fair Comparisons between Two Groups -- T Distributions: Distributions of Means When the Parameters Are Not Known -- Using a t Distribution: Estimating a Population Standard Deviation from a Sample -- Calculating a t Statistic for the Mean of a Sample: Using the Standard Error -- When t and z Are Equal: Very Large Sample Sizes -- T Distributions: Distributions of Differences Between Means -- Hypothesis Tests: The Single Sample t Test -- Single Sample t Test: When We Know the Population Mean, But Not the Standard Deviation -- T Table: Understanding Degrees of Freedom -- T Test: The Six Steps of Hypothesis Testing -- Hypothesis Tests: Tests for Two Samples -- Paired Samples t Test: Two Sample Means and a Within-Groups Design -- Independent Samples t Test: Two Sample Means and a Between-Groups Design -- Digging Deeper into the Data: Exploring Two Group Comparisons -- Difference Scores: Are All Differences Created Equal? -- Graphing Two Samples: Visualizing Two Sets of Scores -- Chapter 10 Hypothesis Testing Using One-Way ANOVA: Comparing Three or More Groups -- When to Use the F Distribution: Working With More than Two Samples -- A Mnemonic for When to Use a t Distribution or the F Distribution: 't' for Two -- F Distribution: Analyzing Variability to Compare Means -- Relation of F to t (and z): F as a Squared t for Two Groups and Large Samples -- Analysis of Variance (ANOVA): Beyond t Tests -- Problem of Too Many t Tests: Fishing for a Finding -- Assumptions for ANOVA: Naming the Ideal Conditions for the Perfect Study -- One-Way Between-Groups ANOVA: Applying the Six Steps of Hypothesis Testing -- Everything ANOVA but the Calculations: The Six Steps of Hypothesis Testing -- F Statistic: Logic and Calculations -- Bringing It All Together: What Is the ANOVA Telling Us to Do About the Null Hypothesis?
-- Why the ANOVA is Not Sufficient: Post-Hoc Tests -- Digging Deeper into the Data: Post-Hoc Tests to Determine Which Groups Are Different -- Planned and A Priori Comparisons: When Comparisons between Pairs Are Guided by Theory -- Tukey HSD: An Honest Approach -- Bonferroni Test: A More Stringent Post-Hoc Test -- Chapter 11 Two-Way ANOVA: Understanding Interactions -- Two-Way ANOVA: When the Outcome Depends on More Than One Variable -- Why Use a Two-Way ANOVA: The Practicalities and Aesthetics -- More Specific Vocabulary of Two-Way ANOVAs: Name That ANOVA Part II -- Two Main Effects and an Interaction: Three F Statistics and Their Stories -- Layers of ANOVA: Understanding Interactions -- Interactions and Public Policy: Using Two-Factor ANOVA to Improve Planning -- Interpreting Interactions: Understanding Complexity -- Visual Representations of Main Effects and Interactions: Bar Graphs -- Expanded Source Table: Conducting a Two-Way Between Subjects ANOVA -- Two-Way ANOVA: The Six Steps of Hypothesis Testing -- Two-Way ANOVA: Identifying Four Sources of Variability -- Interactions: A More Precise Interpretation -- Interpreting Interactions: Towards a More Precise Statistical Understanding -- Residuals: Separating the Interaction from the Main Effect -- Digging Deeper into the Data: More Sophisticated Versions of ANOVA -- Within-Groups and Mixed Designs: When the Same Participants Experience More than One Condition -- MANOVA, ANCOVA, and MANCOVA: Multiple Dependent Variables and Covariates -- Chapter 12 Beyond Hypothesis Testing: Confidence Intervals, Effect Size, and Power -- Beyond Hypothesis Testing: Reducing Misinterpretations -- Men, Women, and Math: An Accurate Understanding of Gender Differences -- Beyond Hypothesis Testing: Enhancing Our Samples' Stories -- Confidence Intervals: An Alternative to Hypothesis Testing -- Interval Estimation: A Range of Plausible Means -- z Distributions: Calculating Confidence Intervals -- t Distributions: Calculating Confidence Intervals -- Effect Size: Just How Big is the Difference? -- Misunderstandings from Hypothesis Testing: When "Significant" Isn't Very Significant -- What Effect Size Is: Standardization across Studies -- Cohen's d: The Effect Size for a z Test or a t Test -- R2: The Effect Size for ANOVA -- Statistical Power and Sensitivity: Correctly Rejecting the Null Hypothesis -- Calculation of Statistical Power: How Sensitive is a z Test? -- Beyond Sample Size: Other Factors that Affect Statistical Power -- Digging Deeper into the Data: Meta-Analysis -- Meta-Analysis: A Study of Studies -- Steps to Conduct a Meta-Analysis -- File Drawer Statistic: Where Are All the Null Results? 
-- Chapter 13 Chi Square: Quantifying the Difference between Expectations and Observations -- Non-Parametric Statistics: When We're Not Even Close to Meeting the Assumptions -- Non-Parametric Tests: Using the Right Statistical Tool for the Right Statistical Job -- Non-Parametric Tests: When to Use Them -- Non-Parametric Tests: Why to Avoid Them Whenever Possible -- Chi-square Test for Goodness-of-Fit: When We Have One Nominal Variable -- Chi-Square Test for Goodness-of-Fit: The Six Steps of Hypothesis Testing -- A More Typical Chi-Square Test for Goodness-of-Fit: Evenly Divided Expected Frequencies -- Chi-square Test for Independence: When We Have Two Nominal Variables -- Chi-Square Test for Independence: The Six Steps of Hypothesis Testing -- Cramer's Phi: The Effect Size for Chi square -- Graphing Chi-square Percentages: Depicting the Relation Visually -- Relative Risk: How Much Higher Are the Chances of an Outcome? -- Digging Deeper into the Data: A Deeper Understanding of Chi square -- Standardized Residuals: A Post-Hoc Test for Chi square -- Chi-Square Controversies: Expectations about Expected Frequencies -- Chapter 14 Beyond Chi Square: Commonly Used Non-parametric Tests with Ordinal Data -- Non-Parametric Statistics: When the Data Are Ordinal -- Hypothesis Tests with Ordinal Data: A Non-parametric Equivalent for Every Parametric Test -- Examining the Data: Deciding to Use a Non-parametric Test for Ordinal Data -- Spearman Rank Order Correlation Coefficient: Quantifying the Association between Two Ordinal Variables -- Calculating Spearman's Correlation: Converting Interval Observations to Rank-Ordered Observations -- Eye-Balling the Data: Using Your Scientific Common Sense -- Non-parametric Hypothesis Tests: Comparing Groups Using Ranks -- Wilcoxon Signed-Rank Test for Matched Pairs: A Non-parametric Test for Within-Subjects Designs -- Mann-Whitney U Test: Comparing Two Independent Groups Using Ordinal Data -- Kruskal-Wallis H Test: Comparing the Mean Ranks of Several Groups -- Digging Deeper into the Data: Transforming Skewed Data, the Meaning of Interval Data, and Bootstrapping -- Coping with Skew: Data Transformations -- Controversies in Non-Parametric Hypothesis Tests: What Really Is an Interval Variable? -- Bootstrapping: When the Data Do the Work Themselves -- Chapter 15.
Choosing a Statistical Test and Reporting the Results: The Process of Statistics -- Before You Even Begin: Choosing the Right Statistical Test -- Planning Your Statistics First: How to Avoid That Post-Data Collection Regret -- Beyond the Statistical Plan: Tips for a Successful Study -- Guidelines for Reporting Statistics: The Common Language of Research -- Choosing the Right Statistical Test: Questions to Ask Yourself -- Choosing the Right Statistical Test: Questions to Ask About the Data -- Reporting the Statistics: The Results Section of an APA-Style Paper -- Telling Your Story: What to Include in a Results Section -- Defending Your Study: Convincing the Reader that the Results are Worth Reading -- "Traditional" Statistics: The Longstanding Way of Reporting Results --
Statistics Strongly Encouraged by APA: Essential Additions to the "Traditional" Statistics -- What Not to Include in a Results Section: Keeping the Story Focused -- Two Excerpts from Results Sections: Understanding the Statistical Story -- Unfamiliar Statistics: How to Approach Any Results Section with Confidence -- Digging Deeper into the Data: Reporting More Sophisticated Statistical Analyses.
Other author Heinzen, Thomas E.
Subject Social sciences -- Statistical methods.
ISBN 9780716750079 (hbk.) £39.99
0716750074 (hbk.) £39.99