Show simple item record

dc.contributor.author: Geiss, Stefan
dc.date.accessioned: 2022-04-07T07:07:41Z
dc.date.available: 2022-04-07T07:07:41Z
dc.date.created: 2022-01-20T19:23:56Z
dc.date.issued: 2021
dc.identifier.citation: Computational Communication Research (CCR). 2021, 3 (1), 61-89.
dc.identifier.issn: 2665-9085
dc.identifier.uri: https://hdl.handle.net/11250/2990341
dc.description.abstract: This study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size and coder behavior under uncertainty. The resulting procedure is analogous to power calculations for experimental designs. In the most widespread sample-size/effect-size settings, the rule of thumb that chance-adjusted agreement should be ≥ .800 or ≥ .667 corresponds to the simulation results, yielding acceptable α and β error rates. However, the simulation allows precise power calculations that consider the specifics of each study's context, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power. In studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help both in evaluating and in designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g. when constructs are hard to measure). I supply equations, easy-to-use tables and R functions to facilitate use of this framework, along with example code as an online appendix.
dc.language.iso: eng
dc.publisher: Amsterdam University Press
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Statistical Power in Content Analysis Designs: How Effect Size, Sample Size and Coding Accuracy Jointly Affect Hypothesis Testing – A Monte Carlo Simulation Approach.
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: publishedVersion
dc.source.pagenumber: 61-89
dc.source.volume: 3
dc.source.journal: Computational Communication Research (CCR)
dc.source.issue: 1
dc.identifier.doi: 10.5117/CCR2021.1.003.GEIS
dc.identifier.cristin: 1986772
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
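The abstract describes a Monte Carlo approach: simulate coding data with a known effect size and an imperfect coder, then estimate how often a correlational hypothesis test rejects the null. The sketch below is a minimal toy analogue of that idea in Python, not the R functions the paper supplies; the variable setup (a latent continuous variable binarized by a coder with a given per-item accuracy, tested against an outcome via a Fisher z test) is an assumption made for illustration.

```python
import numpy as np

def simulate_power(n=300, rho=0.2, accuracy=0.90, reps=2000, seed=42):
    """Monte Carlo estimate of the power of a correlation test when one
    variable is a binary code measured with imperfect coder accuracy.

    Toy illustration only; the paper's own simulation design differs.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        # Latent variable x and outcome y with true correlation rho.
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        code = (x > 0).astype(float)        # error-free binary coding of x
        flip = rng.random(n) > accuracy     # items the coder gets wrong
        code[flip] = 1.0 - code[flip]       # misclassification attenuates r
        r = np.corrcoef(code, y)[0, 1]
        z = np.arctanh(r) * np.sqrt(n - 3)  # Fisher z test statistic
        if abs(z) > 1.96:                   # two-sided test at alpha = .05
            hits += 1
    return hits / reps                      # share of significant replications
```

Running this with varying `accuracy` reproduces the abstract's qualitative point: lower coder accuracy attenuates the observed correlation, so small samples or small effects require more accurate coding to keep power acceptable.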


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
Except where otherwise noted, this item is licensed under Attribution 4.0 International