Muniz-Terrera et al., 2017. Latent growth models matched to research questions to answer questions about dynamics of change in multiple processes.

Muniz-Terrera, G., Robitaille, A., Kelly, A., Johansson, B., Hofer, S., & Piccinin, A. (2017). Latent growth models matched to research questions to answer questions about dynamics of change in multiple processes. Journal of Clinical Epidemiology, 82, 158-166.

Year: 
2017
Status: 
complete
Abstract: 

Objectives. Given theoretical and methodological advances that propose hypotheses about change in one or multiple processes, analytical methods for longitudinal data have been developed that provide researchers with various options for analyzing change over time. In this paper, we revisited several latent growth curve models that may be considered to answer questions about repeated measures of continuous variables, which may be operationalized as time-varying covariates or outcomes.

Study Design and Setting. To illustrate each of the models discussed and how to interpret parameter estimates, we present examples of each method discussed using cognitive and blood pressure measures from a longitudinal study of aging, the Origins of Variance in the Old-Old study.

Results and Conclusion. Although statistical models are helpful tools to test theoretical hypotheses about the dynamics between multiple processes, the choice of model and its specification will influence results and conclusions made.

Griffith et al., 2015. Statistical approaches to harmonize data on cognitive measures in systematic reviews are rarely reported

Griffith, L. E., Van Den Heuvel, E., Fortier, I., Sohel, N., Hofer, S. M., Payette, H., Wolfson, C., Belleville, S., Kenny, M., Doiron, D., & Raina, P. (2015). Statistical approaches to harmonize data on cognitive measures in systematic reviews are rarely reported. Journal of Clinical Epidemiology, 68(2), 154-162.

Year: 
2015
Status: 
complete
Abstract: 

Objectives. To identify statistical methods for harmonization, that is, procedures aimed at achieving the comparability of previously collected data, which could be used in the context of summary data and individual participant data meta-analysis of cognitive measures.

Study Design and Setting. Environmental scan methods were used to conduct two reviews to identify (1) studies that quantitatively combined data on cognition and (2) general literature on statistical methods for data harmonization. Search results were rapidly screened to identify articles of relevance.

Results. All 33 meta-analyses combining cognition measures either restricted their analyses to a subset of studies using a common measure or combined standardized effect sizes across studies; none reported their harmonization steps before producing summary effects. In the second scan, three general classes of statistical harmonization models were identified: (1) standardization methods, (2) latent variable models, and (3) multiple imputation models; few publications compared methods.

Conclusion. Although it is an implicit part of conducting a meta-analysis or pooled analysis, the methods used to assess inferential equivalence of complex constructs are rarely reported or discussed. Progress in this area will be supported by guidelines for the conduct and reporting of data harmonization and integration and by evaluating and developing statistical approaches to harmonization.
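Of the three classes of harmonization models the review identifies, standardization is the simplest to sketch. The toy example below (hypothetical data and scale names, not drawn from the review) z-scores each study's cognitive measure within study so the two studies can be pooled on a common, unitless metric:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical studies measuring cognition on different instruments
study_a = rng.normal(27, 2, 300)  # e.g., an MMSE-like 0-30 score
study_b = rng.normal(24, 3, 300)  # e.g., a harder MoCA-like 0-30 score

# Standardization harmonization: z-score within each study
z_a = (study_a - study_a.mean()) / study_a.std()
z_b = (study_b - study_b.mean()) / study_b.std()

# Pooled scores now share a mean of 0 and an SD of 1
pooled = np.concatenate([z_a, z_b])
print(round(pooled.mean(), 6), round(pooled.std(), 2))
```

As the review notes, this approach assumes the instruments tap the same construct equivalently; latent variable and multiple imputation approaches relax that assumption at the cost of stronger modeling requirements.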

Weuve et al., 2015. Guidelines for reporting methodological challenges and evaluating potential bias in dementia research

Weuve, J., Proust-Lima, C., Power, M. C., Gross, A. L., Hofer, S. M., Thiébaut, R., Chêne, G., Glymour, M. M., Dufouil, C., & MELODEM Initiative. (2015). Guidelines for reporting methodological challenges and evaluating potential bias in dementia research. Alzheimer's & Dementia, 11(9), 1098-1109.

Year: 
2015
Status: 
complete
Abstract: 

Clinical and population research on dementia and related neurologic conditions, including Alzheimer's disease, faces several unique methodological challenges. Progress to identify preventive and therapeutic strategies rests on valid and rigorous analytic approaches, but the research literature reflects little consensus on “best practices.” We present findings from a large scientific working group on research methods for clinical and population studies of dementia, which identified five categories of methodological challenges as follows: (1) attrition/sample selection, including selective survival; (2) measurement, including uncertainty in diagnostic criteria, measurement error in neuropsychological assessments, and practice or retest effects; (3) specification of longitudinal models when participants are followed for months, years, or even decades; (4) time-varying measurements; and (5) high-dimensional data. We explain why each challenge is important in dementia research and how it could compromise the translation of research findings into effective prevention or care strategies. We advance a checklist of potential sources of bias that should be routinely addressed when reporting dementia research.

Rush & Hofer, 2014. Differences in within- and between-person factor structure of positive and negative affect: Analysis of two intensive measurement studies using multilevel structural equation modeling

Rush, J., & Hofer, S. M. (2014). Differences in within- and between-person factor structure of positive and negative affect: Analysis of two intensive measurement studies using multilevel structural equation modeling. Psychological Assessment, 26(2), 462.

Year: 
2014
Status: 
complete
Abstract: 

The Positive and Negative Affect Schedule (PANAS) is a widely used measure of emotional experience. The factor structure of the PANAS has been examined predominantly with cross-sectional designs, which fail to disaggregate within-person variation from between-person differences. There is still uncertainty as to the factor structure of positive and negative affect and whether they constitute 2 distinct independent factors. The present study examined the within-person and between-person factor structure of the PANAS in 2 independent samples that reported daily affect over 7 and 14 occasions, respectively. Results from multilevel confirmatory factor analyses revealed that a 2-factor structure at both the within-person and between-person levels, with correlated specific factors for overlapping items, provided good model fit. The best-fitting solution was one in which within-person factors of positive and negative affect were inversely correlated, but between-person factors were independent. The structure was further validated through multilevel structural equation modeling examining the effects of cognitive interference, daily stress, physical symptoms, and physical activity on positive and negative affect factors.
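The within-/between-person disaggregation at the heart of this abstract can be demonstrated with simulated daily diary data (all parameter values below are invented): person-centering the daily scores isolates the within-person correlation, while the person means carry the between-person association, and the two need not agree:

```python
import numpy as np

rng = np.random.default_rng(2)
n, days = 200, 14

# Between-person trait levels: independent positive and negative affect
pa_trait = rng.normal(3.5, 0.5, n)
na_trait = rng.normal(1.8, 0.4, n)

# Within-person daily fluctuations: inversely correlated (r = -.60)
cov = np.array([[0.25, -0.15], [-0.15, 0.25]])
dev = rng.multivariate_normal([0.0, 0.0], cov, (n, days))
pa = pa_trait[:, None] + dev[:, :, 0]
na = na_trait[:, None] + dev[:, :, 1]

# Between-person correlation: person means across days
r_between = np.corrcoef(pa.mean(axis=1), na.mean(axis=1))[0, 1]

# Within-person correlation: person-centered daily scores, pooled
pa_c = (pa - pa.mean(axis=1, keepdims=True)).ravel()
na_c = (na - na.mean(axis=1, keepdims=True)).ravel()
r_within = np.corrcoef(pa_c, na_c)[0, 1]
print(round(r_within, 2), round(r_between, 2))
```

A cross-sectional analysis of a single day would blend these two correlations, which is exactly why the multilevel factor models in the paper separate the levels before comparing structures.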


Hofer, 2013. On the robustness of results from longitudinal observational studies: Integrative data analysis and designs for optimizing detection of within-person change.

Year: 
2013
Status: 
complete
Presentation Citations: 

Hofer, S. M. (2013, October). On the robustness of results from longitudinal observational studies: Integrative data analysis and designs for optimizing detection of within-person change. Paper presented at the annual meeting of the Society of Multivariate Experimental Psychology, St. Pete Beach, FL.

Hofer et al., 2009. Patterns of between-person age differences and within-person changes in cognitive capabilities with age

Year: 
2009
Status: 
complete
Presentation Citations: 

Hofer, S. M., Piccinin, A. M., Bontempo, D. E., Sparks, C., & Hoffman, L. (2009, November). Integrative Analysis of Longitudinal Studies of Aging (IALSA): Patterns of between-person age differences and within-person changes in cognitive capabilities with age. In S. M. Hofer (Chair), Coordinated and pooled data analyses of longitudinal studies of aging: Aging and dementia-related change in cognition, affect, and physical functioning. Paper symposium conducted at the annual meeting of the Gerontological Society of America, Atlanta.

Hofer et al., 2010. Too stressed to think? An undergraduate course on multivariate design, data collection and analysis.

Year: 
2010
Status: 
complete
Presentation Citations: 

Hofer, S. M., Sparks, C. A., & MacDonald, S. W. S. (2010, October). Too stressed to think? An undergraduate course on multivariate design, data collection and analysis. Paper presented at the annual meeting of the Society of Multivariate Experimental Psychology, Atlanta.

Rast & Hofer, 2014. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

Rast, P., & Hofer, S. M. (2014). Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies. Psychological Methods, 19(1), 133.

Year: 
2012
Status: 
complete
Presentation Citations: 

Hofer, S. M., & Rast, P. (2012, October). Substantial power to detect variance and covariance among rates of change: Results based on actual longitudinal studies and related simulations. Paper presented at the annual meeting of the Society of Multivariate Experimental Psychology, Vancouver, BC.

Abstract: 

We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and GRR, and to parameter values that fall largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies.
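Growth rate reliability (GRR) is central to the argument above. A minimal sketch of one common formulation (true slope variance relative to total slope-estimate variance, with the error contribution scaled by the spread of measurement occasions; the numeric values here are invented, not taken from the paper) shows why longer designs with more waves raise GRR:

```python
import numpy as np

def growth_rate_reliability(slope_var, error_var, occasions):
    """GRR: ratio of true slope variance to total slope-estimate variance.

    The error contribution shrinks with the squared spread of the
    measurement occasions, so adding waves or lengthening the study
    raises GRR for the same variance components.
    """
    t = np.asarray(occasions, dtype=float)
    sst = np.sum((t - t.mean()) ** 2)  # spread of occasions around their mean
    return slope_var / (slope_var + error_var / sst)

# Same variances, two designs: 3 annual waves vs. 5 biennial waves
print(round(growth_rate_reliability(0.25, 4.0, [0, 1, 2]), 3))        # -> 0.111
print(round(growth_rate_reliability(0.25, 4.0, [0, 2, 4, 6, 8]), 3))  # -> 0.714
```

Under this formulation, stretching the same variance components over a longer, denser schedule raises GRR from about .11 to about .71, which illustrates the mechanism behind the rapidly decreasing required sample sizes reported in the abstract.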

Hofer & Piccinin, 2010. Toward an integrative science of lifespan development and aging.

Hofer, S. M., & Piccinin, A. M. (2010). Toward an integrative science of lifespan development and aging. Journals of Gerontology: Psychological Sciences, 65B(3), 269-278.

Year: 
2010
Status: 
complete
Abstract: 

The study of aging demands an integrative life-span developmental framework, involving interdisciplinary collaborations and multiple methodological approaches for understanding how and why individuals change, in both normative and idiosyncratic ways. We highlight and summarize some of the issues encountered when conducting integrative research for understanding aging-related change, including the integration of results across different levels of analysis; the integration of theory, design, and analysis; and the synthesis of results across studies of aging. We emphasize the necessity of longitudinal designs for understanding development and aging and discuss methodological issues that should be considered for achieving reproducible research on within-person processes. It will be important that current and future studies permit opportunities for quantitative comparison across populations, given the extent to which historical shifts and cultural differences influence life-span processes and aging-related outcomes.

Hoffman et al., 2011. On the confounds among retest gains and age-cohort differences in the estimation of within-person change in longitudinal studies: A simulation study

Hoffman, L., Hofer, S. M., & Sliwinski, M. J. (2011). On the confounds among retest gains and age-cohort differences in the estimation of within-person change in longitudinal studies: A simulation study. Psychology and Aging, 26(4), 778-791.

Year: 
2011
Status: 
complete
Abstract: 

Although longitudinal designs are the only way in which age changes can be directly observed, a recurrent criticism concerns the extent to which retest effects may downwardly bias estimates of true age-related cognitive change. Considerable attention has been given to the problem of retest effects within mixed effects models that include separate parameters for longitudinal change over time (usually specified as a function of age) and for the impact of retest (specified as a function of number of exposures). Because time (i.e., intervals between assessments) and number of exposures are highly correlated in most longitudinal studies (and are perfectly correlated in equal-interval designs), the separation of effects of within-person change from effects of retest gains is only possible given certain assumptions (e.g., age convergence). To the extent that cross-sectional and longitudinal effects of age differ, obtained estimates of aging and retest may not be informative. The current simulation study investigated the recovery of within-person change (i.e., aging) and retest effects from repeated cognitive testing as a function of number of waves, age range at baseline, and size and direction of age-cohort differences on the intercept and age slope in age-based models of change. Significant bias and Type I error rates in the estimated effects of retest were observed when these convergence assumptions were not met. These simulation results suggest that retest effects may not be distinguishable from effects of aging-related change and age-cohort differences in typical long-term traditional longitudinal designs.
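The perfect confound in equal-interval designs can be seen directly in the design matrix: when the time column and the exposure-count column are identical, the matrix is rank deficient, so aging and retest effects cannot be separately estimated without additional assumptions. A minimal sketch (hypothetical assessment schedules, not the paper's simulation conditions):

```python
import numpy as np

# Equal-interval design: 5 annual waves
time = np.array([0, 1, 2, 3, 4], dtype=float)       # years since baseline
exposures = np.array([0, 1, 2, 3, 4], dtype=float)  # prior test exposures

# Design matrix: intercept + aging slope + retest effect
X = np.column_stack([np.ones_like(time), time, exposures])
print(np.linalg.matrix_rank(X))  # 2, not 3: the two effects are confounded

# Unequal intervals break the perfect confound (in principle)
time2 = np.array([0.0, 0.5, 1.0, 2.5, 4.0])
X2 = np.column_stack([np.ones_like(time2), time2, exposures])
print(np.linalg.matrix_rank(X2))  # 3: effects are formally separable
```

Even with unequal intervals, the abstract's simulation results caution that separation rests on convergence assumptions, so formal identifiability does not guarantee unbiased estimates.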
