• ToastedPlanet@lemmy.blahaj.zone · 15 days ago

    https://psycnet.apa.org/fulltext/2025-33892-004.html

    We recruited 2,180 participants on Lucid between January 31 and February 17, 2020, about 9 months before the 2020 U.S. presidential election. Lucid, an online aggregator of survey respondents from multiple sources

    https://support.lucidhq.com/s/article/Sample-Sourcing-FAQs

    How are respondent incentives handled in the Marketplace?

    Our respondents are sourced from a variety of supplier types who have control over incentivizing their respondents based on their business rules. Each incentive program is unique. Some suppliers do not incentivize their respondents at all, most provide loyalty reward points or gift cards, and some provide cash payments. Lucid does not control the incentivization models of our suppliers, as that’s part of their individual business models. The method of incentivization also varies; for instance, some suppliers use the survey’s CPI to calculate incentive, others LOI, or a combination of the two. Each respondent agrees to their panel’s specific incentivization method when they join.

    So the study didn’t use a random sample. They took people who volunteered to do promotions and were funneled together through one site. This is how we got that bogus polling data claiming Gen Z was secretly conservative, because all of it came from YouGov.

    The study they cite as justification is really about using Lucid instead of MTurk, not about using Lucid instead of random samples. So they cite another study.

    https://journals.sagepub.com/doi/10.1177/2053168018822174

    We would note, however, that even extremely idiosyncratic convenience samples (e.g. Xbox users; Gelman et al., 2016) can sometimes produce estimates that turn out to have been accurate.

    https://researchdmr.com/MythicalSwingVoterFinal.pdf

    The Xbox panel is not representative of the electorate, with Xbox respondents predominantly young and male. As shown in Figure 2, 66% of Xbox panelists are between 18 and 29 years old, compared to only 18% of respondents in the 2008 exit poll, while men make up 93% of Xbox panelists but only 47% of voters in the exit poll. With a typical-sized sample of 1,000 or so, it would be difficult to correct skews this large, but the scale of the Xbox panel compensates for its many sins. For example, despite the small proportion of women among Xbox panelists, there are over 5,000 women in our sample, which is an order of magnitude more than the number of women in an RDD sample of 1,000.
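    If I’m reading that right, the “scale compensates” argument is just that sampling error shrinks as the subgroup gets bigger. A back-of-the-envelope check (my own sketch, using the rough counts from that quote, not anything from the paper’s actual analysis):

    ```python
    import math

    def moe(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a proportion estimated from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # ~5,000 women in the Xbox panel vs. ~470 women in a 1,000-person RDD sample (47%)
    print(f"Xbox panel women (n=5000): +/- {moe(5000):.1%}")  # ~1.4 points
    print(f"RDD sample women (n=470):  +/- {moe(470):.1%}")   # ~4.5 points
    ```

    Less noise, sure, but that only helps if the women who are on Xbox answer like the women who aren’t, which is the part I’m not sold on.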

    The idea is to partition the population into cells (defined by the cross-classification of various attributes of respondents), use the sample to estimate the mean of a survey variable within each cell, and finally to aggregate the cell-level estimates by weighting each cell by its proportion in the population…MRP addresses this problem by using hierarchical Bayesian regression to obtain stable estimates of cell means (Gelman and Hill, 2006).
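    If I follow that, the weighting step itself is roughly this (a toy sketch with made-up cells and numbers, nothing from the study; the “MRP” part is the hierarchical regression that produces stable cell means when a cell has few respondents):

    ```python
    # Toy poststratification: weight each cell's sample mean by that cell's
    # share of the *population*, not its share of the sample.
    # All numbers are invented for illustration.

    # (age, sex) cell -> (sample mean of the survey variable, population share)
    cells = {
        ("18-29", "male"):   (0.40, 0.10),
        ("18-29", "female"): (0.55, 0.10),
        ("30+",   "male"):   (0.45, 0.38),
        ("30+",   "female"): (0.60, 0.42),
    }

    estimate = sum(mean * pop_share for mean, pop_share in cells.values())
    print(f"Poststratified estimate: {estimate:.3f}")  # 0.518
    ```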

    Maybe this is a lack of stats knowledge on my part, but can a person really math out the responses of demographics the study barely sampled, when its respondents came predominantly from one age, one race, and one sex demographic?

    This seems like using math to make stuff up. It seems like the main driver of online surveys is cutting costs and saving time for researchers. I’m taking this study with a grain of salt for now. Maybe that’s just my bias though. =/

  • ToastedPlanet@lemmy.blahaj.zone · 15 days ago

        That’s fair. Especially when it’s an open secret that at least 30 of 37 polls from conservative organizations were skewed in Trump’s favor to make it look like he’s building momentum.