Description of the Survey
The Survey of Household Economics and Decisionmaking was fielded from October 29 through November 22, 2021. This was the ninth year of the survey, conducted annually in the fourth quarter of each year since 2013.67 Staff of the Federal Reserve Board wrote the survey questions in consultation with other Federal Reserve System staff, outside academics, and professional survey experts.
Ipsos, a private consumer research firm, administered the survey using its KnowledgePanel, a nationally representative probability-based online panel. Since 2009, Ipsos has selected respondents for KnowledgePanel based on address-based sampling (ABS). SHED respondents were then selected from this panel.
Survey Participation
Participation in the 2021 SHED depended on three separate decisions made by respondents. First, they agreed to participate in Ipsos' KnowledgePanel; according to Ipsos, 10.1 percent of individuals contacted to join KnowledgePanel agreed to do so (the study-specific recruitment rate). Next, they completed an initial demographic profile survey; among those who agreed to join the panel, 61.3 percent completed this profile survey and became panel members (the study-specific profile rate). Finally, selected panel members agreed to complete the 2021 SHED.
Of the 18,322 panel members contacted to take the 2021 SHED, 11,965 participated and completed the survey, yielding a final-stage completion rate of 65.3 percent.68 Taking all the stages of recruitment together, the cumulative response rate was 4.0 percent. After removing a small number of respondents who left many questions unanswered or completed the survey too quickly, the final sample used in the report included 11,874 respondents.69
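The 4.0 percent figure follows directly from multiplying the three stage-level rates reported above. A minimal check of that arithmetic:

```python
# Cumulative response rate as the product of the three recruitment stages,
# using the rates reported in the text above.
recruitment_rate = 0.101  # agreed to join KnowledgePanel
profile_rate = 0.613      # completed the initial profile survey
completion_rate = 0.653   # completed the 2021 SHED

cumulative_rate = recruitment_rate * profile_rate * completion_rate
print(f"Cumulative response rate: {cumulative_rate:.1%}")  # about 4.0 percent
```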
Targeted Outreach and Incentives
To increase survey participation and completion among hard-to-reach demographic groups, Board staff and Ipsos used a targeted communication plan with monetary incentives. The target groups were young adults ages 18 to 29; adults with less than a high school degree; adults under age 60 with household income under $50,000; and adults whose race or ethnicity is other than White, non-Hispanic. These groups received additional email reminders and text messages during the field period, as well as additional monetary incentives.
Survey respondents not in a target group received a $5 incentive payment after completing the survey, while respondents in the target groups received a $15 incentive. Targeted individuals also received additional follow-up emails during the field period to encourage completion, and for some of them the incentive was raised to $25 late in the field period to further encourage completion.70
Survey Questionnaire
The 2021 survey took respondents 21.6 minutes (median time) to complete.
A priority in designing the survey questions was to understand how individuals and families, particularly those with low to moderate income, fared financially in 2021. The questions were intended to complement and augment the base of knowledge from other data sources, including the Board's Survey of Consumer Finances. In addition, some questions from other surveys were included to allow direct comparisons across datasets.71 The full survey questionnaire can be found in appendix A of the appendixes to this report.
Survey Mode
While the sample was drawn using probability-based sampling methods, the SHED was administered to respondents entirely online. Online interviews are less costly than telephone or in-person interviews and can be an effective way to interview a representative population.72 Ipsos' online panel offers some additional benefits. It allows the same respondents to be re-interviewed in subsequent surveys with relative ease, since panel members remain reachable for several years.
Furthermore, internet panel surveys have numerous existing data points on respondents from previously administered surveys, including detailed demographic and economic information. This allows additional information on respondents to be included without increasing respondent burden.73 Respondent burden is further reduced by automatically skipping questions that are irrelevant given responses to earlier questions.
The "digital divide" and other differences in internet usage could bias participation in online surveys, so recruited panel members who did not have a computer or internet access were provided with a laptop and access to the internet to complete the surveys. Even so, individuals who complete an online survey may have greater comfort or familiarity with the internet and technology than the overall adult population, which has the potential to introduce bias in the characteristics of who responds.
Sampling and Weighting
The SHED sample was designed to be representative of adults age 18 and older living in the United States.
The Ipsos methodology for selecting a general population sample from KnowledgePanel ensured that the resulting sample behaved as an equal probability of selection method (EPSEM) sample. This methodology started by weighting the entire KnowledgePanel to the benchmarks in the latest March supplement of the Current Population Survey along several geo-demographic dimensions. This way, the weighted distribution of the KnowledgePanel matched that of U.S. adults. The geo-demographic dimensions used for weighting the entire KnowledgePanel included gender, age, race, ethnicity, education, census region, household income, homeownership status, and metropolitan area status.
In the next step, using these weights as the measure of size (MOS) for each panel member, a probability proportional to size (PPS) procedure was used to select study-specific samples. This methodology was designed to produce a sample with weights close to one, thereby reducing the reliance on post-stratification weights for obtaining a representative sample.
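Ipsos' exact selection routine is only summarized here, but the general idea of PPS selection can be illustrated with a short sketch. The function below implements a generic systematic PPS draw in which each panel member's selection probability is proportional to a measure of size; the array `panel_weights` and the commented usage are hypothetical placeholders, not values or code from the actual KnowledgePanel process.

```python
import numpy as np

def pps_systematic_sample(mos, n, rng=None):
    """Generic systematic PPS selection: a unit's chance of selection is
    proportional to its measure of size (here, its panel weight).
    Illustrative only; this is not Ipsos' production procedure."""
    rng = np.random.default_rng() if rng is None else rng
    mos = np.asarray(mos, dtype=float)
    cum = np.cumsum(mos)                 # intervals proportional to each unit's size
    step = cum[-1] / n                   # sampling interval
    points = rng.uniform(0, step) + step * np.arange(n)
    return np.searchsorted(cum, points)  # index of the unit whose interval contains each point

# Hypothetical usage: panel_weights are the benchmark-adjusted panel weights
# described above, and 18,322 invitations were drawn for the 2021 SHED.
# selected = pps_systematic_sample(panel_weights, n=18322)
```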
After the survey collection was complete, statisticians at Ipsos adjusted weights in a post-stratification process that corrected for any survey non-response as well as any non-coverage or under- and oversampling in the study design. The following variables were used for the adjustment of weights for this study: age, gender, race, ethnicity, census region, residence in a metropolitan area, education, and household income. Demographic and geographic distributions for the noninstitutionalized, civilian population age 18 and older from the March Current Population Survey were the benchmarks in this adjustment. Household income benchmarks were obtained from the 2021 March Current Population Survey (CPS).
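The report does not spell out the exact post-stratification algorithm. One standard approach consistent with the description above is iterative proportional fitting (raking), in which weights are repeatedly rescaled until the weighted marginal distributions match the benchmark shares. The sketch below assumes hypothetical column names (`design_weight`, `age_group`, `census_region`) and benchmark dictionaries built from the March CPS; it is an illustration, not Ipsos' production code.

```python
import pandas as pd

def rake(df, base_weight, margins, iters=50):
    """Iterative proportional fitting (raking): rescale weights until the
    weighted share in each category matches its benchmark share.
    Generic sketch; the production adjustment may differ in detail."""
    w = df[base_weight].astype(float)
    for _ in range(iters):
        for var, targets in margins.items():  # e.g., {"age_group": {"18-29": 0.21, ...}}
            current = w.groupby(df[var]).sum() / w.sum()           # weighted shares now
            factors = {cat: targets[cat] / current[cat] for cat in targets}
            w = w * df[var].map(factors)                           # pull shares toward benchmarks
    return w

# Hypothetical usage with CPS benchmark shares:
# shed["weight"] = rake(shed, "design_weight",
#                       {"age_group": cps_age_shares, "census_region": cps_region_shares})
```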
One feature of the SHED is that a subset of respondents also participated in prior waves of the survey. In 2021, about one-third of respondents had participated in the fall 2020 survey. Prior year case identifiers for these repeat respondents are available in the publicly available dataset, along with weights for this subset of respondents. These weights use a similar procedure as described above to ensure estimates based on the repeated sample are representative of the U.S. population.
Although weights allow the sample to match the U.S. population (excluding those in the military or in institutions, such as prisons or nursing homes) on observable characteristics, as with all survey methods, it remains possible that non-coverage, non-response, or other differences between recruited panel members and the broader population result in differences between the sample and the U.S. population. For example, address-based sampling likely misses homeless populations, and non-English speakers may not participate in surveys conducted in English.74
Despite efforts to select the sample so that its unweighted distribution closely mirrored that of the U.S. adult population, the results indicate that weights remain necessary to accurately reflect the composition of the U.S. population. Consequently, all results presented in this report use the post-stratification weights produced by Ipsos for use with the survey.
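As a concrete illustration of how those weights enter the estimates, the snippet below computes a weighted share for a hypothetical indicator column; the column names `weight` and `doing_okay` are placeholders and do not come from the survey codebook.

```python
import pandas as pd

def weighted_share(df, indicator, weight="weight"):
    """Share of adults with indicator == 1, using post-stratification weights."""
    return (df[indicator] * df[weight]).sum() / df[weight].sum()

# Hypothetical usage, comparing weighted and unweighted estimates:
# unweighted = shed["doing_okay"].mean()
# weighted = weighted_share(shed, "doing_okay")  # the weighted figure is what the report uses
```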
Item Non-response and Imputation
Item non-response in the 2021 SHED was handled by imputation. Typically, less than 1 percent of observations were missing for each question.75 As a result, population estimates were not sensitive to the imputation procedure and a simple regression approach was used.76
The imputation procedure was carried out as follows:
- First, impute questions that serve as predictors throughout the imputation models, such as income and education.
- Then, starting at the beginning of the survey, impute missing values sequentially, question by question.
In some cases, the imputation for one question affected later questions by switching an observation from out-of-universe to in-universe or vice versa. These cases were handled by imputing the missing "downstream" question response or recoding it to missing, where appropriate.
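Note 76 describes the model families used (logit, multinomial logit, ordinal logit, and linear regression). As a rough illustration of one step of the sequential procedure, the sketch below fits a logistic model on complete cases and fills in a binary question; the data frame, column names, and predictor list are hypothetical, and the production code also covers the multinomial, ordinal, and continuous cases.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def impute_binary(df, target, predictors):
    """One sequential-imputation step for a binary question: fit on complete
    cases, predict for missing cases, and record an imputation flag.
    Simplified sketch of the approach described in note 76."""
    observed = df[target].notna()
    X = pd.get_dummies(df[predictors], drop_first=True)       # encode categorical predictors
    model = LogisticRegression(max_iter=1000)
    model.fit(X[observed], df.loc[observed, target])
    df.loc[~observed, target] = model.predict(X[~observed])   # fill missing values
    df[f"{target}_iflag"] = (~observed).astype(int)           # matches the _iflag convention
    return df

# Hypothetical usage, after income and education have been imputed so they
# can serve as predictors throughout:
# shed = impute_binary(shed, "L0_a", ["income_cat", "education", "age_group"])
```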
Each variable in the publicly available SHED dataset has a corresponding imputation flag, 'var'_iflag, which is set to 1 if the observation was imputed and 0 otherwise.77 For example, the first question of the survey about whether the respondent lived with their spouse or partner, L0_a, has a corresponding imputation flag of L0_a_iflag. This question had 31 missing values that were imputed, accounting for 0.3 percent of all observations.
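With the public-use file (see note 77), the imputation rate for any variable can be recomputed from its flag. A minimal check, assuming a hypothetical file name for the downloaded CSV:

```python
import pandas as pd

shed = pd.read_csv("shed_public_2021.csv")  # hypothetical file name; see note 77 for the data
n_imputed = shed["L0_a_iflag"].sum()        # imputation flag documented above
print(f"{n_imputed} imputed values ({n_imputed / len(shed):.1%})")  # 31 (0.3%)
```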
References
67. Data and reports of survey findings from all past years are available at https://www.federalreserve.gov/consumerscommunities/shed.htm. Return to text
68. Three hundred ninety-five respondents were not included in the analysis because they started, but did not complete, the survey (known as break-offs). The study break-off rate for the SHED was 3.2 percent. Return to text
69. Of the 11,965 respondents who completed the survey, 91 were excluded from the analysis in this report because of either leaving responses to a large number of questions missing, completing the survey too quickly, or both. Return to text
70. All participants received a pre-notification email before the survey launch. They also received four email reminders during the three-week field period in addition to the initial survey invitation. Targeted respondents received two additional email reminders over this period. Three days before closing the survey, the email reminder to targeted adults increased the incentive for completing the survey from $15 to $25. Of the 5,733 respondents in a targeted group, 251 received the higher $25 incentive payment and the rest received the $15 incentive payment. Return to text
71. For a comparison of results to select overlapping questions from the SHED and Census Bureau surveys, see Jeff Larrimore, Maximilian Schmeiser, and Sebastian Devlin-Foltz, "Should You Trust Things You Hear Online? Comparing SHED and Census Bureau Survey Results," Finance and Economics Discussion Series Notes (Washington: Board of Governors of the Federal Reserve System, October 15, 2015), https://doi.org/10.17016/2380-7172.1619. Return to text
72. David S. Yeager et al., "Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples," Public Opinion Quarterly 75, no. 4 (2011): 709–47. Return to text
73. This approach also may allow for the retroactive linking of information learned about respondents from other data, as was done in 2021 to identify Asian respondents in earlier years of the survey. Return to text
74. For example, while the survey was weighted to match the race and ethnicity of the entire U.S. adult population, there is evidence that the Hispanic population in the survey was somewhat more likely to speak English at home than the overall Hispanic population in the United States. In the 2021 SHED, 60 percent of Hispanic adults reported speaking Spanish at home, which is below estimates from the 2019 American Community Survey. See table B16006 at https://data.census.gov. See the Report on the Economic Well-Being of U.S. Households in 2017 for a comparison of results to select questions administered in Spanish and English at https://www.federalreserve.gov/publications/2018-economic-well-being-of-us-households-in-2017-preface.htm. Return to text
75. Because item non-response is very low in the SHED, 2021 estimates are comparable with prior years where item non-response was handled differently. Return to text
76. A logit regression was used for binary variables, a multinomial logit for categorical variables, an ordinal logit for ordered values, and a linear regression for continuous values. Typical predictors included income, education, race and ethnicity, age, gender, and metropolitan status, but varied depending on how well they predicted the variable of interest and item non-response. Additional predictors were included as appropriate. Return to text
77. The survey data can be downloaded at https://www.federalreserve.gov/consumerscommunities/shed_data.htm. Return to text