Understanding Employer Experiences Under Continuing Reserve Component Operations

OMB Control Number: 0704-0618

SUPPORTING STATEMENT – PART B

B.  COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

If the collection of information employs statistical methods, it should be indicated in Item 17 of OMB Form 83-I, and the following information should be provided in this Supporting Statement:

1.  Description of the Activity

  A. Employer Survey

The purpose of this activity is to gather information from employers about their views on employing members of the National Guard and Reserve (G&R). The universe of possible respondents is the set of all private U.S. firms and federal, state and local government agencies, with a focus on those currently or recently employing G&R members. Employers will be stratified by sector (federal government, state government, local government, and private sector) and size (number of employees). Our primary analytical goal is to derive unbiased estimates that are representative of the population of firms employing G&R members within each stratum. We focus on stratum-level estimates rather than universe-level estimates because of two factors that inhibit meaningful aggregate summaries: (i) most private firms (including most private firms employing G&R personnel) are small but most G&R personnel work for large firms; and (ii) the lack of a consistent definition of a firm across the private and public sectors (i.e., there is no clear public sector analog of a private-sector firm). While we will collect some data on employers that do not employ G&R personnel, our approach is not designed to provide precise estimates for this population of employers. Our sampling design aims to provide adequate precision while meeting budgetary constraints and limiting respondent burden.

    i. Private Sector:

For private-sector firms, we will use different sampling strategies within each employment size stratum to effectively balance representativeness and respondent burden. We estimate that approximately 80% of all large private firms (500 or more employees) are G&R employers. By contrast, we estimate that approximately 20% of medium private firms (between 100 and 499 employees) and approximately 1% of small private firms (fewer than 100 employees) are G&R employers. These approximations are derived by combining data from the 2016 Annual Survey of Entrepreneurs (ASE) with the 2018 Status of Forces Survey of Reserve Component Members (SOFS-R). The first dataset details the distribution of firms by number of employees, while the second dataset contains information about the proportion of G&R personnel employed by firms of varying size. We note that these are rough approximations, based on assumptions about the unknown distribution of the number of G&R employees in firms of varying size; the true proportions may differ substantially. However, a basic feature that is evident from these estimates is that a random sample of large and medium firms will likely yield an appreciable number of G&R employers, while a random sample of small firms will not.

For our sample of private-sector G&R firms, we will collect a random sample of large firms (which are likely to employ G&R personnel), construct a convenience sample of small firms, and use a combined random sample and convenience sample for medium firms. We use a convenience sampling approach for small firms, drawing on data sources (described below) that consist predominantly of employers of G&R personnel, because of the very low likelihood that a small firm selected at random would employ G&R members. To ensure sufficient representation of very small businesses, we will stratify the selection of the small private firm sample into firms with fewer than 50 employees and those with 50 to 99 employees. The combined sampling approach for medium-sized firms allows us to assess the degree of bias in the sources from which the convenience samples derive. This in turn will inform the analysis of our convenience sample of small-sized firms, for which a comparative random sample is not practically feasible and which is based on the same data sources as the medium-sized firm convenience sample.

While our primary analytical goal is to derive quantities that are representative of the population of G&R employers, we will also survey firms not employing G&R personnel (non-G&R employers). Many of the survey items are applicable to non-G&R employers and their perspective is relevant because while they do not currently employ G&R personnel, they may have done so in the past or may do so in the future. Our sample of large and medium non-G&R private firms will be derived from the random sample of large and medium firms that respond and identify as not employing G&R. Because we will not collect a random sample of small firms, we will not have a representative sample of non-G&R small firms. We also expect that our sample of large non-G&R firms will be small, since we expect the vast majority of large firms to employ G&R personnel.

Our universe of private U.S. firms will be derived from the Dun & Bradstreet Hoovers database, which to the best of our knowledge is the largest existing collection of U.S. firm data available to researchers. Email contact information for a human resources representative or other appropriate survey respondent will be retrieved from that database and supplemented as needed using a commercial email append service. Where email contact information is not available, outreach will be conducted by mail.

It is not possible to filter all public-sector entities from the Dun & Bradstreet database explicitly, so we will take the following steps to limit the overlap between our private- and public-sector sampling frames. First, we will exclude North American Industry Classification System (NAICS) code 92 (public administration), which is primarily composed of public-sector entities, from the private-sector sampling frame. While other industry codes may contain both private- and public-sector firms, we will not explicitly remove them from our sampling frame of private-sector firms. This is because, outside of NAICS 92, we expect a large majority of firms to be private sector. We will also exclude firms found in our public-sector sampling frame, which consists of federal agencies, a subset of state agencies, and local governments. Our public sample is described in more detail below. We note, however, that our public-sector sampling frame is not a complete accounting of all public-sector entities. There may still be firms that are considered public sector but fall in neither NAICS 92 nor our public-sector sampling frame. Such firms will remain in our private sampling frame, but we expect them to constitute a small proportion of that frame, and thus to be rarely sampled. As a result, in practice, our private-sector sample will represent U.S. firms outside of NAICS 92 and our public-sector sampling frame.

To summarize, our sample of private-sector firms will be composed of four parts: (a) a random sample of large firms, (b) a random sample of medium firms, (c) a convenience sample of G&R medium firms, and (d) a convenience sample of G&R small firms. Sample selection for the convenience sample of small firms will be stratified into two groups: very small businesses (fewer than 50 employees) and businesses with between 50 and 99 employees. We summarize the sampling strategy of private firms in Table 1 and provide details below.

Table 1: Summary of sampled populations for private-sector strata.

Size (employees) | G&R                         | Non-G&R
500+             | Random sample               | Random sample
[100, 499]       | Convenience + random sample | Random sample
<100             | Convenience sample          | None

Our random sample of large employers (sample (a)) will be constructed by taking a random sample from the subset of the private-sector sampling frame with 500 or more employees in the Dun & Bradstreet database. Similarly, our random sample of medium-sized private firms (sample (b)) will be constructed through a random sample of firms categorized as having between 100 and 499 employees in the Dun & Bradstreet database.

Convenience samples for the medium and small firms (samples (c) and (d)) will be obtained through various available military data sources that consist primarily, if not exclusively, of G&R employers. These sources include: the Civilian Employment Information (CEI) database from the Defense Manpower Data Center (DMDC) (approximately 150,000 unique firms represented); information from Employer Support of the Guard and Reserve (ESGR) on employers who have received awards for their support of guard and reserve employees (approximately 20,000 unique firms for Patriot Awards and 300 for Freedom Awards; see https://www.esgr.mil/Employer-Awards/ESGR-Awards-Programs for more information on these awards); and a list of organizations that have been named in inquiries to the ESGR call center (approximately 3,000 unique firms). We will then merge information on firms identified through these sources into the Dun & Bradstreet database and filter out firms that (i) are large employers, (ii) are medium firms selected in our existing random sample, (iii) are NAICS 92 employers, or (iv) belong to our public-sector sampling frame. After this merging and filtering, random subsets of medium and small employers will be selected from the convenience sample population (i.e., we will not survey all employers in these data sources; we will randomly select those we do survey). We will deliberately select a portion of the small private firm convenience sample from among matched firms that have fewer than 50 employees and another portion from matched firms that have between 50 and 99 employees. The split between these two portions will be informed by the data and, to the extent feasible, will prioritize gathering sufficient responses from very small private firms to enable statistically precise estimates for that group.
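
For illustration only, the sketch below outlines the merge, filter, and subsampling steps described above. It is a simplified illustration under stated assumptions, not the production procedure: the field names (e.g., duns, employees, naics2) and the structure of the input files are hypothetical placeholders for the actual Dun & Bradstreet and DoD data layouts.

```python
# Illustrative sketch of the convenience-sample construction (placeholder field names).
import pandas as pd

def build_convenience_sample(dnb, dod_lists, random_medium_ids, public_frame_ids,
                             n_medium, n_small_lt50, n_small_50_99, seed=0):
    """Merge DoD employer lists into a D&B-style frame, apply the four exclusions,
    and draw random subsets of medium and small (presumed G&R) employers."""
    dod = pd.concat(dod_lists).drop_duplicates(subset="duns")
    merged = dod.merge(dnb, on="duns", how="inner")            # keep firms that match the D&B frame
    merged = merged[merged["employees"] < 500]                 # (i) drop large employers
    merged = merged[~merged["duns"].isin(random_medium_ids)]   # (ii) drop firms already in the random medium sample
    merged = merged[merged["naics2"] != "92"]                  # (iii) drop NAICS 92 (public administration)
    merged = merged[~merged["duns"].isin(public_frame_ids)]    # (iv) drop the public-sector sampling frame

    medium = merged[merged["employees"].between(100, 499)]
    small_lt50 = merged[merged["employees"] < 50]
    small_50_99 = merged[merged["employees"].between(50, 99)]

    return {
        "medium": medium.sample(n=min(n_medium, len(medium)), random_state=seed),
        "small_lt50": small_lt50.sample(n=min(n_small_lt50, len(small_lt50)), random_state=seed),
        "small_50_99": small_50_99.sample(n=min(n_small_50_99, len(small_50_99)), random_state=seed),
    }
```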

Contact information and business data are available through the Dun & Bradstreet database for our convenience samples of medium and small firms. We note that generating convenience samples as described relies on merging employer lists from Department of Defense (DoD) sources with the Dun & Bradstreet database. As we developed our methodology, we attempted merging small subsets of these DoD lists to Dun & Bradstreet databases and found generally high match rates. In particular, the estimated proportions of successfully matched firms were: 63% for firms in DoD’s CEI database, 81% for Patriot Award firms, 37% for firms identified in ESGR call center files, and 90% for Freedom Award firms.

Table 2 lists the number of private U.S. firms within each size stratum. As described above, estimated proportions of G&R firms in each sector are based on ASE and SOFS-R data. Total numbers of firms in each stratum are determined by the ASE. Table 3 lists the number of sampled firms and the expected number of respondents within each stratum. To calculate the approximate number of respondents, we assume a 5% response rate among non-G&R firms and a 12% response rate for G&R firms. The assumed response rate for non-G&R employers is similar to that observed in Gates et al. (2013).1 Our assumed response rate of 12% for G&R firms is slightly lower than that observed in the previous study (17%).

Table 2: Number of all private U.S. firms and all private G&R firms

Size (employees) | Approximate # of all firms | Estimated # of all G&R firms | Estimated proportion of all firms that hire G&R
500+             | 20,000                     | 16,000                       | 80%
[100, 499]       | 80,000                     | 16,000                       | 20%
<100             | 5,000,000                  | 50,000                       | 1%

Table 3: Number of sampled private firms and expected number of respondents

Size (employees) | Approximate # of sampled firms                                                 | Approximate # of G&R respondents | Approximate # of non-G&R respondents
500+             | 6,000 (Random)                                                                 | 576 (Random)                     | 60 (Random)
[100, 499]       | 3,750 (Convenience) + 6,250 (Random)                                           | 450 (Convenience) + 150 (Random) | 250 (Random)
<100             | 5,000 total, split between firms with <50 and [50, 99] employees (Convenience) | 600 (Convenience)                | 0
Total            | 21,000                                                                         | 1,776                            | 310
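
As an arithmetic check, the expected respondent counts in Table 3 follow directly from the sampled counts, the assumed G&R proportions (Table 2, treating the convenience frames as consisting entirely of G&R employers), and the assumed response rates of 12% (G&R) and 5% (non-G&R). The brief illustration below reproduces those figures and is provided for transparency only.

```python
# Reproduce the expected private-sector respondent counts in Table 3.
RESP_GR, RESP_NON = 0.12, 0.05   # assumed response rates (G&R, non-G&R)

strata = {
    # stratum: (number of sampled firms, assumed proportion that are G&R employers)
    "large, random":       (6_000, 0.80),
    "medium, random":      (6_250, 0.20),
    "medium, convenience": (3_750, 1.00),   # convenience frame treated as all G&R
    "small, convenience":  (5_000, 1.00),
}

for name, (n, p_gr) in strata.items():
    gr_respondents = round(n * p_gr * RESP_GR)
    non_respondents = round(n * (1 - p_gr) * RESP_NON)
    print(f"{name:22s}  G&R ~ {gr_respondents:4d}   non-G&R ~ {non_respondents:4d}")
# Output: 576/60, 150/250, 450/0, and 600/0, matching Table 3.
```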

    ii. Public Sector:

Based on the data sources mentioned above, we estimate that the public sector employs approximately 36% of G&R members. Therefore, an analysis of the experiences of public sector employers is critical to this study. Below we describe the sampling strategies for federal, state, and local employers. We summarize the overall sampling strategy in Table 4 below.

  • Federal government: Within the federal portion of the public sector, a firm will be defined by the “Level 1” sub-agency codes used for stratification purposes for the Office of Personnel Management’s Federal Employee Viewpoint Survey (FEVS). We will sample the entire universe of 211 unique “Level 1” sub-agencies, including 208 from the 2019 FEVS plus three “Level 1” sub-agencies from the Department of Veterans Affairs from the 2017 administration of the FEVS (the VA did not participate in the FEVS in 2019). Note that smaller organizations that are not separately itemized at the “Level 1” sub-agency level but rather fall under the umbrella of “All Other” organizations within federal agencies will not be surveyed. Survey findings, therefore, would be generalizable to the population of “Level 1” sub-agencies but not to smaller organizations. In addition, note that 60 of the sub-agencies to be surveyed are in the Department of Defense (including the military services); while they contribute to the federal sample for analysis purposes, we are separately seeking approval to survey those organizations through DoD’s Report Control Symbol (RCS) licensing process. In Tables 5 and 6 below, we will assume that all federal “Level 1” sub-agencies are employers of G&R. We will also assume a 75% response rate among these federal sub-agencies.

  • State governments: At the state level, we will first survey a central point of contact for each state (e.g., an HR director). We will also survey a central point of contact for agencies corresponding to each of the following four state-level government functions: corrections, judicial and legal, financial administration, and police protection. These four government functions are chosen because they are among the 10 functions employing the most people at the state level according to Annual Survey of Public Employment & Payroll (ASPEP) data (among government functions supported by all states), and also fall predominantly within NAICS 92 (public administration). Other functions of state government that employ large numbers of people (e.g., higher education and hospitals) are divided between administrative functions included in NAICS 92 and implementing functions at entities outside NAICS 92 that may be captured in the much larger sample of employers drawn from Dun & Bradstreet. We plan to use the state sample to shed light on organizations not captured in that larger sample, and therefore exclude those functions likely to be covered in the larger sample. In addition to the 50 states, we will also survey three U.S. territories with National Guard units (Puerto Rico, Guam, and the U.S. Virgin Islands), and the District of Columbia. With four agencies per state in addition to the central state point of contact for each state, the universe of state-level employers consists of 270 firms. We will sample this entire universe of state-level employers.

In Tables 5 and 6 below, we will assume that every state is an employer of G&R and that 50% of sub-state agencies are employers of G&R. We will assume a 100% response rate for the 54 state-level samples (consistent with the response rates from the Census of Governments). For the sub-state agencies, we will assume a 75% response rate for G&R employers and a 25% response rate for non-G&R employers.

  • Local governments: Our universe of local governments will consist of general purpose and special district governments listed in the 2017 Census of Governments (COG). General purpose governments include township, municipality, and county governments. Special districts are government entities established to perform one or a small number of government functions (e.g., fire, police, or utilities). In total, there are approximately 78,000 general purpose and special district governments in the census. Unlike the federal and state samples, we will not be able to sample the universe of these governments, due to both study cost and respondent burden. As such, we will sample a subset of local governments. In particular, we will conduct a random sample of large general-purpose governments (estimated 500+ employees), because of our interest in understanding the experiences of these larger entities, which are more likely to be G&R employers. We will pair this random sample with a convenience sample drawn from small or medium general-purpose governments (<500 employees) and special districts (for which we are unable to estimate the number of employees), as well as large general-purpose governments not selected as part of the random sample.

For the purposes of identifying large local governments, we will first use regression models to estimate the relationship between population size and employment size using data from the Annual Survey of Public Employment & Payroll (ASPEP). We will then use these regressions (which will adjust for population size, government type, and the interaction between population size and government type) to estimate the number of employees in each of the general-purpose governments; an illustrative sketch of this step follows this list. We will randomly select 2,500 large local governments (estimated to have 500 or more employees) from among those that merge into the Dun & Bradstreet database. Preliminary merge attempts suggest that 80% of governments within the Census of Governments database can be matched to the Dun & Bradstreet database.

We will then construct a convenience sample of local governments by identifying the intersection of COG governments and G&R-employing organizations identified in DoD sources (e.g., CEI, awards data, call center data) that match to the Dun & Bradstreet data. We will remove entities randomly selected for the large local government sample (to avoid resampling them) and will then take a random subset of 3,000 local governments. To reduce the burden on small-to-medium general-purpose governments and special districts, we will not take random samples of the population of those governments. As a result, we do not expect to have an appreciable representation of non-G&R firms in these strata. In Tables 5 and 6 below, we assume that 80% of large local governments are employers of G&R and that 100% of firms within our convenience samples are employers of G&R. We assume that 10% of other-than-large local governments are G&R employers. We will also assume that the response rates of local governments are 12% for G&R firms and 5% for non-G&R firms, as we did for the private sector.
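
As referenced above, the employment-size imputation for local governments will be based on regression models fit to ASPEP data. The sketch below illustrates the general form of this step under simplifying assumptions; the column names (population, gov_type, employees) are hypothetical placeholders, and the actual model specification may differ.

```python
# Illustrative sketch of the employment-size imputation for local governments.
import pandas as pd
import statsmodels.formula.api as smf

def flag_large_local_governments(aspep: pd.DataFrame, cog: pd.DataFrame) -> pd.DataFrame:
    """Fit employment on population, government type, and their interaction using
    ASPEP data, then predict employment for Census of Governments entities and
    flag those estimated to have 500 or more employees."""
    model = smf.ols("employees ~ population * C(gov_type)", data=aspep).fit()
    cog = cog.copy()
    # cog must contain the same predictor columns, with gov_type levels seen in ASPEP
    cog["est_employees"] = model.predict(cog)
    cog["large"] = cog["est_employees"] >= 500
    return cog
```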

Table 4: Summary of sampled populations for public-sector strata.

Type          | G&R         | Non-G&R
Federal       | Universe    | Universe
State         | Universe    | Universe
Local (large) | Random      | Random
Local (all)   | Convenience | None

Table 5: Number of all public-sector employer organizations and number with G&R employees

Type          | Approximate # of all firms | Estimated # of all G&R firms | Estimated proportion of firms that hire G&R
Federal       | 211                        | 211                          | 100%
State         | 270                        | 162                          | 60%
Local (large) | 4,700                      | 3,760                        | 80%
Local (all)   | 78,000                     | 11,090                       | 14%

Table 6: Number of sampled public firms and expected number of respondents

Type          | Approximate # of sampled firms | Approximate # of G&R respondents | Approximate # of non-G&R respondents
Federal       | 211                            | 158                              | 0
State         | 594                            | 135                              | 27
Local (large) | 2,500 (Random)                 | 240 (Random)                     | 25
Local (all)   | 3,000 (Convenience)            | 360 (Convenience)                | 0
Total         | 6,305                          | 1,015                            | 93

  B. Employer Interviews

RAND will interview representatives from organizations from the private sector and public sector that either currently employ or recently employed G&R members. We will ask about the organization’s experiences with G&R employees and duty-related absences; the impact of duty-related absences on the organization; the Uniformed Services Employment and Reemployment Rights Act (USERRA); Employer Support of the Guard and Reserve (ESGR) programs and support; and recommendations for possible ways to better support civilian employers of G&R members. In total, we will conduct 90 employer interviews, each expected to last approximately 45 minutes. This is a one-time interview; no follow-ups are planned.

Given the small size of the National Guard and Reserve relative to the entire U.S. labor force, employing a guard or reserve member is a relatively rare event. Recognizing the challenge this poses to identifying suitable candidates for these in-depth interviews, our plan is to select organizations for interviews from lists of organizations that either received an award from DoD publicly recognizing them as an exemplary employer of G&R personnel (e.g., a Patriot Award or Freedom Award) or submitted an inquiry to the ESGR support center via a telephone call or the ESGR website. The study team will use the information provided by the study sponsor to identify the appropriate point of contact to answer the interview questions on behalf of the organization (e.g., the lead human resources official for the organization). As needed, we will supplement this information by merging information from firms identified through these sources into the Dun & Bradstreet database, using the process described in section 1(A)(i) in the context of constructing convenience samples for the purposes of the employer survey.

The interviews will be analyzed using qualitative methods, not statistical methods. The results are not intended to be generalizable to the entire population of civilian employers of RC personnel. We will, however, seek to achieve variation across employer types, such as sector (private vs. public), employer size (among private firms), and employer location, so that our qualitative findings may be informed by these varied perspectives.

2.  Procedures for the Collection of Information

  A. Employer Survey

    i. Sample allocation, stratification:

Our sample of firms will consist of six primary strata: (i) large private firms, (ii) medium private firms, (iii) small private firms, (iv) federal government agencies, (v) state agencies, and (vi) local governments. Within each of these strata, we will collect data on both G&R and non-G&R employing firms, although our primary concern will be to provide estimates for the population of G&R firms. For (i), we will randomly sample from the population of large firms. For (ii), a combination of a random sample and a convenience sample will be employed. For (iii), a pure convenience sample will be necessary, which will be divided into sub-strata of firms with fewer than 50 employees and firms with between 50 and 99 employees. For (iv), every federal agency (specified by “Level 1” sub-agency codes) will be sampled. For (v), we will sample a central contact as well as four sub-agencies for every state (plus three territories with Guard units and the District of Columbia). For (vi), we will draw a random sample of large local governments and supplement it with a convenience sample of known G&R employers from among all sizes of local governments.

    ii. Estimation:

In our analyses of collected survey data, means (for continuous responses) and proportions (for binary and categorical responses) will be computed within each of the six strata specified in section 2(A)(i). Confidence intervals will be constructed at the 95% level using appropriate statistical methodology. Consistent with other firm-level surveys, we expect a substantial level of survey nonresponse. When survey nonresponse is correlated with outcomes of interest, naïve estimates of means and proportions may exhibit nonresponse bias. We will adjust for nonresponse bias using standard propensity score weighting and time-to-response analyses. Where random or full-stratum samples are employed (strata (i), (iv), and (v)), nonresponse adjustment is all that will be required.
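
For concreteness, the sketch below shows one standard way such a stratum-level estimate and interval could be computed from nonresponse-adjusted weights (a normal approximation using a Kish effective sample size). It is an illustration under stated assumptions, not a commitment to a specific interval method.

```python
# Illustrative weighted proportion and 95% confidence interval within one stratum.
import numpy as np

def weighted_proportion_ci(y, w, z=1.96):
    """y: 0/1 item responses from one stratum; w: nonresponse-adjustment weights."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    p_hat = np.sum(w * y) / np.sum(w)
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2)          # Kish effective sample size
    se = np.sqrt(p_hat * (1 - p_hat) / n_eff)
    return p_hat, p_hat - z * se, p_hat + z * se
```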

For strata where convenience samples are used (strata (ii), (iii), and (vi)), adjusting for nonresponse alone will only provide unbiased estimates for the population defined by the convenience sample. Additional adjustment will be needed to make the estimates representative of the entire stratum population. When both convenience and random samples are available (stratum (ii)), a smaller random sample can be used to assess the degree of bias in a larger convenience sample. In this case, the two samples can be combined by adding post-stratification weights to the convenience sample. These post-stratification weights can be determined by defining substrata and treating the random sample as the population of interest. For small private firms (stratum (iii)) and for local governments other than large local governments (within stratum (vi)), for which only a convenience sample is available, this strategy will not be possible. In those cases, the adjusted estimates will formally be representative only of the available convenience sample. However, if the random sample of medium-sized firms has similar characteristics to the convenience sample of medium-sized firms, this will serve as circumstantial evidence that the convenience sample of small firms is also similar to a random sample of small firms. Conversely, if the convenience sample of medium-sized firms differs from the random sample of these firms, this would cast doubt on the generalizability of the convenience sample of small firms.
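
The post-stratification step for the medium-firm stratum can be illustrated as follows; the substratum variable is a placeholder (e.g., an industry grouping), and this is a simplified sketch rather than the final weighting procedure.

```python
# Illustrative post-stratification of the medium-firm convenience sample,
# treating the random sample as the reference population.
import pandas as pd

def poststratification_weights(convenience: pd.DataFrame, random_sample: pd.DataFrame,
                               substratum_col: str) -> pd.Series:
    """Weight each convenience-sample firm by (random-sample share) / (convenience share)
    of its substratum, so the convenience sample matches the random sample's mix."""
    target_share = random_sample[substratum_col].value_counts(normalize=True)
    sample_share = convenience[substratum_col].value_counts(normalize=True)
    # substrata present in the convenience sample but absent from the random sample yield NaN
    return convenience[substratum_col].map(target_share / sample_share)
```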

    iii. Degree of accuracy:

Below is a summary of sample size calculations. For simplicity, we perform all calculations with respect to a binary outcome (i.e., a yes or no question). We also assume that the true underlying population proportion is 0.5, because an assumed proportion of 0.5 yields the most conservative (i.e., largest) sample size requirement. Finally, we assume a margin of error of 0.05 and a confidence level of 0.95; that is, we require that the estimated proportion lie in the interval (0.45, 0.55) with probability 0.95. Under these assumptions, standard sample size formulae suggest a minimum sample size of 385, which is smaller than the expected number of completed surveys in each of the private-sector strata. To the extent feasible and informed by the data, we will divide the small private firm stratum such that the anticipated number of responses from the very small firm group (fewer than 50 employees) exceeds this threshold as well. The expected number of completed surveys also exceeds 385 for G&R local governments when we combine the random sample of large local governments and the convenience sample that includes large, small-to-medium, and special district governments. For the federal and state strata, the expected number of respondents is smaller than 385; however, we intend to sample the universe for these strata, so follow-up efforts will be focused on increasing response rates within them. Another factor that may reduce our statistical efficiency is the effect of nonresponse bias corrections.
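
For reference, the standard formula underlying this calculation, evaluated at the stated assumptions (p = 0.5, margin of error e = 0.05, 95% confidence), is:

```latex
n \;=\; \frac{z_{0.975}^{2}\, p\,(1-p)}{e^{2}}
  \;=\; \frac{(1.96)^{2}(0.5)(0.5)}{(0.05)^{2}}
  \;\approx\; 384.2,
\qquad\text{rounded up to } n = 385.
```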

Accounting for nonresponse corrections at the stage of study design is difficult because nonresponse weighting can either increase or decrease the variance of estimators, depending on the relationship between the adjustment variables and the outcome variables. One way to account for an increase in variance due to nonresponse weighting is to consider the sample size necessary subject to a design effect (DEFF). In this case, the DEFF is the ratio of the variance of a nonresponse-adjusted estimator to the variance of an estimator based on a simple random sample of the same size. Assuming a DEFF of 1.5 (which is large) yields a minimum required sample size of 577 under the same conservative assumptions described above. Under these assumptions, the private firm strata and local governments should still reach the required precision. Note that none of the non-G&R firm strata has sufficient size to provide accurate estimates according to this criterion. We are using a sampling strategy that prioritizes estimates for the population of G&R firms because the experiences of these firms are more likely to offer policy-relevant lessons, and because doing so reduces the burden on the public and targets finite project resources.
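
To make the design-effect adjustment above explicit: with DEFF = 1.5, the required sample size scales proportionally,

```latex
n_{\mathrm{adj}} \;=\; \mathrm{DEFF}\times\frac{z_{0.975}^{2}\, p\,(1-p)}{e^{2}}
  \;=\; 1.5 \times 384.2 \;\approx\; 576.2,
\qquad\text{rounded up to } n_{\mathrm{adj}} = 577.
```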

    iv. Unusual problems requiring specialized sampling procedures:

A limitation of our approach is the absence of a probability sample of small private firms and of other-than-large local governments; this is necessary due to the sparsity of G&R employers among these entities. As described above in section 2(A)(ii), this limitation is mitigated by the combination of a probability sample and a convenience sample for the medium-sized firms.

    v. Reduce respondent burden:

We expect respondent burden to be low for this one-time data collection. Moreover, we decided to not randomly sample small private firms or other-than-large local governments in part to reduce the burden on smaller organizations, which are less likely to employ G&R personnel and for which responding to the data collection may be a larger burden relative to organization resources.

  B. Employer Interviews

We will not formally stratify the employer interview sample, but we will purposefully divide the universe of potential respondents into categories of firms developed based on the firm-level information available to us in the DoD-provided sources. Contingent on the nature of the data available from ESGR, groupings may include public- or private-sector firms, large or small businesses, first responder organizations, or employers based in different regions. These categories will not necessarily align with the strata identified for the employer survey, owing to the different objectives of the two data collections: the employer survey is designed to yield estimates that are representative of organizations in each stratum, while the interviews will not be analyzed using statistical methods. Nonetheless, we will randomly select organizations from within the specified categories. We will conduct outreach to encourage participation, as discussed below, and will replace each non-responding organization with another organization in its category until we have reached our targets within each category and our overall target of 90 interviews.

We will code the comments made during the interviews and then analyze those results using qualitative methods. We may report differences and similarities by employer type, but given the relatively small sample and how it will be selected (a convenience sample), most findings will be reported for the entire group of interview participants. We may provide summary descriptive information about the types of employers we interviewed to help convey the variation in our sample. However, we will not use formal estimation procedures to analyze information collected during the interviews, either for the sample overall or for any subgroups.

We expect respondent burden to be low for this one-time data collection.

3.  Maximization of Response Rates, Non-response, and Reliability

  A. Employer Survey

We will employ various methods to improve the response rate of the survey. We will obtain, when possible, names and email contact information for appropriate respondents at each firm, to improve the targeting of the survey to those best-positioned to respond. This will include the use of a commercial email append service to supplement email information from Dun & Bradstreet.

Via the survey vendor, NORC, we will mail a hard copy invitation letter to all firms invited to participate. The invitation package will include a cover letter from the Department of Defense, which will highlight the importance of the survey to the Department and how it will be used to inform policymaking to the benefit of employers and current or potential G&R employees. The package also will include an invitation letter from RAND supplying the respondent with the web link to the online survey and a unique PIN they will use to access the survey, along with a fact sheet describing the content of the survey, providing assurances about confidentiality, and indicating that respondents will be able to print out a certificate of participation upon completing the survey. The vendor will send the same materials electronically to firms for which we are able to obtain email contact information. Follow-up to nonrespondents will include one mailed postcard to firms for which we do not have email contact information and up to three additional emails to firms for which we do have emails.

Despite efforts to minimize nonresponse, we still expect a high level of nonresponse, as is typical of firm-level surveys. We will use statistical methods to adjust for nonresponse bias and ensure that survey results are representative of the employer strata specified in section 1. In particular, we will use both propensity score weighting and time-to-response analysis. In propensity score weighting, data collected from respondents are reweighted so that the population of respondents is representative of the full population of both respondents and nonrespondents. A critical step in propensity score weighting is the estimation of firm-level propensity scores. Propensity scores will be computed using a flexible class of generalized boosted regression models. Methods for fitting these models for propensity score estimation are implemented in the R package TWANG, which selects a propensity score model that minimizes imbalance between respondent and nonrespondent characteristics.
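
The study will implement this weighting with the R package TWANG. Purely as an illustration of the general approach (not the TWANG implementation itself), the sketch below estimates response propensities with an analogous gradient-boosted classifier in Python; the variable names are placeholders.

```python
# Illustrative response-propensity weighting with gradient boosting (placeholder names).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def nonresponse_weights(frame: pd.DataFrame, covariates: list, responded_col: str) -> pd.Series:
    """Estimate each sampled firm's probability of responding from firm-level
    covariates and weight respondents by the inverse of that probability."""
    X = pd.get_dummies(frame[covariates], drop_first=True)       # firm size, industry, geography, ...
    y = frame[responded_col].astype(int)
    gbm = GradientBoostingClassifier(n_estimators=500, max_depth=3, learning_rate=0.01)
    gbm.fit(X, y)
    p_respond = np.clip(gbm.predict_proba(X)[:, 1], 0.01, 1.0)   # guard against extreme weights
    weights = pd.Series(1.0 / p_respond, index=frame.index)
    return weights[y == 1]                                       # weights apply to respondents only
```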

The underlying assumption of all propensity score methods is that nonresponse is random conditional on modeled covariates. While standard, this assumption is not explicitly testable. It is more plausible when propensity score models are constructed on a rich set of covariates. We will use firm-level covariates for both public- and private-sector firms, including firm size, industry, and geography. One additional challenge is that, even conditional on characteristics observed for both respondents and nonrespondents, G&R status is both unobserved for nonrespondents and likely to affect response rates. To account for this, we will compare the results of three weighting strategies with different assumptions. As a baseline, we will compute propensity-weighted estimates in which propensity scores are modeled on all observed covariates. Next, we will perform a sensitivity analysis in which we assume that the response rates of G&R and non-G&R firms differ by a fixed ratio, conditional on covariates. By varying this ratio, we can determine the impact of unobserved G&R status on estimates. Finally, we propose to model G&R status on all observed respondents, for whom G&R status is known. We will then multiply impute G&R status for nonrespondents and evaluate the increase in the variance of estimates when G&R status is included in the propensity score model.

To supplement the propensity weighting analysis, we will also use a time-to-response analysis, which relies on a different assumption: namely, that late respondents are similar to nonrespondents. To implement time-to-response analyses, we will automatically collect the date of completion of our web-based survey. We will then test whether the distribution of response outcomes depends on response time. If response time is predictive of responses, then there is evidence of potential nonresponse bias.
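
One simple version of this check, sketched below with placeholder variable names, splits respondents at the median response time and compares the distribution of a binary survey item across early and late respondents; the study may use a different specific test.

```python
# Illustrative early-vs-late respondent comparison for one binary survey item.
import pandas as pd
from scipy import stats

def late_vs_early_test(responses: pd.DataFrame, item_col: str, days_col: str):
    """Chi-square test of whether the item's distribution differs between
    respondents above and below the median response time."""
    late = responses[days_col] > responses[days_col].median()
    table = pd.crosstab(late, responses[item_col])
    chi2, p_value, dof, expected = stats.chi2_contingency(table)
    return chi2, p_value
```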

For all strata of interest, aside from small private firms and other-than-large local governments, random sampling combined with appropriate adjustments for nonresponse ensures generalizability within each stratum. For the small private firms and other-than-large local governments, which are not randomly sampled, even appropriate nonresponse corrections cannot ensure generalizability to the stratum population; instead, we can only hope to generalize to the original convenience sample itself. This is a limitation of our design, which is necessary due to the sparsity of small firms and local governments employing G&R members. However, it is also notable that these strata employ a relatively small proportion of G&R members.

  B. Employer Interviews

The study team will take several steps to encourage selected employers to participate in the interviews. First, we anticipate that restricting the sample to organizations with recent, documented evidence of having employed G&R personnel increases the likelihood that prospective respondents at those organizations will find the prospect of an interview interesting and relevant to them. In particular, because these are organizations that either received recognition from ESGR or had reason to reach out to ESGR with an inquiry, there is reason to believe they may be able to shed light on the topics of interest to the study.

We will obtain, when possible, names and email contact information for appropriate respondents at each firm, to improve the targeting of our outreach to those best-positioned to respond. We will send initial recruitment materials via U.S. mail. These materials will include a cover letter from DoD underscoring the value of the research to DoD, an invitation letter from RAND, and a fact sheet describing the interviews. This package will inform prospective interviewees that they will be contacted soon by RAND to schedule an interview. It also will include the RAND project leader’s business card, so that prospective interviewees may contact us with any questions.

We will then contact prospective interviewees by email (when available) or by phone (if email unavailable or we do not receive a response within a week to the email). Interviews will be scheduled within a few weeks of this outreach, to sustain momentum for those agreeing to participate. We will send reminder emails and/or make reminder phone calls, in the days leading up to the interviews. At the start of interviews, the interviewer will ask the interviewee if he or she has any questions and will offer to resend the materials or review the key details. We will further encourage participation by clearly establishing the confidentiality of the interviews in all communications. In the interview itself, the subject will be asked if he or she agrees to participate and consents to recording the interview.

4.  Tests of Procedures

  A. Employer Survey

The survey instrument was developed by drawing on questions that were first asked of employers as part of DMDC’s 2011 U.S. Department of Defense National Survey of Employers; revising and updating those questions as appropriate based on responses to that survey and on analyses of it included in a 2013 RAND study on employer experiences with G&R personnel (led by one of the current project principal investigators); and adding or replacing questions based on the goals of the current study. We iterated with and incorporated feedback from the DoD sponsor in developing the survey instrument. We also tested the survey instrument by conducting cognitive interviews with five respondents, and questions were revised based on findings from those cognitive interviews.

  B. Employer Interviews

RAND utilizes best practices in its design and execution of interviews and in the subsequent qualitative analysis of interview comments. To develop the interview protocol, the project team reviewed the results of the 2013 RAND study, which included a qualitative interview component. Additionally, we reviewed responses from recent DMDC SOFS-R surveys, and incorporated feedback from the DoD sponsor of the research to refine the protocol. The RAND team includes researchers and support staff with extensive experience conducting interviews.

5.  Statistical Consultation and Information Analysis

  A. Employer Survey

    i. Statistical Analysis

Irineo Cabreros, Ph.D.

Associate Statistician

RAND Corporation

1776 Main Street

Santa Monica, CA 90407

Office: 310-393-0411 x 6361

Fax: 310-393-4818

Email: cabreros@rand.org

    ii. Provide name and organization of person(s) who will actually collect and analyze the collected information.

NORC will collect the data under a formal purchase agreement with RAND:

Heidi Whitmore, M.P.P., Principal Research Scientist, NORC, whitmore-heidi@norc.org, (763) 478-6725

The RAND project team will analyze the survey data under the supervision of the project’s co-principal investigators:

Laura Werber, Ph.D., RAND Corporation, lauraw@rand.org, (310) 393-0411 x6897

Susan Gates, Ph.D., RAND Corporation, sgates@rand.org, (310) 393-0411 x7452



  B. Employer Interviews

    i. Statistical Analysis

Not applicable

    ii. Provide name and organization of person(s) who will actually collect and analyze the collected information.

The RAND project team will collect and analyze the interview data under the supervision of the project’s co-principal investigators:

Laura Werber, Ph.D., RAND Corporation, lauraw@rand.org, (310) 393-0411 x6897

Susan Gates, Ph.D., RAND Corporation, sgates@rand.org, (310) 393-0411 x7452





1 Gates, Susan M., Geoffrey McGovern, Ivan Waggoner, John D. Winkler, Ashley Pierson, Lauren Andrews, and Peter Buryk, Supporting Employers in the Reserve Operational Forces Era: Are Changes Needed to Reservists' Employment Rights Legislation, Policies, or Programs?, Santa Monica, Calif.: RAND Corporation, RR-152-OSD, 2013. As of March 04, 2021: https://www.rand.org/pubs/research_reports/RR152.html
