
Introduction and survey methodology

The National Survey for Wales is a large-scale, random-sample telephone survey covering people across Wales. Addresses are sampled at random, and invitations are sent by post asking that a phone number be provided for each address. Numbers can be provided via an online portal, a telephone enquiry line, or direct to the mobile number of the interviewer for that case.

The interviewer then calls the phone number provided for the address, establishes how many adults live there, and selects one at random (the person with the next birthday) to take part in the survey. If no number is obtained for the address, the interviewer makes a brief, socially-distanced visit to the address to select a respondent and collect a phone number for that person. The achieved sample size each month is approximately 1,000, and the response rate is around 40% of those asked to take part [footnote 1].

The survey lasts 40 minutes on average and covers a range of topics. It began on 24 April 2020, after face-to-face National Survey fieldwork was halted on 16 March 2020 due to the coronavirus outbreak. Interviewers who carried out the face-to-face survey were retrained to deliver the survey by telephone. Respondents are offered a £15 voucher to say thank you for taking part.

From July 2021, a subsample of respondents was asked to complete an online section following the telephone section. Those who were unable to complete the survey online did the extra section by telephone instead. From April 2022, this approach was extended to cover all respondents.

The survey questionnaire is available on our web pages.

The results are used to inform and monitor Welsh Government policies as well as being a valuable source of information for other public sector organisations such as local councils and NHS Wales, voluntary organisations, academics, the media, and members of the public.

Relevance

The degree to which the statistical product meets user needs for both coverage and content

What it measures

The survey covers a broad range of topics including education, exercise, health, social care, use of the internet, community cohesion, wellbeing, employment, and finances. The topics change regularly in order to keep up with changing needs for information.

A range of demographic questions is also included, to allow for detailed cross-analysis of the results.

The survey content and materials are available from the National Survey web pages. This includes questionnaires and the advance letter sent to each selected household.

Mode

c.30-minute telephone interview plus a c.10-minute online section.

Frequency

Fieldwork is continuous; results are now reported annually (for example results based on interviews carried out between April 2022 and March 2023 were published in July 2023).

Sample size

An achieved sample of around 1,000 respondents a month. See the 2021-22 technical report for details of the issued and achieved sample, including by local authority. (For 2020 to 2021, separate fieldwork reports provide details of issued and achieved interviews.)

Periods available

April 2022 to March 2023.

(Previous annual results for April 2021 to March 2022; quarterly results for January to March 2021; quarterly and monthly results for May to December 2020; and annual results based on face-to-face interviews, from 2019-20 back to 2012-13, are also available.)

Sample frame

Addresses are sampled randomly from Royal Mail’s small user Postcode Address File (PAF), a list of all UK addresses (excluding institutional accommodation). The address sample is drawn by ONS to ensure that respondents have not recently been selected for a range of other large-scale government surveys, including previous years of the National Survey. Only people aged 16+ are eligible to take part. Proxy interviews are not allowed.

Sample design

The telephone sample is stratified by local authority.

The address sample is broadly proportionate to local authority population size but is designed to achieve a minimum effective sample size of 250 in the smallest authorities and 750 in Powys. At addresses containing more than one household, one household is selected at random using a Kish grid method.

In each sampled household, the respondent is randomly selected from all adults (aged 16 or over) who regard the sample address as their main residence, regardless of how long they have lived there. Random selection within the household is undertaken using the ‘next birthday’ method: the person the interviewer first speaks to is asked which adult in the household has the next birthday, and that person is selected to take part. The ‘next birthday’ method is a long-established way of achieving an acceptably random sample in telephone surveys, while minimising the amount of potentially sensitive information about the household that has to be obtained at first contact.
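
As a minimal illustration of the selection rule only (not the fieldwork system itself; the household data and names below are hypothetical):

```python
from datetime import date

def next_birthday_selection(adults, today=None):
    """Illustrative sketch of the 'next birthday' rule: select the
    adult whose birthday falls soonest on or after today.
    (Leap-day birthdays are ignored here for simplicity.)"""
    today = today or date.today()

    def days_until_birthday(dob):
        # Next occurrence of the birthday on or after today
        bday = dob.replace(year=today.year)
        if bday < today:
            bday = dob.replace(year=today.year + 1)
        return (bday - today).days

    return min(adults, key=lambda a: days_until_birthday(a["dob"]))

# Hypothetical three-adult household
household = [
    {"name": "A", "dob": date(1980, 11, 2)},
    {"name": "B", "dob": date(1995, 3, 17)},
    {"name": "C", "dob": date(1972, 7, 30)},
]
print(next_birthday_selection(household)["name"])
```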

Weighting

Results are weighted to take account of unequal selection probabilities and of differential non-response, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales.
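
A minimal sketch of this idea, assuming simple post-stratification to hypothetical age-by-sex population totals (the production weighting may use more sophisticated calibration):

```python
import pandas as pd

# Respondents carry a design weight correcting for unequal selection
# probabilities; weights are then scaled so that the weighted
# age-by-sex distribution matches the population totals.
resp = pd.DataFrame({
    "age_group": ["16-44", "16-44", "45+", "45+"],
    "sex":       ["F", "M", "F", "M"],
    "design_wt": [1.2, 0.8, 1.0, 1.1],
})

# Hypothetical population counts for each age/sex cell
population = {("16-44", "F"): 500_000, ("16-44", "M"): 490_000,
              ("45+", "F"): 620_000, ("45+", "M"): 580_000}

# Weighted total per cell, then rescale so each cell hits its target
cell_totals = resp.groupby(["age_group", "sex"])["design_wt"].transform("sum")
resp["final_wt"] = resp["design_wt"] * resp.apply(
    lambda r: population[(r["age_group"], r["sex"])], axis=1) / cell_totals
print(resp)
```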

Imputation

No imputation.

Outliers

No filtering of outliers.

Primary purpose

The main purpose of the survey is to provide information on the views and behaviours of people across Wales, covering a wide range of topics relating to them and their local area.

The results help public sector organisations to:

  • make decisions that are based on sound evidence
  • monitor changes over time
  • identify areas of good practice that can be implemented more widely
  • identify areas or groups that would benefit from intensive local support, so action can be targeted as effectively as possible

Users and uses

The survey is commissioned and used to help with policy-making by the Welsh Government, Sport Wales, Natural Resources Wales, and the Arts Council of Wales. As well as these organisations, there is a wide range of other users of the survey, including:

  • local authorities across Wales, NHS Wales, and Public Health Wales
  • other UK government departments and local government organisations
  • other public sector organisations
  • academics
  • the media
  • members of the public
  • voluntary sector, particularly organisations based in Wales

The latest data is deposited each autumn at the UK Data Archive, to ensure that the results are widely accessible for research purposes. Results are also linked with other datasets via secure research environments, for example the Secure Anonymised Information Linkage databank (SAIL) at Swansea University. Respondents are able to opt out of having their results linked if they wish.

Strengths and limitations

Strengths
  • A randomly-selected sample with a relatively high response rate. This helps to ensure that the results are representative of people in Wales, including harder-to-reach groups such as younger people. It means that the sample is not skewed in favour of people who are less busy or who have a particular view that they are keen to get across. The survey is weighted to adjust for non-response, which also helps make the results as representative as possible.
  • The first of the two sections of the survey (and where necessary the second section) is carried out by telephone, allowing people to take part who do not use the internet or who have lower levels of literacy. Compared with paper and online surveys, the telephone section helps ensure that all relevant questions are answered. It also allows interviewers to read out introductions to questions and to check that respondents understand what is being asked (without deviating from the question wording), so that respondents can give accurate answers.
  • The second of the two sections is carried out online (or by phone if necessary, e.g. if the respondent doesn’t use the internet). This is better for more sensitive topics or ones where the involvement of an interviewer could affect responses. It is also good for presenting longer lists of information, and where comparability with other online surveys is wanted.
  • Results from the telephone section seem to be generally comparable to those obtained face-to-face, although we will continue to explore this further as the survey progresses.
  • The telephone mode appears to be as accessible as face-to-face mode for people with hearing impairments, given that the proportion of respondents reporting hearing impairments did not fall when the primary mode switched from face-to-face to telephone. For respondents who would struggle with a telephone interview for accessibility reasons, face-to-face interviews are carried out (a small proportion of the total number of interviews).
  • The survey covers a wide range of topics, allowing cross-analyses between topics to be undertaken. A range of demographic questions is also included to allow cross-analysis by age, sex, employment status, etc.
  • Where possible, questions are selected that have been used in full-year versions of the National Survey and in other major surveys. This means that they are tried and tested, and that some results can be compared over time and with other countries. Where necessary, questions are adapted (typically shortened) to ensure that they work well by telephone or online.
  • Questions are developed by survey experts, peer-reviewed by the ONS National Survey team, reviewed by experienced interviewers, cognitively tested with members of the public, and where possible trialled by interviewers with a small number of respondents before fieldwork begins.
  • The results are available quickly after the end of fieldwork, within around three months. Large numbers of results tables are available in an interactive viewer.
  • Use can be made of linked records (that is, survey responses can be analysed in the context of other administrative and survey data that is held about the relevant respondents).
Limitations
  • Although the response rate is high for a telephone survey, a substantial proportion of sampled individuals still do not take part. This is likely to affect the accuracy of the estimates produced.
  • A proportion of respondents (around 15%) who complete the first section by telephone choose not to complete the second, online section.
  • The survey does not cover people living in communal establishments (e.g. care homes, residential youth offender homes, hostels, and student halls).
  • Although care has been taken to make the questions as accessible as possible, there will still be instances where respondents do not respond accurately, for example because they have not understood the question correctly or for some reason they are not able or do not wish to provide an accurate answer. Again, this will affect the accuracy of the estimates produced.
  • The sample size means that robust analyses for smaller geographical areas and other small subgroups are not always possible.

Several of the strengths and limitations mentioned above relate to the accuracy of the results. Accuracy is discussed in more detail in the following section.

Accuracy

The closeness between an estimated result and the (unknown) true value.

The main threats to accuracy are sources of error, including sampling error and non-sampling error.

Sampling error

Sampling error arises because the estimates are based on a random sample of the population rather than the whole population. The results obtained for any single random sample are likely to vary by chance from the results that would be obtained if the whole population was surveyed (i.e. a census), and this variation is known as the sampling error. In general, the smaller the sample size the larger the potential sampling error.

For a random sample, sampling error can be estimated statistically based on the data collected, using the standard error for each variable. Standard errors are affected by the survey design, and can be used to calculate confidence intervals in order to give a more intuitive idea of the size of sampling error for a particular variable. These issues are discussed in the following subsections.

Effect of survey design on standard errors

The survey is stratified at local authority level, with different probabilities of selection for people living in different local authorities. Weighting is used to correct for these different selection probabilities, as well as (as noted above) to ensure the results reflect the population characteristics (age and sex) of each local authority.

One of the effects of this complex design and of applying survey weights is that standard errors for the survey estimates are generally higher than the standard errors that would be derived from a simple random sample of the same size. [footnote 2]

The ratio of the standard error of a complex sample to the standard error of a simple random sample (SRS) of the same size is known as the design factor, or “deft”. If the standard error of an estimate in a complex survey is calculated as though it has come from an SRS survey, then multiplying that standard error by the deft gives the true standard error of the estimate which takes into account the complex design.

The ratio of the sampling variance of the complex sample to that of a simple random sample of the same size is the design effect, or “deff” (which is equal to the deft squared). Dividing the actual sample size of a complex survey by the deff gives the “effective sample size”. This is the size of an SRS that would have given the same level of precision as did the complex survey design.
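
To make these relationships concrete, here is a worked example with purely illustrative numbers (actual defts vary by variable and year):

```python
import math

# Suppose an estimated proportion p = 0.40 from n = 1,000 interviews,
# with a design factor (deft) of 1.2 for this variable.
p, n, deft = 0.40, 1000, 1.2

se_srs = math.sqrt(p * (1 - p) / n)  # standard error assuming SRS
se_adjusted = deft * se_srs          # true SE allowing for the design
deff = deft ** 2                     # design effect (deft squared)
n_effective = n / deff               # effective sample size

print(f"SRS SE = {se_srs:.4f}")                      # 0.0155
print(f"Design-adjusted SE = {se_adjusted:.4f}")     # 0.0186
print(f"Effective sample size = {n_effective:.0f}")  # 694
```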

All cross-analyses produced by the National Survey team, for example in bulletins and in the tables and charts available in our results viewer, take account of the design effect for each variable.

Confidence intervals (‘margin of error’)

Because the National Survey is based on a random sample, standard errors can be used to calculate confidence intervals, sometimes known as the ‘margin of error’, for each survey estimate. The confidence intervals for each estimate give a range within which the ‘true’ value for the population is likely to fall (that is, the figure we would get if the survey covered the entire population).

The most commonly-used confidence interval is a 95% confidence interval. If we carried out the survey repeatedly with 100 different samples of people, and for each sample produced an estimate of the same population characteristic (e.g. satisfaction with life) with a 95% confidence interval around it, the exact estimates and confidence intervals would vary slightly between samples. But we would expect the confidence intervals for about 95 of the 100 samples to contain the true population figure.

The larger the confidence interval, the less precise an estimate is.

95% confidence intervals have been calculated for a range of National Survey variables and are included in the technical report for each year. These intervals have been adjusted to take into account the design of the survey, and are larger than they would be if the survey had been based on a simple random sample of the same size. They equal the point estimate plus or minus approximately 1.96 × the standard error of the estimate [footnote 3]. Confidence intervals are also included in all the charts and tables of results available in our Results viewer.

Confidence intervals can also be used to help tell whether there is a real difference between two groups (one that is not just due to sampling error, i.e. the particular characteristics of the people that happened to take part in the survey). As a rough guide to interpretation: when comparing two groups, if the confidence intervals around the estimates overlap then it can be assumed that there is no statistically significant difference between the estimates. This approach is not as rigorous as doing a formal statistical test, but is straightforward, widely used and reasonably robust.

Note that compared with a formal test, checking to see whether two confidence intervals overlap is more likely to lead to "false negatives": incorrect conclusions that there is no real difference, when in fact there is a difference. It is also less likely than a formal test to lead to "false positives": incorrect conclusions that there is a difference when there is in fact none. However, carrying out many comparisons increases the chance of finding false positives. So when many comparisons are made, for example when producing large numbers of tables of results containing confidence intervals, the conservative nature of the test is an advantage because it reduces (but does not eliminate) the chance of finding false positives.
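
A short sketch, with hypothetical figures, of how a design-adjusted 95% confidence interval can be computed and the overlap rule applied:

```python
import math

def design_adjusted_ci(p, n, deft, z=1.96):
    """95% confidence interval for a proportion, inflating the SRS
    standard error by the design factor (illustrative values only)."""
    se = deft * math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical comparison of two subgroups
ci_a = design_adjusted_ci(p=0.55, n=400, deft=1.2)  # (0.492, 0.608)
ci_b = design_adjusted_ci(p=0.45, n=450, deft=1.2)  # (0.405, 0.505)

# Rough overlap rule: non-overlapping intervals suggest a real difference
overlap = ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]
print("No significant difference" if overlap else "Likely real difference")
```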

Non-sampling error

'Non-sampling error' means all differences between the survey estimates and true population values except differences due to sampling error. Unlike sampling error, non-sampling error is present in censuses as well as sample surveys. Types of non-sampling error include: coverage error, non-response error, measurement error and processing error.

It is not possible to eliminate non-sampling error altogether, and it is not possible to give statistical estimates of the size of non-sampling error. Substantial efforts have been made to reduce non-sampling error in the National Survey. Some of the key steps taken are discussed in the following subsections.

Measurement error: question development

To reduce measurement error, harmonised or well-established questions are used in the survey where possible. New questions are developed by survey experts and many have been subject to external peer review, with a subset also cognitively tested with members of the public. A number of questions have also been cognitively tested for face-to-face use, to increase the likelihood that the questions are consistently understood as intended and that respondents can recall the information needed to answer them. Reports on question review and testing are available on the National Survey webpages.

Non-response

Non-response (i.e. individuals who are selected but do not take part in the survey) is a key component of non-sampling error. Response rates are therefore an important dimension of survey quality and are monitored closely.

The response rate is the proportion of eligible telephone numbers that yielded an interview, and is defined as:

Response rate = completed interviews ÷ (total sample − ineligible telephone numbers)
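
For illustration, using purely hypothetical figures: if 2,500 telephone numbers were in the sample, 150 proved ineligible, and 940 interviews were completed, the response rate would be 940 / (2,500 - 150) = 40%.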

The survey results are weighted to take account of differential non-response across age and sex population subgroups, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales. This step is designed to reduce the non-sampling error due to differential non-response by age and sex.

Missing answers

Missing answers occur for several reasons, including refusal or inability to answer a particular question, and cases where the question is not applicable to the respondent. These are minimised by not actively presenting refusal and Don’t Know codes to respondents in telephone mode (although these are generally presented to respondents in the online section, for ethical reasons). Missing answers are usually omitted from tables and analyses, except where they are of particular interest (e.g. a high level of “Don’t know” responses may be of substantive interest).

Measurement error: interview quality checks

Another potential cause of bias is interviewers systematically influencing responses in some way. It is likely that responses will be subject to effects such as social desirability bias (where the answer given is affected by what the respondent perceives to be socially acceptable or desirable). Extensive interviewer training is provided to minimise this effect, and interviewers are also closely supervised, with a proportion of interviews verified through interviewer managers listening to recordings of the interviews.

The telephone questionnaire is administered using a Computer Assisted Telephone Interviewing (CATI) script. This approach allows the interviewer to provide some additional explanation where it is clear that the reason for asking the question or the question meaning is not understood by that respondent. To help them do this, interviewers are provided with background information on some of the questions at the interviewer briefings that take place before fieldwork begins. The script also contains additional information where prompts or further explanations have been found to be needed. However, interviewers are made aware that it is vital to present questions and answer options exactly as set out in the CATI script.

Some answers given are reflected in the wording of subsequent questions or checks (e.g. the names of children given are mentioned in questions on children’s schools). This helps the respondent (and interviewer) to understand the questions correctly.

A range of logic checks and interviewer prompts are included in the script to make sure the answers provided are consistent and realistic. Some of these checks are ‘hard checks’: that is, checks used in cases where the respondent’s answer is not consistent with other information previously given by the respondent. In these cases the question has to be asked again, and the response changed, in order to proceed with the interview. Other checks are ‘soft checks’, for responses that seem unlikely (either because of other information provided or because they are outside the usual range) but could be correct. In these cases the interviewer is prompted to confirm with the respondent that the response is indeed correct.
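
The variable names and thresholds below are hypothetical (not taken from the actual CATI script), but they illustrate the difference between hard and soft checks:

```python
def check_age(age, years_at_address):
    """Illustrative sketch of hard and soft logic checks."""
    # Hard check: inconsistent answers block progress until corrected
    if years_at_address > age:
        raise ValueError("Years at address cannot exceed age; re-ask question")
    # Soft check: an unlikely but possible answer prompts confirmation
    if age > 105:
        print("Interviewer prompt: please confirm age with respondent")

check_age(age=34, years_at_address=10)   # passes silently
check_age(age=107, years_at_address=50)  # triggers the soft-check prompt
```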

Similar checks are included in the online section of the survey. Respondents are able to contact their interviewer or the survey enquiry line if they have any difficulties. Checks are in place to ensure the online section is completed by the same person who was selected to complete the telephone section.

Processing error: data validation

The main survey outputs are SPSS data files that are delivered every quarter. For each fieldwork period, two main data files are provided:

  • a household dataset, containing responses to the enumeration grid and any information asked of the respondent about other members of the household
  • a respondent dataset, containing each respondent’s answers

Each dataset is checked by the survey contractor. A set of checks on the content and format of the datasets is then carried out by Welsh Government and any amendments made by the contractor before the datasets are signed off.
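
As an illustrative sketch only (the file and column names are hypothetical, and the real checking process is more extensive), such content and format checks might look like the following, assuming the pyreadstat package for reading SPSS files:

```python
import pyreadstat  # reads SPSS .sav files

# Hypothetical respondent dataset for one fieldwork period
df, meta = pyreadstat.read_sav("respondent_2022_23.sav")

# Basic content/format checks before sign-off
assert df["serial"].is_unique, "Duplicate respondent serial numbers"
assert df["age"].between(16, 120).all(), "Age outside expected range"
required = {"serial", "age", "sex", "local_authority"}
missing = required - set(df.columns)
assert not missing, f"Missing expected columns: {missing}"
```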

Timeliness and punctuality

Timeliness refers to the lapse of time between publication and the period to which the data refers. Punctuality refers to the time lag between the actual and planned dates of publication.

Results are released around three months after the end of the fieldwork period. This period has been kept as short as possible to ensure that results are timely.

More detailed topic-specific reporting follows, depending on the needs of survey users.

Accessibility and clarity

Accessibility is the ease with which users are able to access the data, also reflecting the format(s) in which the data are available and the availability of supporting information. Clarity refers to the quality and sufficiency of the metadata, illustrations and accompanying advice.

Publications

All reports are available to download from the National Survey web pages, which have been designed to be easy to navigate.

Detailed charts and tables of results are available via an interactive results viewer. Because there are hundreds of variables in the survey and many thousands of possible analyses, only a subset is included in the results viewer. However, further tables and charts can be produced quickly on request.

For further information about the survey results, or if you would like to see a different breakdown of results, contact the National Survey team at surveys@gov.wales or on 0300 025 6685.

Disclosure control

We take care to ensure that individuals are not identifiable from the published results. We follow the requirements for confidentiality and data access set out in the Code of Practice for Statistics.

Language requirements

We comply with the Welsh language standards for all our outputs. Our website, first releases, Results viewer and Question viewer are published in both Welsh and English.

We aim to write clearly, using plain English / ‘Cymraeg Clir’.

UK Data Archive

Anonymised versions of the survey datasets (from which some information is removed to ensure confidentiality is preserved), together with supporting documentation, are deposited with the UK Data Archive in the autumn following the end of the fieldwork period. These datasets may be accessed by registered users for specific research projects.

From time to time, researchers may wish to analyse more detailed data than is available through the Data Archive. Requests for such data should be made to the National Survey team (see contact details below). Requests are considered on a case-by-case basis, and procedures are in place to keep data secure and confidential.

Methods and definitions

Each survey publication contains a glossary with descriptions of the more general terms used in the output. An interactive question viewer and copies of the questionnaires are also available.

Comparability and coherence

The degree to which data can be compared over both time and domain.

Throughout National Survey statistical bulletins and releases, we highlight relevant comparators as well as information sources that are not directly comparable but provide useful context.

Comparisons with other countries

Wherever possible, survey questions are taken from surveys that run elsewhere. This allows for some comparisons across countries to be made (although differences in design and context may affect comparability).

Comparisons over time

Although the telephone / online survey covers some of the same topics as the face-to-face survey, in some cases using the same or only slightly adapted questions, care should be taken in making comparisons over time. The change of mode could affect the results in a variety of ways. For example, different types of people may be more likely to take part in the different modes; or the modes may affect how people answer questions [footnote 4]. Comparability is likely to be more problematic for less-factual questions (e.g. about people’s views on public services, as opposed to more-factual questions like whether they have used these services in a particular time period).

Footnotes

[1] See Fieldwork report for more details of the sample issued and achieved each month.

[2] Survey estimates themselves (as opposed to the standard errors and confidence intervals for those estimates) are not affected by the survey design.

[3] The value of 1.96 varies slightly according to the sample size for the estimate of interest.

[4] See 'Mixing modes within a social survey: opportunities and constraints for the National Survey for Wales', section 3.2.3 for a fuller review.

Contact details

Chris McGowan
Telephone: 0300 025 1067
Email: surveys@gov.wales

Media: 0300 025 8099