Introduction and survey methodology
The National Survey for Wales monthly/quarterly survey is a large-scale, random-sample telephone survey covering people across Wales. Addresses are sampled at random and invitations sent by post, requesting that a phone number be provided for each address. Numbers can be provided via an online portal, a telephone enquiry line, or directly to the mobile number of the interviewer for that case.
The interviewer then calls the phone number provided for the address, establishes how many adults live there, and selects one at random (the person with the next birthday) to take part in the survey. If no number is obtained for the address then the interviewer makes a brief, socially-distanced visit to the address to select a respondent and collect a phone number for that person. The achieved sample size each month is approximately 1,000, and the response rate is around 41% of those asked to take part. See Fieldwork report for more details of the sample issued and achieved each month.
The survey lasts 35 minutes on average and covers a range of topics. It began on 24 April 2020, after face-to-face National Survey fieldwork was halted on 16 March 2020 due to the coronavirus outbreak. Interviewers who carried out the face-to-face survey were retrained to deliver the survey by telephone. Respondents are offered a £15 voucher to say thank you for taking part.
The results are used to inform and monitor Welsh Government policies as well as being a valuable source of information for other public sector organisations such as local councils and NHS Wales, voluntary organisations, academics, the media, and members of the public.
This report sets out how the survey adheres to the European Statistical System definition of quality.
Between July 2021 and March 2022, two trials are also being carried out to help assess aspects of the survey design: an online section, and the level of incentive to offer respondents. These trials are also described in this report.
What it measures
The survey covers a broad range of topics including education, exercise, health, social care, use of the internet, community cohesion, wellbeing, employment, and finances. The topics change regularly in order to keep up with changing needs for information. Questions can be explored using our interactive question viewer.
A range of demographic questions is also included, to allow for detailed cross-analysis of the results.
|Mode||35-minute telephone interview.|
|Frequency||Fieldwork is continuous; results are reported after the first two quarters of 2021 (interviews carried out in January to March, then April to June) and then annually (April 2021 to March 2022 etc.).|
|Sample||An achieved sample of around 1,000 respondents a month. See separate Fieldwork report for details of the issued and achieved sample, including by local authority.|
January to June 2021
(Previous quarterly results for January to March 2021; quarterly and monthly results for May to December 2020; and annual results based on face-to-face interviews, from 2019-20 back to 2012-13, are also available. Results for May to December 2020 are based on a recontact survey of previous respondents to the face-to-face National Survey.)
Addresses are sampled randomly from Royal Mail’s small user Postcode Address File (PAF), a list of all UK addresses (excluding institutional accommodation). The address sample is drawn by ONS to ensure that respondents have not recently been selected for a range of other large-scale government surveys, including previous years of the National Survey. Only people aged 16+ are eligible to take part. Proxy interviews are not allowed.
The telephone sample is stratified by local authority.
The address sample is broadly proportionate to local authority population size, but is designed to achieve a minimum effective sample size of 250 in the smallest authorities and 750 in Powys. At addresses containing more than one household, one household is selected at random using a Kish grid method.
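The within-address selection can be illustrated with a short sketch. This is purely illustrative: the survey uses a pre-assigned Kish grid rather than Python’s random module, and the household labels here are invented; the sketch only shows the equal-probability principle.

```python
import random

# Illustrative only: the survey assigns selections via a Kish grid, but
# the underlying principle is an equal-probability random choice among
# the households found at the address.
def select_household(households, rng):
    """Pick one household at random, each with equal probability."""
    return rng.choice(households)

rng = random.Random(2021)  # seeded only so the example is reproducible
chosen = select_household(["household A", "household B"], rng)
```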
In each sampled household, the respondent is randomly selected from all adults (aged 16 or over) in the household who regard the sample address as their main residence, regardless of how long they have lived there. Random selection within the household is undertaken using the ‘next birthday’ method: that is, the person who the interviewer first speaks to is asked which adult in the household has the next birthday; that person is selected to take part. The ‘next birthday’ method is a long-established method for achieving an acceptably random sample in telephone surveys, while minimising the amount of potentially sensitive information about the household that has to be obtained at first contact.
Results are weighted to take account of unequal selection probabilities and for differential non-response, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales.
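The two weighting steps described above can be sketched as follows. This is a simplified illustration, not the survey’s actual weighting specification: the cells and figures are invented, and the real weights are calibrated to the age and sex profile of the population.

```python
# Illustrative sketch of the two weighting steps: inverse-probability
# design weights, then post-stratification to population shares.
# All cell definitions and figures are invented for the example.

def design_weights(selection_probs):
    """Step 1: inverse-probability weights correct unequal selection chances."""
    return [1.0 / p for p in selection_probs]

def poststratify(weights, cells, population_share):
    """Step 2: scale weights so each (e.g. age/sex) cell matches its population share."""
    total = sum(weights)
    cell_totals = {}
    for w, c in zip(weights, cells):
        cell_totals[c] = cell_totals.get(c, 0.0) + w
    # scaling factor per cell: population share / weighted sample share
    factors = {c: population_share[c] / (cell_totals[c] / total) for c in cell_totals}
    return [w * factors[c] for w, c in zip(weights, cells)]

# Two respondents from 1-adult households, two from 2-adult households
w = design_weights([1.0, 1.0, 0.5, 0.5])   # -> [1, 1, 2, 2]
# Calibrate so 'young' adults make up 40% of the weighted sample
w = poststratify(w, ["young", "old", "old", "young"], {"young": 0.4, "old": 0.6})
```

After both steps the weighted sample has the target age profile while the weight total is unchanged.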
No filtering of outliers is applied.
The main purpose of the survey is to provide information on the views and behaviours of people across Wales, covering a wide range of topics relating to them and their local area.
The results help public sector organisations to:
- make decisions that are based on sound evidence
- monitor changes over time
- identify areas of good practice that can be implemented more widely
- identify areas or groups that would benefit from intensive local support, so action can be targeted as effectively as possible
Users and uses
The survey is commissioned and used to help with policy-making by the Welsh Government, Sport Wales, Natural Resources Wales, and the Arts Council of Wales. As well as these organisations, there is a wide range of other users of the survey, including:
- local authorities across Wales, NHS Wales, and Public Health Wales
- other UK government departments and local government organisations
- other public sector organisations
- the media
- members of the public
- voluntary sector, particularly organisations based in Wales
Datasets are deposited at the UK Data Archive, to ensure that the results are widely accessible for research purposes. Results are also linked with other datasets via secure research environments, for example the Secure Anonymised Information Linkage databank (SAIL) at Swansea University. Respondents are able to opt out of having their results linked if they wish.
Strengths and limitations
- A randomly-selected sample with a relatively high response rate. This helps to ensure that the results are representative of people in Wales, including harder-to-reach groups such as younger people. It means that the sample is not skewed in favour of people who are less busy or who have a particular view that they are keen to get across. The survey is weighted to adjust for non-response, which also helps make the results as representative as possible.
- It is carried out by telephone, allowing people to take part who do not use the internet or who have lower levels of literacy. Compared with paper and online surveys, this mode helps ensure that all relevant questions are answered. It also allows interviewers to read out introductions to questions and to help ensure respondents understand what is being asked (without deviating from the question wording), so that respondents can give accurate answers.
- The survey covers a wide range of topics, allowing cross-analyses between topics to be undertaken. A range of demographic questions is also included to allow cross-analysis by age, gender, employment status, etc.
- Where possible, questions are selected that have been used in full-year versions of the National Survey and in other major surveys. This means that they are tried and tested, and that some results can be compared over time and with other countries. Where necessary, questions are adapted (typically shortened) to ensure that they work well by telephone.
- Questions are developed by survey experts, peer-reviewed by the ONS National Survey team, reviewed by experienced interviewers, and where possible trialled by interviewers with a small number of respondents before fieldwork begins.
- The results are available quickly after the end of fieldwork, within around three months. Large numbers of results tables are available in an interactive viewer.
- Use can be made of linked records (that is, survey responses can be analysed in the context of other administrative and survey data that is held about the relevant respondents).
- Although the response rate is high for a telephone survey, a substantial proportion of the individuals sampled still do not take part. This is likely to affect the accuracy of the estimates produced.
- Telephone surveys can be less accessible than other modes for individuals with hearing impairments.
- The survey does not cover people living in communal establishments (for example, care homes, residential youth offender homes, hostels, and student halls).
- Although care has been taken to make the questions as accessible as possible, there will still be instances where respondents do not respond accurately, for example because they have not understood the question correctly or for some reason they are not able or do not wish to provide an accurate answer. Again, this will affect the accuracy of the estimates produced.
- Robust analyses for smaller geographical areas and other small subgroups are not possible.
Several of the strengths and limitations mentioned above relate to the accuracy of the results. Accuracy is discussed in more detail in the following section.
Accuracy
The closeness between an estimated result and the (unknown) true value.
The main threats to accuracy are sources of error, including sampling error and non-sampling error.
Sampling error arises because the estimates are based on a random sample of the population rather than the whole population. The results obtained for any single random sample are likely to vary by chance from the results that would be obtained if the whole population was surveyed (i.e. a census), and this variation is known as the sampling error. In general, the smaller the sample size the larger the potential sampling error.
For a random sample, sampling error can be estimated statistically based on the data collected, using the standard error for each variable. Standard errors are affected by the survey design, and can be used to calculate confidence intervals in order to give a more intuitive idea of the size of sampling error for a particular variable. These issues are discussed in the following subsections.
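As a simple illustration of the first step, the standard error of an estimated proportion from a simple random sample can be computed directly. The figures here are invented, and the survey’s own standard errors also reflect the complex design, as discussed in the following subsections.

```python
import math

# Standard error of a proportion p estimated from a simple random
# sample of size n (illustrative figures, not survey results).
def se_proportion(p, n):
    return math.sqrt(p * (1 - p) / n)

# e.g. an estimate of 60% from 1,000 interviews
se = se_proportion(0.6, 1000)   # about 0.0155, i.e. 1.55 percentage points
```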
Effect of survey design on standard errors
The survey is stratified at local authority level, with different probabilities of selection for people living in different local authorities. Weighting is used to correct for these different selection probabilities, as well as (as noted above) to ensure the results reflect the population characteristics (age and sex) of each local authority.
One of the effects of this complex design and of applying survey weights is that standard errors for the survey estimates are generally higher than the standard errors that would be derived from a simple random sample of the same size. Survey estimates themselves (as opposed to the standard errors and confidence intervals for those estimates) are not affected by the survey design.
The ratio of the standard error of a complex sample to the standard error of a simple random sample (SRS) of the same size is known as the design factor, or “deft”. If the standard error of an estimate in a complex survey is calculated as though it has come from an SRS survey, then multiplying that standard error by the deft gives the true standard error of the estimate which takes into account the complex design.
The ratio of the sampling variance of the complex sample to that of a simple random sample of the same size is the design effect, or 'deff' (which is equal to the deft squared). Dividing the actual sample size of a complex survey by the deff gives the 'effective sample size'. This is the size of an SRS that would have given the same level of precision as did the complex survey design.
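The relationships between deft, deff and effective sample size can be shown with a small worked example (the deft of 1.2 and the standard error are illustrative, not published survey values):

```python
# Worked example of the definitions above, using an illustrative deft of 1.2.
deft = 1.2
deff = deft ** 2          # design effect: deff = deft squared = 1.44
n = 1000                  # actual sample size of the complex survey
effective_n = n / deff    # about 694: an SRS of this size would be equally precise

srs_se = 0.0155           # standard error calculated as if from an SRS
true_se = srs_se * deft   # standard error allowing for the complex design
```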
All cross-analyses produced by the National Survey team, for example in bulletins and in the tables and charts available in our results viewer, take account of the design effect for each variable.
Confidence intervals (‘margin of error’)
Because the National Survey is based on a random sample, standard errors can be used to calculate confidence intervals, sometimes known as the ‘margin of error’, for each survey estimate. The confidence intervals for each estimate give a range within which the ‘true’ value for the population is likely to fall (that is, the figure we would get if the survey covered the entire population).
The most commonly-used confidence interval is a 95% confidence interval. If we carried out the survey repeatedly with 100 different samples of people and for each sample produced an estimate of the same particular population characteristic (for example, satisfaction with life) with 95% confidence intervals around it, the exact estimates and confidence intervals would all vary slightly for the different samples. But we would expect the confidence intervals for about 95 of the 100 samples to contain the true population figure.
The larger the confidence interval, the less precise an estimate is.
95% confidence intervals have been calculated for a range of National Survey variables and are included in the technical report for each year. These intervals have been adjusted to take into account the design of the survey, and are larger than they would be if the survey had been based on a simple random sample of the same size. They equal the point estimate plus or minus approximately 1.96 times the standard error of the estimate (the multiplier of 1.96 varies slightly according to the sample size for the estimate of interest). Confidence intervals are also included in all the charts and tables of results available in our Results viewer.
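Putting the pieces together, a design-adjusted 95% confidence interval for a proportion can be sketched as below. The figures are illustrative, and the 1.96 multiplier is treated as fixed for simplicity.

```python
import math

def ci95(p, n, deft=1.0):
    """Design-adjusted 95% confidence interval for an estimated proportion."""
    se = math.sqrt(p * (1 - p) / n) * deft   # inflate the SRS standard error by deft
    return (p - 1.96 * se, p + 1.96 * se)

# 60% from 1,000 interviews with an illustrative deft of 1.2
low, high = ci95(0.6, 1000, deft=1.2)   # roughly (0.564, 0.636)
```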
Confidence intervals can also be used to help tell whether there is a real difference between two groups (one that is not just due to sampling error, i.e. the particular characteristics of the people that happened to take part in the survey). As a rough guide to interpretation: when comparing two groups, if the confidence intervals around the estimates overlap then it can be assumed that there is no statistically significant difference between the estimates. This approach is not as rigorous as doing a formal statistical test, but is straightforward, widely used and reasonably robust.
Note that compared with a formal test, checking to see whether two confidence intervals overlap is more likely to lead to 'false negatives': incorrect conclusions that there is no real difference, when in fact there is a difference. It is also less likely than a formal test to lead to 'false positives': incorrect conclusions that there is a difference when there is in fact none. However, carrying out many comparisons increases the chance of finding false positives. So when many comparisons are made, for example when producing large numbers of tables of results containing confidence intervals, the conservative nature of the test is an advantage because it reduces (but does not eliminate) the chance of finding false positives.
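The contrast between the rough overlap rule and a formal test can be seen in a small example with invented figures, where two confidence intervals overlap even though a two-sample z-test finds a significant difference (a 'false negative' for the overlap rule):

```python
import math

# Invented figures: two subgroup estimates whose 95% confidence intervals
# overlap even though a formal two-sample z-test finds a real difference.

def ci95(p, n):
    se = math.sqrt(p * (1 - p) / n)
    return (p - 1.96 * se, p + 1.96 * se)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def z_test_differs(p1, n1, p2, n2):
    """Two-sample z-test for a difference in proportions at the 5% level."""
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) / se_diff > 1.96

p1, n1, p2, n2 = 0.60, 500, 0.66, 500
overlap = intervals_overlap(ci95(p1, n1), ci95(p2, n2))  # True: overlap rule suggests no difference
differs = z_test_differs(p1, n1, p2, n2)                 # True: the formal test finds one
```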
'Non-sampling error' means all differences between the survey estimates and true population values except differences due to sampling error. Unlike sampling error, non-sampling error is present in censuses as well as sample surveys. Types of non-sampling error include: coverage error, non-response error, measurement error and processing error.
It is not possible to eliminate non-sampling error altogether, and it is not possible to give statistical estimates of the size of non-sampling error. Substantial efforts have been made to reduce non-sampling error in the National Survey. Some of the key steps taken are discussed in the following subsections.
Measurement error: question development
To reduce measurement error, harmonised or well-established questions are used in the survey where possible. New questions are developed by survey experts and many have been subject to external peer review. A number of questions have also been cognitively tested for face-to-face use, to increase the likelihood that the questions are consistently understood as intended and that respondents can recall the information needed to answer them. Further cognitive testing of questions for the telephone survey will be undertaken shortly. Cognitive testing reports are available on the National Survey webpages.
Non-response error
Non-response (i.e. individuals who are selected but do not take part in the survey) is a key component of non-sampling error. Response rates are therefore an important dimension of survey quality and are monitored closely.
The response rate is defined as the proportion of eligible telephone numbers that yielded an interview.
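As an illustration of the calculation (the counts below are invented, not actual fieldwork figures):

```python
# Invented counts, for illustration only.
interviews = 1025     # achieved interviews
eligible = 2500       # eligible sampled cases, after removing ineligible numbers
response_rate = interviews / eligible   # 0.41, i.e. 41%
```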
The survey results are weighted to take account of differential non-response across age and sex population subgroups, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales. This step is designed to reduce the non-sampling error due to differential non-response by age and sex.
Missing answers occur for several reasons, including refusal or inability to answer a particular question, and cases where the question is not applicable to the respondent. Missing answers are usually omitted from tables and analyses, except where they are of particular interest (for example, a high level of “Don’t know” responses may be of substantive interest).
Measurement error: interview quality checks
Another potential cause of bias is interviewers systematically influencing responses in some way. It is likely that responses will be subject to effects such as social desirability bias (where the answer given is affected by what the respondent perceives to be socially acceptable or desirable). Extensive interviewer training is provided to minimise this effect and interviewers are also closely supervised, with a proportion of interviews verified through interviewer managers listening to recordings of the interviews.
The questionnaire is administered by telephone using a Computer Assisted Telephone Interviewing (CATI) script. This approach allows the interviewer to provide some additional explanation where it is clear that the reason for asking the question or the question meaning is not understood by that respondent. To help them do this, interviewers are provided with background information on some of the questions at the interviewer briefings that take place before fieldwork begins. The script also contains additional information where prompts or further explanations have been found to be needed. However, interviewers are made aware that it is vital to present questions and answer options exactly as set out in the CATI script.
Some answers given are reflected in the wording of subsequent questions or checks (for example, the names of children given are mentioned in questions on children’s schools). This helps the respondent (and interviewer) to understand the questions correctly.
A range of logic checks and interviewer prompts are included in the script to make sure the answers provided are consistent and realistic. Some of these checks are ‘hard checks’: that is, checks used in cases where the respondent’s answer is not consistent with other information previously given by the respondent. In these cases the question has to be asked again, and the response changed, in order to proceed with the interview. Other checks are ‘soft checks’, for responses that seem unlikely (either because of other information provided or because they are outside the usual range) but could be correct. In these cases the interviewer is prompted to confirm with the respondent that the response is indeed correct.
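The hard/soft check logic can be sketched as below. The question, function name and thresholds are hypothetical, not taken from the actual CATI script:

```python
# Hypothetical range check on a numeric answer, illustrating the
# hard/soft distinction; the real CATI script applies checks like this
# during the interview.
def check_weekly_exercise_hours(hours):
    """Classify an answer: 'hard' = impossible, 'soft' = unlikely, None = accepted."""
    if hours < 0 or hours > 168:   # more hours than exist in a week
        return "hard"              # interviewer must re-ask and change the answer
    if hours > 60:                 # possible but implausible
        return "soft"              # interviewer confirms the answer with the respondent
    return None
```

A hard check blocks the interview until the answer is corrected; a soft check only prompts the interviewer to confirm the answer with the respondent.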
Processing error: data validation
The main survey outputs are SPSS data files that are delivered every quarter. For each fieldwork period, two main data files are provided:
- a household dataset, containing responses to the enumeration grid and any information asked of the respondent about other members of the household
- a respondent dataset, containing each respondent’s answers
Each dataset is checked by the survey contractor. A set of checks on the content and format of the datasets is then carried out by Welsh Government and any amendments made by the contractor before the datasets are signed off.
Timeliness and punctuality
Timeliness refers to the lapse of time between publication and the period to which the data refers. Punctuality refers to the time lag between the actual and planned dates of publication.
Results are released around three months after the end of the fieldwork period. This period has been kept as short as possible to ensure that results are timely.
More detailed topic-specific reporting follows, depending on the needs of survey users.
Accessibility and clarity
Accessibility is the ease with which users are able to access the data, also reflecting the format(s) in which the data are available and the availability of supporting information. Clarity refers to the quality and sufficiency of the metadata, illustrations and accompanying advice.
Detailed charts and tables of results are available via an interactive results viewer. Because there are hundreds of variables in the survey and many thousands of possible analyses, only a subset is included in the results viewer. However, further tables and charts can be produced quickly on request.
For further information about the survey results, or if you would like to see a different breakdown of results, contact the National Survey team at firstname.lastname@example.org or on 0300 025 6685.
We take care to ensure that individuals are not identifiable from the published results. We follow the requirements for confidentiality and data access set out in the Code of Practice for Statistics.
We comply with the Welsh language standards for all our outputs. Our website, first releases, Results viewer and Question viewer are published in both Welsh and English.
We aim to write clearly (using plain English / ‘Cymraeg Clir’).
UK Data Archive
Anonymised versions of the survey datasets (from which some information is removed to ensure confidentiality is preserved), together with supporting documentation, are deposited with the UK Data Archive. These datasets may be accessed by registered users for specific research projects.
From time to time, researchers may wish to analyse more detailed data than is available through the Data Archive. Requests for such data should be made to the National Survey team (see contact details below). Requests are considered on a case by case basis, and procedures are in place to keep data secure and confidential.
Methods and definitions
Comparability and coherence
The degree to which data can be compared over both time and domain.
Throughout National Survey statistical bulletins and releases, we highlight relevant comparators as well as information sources that are not directly comparable but provide useful context.
Comparisons with other countries
Wherever possible, survey questions are taken from surveys run elsewhere. This allows for some comparisons across countries to be made (although differences in design and context may affect comparability).
Comparisons over time
Although the telephone survey covers some of the same topics as the face-to-face survey, in some cases using the same or only slightly adapted questions, care should be taken in making comparisons over time. The change of mode could affect the results in a variety of ways. For example, different types of people may be more likely to take part in the different modes; or the mode may affect how people answer questions (See Mixing modes within a social survey: opportunities and constraints for the National Survey for Wales, section 3.2.3 for a fuller review). Comparability is likely to be more problematic for less-factual questions (e.g. about people’s views on public services, as opposed to more-factual questions like whether they have used these services in a particular time period).
The results for most topics covered in the face-to-face National Survey change only slowly over time, so in normal times they can remain a good indicator of the current situation even a year or more after the end of fieldwork. However, the particular circumstances of the fieldwork period for this survey, with major changes to people’s everyday lives, may mean that things change much more quickly than usual; so the results may not remain a good indicator of the current situation for as long as they normally would.
Between July 2021 and March 2022, we are carrying out two trials. Results from these will help inform ways in which the survey might be improved in the future.
Since the start of the COVID-19 pandemic in 2020, the National Survey has taken place over the phone instead of face-to-face as previously. Although the telephone approach has been successful, what can be covered with the telephone survey is to some extent limited.
The previous 45-minute face-to-face survey was shortened to 35 minutes to maintain respondents’ engagement and attention during the telephone interview.
Some questions from the face-to-face survey have not been included in the telephone survey. This includes more sensitive topics that respondents might not feel comfortable covering with an interviewer; the results for such topics could also be affected by social desirability bias (people tending to give the interviewer the answer they think they should give). An online section would also allow information to be presented visually. Finally, some of these topics are asked in self-completion format on other surveys, so using a self-completion format for them in the National Survey will make the results easier to compare.
To help address these issues, from July 2021 to March 2022 we are trialling a follow-up online survey after the telephone interview. Results from the online trial will help inform decisions on the future design of the survey.
The online trial involves a random subsample of 2,000 participants who will be asked to complete a 15-minute online survey after they have completed the telephone survey with an interviewer.
After respondents complete the telephone survey, the interviewer will email and text them the survey link and their Unique Access Code (UAC), which they will need in order to log into the survey. Like the telephone survey, the online questionnaire can be completed in Welsh or English.
If a respondent is not able to complete the online survey (if they don’t have internet for example), they will be offered a telephone version of the online survey to complete over the phone with an interviewer.
For more information, please see the 2021-22 questionnaire.
From May 2020 onwards, respondents were offered a £15 conditional incentive (in the form of a shopping voucher) for taking part in the survey. From July 2021, we are conducting a split-sample trial to look at whether a £10 incentive is as effective as a £15 incentive. The 10,000 respondents not selected for the online trial described above will be split 50:50, with half offered a £10 gift voucher and half offered a £15 gift voucher for completing the telephone survey.
Respondents in the incentives trial will not be asked to complete the online survey.
The question topics covered in the telephone survey are the same for the online and incentives trials. Only the start and end of the telephone survey vary between the trials: mainly the incentive value mentioned and, for online trial respondents, the instructions on how to access the online survey.