National Survey for Wales: 2020 monthly survey quality report
This report sets out how the survey adheres to the European Statistical System definition of quality, and provides a summary of methods used to compile the output.
Introduction and survey methodology
The National Survey for Wales: 2020 monthly survey is a large-scale, random sample telephone survey covering people across Wales. The sample is selected from people who previously took part in the face-to-face version of the National Survey and agreed to take part in further research. The achieved sample size each month is approximately 1,000 (although it was higher in May 2020, at just over 3,000 respondents), and the response rate is around 74% of those asked to take part. See the Fieldwork report for more details of the sample issued and achieved each month.
The survey lasts 20 minutes on average and covers a range of topics. It began on 24 April 2020, after face-to-face National Survey fieldwork was halted on 16 March 2020 due to the coronavirus outbreak. Interviewers who carried out the face-to-face survey were retrained to deliver the survey by telephone. As with the face-to-face survey, an advance letter is posted to all sampled respondents. Respondents are offered a £10 voucher to say thank you for taking part.
The monthly survey covers topics that are of particular relevance to the coronavirus situation. It also includes a small number of topics that would otherwise have been covered in the face-to-face survey but for which direct comparability with earlier results is not essential. Questionnaires are available on our web pages.
The results are used to inform and monitor Welsh Government policies as well as being a valuable source of information for other public sector organisations such as local councils and NHS Wales, voluntary organisations, academics, the media, and members of the public.
The remainder of this report sets out how the survey adheres to the European Statistical System definition of quality, and summarises the methods used to compile the output.
Relevance
| Characteristic | Details |
| --- | --- |
| What it measures | The survey covers a broad range of topics including education, exercise, health, social care, use of the internet, community cohesion, wellbeing, employment, and finances. The topics change from month to month in order to keep up with changing needs for information. Questions can be explored using our interactive question viewer. A range of demographic questions is also included, to allow for detailed cross-analysis of the results. The survey content and materials are available from the National Survey web pages. This includes questionnaires and the advance letter sent to each selected household. |
| Mode | 20-minute telephone interview. |
| Frequency | Monthly. Fieldwork began in late April 2020, with topics updated at the start of every month. The end date for the survey will be kept under review. |
| Sample size | An achieved sample of around 1,000 respondents a month, except for the first month (May 2020) when 3,000 interviews were achieved. See the separate Fieldwork report for details of the issued and achieved sample, including by local authority. The Fieldwork report is updated each month. |
| Periods available | Results are based on interviews carried out within the month prior to publication (with the exception of the first month's fieldwork, which ran from late April to late May). The first results are published around four weeks after fieldwork is completed for each month. |
| Sample frame | Respondents are sampled randomly from those who took part in the face-to-face version of the National Survey in 2019-20, and who agreed to be recontacted for future research (74.3% of all participants in the face-to-face survey). For the original face-to-face survey, addresses were sampled randomly from Royal Mail's small user Postcode Address File (PAF), a list of all UK addresses. The address sample was drawn by the Office for National Statistics (ONS) to ensure that respondents had not recently been selected for a range of other large-scale government surveys, including previous years of the National Survey. |
| Sample design | The telephone sample is drawn from those agreeing to take part in further research when originally interviewed face-to-face. The characteristics of the face-to-face sample mean that the telephone sample is stratified by local authority. The face-to-face address sample was broadly proportionate to local authority population size, but with a minimum effective sample size of 250 in the smallest authorities and 750 in Powys. At addresses containing more than one household, one household was selected at random. In each sampled household, the respondent was randomly selected from all adults (aged 16 or over) who regarded the sample address as their main residence, regardless of how long they had lived there. (An illustrative sketch of these selection stages follows this table.) |
| Weighting | Results are weighted to take account of unequal selection probabilities and for differential non-response, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales. |
| Imputation | No imputation. |
| Outliers | No filtering of outliers. |
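To illustrate the two random selection stages in the sample design row (one household per address, then one adult per household), here is a minimal sketch; the address contents are invented examples:

```python
import random

# Minimal sketch of the selection stages described in the table above.
# The households and adults listed are invented examples.

households_at_address = ["household A", "household B"]  # multi-household address
household = random.choice(households_at_address)        # select one at random

adults_in_household = ["adult 1", "adult 2", "adult 3"]  # adults aged 16+
respondent = random.choice(adults_in_household)          # select one at random

print(f"Selected: {household} -> {respondent}")
```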
Primary purpose
The main purpose of the survey is to provide information on the views and behaviours of adults in Wales, covering a wide range of topics relating to them and their local area.
The results help public sector organisations to:
- make decisions that are based on sound evidence
- monitor changes over time
- identify areas of good practice that can be implemented more widely
- identify areas or groups that would benefit from intensive local support, so action can be targeted as effectively as possible
Most topics on the monthly survey are included to help understand the situation as the country responds to the coronavirus outbreak. The information is valuable in the short term, but also has a longer-term use in understanding what has happened and how to respond to future major disruptions of this kind.
A small number of factual topics are also included (for example, armed forces membership in May). These are topics where the need for information would have been met by the 2020-21 face-to-face survey had it gone ahead, but where the information can equally well be collected by phone during this unusual period, and comparisons with previous face-to-face results are not essential.
Users and uses
The survey is commissioned and used to help with policy-making by the Welsh Government, Sport Wales, Natural Resources Wales, and the Arts Council of Wales. As well as these organisations, there is a wide range of other users of the survey, including:
- local authorities across Wales, NHS Wales, and Public Health Wales
- other UK government departments and local government organisations
- other public sector organisations
- academics
- the media
- members of the public
- the voluntary sector, particularly organisations based in Wales
Datasets are deposited at the UK Data Archive to ensure that the results are widely accessible for research purposes. Results are also linked with other datasets via secure research environments, for example the Secure Anonymised Information Linkage databank (SAIL) at Swansea University. Respondents are able to opt out of having their results linked if they wish.
Strengths and limitations
Strengths
- A randomly-selected sample with a high response rate. This helps to ensure that the results are representative of people in Wales, including harder-to-reach groups such as younger people. It means that the sample is not skewed in favour of people who are less busy or who have a particular view that they are keen to get across. The survey is weighted to adjust for non-response, which also helps make the results as representative as possible.
- It is carried out by telephone, allowing people to take part who do not use the internet or who have lower levels of literacy. Compared with paper and online surveys, this mode helps ensure that all relevant questions are answered. It also allows interviewers to read out introductions to questions and to help ensure respondents understand what is being asked (without deviating from the question wording), so that respondents can give accurate answers.
- The survey covers a wide range of topics, allowing cross-analyses between topics to be undertaken. A range of demographic questions is also included to allow cross-analysis by age, gender, employment status, etc.
- Where possible, questions are selected that have been used in full-year versions of the National Survey and in other major surveys. This means that they are tried and tested, and that some results can be compared over time and with other countries. Where necessary, questions are adapted (typically shortened) to ensure that they work well by telephone.
- Questions are developed by survey experts, peer-reviewed by the ONS National Survey team, reviewed by experienced interviewers, and trialled by interviewers with a small number of respondents before fieldwork begins.
- The results are available quickly after the end of fieldwork: within around four weeks. Large numbers of results tables are available in an interactive viewer.
- Use can be made of linked records (that is, survey responses can be analysed in the context of other administrative and survey data that is held about the relevant respondents).
Limitations
- Although the response rate is high for a telephone survey, there is still a substantial proportion of the individuals originally sampled for the face-to-face National Survey who do not take part. This is likely to affect the accuracy of the estimates produced.
- The survey does not cover people living in communal establishments (for example, care homes, residential youth offender homes, hostels, and student halls).
- Although care has been taken to make the questions as accessible as possible, there will still be instances where respondents do not respond accurately, for example because they have not understood the question correctly or for some reason they are not able or do not wish to provide an accurate answer. Again, this will affect the accuracy of the estimates produced.
- Due to the short timescales for putting together the survey, it has not yet been possible to carry out cognitive testing of questions used over the telephone. Testing is planned for the coming months.
- Robust analyses for smaller geographical areas and other small subgroups are not possible.
Several of the strengths and limitations mentioned above relate to the accuracy of the results. Accuracy is discussed in more detail in the following section.
Accuracy
The closeness between an estimated result and the (unknown) true value.
The main threats to accuracy are sources of error, including sampling error and non-sampling error.
Sampling error
Sampling error arises because the estimates are based on a random sample of the population rather than the whole population. The results obtained for any single random sample are likely to vary by chance from the results that would be obtained if the whole population was surveyed (i.e. a census), and this variation is known as the sampling error. In general, the smaller the sample size the larger the potential sampling error.
For a random sample, sampling error can be estimated statistically based on the data collected, using the standard error for each variable. Standard errors are affected by the survey design, and can be used to calculate confidence intervals in order to give a more intuitive idea of the size of sampling error for a particular variable. These issues are discussed in the following subsections.
Effect of survey design on standard errors
The survey is stratified at local authority level, with different probabilities of selection for people living in different local authorities. Weighting is used to correct for these different selection probabilities, as well as (as noted above) to ensure the results reflect the population characteristics (age and sex) of each local authority.
One of the effects of this complex design and of applying survey weights is that standard errors for the survey estimates are generally higher than the standard errors that would be derived from a simple random sample of the same size. Survey estimates themselves (as opposed to the standard errors and confidence intervals for those estimates) are not affected by the survey design.
The ratio of the standard error of a complex sample to the standard error of a simple random sample (SRS) of the same size is known as the design factor, or 'deft'. If the standard error of an estimate in a complex survey is calculated as though it has come from an SRS survey, then multiplying that standard error by the deft gives the true standard error of the estimate which takes into account the complex design.
The ratio of the sampling variance of the complex sample to that of a simple random sample of the same size is the design effect, or 'deff' (which is equal to the deft squared). Dividing the actual sample size of a complex survey by the deff gives the 'effective sample size'. This is the size of an SRS that would have given the same level of precision as did the complex survey design.
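As a worked illustration of these relationships (the sample size, deft, and standard error below are invented for the example, not taken from the survey):

```python
# Illustration of the deft / deff / effective sample size relationships
# described above. All figures are invented.

n = 1000       # achieved sample size for the month
deft = 1.2     # design factor for a particular variable (assumed)

deff = deft ** 2             # design effect = deft squared -> 1.44
n_effective = n / deff       # SRS size giving the same precision -> ~694

se_srs = 0.0158              # standard error if this were an SRS (assumed)
se_true = se_srs * deft      # true standard error under the complex design

print(f"deff = {deff:.2f}, effective n = {n_effective:.0f}, "
      f"true SE = {se_true:.4f}")
```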
All cross-analyses produced by the National Survey team, for example in bulletins and in the tables and charts available in our results viewer, take account of the design effect for each variable.
Confidence intervals (‘margin of error’)
Because the National Survey is based on a random sample, standard errors can be used to calculate confidence intervals, sometimes known as the ‘margin of error’, for each survey estimate. The confidence intervals for each estimate give a range within which the ‘true’ value for the population is likely to fall (that is, the figure we would get if the survey covered the entire population).
The most commonly-used confidence interval is a 95% confidence interval. If we carried out the survey repeatedly with 100 different samples of people, and for each sample produced an estimate of the same population characteristic (for example, satisfaction with life) with a 95% confidence interval around it, the exact estimates and confidence intervals would vary slightly from sample to sample. But we would expect the confidence intervals for about 95 of the 100 samples to contain the true population figure.
The larger the confidence interval, the less precise an estimate is.
95% confidence intervals have been calculated for a range of National Survey variables and are included in the technical report for each year. These intervals have been adjusted to take into account the design of the survey, and are larger than they would be if the survey had been based on a simple random sample of the same size. They equal the point estimate plus or minus approximately 1.96 × the standard error of the estimate (the exact multiplier varies slightly according to the sample size for the estimate of interest). Confidence intervals are also included in all the charts and tables of results available in our Results viewer.
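As an illustrative sketch of this calculation for an estimated proportion (the proportion, sample size, and deft value below are invented):

```python
import math

# Sketch: 95% confidence interval for a survey proportion, adjusted
# for the complex design via the design factor. Figures are invented.

p = 0.62      # estimated proportion (e.g. satisfied with a service)
n = 1000      # achieved sample size
deft = 1.2    # design factor for this variable (assumed)

se = math.sqrt(p * (1 - p) / n) * deft   # SRS standard error, scaled by deft
half_width = 1.96 * se                   # 95% interval, normal approximation

print(f"95% CI: {p - half_width:.3f} to {p + half_width:.3f}")
```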
Confidence intervals can also be used to help tell whether there is a real difference between two groups (one that is not just due to sampling error, i.e. the particular characteristics of the people that happened to take part in the survey). As a rough guide to interpretation: when comparing two groups, if the confidence intervals around the estimates overlap then it can be assumed that there is no statistically significant difference between the estimates. This approach is not as rigorous as doing a formal statistical test, but is straightforward, widely used and reasonably robust.
Note that compared with a formal test, checking to see whether two confidence intervals overlap is more likely to lead to 'false negatives': incorrect conclusions that there is no real difference, when in fact there is a difference. It is also less likely than a formal test to lead to 'false positives': incorrect conclusions that there is a difference when there is in fact none. However, carrying out many comparisons increases the chance of finding false positives. So when many comparisons are made, for example when producing large numbers of tables of results containing confidence intervals, the conservative nature of the test is an advantage because it reduces (but does not eliminate) the chance of finding false positives.
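A minimal sketch of the overlap rule of thumb described above (the interval bounds are hypothetical):

```python
def intervals_overlap(lo1, hi1, lo2, hi2):
    """True if two confidence intervals overlap, suggesting (as a rough
    guide only) no statistically significant difference."""
    return lo1 <= hi2 and lo2 <= hi1

# Hypothetical 95% intervals for two groups
print(intervals_overlap(0.58, 0.66, 0.63, 0.71))  # True: treat as no real difference
print(intervals_overlap(0.58, 0.66, 0.68, 0.76))  # False: treat as a real difference
```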
Non-sampling error
'Non-sampling error' means all differences between the survey estimates and true population values except differences due to sampling error. Unlike sampling error, non-sampling error is present in censuses as well as sample surveys. Types of non-sampling error include: coverage error, non-response error, measurement error and processing error.
It is not possible to eliminate non-sampling error altogether, and it is not possible to give statistical estimates of the size of non-sampling error. Substantial efforts have been made to reduce non-sampling error in the National Survey. Some of the key steps taken are discussed in the following subsections.
Measurement error: question development
To reduce measurement error, harmonised or well-established questions are used in the survey where possible. New questions are developed by survey experts and many have been subject to external peer review. A number of questions have also been cognitively tested for face-to-face use, to increase the likelihood that the questions are consistently understood as intended and that respondents can recall the information needed to answer them. Further cognitive testing of questions for the telephone survey will be undertaken shortly. Cognitive testing reports are available on the National Survey webpages.
Non-response
Non-response (i.e. individuals who are selected but do not take part in the survey) is a key component of non-sampling error. Response rates are therefore an important dimension of survey quality and are monitored closely.
The response rate is the proportion of eligible telephone numbers that yielded an interview.
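Expressed as a formula (a sketch based on the standard definition; the survey's detailed eligibility rules may differ):

$$\text{Response rate} = \frac{\text{number of achieved interviews}}{\text{number of eligible telephone numbers issued}} \times 100\%$$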
The survey results are weighted to take account of differential non-response across age and sex population subgroups, i.e. to ensure that the age and sex distribution of the responding sample matches that of the population of Wales. This step is designed to reduce the non-sampling error due to differential non-response by age and sex.
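As an illustrative sketch of this kind of adjustment (simple post-stratification to age and sex cells; all the shares below are invented, and the survey's actual weighting scheme may differ):

```python
# Sketch: post-stratification so the weighted sample matches the
# age/sex distribution of the population. All shares are invented.

population_share = {                       # population of Wales by cell
    ("16-44", "female"): 0.24, ("16-44", "male"): 0.24,
    ("45+",   "female"): 0.27, ("45+",   "male"): 0.25,
}
sample_share = {                           # responding sample by cell
    ("16-44", "female"): 0.20, ("16-44", "male"): 0.16,
    ("45+",   "female"): 0.34, ("45+",   "male"): 0.30,
}

# Cell weight = population share / sample share, so that the weighted
# age/sex distribution of respondents reproduces the population's.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

for cell, weight in sorted(weights.items()):
    print(cell, round(weight, 2))
```

In practice, a design weight correcting for unequal selection probabilities would typically be applied first, with the non-response adjustment calculated on the design-weighted distribution.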
Missing answers
Missing answers occur for several reasons, including refusal or inability to answer a particular question, and cases where the question is not applicable to the respondent. Missing answers are usually omitted from tables and analyses, except where they are of particular interest (for example, a high level of 'Don’t know' responses may be of substantive interest).
Measurement error: interview quality checks
Another potential cause of bias is interviewers systematically influencing responses in some way. It is likely that responses will be subject to effects such as social desirability bias (where the answer given is affected by what the respondent perceives to be socially acceptable or desirable). Extensive interviewer training is provided to minimise this effect, and interviewers are also closely supervised, with a proportion of interviews verified through 'back-checking'.
The questionnaire is administered by telephone using a Computer Assisted Telephone Interviewing (CATI) script. This approach allows the interviewer to provide some additional explanation where it is clear that the reason for asking the question or the question meaning is not understood by that respondent. To help them do this, interviewers are provided with background information on some of the questions at the interviewer briefings that take place before fieldwork begins. The script also contains additional information where prompts or further explanations have been found to be needed. However, interviewers are made aware that it is vital to present questions and answer options exactly as set out in the CATI script.
Some answers given are reflected in the wording of subsequent questions or checks (for example, the names of children given are mentioned in questions on children's schools). This helps the respondent (and the interviewer) to understand the questions correctly.
A range of logic checks and interviewer prompts are included in the script to make sure the answers provided are consistent and realistic. Some of these checks are ‘hard checks’: that is, checks used in cases where the respondent’s answer is not consistent with other information previously given by the respondent. In these cases the question has to be asked again, and the response changed, in order to proceed with the interview. Other checks are ‘soft checks’, for responses that seem unlikely (either because of other information provided or because they are outside the usual range) but could be correct. In these cases the interviewer is prompted to confirm with the respondent that the response is indeed correct.
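To illustrate the distinction, here is a minimal sketch of a hard check and a soft check; the specific rules and thresholds are invented examples, not those of the survey's actual CATI script:

```python
# Illustrative hard and soft checks. A hard check blocks progress until
# the inconsistency is resolved; a soft check only prompts the
# interviewer to confirm an unlikely answer.

def hard_check_household_size(num_adults, num_children, total_people):
    """Hard check: internally inconsistent answers must be re-asked."""
    if num_adults + num_children != total_people:
        raise ValueError("Household members do not add up; re-ask and correct.")

def soft_check_hours_worked(hours_per_week):
    """Soft check: unlikely but possible answers trigger a confirmation."""
    if hours_per_week > 80:
        return "Interviewer prompt: confirm more than 80 hours a week is correct."
    return None
```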
Processing error: data validation
The main survey outputs are SPSS data files that are delivered every month. For each fieldwork period, two main data files are provided:
- a household dataset, containing responses to the enumeration grid and any information asked of the respondent about other members of the household
- a respondent dataset, containing each respondent’s answers
Each dataset is checked by the survey contractor. A set of checks on the content and format of the datasets is then carried out by Welsh Government and any amendments made by the contractor before the datasets are signed off.
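As an illustrative sketch of the kind of content and format checks described (the file name, variable names, and rules below are hypothetical; pandas.read_spss, which wraps the pyreadstat package, is one way to read SPSS files in Python):

```python
import pandas as pd

# Sketch: basic content and format checks on a delivered respondent file.
# File name, variable names, and rules are hypothetical examples.

df = pd.read_spss("respondent_2020_05.sav")

problems = []
if df["serial"].duplicated().any():        # respondent IDs should be unique
    problems.append("duplicate serial numbers")
if not df["age"].between(16, 120).all():   # respondents are adults aged 16+
    problems.append("age out of expected range")
if df["interview_date"].isna().any():      # key fields should not be missing
    problems.append("missing interview dates")

print("All checks passed" if not problems else f"Issues: {problems}")
```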
Timeliness and punctuality
Timeliness refers to the lapse of time between publication and the period to which the data refers. Punctuality refers to the time lag between the actual and planned dates of publication.
Results for each month’s fieldwork are released around four weeks after the end of the fieldwork period. This period has been kept as short as possible given the urgency of policy needs for information about the coronavirus situation.
More detailed topic-specific reporting follows, depending on the needs of survey users.
Accessibility and clarity
Accessibility is the ease with which users are able to access the data, also reflecting the format(s) in which the data are available and the availability of supporting information. Clarity refers to the quality and sufficiency of the metadata, illustrations and accompanying advice.
Publications
All reports are available to download from the National Survey web pages. The National Survey web pages have been designed to be easy to navigate.
Detailed charts and tables of results are available via an interactive results viewer. Because there are hundreds of variables in the survey and many thousands of possible analyses, only a subset are included in the results viewer. However further tables / charts can be produced quickly on request.
For further information about the survey results, or if you would like to see a different breakdown of results, contact the National Survey team at surveys@gov.wales or on 0300 025 6685.
Disclosure control
We take care to ensure that individuals are not identifiable from the published results. We follow the requirements for confidentiality and data access set out in the Code of Practice for Statistics.
Language requirements
We comply with the Welsh language standards for all our outputs. Our website, first releases, Results viewer and Question viewer are published in both Welsh and English.
We aim to write clearly (using plain English / ‘Cymraeg Clir’).
UK Data Archive
Anonymised versions of the survey datasets (from which some information is removed to ensure confidentiality is preserved), together with supporting documentation, are deposited with the UK Data Archive. These datasets may be accessed by registered users for specific research projects.
From time to time, researchers may wish to analyse more detailed data than is available through the Data Archive. Requests for such data should be made to the National Survey team (see contact details below). Requests are considered on a case by case basis, and procedures are in place to keep data secure and confidential.
Methods and definitions
Each survey publication contains a glossary with descriptions of more general terms used in the output. An interactive question viewer and copies of the questionnaires are available.
Comparability and coherence
The degree to which data can be compared over both time and domain.
Throughout National Survey statistical bulletins and releases, we highlight relevant comparators as well as information sources that are not directly comparable but provide useful context.
Comparisons with other countries
Wherever possible, survey questions are taken from surveys run elsewhere. This allows for some comparisons across countries to be made (although differences in design and context may affect comparability).
Comparisons over time
Although the telephone survey covers some of the same topics as the face-to-face survey, in some cases using the same or only slightly adapted questions, care should be taken in making comparisons over time. The change of mode could affect the results in a variety of ways. For example, different types of people may be more likely to take part in the different modes, or the mode may affect how people answer questions (see Mixing modes within a social survey: opportunities and constraints for the National Survey for Wales, section 3.2.3, for a fuller review). Comparability is likely to be more problematic for less-factual questions (for example, about people's views on public services, as opposed to more factual questions like whether they have used these services in a particular time period).
The results for most topics covered in the face-to-face National Survey change only slowly over time, so in normal times they can remain a good indicator of the current situation even a year or more after the end of fieldwork. However, the particular circumstances of the fieldwork period for the monthly survey, with major changes to people's everyday lives, may mean that things change much more quickly than usual, so the results may not remain a good indicator of the current situation for as long.
Feedback or further information
If you would like further information, contact us on 0300 025 6685 or at surveys@gov.wales. We welcome comments from users of the survey, for example on the content and presentation of our publications.