
Small-scale user research was conducted to scope how workers in the Welsh public sector understand artificial intelligence (AI), particularly with regard to how AI relates to the workplace. The user research was carried out as part of a larger project being conducted by the Welsh Workforce Partnership Council (WPC). The WPC brings together representatives of the Welsh Government, employers and trade unions in partnership, and its remit covers work in the public sector. This user research will be put before the WPC for agreement in November 2024 and is based on data collected by members of the WPC AI sub-group in August and September 2024. 

The use of AI in the workplace is an emerging phenomenon and developments can occur rapidly. This user research offers a snapshot of the current understanding of AI in the Welsh public services and provides a point of reference for future surveys as the workplace use of AI develops. Further investigation of the integration of AI into our work and our workplaces, as well as effective social partnership working in response to any findings, will be essential to ensure that the principle of fair work is embedded in the future of the Welsh public sector. A glossary below provides definitions of key terms used in this document. 

Key insights

The purpose of this user research was to gain a “benchmark” of the understanding and knowledge of AI in the public sector workforce in Wales. This report of the user research draws out themes from the survey responses, which were small in number but nevertheless offer interesting and varied insights into the understanding and perception of AI amongst the respondents. The report initially focuses on the respondents’ understanding of AI generally and its uses, benefits and risks in daily life outside of work. The report then discusses the role of AI at work, offering an account of the respondents’ views about current and future uses of AI at work, the barriers to AI adoption in workplaces and the consequences that the respondents perceive of using AI at work. 

In general terms, the respondents showed very varied awareness and understanding of AI and other associated concepts, such as machine learning and large language models. The responses showed that their understanding was gained from education, news coverage, fictional media such as television, films and books, and personal experience with devices and software that use AI. Some aspects of respondents’ understanding of what AI is and how it operates were accurate (for example, that AI uses algorithms, data, and computational power to mimic decision-making and learning), although there were some misconceptions (such as a belief that AI is reliant on access to the internet). Respondents could see the value of using AI to increase efficiency and automate repetitive tasks, and several mentioned purposes they had used AI for outside of work. Respondents also highlighted risks, particularly regarding the impact of AI on the creative industries, the effect upon human interaction and connection, and general concern about the ethical implications of the continuing and accelerating use of AI in daily life.

Turning to the context of work, respondents again acknowledged where AI could create efficiencies and enhance productivity. Use of AI at work was reported to be increasing, either formally through organisational encouragement to use AI tools or informally on an individual basis to carry out particular tasks. Document drafting, note taking, and training tools were mentioned as areas where AI is currently used. Two commonalities could be seen in the responses here: first, that AI adoption would be highly sector-specific and depend upon organisational context and, second, that AI use in the workplace will inevitably increase in the future. 

The user research responses brought to the fore a range of barriers to the adoption of AI in public sector workplaces, as well as highlighting the respondents’ perceptions of the consequences of AI adoption for their work and the work of others. The three key barriers to adoption were the lack of education and understanding amongst the workforce of AI and its potential in the workplace; the limits of the current technology relied upon by organisations; and the effects of disconnects between system designers or procurers and the staff that use the systems day-to-day. The respondents’ views on the consequences of AI adoption for their workplace, and working people more broadly, were varied. There was broad agreement that how work is performed would change, in some cases rapidly, in others at a more gradual pace. Some respondents expressed concerns about job losses associated with particular job roles like administration and retail, whereas others expressed a view that humans will remain necessary for all jobs, particularly with regard to ensuring the quality and accuracy of outputs. There were differences perceived according to sector here, with the role of human empathy and judgement emphasised in medical and care settings. 

The report concludes by offering a series of recommendations regarding how a broader evidence base for future policy initiatives should be built. The report recommends that ongoing social partnership working, including between the Workforce Partnership Council and the Social Partnership Council, should be enhanced by close collaboration with relevant academic expertise. The report suggests that building capability across the Welsh public sector workforce is essential to ensure that all workers are able to realise the potential of AI in the workplace whilst also being aware of the limits of AI technologies and guarding against the risks they entail. The report recommends that, in the context of ongoing work on this pressing issue, regular user research and surveys should be undertaken to assess the effectiveness of any policies that aim to build capability within the workforce and to deepen the insights available into the understanding and use of AI at work.

User research methods

This user research sought to gain initial insights into, and to benchmark, the current level of knowledge in the public sector workforce in Wales regarding the meaning, uses, opportunities and challenges associated with the use of AI, particularly in the workplace. The team conducted semi-structured interviews with 4 respondents based on the Discussion Guide at Appendix A. Interviews took place online during August 2024. The survey team then adapted the interview prompts into a written questionnaire (found at Appendix B) to gather a further 9 responses. The questionnaire was open from 16 August to 2 September 2024. In total, 13 people responded. Respondents included men and women across a range of ages. 

Respondents all work in the Welsh public sector, across a range of roles. Some respondents were approached directly by the team whilst others volunteered after seeing a call for respondents on social media (LinkedIn). The respondents’ roles were the following: Local Authority Audit and Risk Manager, Digital Advisory Consultant, Digital Lead, Managing Director, Nurse Practitioner, Commercial Associate, User-centred Design Apprentice, Researcher, Communications Officer, Delivery Manager, Teacher, Service Delivery Supervisor and Director. The way in which respondents were recruited is likely to have an impact on the survey data received. For example, the respondents may be more likely to be working with AI already, or to have an interest in it, by reason of their volunteering or being contacted for this study. The sample size for this initial research is small. The results, therefore, will not be representative of the broader Welsh population and, as recommended below, a further survey should be conducted that builds on the findings shared in this report. 

Respondents were asked questions in sections: questions about their role, technology use within and outside work, questions on their understanding of AI and its meaning, questions on AI usage in work, then questions on the ethics, benefits and the future of AI. As the questions were intended to uncover respondents’ current understanding of AI and related terminology, the team asked respondents to offer their own descriptions of AI (see for example Q1). The questions also probed awareness and understanding of terms associated with AI (see Q4). In the interviews, we discussed Machine Learning, Neural Networks and Large Language Models with respondents. In the questionnaire, the list of prompts was expanded to include Generative AI, hallucinations, Responsible AI, multimodal models, prompts, copilots, and bias. The team combined the interview and questionnaire responses, and an analysis of the results is offered in the following five sections. 

Understanding of AI

Respondents generally view AI as a tool for automating tasks, streamlining processes, and enhancing efficiency. Many associate it with human-like intelligence in machines, while others referenced robots as a physical manifestation of AI. One respondent said, for example, “Artificial intelligence - using robots and technology to help with different things but overall make tasks quicker and easier to do.” AI is seen as helpful for decision-making and simplifying work, though a few expressed concerns about potential mistakes or budget issues.

The understanding of AI among respondents comes from a combination of media, such as TV, films (Terminator, Artificial Intelligence), and science fiction novels, as well as real-world experiences with tools like Siri, Alexa, and ChatGPT. Some learned about AI through academic settings or work environments, while others recall early exposure to AI concepts in the 1970s and 80s. Pop culture and personal experiences with AI technologies have strongly shaped these perceptions.

While some aspects of their understanding are accurate - such as AI using algorithms, data, and computational power to mimic decision-making and learning - there are also misconceptions. The concept of AI "learning" from data is correct, as is the use of AI to process vast amounts of information, but some believe AI is reliant on the internet or is fully self-aware like in science fiction, which isn't the case. Overall, respondents’ understanding is broad but incomplete, combining some accurate insights with myths from pop culture, the media or fictitious accounts of AI.

The chart below shows which concepts associated with AI the respondents reported that they were familiar with through the online questionnaire. Familiarity was very varied between respondents. Respondents were most familiar with the terms ‘machine learning’ (9), ‘generative AI’ (9) and ‘prompts’ (8). Respondents were least familiar with ‘neural networks’ (3) and ‘multimodal models’ (3). 

[Chart: ‘With respect to AI have you heard of...?’, showing the number of respondents reporting familiarity with each AI-related term]

Uses, benefits and risks of AI in daily life

Respondents showed awareness of some uses of AI outside of the work context. Of the 13 respondents, 10 mentioned ChatGPT specifically or generative AI more broadly. AI-based voice assistants, like Siri and Alexa, were mentioned by 4 respondents. A further 4 respondents noted examples where AI may play a role in a device or piece of software that they had used previously (for example ‘[I] imagine there [are] things on phone that use it’). Several respondents also mentioned specific uses of AI, such as chatbots used by companies to answer questions in real-time online or algorithms that tailor advertising to the particular customer/user. 

Respondents mentioned AI tools like Notion and ChatGPT as helpful for simplifying complex information, organising thoughts, and assisting with writing (though comments were made about outputs often needing adjustment to match tone or style). AI was seen as supporting creativity by serving as a brainstorming tool and generating realistic images. Respondents were generally positive about everyday AI interactions (e.g. predictive text and retail recommendations). While respondents appreciate AI’s intuitive nature, they note that effective use requires thoughtful prompts, and some express frustration when tools don’t fully meet expectations.

Respondents identified a number of risks and challenges associated with AI outside of work. These include the difficulty of judging the authenticity of outputs that may have been created by AI. The speed and ease with which AI generates outputs also appeared to contribute to respondents’ concerns about the impact of AI on the arts industries. One said that AI “takes away creativity in the arts” whilst another said it could lead to “apathy from the general public about creative content.” These comments could be interpreted as expressing both a concern about the authenticity of apparently human-created content and a sense of apathy about human endeavour that may come about if generative AI can create content quickly. 

Several respondents referenced the loss of human connection or interaction that they considered flowed from the use of AI. One respondent cited attempts by chatbots to create human connection artificially. Chatbots are “annoying”, the respondent stated, adding that “some of the newer models can fool you into thinking that they are human, but they are not.” This comment reflected a wider concern expressed by another respondent regarding a growing lack of human connection associated with AI. The respondent was anxious about ‘the social impact, for example younger people not knowing how to talk to real people or how to socialise and spending too much time using technology instead of outside.’ In a similar vein another person referred to the impact on the quality of services. ‘I know you can go online and speak to a doctor,’ they said, ‘but using AI instead would be a worry. We will lose human interaction when it’s really important. We need empathy and I don’t know if AI can develop this.’

Questions about the morality and ethics of using AI were raised by respondents. ‘I feel like we are only scratching the surface with it but are we pausing to see whether we should morally or if there are any unintended consequences[?]’ said one. The same respondent raised a concern regarding the lack of an ‘official body governing the use and development of AI’. Another specific concern highlighted by respondents related to the misuse of AI by bad actors, and the lack of training to recognise it. One worker expressed concern at ‘a lack of skills in understanding AI output and the fraudulent use of it.’ Misuse of AI in the education sector was also highlighted as a concern: “[AI] gives my pupils the opportunity to cheat,” said one teacher, referring to generative AI’s ability to create essays and coursework, “[p]upils will not have a thorough understanding of topics if they use it.”

AI in the workplace: current uses and future opportunities

Respondents generally understood that AI was being used in their workplaces, although the application of AI differed according to sector. Many respondents advised that ChatGPT or Copilot were in operation in their workplaces in an attempt to streamline note taking and report writing, with a focus on using resources more effectively rather than diminishing employee numbers. It was widely accepted that the accuracy of such applications was a challenge and that a human element was therefore always required to ensure that the work was as accurate as possible.

Application and use of AI across the Health and Social Care sector was varied, with a respondent working in the voluntary and third sector highlighting that the technology and applications provided to employees were behind the pace, with a single laptop being shared across a multidisciplinary team working with multiple service users. By contrast, another respondent reported that AI was being used to aid surgery and create training scenarios for students in healthcare settings.

Some respondents cited the positive benefits of AI in the workplace or particular use cases that would be helpful. For example, one person listed “workforce planning, general forecasting of operational performance and identification of operational improvement” as areas where the technology could be of assistance. Another respondent working as a manager in health and social care said they could envisage the “use of AI tools to work more efficiently.” They gave examples of note taking and report writing as areas where it could be used. They expressed enthusiasm that an AI version of existing team collaboration tools could help their team undertake thematic analysis more easily. They also cited the potential use of remote "survey" platforms for field work. Another respondent said that AI could be helpful in answering questions about medicine and health.

Several respondents expressed a sense that it was inevitable that AI would be integrated into their work or workplace, although the timeframe over which this might occur varied between respondents. One stated that AI would become part of existing tools and software that they already rely upon. Another stated that, in their context (local government), adoption will be slower due to financial constraints and the length of processes required ‘to get it in’. Nevertheless ‘[AI is] going to be something that comes in much more over the next couple of years.’ This perspective was echoed by other observations: [AI] ‘is going to expand in line with technology’ and it is ‘a must and inevitable as budgets tighten’. These responses show a shared view amongst respondents that AI, even given the barriers and consequences discussed below, will be increasingly used in workplaces. 

Barriers to AI adoption in the workplace

The respondents shared a range of views about the use of AI now and in the future workplace. Barriers that could prevent or substantially slow the uptake of AI in the workplace were referenced by the respondents. With regard to AI specifically, a lack of understanding and a need for education and training was mentioned several times. With regard to the use of technology more generally, respondents noted the limits of the systems and processes that are currently used, which may present a barrier to future implementation of AI. 

Variable understanding of AI was highlighted amongst the respondents and some respondents were aware of the limits of their own knowledge (e.g. ‘a tool I should learn more about’). One respondent mentioned the lack of education as ‘a negative as [AI] is under-utilised’. Another emphasised that one of the first steps must be improvements in education about technology and a respondent with a union background said that members report concerns about being ‘asked to do tasks on computer and with programmes [they] haven’t used before’. There is a clear need for a push on education and training around AI, particularly as it may relate to people’s work. Sustained efforts to build capability regarding using AI effectively and responsibly will be essential to overcoming this barrier to realising the opportunities and benefits outlined here. 

A second barrier arising from the responses was a group of concerns about the technologies relied on by organisations at the moment. One respondent noted that they have poor access to software that is standard to their industry. Another raised the extremely limited range of devices that are accessible in their workplace, with a whole staff team sharing one laptop and many members of staff not able to use the tablet because it doesn’t have a keyboard. The same respondent also noted that the systems they rely upon are very slow to use. A third respondent noted that they use a variety of different systems, but these needed to be consolidated so they are more straightforward to operate. These responses highlight the importance of ensuring an organisation is ready - in terms of the systems and devices that are to be used - to implement any further changes. 

Finally, one respondent introduced a note of concern based on past experiences of introducing new technology. They said: ‘a lot of workplaces, including workplaces in Local Authorities, bring in [a] database- all singing and dancing [but it] doesn’t work. The new systems coming in are designed by IT and managers and not designed by the people who use them’. This observation underlines the need for staff at all levels to be involved from an early stage in the decision-making regarding the design or procurement and implementation of new technologies (including those based on AI) so that the tools are useful to staff in the performance of their roles. 

Consequences of AI in the workplace

Numerous consequences for the future of work and the workplace were raised by respondents as possible, or even likely, to arise. These are in addition to the more general concerns about the use of AI discussed above. These consequences can be grouped into three categories: 

  • The possibility of job loss and job change;
  • Changes in how people perform their role;
  • Concerns about some legal or ethical implications. 

In line with the discussion above, the respondents converged on a sense that AI’s integration into the workplace is inevitable. There were mixed views expressed on what the impact of this process would be upon the number of jobs available and what type of work would be available. Only one respondent explicitly noted the possibility that AI will create new opportunities, although this was referenced implicitly by others (e.g. seeing a different work plan in the future). There was a shared sense that administrative tasks, which are a component of many jobs, will be performed by AI. In addition, there was broad consensus that the impact upon the number of jobs available (and the possibility therefore of job losses) would be highly sector-specific. Jobs in which human interaction and judgement are perceived as central were viewed as less likely to be affected by the introduction of AI, whereas administrative roles, retail and logistics were highlighted as areas where job losses may be a consequence of AI in the workplace. 

These sentiments appear in the following responses. One respondent said ‘I don't feel AI will ever replace the human elements required in most of our roles’ and another was more specific to a sectoral context: ‘In my workplace [AI] won’t take over the main caring role but take over a lot of the recording and medical elements’. The relevance of sector/type of work performed was mentioned by further respondents. AI ‘could reduce workload to a point where there is not enough work for some roles to be viable’ or, as said by another, it ‘chips away at responsibilities over time’ (here, of minute takers). Finally: ‘[I] think that people will lose jobs, shop[s] in London [with] no staff and the shelves are reading cards as you walk out.  Retail massively impacted.  Healthcare affected but much slower because care is a lot of guessing and human interaction important, but retail and banking is different.  There is [a] robot picking orders [at a large logistics company] – warehouse jobs going.’ 

Some respondents discussed how AI is likely to change how people perform their work and interact with technology. One response might be interpreted as mitigating the concerns regarding the loss of administrative jobs, at least given the current limits of AI technology. They said that AI is used to take minutes on Teams meetings but ‘still need minute taker to be there to go through what it produced as not always accurate.’ Another respondent said they knew not to ‘trust’ all AI responses. These responses emphasise the need to check the accuracy of any work produced by AI, perhaps in more detail than simply proof-reading one’s own, human-generated work. Other respondents expressed a concern about individuals “slacking off” by using technology or becoming reliant upon it: people may become ‘more lazy in relying on technology for everything’ and ‘some people use it to take advantage and may use it instead of work’. The impact upon the quality, accuracy and deep understanding of the work produced was raised across responses. 

A wide-ranging set of consequences can be considered under the banner of legal or ethical risks associated with the use of AI; these build on concerns discussed above and relate also to the research cited in the WPC’s recent guidance. Respondents were aware of risks related to discrimination and inequality in terms of both outputs and inputs. One respondent was concerned about AI ‘giving biased answers to questions’ and replicating human bias, whilst another wondered ‘what biases are feeding the machine’? The role of individual data (‘not sharing sensitive data’) and of AI’s consumption of data (‘harvesting’) were also mentioned. A concern about the “slow creep” of technology appeared in several answers: for example, ‘as AI becomes more powerful, I have concerns that it could become even more intrusive.’ 

Recommendations for future steps

This user research has strengthened the understanding of the state of knowledge amongst workers regarding AI, how it is used and how it could be used in the Welsh public sector workforce. Even from a small number of participants, the report shows that there are opportunities and potential benefits for workers and employers. The findings also highlight that there are concerns, barriers and risks that must be taken into consideration. Allowing the benefits for the public sector to be realised, whilst guarding against risks, is a theme that runs through the WPC’s related work on algorithmic management systems. 

The WPC regards this report as the early stages of a wider project that will deepen our understanding and strengthen our approach to the integration of AI in public sector workplaces. Over time, an approach must be found that strikes an appropriate balance between innovation and respect for established rights, principles and ways of working in the Welsh public sector. The latter includes, amongst other things, a strong commitment to social partnership, to fair work, to building capability within the sector regarding the implementation of technology and to the preservation and prioritisation of human oversight and interaction. The WPC recommends the following steps be undertaken: 

  • to create a strong foundation of evidence regarding the current and future use of AI in the workplace, including a review of the available academic research. 
  • to conduct surveys regularly with a larger number of respondents to investigate the understanding and use of AI in the workplace in greater depth and at a more significant scale. 
  • to build on the successes of the WPC in drawing on academic expertise as we continue to make progress in this area.
  • to continue productive social partnership working between the Welsh Government, employers’ and workers’ representatives and conduct regular reviews of the available evidence on this topic. 
  • to forge a connection between the work being conducted by the WPC and that of the Social Partnership Council with regard to AI in Welsh workplaces. 
  • to prioritise building wider capability in the Welsh public sector workforce regarding the understanding of AI and the uses, opportunities, and risks of using AI at work. 

Glossary

These definitions are drawn from Artificial Intelligence: An Explainer (2023, POSTbrief 57), published by the UK Parliamentary Office of Science and Technology, and from The Alan Turing Institute’s Data Science and AI Glossary.

Algorithm

A set of instructions used to perform tasks (such as calculations and data analysis) usually using a computer or another smart device.
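
As an illustrative sketch only (not drawn from the source definitions), the short Python example below shows an algorithm in this sense: a fixed set of instructions that a computer follows, step by step, to calculate an average.

  # A simple algorithm: a fixed sequence of steps to calculate an average.
  def average(numbers):
      total = 0
      for value in numbers:          # step 1: add each value to a running total
          total = total + value
      return total / len(numbers)    # step 2: divide the total by how many values there are

  print(average([3, 5, 7]))          # prints 5.0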

Algorithmic management systems

The term refers to any system that uses computational processes to take or support decisions relating to the management of employment or work. An algorithmic management system includes some aspect of automation and may include processes based on machine learning, statistical analysis or artificial intelligence. An algorithmic management system may be implemented to undertake or support one management function, such as recruitment or the organisation of work, or a system may undertake or support a series of management tasks.

Bias 

The tendency of AI systems to produce unfair or prejudiced behaviour or outcomes due to the data upon which they were trained, their algorithms, or developer assumptions. This can lead to discrimination in areas like hiring, finance, and healthcare, and requires diverse data and careful monitoring to address.

Chatbot 

A software application that has been designed to mimic human conversation, allowing it to talk to users via text or speech. Previously used mostly as virtual assistants in customer service, chatbots are becoming increasingly powerful and can now answer users’ questions across a variety of topics, as well as generating stories, articles, poems and more (see also ‘generative AI’).

Computational power

AI computational power, or “compute”, refers to the computing resources needed for artificial intelligence (AI) systems to perform tasks such as processing data, training machine learning models, and making predictions. AI requires a lot of computing power because it often involves running computations on gigabytes of data. The amount of compute needed depends on the complexity of the AI system and the amount of data being processed.

Copilots 

An AI copilot is a virtual assistant that uses artificial intelligence (AI) to help users complete tasks and make decisions. AI copilots can analyse data from software and databases, and then use that information to guide users through tasks or complete them on their behalf. They can also adapt to a user's behaviour and provide relevant suggestions.

Generative AI 

An AI model that generates text, images, audio, video or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on. Generative AI applications include chatbots, photo and video filters, and virtual assistants.

Hallucinations 

Large Language Models, such as ChatGPT, generate text by predicting the most likely words and phrases that go together based on patterns they have seen in training data. However, they are unable to identify if the phrases they generate make sense or are accurate. This can sometimes lead to inaccurate results, also known as ‘hallucination’ effects, where Large Language Models generate plausible sounding but inaccurate text. Hallucinations can also result from biases in training data or the model’s lack of access to up-to-date information.

Large language models  

A type of machine learning model that is trained on vast amounts of text to carry out natural language processing tasks.

Machine learning

A field of artificial intelligence involving computer algorithms that can ‘learn’ by finding patterns in sample data. The algorithms then typically apply these findings to new data to make predictions or provide other useful outputs, such as translating text or guiding a robot in a new setting. Medicine is one area of promise: machine learning algorithms can identify tumours in scans, for example, which doctors might have missed.
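
As a minimal illustrative sketch (assuming the widely used scikit-learn Python library, which this report does not itself reference), the example below ‘learns’ a pattern from a handful of labelled sample data points and then applies that pattern to new, unseen data to make a prediction.

  # Minimal machine learning sketch: find patterns in sample data, then predict.
  from sklearn.tree import DecisionTreeClassifier

  # Sample data: [hours of study, hours of sleep], labelled pass (1) or fail (0).
  samples = [[8, 7], [7, 8], [2, 4], [1, 5]]
  labels = [1, 1, 0, 0]

  model = DecisionTreeClassifier()
  model.fit(samples, labels)        # the algorithm 'learns' by finding patterns in the samples

  print(model.predict([[6, 7]]))    # prediction for new, unseen data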

Multimodal models 

Multimodal AI models are artificial intelligence systems that can process multiple types of data simultaneously to produce more accurate outputs. Multimodal AI models can combine data such as images, text, audio, and video to perform tasks that single-modality AI models cannot. For example, a multimodal AI model can analyse a photo, understand spoken instructions about the photo, and generate a descriptive text response.

Neural Network

An artificial intelligence system inspired by the biological brain, consisting of a large set of simple, interconnected computational units (‘neurons’), with data passing between them as between neurons in the brain. Neural networks can have hundreds of layers of these neurons, with each layer playing a role in solving the problem. They perform well in complex tasks such as face and voice recognition.
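
As an illustrative sketch only (assuming the NumPy Python library, which is not referenced in the source definitions), the example below shows the basic idea of a single layer of ‘neurons’: each neuron sums its weighted inputs and applies a simple function, and real networks stack many such layers.

  # Illustrative sketch of one neural network layer.
  import numpy as np

  def layer(inputs, weights, biases):
      # Each neuron sums its weighted inputs, adds a bias, then applies
      # a simple activation function (here, ReLU: negative values become 0).
      return np.maximum(0, inputs @ weights + biases)

  inputs = np.array([0.5, -0.2, 0.1])      # three input values
  weights = np.random.rand(3, 2)           # connections from 3 inputs to 2 neurons
  biases = np.zeros(2)

  print(layer(inputs, weights, biases))    # the outputs of the two neurons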

Prompt

An AI prompt is a query or instruction that you provide to an artificial intelligence (AI) system to get a specific response. Prompts can be simple questions or keywords, or they can be more complex, like code snippets or creative writing. The quality of the AI's response depends on the effectiveness of the prompt.

Responsible AI

The practice of designing, developing, and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust and upholding privacy rights.  

Appendix A: user research discussion guide

Introduction

“Thank you for participating in this discussion. We’re looking to understand people’s awareness and understanding of artificial intelligence (AI) in the workplace. This will help us better communicate to and educate the public about AI. There are no right or wrong answers; we are interested in your unique thoughts and experiences.”

Introductory questions

  1. What is your current role?
  2. Can you tell me what your role entails (talk me through your day)?
  3. What sort of technology do you use day to day in your role?
    1. What sort of technology do you use outside of work? 
  4. Are there any areas in work you feel technology could be improved? 

AI discussions 

  1. What does AI mean to you?
    1. Can you recall where you first heard of it?
    2. From your understanding, how does AI work?
  2. Can you give me any examples of AI you are aware of?
    1. Are you aware of any AI playing a part in your daily life?
  3. Based on what you know, what is your opinion on AI?
    1. Probe on positives and negatives with this but give them an open statement to react to first
  4. I’m going to list some terms relating to AI here, please let me know if you have heard of them, and what they mean to you
    1. Machine learning
    2. Neural networks
    3. Large Language models
    4. Add more as you need here
  5. Has there been any discussion of AI in your workplace?
    1. Are you aware of it being used in your workplace? If so, how?
    2. Do you have any thoughts on how it may affect your workplace?
    3. Do you have any ideas how it could affect other jobs?
  6. Are you aware of any ethical issues related to AI?
    1. Do you have any concerns about AI and how it is used currently?
    2. Do you have any concerns about how it could be used in the future?
  7. Do you follow AI in the news at all?
    1. What was the last thing you recall seeing in the media about AI?
    2. How did you feel about that?
  8. Have you used any applications or devices that use AI? Can you describe your experience?
  9. From what you know what is your opinion about the future of AI?
    1. In the workplace?
    2. In your personal life?
  10. Is there anything else you’d like to add?

Appendix B: online form questions

Introductory questions

  1. What is your current role?      
  2. What sort of technology do you use day to day in your role?    
  3. Are there any areas in work you feel technology could be improved?  
  4. What sort of modern technology do you use outside of work? Do you use any of these at home or in your personal life?  
    1. Smartphone
    2. Computer
    3. Washing machine or other digital kitchen appliances
    4. Car
    5. Smart TV
    6. Electric toothbrushes
    7. Coffee makers
    8. Other

Artificial Intelligence (AI)

  1. What does AI mean to you?   
  2. If you can recall, where did you first hear of it?         
  3. From your understanding, how does AI work?          
  4. Can you give me any examples of AI you are aware of?            
  5. Are you aware of any AI playing a part in your daily life?            
  6. Based on what you know, what is your opinion on AI? Are you aware of any positives or negatives?           
  7. With respect to AI have you heard of...?
    1. Machine learning
    2. Neural networks
    3. Large Language models
    4. Generative AI
    5. Hallucinations
    6. Responsible AI
    7. Multimodal models
    8. Prompts
    9. Copilots
    10. Bias
  8. Do you have any comments on the list above?

AI usage in work

  1. Has there been any discussion of AI in your workplace?            
  2. Are you aware of it being used in your workplace?   
  3. If so, how?      
  4. Do you have any thoughts on how it may affect your workplace?      
  5. Do you have any ideas on how it could affect other jobs?            

Ethics

  1. Are you aware of any ethical issues related to AI?    
  2. Do you have any concerns about AI and how it is used currently?        
  3. If so, what?     
  4. Do you have any concerns about how it could be used in the future?   
  5. If so, what?     

Benefits

  1. Do you follow AI in the news at all?   
  2. What was the last thing you recall seeing in the media about AI?        
  3. How did you feel about it?      
  4. Are there any benefits you can see for your workplace?            
  5. If so, what?
  6. Are you aware of any other benefits related to AI?
  7. If so, what?
  8. If you have used any applications or devices that use AI, please describe your experience.

The future

  1. From what you know, what is your opinion about the future of AI in the workplace?
  2. From what you know, what is your opinion about the future of AI in your personal life?
  3. Is there anything else you’d like to add?