

Responsible use of AI in research

This guidance is designed to support researchers in understanding how to use artificial intelligence (AI) effectively in their research. It aims to guide researchers by addressing key questions on permissible use, establishing best practices, and drawing attention to the challenges that AI may pose in research.

This document covers the use of AI in research; it does not apply to developing or building AI tools. Research students should also consult the ‘Guidance on use of AI in post-graduate research’ available on the Enabling AI website.

Further context and guidance are provided in the wider University of Exeter Artificial Intelligence policy, which includes the AI Catalogue of approved AI tools.

The University does not require you to use AI if you do not want to; you may choose not to use it for research-related, practical or ethical reasons, and that is your choice.

We discuss below key aspects of the practical, ethical, and environmental implications of using AI tools in your research to help you do so in a responsible and safe manner.

Given the rate of technological advancement in AI, this guidance is intended to be a ‘living’ document and will be regularly reviewed and updated accordingly.

Artificial Intelligence (AI): A technology in which a computing system is coded to ‘think for itself’, adapting and operating autonomously. AI is increasingly used in more complex tasks, such as medical diagnosis, drug discovery, and predictive maintenance.

The following are types of Artificial Intelligence technologies:

Machine Learning (ML): The set of techniques and tools that allow computers to ‘learn and adapt’ by creating mathematical algorithms based on accumulated data. Example technologies include email spam filters that use words in the subject line, and predictive mathematical models.
Example tools in the higher education sector: Coursera ML-Based Recommendation System, Jisc Learning Analytics, Turnitin AI detection, and common algorithms and models such as clustering methods and random forests.

Deep Learning (DL): A subset of machine learning where systems ‘learn’ to detect features that are not explicitly labelled in the data. Example technologies include facial recognition and autonomous vehicle navigation.
Example tools in the higher education sector: Otter.ai, AlphaFold.

Generative AI (GenAI): AI models that can create new content, e.g. text, computer code, audio, music, images, and videos. Typically, these models are trained on extensive datasets, which allows them to exhibit a broad range of general-purpose capabilities.
Example tools include those that use Large Language Models (LLMs), such as ChatGPT and Copilot (see below), Google Bard, and DALL-E (image generation).

There are three main forms of Generative AI systems:

  • Direct-use AI systems: Technologies providing a Generative AI capability (e.g., Copilot, ChatGPT)
  • AI embedded in other systems: Systems with Generative AI components (e.g., data mining in storage solutions)
  • AI used in research: Creating in-house Large Language Models (LLMs) using open-source tools and local data sets (e.g. utilising tools like Azure AI Studio and AWS SageMaker for Generative AI outputs)

Large Language Models (LLMs): A type of Generative AI system designed to learn the grammar, syntax and semantics of one or more languages in order to generate coherent, context-relevant language.
Example tools are ChatGPT, Google Gemini, or Microsoft Copilot.

AI can help to drive innovation in research, from hypothesis generation and literature reviews to interdisciplinary collaboration, data analysis, simulation, output creation and dissemination. However, it also has many limitations as a research tool, and it is important to understand these to use it safely and responsibly in line with research integrity commitments and data protection regulations.

AI-produced outputs should always be assessed and, where necessary, verified for accuracy. Generative AI tools are prone to producing inaccurate responses, known as “hallucinations”, which can arise depending on how the tool is used and/or applied.

Incorrect use of AI risks transgressing the University of Exeter’s research integrity and academic integrity policies and could potentially result in being investigated under the University’s research misconduct and/or academic misconduct regulations.

AI must be used in line with the University of Exeter Artificial Intelligence policy and, where applicable, the Doctoral College regulations on the use of Generative AI in research assessments. Where applicable, funder guidance and the feedback of a Research Ethics Committee review must also be taken into consideration. If you are unsure how to use AI appropriately in your research, discuss it with your academic lead or departmental Director of Research in the first instance.

AI tools offer benefits for efficiency and accessibility in research, but there is evidence that, when used without critical consideration, they can inadvertently diminish users’ engagement in deep reflective thinking. Critical thinking and reflection are essential to the process of research and to development as a researcher. AI should not be used to replace deep engagement with the literature and practice of research to the detriment of your development as a researcher.

To use AI most productively, researchers are encouraged to experiment and learn how to use AI to best effect before using it in research. Learning how to critically and effectively use AI tools is itself a valuable research skill.

Your research could be subject to an ethical review if the use of AI involves the data of human participants, as defined by the university Research Ethics Framework. A Data Protection Impact Assessment (DPIA), coordinated with the Information Governance team, will likely also be needed.

You should consider the ethical issues, environmental costs and potential impacts on the development of your research skills as part of your decisions on whether, and how, to use AI. It may not always be appropriate to use AI tools.

See Appropriate vs. inappropriate uses of GenAI - Understanding AI - LibGuides at University of Exeter

As a general rule, you should not just ‘copy and paste’ uncritically from an AI into your work. Practice, however, varies between disciplines and different forms of data: what is appropriate for working with code may not be appropriate if you are writing prose. A programmer who uses prompts to derive functioning code (which they can critically check by running it themselves) is acting reasonably. A qualitative researcher conducting desk-based research who uses prompts to create a whole chapter is not – but using AI to test the hypotheses or facts in that chapter, and to improve its language, would be a collaboration that might improve the quality of the output.
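
To illustrate the programmer’s case, here is a minimal, hypothetical Python sketch: the function stands in for code drafted with an AI tool, and the assertions are the researcher’s own verification step. Nothing here is specific to any particular AI tool or to University systems.

    # Illustrative check of an AI-suggested helper function.
    # The function below stands in for code drafted with an AI tool;
    # the assertions are the researcher's own verification step.

    def moving_average(values, window):
        """Return the simple moving average of `values` over `window` points."""
        if window <= 0:
            raise ValueError("window must be a positive integer")
        if len(values) < window:
            return []
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]

    if __name__ == "__main__":
        # Verify the suggested code against cases with known answers.
        assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
        assert moving_average([5], 3) == []  # too few points for one window
        try:
            moving_average([1, 2], 0)        # invalid window should raise
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for window=0")
        print("All checks passed")

Checks like these do not guarantee correctness, but they demonstrate the kind of critical scrutiny that distinguishes responsible use from uncritical copy-and-paste.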

The key principle is that any use of AI must be accompanied by critical scrutiny and research oversight, because you, as the researcher, remain responsible for any research outputs.

As the UK Research Integrity Office advises, ‘any use of AI tools in writing (including code generation), editing, and the creation or visualisation of images and videos should be clearly declared... It is good practice to declare the use of generative AI or AI-assisted technologies when developing initial research, such as ideas and theories’. Work that is not your own effort or does not appropriately acknowledge and reference the sources and tools used to generate it does not meet the criteria for academic/research integrity. If you do not appropriately reference any use of (generative) AI in your work, you will violate the university’s regulations on plagiarism and research/academic misconduct.

Researchers are accountable for the integrity of the content generated by or with the assistance of AI tools. You need to maintain a critical approach to using the outputs generated by AI and be aware of the tools’ limitations, biases and risks (see sections below). You must follow all relevant policies around data privacy and confidentiality and regarding intellectual property rights when entering sensitive or protected information into web-based AI tools.

The UK Research Integrity Office provides useful guidance on how to use AI safely and in line with research integrity principles: Embracing AI with integrity.

Actions to take before and while using AI

  1. Read the University of Exeter Artificial Intelligence policy and associated guidance before using AI tools.
  2. Complete the mandatory training on information governance.
  3. When seeking to use Generative AI, use a toolset from the university Generative AI Catalogue, making sure the data you plan to provide to the platform meets the guidance given for that tool.
  4. Read the Information Classification Scheme. Don't share any data classified above the level associated with the tool you intend to use.
  5. Be aware of the varied and emerging norms around the use of AI within your discipline, and across different disciplines. Discuss these with academic colleagues and your academic lead or departmental Director of Research. If needed, seek guidance on what is best practice and what is discouraged within your discipline(s).
  6. Consider whether your research will need an Ethics Review. Refer to the University’s Research Ethics Framework to determine whether ethics approval is needed before your research can start.
  7. Keep a record of your use of AI in your research and, where appropriate, cite AI tools in research outputs (referring to journal and publisher guidelines); one way to keep such a record is sketched after this list.
  8. Test your AI results. Be ambitious but think critically: always consider that AI platforms can make errors and contain bias. Always apply human judgement to any data generated.
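
As an illustration of point 7, the following minimal Python sketch appends each AI interaction to a local JSONL log. The file name, fields and log_ai_use helper are hypothetical; adapt them to your own project, discipline norms and any journal or funder requirements.

    # Minimal record-keeping for AI use in research (illustrative only).
    # Appends one JSON record per interaction to a local log file.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_FILE = Path("ai_usage_log.jsonl")  # hypothetical location

    def log_ai_use(tool, version, user, prompt, response, purpose):
        """Append a single AI-usage record to the project log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,          # e.g. the tool name from the AI Catalogue
            "version": version,    # model or version, if known
            "user": user,          # who entered the prompt
            "purpose": purpose,    # why the tool was used
            "prompt": prompt,
            "response": response,
        }
        with LOG_FILE.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # Example usage with placeholder values:
    log_ai_use(
        tool="Example LLM",
        version="unknown",
        user="A. Researcher",
        prompt="Suggest candidate titles for a paper on soil moisture sensing",
        response="(paste the tool's output here)",
        purpose="Brainstorming titles",
    )

A plain-text record kept alongside your data would serve the same purpose; the point is that dates, prompts and outputs are retained and can be cited or declared later.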

AI can be a powerful tool for research innovation and efficiency. But these benefits come with risks and limitations that you need to consider and look for when using AI:  

  • AI poses privacy risks: many tools use inputs to ‘train’ the tools. Researchers should exercise caution in putting data or writing into these tools, or when attempting to fine-tune or train tools using research data. Exposing your data or research to an AI tool may, in effect, put it into the public domain prior to publication, compromise confidentiality, or allow the work to be used without attribution or accountability. The University’s Information Classification Scheme will help you identify which types of AI tools you can use depending on your data.
  • AI may increase risks of plagiarism: generative AI tools re-present information developed by others and so there is the risk of plagiarised content being submitted by a user as their own and/or copyright infringement. Artwork used by image generators may have been included without the creator’s consent or licence. 
  • AI can make mistakes: AI might produce incorrect and sometimes nonsensical outputs. AI does not know true from false and may present all outputs as though they were equally valid.
  • AI can make things up: some Generative AI tools will make false references to non-existent texts. These are often called ‘hallucinations’. Users should be aware of this risk and investigate approaches to mitigate against it. 
  • AI is inconsistent: generative AI tools are stochastic (random) and can produce different outputs from the same inputs. This inhibits reproducibility and robustness in results and conclusions (see the sketch after this list).
  • AI may exclude anomalies: it is trained to generate an apparently plausible response based upon data on the internet and can exclude anomalous results, impacting outputs.
  • AI can be biased: generative AI tools produce answers based on information generated by humans, which may contain societal biases and stereotypes that, in turn, may be replicated in the tool’s response.
  • AI is potentially unreliable: AI cannot access all the available and necessary information, for instance information behind organisational firewalls. It may provide different answers to the same inputs and it may not be trained on the most up-to-date information.
  • AI can misinterpret information: data and information contained within generative AI tools are garnered from a wide range of sources, including those that are poorly referenced or incorrect. Similarly, unclear commands or information may be misinterpreted by generative AI tools and produce incorrect, irrelevant or out-of-date information.
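
As a simple illustration of the inconsistency point above, the following Python sketch runs the same prompt several times and reports whether the outputs differ. The generate function is a placeholder rather than a real API; you would replace it with a call to whichever approved tool you are using.

    # Illustrative variability check for a generative AI tool.
    # `generate` is a placeholder: replace it with a call to an approved tool.

    def generate(prompt: str) -> str:
        """Placeholder for a call to a generative AI tool or API."""
        raise NotImplementedError("replace with a call to the tool you are using")

    def variability_check(prompt: str, runs: int = 3) -> None:
        """Run the same prompt several times and report how many distinct outputs appear."""
        outputs = [generate(prompt) for _ in range(runs)]
        print(f"{len(set(outputs))} distinct output(s) from {runs} identical prompts")
        for i, text in enumerate(outputs, start=1):
            print(f"--- run {i} ---\n{text}\n")

    # Example usage (once `generate` is wired up):
    # variability_check("Summarise the main limitations of remote sensing for peatland mapping")

If outputs vary in ways that matter for your conclusions, record the prompts, settings and model version you used, and treat individual responses as drafts to be checked rather than as results.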

See also: Using GenAI tools responsibly: Privacy, bias, and transparency - Understanding AI - LibGuides at University of Exeter 

Training and running AI models carry significant environmental costs, including the energy required for computation and data storage and the water consumed in cooling large data centres. Studies suggest that writing a 100-word email with ChatGPT consumes the equivalent of one 500 ml bottle of water and enough energy to charge a mobile phone seven times.

The University of Exeter is committed to sustainable research. We ask you to consider whether your use of artificial intelligence is justifiable under these commitments and where possible use lower-energy AI models.   

Under the University of Exeter Artificial Intelligence policy all users are urged to apply AI thoughtfully, using it where it adds genuine value, and to avoid default or excessive use. To support informed decision-making, the AI Catalogue links to sustainability insights for suppliers, where possible.  

Researchers and developers should actively design AI systems to optimise code and model design to reduce compute loads and energy demand. Several teams within the university are working towards GreenDiSC certification and can provide guidance on energy efficient design of software. Contact the Research Software & Analytics Group for more information. 
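
As one small, generic example of reducing redundant compute (it is not a substitute for GreenDiSC or Research Software & Analytics Group advice), the Python sketch below caches results for identical prompts so that repeated calls do not re-run the model. The expensive_model_call function is a placeholder for whichever approved tool or local model you use.

    # Illustrative caching of repeated calls to reduce unnecessary compute.
    # `expensive_model_call` is a placeholder for your actual model or API call.
    import functools

    def expensive_model_call(prompt: str) -> str:
        """Placeholder for a call to a generative model or other heavy computation."""
        raise NotImplementedError("replace with your actual model call")

    @functools.lru_cache(maxsize=256)
    def cached_call(prompt: str) -> str:
        """Return a cached result for prompts that have already been answered."""
        return expensive_model_call(prompt)

Caching only helps where identical inputs recur; choosing smaller models where they suffice, and batching work rather than issuing many small requests, are other common ways to reduce energy demand.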

AI tools, and especially commercially available generative AI tools, can be used to powerful effect in research, but their results need to be viewed critically, and you need to consider potential ethical issues before using them. These include:

  • Privacy and data considerations: whether an AI tool is designed to learn directly from its users’ inputs or not, there are risks to privacy and intellectual property associated with entering information.  
  • Societal bias: AI tools produce answers based on information generated by humans, which may contain societal biases and stereotypes that, in turn, may be replicated in the generative AI tool’s response.
  • Inaccuracy and misinterpretation of information: data and information contained within AI tools is garnered from a wide range of sources, including those that are poorly referenced or incorrect. Similarly, unclear commands or information may be misinterpreted by generative AI tools and produce incorrect, irrelevant or out-of-date information. This means that accountability for the accuracy of information generated by these tools when transferred to another context lies with the user.  
  • Ethics codes: users of AI tools should be aware that while ethics codes exist, they may not be embedded within all AI tools.  
  • Plagiarism: AI tools re-present information developed by others, so there is a risk of plagiarised content being submitted by a user as their own and/or of copyright infringement; artwork used by image generators may have been included without the creator’s consent or licence.
  • Exploitation: the process by which AI tools are built can present ethical issues. For example, some developers have outsourced data labelling to low-wage workers in poor conditions.  
  • Global North-South data inequality: unequal access to AI and inequitable data sharing can exacerbate inequalities within global academic structures.

If your research requires ethics approval, the use of AI should be declared as part of your Ethics Application and also included in your Data Management Plan submitted as part of the ethics review. Remember, you must secure ethics approval before starting the research; it cannot be awarded retrospectively.

You must seek and obtain informed consent from your research participants to use AI to process their data. This would include translating, transcription, analysis, aggregation or interaction with the data. Research participants have a right to expect openness and transparency from researchers, and this includes the use of AI. The use of some types of AI substantially increases the risk of data leakage, so you must use AI carefully and responsibly to safeguard their interests.

In line with Information Governance requirements, you also need to perform a Data Protection Impact Assessment. This is particularly important if you are processing special category personal data or using AI as part of high-risk data processing. It can take time for checks to be made, so please do this as early as you can. You must also include this use of AI in your Data Management Plan.

If you have queries about using AI to process participants’ data, please discuss these with your supervisors or your relevant Research Ethics Committees.

When utilising generative AI specifically, researchers are advised to make use of the tools provided in the AI Catalogue (and associated Information Classification Scheme).

Some AI tools are marketed specifically as ‘AI Research Assistants’. Examples include Elicit, Scite, and Scholarly. These tools claim to save researchers’ time, e.g. with literature reviews. If you use these, you must, of course, understand their limitations and use them critically. Whilst these tools can carry out searches and superficially summarise findings, they may not correctly evaluate the quality of studies. Ensure you check results carefully and come to your own conclusions on their usefulness and merits: learning how to evaluate research is an important part of your development as a researcher.

The University accepts that researchers may need access to a broader suite of AI tools and approaches for research purposes. If researchers require new Generative AI tools to be added to the Generative AI catalogue, please discuss this with your Faculty IT Partner in the first instance.

Generative AI tools can bring benefits in the context of preparing applications for research funding. However, they also present potential risks in areas such as rigour, transparency, originality, data protection and intellectual property.   

When preparing applications for funding, researchers should ensure that generative AI tools are used responsibly and in accordance with relevant legal and ethical standards. Any outputs from generative AI tools used in funding applications should be acknowledged. Where individual funders apply further specific restrictions, these will be explicitly stated and should be followed. Most funders’ guidance also states that assessors, including reviewers and panellists, must not use generative AI tools as part of their assessment activities, as this would be a breach of confidentiality.

Researchers are encouraged to refer to further guidance available from UK funding agencies: 

Generative AI tools can be used to support your academic writing and assist with proofreading final drafts, but this may not be the best way to improve the quality of your academic writing: AI output can lack the necessary nuance and precision, and can lead to clunky, repetitive writing.

The Library has collated a list of examples of appropriate and inappropriate uses of AI in academic writing, from developing ideas to proofreading. See Appropriate vs. inappropriate uses of GenAI - Understanding AI - LibGuides at University of Exeter

As a general rule, you should never just ‘copy and paste’ uncritically from an AI tool into your work. Practice, however, varies between disciplines and different forms of data: what is appropriate for working with code may not be appropriate if you are writing prose. A programmer using prompts to derive functioning code (which they can critically check by running it themselves) is acting reasonably; a qualitative researcher conducting desk-based research who uses prompts to create a whole chapter is not.

Please remember that when you are using commercial AI tools to support your academic writing, these tools may record your inputs, including data sets, algorithms, videos, images or text prompts. These inputs may be used to train AI algorithms, improve the performance of AI systems or even be sold to third parties for purposes such as data mining.

To protect your research and comply with legal and ethical standards, never upload copyrighted, personal, or sensitive data into an AI tool without appropriate permissions and consent. Consult the University’s information classification scheme to identify appropriate tools to use.

Citations and references 

In some disciplines, it is best practice to provide citations and references for the use of AI in your research outputs. If this applies to you, follow the guidance for your preferred citation style and be consistent in its use; some general guidance is given below.

Styles of referencing or citation for the use of AI are still developing. AI cannot be the author of a work, and the tool cannot take responsibility for outputs. In general, current guidance is to reference any use of AI as a form of private correspondence: like private correspondence, the prompts and responses you enter are unique to you, and AI outputs cannot easily be replicated or verified. Styles will evolve, however, so please check for up-to-date guidance on your preferred citation style (e.g. APA or MLA); see Generative Artificial Intelligence (GenAI) use in assessments - Understanding AI - LibGuides at University of Exeter.

General guidelines for citation are to:  

  • Name the AI tool/platform used. 
  • Give the dates of use and the name of the person who input the prompts.
  • Include details of the prompts input (and, if possible, the responses). 
  • Keep records of the output responses from AI, even if you do not include these in the submission itself. 

Remember: Be clear, open and transparent about your use of AI. DO NOT present any of the responses from AI as your own work. This constitutes academic misconduct which could lead to disciplinary measures being taken against you.  

Mandatory training  

Prior to using an AI tool for the first time, researchers should ensure that they have completed the mandatory training on information governance to confirm they will adhere to University rules. Access to tools may be removed if this training expires.

University resources and training in using AI

Key external guidance