Use Generative AI services safely

Gartner describes Generative AI as a capability that can “learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but doesn’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs”. ChatGPT, for example, is a generative AI service built on a large language model (LLM) developed by OpenAI; it uses deep learning techniques to generate a variety of content, including human-like text responses to a wide range of prompts.

At a glance

 Ensure a third-party security assessment has been completed before using any generative AI service. A list of suppliers that have already been assessed is available on the Third Party Register (SSO required).

 Check the relevant agreements about third-party use of information supplied to the service.

 Only supply information classified up to a level for which the service has been assessed as suitable, and take particular care when processing personal data, as many of these services are not appropriate for this type of use.
AT OXFORD

Risk to confidentiality 

The main information security risk to the University in using generative AI cloud services is a loss of confidentiality. The University has an information classification scheme with three levels of confidentiality: Public, Internal and Confidential.

Information classified as Public carries no confidentiality risk; information classified as Internal or Confidential does. Before any Internal or Confidential information is processed using generative AI services, the following steps must be taken to mitigate that risk.

1. As with all service providers holding or processing University information, anything supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is exposed to threats from cyber criminals and other malicious actors, such as hostile nation states. All cloud-based generative AI tools should therefore be subject to a security risk assessment before use; a minimal sketch of how such a check might be automated follows this list. The Information Security GRC Team provides a Third Party Security Assessment (TPSA) tool to help complete an assessment. A full assessment is generally not possible for free and open-source services, and in such cases they should not be used for confidential information.

2. Information provided to generative AI services may be accessible to the service provider, its partners and sub-contractors, and is likely to be used in some way, for example to train AI models; this is particularly likely when the service is free to use. Check service agreements for conditions on usage and ownership: if these are not explicitly set out, the service carries an unknown risk to confidentiality. If any personal data is processed using generative AI and you do not opt out of the third party's use of that data, this secondary processing must be considered in your data protection by design work. In particular, you must ensure that you have been transparent with those whose data may be fed into the generative AI model and alert them to any secondary processing that may occur.
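
To make these steps concrete, the sketch below shows how a "pre-flight" check might be automated: it refuses prompts classified above the level a service has been assessed for, and crudely redacts obvious personal identifiers. This is an illustration only, not an official University tool; the service names, assessment outcomes and redaction patterns are all hypothetical, and redaction is no substitute for data protection by design.

```python
# Illustrative sketch only, not an official University tool. Service names,
# assessment outcomes and redaction patterns below are all hypothetical.

import re
from enum import IntEnum

class Classification(IntEnum):
    """The University's three confidentiality levels, lowest to highest."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2

# Hypothetical outcomes of completed third-party security assessments (TPSAs);
# the Third Party Register is the real source of truth.
ASSESSED_UP_TO = {
    "example-enterprise-llm": Classification.INTERNAL,
    "example-free-chatbot": Classification.PUBLIC,
}

# Crude patterns for careless inclusions of personal data. Redaction does NOT
# make a prompt safe for personal data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk-phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def preflight(service: str, level: Classification, prompt: str) -> str:
    """Refuse over-classified prompts, then redact obvious identifiers."""
    # Services with no completed assessment default to Public-only use.
    ceiling = ASSESSED_UP_TO.get(service, Classification.PUBLIC)
    if level > ceiling:
        raise PermissionError(f"{service} is not assessed for {level.name} data")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(preflight("example-enterprise-llm", Classification.INTERNAL,
                "Summarise the feedback sent by jane.doe@ox.ac.uk"))
# -> Summarise the feedback sent by [REDACTED email]
```

The lookup deliberately defaults to Public-only use for any service without a completed assessment, mirroring the guidance above that unassessed services should not receive confidential information.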

Data Integrity 

It is important to check generated output, particularly code or other sensitive output, as it may be false or misleading. One potential cyber risk is the "poisoning" of AI training data to manipulate the behaviour of the model and cause it to produce malicious output. This is an emerging threat; we will continue to watch it and other evolving threats and update our advice accordingly.
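
For generated code in particular, it helps to treat the output as untrusted input: confirm it at least parses, and flag calls you have not reviewed before it goes anywhere near execution. Below is a minimal sketch in Python; the denylist is illustrative rather than exhaustive, and passing the screen is not an assurance of safety.

```python
# Illustrative sketch only: a first-pass screen for AI-generated Python code.
# Passing this screen is NOT an assurance of safety; human review is still
# required before any generated code is executed or deployed.

import ast

# Hypothetical denylist of call names that warrant extra scrutiny.
SUSPECT_CALLS = {"eval", "exec", "system", "popen", "__import__"}

def screen(source: str) -> list[str]:
    """Return warnings for generated code: parse errors and suspect calls."""
    try:
        tree = ast.parse(source)  # does the generated code even parse?
    except SyntaxError as err:
        return [f"does not parse: {err}"]
    warnings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Handle bare names (eval(...)) and attributes (os.system(...)).
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in SUSPECT_CALLS:
                warnings.append(f"line {node.lineno}: call to {name}()")
    return warnings

print(screen("import os\nos.system('rm -rf /')"))
# -> ["line 2: call to system()"]
```

Human review remains the real control; a screen like this only catches the most obvious red flags.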

Use of Generative AI to launch cyber attacks 

There is much discussion of the use of generative AI services by criminal groups and other cyber attackers, for example to develop malware, write convincing phishing emails or create deepfake videos. Awareness is key to preventing this type of attack, as is adherence to the University’s information security policy and its underpinning baseline security standard, to ensure a good level of security.

Further support 

Please contact the Information Security GRC Team (grc@infosec.ox.ac.uk) for support on information security issues.

If you intend to provide personal data when using generative AI, seek advice from your local information governance lead or the Information Compliance team (information.compliance@admin.ox.ac.uk).

THE BASICS

Working with third parties

Before you entrust the University's data or information to any partner or supplier, you need to be sure they can and will keep it safe from attack.

In order to ensure that third-party partners and suppliers meet the standards of information security required by the University and your division, department or faculty, you must:

  1. Maintain an up-to-date record of all third parties that access, store or process University information on behalf of your division, department or faculty (an illustrative sketch of such a record follows this list)
  2. Ensure that, for all new agreements with third parties, due diligence is exercised around information security and that contractual arrangements are adequate
  3. Ensure that information security arrangements contained in existing agreements are reviewed and are adequate
  4. Monitor the compliance of third parties against your information security requirements and contractual arrangements
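
As an informal illustration of what an up-to-date third-party record might capture, the sketch below defines one possible record shape. The field names and example values are hypothetical, not a prescribed University schema; follow your division's actual register format.

```python
# Illustrative sketch only: one possible shape for a third-party register
# entry. Field names are hypothetical, not a prescribed University schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class ThirdPartyRecord:
    supplier: str                   # who provides the service
    service: str                    # what they provide
    max_classification: str         # highest level assessed: Public/Internal/Confidential
    assessment_completed: date      # when due diligence was completed
    next_review: date               # existing agreements must be reviewed periodically
    contract_covers_security: bool  # are security arrangements contractual?

record = ThirdPartyRecord(
    supplier="Example AI Ltd",
    service="Example LLM API",
    max_classification="Internal",
    assessment_completed=date(2024, 1, 15),
    next_review=date(2025, 1, 15),
    contract_covers_security=True,
)
```
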
STUDENTS

Please note that artificial intelligence (AI) can only be used within assessments where specific prior authorisation has been given, or when technology that uses AI has been agreed as a reasonable adjustment for a student’s disability (such as voice recognition software for transcriptions, or spelling and grammar checkers).

To find out more about AI and plagiarism, visit the Plagiarism page on the Oxford Students website. This is not an Information Security requirement but a requirement from the Proctors.
