Lawyer provides tips for businesses using ChatGPT securely after Italy ban

With Italy’s recent ban on ChatGPT and the growing concerns surrounding privacy, Richard Forrest, Legal Director at the UK’s leading data breach law firm, Hayes Connor, provides practical advice to businesses on how to use the technology securely within their workplace.

The recent ban, and the lack of certainty over the chatbot’s GDPR compliance, has further intensified ongoing concerns at Hayes Connor that business employees are negligently disclosing confidential data. This is compounded by recent statistics revealing that sensitive data makes up 11% of what employees submit to ChatGPT.

Hayes Connor’s main concern is how large language models (LLMs) like ChatGPT use the personal data entered by users for training purposes, meaning that information could be regurgitated in responses further down the line.

For example, a doctor might input a patient’s information into the chatbot, including their name, address, NHS number and medical condition, and ask it to write a personalised email to the patient to save time. Is there a risk that ChatGPT would disclose this information if a later user asked about that patient?
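
Forrest’s example points to a practical mitigation: strip or pseudonymise identifying details locally before anything is sent to the chatbot, and only re-insert them once the draft comes back. The short Python sketch below is one assumed way of doing this; the regular expression, the placeholder tokens and the redact_patient_details helper are illustrative assumptions rather than part of Hayes Connor’s guidance or any particular chatbot’s API.

```python
import re

# Illustrative sketch only: a local pre-processing step that swaps obvious
# patient identifiers for placeholders before a prompt is sent to a chatbot,
# and restores them in the returned draft. Patterns and names are assumptions.

NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")  # common 3-3-4 NHS number layout

def redact_patient_details(text: str, patient_name: str) -> tuple[str, dict]:
    """Replace the patient's name and NHS number with placeholders.

    Returns the redacted text plus a mapping so the real values can be
    re-inserted locally after the chatbot returns its draft.
    """
    mapping = {}
    redacted = text

    for match in NHS_NUMBER.findall(redacted):
        mapping["[NHS_NUMBER]"] = match
        redacted = redacted.replace(match, "[NHS_NUMBER]")

    if patient_name in redacted:
        mapping["[PATIENT_NAME]"] = patient_name
        redacted = redacted.replace(patient_name, "[PATIENT_NAME]")

    return redacted, mapping

def restore_details(draft: str, mapping: dict) -> str:
    """Put the real values back into the chatbot's draft, locally."""
    for placeholder, value in mapping.items():
        draft = draft.replace(placeholder, value)
    return draft

# Example: the prompt the doctor *would* have typed, redacted before sending.
prompt = ("Write a short email to John Smith, NHS number 943 476 5919, "
          "explaining his follow-up appointment for hypertension.")
safe_prompt, mapping = redact_patient_details(prompt, "John Smith")
print(safe_prompt)  # identifiers replaced with [PATIENT_NAME] / [NHS_NUMBER]
# ...send safe_prompt to the chatbot, then:
# print(restore_details(chatbot_reply, mapping))
```

Because the mapping never leaves the local machine, the chatbot only ever sees placeholder tokens, so nothing identifiable can end up in its training data.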

For businesses using ChatGPT for admin tasks, confidentiality agreements with clients may be put at risk if employees enter sensitive client information into the chatbot. The same applies to trade secrets, including source code and business plans, meaning employees could be in breach of their contracts.

As such, Forrest urges all businesses that use ChatGPT to implement measures to ensure their employees remain GDPR compliant. He provides the following actionable tips for businesses (a brief sketch of how such a policy check might be automated follows the list):

  1. Assume that anything you enter could later be accessible in the public domain
  2. Don’t input software code or internal data
  3. Revise confidentiality agreements to include the use of AI
  4. Create an explicit clause in employee contracts
  5. Hold sufficient company training on the use of AI
  6. Create a company policy and an employee user guide
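
Forrest’s tips are policy measures rather than technical ones, but tips 1, 2 and 6 can be backed up with a simple automated check before a prompt leaves the company network, as noted above. The sketch below is a minimal, assumed illustration in Python: it flags prompts containing patterns that often signal confidential material, such as NHS-style numbers, email addresses, internal markers or fragments of source code. The pattern list and the check_prompt function are illustrative assumptions a business would adapt to its own data, not part of Forrest’s guidance.

```python
import re

# Illustrative sketch of a pre-submission check supporting tips 1, 2 and 6:
# flag prompts that appear to contain confidential material before they are
# pasted into ChatGPT. The patterns below are assumptions a business would
# tailor to its own data, not an exhaustive or authoritative list.

BLOCKLIST_PATTERNS = {
    "possible NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "email address":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "internal marker":     re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY|TRADE SECRET)\b", re.I),
    "source code":         re.compile(r"(def |class |#include|import )"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should not be submitted as-is."""
    return [reason for reason, pattern in BLOCKLIST_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    issues = check_prompt("Please summarise this CONFIDENTIAL business plan for j.doe@example.com")
    if issues:
        print("Blocked - review before submitting:", ", ".join(issues))
    else:
        print("No obvious confidential markers found.")
```

A check like this is deliberately conservative and cannot catch everything, so it complements rather than replaces the training and contractual measures Forrest recommends.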

Forrest says, “The news that ChatGPT is now banned in Italy demonstrates the importance of compliance measures for companies operating in Europe.

“There have been growing concerns over the nature of LLMs, like ChatGPT, posing issues around the integration and retrieval of data within these systems. If these services do not have appropriate data protection and security measures in place, then sensitive data could become unintentionally compromised. 

“Businesses that use ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage, and legal action taken against them. As such, usage as a workplace tool without sufficient training and regulatory measures is ill-advised.”
