Usage Guidelines for ChatGPT
Released in November 2022, ChatGPT has taken the world by storm. ChatGPT is a language model, created by OpenAI, that is trained to produce text outputs in response to user inputs. From poetry to emails, ChatGPT creates clear and humanlike writing in record time. It was trained by analyzing a large amount of data from across the internet, combined with testing throughout its development to refine the model. While it can be a very helpful tool, it raises privacy, copyright, accuracy, and international concerns. As AI becomes an even larger part of our lives, it is important for companies to create guidelines governing employee use of ChatGPT and other AI tools, taking into account the issues below.
OpenAI reviews users' conversation histories and inputs. Thus, it is important that employees do not input any proprietary or confidential information into ChatGPT. OpenAI may also use conversation histories to further train and improve the model. Essentially, employees should only put information into ChatGPT that can be shared publicly.
Users should exercise extreme caution when using ChatGPT to create computer code that will be used by third parties or customers. Properly reviewed AI-generated code may be useful for internal purposes. However, any copyrightable works (including software code) generated by ChatGPT may not be owned by you or your organization. In other words, what you think is proprietary software may have no owner at all if ChatGPT had a hand in its creation.
Currently, the United States Copyright Office does not recognize works made by non-human authors. Thus, software created using ChatGPT may not be protectable by copyright. Further, ChatGPT's outputs are based on its training data, so a given output could be considered a derivative work of existing material and may therefore infringe a third party's copyright.
Another risk is that ChatGPT could include snippets of open source code (unknown to the user) that would trigger license requirements or restrictions and, in certain situations, could require that otherwise proprietary software be released as open source.
Many users have pointed out inaccuracies in the outputs they have received from ChatGPT. Sometimes such inaccuracies are due to vague inputs. Other times, ChatGPT has included fabricated information. Recently, lawyers were sanctioned for citing fake opinions that had been generated by ChatGPT. Mata v. Avianca Inc., No. 22-cv-141 (PKC), 2023 WL 411965 at *15 (S.D.N.Y. June 22, 2023). Note: This was written by a human, and therefore the preceding is a real citation. ChatGPT's outputs read with a sense of authority, so it is important that an expert review the information before it is relied upon.
If your company operates internationally, it is important to keep track of how other countries and international bodies regulate AI. For example, the United Kingdom extends copyright protection to computer-generated works, and the European Commission has developed a legal framework for AI.