
Can Using ChatGPT Invalidate a Company's Patent?

ChatGPT and its artificial intelligence capabilities have been all over the news recently. However, companies should be careful about allowing employees to access ChatGPT for work-related purposes, especially employees who work closely with or develop the company's technology.

For something to be patentable in the United States, one requirement is that the invention is novel. This means a patent application must be on file with the U.S. Patent & Trademark Office before any prior art that discloses the invention becomes publicly available. Publicly available prior art is most commonly another patent application filed before the Applicant's patent application, but it also includes public use, an offer for sale, and commercial exploitation, among other activities. An Applicant is also given a one-year grace period for its own prior art in the United States, but this grace period doesn't exist in most foreign jurisdictions. Stated differently, an Applicant loses the ability to patent its invention in many foreign countries as soon as the invention is publicly available.

ChatGPT is a publicly accessible tool that relies on various data sources, including historical data from past uses of the tool. Submitting inventive disclosure material to ChatGPT may contribute that material to the public domain as training data for future ChatGPT outputs. While there is no case law on this issue yet, such a contribution likely invalidates patent rights to the inventive concepts in those disclosure materials in many foreign countries and starts the one-year time bar in the United States.

In summary, a company should avoid using tools like ChatGPT with any of its confidential or inventive materials before filing a patent application because the company may unintentionally lose its right to protect its intellectual property.

As one lawyer put it, according to Insider: “This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material).”