Jan. 25, 2024 | Security

What Are the Privacy Risks of Generative Artificial Intelligence?

Businesses should apply controls so that AI tools do not compromise users’ personal information, and users must be careful not to inadvertently share private data.

Innovative technologies can offer amazing new capabilities, yet they can also be fraught with unforeseen challenges. With the recent rise of generative artificial intelligence, the power and potential to change the way we do business seemed to appear overnight. Yet one thing that hasn’t changed is the importance of protecting the privacy and confidentiality of users’ data when implementing this unique technology.

According to the Cisco 2024 Data Privacy Benchmark Study, 79 percent of businesses are already getting significant value from generative AI. Organizations and individuals have rushed to use generative AI tools for tasks such as generating documents, creating and reviewing code, and driving productivity in general. But in doing so, many are entering data about their customers, employees and internal processes into these tools.

Consumers who use generative AI act similarly. According to a separate study, the Cisco 2023 Consumer Privacy Survey, 39 percent have entered work information, and over 25 percent have entered personal information, account numbers and even health and ethnicity information. At the same time, 88 percent of these same consumers said they are concerned that the data they entered could be shared with others.

Generative AI Applications Need Privacy and Security Controls

Entering personal or confidential information into generative AI applications without fully understanding the potential impact raises serious concerns. The companies that produce or distribute these applications may not implement proper data privacy and security controls, and most end users are likely unaware of what will happen to their information. Personal data may be shared with other users or released to the public, leading to identity theft or other privacy violations, and confidential corporate data could be exfiltrated to third parties or competitors.

Still, only 50 percent of respondents to Cisco’s 2023 survey said they have refrained from entering personal or confidential information into the tools. That means that the other 50 percent have likely entered information that they would not want shared. Regardless of the application, it’s critical that personal or confidential information be protected and not shared with anyone who shouldn’t have access to it. Generative AI tools can be used, but appropriate guardrails are required to protect certain data from being leaked or abused.
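
At its simplest, a guardrail is a screening step that sits between the user and the AI tool. The Python sketch below is a minimal, hypothetical illustration: it scans a prompt for patterns that resemble personal data and blocks it before the text ever reaches an external service. The patterns and the check_prompt helper are invented for this example; production systems typically rely on dedicated data loss prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real deployments use far more robust detection
# (named-entity recognition, checksum validation, contextual rules).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the categories of likely personal data found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize: Jane's SSN is 123-45-6789; reach her at jane@example.com."
findings = check_prompt(prompt)
if findings:
    # Stop here, before the text reaches an external AI service.
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt passed the guardrail")
```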

A few leading practices (and, in some jurisdictions, requirements) for organizations that use generative AI applications include:

- Limiting the available tools, or the types of information users may input, so that confidential data can't be accessed by others (a minimal version of this control is sketched below)
- Declining to do business with, or use the tools of, generative AI companies that do not adequately protect the privacy or confidentiality of their users
- Making their privacy, security and terms-of-use policies readily available to the public, so that people can avoid tools that lack appropriate safeguards
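
The first practice above, restricting which tools are approved and what categories of data may go into each, can be enforced with a simple policy check. The sketch below is hypothetical: the tool names, data categories and is_submission_allowed helper are all invented for illustration, and a real deployment would tie such a policy into identity management and data loss prevention systems.

```python
# A hypothetical policy table mapping approved AI tools to the data
# categories users may submit to them. All names are invented for
# illustration, not drawn from any real product.
APPROVED_TOOLS = {
    "internal-copilot": {"code", "public_docs", "internal_docs"},
    "public-chatbot": {"public_docs"},
}

def is_submission_allowed(tool: str, data_category: str) -> bool:
    """Allow a submission only to an approved tool, for a permitted category."""
    allowed = APPROVED_TOOLS.get(tool)
    return allowed is not None and data_category in allowed

print(is_submission_allowed("public-chatbot", "customer_records"))  # False
print(is_submission_allowed("internal-copilot", "code"))            # True
```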

Why Trust Is Critical with Generative AI

Trust has always been critical in social and business relations, and it matters even more today, when technology can make one thing seem like another. Take deepfakes, for example, in which text, photos, video and other data are falsified. Who is responsible and accountable for keeping our personal information secure and our private lives private? Governments, businesses and consumers each have a role to play.

Our research shows that 91 percent of organizations say that they need to do more to reassure their customers about how their data is being used. And Cisco’s 2023 research notes that 60 percent of consumers say that they have lost trust in providers over AI use. If customers can’t trust companies’ generative AI policies and practices, stronger regulations may be necessary. However, as generative AI is still so new, it will take time for governments to adopt and enact new regulations — and even longer for those regulated to implement the requirements. 

With so many organizations already getting significant value from generative AI, the technology is clearly here to stay. Consumers today are entering data into these tools, despite their concerns about how the data might be used or shared. This puts the onus on businesses to adopt strong guardrails that protect the security, fairness and privacy of the data, helping them to preserve the hard-earned trust of their customers.
