Generative AI Applications Need Privacy and Security Controls
Entering personal or confidential information into generative AI applications without a full understanding of the potential impact raises serious concerns. The companies producing or distributing these applications may not implement proper data privacy and security controls, and most end users are unaware of what happens to their information. Personal data may be shared with other users or released to the public, leading to identity theft or privacy violations, and confidential corporate data could be exfiltrated to third parties or competitors.
Still, only 50 percent of respondents to Cisco’s 2023 survey said they have refrained from entering personal or confidential information into these tools, meaning the other half have likely entered information they would not want shared. Regardless of the application, personal and confidential information must be protected and kept from anyone who shouldn’t have access to it. Generative AI tools can still be used productively, but appropriate guardrails are required to keep sensitive data from being leaked or abused.
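One common form of guardrail is a pre-submission filter that scans prompts for likely personal or confidential data before they leave the organization. The sketch below is a minimal, illustrative Python example; the pattern set and the redact_prompt function are assumptions for demonstration, not any vendor’s actual API, and a production deployment would use a far broader detection layer (names, addresses, customer IDs, proprietary code and so on).

```python
import re

# Illustrative patterns for data an organization may not want sent to a
# third-party generative AI tool. A real deployment would use a much more
# complete and carefully tuned set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely sensitive values with placeholders before the prompt
    leaves the organization. Returns the redacted text plus the categories
    found, so the event can be logged or the request blocked outright."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    text = "Customer jane.doe@example.com disputes a charge on card 4111 1111 1111 1111."
    safe_text, flagged = redact_prompt(text)
    print(safe_text)   # placeholders appear instead of the raw values
    print(flagged)     # ['email', 'credit_card']
```

A filter like this can sit in a browser extension, proxy or API gateway, so it applies no matter which approved tool an employee uses.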
A few leading practices (and, in some jurisdictions, legal requirements) for organizations that use generative AI applications include: limiting the available tools, or the types of information users may input, so that confidential data can’t be accessed by others; declining to do business with or use the tools of generative AI companies that do not adequately protect the privacy or confidentiality of their users; and making privacy, security and terms-of-use policies readily available to the public, so that people can avoid tools that lack appropriate safeguards.
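To make the first of those practices concrete, a tool-and-data allowlist can be enforced in a few lines. The sketch below is a hypothetical deny-by-default policy table in Python; the tool names and data categories are invented for illustration, not a real product’s configuration schema.

```python
# Hypothetical policy table: which generative AI tools are approved, and
# which categories of data users may submit to each one.
APPROVED_TOOLS = {
    "internal-llm": {"public", "internal", "confidential"},  # self-hosted
    "vendor-chatbot": {"public"},  # external SaaS: public data only
}

def is_submission_allowed(tool: str, data_category: str) -> bool:
    """Deny by default: unknown tools and unapproved categories both fail."""
    return data_category in APPROVED_TOOLS.get(tool, set())

assert is_submission_allowed("internal-llm", "confidential")
assert not is_submission_allowed("vendor-chatbot", "confidential")
assert not is_submission_allowed("unvetted-tool", "public")
```

The deny-by-default design matters: a tool or data category that no one has reviewed is automatically blocked rather than automatically permitted.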
Why Trust Is Critical with Generative AI
Trust has always been critical in social and business relations, and even more so today, when technology can make one thing seem like another; deepfakes, for example, falsify copy, photos, video and other data. Who is responsible and accountable for keeping our personal information secure and our private lives private? Governments, businesses and consumers each have a role to play.
Our research shows that 91 percent of organizations say they need to do more to reassure customers about how their data is being used, and Cisco’s 2023 research notes that 60 percent of consumers say they have lost trust in providers over AI use. If customers can’t trust companies’ generative AI policies and practices, stronger regulations may become necessary. Generative AI is still so new, however, that it will take time for governments to adopt and enact new regulations, and even longer for those regulated to implement the requirements.
With so many organizations already getting significant value from generative AI, the technology is clearly here to stay. Consumers are entering data into these tools today despite their concerns about how it might be used or shared. That puts the onus on businesses to adopt strong guardrails that protect the security, fairness and privacy of that data, helping them preserve the hard-earned trust of their customers.