1. AI systems are susceptible to manipulated data.
Cyberattackers are using AI to reconstruct and manipulate data points to prey on users’ weaknesses, explained Kumar. This means that if IT leaders pull from AI-generated data, a hacker may already have seeded it with false information, knowing users will find it.
Alina Oprea, an associate professor of computer science at Northeastern University, described this tactic as “data poisoning” in a recent article in The Economist. It occurs when AI systems pull training data from the open web, where attackers can tamper with it, leaving those systems more susceptible to manipulation.
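To make the tactic concrete, here is a minimal sketch of one simple form of data poisoning, label flipping, in which an attacker mislabels a slice of the training data. The synthetic dataset, scikit-learn model and 30 percent poisoning rate are illustrative assumptions, not details from Oprea’s or Kumar’s research.

```python
# A minimal sketch of label-flipping data poisoning. The dataset,
# model and poisoning rate are illustrative assumptions, not details
# from the research cited above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for data scraped from the open web.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, rate, rng):
    """Flip the labels of a random fraction of training examples,
    simulating an attacker who plants mislabeled data."""
    poisoned = labels.copy()
    n_flip = int(rate * len(poisoned))
    flip_idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[flip_idx] = 1 - poisoned[flip_idx]  # binary labels: 0 <-> 1
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, rate=0.3, rng=rng)
)

print(f"clean accuracy:    {clean_model.score(X_test, y_test):.2f}")
print(f"poisoned accuracy: {poisoned_model.score(X_test, y_test):.2f}")
```

On most runs the poisoned model scores noticeably worse on the same test set, which is exactly the kind of silent degradation a poisoning attack aims for.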
2. AI standards are written in a vague way.
According to Kumar’s research, software engineers find that the language of AI standards lacks clarity and precision, especially regarding ethical implications.
“When the language of tech standards is so vague, there are more suitcase words that are projected onto these systems that may be problematic,” he said.
Kumar noted that although many companies have imposed their own standards on AI models, preventing them from answering unethical or potentially dangerous prompts, more training is needed.
There are also privacy and sourcing standards to consider. For example: Where is the information coming from? Who has the right to use it? Who has verified it?
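To make those questions concrete, here is a minimal sketch of a provenance record that a data pipeline could attach to each item before it reaches a model. The field names and the vet() check are illustrative assumptions, not part of any published standard.

```python
# A minimal sketch of recording data provenance. The field names
# and the vet() check are illustrative assumptions, not a standard.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ProvenanceRecord:
    source_url: str    # Where is the information coming from?
    license: str       # Who has the right to use it?
    verified_by: str   # Who has verified it?
    verified_on: date

def vet(record: ProvenanceRecord) -> bool:
    """Reject items whose sourcing questions have no answer."""
    return all([record.source_url, record.license, record.verified_by])

item = ProvenanceRecord(
    source_url="https://example.com/dataset",
    license="CC-BY-4.0",
    verified_by="data governance team",
    verified_on=date(2024, 1, 15),
)
assert vet(item)
```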