Workplace sexual harassment has come to the fore this year, with a number of well-known men in the tech, news and entertainment industries facing serious accusations. In response, businesses may be wondering what technology could spot workplace misconduct before it escalates to a lawsuit against the company.
While its uses are limited today, artificial intelligence might soon help. Advances in tracking and predictive software could enable AI to sniff out patterns of unacceptable workplace behavior and retaliation.
When this potential becomes reality, however, businesses must think carefully about using AI to monitor employees for clues that indicate misconduct. Whether AI should play a role in flagging workplace sexual harassment and other misbehavior is not an easy question, nor is there an easy answer.
What’s Legal in the Realm of AI Monitoring?
Companies don’t have carte blanche to monitor employees. Generally, data generated on company equipment (such as computers, smartphones, tablets and voicemail) belongs to the employer and can be scrutinized to varying degrees. But keylogging or screen-capture software can infringe on employees’ privacy when an employee uses a company computer to access personal information.
Additionally, the National Labor Relations Board warns that employers may not conduct email monitoring of employees who are discussing working conditions, including pay, hours or other terms of employment. Though it has yet to be tested by the judicial system, the same ruling could apply to texts and instant messages.
In several states, workers are protected from employers’ demands for usernames, passwords or other means of accessing personal accounts. There are also some state laws requiring all parties to consent before any audio recordings are made.
Video monitoring is allowed in most states if it does not include areas such as locker rooms, break areas or restrooms and does not monitor employees engaged in union-related activities. It is also a best practice to notify employees of surveillance in business areas where video monitoring is permissible.
One tough question is, do businesses really want all that data? And if they do, are there enough HR staff to review it? Who determines what behavior crosses over to misconduct and what the response should be?
Say, for example, that AI flags an off-color joke in an email and alerts HR. Should that trigger an immediate response? Or should HR wait for more such activity? How much is enough to investigate or start disciplinary action?
These are not rhetorical questions, as AI-gathered intelligence could expose a company to legal claims if it does not properly act upon what it learns. A company that was alerted to misconduct and failed to respond could face punitive damages, because it can no longer argue it lacked knowledge of the behavior.
Update and Enforce Workplace Behavior Policies
It’s not far-fetched to imagine, in the near future, a shop floor outfitted with a digital assistant ready to answer questions about unfulfilled orders, market closings and shipment tracking, as well as listen to employee conversations and parse whether the content is acceptable.
For instance, the device could audibly warn that an off-color comment had crossed a line drawn by the business, set up a training session for the offending employee and file a report with HR.
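The flag-warn-escalate workflow described above can be sketched in a few lines of code. This is a minimal illustration only: the phrase list, the escalation threshold of three incidents, and the action names are all hypothetical, and a production system would rely on a trained language classifier rather than keyword matching.

```python
from dataclasses import dataclass, field

# Hypothetical phrase list for illustration; a real system would use a
# trained classifier, not literal keyword matching.
FLAGGED_PHRASES = {"off-color joke", "inappropriate remark"}

# Assumed policy: warn on the first incidents, file an HR report on the third.
ESCALATION_THRESHOLD = 3

@dataclass
class ConductMonitor:
    # Running count of flagged incidents per employee.
    incidents: dict = field(default_factory=dict)

    def review(self, employee: str, message: str) -> str:
        """Return the action taken for this message: ok, warn, or file_hr_report."""
        if not any(phrase in message.lower() for phrase in FLAGGED_PHRASES):
            return "ok"
        count = self.incidents.get(employee, 0) + 1
        self.incidents[employee] = count
        if count >= ESCALATION_THRESHOLD:
            return "file_hr_report"  # e.g. open a case and schedule training
        return "warn"                # e.g. an audible or on-screen warning
```

Even a toy version like this makes the policy questions concrete: someone still has to decide what belongs on the flagged list, where the threshold sits, and what each action actually triggers in HR.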
Perhaps, though, instead of shrinking the “human” in human resources, companies should enforce and update policies on workplace behavior. Anti-harassment and corporate expectations training should be provided for everyone, from the CEO down. Supervisors must immediately report any harassment, discrimination or retaliation to HR.
Businesses should also be prompt and thorough in investigating complaints and following up. After all, good management isn’t about managing problems — it’s about managing people.