RESOURCES

Secure AI: Compliance, control and building customer trust

In this resource:

Voluntarily following ethical AI guidelines helps build customer trust. Implementing security controls, human oversight, and transparency is crucial, as is regulatory compliance. The keys to getting there are continuous testing, authentication and enhanced security measures, along with an open, transparent approach.

Voluntary ethical guidelines around the use of AI are important for many reasons — one of which is building customer trust. A KPMG survey found that 63% of respondents in the U.S. are concerned about GenAI’s ability to compromise their privacy and expose their data to breaches and misuse. 

Using security controls, following ethical guidelines and providing human oversight over chatbot activity can help to build trust with customers. But it’s also important to be transparent about how you’re using AI and how customers’ data is being collected, used, stored and shared. 

Ensuring compliance  

The European Union’s Artificial Intelligence Act provides a framework for developing and deploying AI systems in the EU. While there isn’t yet national AI legislation in the U.S., an executive order on AI is a preview of what’s coming. 

But there are also privacy regulations that pertain to AI. In Europe, the EU’s General Data Protection Regulation (GDPR) requires companies to take measures to de-identify and encrypt personal data. In the U.S., the California Consumer Privacy Act (CCPA) has strict standards around data collection and handling.

There are also industry-related privacy laws, such as the Health Insurance Portability and Accountability Act (HIPAA) for healthcare data and the Gramm-Leach-Bliley Act (GLBA) for financial data. Violations can result in hefty fines and reputational damage. 

In other words, incorporating AI into your call center may be more involved than simply adding interactive voice response (IVR), which is based on a limited number of pre-set responses to customer queries. AI systems are continuously ‘learning’ from datasets, so security is paramount not only before, but during and even after you’ve deployed it.

Tips to Keep Your AI Secure  

Continuous testing: Testing should be done throughout the development process to detect issues before they become a problem. But testing doesn’t end when you’ve deployed your AI solution. You’ll want to proactively test and monitor your environment, since the technology is constantly evolving — and cyber criminals are evolving their methods along with it. 

Authentication: Chats can be further protected with authentication, a tried-and-true method that requires a user to confirm their identity through various verification measures, such as delivering a one-time access code via text message. There are different forms of authentication, including two-factor authentication, multi-factor authentication and timeouts. 

Other security controls: To take a ‘defense in depth’ approach, deploy malware and network security, as well as specific tools such as a Web Application Firewall (WAF) that blocks malicious addresses. 

In brief:
  • Ethical AI guidelines build customer trust; 63% of U.S. respondents are concerned about GenAI privacy risks. 
  • Using security controls and human oversight, and being transparent about data practices, is essential. 
  • Compliance with regulations like GDPR, CCPA, HIPAA, and GLBA is crucial for responsible data management. 
  • Continuous testing, authentication, and security measures (e.g., WAF) are key to maintaining AI security. 
SHARE THIS ARTICLE
SELECT YOUR LANGUAGE
SELECT YOUR LANGUAGE