How can CIOs safely unleash generative AI on their company’s data?

Generative artificial intelligence (GenAI) has become a transformative force across many business fields since its emergence in 2022. According to McKinsey, GenAI could unlock savings opportunities of up to 2.6 trillion dollars across different operational functions. Yet many business leaders hesitate to adopt it because of security concerns: IBM reports that 96% of executives believe that adopting generative AI will increase their organization's chances of a security breach within the next three years.

GenAI can refine data faster and to a higher standard, but data is a precious resource that must be safeguarded, and the Large Language Models (LLMs) behind GenAI have been known to compromise its safety. This presents business leaders with a dilemma: how to balance the potential benefits against the security risks.

GenBI, the new generation of business intelligence, aims to resolve this dilemma by combining GenAI with Business Intelligence (BI). GenBI makes BI truly accessible to non-technical users, fulfilling the promise of self-service BI (SSBI) tools: although SSBI tools aimed to democratize data insights, they often left users unsure which visualizations to request or how to prepare their data.

In contrast, GenBI lets users pose queries in natural language and explore data more intuitively. These solutions interpret the user's intent, automatically select the most suitable data representation, and generate complex, dynamic visualizations.
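To make the pattern concrete, here is a minimal sketch of the GenBI flow: the tool sends only schema metadata and the user's question to an LLM, then runs the generated SQL against the local database itself. The table name, `build_prompt`, `answer`, and the `call_llm` parameter are all illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical GenBI query flow: only metadata crosses the boundary
# to the model; the data itself stays inside the BI tool.
import sqlite3

# Schema metadata shared with the LLM (illustrative table).
SCHEMA = "sales(region TEXT, quarter TEXT, revenue REAL)"

def build_prompt(question: str) -> str:
    # The prompt contains column names and types only -- no rows.
    return (
        f"Given the table {SCHEMA}, write one SQLite query that answers: "
        f"{question}. Return SQL only."
    )

def answer(question, conn, call_llm):
    # call_llm stands in for any model API; the generated SQL is
    # executed locally, so results never pass through the model.
    sql = call_llm(build_prompt(question))
    return conn.execute(sql).fetchall()
```

In practice a production tool would also validate the generated SQL (read-only, allowed tables) before executing it, but the privacy property comes from the prompt containing no row data.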

Despite these advantages, GenBI inherits the same security concerns as GenAI. LLMs often retain the data submitted in queries, risking the exposure of sensitive business information. The Cisco 2024 Data Privacy Benchmark Study revealed that 48% of respondents admitted entering private company information into GenAI tools. Consequently, over a quarter of organizations have temporarily banned GenAI tools over privacy and security issues.

Developers are addressing these security issues in different ways. Amazon Q in QuickSight responds to natural language prompts to create interactive data insights while adhering to established roles, permissions, and governance identities, without using customer data to improve its models. Pyramid Analytics protects data privacy by creating context-informed metadata that lets the LLM generate responses without ever accessing the underlying data. Tableau's Einstein Copilot uses data masking to prevent LLMs from accessing sensitive proprietary data.
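The data-masking approach can be sketched as follows: sensitive values are swapped for placeholders before the prompt leaves the organization, and restored in the model's response. The regex patterns and placeholder scheme here are purely illustrative, not Tableau's actual implementation.

```python
# Minimal sketch of data masking around an LLM call: the model sees
# only placeholders, never the real values.
import re

# Illustrative patterns for two kinds of sensitive value.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(text: str):
    """Replace sensitive values with placeholders; return the masked
    text plus a lookup table kept on the trusted side."""
    lookup = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            lookup[placeholder] = match
            text = text.replace(match, placeholder)
    return text, lookup

def unmask(text: str, lookup: dict) -> str:
    """Restore the original values in the model's response."""
    for placeholder, value in lookup.items():
        text = text.replace(placeholder, value)
    return text

masked, table = mask("Contact jane.doe@acme.com about account 123-45-6789.")
# masked: "Contact <EMAIL_0> about account <SSN_0>."
```

Real deployments lean on dedicated PII-detection services rather than hand-written regexes, but the round-trip structure, mask before the call, unmask after, is the same.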
