ChatGPT is the new DNA of shadow IT, exposing organizations to risks no one anticipated. IT and cybersecurity leaders need to find a way to capitalize on its speed without sacrificing security. OpenAI reports that enterprise adoption is surging, with employees and departments at over 80% of Fortune 500 companies having accounts.
According to a recent Harvard University study, enterprise workers gain a 40% performance boost from ChatGPT. A second study, from MIT, found that ChatGPT reduced skill inequalities and accelerated document creation while enabling enterprise workers to use their time more efficiently. ChatGPT is helping enterprise workers get more done in less time, yet workers are reluctant to reveal what they're using the tool for: 70% haven't told their bosses about it.
ChatGPT’s greatest risk is having employees accidentally share intellectual property (IP), confidential pricing, cost, financial analysis and HR data with large language models (LLMs) accessible to anyone. The memory of Samsung and other companies accidentally divulging confidential data is still fresh in the minds of security leaders and senior management.
Given how urgent the issue is to solve and how it all pivots on guiding user behavior, many organizations are looking to generative AI-based approaches to solve the security challenge. That’s why there’s growing interest in generative AI Isolation and comparable technologies to keep confidential data out of ChatGPT, Bard and other gen AI sites. Every business wants to balance the competitive efficiency, speed, and process improvement gains ChatGPT provides with a solid strategy for reducing risk.
VentureBeat spoke with Alex Philips, CIO at National Oilwell Varco (NOV), last year about his company’s approach to generative AI. Philips said he has taken on the role of educating his board on the advantages and risks of ChatGPT and generative AI in general, periodically briefing members on the current state of gen AI technologies, including how NOV can extract the most value from the emerging technology with the least risk. This ongoing education process is helping to set expectations about the technology and how NOV can put guardrails in place to ensure Samsung-like leaks never happen.
There’s an emerging series of new technologies being introduced to take on the challenge of securing ChatGPT sessions without sacrificing speed. Cisco, Ericom Security by Cradlepoint’s Generative AI isolation, Menlo Security, Nightfall AI, Wiz and Zscaler are a few of the most notable new systems on the market that aim to help security leaders solve this challenge.
How vendors are taking on the challenge
Each of the six major providers of solutions aimed at keeping confidential data out of ChatGPT sessions takes a different approach to protecting organizations from having their confidential data shared. The two gaining the most traction are Ericom Security by Cradlepoint’s Generative AI Isolation and Nightfall for ChatGPT.
Cradlepoint relies on a clientless architecture in which user interactions with generative AI sites are executed in a virtual browser inside the Ericom Cloud Platform. Cradlepoint says this design allows data loss protection and access policy controls to be applied within its cloud platform. Routing all traffic through the proprietary cloud platform prevents personally identifiable information (PII) or other sensitive data from being submitted to generative AI sites like ChatGPT. Ericom Security by Cradlepoint’s approach is unique in how it delivers least-privilege access through its cloud architecture.
Nightfall AI offers three solutions for organizations that want to protect their confidential data from being shared with ChatGPT and comparable sites: Nightfall for ChatGPT, Nightfall for LLMs and Nightfall for software-as-a-service (SaaS). Nightfall for ChatGPT is a browser-based solution that scans and redacts sensitive data in real time before it can be exposed. Nightfall for LLMs is an API that detects and redacts sensitive data used in training LLMs. Nightfall for SaaS integrates with popular SaaS applications to prevent sensitive information from being exposed within various cloud services.
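Nightfall’s detection logic is proprietary, but the scan-and-redact idea is straightforward. As an illustration only (the pattern names, regexes and redaction tokens below are assumptions, not Nightfall’s API), a minimal DLP-style redaction pass over a prompt might look like this:

```python
import re

# Illustrative detectors only; production DLP engines use ML classifiers,
# checksum validation (e.g., Luhn for card numbers) and context scoring.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected sensitive span with a [REDACTED:<type>] token
    before the prompt is forwarded to a generative AI site."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [REDACTED:EMAIL], SSN [REDACTED:SSN].
```

A browser extension or inline proxy would apply a pass like this to outbound form submissions, so the LLM only ever receives the sanitized text.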
Gen AI is defining the future of knowledge now
Gen AI is the knowledge engine every business has been waiting for. VentureBeat has learned that outright banning ChatGPT, Bard and other generative AI-based chatbots has the opposite effect. Shadow AI flourishes when IT attempts to stop its use, driving employees to download new AI apps and adding to the challenge of keeping confidential data safe.
It’s good to see more CIOs and CISOs taking the knowledge-based approach of piloting, then putting into production, gen AI-based systems that can eliminate the risk at the browser level. Shielding the organization from sharing data via a secured cloud architecture, as Ericom Security by Cradlepoint does, provides the scale larger enterprises need to protect thousands of employees from accidentally sharing confidential data.
The goal needs to be turning the rapid pace of innovation happening in gen AI into a competitive advantage. It’s IT’s and security’s job to make that happen. CISOs and security teams need to stay updated on the latest technologies and techniques to safeguard confidential, PII and patent-based data. Knowing what the options are for protecting data and how they change is crucial to staying competitive as a knowledge-based business.
“Generative AI websites provide unparalleled productivity enhancements, but organizations must be proactive in addressing the associated risks,” said Gerry Grealish, Vice President of Marketing, Ericom Cybersecurity Unit of Cradlepoint. “Our Generative AI Isolation solution empowers businesses to attain the perfect balance, harnessing the potential of generative AI while safeguarding against data loss, malware threats, and legal and compliance challenges.”
By Louis Columbus
Original source: VentureBeat