The potential for data loss can halt GenAI initiatives. Existing DLP approaches have struggled to protect unstructured sensitive data such as intellectual property and source code. Security teams need to facilitate secure deployments rather than act as a “department of no” that stops projects over security concerns.
Harmonic Security takes an approach well suited to detecting and blocking unstructured sensitive data while avoiding alert noise. Although AI is helping improve security solutions, large language models (LLMs) can be imprecise and introduce latency that degrades the user experience. The small language models in the Harmonic solution provide both precision and low latency, enabling inline blocking where appropriate.
With the new version of its tool, Harmonic claims a 30x increase in AI tool coverage to keep pace with the expanding AI tool ecosystem. As new AI tools emerge, whether sanctioned or risky, Harmonic provides visibility so security leaders can make sound policy decisions.
Harmonic has led the industry in addressing GenAI data loss concerns and continues to do so with features such as support for key file types and awareness of the accounts in use. For example, a sanctioned work account on ChatGPT Enterprise may be fine, while using a personal Gmail account with an unapproved tool may violate policy. Harmonic lets users shape their response based on this context.