LLM Trust: Reliability & Risks

May 30, 2025 · Catherine Williams · Tech
News Context
At a glance
  • Organizations are increasingly integrating large language models (LLMs) into their applications and workflows. However, cybersecurity experts are cautioning against hasty adoption, emphasizing the need for thorough risk management.
  • Joseph Steinberg, an expert in cybersecurity, AI, and privacy, advises that organizations introducing AI models should implement clear policies, procedures, technical controls, and complete risk analyses.
  • Steinberg stressed the seriousness of data leaks via user prompts.
Original source: cio.com


AI Security Risks: Experts Warn of Data Leaks in Large Language Models

Key Points

  • Experts urge caution in adopting large language models (LLMs) due to security risks.
  • Data leaks through user prompts are a significant concern.
  • A recent study reveals varying security risks among different AI models.
  • CSOs should consider training data, prompt records, and access control before approving LLMs.

Updated May 30, 2025

Organizations are increasingly integrating large language models (LLMs) into their applications and workflows. However, cybersecurity experts are cautioning against hasty adoption, emphasizing the need for thorough risk management.

Joseph Steinberg, an expert in cybersecurity, AI, and privacy, advises that organizations introducing AI models should implement clear policies, procedures, technical controls, and complete risk analyses. He noted that many organizations are underinvesting in these critical areas, thereby overlooking the potential security challenges posed by LLMs.

Steinberg stressed the seriousness of data leaks via user prompts. He explained that users may unintentionally input sensitive details without recognizing the potential consequences. For example, if multiple individuals within an organization repeatedly query an AI about a specific technology, the AI could infer that the organization uses that technology and lacks advanced expertise.
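
A common mitigation for this class of leak is to screen prompts before they leave the organization. The following is a minimal Python sketch of that idea, offered purely as an illustration rather than a tool cited in the report; the pattern list and the redact_prompt, send_to_llm, and ask_llm names are assumptions made for the example.

    import re

    # Illustrative patterns only; a real deployment would use a tuned DLP
    # engine aware of the organization's own identifiers and secrets.
    SENSITIVE_PATTERNS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
        (re.compile(r"\b(?:api[_-]?key|token|password)\s*[:=]\s*\S+", re.I), "[CREDENTIAL]"),
    ]

    def redact_prompt(prompt: str) -> str:
        """Replace likely-sensitive substrings before the prompt is sent out."""
        for pattern, placeholder in SENSITIVE_PATTERNS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    def send_to_llm(prompt: str) -> str:
        # Placeholder for whichever model API the organization actually uses.
        return f"(model response to: {prompt})"

    def ask_llm(prompt: str) -> str:
        return send_to_llm(redact_prompt(prompt))

With this filter in place, ask_llm("our api_key=sk-12345 stopped working") forwards only "our [CREDENTIAL] stopped working", keeping the secret inside the organization.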

Security Risks Higher Than Expected

Recent research indicates that the security risks associated with LLMs may be greater than initially anticipated. A Cybernews research team analyzed 10 AI models and found that half were rated as having relatively high security risks. OpenAI and 01.AI received a D grade, indicating high risk, while Inflection AI received an F grade, signifying a critical security risk. Anthropic, Cohere, and Mistral were considered low risk.

The team reported that five of the ten companies experienced data leaks. OpenAI, for instance, had 1,140 recorded leaks just nine days before the analysis. Perplexity AI reportedly had 190 corporate credentials stolen in a leak 13 days prior.

An OpenAI spokesman told CSO.com that they welcome AI security research and prioritize user security and privacy. The spokesman added that they transparently disclose security program progress and regularly publish threat intelligence reports, disagreeing with the study’s claims.

Robert T. Lee, a senior researcher at the SANS Institute, commented on the report, stating that most LLMs fail basic security tests. He suggested that the weekly leaks from D- or F-rated models indicate a lack of security consideration by these companies.

Security Advice for CSOs

Lee recommends that CSOs consider the following before approving LLMs:

  • Training data: Understand where the model sourced its data, as indiscriminate web scraping can expose organizational information.
  • Prompt records: Ensure that prompts are not stored on servers, which could lead to future leaks.
  • Credentials: Protect against stolen API keys or weak passwords with multi-factor authentication (MFA) and real-time alerts.
  • Infrastructure: Verify strict TLS settings, timely patch application, and proper network isolation. Configurations that are never updated are prone to attack.
  • Access control: Clearly define permissions by role, record all AI calls, and send logs to SIEM/DLP systems to mitigate shadow AI threats (see the sketch after this list).
  • Incident response training: Establish immediate notification procedures and simulate API key leaks or prompt injection attacks to prepare for real-world scenarios.
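
Several of these items, notably access control, call recording, and SIEM forwarding, can be combined in a thin gateway placed in front of the model. The Python sketch below is a hypothetical illustration of that pattern, not an implementation Lee describes; the ROLE_PERMISSIONS table and the forward_to_siem, model_client, and call_llm names are assumptions for the example.

    import json
    import logging
    from datetime import datetime, timezone

    # Hypothetical role table: which roles may call which model endpoints.
    ROLE_PERMISSIONS = {
        "analyst": {"chat"},
        "engineer": {"chat", "code-assist"},
    }

    logger = logging.getLogger("llm-gateway")

    def forward_to_siem(record: dict) -> None:
        # Stand-in for a real SIEM/DLP integration (syslog, HTTP collector, ...).
        logger.info(json.dumps(record))

    def model_client(endpoint: str, prompt: str) -> str:
        # Placeholder for the organization's actual model client.
        return f"({endpoint} response)"

    def call_llm(user: str, role: str, endpoint: str, prompt: str) -> str:
        """Permission-check and log every AI call, per the checklist above."""
        now = datetime.now(timezone.utc).isoformat()
        if endpoint not in ROLE_PERMISSIONS.get(role, set()):
            forward_to_siem({"event": "denied", "user": user,
                             "endpoint": endpoint, "time": now})
            raise PermissionError(f"role {role!r} may not use {endpoint!r}")
        # Log call metadata (size, not content) so prompts are not retained.
        forward_to_siem({"event": "llm_call", "user": user, "endpoint": endpoint,
                         "prompt_chars": len(prompt), "time": now})
        return model_client(endpoint, prompt)

Logging prompt length rather than prompt text keeps an audit trail for shadow AI detection without building the very prompt archive the checklist warns against.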

Lee advises treating LLMs like bank safes, emphasizing strict verification and avoiding exaggerated expectations, to prevent inadvertently creating backdoors while leveraging AI benefits.

What’s next

Organizations should prioritize robust security measures and continuous monitoring as they integrate large language models, to mitigate potential risks and ensure data protection.
