If you want to test a new application, the best data to test it on is live data! Now, I’m sure there are procedures in place to prevent that. I’m sure anonymized data would be used instead. But it became apparent a few years ago that some members of staff were copying live data off the mainframe and using it to test cloud applications. Again, hopefully this doesn’t happen anymore. However, there is apparently a new problem facing mainframe security teams, and that is using live data with artificial intelligence (AI) applications.
It was the rapid increase in people working from home during the pandemic that led to a rise in shadow IT – people using applications to get work done, but applications that hadn’t been vetted by the IT security team. A recent survey has found that AI is now giving rise to another massive security issue. This becomes even more of a concern with the current popularity of DeepSeek V3 and the announcement of Alibaba’s Qwen 2.5, both AI models originating from China.
Cybsafe’s The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025 found that, worryingly, almost 2 in 5 (38%) professionals admit to sharing personal data with AI platforms without their employer’s permission. So, what data is being shared most often, and what are the implications? That’s what application security SaaS company Indusface looked into. Here’s what they found.
One of the most common categories of information shared with AI is work-related files and documents. Over 80% of professionals in Fortune 500 enterprises use AI tools, such as ChatGPT, to assist with tasks such as analysing numbers and refining emails, reports, and presentations [2].
However, 11% of the data employees paste into ChatGPT is strictly confidential (internal business strategies, for example), and employees don’t fully understand how the platform processes this data. Staff should remove sensitive data before entering prompts into AI tools [3].
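To give a flavour of what that might look like in practice, here’s a minimal sketch – assuming a Python workflow, and with purely illustrative regex patterns – of stripping obvious sensitive values out of text before it goes anywhere near an AI tool:

```python
import re

# Illustrative patterns only; a real deployment would need a much broader
# rule set (or a dedicated data loss prevention / redaction tool).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Please summarise this email from jane.doe@example.com, tel +44 20 7946 0958."
print(redact(prompt))
# -> "Please summarise this email from [EMAIL REDACTED], tel [PHONE REDACTED]."
```

Obviously, a handful of regular expressions is no substitute for proper data loss prevention tooling, but it illustrates the principle: redact before you share.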
Personal details such as names, addresses, and contact information are being shared with AI tools daily. Shockingly, 30% of professionals believe that protecting their personal data isn’t worth the effort, which indicates a growing sense of helplessness and a lack of training.
Access to cybersecurity training has increased for the first time in four years, with 1 in 3 (33%) participants using it and 11% having access but not utilizing it. For businesses to remain safe from cybersecurity threats, it is important to carry out cybersecurity training for staff, upskilling them on the safe use of AI [1].
Client information, including data that may fall under regulatory or confidentiality requirements, is often being shared with AI by professionals.
For business owners or managers using AI to handle employee information, it is important to be wary of sharing bank account details, payroll data, addresses, or even performance reviews. Doing so can violate contractual policies and leave the organization vulnerable to legal action if sensitive employee data is leaked.
Large language models (LLMs) are the crucial AI models behind many generative AI applications, such as virtual assistants and conversational AI chatbots. They are often used via OpenAI models, Google Cloud AI, and many more.
However, the data that helps train LLMs is usually sourced by web crawlers scraping and collecting information from websites. This data is often obtained without users’ consent and might contain personally identifiable information (PII).
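For context, about the only consent signal a crawler can honour is a site’s robots.txt file. Here’s a minimal Python sketch of a crawler checking it before fetching a page – the user agent string and URLs are purely illustrative:

```python
from urllib import robotparser

# Hypothetical crawler identity and target page, for illustration only;
# real training-data crawlers publish their own user agent strings.
USER_AGENT = "ExampleTrainingBot"
TARGET_URL = "https://www.example.com/private/profile.html"

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

if rp.can_fetch(USER_AGENT, TARGET_URL):
    print("robots.txt allows fetching this page")
else:
    print("robots.txt disallows this page; a well-behaved crawler skips it")
```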
Other AI systems that deliver tailored customer experiences might collect personal data, too. It is recommended to ensure that the devices used when interacting with LLMs are secure, with full antivirus protection, to safeguard information before it is shared, especially when dealing with sensitive business financial information.
AI models are designed to provide insights, not to store passwords securely, and sharing credentials with them could result in unintended exposure, especially if the platform does not have strict privacy and security measures.
Indusface recommends that individuals avoid reusing passwords across multiple sites, because a single breach could then compromise multiple accounts. Using strong passwords with multiple symbols and numbers has never been more important, in addition to activating two-factor authentication to secure accounts and mitigate the risk of cyberattacks.
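If you’re wondering what a strong password looks like in practice, here’s a short sketch using Python’s standard secrets module – the length and character classes are arbitrary choices for illustration, not a recommendation from Indusface:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain at least one of each character class.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())  # e.g. 'k#R7w!Qz2@pL9&xB' (output varies)
```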
Developers and employees increasingly turn to AI for coding assistance; however, sharing company codebases can pose a major security risk because they are a business’s core intellectual property. If proprietary source code is pasted into AI platforms, it may be stored, processed, or even used to train future AI models, potentially exposing trade secrets to external entities.
Businesses should, therefore, implement strict AI usage policies to ensure sensitive code remains protected and is never shared externally. Additionally, using self-hosted AI models or secure, company-approved AI tools can help mitigate the risks of leaking intellectual property.
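As a sketch of what ‘company-approved’ might mean in code, here’s a hypothetical wrapper that only ever sends prompts to an internal, self-hosted endpoint – the URL, payload shape, and confidentiality marker are all made up for illustration:

```python
import json
import urllib.request

# Hypothetical internal, self-hosted model endpoint; the URL, the payload
# shape, and the "CompanyConfidential" marker are purely illustrative.
INTERNAL_ENDPOINT = "http://llm.internal.example/v1/generate"
PROPRIETARY_MARKER = "// CompanyConfidential"

def ask_internal_model(prompt: str) -> str:
    """Send a prompt to the self-hosted model only, never to a public API."""
    if PROPRIETARY_MARKER in prompt:
        # Policy decision: flag clearly marked confidential code even internally.
        print("Warning: prompt contains code marked confidential.")
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        INTERNAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["text"]
```

The point is simply that the routing decision is made in one place, where a policy can be enforced, rather than left to whichever browser tab a developer happens to have open.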
The sources given for their research are:
- [1] Cybsafe | The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025
- [2] Masterofcode | MOCG Picks: 10 ChatGPT Statistics Every Business Leader Should Know
- [3] CyberHaven | 11% of data employees paste into ChatGPT is confidential