Artificial intelligence (AI) is at the forefront when it comes to organizations radically increasing efficiency through technology. However, there are hesitations around the use of AI because of the potential consequences to cyber security — and rightfully so.
Here we’ll get you up to speed on what AI is, its capabilities and how to take advantage of it, all while keeping your organization safe from evolving cyber threats.
Defining AI and LLMs
AI refers to computer systems’ ability to perform tasks that would typically require human reasoning, like learning, comprehension, problem solving and decision making. It has the capacity to outperform humans, as in the case of AlphaZero — an AI computer program created by Google DeepMind that learns complex games, like chess, with superhuman skill and beats its human counterparts.
A large language model (LLM) is a type of AI that is designed to understand and generate human language through processing large amounts of text data. It is used for tasks like answering questions, producing text and translating language. You’re probably familiar with ChatGPT, which is built on an LLM and allows you to correspond with the system.
Although generative AI (GenAI) models are useful, they can pose potential risks to a business.
Security Risks With GenAI
According to the Open Worldwide Application Security Project (OWASP), “Organizations are entering uncharted territory in securing and overseeing GenAI solutions. The rapid advancement of GenAI also opens doors for adversaries to enhance their attack strategies, a dual challenge of defense and threat escalation.”[1]
Some security threats OWASP identifies include:
- Prompt injection
A bad actor’s malicious input influences the LLM to output sensitive information, bypassing security measures and evading detection as it mimics a legitimate user prompt. - Sensitive information disclosure
The model accidentally releases confidential data in responses, such as user credentials or system details. - Data and model poisoning
Attackers manipulate the model’s training data to alter how the LLM functions and create security vulnerabilities. - Improper output handling
This can cause outputs to include biased or inaccurate information in responses. - Excessive agency
This occurs when LLMs have unrestricted access to perform high-risk tasks without proper security safeguards in place.
Because the use of AI is still in its infancy for many industries, there can be unpredictable and unintentional consequences to security and other areas of your organization, like reputation, human skillsets, regulatory challenges and ethical dilemmas. Be prepared to address the potential downsides to using AI.
Ethical Implications of AI
There are ethical issues at play with the use of AI concerning data privacy and confidentiality, compliance, transparency and bias. Protecting your organization’s sensitive information must be top-of-mind, as well as following the latest cyber security best practices to ensure your organization is protected and compliant with security standards.
Understanding AI, how it works and why it makes certain decisions and responses is also imperative for proper usage. Being aware of bias in AI is important as algorithms can develop biases through the machine learning process that threaten impartiality and fairness.
It is important to monitor the system’s training data for flaws that can unintentionally affect the responses and pose ethical dilemmas for your organization’s use of AI.
Implementing AI
When it comes to incorporating AI into your business, you should determine its business feasibility, implementation feasibility and data feasibility concerning your organization. Don’t try to fit a round peg in a square hole — implement AI where it makes sense, so you maximize its utility.
Proof of Concept vs. The Pilot
A proof of concept (PoC) demonstrates a concept in a test environment without having to actually be deployed throughout your organization. A pilot is a real-world project that uses emerging technologies, such as AI, in a protected, safe environment within your organization. When implementing AI, a PoC and a pilot can help ensure proper adoption of AI as you can evaluate if, where and when the use of AI is suitable.
In Real Life
AI is being deployed across industries — including the legal sector. LegalMotion used IBM’s Watson to build a tool that automates the drafting of early phase responses to complaints. This tool reduced the workload of new lawyers by around 80%, allowing them to focus on more strategic tasks.[2]
While there are risks associated with AI, it can also be used to mitigate risk and boost security. For example, NetSPI’s AI-driven security solutions, including continuous threat exposure management (CTEM), have been instrumental in protecting organizations from evolving cyber threats. Their approach unifies various proactive security solutions, offering a comprehensive view of vulnerabilities and response orchestration.[3]
Conclusion
As you embark on your AI journey, it’s important to consider how AI can best be incorporated into your workflows while remaining aware of the risks and consequences associated with it. While AI can present risks for your organization, it can also protect against risks. It’s in your hands, so make sure to use AI wisely.
Need Help?
Gross Mendelsohn’s Technology Solutions Group is here to assist you with your cyber security needs and how to utilize AI tools. Contact us here or call 410.685.5512 for help.
[1] https://blog.barracuda.com/2024/11/20/owasp-top-10-risks-large-language-models-2025-updates
[2] https://www.vktr.com/ai-disruption/5-ai-case-studies-in-law/