Unlocking AI’s Power In Cybersecurity
Every day, CISOs and their cybersecurity teams face the daunting task of making sense of massive amounts of data. For each of the hundreds of alerts they receive daily, they must rapidly determine whether it signals a larger coordinated attack and, if so, assess its scope and identify who or what might be vulnerable.
Generative AI, particularly large language models (LLMs), is inherently designed to understand complex natural language queries and produce coherent insights. Shouldn't we be leveraging these tools far more aggressively to tackle such challenges?
However, LLMs alone have limitations, especially in critical cybersecurity scenarios. For example, even advanced models like GPT-4 can only process a limited context window at a time - a fraction of what’s required to analyze vast volumes of logs, system settings, or identity networks effectively.
Beyond Glib ChatGPT Answers
Sure, “chunking” helps, but it doesn’t fix fragmented context. LLMs infer relevance from prompts but can focus on the wrong details. This is dangerous in security where nuance is critical. Plus, LLMs don’t truly understand how users, devices, and behaviors connect; they’re ultimately sophisticated autocorrect engines.
Fine-tuning a private LLM for your environment can help address these gaps, but it's costly, slow, and hard to maintain in fast-evolving threat landscapes. And since LLMs offer so much potential, it would be a mistake to dismiss them. So the question is: how can we harness their strengths for cybersecurity?
The answer lies in reframing the problem around relationships. Security isn’t just about isolated facts; it’s about how assets (including devices and people, often organized in hierarchies) are connected. In breach investigations, the critical insights usually come from tracing who connected where, when, and how those actions relate to other events.
That’s why graph database-based approaches are gaining traction. Graph databases organize data as nodes (entities) and edges (relationships), making them ideal for modeling and querying complex, interconnected information. By mapping relationships—between users, endpoints, malware, domains, and more—CISOs gain a unified view of how seemingly unrelated alerts may be part of the same attack chain. Without this connected context, LLMs alone can’t reliably answer critical security questions.
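To make the idea concrete, here is a minimal sketch of a security knowledge graph, using plain Python dictionaries in place of a real graph database. The entity names and the attack chain shown are hypothetical, but the point stands: a traversal over relationships links two alerts that look unrelated in a flat log.

```python
# Nodes: entities keyed by id, with a type property
nodes = {
    "alice": {"type": "user"},
    "laptop-7": {"type": "endpoint"},
    "evil.example.com": {"type": "domain"},
    "alert-101": {"type": "alert"},
    "alert-102": {"type": "alert"},
}

# Edges: (source, relationship, target) triples
edges = [
    ("alice", "LOGGED_IN_FROM", "laptop-7"),
    ("laptop-7", "CONNECTED_TO", "evil.example.com"),
    ("alert-101", "RAISED_ON", "laptop-7"),
    ("alert-102", "RAISED_ON", "evil.example.com"),
]

def neighbors(node):
    """All nodes reachable in one hop, in either direction."""
    out = {t for s, _, t in edges if s == node}
    out |= {s for s, _, t in edges if t == node}
    return out

def connected_alerts(alert):
    """Find other alerts reachable through shared entities - a crude way
    to spot seemingly unrelated alerts in the same attack chain."""
    seen, frontier = {alert}, {alert}
    while frontier:
        frontier = {n for f in frontier for n in neighbors(f)} - seen
        seen |= frontier
    return sorted(n for n in seen
                  if nodes[n]["type"] == "alert" and n != alert)

print(connected_alerts("alert-101"))  # → ['alert-102']
```

Both alerts sit on one chain (user, to endpoint, to malicious domain), which the traversal surfaces even though neither alert references the other.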
Practical Ways To Deliver Cyber ROI
You can’t simply drop an LLM into your security stack and expect results. Its real power emerges only when it can access a graph-based view of your environment, via a retrieval layer that feeds it dynamic graph data and prompts it to reason over real-time, structured security context. Increasingly, practitioners are finding that retrieval-augmented generation (RAG) applied to graph data - “GraphRAG” - is the key to unlocking the full potential of LLMs for cybersecurity.
GraphRAG works by injecting real-time, relevant context directly into AI prompts, making LLMs significantly more accurate and effective in cyber scenarios. Here's how it typically works:
1. Structure your data as a knowledge graph
Define nodes, relationships, and properties explicitly to avoid ambiguities that LLMs might struggle with.
2. Identify key data points
Use keyword, text, or vector search based on the query to find pivot points.
3. Expand the relevance
Apply community detection algorithms like Louvain or perform graph traversals to uncover deeper insights by analyzing related entities beyond the initial query.
4. Enrich the prompt
Append the relevant graph data to the user’s query.
5. Pipe it all to the LLM
Provide the enriched prompt to the LLM for a context-aware, accurate response.
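The five steps above can be sketched as a toy pipeline. This is a hedged illustration, not a product API: the graph, the keyword matcher standing in for vector search, and the stub llm() function are all hypothetical placeholders.

```python
GRAPH = {  # step 1: knowledge graph as adjacency lists
    "alice": ["laptop-7"],
    "laptop-7": ["evil.example.com"],
    "evil.example.com": [],
}

def find_pivots(query):
    """Step 2: naive keyword match standing in for vector search."""
    return [n for n in GRAPH if n in query]

def expand(pivots, hops=2):
    """Step 3: traverse out to `hops` hops from each pivot node."""
    context, frontier = set(pivots), set(pivots)
    for _ in range(hops):
        frontier = {m for n in frontier for m in GRAPH.get(n, [])} - context
        context |= frontier
    return sorted(context)

def enrich(query, context):
    """Step 4: append the relevant graph data to the user's query."""
    return f"{query}\n\nRelevant entities: {', '.join(context)}"

def llm(prompt):
    """Step 5: placeholder for a real model call."""
    return f"[LLM answer grounded in: {prompt.splitlines()[-1]}]"

question = "What did alice touch?"
prompt = enrich(question, expand(find_pivots(question)))
print(llm(prompt))
```

In a real deployment, steps 2 and 3 would run against a graph database and step 5 against an actual model endpoint; the shape of the flow - pivot, expand, enrich, generate - stays the same.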
The result: security responses grounded in your environment’s actual structure and behavior, not just patterns in language.
In other words, by combining the power of LLMs with the contextual richness of graph data, GraphRAG unlocks valuable applications for cybersecurity teams - making complex analysis more intuitive and the results more actionable.
A Chatbot That Knows How The Bad Actors Got In
Another major advantage of a chatbot-style interface is that it accelerates threat investigations and makes them more accessible across the security team. With graph technology and GraphRAG behind it, such an interface lets analysts query complex security graphs in plain language - automatically translating natural questions into graph queries and returning real-time, context-rich answers from the LLM.
Imagine starting an investigation by simply asking, “Has this user interacted with any flagged IPs in the last 48 hours?” A graph and GraphRAG-powered LLM can interpret and answer that instantly. This kind of direct querying also supports root cause analysis by tracing attacker activity, such as logins and lateral movement, so you can quickly understand how a threat originated and spread.
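As a rough illustration of the translation step, the question above might map onto a Cypher-style graph query. The query text and template-filling below are hypothetical - in a GraphRAG system an LLM, not a fixed template, would perform this translation - but they show the kind of output the model produces.

```python
from string import Template

# Illustrative Cypher-style query; labels, relationship names, and
# properties (User, CONNECTED_TO, flagged) are assumed, not a real schema.
CYPHER_TEMPLATE = Template(
    "MATCH (u:User {name: '$user'})-[c:CONNECTED_TO]->(ip:IP {flagged: true}) "
    "WHERE c.timestamp > datetime() - duration('PT${hours}H') "
    "RETURN ip.address, c.timestamp"
)

def to_graph_query(user, hours):
    """Fill the template for: 'Has <user> interacted with any flagged
    IPs in the last <hours> hours?'"""
    return CYPHER_TEMPLATE.substitute(user=user, hours=hours)

print(to_graph_query("alice", 48))
```

The analyst never sees this query; they ask the question in plain language and the system runs the generated traversal behind the scenes.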
Beyond investigations, graph and GraphRAG-enabled cyber response tools can generate clear, human-readable summaries.
For example, “This alert was triggered by credential theft from a phishing campaign, leading to unauthorized database access.” Such summaries are invaluable for incident documentation and communicating with stakeholders.
This type of LLM-powered AI can also identify and explain exploitable attack paths, helping analysts justify remediation efforts. Additionally, a graph and GraphRAG-powered LLM aids threat hunting by revealing connections between suspicious domains, users, IPs, and internal systems, drawing on both internal telemetry and external threat intelligence.
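Identifying an exploitable attack path can be framed as a shortest-path search over an asset graph, as in this standard-library sketch. The asset names and "attacker can move from X to Y" edges are hypothetical examples.

```python
from collections import deque

REACH = {  # directed "attacker can move from X to Y" edges (assumed)
    "internet": ["web-server"],
    "web-server": ["app-server"],
    "app-server": ["db-server", "file-share"],
    "file-share": [],
    "db-server": ["customer-data"],
    "customer-data": [],
}

def attack_path(src, dst):
    """Shortest attacker route from src to dst via BFS, or None."""
    parent, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:  # walk parents back to src
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in REACH.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

print(" -> ".join(attack_path("internet", "customer-data")))
# → internet -> web-server -> app-server -> db-server -> customer-data
```

An LLM layered on top can then explain this path in plain language, which is what makes the remediation case easy to justify to stakeholders.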
By bridging complex graph data with analyst-friendly language, GraphRAG enhances insight generation, risk communication, and response coordination, without users needing to learn graph query languages.
The next evolution goes beyond chatbots to fully autonomous AI agents. These agents will query graph databases in real time, run traversals, detect anomalies, and explain their findings. For example, some innovative banks are developing graph-powered AI agents that detect fraud patterns instantly.
Agentic Cyber AI: Already At The PoC Stage?
This is incredibly promising. By monitoring customer behavior, device usage, and transaction flows, the GraphRAG-powered LLM can automatically trigger responses to suspicious logins or account changes in real time, perform deep path traversals to flag privilege escalation routes, and more.
For example, a large retail bank customer is exploring graph algorithms like betweenness centrality and community detection to identify clusters, outliers, and anomalies. Essentially, this creates an AI fraud detection agent capable of deep path traversals, leveraging tools such as GraphRAG alongside user-defined graph analytics modules.
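To show what betweenness centrality surfaces in such a setting, here is a brute-force sketch on a toy transaction graph (a real deployment would use a graph database's analytics modules, not this). The account names are invented; "mule-1" is the bridge between two otherwise separate clusters.

```python
from itertools import combinations

EDGES = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster 1
         ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster 2
         ("a3", "mule-1"), ("mule-1", "b1")]         # the bridge

ADJ = {}
for u, v in EDGES:  # build an undirected adjacency map
    ADJ.setdefault(u, set()).add(v)
    ADJ.setdefault(v, set()).add(u)

def shortest_paths(s, t):
    """All shortest simple paths from s to t (brute-force DFS)."""
    best, out, stack = len(ADJ) + 1, [], [[s]]
    while stack:
        path = stack.pop()
        if path[-1] == t:
            if len(path) < best:
                best, out = len(path), [path]
            elif len(path) == best:
                out.append(path)
            continue
        if len(path) < best:
            stack.extend(path + [n] for n in ADJ[path[-1]] if n not in path)
    return out

def betweenness(node):
    """Fraction of shortest paths (over all pairs) passing through node."""
    score = 0.0
    for s, t in combinations(ADJ, 2):
        if node in (s, t):
            continue
        paths = shortest_paths(s, t)
        score += sum(node in p for p in paths) / len(paths)
    return score

print(max(ADJ, key=betweenness))  # → mule-1
```

Every cross-cluster path runs through the bridge account, so it dominates the centrality ranking - exactly the kind of outlier a fraud-detection agent would flag for investigation.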
The bank’s security team is confident this can be safely achieved at real-time run rates, bringing the power of AI even closer to practical, real-world cybersecurity use.
In summary, the future of AI in cybersecurity is bright, but only if developers move beyond the hype and focus on practical ways to make LLMs truly cyber-ready. At the core of this approach lie graphs and GraphRAG.
Dominik Tomicevic is CEO of Memgraph