DeepSeek - A Deep Dive Reveals More Than One Red Flag

Like many advanced AI-driven tools, the Chinese DeepSeek AI application offers genuine innovation. At the same time, it raises significant data privacy concerns because of the sensitive nature of the data it processes and the regulatory environment in which it operates.

The integration of large-scale data collection with advanced AI technologies, particularly in healthcare, surveillance and financial services, exacerbates these concerns.

The Australian government recently banned the DeepSeek AI app from government devices over privacy concerns, South Korea's intelligence agency has raised similar issues, and Italy's data protection regulator has blocked use of the app.

Among the risks and challenges associated with DeepSeek AI are: 

  • Excessive data collection is a critical issue. DeepSeek may collect vast amounts of personal data, including location, biometric, behavioural and sensitive health information, without transparent consent mechanisms. Users may not fully understand what data is being collected, how it is processed, or whether it is shared with third parties, raising ethical and regulatory red flags.
  • Data sharing and cross-border transfers are another primary security concern. Under Chinese data protection laws, such as the Personal Information Protection Law (PIPL), companies must adhere to strict data localisation rules. In global collaborations, however, DeepSeek data could be shared across borders, raising questions about compliance and exposing user data to jurisdictions with weaker privacy protections. Moreover, the Chinese government's regulatory requirements for data access could heighten surveillance risks, as sensitive user data could be accessed for state purposes, raising international concerns about civil liberties and the misuse of personal information.
  • Data security vulnerabilities add to the privacy risks. Advanced AI applications like DeepSeek rely on centralised or cloud-based architectures to process and analyse data, making them targets for cyberattacks.

A lack of robust encryption or security protocols could expose users to data breaches, identity theft or misuse of personal data.

Threat actors could exploit DeepSeek AI's algorithms through model inversion or data poisoning attacks, potentially exposing sensitive training or input data. This creates a dual threat: user privacy is compromised, and the reliability and trustworthiness of the AI's outputs are jeopardised.

  • A lack of user control over personal data is a recurring issue. DeepSeek users often have limited visibility into how their data is stored, processed or retained, and the potential for misuse of personal information once it enters the system is a significant concern. DeepSeek AI developers must prioritise compliance with privacy laws, enhance transparency and adopt privacy-by-design principles.

This includes implementing secure data storage practices, encrypting sensitive information and giving users greater control over their data to build trust and ensure ethical AI use. 
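To make "encrypting sensitive information" concrete, the following is a minimal sketch of field-level encryption before storage, using the widely available Python cryptography library (Fernet). The record fields and key handling are purely illustrative assumptions; a real deployment would source keys from a key-management service and define its own schema.

```python
from cryptography.fernet import Fernet

# Hypothetical illustration: encrypt a sensitive field before it is written
# to storage, so a database leak alone does not expose the plaintext.
# In practice the key would come from a key-management service, not the code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"user_id": "u-1029", "health_note": "hypertension follow-up"}

# Encrypt only the sensitive field; keep the identifier queryable.
record["health_note"] = fernet.encrypt(record["health_note"].encode()).decode()
print("stored record:", record)

# Decrypt on authorised read paths only.
plaintext = fernet.decrypt(record["health_note"].encode()).decode()
print("decrypted field:", plaintext)
```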

Understanding how attackers target applications like DeepSeek is not just important, it is urgent: it allows us to assess whether inherent security controls are present and to verify their robustness, an immediate task given the growth in cyber threats.

  • Attackers can exploit DeepSeek's AI models to infer sensitive training data through model inversion attacks. By analysing the model's outputs, they can reconstruct input data such as user profiles, health metrics or personal identifiers that were part of the training set (a minimal sketch of this technique follows this list). Such attacks can be executed without direct access to the training data, making them stealthy and dangerous.
  • Applications like DeepSeek that rely on continuous learning or model updates are vulnerable to data poisoning attacks. Attackers introduce malicious or biased data into the training pipeline by infiltrating external data sources or exploiting insufficient validation mechanisms, corrupting the model so that it generates harmful or inaccurate outputs (see the poisoning sketch after this list).
  • Adversarial attacks exploit weaknesses in the app's data processing by feeding it crafted inputs that deceive the model. In the case of DeepSeek AI, attackers could manipulate input data, such as images or text, with subtle perturbations that cause the model to misclassify or generate unintended results, for example altering an input just enough to bypass a security feature or produce misleading outputs (illustrated in the adversarial-input sketch below).
  • Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly exposed to supply chain attacks. In these attacks, adversaries exploit reliance on third-party dependencies, pre-trained models or public repositories, with potentially severe consequences. They may tamper with pre-trained models by embedding malicious code, backdoors or poisoned data, compromising downstream applications, or target the software supply chain by manipulating dependencies, libraries or scripts used during model training or deployment (see the integrity-check sketch below).

This can lead to systemic corruption of AI functionality. For example, malicious code disguised as a DeepSeek package was recently distributed to spread infections via the software supply chain.
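To illustrate the model inversion risk described in the first bullet above, here is a minimal, purely illustrative sketch: a toy softmax classifier stands in for a deployed model that an attacker can only query, and gradient ascent on the returned confidence reconstructs a class-representative input. The weights, dimensions and learning rate are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "model": a fixed softmax classifier the attacker can only query.
W, b = rng.normal(size=(8, 3)), np.zeros(3)

def predict(x):
    z = x @ W + b
    e = np.exp(z - z.max())
    return e / e.sum()

# Model inversion: start from noise and ascend the confidence of class 0,
# recovering an input the model strongly associates with that class.
x = rng.normal(size=8)
target, lr, eps = 0, 0.5, 1e-4
for _ in range(200):
    base = predict(x)[target]
    grad = np.zeros_like(x)
    for i in range(len(x)):              # numerical gradient of the target score
        xp = x.copy(); xp[i] += eps
        grad[i] = (predict(xp)[target] - base) / eps
    x += lr * grad                       # move the input toward higher confidence

print("reconstructed input:", np.round(x, 2))
print("model confidence for target class:", round(float(predict(x)[target]), 3))
```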
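Similarly, a hedged sketch of data poisoning, assuming an attacker can inject mislabelled records into one ingested data feed: a toy nearest-centroid classifier stands in for the training pipeline, and a modest number of crafted points is enough to drag the decision boundary and degrade accuracy on clean data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class training set standing in for data ingested by a training pipeline.
n = 500
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)
X_test = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)

def fit_and_score(X_train, y_train):
    # Nearest-centroid classifier: fit on (possibly poisoned) data, score on clean data.
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1) <
            np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return (pred == y_test).mean()

print("accuracy on clean data:   ", fit_and_score(X, y))

# Poisoning: an attacker controlling one external feed injects mislabelled points
# deep inside class 0's region but tagged as class 1, dragging the class-1
# centroid (and the decision boundary) toward the victim class.
X_poison = rng.normal(-3.0, 0.3, (300, 2))
X_bad = np.vstack([X, X_poison])
y_bad = np.concatenate([y, np.ones(300, dtype=int)])

print("accuracy after poisoning: ", fit_and_score(X_bad, y_bad))
```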
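The adversarial-input bullet can be illustrated with an equally small sketch, under the assumption of a linear scorer as a stand-in for the deployed model: a perturbation bounded per feature, chosen against the gradient (which for a linear model is simply the weight vector), flips the model's decision while leaving the input numerically close to the original.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a deployed classifier: a fixed linear scorer over 16 features.
# score > 0 means the input is accepted; score <= 0 means it is rejected.
w = rng.normal(size=16)

def score(x):
    return float(x @ w)

x = rng.normal(size=16)
if score(x) <= 0:
    x = -x                              # start from an input the model accepts
print("original score:", round(score(x), 3))

# FGSM-style perturbation: step each feature slightly against the gradient of the
# score. eps is the smallest uniform step (plus a 10% margin) that flips the sign.
eps = 1.1 * score(x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print("per-feature perturbation:", round(eps, 3))
print("adversarial score:", round(score(x_adv), 3))   # now negative: decision flipped
```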
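On the defensive side of the supply chain risk, a common mitigation is to pin and verify the digest of every model artifact and dependency before loading it. The sketch below is a minimal illustration using Python's standard hashlib; the "model file" and vendor-published digest are simulated with a temporary file.

```python
import hashlib, os, tempfile

def sha256_file(path: str) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_artifact(path: str, pinned_digest: str) -> bytes:
    """Refuse to load a model or dependency whose digest does not match the pin."""
    if sha256_file(path) != pinned_digest:
        raise RuntimeError(f"Integrity check failed for {path} - refusing to load")
    with open(path, "rb") as f:
        return f.read()

# Self-contained demo with a stand-in "pre-trained model" file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"pretend these bytes are a pre-trained model")
    path = tmp.name

pinned = sha256_file(path)            # in practice, published by the model vendor
print(len(load_artifact(path, pinned)), "bytes loaded (digest matched)")

with open(path, "ab") as f:           # simulate tampering in transit or on a mirror
    f.write(b"\x00backdoor")
try:
    load_artifact(path, pinned)
except RuntimeError as err:
    print(err)
finally:
    os.remove(path)
```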

  • DeepSeek AI apps rely on APIs and third-party integrations to function efficiently. Attackers can exploit insecure APIs to gain unauthorised access to user data or the app's backend systems. Additionally, if DeepSeek shares data with other applications or services, attackers can intercept or manipulate these exchanges, creating a broader attack surface.

Improperly secured API endpoints or insufficient authentication protocols can expose sensitive data to exploitation. Recently, attackers have exploited open reverse proxy (ORP) instances seeded with DeepSeek API keys, indicating widespread abuse. These compromised ORP servers provide access to critical LLM services, enabling unauthorised use of AI models while masking user identities, a practice known as LLM-jacking (a basic authentication sketch follows).
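A minimal defensive sketch against this class of API abuse, assuming a simple key-based scheme: keys are stored only as hashes, compared in constant time, and requests without a valid key are rejected. Endpoint wiring, key issuance and revocation are deliberately omitted, and the names below are hypothetical.

```python
import hashlib, hmac, secrets

def hash_key(api_key: str) -> str:
    # Never store raw API keys; persist only their hashes.
    return hashlib.sha256(api_key.encode()).hexdigest()

# Server-side store of hashed keys (would live in a database in practice).
issued_key = secrets.token_urlsafe(32)          # handed to one legitimate client
VALID_KEY_HASHES = {hash_key(issued_key)}

def authenticate(presented_key) -> bool:
    """Reject unauthenticated requests; compare digests in constant time."""
    if not presented_key:
        return False
    digest = hash_key(presented_key)
    return any(hmac.compare_digest(digest, h) for h in VALID_KEY_HASHES)

print(authenticate(issued_key))                 # True  - legitimate client
print(authenticate("stolen-or-guessed-key"))    # False - rejected
print(authenticate(None))                       # False - unauthenticated request
```

A leaked or proxied key can then be revoked simply by removing its hash from the store, limiting the window for LLM-jacking style abuse.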

  • Adversaries can launch DDoS attacks against the DeepSeek AI app, aiming to overwhelm its infrastructure, render the app unavailable to legitimate users and disrupt its services. Given the data-intensive nature of AI applications like DeepSeek, which rely on real-time data processing and model inference, attackers can exploit these resource demands by flooding the app with excessive fake requests or malicious traffic. This overloads servers, consuming bandwidth, computational power and storage capacity, and leads to crashes or delays.

Such attacks impact the app’s availability and create opportunities for secondary attacks, such as injecting malware or exploiting vulnerabilities during recovery efforts. 
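One standard first line of defence against this kind of request flooding is per-client rate limiting in front of the expensive inference path. The sketch below is a generic token-bucket limiter in Python, with illustrative rate, capacity and client identifier; it is not DeepSeek's actual infrastructure.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-client token bucket: each request costs one token; tokens
    refill at `rate` per second up to `capacity`. Requests beyond the budget
    are rejected cheaply before they reach costly model inference."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False

limiter = TokenBucket(rate=2.0, capacity=5.0)   # 2 requests/sec sustained, bursts of 5

# A flood of 20 back-to-back requests from one source: only the burst budget passes.
accepted = sum(limiter.allow("203.0.113.7") for _ in range(20))
print(f"accepted {accepted} of 20 flood requests")
```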

When AI applications and services like DeepSeek are attacked, data becomes a key target for exploitation. Data is the foundation of AI systems: it drives their functionality, accuracy and decision-making capabilities. Adversaries conduct data exfiltration because AI systems such as DeepSeek often process sensitive information, including customer data, proprietary models and real-time inputs.

Malicious actors may exploit vulnerabilities to extract this data, exposing organisations to privacy breaches, regulatory violations, and reputational damage.  

Professionals must understand these risks and take steps to mitigate them, as attacks targeting the data aspect of AI systems can have far-reaching consequences, including undermining the system's integrity, exposing sensitive information and corrupting the AI model's behaviour.  

Aditya K Sood is VP of Security Engineering and AI Strategy at Aryaka 
