Nigeria’s tech regulator has raised an urgent cybersecurity alert over newly discovered vulnerabilities in ChatGPT that could expose users to data-leakage risks and other cyberattacks. The National Information Technology Development Agency issued the warning through its cybersecurity response arm, noting that researchers recently identified seven weaknesses affecting GPT-4o and GPT-5 models. According to the advisory, attackers can exploit these flaws through indirect prompt injection, a technique that hides malicious instructions inside webpages, comments or URLs that ChatGPT may later process during tasks such as browsing, summarisation or online research.
The agency explained that once ChatGPT interacts with such content, the model may unknowingly execute unintended commands, even without direct user engagement. Some of the weaknesses, it added, allow harmful content to bypass safety controls by disguising itself behind trusted domains. Others exploit markdown rendering bugs, allowing hidden instructions to slip through undetected. More concerning is the risk of memory poisoning, where malicious prompts force the system to retain harmful instructions that influence future conversations and responses.
Authorities warn that these exposures could lead to unauthorised model behaviour, leakage of sensitive information, manipulated outputs, and long-term changes in how the model responds to future tasks. CERRT.NG stressed that individuals and organisations could trigger an attack without clicking anything if ChatGPT summarises or scans web pages embedded with hidden instructions. The agency is urging Nigerians, businesses, and government institutions to minimise the use of browsing and memory features when dealing with untrusted sites, especially within official and enterprise environments. It also emphasised the need for continuous updates to GPT-4o and GPT-5 deployments to ensure new patches cover known vulnerabilities.
This development has important implications for MSMEs and digital entrepreneurs who increasingly rely on AI tools for content creation, customer engagement, automation and research. A manipulated output could mislead business decisions, while an information leak could compromise customer data or strategic plans. Small business owners using AI for financial records, marketing materials or internal documents are advised to apply extra caution, restrict ChatGPT from reading external links from unknown sources and avoid feeding confidential information into AI systems without risk evaluation.
NITDA’s latest notice follows an earlier public alert issued a few months ago concerning a critical security flaw affecting embedded SIM profiles in billions of devices globally. That vulnerability was linked to the GSMA TS 48 Generic Test Profile in versions up to 6.0, a standard used in eSIM technology. The agency warned at the time that attackers could plant malicious applets in exposed devices, extract cryptographic keys or clone eSIM profiles, potentially enabling message interception and persistent remote access.
Cybersecurity experts say the renewed advisory underscores the growing risks surrounding digital adoption and AI integration. With AI now powering research, commerce, administration and personal productivity, maintaining safe practices is becoming as important as the innovation itself. For Nigerian businesses, especially MSMEs scaling through digital tools, security vigilance is no longer optional.








