{"id":6288,"date":"2025-10-28T17:33:45","date_gmt":"2025-10-28T17:33:45","guid":{"rendered":"https:\/\/musictechohio.online\/site\/serious-new-hack-openai-ai-browser\/"},"modified":"2025-10-28T17:33:45","modified_gmt":"2025-10-28T17:33:45","slug":"serious-new-hack-openai-ai-browser","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/serious-new-hack-openai-ai-browser\/","title":{"rendered":"Serious New Hack Discovered Against OpenAI\u2019s New AI Browser"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">It didn\u2019t take long for cybersecurity researchers to notice some glaring issues with <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-atlas-web-browser-messy\">OpenAI\u2019s recently unveiled AI browser Atlas<\/a>.<\/p>\n<p class=\"article-paragraph skip\">The browser, which puts OpenAI\u2019s blockbuster ChatGPT front and center, features an \u201cagent mode\u201d \u2014 currently limited to paying subscribers \u2014 that allows it to complete entire tasks, such as booking a flight or purchasing groceries.<\/p>\n<p class=\"article-paragraph skip\">However, that makes the browser vulnerable to \u201cprompt injection\u201d attacks, allowing hackers to embed hidden messages on the web that force it to carry out harmful instructions, as several researchers have already shown. 
For instance, one researcher <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-browser-victim-prompt-injection-attacks\">tricked the browser<\/a> into spitting out the words \u201cTrust No AI\u201d instead of generating a summary of a document in Google Docs, as prompted.<\/p>\n<p class=\"article-paragraph skip\">Now, researchers at the AI agent security firm NeuralTrust have found that Atlas\u2019s \u201cOmnibox,\u201d the text box at the top of the browser that can accept either URLs or natural language prompts, is also vulnerable to prompt injection attacks.<\/p>\n<p class=\"article-paragraph skip\">Unlike previously demonstrated \u201cindirect\u201d prompt injection attacks that embed instructions in webpages, this particular exploit requires the user to copy and paste a poisoned URL into the omnibox \u2014 just like you\u2019ve probably done with countless web addresses.<\/p>\n<p class=\"article-paragraph skip\">\u201cWe\u2019ve identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust \u2018user intent\u2019 text, enabling harmful actions,\u201d NeuralTrust software engineer Mart\u00ed Jord\u00e0 wrote in a <a href=\"https:\/\/neuraltrust.ai\/blog\/openai-atlas-omnibox-prompt-injection\" rel=\"nofollow\">recent blog post<\/a>, as <a href=\"https:\/\/www.theregister.com\/2025\/10\/27\/openai_atlas_prompt_injection\/\" rel=\"nofollow\">spotted by <em>The Register<\/em><\/a>.<\/p>\n<p class=\"article-paragraph skip\">If the URL is slightly malformed, the browser fails to validate it as a web address and instead \u201ctreats the entire content as a prompt.\u201d That makes a disguised URL a perfect place to embed harmful messages.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe embedded instructions are now interpreted as trusted user intent with fewer safety checks,\u201d Jord\u00e0 wrote. \u201cThe agent executes the injected instructions with elevated trust. 
For example, \u2018follow these instructions only\u2019 and \u2018visit neuraltrust.ai\u2019 can override the user\u2019s intent or safety policies.\u201d<\/p>\n<p class=\"article-paragraph skip\">The vulnerability could even be used to make Atlas\u2019s agent navigate to the user\u2019s Google Drive and mass-delete files, since the user is already running an authenticated session.<\/p>\n<p class=\"article-paragraph skip\">\u201cWhen powerful actions are granted based on ambiguous parsing, ordinary-looking inputs become jailbreaks,\u201d Jord\u00e0 wrote.<\/p>\n<p class=\"article-paragraph skip\">In response, NeuralTrust recommends that OpenAI\u2019s browser parse URLs far more strictly, and in case of \u201cany ambiguity, refuse navigation and do not auto-fallback to prompt mode.\u201d<\/p>\n<p class=\"article-paragraph skip\">As browser company Brave <a href=\"https:\/\/brave.com\/blog\/unseeable-prompt-injections\/\" rel=\"nofollow\">pointed out last week<\/a>, indirect prompt injection attacks have become a problem for the \u201centire category of AI-powered browsers,\u201d <a href=\"https:\/\/futurism.com\/artificial-intelligence\/researchers-severe-vulnerabilities-ai-browser-comet\">including Perplexity\u2019s Comet browser<\/a>.<\/p>\n<p class=\"article-paragraph skip\">\u201cIf you\u2019re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data,\u201d Brave wrote at the time.<\/p>\n<p class=\"article-paragraph skip\">In a <a href=\"https:\/\/x.com\/cryps1s\/status\/1981037851279278414\" rel=\"nofollow\">lengthy update on X-formerly-Twitter<\/a> last week, OpenAI\u2019s chief information security officer Dane Stuckey conceded that \u201cprompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these 
attacks.\u201d<\/p>\n<p class=\"article-paragraph skip\">OpenAI didn\u2019t respond to <em>The Register<\/em>\u2019s request for comment regarding NeuralTrust\u2019s latest findings.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on Atlas:<\/strong> <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-browser-victim-prompt-injection-attacks\"><em>OpenAI\u2019s New AI Browser Is Already Falling Victim to Prompt Injection Attacks<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/artificial-intelligence\/serious-new-hack-openai-ai-browser\">Serious New Hack Discovered Against OpenAI\u2019s New AI Browser<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>It didn\u2019t take long for cybersecurity researchers to notice some glaring issues with OpenAI\u2019s recently unveiled AI browser Atlas. 
The browser, which puts OpenAI\u2019s blockbuster ChatGPT front and center, features&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,1177,4260,3842,3928,179],"tags":[],"class_list":["post-6288","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-cybersecurity","category-data-privacy","category-future-society","category-hacking","category-openai"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6288","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=6288"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/6288\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=6288"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=6288"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=6288"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}