{"id":3726,"date":"2025-07-16T16:15:31","date_gmt":"2025-07-16T16:15:31","guid":{"rendered":"https:\/\/musictechohio.online\/site\/top-ai-researchers-concerned\/"},"modified":"2025-07-16T16:15:31","modified_gmt":"2025-07-16T16:15:31","slug":"top-ai-researchers-concerned","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/top-ai-researchers-concerned\/","title":{"rendered":"Top AI Researchers Concerned They\u2019re Losing the Ability to Understand What They\u2019ve Created"},"content":{"rendered":"<div>\n<div><img loading=\"lazy\" width=\"2400\" height=\"1260\" src=\"https:\/\/wordpress-assets.futurism.com\/2025\/07\/top-ai-researchers-concerned.jpg\" class=\"attachment-full size-full wp-post-image\" alt=\"Researchers from Google, OpenAI, DeepMind, and Meta have joined forces to warn about the cognitive powers of what they're building.\" style=\"margin-bottom: 15px;\" decoding=\"async\"><\/div>\n<p>Researchers from OpenAI, Google DeepMind, Anthropic, and Meta have joined forces to warn about what they&#8217;re building.<\/p>\n<p>In a <a href=\"https:\/\/tomekkorbak.com\/cot-monitorability-is-a-fragile-opportunity\/cot_monitoring.pdf\">new position paper<\/a>, 40 researchers spread across those four companies called for more investigation of AI powered by so-called &#8220;chains-of-thought&#8221; (CoT), the &#8220;thinking out loud&#8221; process that advanced &#8220;reasoning&#8221; models \u2014 the current vanguard of consumer-facing AI \u2014 use when they&#8217;re working through a query.<\/p>\n<p>As those researchers acknowledge, CoTs add a degree of transparency to the inner workings of AI, allowing users to spot an &#8220;intent to misbehave&#8221; or catch mistakes as they happen.
Still, there is &#8220;no guarantee that the current degree of visibility will persist,&#8221; especially as models continue to advance.<\/p>\n<p>The paper suggests that, depending on how they&#8217;re trained, advanced models may no longer &#8220;need to verbalize any of their thoughts, and would thus lose the safety advantages.&#8221; There&#8217;s also the non-zero chance that models could intentionally &#8220;obfuscate&#8221; their CoTs after realizing that they&#8217;re being watched, the researchers noted \u2014 and as we&#8217;ve already seen, AI has indeed rapidly become <a href=\"https:\/\/futurism.com\/ai-godfather-lying-deception\">very good at lying and deception<\/a>.<\/p>\n<p>To make sure this valuable visibility continues, the cross-company consortium is calling on developers to start figuring out what makes CoTs &#8220;monitorable,&#8221; or what makes the models think out loud the way they do. In making that request, those same researchers seem to be admitting something stark: that nobody is entirely sure why the models are &#8220;thinking&#8221; this way, or how long they will continue to do so.<\/p>\n<p>Zooming out from the technical details, it&#8217;s worth taking a moment to consider how strange this situation is.
Top researchers in an emerging field are warning that they don&#8217;t quite understand how their creation works, and that they lack confidence in their ability to control it going forward, even as they forge ahead making it stronger. There&#8217;s no clear precedent for that in the history of innovation, even looking back to civilization-shifting inventions like atomic energy and the combustion engine.<\/p>\n<p>In an <a href=\"https:\/\/techcrunch.com\/2025\/07\/15\/research-leaders-urge-tech-industry-to-monitor-ais-thoughts\/\">interview with\u00a0<em>TechCrunch<\/em><\/a> about the paper, OpenAI research scientist and paper coauthor Bowen Baker explained how he sees the situation.<\/p>\n<p>&#8220;We&#8217;re at this critical time where we have this new chain-of-thought thing,&#8221; Baker told the website. &#8220;It seems pretty useful, but it could go away in a few years if people don\u2019t really concentrate on it.&#8221;<\/p>\n<p>&#8220;Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic,&#8221; he continued, &#8220;before that happens.&#8221;<\/p>\n<p>Once again, there appears to be tacit acknowledgement of AI&#8217;s &#8220;black box&#8221; nature \u2014 and to be fair, even CEOs like <a href=\"https:\/\/futurism.com\/sam-altman-admits-openai-understand-ai\">OpenAI&#8217;s Sam Altman<\/a> and <a href=\"https:\/\/futurism.com\/anthropic-ceo-admits-ai-ignorance\">Anthropic&#8217;s Dario Amodei<\/a> have admitted that at a deep level, they don&#8217;t <em>really<\/em> understand <a href=\"https:\/\/www.technologyreview.com\/2024\/03\/05\/1089449\/nobody-knows-how-ai-works\/\">how the technology they&#8217;re building works<\/a>.<\/p>\n<p>Along with its 40-researcher author list that includes DeepMind cofounder <a href=\"https:\/\/futurism.com\/google-deepmind-agi-5-years\">Shane Legg<\/a> and xAI safety advisor <a href=\"https:\/\/fortune.com\/2024\/11\/13\/scale-ai-dan-hendrycks-elon-musk-xai-safety-trump-ties\/\">Dan 
Hendrycks<\/a>, the paper has drawn endorsements from industry luminaries including former OpenAI chief scientist <a href=\"https:\/\/futurism.com\/the-byte\/openai-scientists-agi-bunker\">Ilya Sutskever<\/a> and AI godfather and Nobel laureate <a href=\"https:\/\/futurism.com\/nobel-prize-winner-ai-godfather\">Geoffrey Hinton<\/a>.<\/p>\n<p>Though Elon Musk&#8217;s name doesn&#8217;t appear on the paper, with Hendrycks on board, all\u00a0of the &#8220;Big Five&#8221; firms \u2014 OpenAI, Google, Anthropic, Meta, and xAI \u2014 have been brought together to warn about what might happen if and when AI stops showing its work.<\/p>\n<p>In doing so, that powerful cabal has said the quiet part out loud: that they don&#8217;t feel entirely in control of AI&#8217;s future. For companies with untold billions between them, that&#8217;s a pretty strange message to market \u2014 which makes the paper all the more remarkable.<\/p>\n<p><strong>More on AI warnings:<\/strong> <a href=\"https:\/\/futurism.com\/bernie-sanders-ai-warning\"><em>Bernie Sanders Issues Warning About How AI Is Really Being Used<\/em><\/a><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/top-ai-researchers-concerned\">Top AI Researchers Concerned They\u2019re Losing the Ability to Understand What They\u2019ve Created<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>\n<div style=\"margin-top: 0px; margin-bottom: 0px;\" class=\"sharethis-inline-share-buttons\" ><\/div>","protected":false},"excerpt":{"rendered":"<p>Researchers from OpenAI, Google DeepMind, Anthropic, and Meta have joined forces to warn about what they&#8217;re building. 
In a new position paper, 40 researchers spread across those four companies called for&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,2701,2702,2703,1514],"tags":[],"class_list":["post-3726","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-black-box","category-chains-of-thought","category-frontier-models","category-reasoning-models"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/3726","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=3726"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/3726\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=3726"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=3726"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=3726"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}