{"id":7547,"date":"2025-12-20T14:45:00","date_gmt":"2025-12-20T14:45:00","guid":{"rendered":"https:\/\/musictechohio.online\/site\/ai-cancer-diagnostic-bias\/"},"modified":"2025-12-20T14:45:00","modified_gmt":"2025-12-20T14:45:00","slug":"ai-cancer-diagnostic-bias","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/ai-cancer-diagnostic-bias\/","title":{"rendered":"Doctors Catch Cancer-Diagnosing AI Extracting Patients\u2019 Race Data and Being Racist With It"},"content":{"rendered":"<div>\n<p class=\"article-paragraph skip\">Just when you thought you\u2019d heard it all, AI systems designed to spot cancer have startled researchers with a baked-in penchant for racism.<\/p>\n<p class=\"article-paragraph skip\">The alarming findings were <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2666379125006007\" rel=\"nofollow\">published<\/a> in the journal <em>Cell Reports Medicine<\/em>, showing that four leading AI-enhanced pathology diagnostic systems differ in accuracy depending on patients\u2019 age, gender, and race \u2014 demographic data, disturbingly, that the AI is extracting directly from pathology slides, a feat that\u2019s impossible for human doctors.<\/p>\n<p class=\"article-paragraph skip\">To conduct the study, researchers at Harvard University combed through nearly 29,000 cancer pathology images from some 14,400 cancer patients. 
Their analysis found that the deep learning models exhibited alarming biases 29.3 percent of the time \u2014 on nearly a third of all the diagnostic tasks they were assigned, in other words.<\/p>\n<p class=\"article-paragraph skip\">\u201cWe found that because AI is so powerful, it can differentiate many obscure biological signals that cannot be detected by standard human evaluation,\u201d Harvard researcher Kun-Hsing Yu, a senior author of the study, said in a <a href=\"https:\/\/scitechdaily.com\/what-ai-learned-from-cancer-slides-shocked-researchers\/\" rel=\"nofollow\">press release<\/a>. \u201cReading demographics from a pathology slide is thought of as a \u2018mission impossible\u2019 for a human pathologist, so the bias in pathology AI was a surprise to us.\u201d<\/p>\n<p class=\"article-paragraph skip\">Yu said that these bias-based errors are the result of AI models relying on patterns linked to various demographics when analyzing cancer tissue. In other words, once the four AI tools locked onto a person\u2019s age, race, or gender, those factors would form the backbone of the tissue analysis. In effect, the AI would go on to replicate biases stemming from gaps in its training data.<\/p>\n<p class=\"article-paragraph skip\">The AI tools were able to identify samples taken specifically from Black people, to give a concrete example. These cancer slides, the authors wrote, contained higher counts of abnormal, neoplastic cells and lower counts of supportive elements than those from white patients, allowing the AI to sniff them out even though the samples were anonymized.<\/p>\n<p class=\"article-paragraph skip\">Then came the trouble. Once an AI pathology tool had identified a person\u2019s race, it leaned heavily on patterns associated with that identifier in its analysis. But because the models were trained mostly on data from white people, the tools struggled with patients who aren\u2019t as well represented. 
The AI models had trouble distinguishing subclasses of lung cancer cells in Black people, for example \u2014 not because there was a lack of lung cancer data to draw from overall, but because there was a lack of lung cancer data from <em>Black<\/em> patients specifically.<\/p>\n<p class=\"article-paragraph skip\">That was unexpected, Yu said in the <a href=\"https:\/\/scitechdaily.com\/what-ai-learned-from-cancer-slides-shocked-researchers\/\" rel=\"nofollow\">press release<\/a>, \u201cbecause we would expect pathology evaluation to be objective. When evaluating images, we don\u2019t necessarily need to know a patient\u2019s demographics to make a diagnosis.\u201d<\/p>\n<p class=\"article-paragraph skip\">Back in June, medical researchers <a href=\"https:\/\/www.nature.com\/articles\/s41746-025-01746-4\" rel=\"nofollow\">discovered a similar racial bias<\/a> in large language model (LLM) psychiatric diagnostic tools. In that case, results showed AI tools often proposed \u201cinferior treatment\u201d plans for Black patients whenever their race was explicitly known.<\/p>\n<p class=\"article-paragraph skip\">In the case of AI cancer-screening tools, the Harvard research team also developed a new AI-training approach called FAIR-Path. When this training framework was applied to the AI tools prior to analysis, the researchers found that it eliminated 88.5 percent of the performance disparities.<\/p>\n<p class=\"article-paragraph skip\">That there\u2019s a solution out there is good news, though the remaining 11.5 percent is nothing to sneeze at either. 
And until training frameworks like this are made mandatory across all AI tools in the pathology field, questions about these systems\u2019 inherent biases will remain.<\/p>\n<p class=\"article-paragraph skip\"><strong>More on cancer: <\/strong><em><a href=\"https:\/\/futurism.com\/artificial-intelligence\/amazon-data-center-oregon\">Amazon Data Center Linked to Cluster of Rare Cancers<\/a><\/em><\/p>\n<p>The post <a href=\"https:\/\/futurism.com\/health-medicine\/ai-cancer-diagnostic-bias\">Doctors Catch Cancer-Diagnosing AI Extracting Patients\u2019 Race Data and Being Racist With It<\/a> appeared first on <a href=\"https:\/\/futurism.com\/\">Futurism<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Just when you thought you\u2019d heard it all, AI systems designed to spot cancer have startled researchers with a baked-in penchant for racism. The alarming findings were published in 
the&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[177,4822,3841,3844,3955],"tags":[],"class_list":["post-7547","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-cancer","category-ethics","category-health-medicine","category-medical"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/7547","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=7547"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/7547\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=7547"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=7547"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=7547"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}