{"id":2269,"date":"2025-06-04T13:04:26","date_gmt":"2025-06-04T13:04:26","guid":{"rendered":"https:\/\/musictechohio.online\/site\/open-source-ai-that-pays-for-itself-blocks-vision-for-ai-integration\/"},"modified":"2025-06-04T13:04:26","modified_gmt":"2025-06-04T13:04:26","slug":"open-source-ai-that-pays-for-itself-blocks-vision-for-ai-integration","status":"publish","type":"post","link":"https:\/\/musictechohio.online\/site\/open-source-ai-that-pays-for-itself-blocks-vision-for-ai-integration\/","title":{"rendered":"Open-source AI that pays for itself: Block\u2019s Vision for AI Integration"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400;\">Just when I think I\u2019ve grasped the full landscape of AI coding assistants, another compelling tool I\u2019d never encountered invariably surfaces. <\/span><a href=\"https:\/\/block.github.io\/goose\/\"><b>codename goose<\/b><\/a><span style=\"font-weight: 400;\"> (hereafter \u201cGoose\u201d), an open-source agent used weekly by 5,000 Block employees, shows what happens when you give an LLM a toolbox. Built at Block (formerly Square) and released under an MIT license in January 2025, Goose <\/span><b>runs locally<\/b><span style=\"font-weight: 400;\"> on engineers\u2019 machines and pairs large-language-model reasoning with a growing library of tool integrations to form a flexible automation platform. That blend turns the agent into a workhorse capable of tackling everything from autonomous code fixes to lightning-fast incident response, converting maintenance drudgery into review-only work. Thanks to the Model Context Protocol, Goose is a blank slate: its capabilities are defined entirely by whichever tools it plugs into, rather than by hard-coded specialties. 
The conversation that follows, heavily edited for clarity, is with <\/span><a href=\"https:\/\/www.linkedin.com\/in\/jbrosamer\/\"><span style=\"font-weight: 400;\">Jackie Brosamer<\/span><\/a><span style=\"font-weight: 400;\"> and <\/span><a href=\"https:\/\/www.linkedin.com\/in\/bradleyaxen\/\"><span style=\"font-weight: 400;\">Brad Axen<\/span><\/a><span style=\"font-weight: 400;\">, members of the team behind Goose; it unpacks how the project came together, what it means for enterprise automation, and where it is heading next.<\/span><\/p>\n<hr>\n<h4><b>Introduction &amp; Overview<\/b><\/h4>\n<p><b>What exactly is Goose?<\/b><span style=\"font-weight: 400;\"> Goose is an open-source, on-machine AI agent designed to automate complex engineering and knowledge-work tasks from start to finish. Developed by Block (formerly Square), it combines large language model reasoning with tool integrations to create a flexible automation platform. The project was publicly released in January 2025 under an MIT license after about nine months of internal development.<\/span><\/p>\n<p><b>Why did Block build yet another AI copilot?<\/b><span style=\"font-weight: 400;\"> The team initially set out to leverage LLMs for developer tasks, recognizing they had become genuinely useful tools for building code. However, they quickly realized the potential extended far beyond developer workflows. 
Because LLMs are general-purpose, an agent with the right tools can automate tasks for design teams, support agents, and many other roles. Goose was created to be this flexible agent platform capable of handling diverse automation needs across the organization.<\/span><\/p>\n<p><b>How widely is Goose used within Block?<\/b><span style=\"font-weight: 400;\"> Approximately 5,000 people at Block use Goose weekly, including both developers and non-developers. Block runs the same open source version internally (a practice called \u201cdogfooding\u201d), adding only proprietary authentication and security connectors required for their corporate environment.<\/span><\/p>\n<h4><b>Architecture &amp; Technical Integration<\/b><\/h4>\n<p><b>How central is the Model Context Protocol (MCP) to Goose?<\/b><span style=\"font-weight: 400;\"> While Goose predated MCP, the team quickly integrated it upon release. MCP is powerful because it provides a standard way for Goose to connect with any model (Anthropic, OpenAI, or open source options) and integrate with diverse data sources like GitHub, Slack, Google Calendar, and custom internal systems. Goose is essentially a \u201cblank slate\u201d until connected to tools via MCP \u2013 its capabilities depend entirely on these connections. This makes it highly customizable without changing code.<\/span><\/p>\n<p><b>What models work with Goose, and which are most popular?<\/b><span style=\"font-weight: 400;\"> Goose supports any LLM, and users can hot-swap models mid-conversation. Currently, users gravitate toward \u201cfrontier models\u201d \u2013 the latest and most capable options. The Anthropic Sonnet family and OpenAI\u2019s reasoner models see the most use. Interestingly, users often employ different models for different tasks within the same conversation: one model for planning\/design, then switching to Sonnet for execution. Gemini 2.5 Pro with its 1-million token context window handles tasks requiring large amounts of content. 
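As a loose illustration of what hot-swapping models mid-conversation involves, here is a minimal Python sketch. The Session class, method names, and model names are hypothetical stand-ins, not Goose's actual internals; the point is that conversation history persists while the backend serving the next turn changes.

```python
# Hypothetical sketch of mid-conversation model swapping.
# Model names and the Session API are illustrative, not Goose's real code.
from dataclasses import dataclass, field

@dataclass
class Session:
    model: str                       # model serving the next turn; swappable any time
    history: list = field(default_factory=list)

    def swap_model(self, model: str) -> None:
        # History is preserved; only the backend changes.
        self.model = model

    def send(self, prompt: str) -> str:
        self.history.append(("user", prompt))
        reply = f"[{self.model}] response to: {prompt}"  # stand-in for a real API call
        self.history.append(("assistant", reply))
        return reply

session = Session(model="planner-model")
session.send("Draft a migration plan")   # plan with a reasoning-oriented model
session.swap_model("executor-model")
session.send("Now implement step 1")     # execute with a code-oriented model
```

A real implementation would route `send` through a provider-specific client, but the shape of the swap is the same: shared history, interchangeable backends.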
The proactiveness of the Sonnet family is a key reason for its popularity.<\/span><\/p>\n<p><img data-recalc-dims=\"1\" fetchpriority=\"high\" decoding=\"async\" data-attachment-id=\"45836\" data-permalink=\"https:\/\/gradientflow.com\/can-a-single-agent-automate-90-of-your-code-fixes-block-thinks-so\/goose-architecture\/\" data-orig-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?fit=2208%2C1006&amp;ssl=1\" data-orig-size=\"2208,1006\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Goose Architecture\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?fit=300%2C137&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?fit=750%2C342&amp;ssl=1\" class=\"aligncenter wp-image-45836\" src=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=750%2C342&amp;ssl=1\" alt=\"\" width=\"750\" height=\"342\" srcset=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?w=2208&amp;ssl=1 2208w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=300%2C137&amp;ssl=1 300w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=1024%2C467&amp;ssl=1 1024w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=768%2C350&amp;ssl=1 768w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=1536%2C700&amp;ssl=1 1536w, 
https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=2048%2C933&amp;ssl=1 2048w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-Architecture.png?resize=1568%2C714&amp;ssl=1 1568w\" sizes=\"(max-width: 750px) 100vw, 750px\"><\/p>\n<p><b>How does Goose handle context window limitations?<\/b><span style=\"font-weight: 400;\"> The team considers this a fundamental challenge requiring multiple strategies:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Smart summarization over long context windows<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Selective context retrieval using RAG to identify which tools are relevant for specific queries<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Enabling the agent to navigate information through tool-calling (like using a knowledge graph)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Multi-turn searching that outperforms simple semantic search<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Having the agent iteratively search codebases using tools like ripgrep rather than dumping all results into the prompt<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The goal is feeding the right tokens into the context window, not all available tokens.<\/span><\/p>\n<p><b>Can Goose work with local models?<\/b><span style=\"font-weight: 400;\"> Yes, hobbyists in the open source community are seeing success running Goose with local models like Llama. 
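The ripgrep-style iterative search listed among those context strategies can be sketched roughly as follows. This pure-Python scan is a stand-in for shelling out to ripgrep, and the function name is illustrative; the essential idea is that only a handful of matching lines, not the whole repository, ever reach the context window.

```python
# Simplified stand-in for agent-driven codebase search: run narrow regex
# queries and feed back only the hits, rather than dumping files into the prompt.
import os
import re

def search_repo(root, pattern, max_hits=20):
    """ripgrep-style scan: return (path, line_no, line) for matching lines."""
    rx = re.compile(pattern)
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):   # crude file-type filter
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for no, line in enumerate(f, 1):
                        if rx.search(line):
                            hits.append((path, no, line.rstrip()))
                            if len(hits) >= max_hits:
                                return hits
            except OSError:
                continue
    return hits

# An agent loop would iterate: broad query -> inspect hits -> refine pattern ->
# repeat, keeping the context window small at every step.
```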
While local models don\u2019t yet solve all enterprise-scale coding problems, they\u2019re becoming increasingly capable for many practical tasks.<\/span><\/p>\n<h4><b>Developer Experience &amp; Workflow<\/b><\/h4>\n<p><b>How are new engineers onboarded to Goose?<\/b><span style=\"font-weight: 400;\"> Goose is auto-installed on every new Block laptop. New users typically start with the chat interface, which works like any standard chat when the agent isn\u2019t using tools. As users interact, Goose proactively suggests available tools and capabilities, naturally guiding them into its full feature set. For instance, when discussing a codebase, Goose might ask if the user wants it to attempt changes, dynamically evolving the interaction.<\/span><\/p>\n<p><b>How do developers typically use Goose in their workflow?<\/b><span style=\"font-weight: 400;\"> Usage patterns vary by task type:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">For one-off tasks like dashboards or data visualizations, engineers often let Goose generate everything (\u201cvibe coding\u201d)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">For maintained codebases, a common pattern is ~90% AI-generated code with engineers handling the final 10% for quality assurance<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">For longer-running processes, developers ask Goose to work on the side, then return to their IDE to review diffs and make minor tweaks<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Notebook users can chat with Goose while it fills in cells for model training or data analysis<\/span><\/li>\n<\/ul>\n<p><b>Can engineers still use other AI assistants alongside Goose?<\/b><span style=\"font-weight: 400;\"> Block maintains a liberal \u201cbring your own assistant\u201d policy, recognizing that 
different tools suit different problems. Complex codebases might work better with autocomplete-style tools like Cursor, while Goose excels at volume fixes and smaller codebases. This experimentation helps the organization learn which patterns map to which assistants.<\/span><\/p>\n<h4><b>Practical Applications &amp; Use Cases<\/b><\/h4>\n<p><b>What are \u201crecipes\u201d and why are they important?<\/b><span style=\"font-weight: 400;\"> Recipes are asynchronous, trigger-driven workflows that Goose can execute autonomously. For example, Goose can monitor GitHub issues and automatically attempt fixes, or address security vulnerabilities. The human still reviews and gets credit for the PR, but the agent handles the initial autonomous work. This transforms maintenance drudgery into review-only work.<\/span><\/p>\n<p><b>What specific tasks can Goose automate end-to-end?<\/b><span style=\"font-weight: 400;\"> Goose handles a wide range of tasks with varying levels of autonomy:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Fully autonomous: Small tasks like handling vulnerability tickets<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Semi-autonomous: Automating parts of model training, generating feature definitions in complex Java systems, documenting sensitive models<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Human-partnered: Design critiques, SRE support during incidents, creating websites, generating content in Google Docs or Slack<\/span><\/li>\n<\/ul>\n<p><b>How does Goose handle hallucinations?<\/b><span style=\"font-weight: 400;\"> Hallucinations still occur, such as suggesting non-existent libraries or methods. The most successful users are quick to \u201csavagely discard\u201d unproductive sessions rather than trying to correct the agent. 
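That discard-and-restart discipline pairs naturally with objective validation. A minimal sketch, where generate and validate are placeholder functions standing in for a fresh agent session and a real check (running the code, executing tests):

```python
# Sketch of "savagely discard" in practice: validate each attempt objectively
# and start a fresh session on failure rather than arguing the agent out of a
# hallucination. generate/validate are illustrative stand-ins.
def attempt_until_valid(generate, validate, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        candidate = generate()          # fresh session: no contaminated context
        if validate(candidate):
            return candidate, attempt
    raise RuntimeError(f"no valid result in {max_attempts} attempts")

# Toy example: the first attempt hallucinates a library; the validator
# (standing in for "does it run / do tests pass?") rejects it.
outputs = iter(["import nonexistent_lib", "print('ok')", "print('ok')"])
result, tries = attempt_until_valid(
    generate=lambda: next(outputs),
    validate=lambda code: "nonexistent_lib" not in code,
)
# result == "print('ok')", reached on the second attempt
```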
Since Goose generates code quickly, starting over multiple times is still faster than manual coding. The key is focusing on tasks with objective validation\u2014code that runs, passes tests, or SQL queries that can be explained\u2014which makes hallucinations easier to catch.<\/span><\/p>\n<p><b>How is Goose being used for incident response?<\/b><span style=\"font-weight: 400;\"> Goose significantly reduces time to recovery (TTR) by processing volumes of data that would overwhelm humans. When a service is down, multiple LLM instances can read the last hour of system logs in parallel and surface insights to humans. This parallel processing of logs and system data can potentially make incident recovery 100x faster, compared to the more modest speedups seen in code generation.<\/span><\/p>\n<h4><b>Security &amp; Governance<\/b><\/h4>\n<p><b>What are the security considerations with MCP servers?<\/b><span style=\"font-weight: 400;\"> The team acknowledges the current \u201cwild west\u201d landscape of MCP servers and recommends users perform due diligence on third-party MCPs. Block sits on the MCP steering committee with Anthropic, working to improve the ecosystem through:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Adding human confirmation flags for dangerous actions<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Creating a curated registry (like PyPI for MCP) with proper vetting processes<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Evolving the protocol to address issues like long-lived connections through features like streamable HTTP components<\/span><\/li>\n<\/ul>\n<p><b>How does working in financial services affect AI tool adoption?<\/b><span style=\"font-weight: 400;\"> Contrary to expectations, regulatory controls are actually an asset when automating with AI. 
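The parallel log triage described in the incident-response answer above can be sketched with a thread pool. Here summarize_chunk is a trivial keyword filter standing in for a per-chunk LLM call; in production each chunk would go to a model and the findings would be surfaced to the on-call human.

```python
# Sketch: fan the last hour of logs out to many workers in parallel
# (in production, many LLM calls), then surface only suspicious findings.
from concurrent.futures import ThreadPoolExecutor

def summarize_chunk(chunk):
    # Stand-in for an LLM call: flag error-ish lines in this slice of the logs.
    return [line for line in chunk if "ERROR" in line or "timeout" in line]

def triage(log_lines, chunk_size=1000):
    chunks = [log_lines[i:i + chunk_size]
              for i in range(0, len(log_lines), chunk_size)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        # map preserves chunk order, so findings come back in log order
        findings = list(pool.map(summarize_chunk, chunks))
    return [f for chunk_findings in findings for f in chunk_findings]
```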
Non-engineering teams handling compliance work are enthusiastic about automating responses to forms and regulatory requirements. The existing constraints help ensure proper validation as more processes become automated, and code validation is often easier than validating outputs from, say, an executive assistant.<\/span><\/p>\n<p><img loading=\"lazy\" data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"45837\" data-permalink=\"https:\/\/gradientflow.com\/can-a-single-agent-automate-90-of-your-code-fixes-block-thinks-so\/goose-art\/\" data-orig-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?fit=1536%2C1024&amp;ssl=1\" data-orig-size=\"1536,1024\" data-comments-opened=\"0\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Goose \u2013 art\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?fit=300%2C200&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?fit=750%2C500&amp;ssl=1\" class=\"aligncenter wp-image-45837\" src=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?resize=626%2C417&amp;ssl=1\" alt=\"\" width=\"626\" height=\"417\" srcset=\"https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?w=1536&amp;ssl=1 1536w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?resize=300%2C200&amp;ssl=1 300w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?resize=1024%2C683&amp;ssl=1 1024w, https:\/\/i0.wp.com\/gradientflow.com\/wp-content\/uploads\/2025\/06\/Goose-art.png?resize=768%2C512&amp;ssl=1 768w\" sizes=\"auto, 
(max-width: 626px) 100vw, 626px\"><\/p>\n<h4><b>Multi-Agent Systems &amp; Architecture<\/b><\/h4>\n<p><b>Does Goose support multi-agent architectures?<\/b><span style=\"font-weight: 400;\"> Currently, the team focuses on practical patterns rather than complex multi-agent abstractions. They\u2019re implementing:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Multiple agents running in parallel for volume work (like handling support tickets)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Quick task retries for multiple attempts<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Single-model context summarization, which currently outperforms splitting context between multiple models<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">They expect this may change as foundation models evolve, but specialized agent-to-agent communication protocols need more real-world production use cases before adding significant value.<\/span><\/p>\n<p><b>How do reasoning models factor into Goose\u2019s capabilities?<\/b><span style=\"font-weight: 400;\"> Block deliberately uses the most capable (and currently most expensive) reasoning models to prove what\u2019s possible, betting that costs will continue dropping dramatically. Reasoner models excel at tasks requiring precise instruction following, such as generating rich UI elements with exact specifications. 
The team mixes expensive reasoner models for critical tasks with cheaper models for bulk work.<\/span><\/p>\n<h4><b>Future Directions &amp; Industry Impact<\/b><\/h4>\n<p><b>What\u2019s on the near-term roadmap?<\/b><span style=\"font-weight: 400;\"> Key priorities for the next 6-12 months include:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A complete UX redesign moving away from \u201cdesign by engineers\u201d to create a more intuitive interface for AI agent interaction<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Reducing context switching so users can stay within the tool<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Productizing Goose-style agents for Block\u2019s customer-facing products to help small businesses with tasks like financial management<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Implementing RAG and knowledge graphs to help agents select from the 40-50 MCP servers (with hundreds of tools) typically connected to a session<\/span><\/li>\n<\/ul>\n<p><b>How is Goose being adopted outside Block?<\/b><span style=\"font-weight: 400;\"> Partner companies are contributing to the codebase and using it internally. An interesting pattern emerging is data teams using Goose to replace traditional dashboards with conversational, agent-driven insights. Engineers work with data engineers to make Goose productive for answering questions, bridging the engineering experience to other roles like PMs and marketers.<\/span><\/p>\n<p><b>How will tools like Goose affect engineering hiring and skills?<\/b><span style=\"font-weight: 400;\"> The team sees this as augmentation rather than replacement, moving engineers up levels of abstraction. 
For interviews, they plan to give candidates harder problems but allow them to use LLMs, recognizing that the real differentiator is how well someone leverages these tools. The emphasized skills are changing; it\u2019s about giving engineers more leverage, not eliminating engineering work.<\/span><\/p>\n<p>The post <a href=\"https:\/\/gradientflow.com\/open-source-ai-that-pays-for-itself-blocks-vision-for-ai-integration\/\">Open-source AI that pays for itself: Block\u2019s Vision for AI Integration<\/a> appeared first on <a href=\"https:\/\/gradientflow.com\/\">Gradient Flow<\/a>.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Just when I think I\u2019ve grasped the full landscape of AI coding assistants, another compelling tool I\u2019d never encountered invariably surfaces. 
codename goose (hereafter \u201cGoose\u201d), an open-source agent used weekly&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-2269","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/2269","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/comments?post=2269"}],"version-history":[{"count":0,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/posts\/2269\/revisions"}],"wp:attachment":[{"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/media?parent=2269"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/categories?post=2269"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/musictechohio.online\/site\/wp-json\/wp\/v2\/tags?post=2269"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}