FAANGineering - BlogFlock 2025-06-27T04:22:56.408Z
BlogFlock sources: Google Developers Blog, The GitHub Blog, Nextdoor Engineering - Medium, Engineering at Meta, Netflix TechBlog - Medium, Etsy Engineering | Code as Craft

Introducing Gemma 3n: The developer guide - Google Developers Blog
https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/
2025-06-26T17:27:41.000Z
The Gemma 3n model has been fully released, building on the success of previous Gemma models and bringing advanced on-device multimodal capabilities to edge devices with unprecedented performance. Explore Gemma 3n's innovations, including its mobile-first architecture, MatFormer technology, Per-Layer Embeddings, KV Cache Sharing, and new audio and MobileNet-V5 vision encoders, and how developers can start building with it today.

Unlock deeper insights with the new Python client library for Data Commons - Google Developers Blog
https://developers.googleblog.com/en/pythondatacommons/
2025-06-26T16:22:41.000Z
Google has released a new Python client library for Data Commons, an open-source knowledge graph that unifies public statistical data. Developed with contributions from The ONE Campaign, the library improves how data developers can leverage Data Commons, offering better features, support for custom instances, and easier access to a vast array of statistical variables.

Simulating a neural operating system with Gemini 2.5 Flash-Lite - Google Developers Blog
https://developers.googleblog.com/en/simulating-a-neural-operating-system-with-gemini-2-5-flash-lite/
2025-06-25T18:42:41.000Z
A research prototype simulating a neural operating system generates UI in real time, adapting to user interactions with Gemini 2.5 Flash-Lite. It uses interaction tracing for contextual awareness, streams the UI for responsiveness, and achieves statefulness with an in-memory UI graph.

From pair to peer programmer: Our vision for agentic workflows in GitHub Copilot - The GitHub Blog
https://github.blog/?p=88977
2025-06-25T16:00:00.000Z

Software development has always been a deeply human, collaborative process. When we introduced [GitHub Copilot](https://github.com/features/copilot) in 2021 as an "[AI pair programmer](https://github.blog/news-insights/product-news/introducing-github-copilot-ai-pair-programmer/)," it was designed to help developers stay in the flow, reduce boilerplate work, and accelerate coding.

But what if Copilot could be more than just an assistant? What if it could actively collaborate with you: working alongside you on synchronous tasks, tackling issues independently, and even reviewing your code?

That's the future we're building.

## Our vision for what's next

Today, AI agents in GitHub Copilot don't just assist developers but actively solve problems through multi-step reasoning and execution.
These agents are capable of:

- **Independent problem solving:** Copilot will break down complex tasks and take the necessary steps to solve them, providing updates along the way.
- **Adaptive collaboration:** Whether working in sync with you or independently in the background, Copilot will iterate on its own outputs to drive progress.
- **Proactive code quality:** Copilot will proactively assist with tasks like issue resolution, testing, and code reviews, ensuring higher-quality, maintainable code.

Rather than fitting neatly into synchronous or asynchronous categories, the future of Copilot lies in its ability to flexibly transition between modes: executing tasks independently while keeping you informed and in control. This evolution will allow you to focus on higher-level decision-making while Copilot takes on more of the execution.

Let's explore what's already here, and what's coming next.

### Why independent agents, and why now?

Modern development isn't linear. We context switch between features, bug fixes, dependency bumps, and reviews every day. A truly useful AI must:

1. **Act independently**: plan multi-step tasks and execute them without hand-holding.
2. **Stay transparent**: share its plan and progress, so you can intervene instantly.
3. **Earn trust**: test its own work and explain every change.

Copilot's new **agentic architecture** is designed around these guardrails.

## Copilot in action: Taking steps toward our vision

### Agent mode: A real-time AI teammate inside your IDE

If you've used [agent mode with GitHub Copilot](https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/) (and you should, because it's fantastic), you've already experienced an independent AI agent at work.

Agent mode lives where you code and feels like handing your computer to a teammate for a minute: it types on your screen while you look on, and can grab the mouse. When you prompt it, the agent takes control, works through the problem, and reports its work back to you with regular check-in points.
It can:

- **Read your entire workspace** to understand context.
- **Plan multi-step fixes or refactors** (and show you the plan first).
- **Apply changes, run tests, and iterate** in a tight feedback loop.
- **Ask for guidance** whenever intent is ambiguous.
- **Run and refine its own work** through an "agentic loop": planning, applying changes, testing, and iterating.

Rather than just responding to requests, Copilot in agent mode actively works toward your goal. You define the outcome, and it determines the best approach, seeking feedback from you as needed, testing its own solutions, and refining its work in real time.

Think of it as pair programming in fast forward: you're watching the task unfold in real time, free to jump in or redirect at any step. ✨

[Video: https://github.blog/wp-content/uploads/2025/06/copilot-agent-ga.mp4]

### Coding agent: An AI teammate that works while you don't

Not all coding happens in real time. Sometimes, you need to hand off tasks to a teammate and check back later.

That's where **our coding agent** comes in, and it's our first step in transforming Copilot into an independent agent. Coding agent spins up its **own secure dev environment** in the cloud. You can assign multiple issues to Copilot, then dive into other work (or grab a cup of coffee!) while it handles the heavy lifting. It can:

- **Clone your repo and bootstrap tooling** in isolation.
- **Break the issue into steps**, implement changes, and write or update tests.
- **Validate its work** by running your tests and linter.
- **Open a draft PR** and iterate based on your PR review comments.
- **Stream progress updates** so you can peek in, or jump in, any time.

Working with coding agent is like asking a teammate in another room, with their own laptop and setup, to tackle an issue. You're free to work on something else, but you can pop in for status or feedback whenever you like.

[Video: https://github.blog/wp-content/uploads/2025/06/Copilot-Coding-Agent-Overview-v3-Burned.mp4]

## Less TODO, more done: The next stage of Copilot's agentic future

The next stage of Copilot is being built on three converging pillars:
1. **Smarter, leaner models.** Ongoing breakthroughs in large language models keep driving accuracy up while pushing latency and cost down. Expanded context windows now span entire monoliths, giving Copilot the long-range "memory" it needs to reason through complex codebases and return answers grounded in your real code.
2. **Deeper contextual awareness.** Copilot increasingly understands the full story behind your work: issues, pull-request history, dependency graphs, even private runbooks and API specs (via MCP). By tapping this richer context, it can suggest changes that align with project intent, not just syntax.
3. **Open, composable foundation.** We're designing Copilot to slot into *your* stack, not the other way around. You choose the editor, models, and tools; Copilot plugs in, learns your patterns, and amplifies them. You're in the driver's seat, steering the AI to build, test, and ship code faster than ever.

Taken together, these pillars move Copilot beyond a single assistant toward a flexible AI teammate, one that can help any team, from three developers in a garage to thousands in a global enterprise, plan, code, test, and ship with less friction and more speed.

So, get ready for what's next. The next wave is already on its way.

**Learn more** about [GitHub Copilot >](https://github.com/features/copilot)

Using KerasHub for easy end-to-end machine learning workflows with Hugging Face - Google Developers Blog
https://developers.googleblog.com/en/load-model-weights-from-safetensors-into-kerashub-multi-framework-machine-learning/
2025-06-25T01:22:42.000Z
KerasHub enables users to mix and match model architectures and weights across different machine learning frameworks, allowing checkpoints from sources like Hugging Face Hub (including those created with PyTorch) to be loaded into Keras models for use with JAX, PyTorch, or TensorFlow. This flexibility means you can leverage a vast array of community fine-tuned models while maintaining full control over your chosen backend framework.

Imagen 4 is now available in the Gemini API and Google AI Studio - Google Developers Blog
https://developers.googleblog.com/en/imagen-4-now-available-in-the-gemini-api-and-google-ai-studio/
2025-06-24T22:07:42.000Z
Imagen 4, Google's advanced text-to-image model, is now available in paid preview via the Gemini API and Google AI Studio, offering significant quality improvements, especially for text generation within images. The Imagen 4 family includes Imagen 4 for general tasks and Imagen 4 Ultra for high-precision prompt adherence, with all generated images featuring a non-visible SynthID watermark.
Supercharge your notebooks: The new AI-first Google Colab is now available to everyone - Google Developers Blog
https://developers.googleblog.com/en/new-ai-first-google-colab-now-available-to-everyone/
2025-06-24T17:47:45.000Z
The new AI-first Google Colab enhances productivity with improvements powered by features like iterative querying for conversational coding, a next-generation Data Science Agent for autonomous workflows, and effortless code transformation. Early adopters report a dramatic productivity boost, accelerating ML projects, debugging code faster, and effortlessly creating high-quality visualizations.

Why developer expertise matters more than ever in the age of AI - The GitHub Blog
https://github.blog/?p=89040
2025-06-24T17:04:47.000Z

*Editor's note: This piece was originally published in our LinkedIn newsletter, Branching Out_. [Sign up now for more career-focused content >](https://www.linkedin.com/newsletters/branching-out-6958196028076429312/)*

AI tools seem to be everywhere. With the tap of a key, they provide ready answers to queries, autocomplete faster than our brains can, and even suggest entire blocks of code. Research has shown that [GitHub Copilot](https://github.com/features/copilot) enables developers to [code up to 55% faster](https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/). Junior developers, specifically, [may see a 27% to 39% increase in output with AI assistance according to MIT](https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-affects-highly-skilled-workers?utm_source=chatgpt.com), showing even greater productivity gains from their adoption of AI than more experienced developers.

**But here's the question: you may be coding faster with AI, but when was the last time you asked yourself *why* before adopting a suggestion from an AI coding assistant?**

Being a developer is not just about producing code. It's about understanding *why* the code works, how it fits into the bigger picture, and what happens when things break down. The best developers know how to think critically about new problems and take a systems view of solving them. That kind of expertise is what keeps software resilient, scalable, and secure, especially as AI accelerates how quickly we ship. Without it, we risk building faster but breaking more.

[Our CEO, Thomas Dohmke, put it bluntly at VivaTech](https://timesofindia.indiatimes.com/technology/tech-news/github-ceo-thomas-dohmke-to-startups-your-companies-would-struggle-without-developers-as-ai-coding-assistants-can-only-/articleshow/121844990.cms?utm_source=chatgpt.com): "Startups can launch with AI-generated code, but they can't scale without experienced developers." Developer expertise is the multiplier on AI, not the bottleneck.

We're not saying you have to reject AI to be a great developer. At GitHub, we believe AI is a superpower, one that helps you move faster and build better when used thoughtfully.
Your role as a developer in the age of AI is to be the human-in-the-loop: the person who knows why code works, why it sometimes doesn't, what the key requirements in your environment are, and how to debug, guide AI tools, and go beyond vibe coding.

After all, AI can help you write code a lot faster, but only developer expertise turns that speed into resilient, scalable, and secure software.

**TL;DR:** AI pair-programming makes you faster, but it can't replace the judgment that keeps software safe and maintainable. This article offers three concrete ways to level up your expertise.

## AI's productivity dividend + developer experience = greater impact

| Benefit | How human judgment multiplies the value |
| --- | --- |
| ⏱️ Faster commits (up to 55% quicker task completion) | Devs run thoughtful code reviews, write tests, and surface edge cases so speed never comes at the cost of quality. |
| 🧠 Lower cognitive load | Freed-up mental bandwidth lets developers design better architectures, mentor teammates, and solve higher-order problems. |
| 🌱 Easier onboarding for juniors | Senior engineers provide context, establish standards, and turn AI suggestions into teachable moments, building long-term expertise. |
| 🤖 Automated boilerplate | Devs tailor scaffolding to real project needs, question assumptions, and refactor early to keep tech debt in check and systems secure. |

Speed without judgment can mean:

- Security vulnerabilities that static analysis can't spot on its own.
- Architecture choices that don't scale beyond the demo.
- Documentation drift that leaves humans and models guessing.

The remedy? Double down on the fundamentals that AI still can't master.

## Mastering the fundamentals: 3 key parts of your workflow to focus on when using AI

As the home for all developers, we've seen it again and again: becoming AI-savvy starts with the old-school basics. You know, the classic tools and features you used before AI became a thing (we know, it's hard to remember such a time!). We believe that only by mastering the fundamentals can you get the most value, at scale, out of AI developer tools like GitHub Copilot.

A junior developer who jumps into their first AI-assisted project without a foundational understanding of the basics (like pull requests, code reviews, and documentation) may ship fast, but without context or structure, they risk introducing bugs, missing edge cases, or confusing collaborators. That's not an AI problem. It's a fundamentals problem.

Let's revisit the core skills every developer should bring to the table, AI or not.
With the help of a few of our experts, we'll show you how to level them up so you can dominate in the age of AI.

### 1. Push for excellence in the pull request

At the heart of developer collaboration, pull requests are about clearly communicating your intent, explaining your reasoning, and making it easier for others (humans and AI alike!) to engage with your work.

A well-scoped PR communicates *why* a change exists, not just *what* changed. That context feeds human reviewers and Copilot alike.

As GitHub developer advocate [Kedasha Kerr](https://github.com/ladykerr) advises, start by keeping your pull requests small and focused. A tight, purposeful pull request is easier to review, less likely to introduce bugs, and faster to merge. It also gives your reviewers, as well as AI tools like Copilot, a clean scope to work with.

Your pull request description is where clarity counts. Don't just list what changed; explain *why* it changed. Include links to related issues, conversations, or tracking tickets to give your teammates the full picture. If your changes span multiple files, suggest where to start reviewing. And be explicit about what kind of feedback you're looking for: a quick sanity check? A deep dive? Let your reviewers know.

Before you ask for a review, review it yourself. Kedasha recommends running your tests, previewing your changes, and catching anything unclear or unpolished. This not only respects your reviewers' time, it improves the quality of your code and deepens your understanding of the work.

A thoughtful pull request is a signal of craftsmanship. It builds trust with your team, strengthens your communication skills, and gives Copilot better context to support you going forward. That's a win for you, your team, and your future self.

**Here's a quick 5-item PR checklist to reference as you work:**

1. **Scope ≤ 300 lines** (or break it up).
2. **Title = verb + object** (e.g., *Refactor auth middleware to async*).
3. **Description answers "why now?"** and links to the issue.
4. **Highlight breaking changes** with ⚠️ BREAKING in bold.
5. **Request specific feedback** (e.g., *Concurrency strategy OK?*).

Drop a snippet like the one sketched below into `.github/pull_request_template.md` and merge.
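The article doesn't include the template itself, so here is a minimal sketch of what such a file might contain, derived from the five checklist items above (the section names are illustrative, not an official GitHub convention):

```markdown
<!-- .github/pull_request_template.md -->
## Why now?
<!-- Explain the motivation and link the issue this PR addresses. -->
Closes #<issue-number>

## What changed
<!-- Keep the scope ≤ ~300 lines; split the PR if it grows beyond that. -->

## Breaking changes
<!-- Flag any with **⚠️ BREAKING** so reviewers can't miss them. -->

## Feedback requested
<!-- Be specific, e.g. "Concurrency strategy OK?" -->
```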
[*Learn more about creating a great pull request >*](https://github.blog/developer-skills/github/beginners-guide-to-github-creating-a-pull-request/)

### 2. Rev up your code reviews

AI can generate code in seconds, but knowing how to review that code is where real expertise develops. Every pull request is a conversation: "I believe this improves the codebase, do you agree?" As GitHub staff engineer [Sarah Vessels](https://github.com/cheshire137) explains, good code reviews don't just catch bugs; they teach, transfer knowledge, and help teams move faster with fewer costly mistakes.

And let's be honest: as developers, we often read and review far more code than we actually write (and that's ok!). No matter if code comes from a colleague or an AI tool, code reviews are a fundamental part of being a developer, and building a strong code review practice is critical, especially as the volume of code increases.

You should start by reviewing your own pull requests before assigning them to others. Leave comments where you'd have questions as a reviewer. This not only helps you spot problems early, but also provides helpful context for your teammates. Keep pull requests small and focused. The smaller the diff, the easier it is to review, debug, and even roll back if something breaks in production. In DevOps organizations, especially large ones, small, frequent commits also help reduce merge conflicts and keep deployment pipelines flowing smoothly.

As a reviewer, focus on clarity. Ask questions, challenge assumptions, and check how code handles edge cases or unexpected data. If you see a better solution, offer a specific example rather than just saying "this could be better." Affirm good choices too: calling out strong design decisions helps reinforce shared standards and makes the review process less draining for authors.

Code reviews give you daily reps to build technical judgement, deepen your understanding of the codebase, and earn trust with your team. In an AI-powered world, they're also a key way to level up by helping you slow down, ask the right questions, and spot patterns AI might miss.

**Here are some heuristics to keep in mind when reviewing code:**

- **Read the tests first.** They encode intent.
- **Trace data flow** from user input to DB writes to external calls.
- **Look for hidden state** in globals, singletons, and caches.
- **Ask "What happens under load?"** even if performance isn't in scope.
- **Celebrate good patterns** to reinforce team standards.

[*Learn more about how to review code effectively >*](https://github.blog/developer-skills/github/how-to-review-code-effectively-a-github-staff-engineers-philosophy/)

### 3. Invest in documentation

Strong pull requests and code reviews help your team build better software today. But documentation makes it easier to build better software tomorrow. In the AI era, where code can be generated in seconds, clear, thorough documentation remains one of the most valuable, and overlooked, skills a developer can master.

Good documentation helps everyone stay aligned: your team, new contributors, stakeholders, and yes, even AI coding agents (docs make great context for any AI model, after all). The clearer your docs, the more effective AI tools like Copilot can be when generating code, tests, or summaries that rely on understanding your project's structure. As GitHub software engineer [Brittany Ellich](https://github.com/brittanyellich) and technical writer [Sam Browning](https://github.com/sabrowning1) explain, well-structured docs accelerate onboarding, increase adoption, and make collaboration smoother by reducing back and forth.

The key is to keep your documentation clear, concise, and structured. Use plain language, focus on the information people actually need, and avoid overwhelming readers with too many edge cases or unnecessary details.
Organize your docs with the [Diátaxis framework](https://diataxis.fr/), which breaks documentation into four categories:

- Tutorials for hands-on learning with step-by-step guides
- How-to guides for task-oriented steps with bulleted or numbered lists
- Explanations for deeper understanding
- Reference for technical specs such as API specs

When your docs follow a clear structure, contributors know exactly where to find what they need and where to add new information as your project evolves.
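In practice, that can be as simple as mirroring the four categories in your docs folder. A minimal sketch (the directory names are illustrative, not prescribed by Diátaxis):

```text
docs/
├── tutorials/      # learning-oriented, step-by-step lessons
├── how-to/         # task-oriented guides for a specific goal
├── explanation/    # background and reasoning for deeper understanding
└── reference/      # technical specs, such as API references
```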
In short: great documentation forces you to sharpen your own understanding of the system you're building. That kind of clarity compounds over time and is exactly the kind of critical thinking that makes you a stronger developer.

[*Learn more about how to document your project effectively >*](https://github.blog/developer-skills/documentation-done-right-a-developers-guide/)

### A level-up dev toolkit

To make things simple, here's a skills progression matrix to keep in mind no matter what level you're at.

| Skill | Junior | Mid-level | Senior |
| --- | --- | --- | --- |
| Pull requests | Describes *what* changed | Explains *why* and links issues | Anticipates perf/security impact & suggests review focus |
| Code reviews | Leaves 👍/👎 | Gives actionable comments | Mentors, models architecture trade-offs |
| Documentation | Updates README | Writes task-oriented guides | Curates docs as a product with metrics |

**And here are some quick wins you can copy today:**

- `.github/CODEOWNERS` to auto-route reviews
- PR and issue templates for consistent context
- GitHub Skills course: [Communicating with Markdown](https://github.com/skills/communicate-using-markdown)

## The bottom line

In the end, AI is changing how we write code, and curiosity, judgment, and critical thinking are needed more than ever. The best developers don't just accept what AI suggests. They ask why. They provide context. They understand the fundamentals. They think in systems, write with intention, and build with care.

So keep asking why. Stay curious. Continue learning. That's what sets great developers apart, and it's how you'll survive and thrive in an AI-powered future.

**Want to get started?** [Explore GitHub Copilot >](https://github.com/features/copilot/?utm_source=newsletter&utm_medium=social&utm_campaign=copilot_free_launch)

Gemini 2.5 for robotics and embodied intelligence - Google Developers Blog
https://developers.googleblog.com/en/gemini-25-for-robotics-and-embodied-intelligence/
2025-06-24T13:27:39.000Z
Gemini 2.5 Pro and Flash are transforming robotics by enhancing coding, reasoning, and multimodal capabilities, including spatial understanding. These models are used for semantic scene understanding, code generation for robot control, and building interactive applications with the Live API, with a strong emphasis on safety improvements and community applications.

Multilingual innovation in LLMs: How open models help unlock global communication - Google Developers Blog
https://developers.googleblog.com/en/unlock-global-communication-gemma-projects/
2025-06-23T20:07:41.000Z
Developers adapt LLMs like Gemma for diverse languages and cultural contexts, demonstrating AI's potential to bridge global communication gaps by addressing challenges like translating ancient texts, localizing mathematical understanding, and enhancing cultural sensitivity in lyric translation.

Google Cloud donates A2A to Linux Foundation - Google Developers Blog
https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/
2025-06-23T16:52:40.000Z
Google, along with Amazon and Cisco, announces the formation of the Agent2Agent Foundation under the Linux Foundation, establishing A2A as an industry standard for AI agent interoperability, fostering a diverse ecosystem, ensuring neutral governance, and accelerating secure innovation in AI applications.

Gemini Code Assist in Apigee API Management now generally available - Google Developers Blog
https://developers.googleblog.com/en/gemini-code-assist-in-apigee-api-management-now-generally-available/
2025-06-18T16:37:40.000Z
Gemini Code Assist in Apigee API Management enhances API development with AI-assisted features like natural language API creation, AI-generated summaries, and iterative design, allowing seamless integration with your organization's existing API ecosystem and ensuring consistency, security, and reduced duplication, while offering enterprise-grade security and a streamlined development workflow.

GitHub Copilot Spaces: Bring the right context to every suggestion - The GitHub Blog
https://github.blog/?p=88913
2025-06-18T16:00:00.000Z

When generative AI tools guess what you need, the magic only lasts as long as the guesses are right.
Add an unfamiliar codebase, a security checklist your team keeps in a wiki, or a one-off Slack thread that explains *why* something matters, and even the most powerful model may fill in gaps with assumptions rather than having access to your specific context and knowledge.

[GitHub Copilot Spaces](https://docs.github.com/en/copilot/using-github-copilot/copilot-spaces/creating-and-using-copilot-spaces) fixes that problem by letting you **bundle the exact context Copilot should read** (code, docs, transcripts, sample queries, you name it) into a reusable "space." Once a space is created, every Copilot chat, completion, or command is grounded in that curated knowledge, producing answers that feel like they came from your organization's resident expert instead of a generic model.

In this article, we'll walk through:

- A 5-minute quick-start guide to creating your first space
- Tips for personalizing Copilot's tone, style, and conventions with custom instructions
- Real-world recipes for accessibility, data queries, and onboarding
- Collaboration, security, and what's next on the roadmap (spoiler: IDE integration and Issues/PR support)

### Want to learn more? Try our Docs.

We have everything you need to get started, including pro tips on the context that's most helpful in your workflows.

[Explore Docs >](https://docs.github.com/en/copilot/using-github-copilot/copilot-spaces/about-organizing-and-sharing-context-with-copilot-spaces)

## Why context is the new bottleneck for AI-assisted development

Large language models (LLMs) thrive on patterns, but day-to-day engineering work is full of *un*patterned edge cases, including:

- A monorepo that mixes modern React with legacy jQuery
- Organizational wisdom buried in Slack threads or internal wikis
- Organization-specific security guidelines that differ from upstream OSS docs

Without that context, an AI assistant can only guess. But with Copilot Spaces, you choose which files, documents, or free-text snippets matter, drop them into a space, and let Copilot use that context to answer questions or write code. As Kelly Henckel, PM for GitHub Spaces, said in our [GitHub Checkout](https://www.youtube.com/watch?v=a0LWEWLUt48) episode, "Spaces make it easy to organize and share context, so Copilot acts like a subject matter expert." The result?
Fewer wrong guesses, less copy-pasting, and code that's commit-ready.

## What exactly *is* a Copilot Space?

Think of a space as a secure, shareable **container of knowledge** plus **behavioral instructions**:

| | What it holds | Why it matters |
| --- | --- | --- |
| Attachments | Code files, entire folders, Markdown docs, transcripts, or any plain text you add | Gives Copilot the ground truth for answers |
| Custom instructions | Short system prompts to set tone, coding style, or reviewer expectations | Lets Copilot match your house rules |
| Sharing & permissions | Follows the same role/visibility model you already use on GitHub | No new access control lists to manage |
| Live updates | Files stay in sync with the branch you referenced | Your space stays up to date with your codebase |

Spaces are available to **anyone with a Copilot license (Free, Individual, Business, or Enterprise)** while the feature is in public preview. Admins can enable it under **Settings > Copilot > Preview features**.

**TL;DR**: A space is like pinning your team's *collective brain* to the Copilot sidebar and letting everyone query it in plain language.

## Quick-start guide: How to build your first space in 5 minutes

1. **Navigate** to [github.com/copilot/spaces](https://github.com/copilot/spaces) and click **Create space**.
2. **Name it clearly**. For example, `frontend-styleguide`.
3. **Add a description** so teammates know when, *and when not*, to use it.
4. **Attach context**:
   - From repos: Pull in folders like `src/components` or individual files such as `eslint.config.js`.
   - Free-text hack: Paste a Slack thread, video transcript, onboarding checklist, or even a JSON schema into the *Text* tab. Copilot treats it like any other attachment.
5. **Write custom instructions**. A sentence or two is enough:
   - "Respond as a senior React reviewer. Enforce our ESLint rules and tailwind class naming conventions."
6. **Save and test it**. You're done.
Ask Copilot a question in the Space chat, e.g., "Refactor this `<Button>` component to match our accessibility checklist," and watch it cite files you just attached.

### Pro tip: Keep spaces focused

Instead of dumping your entire repo into one space, create smaller, purpose-built spaces like *Accessibility*, *Data-Queries*, *Auth-Model*, etc. Kelly, the PM behind the feature, uses this pattern internally at GitHub to make subject-matter expertise reusable.

## Personalize Copilot's coding style (and voice, too)

Custom instructions are the "personality layer" of a space, and they're where spaces shine because they live *alongside* the attachments. This allows you to do powerful things with a single sentence, including:

- **Enforce conventions**
  - "Always prefer Vue 3 `script setup` syntax and Composition API for examples."
- **Adopt a team tone**
  - "Answer concisely. Include a one-line summary before code blocks."
- **Teach Copilot project-specific vocabulary**
  - "Call it 'scenario ID' (SCID), not test case ID."

During the GitHub Checkout interview, Kelly shared how she built a personal space for a nonprofit side project: She attached only the Vue front-end folder *plus* instructions on her preferred conventions, and Copilot delivered commit-ready code snippets that matched her style guide on the first try.

## Automate your workflow: three real-world recipes

### 1. Accessibility compliance assistant

*Space ingredients*

- Markdown docs on WCAG criteria and GitHub's internal "Definition of Done"
- Custom instruction: "When answering, cite the doc section and provide a code diff if changes are required."

**How it helps**: Instead of pinging the accessibility lead on Slack, you can use Spaces to ask questions like "What steps are needed for MAS-C compliance on this new modal?" Copilot summarizes the relevant checkpoints, references the doc anchor, and even suggests ARIA attributes or color-contrast fixes. GitHub's own accessibility SME, Katherine, pinned this space in Slack so anyone filing a review gets instant, self-service guidance.
### 2. Data-query helper for complex schemas

*Space ingredients*

- YAML schema files for 40+ event tables
- Example KQL snippets saved as `.sql` files
- Instruction: "Generate KQL only, no prose explanations unless asked."

**How it helps**: Product managers and support engineers who *don't* know your database structures can ask, "Average PR review time last 7 days?" Copilot autocompletes a valid KQL query with correct joins and lets them iterate. Result: PMs and support can self-serve without bugging data science teams.
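The article doesn't show the generated query itself, but here is a sketch of the kind of KQL Copilot might produce for that prompt. The table and column names are invented for illustration; the real schema lives in the YAML files attached to the space:

```kql
// Hypothetical PullRequestEvents table with CreatedAt/ReviewedAt datetimes.
PullRequestEvents
| where CreatedAt > ago(7d)
| extend ReviewHours = (ReviewedAt - CreatedAt) / 1h
| summarize AvgReviewHours = avg(ReviewHours)
```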
### 3. Onboarding Hub and knowledge base in one link

*Space ingredients*

- Key architecture diagrams exported as SVG text
- ADRs and design docs from multiple repos
- Custom instruction: "Answer like a mentor during onboarding; link to deeper docs."

**How it helps**: New hires type "How does our auth flow handle SAML?" and get a structured answer with links and diagrams, all without leaving GitHub. Because spaces stay in sync with `main`, updates to ADRs propagate automatically. No stale wikis.

## Collaboration that feels native to GitHub

Spaces respect the same permission model you already use:

- **Personal spaces**: visible only to you unless shared
- **Organization-owned spaces**: use repo or team permissions to gate access
- **Read-only vs. edit-capable**: let SMEs maintain the canon while everyone else consumes

Sharing is as simple as sending the space URL or pinning it to a repo README. Anyone with access and a Copilot license can start chatting instantly.

## What's next for Copilot Spaces?

We're working to bring Copilot Spaces to more of your workflows, and are currently developing:

- **Issues and PR attachments** to bring inline discussions and review notes into the same context bundle.
- **IDE Integration**: Query Spaces in VS Code for tasks like writing tests to match your team's patterns.
- **Org-wide discoverability** to help you browse spaces like you browse repos today, so new engineers can search "Payments SME" and start chatting.

Your feedback will shape those priorities. [Drop your ideas or pain points in the public discussion](https://github.com/orgs/community/discussions/160840?utm_source=chatgpt.com) or, if you're an enterprise customer, through your account team.

## Get started today

Head to [github.com/copilot/spaces](https://github.com/copilot/spaces), spin up your first space, and let us know how it streamlines your workflow. Here's how to get it fully set up on your end:

1. **Flip the preview toggle**: *Settings > Copilot > Preview features > Enable Copilot Spaces*.
2. **Create one small, high-impact space**, maybe your team's code-review checklist or a set of common data queries.
3. **Share the link** in Slack or a README and watch the pings to subject-matter experts drop.
4. **Iterate**: prune unused attachments, refine instructions, or split a giant space into smaller ones.

Copilot Spaces is free during the public preview and doesn't count against your Copilot seat entitlements when you use the base model. We can't wait to see what you build when Copilot has the *right* context at its fingertips.

Gemini 2.5: Updates to our family of thinking models - Google Developers Blog
https://developers.googleblog.com/en/gemini-2-5-thinking-model-updates/
2025-06-17T16:47:40.000Z
Google is releasing updates to its Gemini 2.5 model family, including the generally available and stable Gemini 2.5 Pro and Flash, and the new Gemini 2.5 Flash-Lite "thinking models" in preview, offering enhanced performance and accuracy, with Flash-Lite providing a lower-cost option.

5 tips for using GitHub Copilot with issues to boost your productivity - The GitHub Blog
https://github.blog/?p=88879
2025-06-17T16:00:00.000Z

Managing issues in software development can be tedious and time-consuming. But what if your AI peer programmer could streamline this process for you? [GitHub Copilot](https://github.com/features/copilot)'s latest issue management features can help developers create, organize, and even solve issues. Below, we'll dig into these features and how they can save time, reduce friction, and maintain consistency across your projects.

[Video: https://github.blog/wp-content/uploads/2025/06/Copilot-creates-issues-final-1.mp4]

## 1. Image to issue: Turn screenshots into instant bug reports

Writing detailed bug reports is often repetitive and frustrating, leading to inconsistent documentation. Copilot's image-to-issue feature significantly reduces this friction.

Simply paste a screenshot of the bug into Copilot chat with a brief description, prompt Copilot to create an issue for you, and Copilot will analyze the image and generate a comprehensive bug report. No more struggling to describe visual glitches or UI problems: the image will speak for itself, and Copilot will handle the documentation.

For example, if you encounter a UI alignment issue or a visual glitch that's hard to describe, just capture a screenshot, paste it into Copilot, and briefly mention the problem.
In the animation above, the user's prompt was "create me a bug issue because markdown tables are not rendering properly in the comments." Copilot then automatically drafted a report, including steps to reproduce the bug.

To get the most out of this feature, consider annotating your screenshots clearly, highlighting or circling the problematic area, to help Copilot generate even more precise issue descriptions.

[Dive into the documentation to learn more](https://docs.github.com/en/copilot/using-github-copilot/using-github-copilot-to-create-issues).

## 2. Get the details right: Templates, tags, and types

Projects quickly become disorganized when team members skip adding proper metadata. Incorrect templates, missing labels, or wrong issue types make tracking and prioritization difficult.

Copilot solves this by automatically inferring the best template based on your prompt. It also adds appropriate labels and issue types without requiring you to navigate multiple dropdown menus or memorize tagging conventions.

Need something specific? Simply ask Copilot to add particular labels or switch templates. If you change templates after drafting, Copilot will automatically reformat your content. No manual copying required.

## 3. Stay organized with versioning and milestones

Keeping issues updated and properly categorized is crucial for clear communication, maintaining project velocity, and ensuring visibility into progress. But with so much else to do, it's easy to let this work fall by the wayside.

With Copilot, adding projects and milestones is as simple as typing a prompt. You can also specify exactly how you want issues organized. For example, ask Copilot to use the "Bug Report" or "Feature Request" template, add labels like `priority: high`, `frontend`, or `needs-triage`, or set the issue type to "Task" or "Epic." Copilot will apply these details automatically, ensuring your issues are consistently categorized.

Additionally, Copilot tracks all changes, making them easily referenceable. You can review issue history and revert changes if needed, ensuring nothing important gets lost.

## 4. Batch create multiple issues at once

Sometimes you need to log several issues after a customer meeting, user testing session, or bug bash. Traditionally, this means repeating the same creation process multiple times.

Copilot supports multi-issue drafting, allowing you to create multiple issues in a single conversation. Whether logging feature requests or documenting bugs, batch creation saves significant time.

Simply prompt Copilot to create the issues, describe each one, and Copilot will draft them all.
For example, you could give the following prompt to create two issues at once:

```
Create me issues for the following features:
- Line Breaks Ignored in Rendered Markdown Despite Double-Space
- Bold and Italic Markdown Styles Not Applied When Combined
```

You will still need to review and finalize each one, but the drafting process is streamlined into a single workflow.

## 5. Let AI help fix your bugs with Copilot coding agent

[Creating issues](https://github.blog/developer-skills/github/how-to-create-issues-and-pull-requests-in-record-time-on-github/) is only half the battle; fixing them is where the real work begins. You can now [assign issues](https://github.blog/ai-and-ml/github-copilot/assigning-and-completing-issues-with-coding-agent-in-github-copilot/) directly to Copilot. Just ask [Copilot coding agent](https://docs.github.com/en/copilot/using-github-copilot/coding-agent/enabling-copilot-coding-agent) to take ownership of the issue, and your AI coding assistant will start analyzing the bug. Copilot can even suggest draft pull requests with potential fixes.

This seamless handoff reduces context-switching and accelerates resolution times, allowing your team to focus on more complex challenges.

## Beyond Copilot: Issues enhancements on GitHub

While Copilot is already revolutionizing issue management, we at GitHub are always looking for ways to enhance the overall issues experience. For example, you can now:

- Standardize [issue types](https://docs.github.com/en/issues/tracking-your-work-with-issues/configuring-issues/managing-issue-types-in-an-organization) across repositories for consistent tracking and reporting.
- Break down complex tasks into [sub-issues](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/adding-sub-issues) for better progress management.
- Use [advanced search](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/filtering-and-searching-issues-and-pull-requests#building-advanced-filters-for-issues) capabilities with logical operators to quickly find exactly what you need.
- Manage larger projects with expanded limits supporting up to 50,000 items.

## Kickstart enhanced issue management today

Ready to transform your issue management workflow with GitHub Copilot?
Head to [github.com/copilot](https://github.com/copilot) and try prompts like:

- "Create me an issue for…"
- "Log a bug for…"
- Or simply upload a screenshot and mention you want to file a bug.

Experience firsthand how Copilot makes issue management feel less like administrative overhead and more like a conversation with your AI pair programmer.

**Learn more** about [creating issues with Copilot >](https://docs.github.com/en/copilot/using-github-copilot/using-github-copilot-to-create-issues)

Highlights from Git 2.50 - The GitHub Blog
https://github.blog/?p=88787
2025-06-16T17:12:27.000Z

The open source Git project just [released Git 2.50](https://lore.kernel.org/git/xmqq1prj1umb.fsf@gitster.g/T/#u) with features and bug fixes from 98 contributors, 35 of them new. We last caught up with you on the latest in Git back when [2.49 was released](https://github.blog/open-source/git/highlights-from-git-2-49/).

💡 Before we get into the details of this latest release, we wanted to remind you that [Git Merge](https://git-merge.com), the conference for Git users and developers, is back this year on September 29-30 in San Francisco. Git Merge will feature talks from developers working on Git and in the Git ecosystem. Tickets are on sale now; check out [the website](https://git-merge.com) to learn more.

With that out of the way, let's take a look at some of the most interesting features and changes from Git 2.50.

## Improvements for multiple cruft packs

When we covered [Git 2.43](https://github.blog/open-source/git/highlights-from-git-2-43/#multiple-cruft-packs), we talked about newly added support for [multiple cruft packs](https://github.blog/open-source/git/highlights-from-git-2-43/#multiple-cruft-packs). Git 2.50 improves on that with better command-line ergonomics and some important bugfixes. In case you're new to the series, need a refresher, or aren't familiar with [cruft packs](https://github.blog/2022-09-13-scaling-gits-garbage-collection/), here's a brief overview:

Git [objects](https://git-scm.com/book/en/v2/Git-Internals-Git-Objects) may be either reachable or unreachable. The set of reachable objects is everything you can walk to starting from one of your repository's [references](https://git-scm.com/book/en/v2/Git-Internals-Git-References): traversing from commits to their parent(s), trees to their sub-tree(s), and so on. Any object that you didn't visit by repeating that process over all of your references is unreachable.
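As a quick aside, you can list those objects yourself; this works in any repository and predates cruft packs:

```sh
# Print objects that exist but aren't reachable from any reference:
git fsck --unreachable
```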
Any object that you didn&rsquo;t visit by repeating that process over all of your references is unreachable.</p> <p></p><p id="return">In <a href="https://github.blog/open-source/git/highlights-from-git-2-37/">Git 2.37</a>, Git introduced <a href="https://git-scm.com/docs/cruft-packs/2.37.0">cruft packs</a>, a new way to store your repository&rsquo;s unreachable objects. A cruft pack looks like an ordinary <a href="https://git-scm.com/book/en/v2/Git-Internals-Packfiles">packfile</a> with the addition of an <code>.mtimes</code> file, which is used to keep track of when each object was most recently written in order to determine when it is safe<sup><a href="#footnote">1</a></sup> to discard it.</p><p>However, updating the cruft pack could be cumbersome&ndash;particularly in repositories with many unreachable objects&ndash;since a repository&rsquo;s cruft pack must be rewritten in order to add new objects. Git 2.43 began to address this through a new command-line option: <code>git repack --max-cruft-size</code>. This option was designed to split unreachable objects across multiple packs, each no larger than the value specified by <code>--max-cruft-size</code>. But there were a couple of problems:</p> <ul class="wp-block-list"> <li>If you&rsquo;re familiar with <code>git repack</code>&rsquo;s <code>--max-pack-size</code> option, <code>--max-cruft-size</code>&rsquo;s behavior is quite confusing. The former option specifies the maximum size an individual pack can be, while the latter involves how and when to move objects between multiple packs.</li> <li>The feature was broken to begin with! Since <code>--max-cruft-size</code> <em>also</em> imposes on cruft packs the same pack-size constraints as <code>--max-pack-size</code> does on non-cruft packs, it is often impossible to get the behavior you want.</li> </ul> <p>For example, suppose you had two 100 MiB cruft packs and ran <code>git repack --max-cruft-size=200M</code>. You might expect Git to merge them into a single 200 MiB pack. But since <code>--max-cruft-size</code> also dictates the maximum size of the output pack, Git will refuse to combine them, or worse: rewrite the same pack repeatedly.</p> <p>Git 2.50 addresses both of these issues with a new option: <code>--combine-cruft-below-size</code>. Instead of specifying the maximum size of the output pack, it determines which existing cruft pack(s) are eligible to be combined. This is particularly helpful for repositories that have accumulated many unreachable objects spread across multiple cruft packs. With this new option, you can gradually reduce the number of cruft packs in your repository over time by combining existing ones together.</p> <p>With the introduction of <code>--combine-cruft-below-size</code>, Git 2.50 repurposed <code>--max-cruft-size</code> to behave as a cruft pack-specific override for <code>--max-pack-size</code>. Now <code>--max-cruft-size</code> only determines the size of the outgoing pack, not which packs get combined into it.</p> <p>Along the way, a bug was uncovered that prevented objects stored in multiple cruft packs from being &ldquo;freshened&rdquo; in <a href="https://lore.kernel.org/git/c0c926adde2b7c8f4b53b7a274d5b8c040f77e62.1740680964.git.me@ttaylorr.com/">certain circumstances</a>. In other words, some unreachable objects don&rsquo;t have their modification times updated when they are rewritten, leading to them being removed from the repository earlier than they otherwise would have been. 
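</p> <p>To see how the new options fit together, here is a minimal sketch (hedged: it assumes a repository that has already accumulated several cruft packs, and the sizes are purely illustrative):</p> <pre>
# Merge any existing cruft packs smaller than 200 MiB into one (Git 2.50+):
$ git repack --cruft -d --combine-cruft-below-size=200M

# Separately cap the size of any cruft pack that gets written:
$ git repack --cruft -d --max-cruft-size=512M
</pre> <p>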
Git 2.50 squashes the freshening bug described above, meaning that you can now efficiently manage multiple cruft packs and freshen their objects to your heart&rsquo;s content.</p> <p>[<a href="https://github.com/git/git/compare/6a9e1c3507818fc7a7c301c16fda5ceecb82ae72...484d7adcdadbb72a3e0106c4fa49260cf1099b9a">source</a>, <a href="https://github.com/git/git/compare/f3db666cca0170a43ed602e7130c705882ce7574...08f612ba7000bf181ef6d8baed9ece322e567efd">source</a>]</p> <h2 class="wp-block-heading" id="h-incremental-multi-pack-reachability-bitmaps">Incremental multi-pack reachability bitmaps</h2> <p>Back in <a href="https://github.blog/open-source/git/highlights-from-git-2-47">our coverage of Git 2.47</a>, we talked about preliminary support for <a href="https://github.blog/open-source/git/highlights-from-git-2-47/#incremental-multi-pack-indexes">incremental multi-pack indexes</a>. Multi-pack indexes (MIDXs) act like a single pack <code>*.idx</code> file for objects spread across multiple packs.</p> <p>Multi-pack indexes are extremely useful to accelerate object lookup performance in large repositories by binary searching through a single index containing most of your repository&rsquo;s contents, rather than repeatedly searching through each individual packfile. But multi-pack indexes aren&rsquo;t just useful for accelerating object lookups. They&rsquo;re also the basis for multi-pack reachability bitmaps, the MIDX-specific analogue of classic single-pack reachability bitmaps. If neither of those is familiar to you, don&rsquo;t worry; here&rsquo;s a brief refresher. Single-pack <a href="https://git-scm.com/docs/bitmap-format/2.50.0">reachability bitmaps</a> store a collection of <a href="https://en.wikipedia.org/wiki/Bit_array">bitmaps</a> corresponding to a selection of commits. Each bit position in a pack bitmap refers to one object in that pack. In each individual commit&rsquo;s bitmap, the set bits correspond to objects that are reachable from that commit, and the unset bits represent those that are not.</p> <p>Multi-pack bitmaps were introduced to take advantage of the substantial performance increase afforded to us by reachability bitmaps. Instead of having bitmaps whose bit positions correspond to the set of objects in a single pack, a multi-pack bitmap&rsquo;s bit positions correspond to the set of objects in a multi-pack index, which may include objects from arbitrarily many individual packs. If you&rsquo;re curious to learn more about how multi-pack bitmaps work, you can read our earlier post <a href="https://github.blog/2021-04-29-scaling-monorepo-maintenance/"><em>Scaling monorepo maintenance</em></a>.</p> <p>However, like cruft packs above, multi-pack indexes can be cumbersome to update as your repository grows larger, since each update requires rewriting the entire multi-pack index and its corresponding bitmap, regardless of how many objects or packs are being added. In Git 2.47, the file format for multi-pack indexes became incremental, allowing multiple multi-pack index layers to be layered on top of one another, forming a chain of MIDXs. This made it much easier to add objects to your repository&rsquo;s MIDX, but the incremental MIDX format at the time did not yet have support for multi-pack bitmaps.</p> <p>Git 2.50 brings support for the multi-pack reachability format to incremental MIDX chains, with each MIDX layer having its own <code>*.bitmap</code> file.</p>
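<p>If you want to experiment, here is a hedged sketch of writing bitmapped MIDX layers (the feature is experimental, so treat the second invocation as an assumption that may vary between builds):</p> <pre>
# Write a classic, full multi-pack index with a multi-pack bitmap:
$ git multi-pack-index write --bitmap

# Assumed invocation: append an incremental MIDX layer; in Git 2.50
# each layer can carry its own *.bitmap file:
$ git multi-pack-index write --incremental --bitmap
</pre> <p>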
These bitmap layers can be used in conjunction with one another to provide reachability information about selected commits at any layer of the MIDX chain. In effect, this allows extremely large repositories to quickly and efficiently add new reachability bitmaps as new commits are pushed, regardless of how large the repository has grown.</p> <p>This feature is still considered highly experimental, and support for repacking objects into incremental multi-pack indexes and bitmaps is still fairly bare-bones. This is an active area of development, so we&rsquo;ll make sure to cover any notable developments to incremental multi-pack reachability bitmaps in this series in the future.</p><p>[<a href="https://github.com/git/git/compare/6e2a3b8ae0e07c0c31f2247fec49b77b5d903a83...27afc272c49137460fe9e58e1fcbe4c1d377b304">source</a>]</p> <h2 class="wp-block-heading" id="h-the-ort-merge-engine-replaces-recursive">The <code>ORT</code> merge engine replaces <code>recursive</code></h2> <p>This release also saw some exciting updates related to merging. Way back when Git 2.33 was released, we talked about a new merge engine called &ldquo;ORT&rdquo; (standing for &ldquo;Ostensibly Recursive&rsquo;s Twin&rdquo;).</p><p>ORT is a from-scratch rewrite of Git&rsquo;s old merging engine, called &ldquo;recursive.&rdquo; ORT is significantly faster, more maintainable, and has many new features that were difficult to implement on top of its predecessor.</p><p>One of those features is the ability for Git to determine whether or not two things are mergeable without actually persisting any new objects necessary to construct the merge in the repository. Previously, the only way to tell whether two things are mergeable was to run <code>git merge-tree --write-tree</code> on them. That works, but <code>merge-tree</code> writes any new objects generated by the merge into the repository. Over time, these can accumulate and cause performance issues. In Git 2.50, you can make the same determination without writing any new objects by using <code>merge-tree</code>&rsquo;s new <code>--quiet</code> mode and relying on its exit code.</p><p>Most exciting in this release is that ORT has entirely superseded recursive, and recursive is no longer part of Git&rsquo;s source code. When ORT was first introduced, it was only accessible through <code>git merge</code>&rsquo;s <code>-s</code> option to select a strategy. In Git 2.34, ORT became the default choice over <code>recursive</code>, though the latter was still available in case there were bugs or behavior differences between the two. Now, 16 versions and two and a half years later, recursive has been completely removed from Git, with its author, Elijah Newren, <a href="https://lore.kernel.org/git/pull.1898.git.1743436279.gitgitgadget@gmail.com/">writing</a>:</p> <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"> <p>As a wise man once told me, &ldquo;Deleted code is debugged code!&rdquo;</p> </blockquote> <p>As of Git 2.50, recursive has been completely <s>debugged</s> deleted.</p>
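<p>Incidentally, the object-free mergeability check described above is easy to script. A minimal sketch (the branch names are placeholders):</p> <pre>
# Test whether two branches merge cleanly without persisting any new
# merge objects in the repository (Git 2.50+):
$ git merge-tree --write-tree --quiet topic main
$ echo $?    # 0: clean merge; 1: conflicts
</pre> <p>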
For more about ORT&rsquo;s internals and its development, check out this five-part series from Elijah <a href="https://blog.palantir.com/optimizing-gits-merge-machinery-1-127ceb0ef2a1">here</a>, <a href="https://blog.palantir.com/optimizing-gits-merge-machinery-2-d81391b97878">here</a>, <a href="https://blog.palantir.com/optimizing-gits-merge-machinery-3-2dc7c7436978">here</a>, <a href="https://blog.palantir.com/optimizing-gits-merge-machinery-part-iv-5bbc4703d050">here</a>, and <a href="https://blog.palantir.com/optimizing-gits-merge-machinery-part-v-46ff3710633e">here</a>.</p><p>[<a href="https://github.com/git/git/compare/17d9dbd3c270aaa33487f6a03d128c47aea6b309...29d7bf19512d8ca97be5cf708ca2e0bcc29408ab">source</a>, <a href="https://github.com/git/git/compare/8d6413a1bef7876b9c17a79358bd70b764ffacba...947e219fb6b1acc3d276d0b50ebf411c252a40bd">source</a>, <a href="https://github.com/git/git/compare/fe7ae3b87ef866e4818a106e8ce6e3d821ed76d7...170e30d6957e1f7b8d88046ae122f98d57dca988">source</a>]</p> <hr class="wp-block-separator has-alpha-channel-opacity"> <ul class="wp-block-list"> <li><p>If you&rsquo;ve ever scripted around your repository&rsquo;s objects, you are likely familiar with <code>git cat-file</code>, Git&rsquo;s purpose-built tool to list objects and print their contents. <code>git cat-file</code> has many modes, like <code>--batch</code> (for printing out the contents of objects), or <code>--batch-check</code> (for printing out certain information about objects without printing their contents).</p><p>Oftentimes it is useful to dump the set of all objects of a certain type in your repository. For commits, <code>git rev-list</code> can easily enumerate a set of commits. But what about, say, trees? In the past, to filter down to just the tree objects from a list of objects, you might have written something like:</p><pre>$ git cat-file --batch-check='%(objecttype) %(objectname)' \
    --buffer &lt;in | perl -ne 'print "$1\n" if /^tree ([0-9a-f]+)/'</pre><p>Git 2.50 brings Git&rsquo;s object filtering mechanism used in partial clones to <code>git cat-file</code>, so the above can be rewritten a little more concisely like:</p><pre>$ git cat-file --batch-check='%(objectname)' --filter='object:type=tree' &lt;in</pre> <p>[<a href="https://github.com/git/git/compare/9bdd7ecf7ec90433fc1803bf5d608d08857b3b49...8002e8ee1829f0c727aa2f7d9c18ad706cb63565">source</a>] </p> </li> <li>While we&rsquo;re on the topic, let&rsquo;s discuss a little-known <code>git cat-file</code> command-line option: <code>--allow-unknown-type</code>. This arcane option was used with objects that have a type other than <code>blob</code>, <code>tree</code>, <code>commit</code>, or <code>tag</code>. This is a quirk dating back a little more than <a href="https://github.com/git/git/compare/13f4f046929de00a8c16171c5e08cdcae887b54d...5ba9a93b39bef057be54ecf7933386a582981625">a decade</a> that allows <code>git hash-object</code> to write objects with arbitrary types. In the time since, this feature has gotten very little use. In fact, <code>git cat-file -p --allow-unknown-type</code> can&rsquo;t even print out the contents of one of these objects!
<pre>$ oid="$(git hash-object -w -t notatype --literally /dev/null)"
$ git cat-file -p $oid
fatal: invalid object type</pre> <p>This release makes the <code>--allow-unknown-type</code> option silently do nothing, and removes support from <code>git hash-object</code> for writing objects with unknown types in the first place.</p> <p>[<a href="https://github.com/git/git/compare/b6fa7fbcd1b6791675c0b36636745e467419a522...141f8c8c0535004fa5432d9a6d57bf08129a7dd8">source</a>]</p> </li> <li><p>The <code>git maintenance</code> command learned a number of new tricks this release as well. It can now perform a few new kinds of tasks, like <code>worktree-prune</code>, <code>rerere-gc</code>, and <code>reflog-expire</code>. <code>worktree-prune</code> mirrors <code>git gc</code>&rsquo;s functionality to remove stale or broken Git <a href="https://git-scm.com/docs/git-worktree/2.50.0">worktrees</a>. <code>rerere-gc</code> also mirrors existing functionality exposed via&nbsp;<code>git gc</code> to expire old <code>rerere</code> entries from previously recorded <a href="https://git-scm.com/docs/git-rerere/2.50.0">merge conflict resolutions</a>. Finally, <code>reflog-expire</code> can be used to expire stale entries from the <a href="https://git-scm.com/docs/git-reflog/2.50.0">reflog</a>.</p><p><code>git maintenance</code> also ships with new configuration for the existing <code>loose-objects</code> task. This task removes lingering loose objects that have since been packed away, and then makes new pack(s) for any loose objects that remain. The number of objects in those packs was previously capped at 50,000, and can now be configured via the <code>maintenance.loose-objects.batchSize</code> configuration.</p> <p>[<a href="https://github.com/git/git/compare/1d01042e314c0965845cae1fbcd0bc7e21f1b608...283621a553b60b26f14b9cf7e8b8c852ddba55d9">source</a>, <a href="https://github.com/git/git/compare/1a1661bd41697a106481e9e2467d0f5a0697349a...8e0a1ec0762405e045d924eed68b872fd29844c9">source</a>, <a href="https://github.com/git/git/compare/7b7fe0a898978618c36432f1f89b29cd412c7a23...6540560fd6c91091f6cf1eaedd034bc1827e1506">source</a>]</p> </li> <li><p>If you&rsquo;ve ever needed to recover some work you lost, you may be familiar with Git&rsquo;s <a href="https://git-scm.com/docs/git-reflog/2.50.0">reflog</a> feature, which allows you to track changes to a reference over time. For example, you can go back and revisit earlier versions of your repository&rsquo;s main branch by doing <code>git show main@{2}</code> (to show <code>main</code> prior to the two most recent updates) or <code>main@{1.week.ago}</code> (to show where your copy of the branch was a week ago).</p><p>Reflog entries can accumulate over time, and you can reach for <code>git reflog expire</code> in the event you need to clean them up. But how do you delete the entirety of a branch&rsquo;s reflog? If you&rsquo;re not yet running Git 2.50 and thought &ldquo;surely it&rsquo;s <code>git reflog delete</code>&rdquo;, you&rsquo;d be wrong!
Prior to Git 2.50, the only way to delete a branch&rsquo;s entire reflog was to do <code>git reflog expire $BRANCH --expire=all</code>.</p><p>In Git 2.50, a new <code>delete</code> sub-command was introduced, so you can accomplish the same as above with the much more natural <code>git reflog delete $BRANCH</code>.</p> <p>[<a href="https://github.com/git/git/compare/ee847e0034dbfde11f901fbfb74d210c1edad496...d1270689a11e1e0dcf19d0257ce773a1d63d02d8">source</a>]</p></li> <li><p>Speaking of references, Git 2.50 also received some attention to how references are processed and used throughout its codebase. When using the low-level <code>git update-ref</code> command, Git used to spend time checking whether or not the proposed refname could also be mistaken for a valid object ID, which would make lookups ambiguous. Since <code>update-ref</code> is such a low-level command, this check is no longer done, delivering some performance benefits to higher-level commands that rely on <code>update-ref</code> for their functionality.</p><p>Git 2.50 also learned how to cache whether or not any prefix of a proposed reference name already exists (for example, you can&rsquo;t create a reference <code>refs/heads/foo/bar/baz</code> if either <code>refs/heads/foo/bar</code> or <code>refs/heads/foo</code> already exists).</p><p>Finally, in order to make those checks, Git used to create a new reference iterator for each individual prefix. Git 2.50&rsquo;s reference backends learned how to &ldquo;seek&rdquo; existing iterators, saving time by being able to reuse the same iterator when checking each possible prefix.</p><p>[<a href="https://github.com/git/git/compare/01d17c05305edefbbe62926f5a5425207324a87f...87d297f48367737444810f8c3e76ef88cb6aa4e3">source</a>]</p></li> <li><p>If you&rsquo;ve ever had to tinker with Git&rsquo;s low-level <a href="https://curl.se/">curl</a> configuration, you may be familiar with Git&rsquo;s <a href="https://git-scm.com/docs/git-config/2.49.0#Documentation/git-config.txt-httplowSpeedLimithttplowSpeedTime">configuration options</a> for tuning HTTP connections, like <code>http.lowSpeedLimit</code> and <code>http.lowSpeedTime</code>, which are used to terminate an HTTP connection that is transferring data too slowly.</p><p>These options can be useful when fine-tuning Git to work in complex networking environments. But what if you want to tweak Git&rsquo;s <a href="https://en.wikipedia.org/wiki/Keepalive#TCP_keepalive">TCP Keepalive</a> behavior? This can be useful to control when and how often to send keepalive probes, as well as how many to send, before terminating a connection that hasn&rsquo;t sent data recently.</p><p>Prior to Git 2.50, this wasn&rsquo;t possible, but this version introduces three new configuration options: <code>http.keepAliveIdle</code>, <code>http.keepAliveInterval</code>, and <code>http.keepAliveCount</code>, which can be used to control the fine-grained behavior of curl&rsquo;s TCP probing (provided your operating system supports it).</p><p>[<a href="https://github.com/git/git/compare/c6b3824a193bc263a764d17def7df7f09ef82a2d...46e6f9af3ec063529738f4b5b0b97c28c005c365">source</a>]</p></li> <li><p>Git is famously portable and runs on a wide variety of operating systems and environments with very few dependencies. Over the years, various parts of Git have been written in Perl, including some commands like <a href="https://github.com/git/git/blob/5cde71d64aff03d305099b4d239552679ecfaab6/git-add--interactive.perl">the original implementation</a> of <code>git add -i</code>.
These days, very few remaining Git commands are written in Perl.</p><p>This version reduces Git&rsquo;s usage of Perl by removing it as a dependency of the test suite and documentation toolchain. Many Perl one-liners from Git&rsquo;s test suite were rewritten to use shell functions or builtins, and some were rewritten as tiny C programs. For the handful of remaining hard dependencies on Perl, those tests will be skipped on systems that don&rsquo;t have a working Perl.</p> <p>[<a href="https://github.com/git/git/compare/8f490db4e200edd22e247ec07fb1349a26c155b2...7a7b6022670c7946afea73a1eeb2ddc32d756624">source</a>, <a href="https://github.com/git/git/compare/a819a3da85655031a23abae0f75d0910697fb92c...a7fa5b2f0ccb567a5a6afedece113f207902fa6f">source</a>]</p> </li> <li> <p>This release also shipped a minor cosmetic update to <code>git rebase -i</code>. When starting a rebase, your <code>$EDITOR</code> might appear with contents that look something like: </p><pre>pick c108101daa foo
pick d2a0730acf bar
pick e5291f9321 baz</pre> <p>You can edit that list to <code>break</code>, <code>reword</code>, or <code>exec</code> (among many others), and Git will happily execute your rebase. But if you change the commit messages in your rebase&rsquo;s TODO script, they won&rsquo;t actually change!</p><p>That&rsquo;s because the commit messages shown in the TODO script are just meant to help you identify which commits you&rsquo;re rebasing. (If you want to rewrite any commit messages along the way, you can use the <code>reword</code> command instead.) To clarify that these messages are cosmetic, Git will now prefix them with a <code>#</code> comment character like so: </p><pre>pick c108101daa # foo
pick d2a0730acf # bar
pick e5291f9321 # baz</pre> <p>[<a href="https://github.com/git/git/compare/f9cdaa2860e20f3f36595646b7a82082aa772df8...e42667241de12840ef58c0ba1c060b86c850bae0">source</a>]</p> </li> <li><p>Long-time readers of this series will recall <a href="https://github.blog/open-source/git/highlights-from-git-2-36/">our coverage</a> of Git&rsquo;s <code>bundle</code> <a href="https://git-scm.com/book/en/v2/Git-Tools-Bundling">feature</a> (when Git added support for partial bundles), though we haven&rsquo;t covered Git&rsquo;s <code>bundle-uri</code> <a href="https://git-scm.com/docs/bundle-uri/2.50.0">feature</a>. Git bundles are a way to package your repository&rsquo;s contents (both its objects and the references that point at them) into a single <code>*.bundle</code> file.</p><p>While Git has had support for bundles since as early as <a href="https://github.com/git/git/compare/1db8b60b2a6ef0cc0f7cc7d0783b7cda2ce894ca...64d99e9c5a4a3fb35d803894992764a6e288de5d">v1.5.1</a> (nearly 18 years ago!), its <code>bundle-uri</code> feature is <a href="https://github.com/git/git/compare/83937e9592832408670da38bfe6e96c90ad63521...89c6e450fe4a919ecb6fa698005a935531c732cf">much newer</a>. In short, the <code>bundle-uri</code> feature allows a server to serve part of a clone by first directing the client to download a <code>*.bundle</code> file. After the client does so, it will try to perform a fill-in fetch to gather any missing data advertised by the server but not part of the bundle.</p><p>To speed up this fill-in fetch, your Git client will advertise any references that it picked up from the <code>*.bundle</code> itself. But in previous versions of Git, this could sometimes result in <em>slower</em> clones overall!</p>
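<p>For context, a bundle-assisted clone can also be requested explicitly on the client. A hedged sketch (the URLs are placeholders, and servers can advertise a bundle URI on their own):</p> <pre>
# Fetch most history from a static bundle, then perform a fill-in
# fetch from the origin server:
$ git clone --bundle-uri=https://example.com/repo.bundle \
      https://example.com/repo.git
</pre> <p>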
The slowdown occurred because, up until Git 2.50, Git would only advertise the branches in <code>refs/heads/*</code> when asking the server to send the remaining set of objects.</p><p>Git 2.50 now advertises all references it knows about from the <code>*.bundle</code> when doing the fill-in fetch, making <code>bundle-uri</code>-enabled clones much faster.</p><p>For more details about these changes, you can check out <a href="https://blog.gitbutler.com/going-down-the-rabbit-hole-of-gits-new-bundle-uri/">this blog post</a> from Scott Chacon.</p><p>[<a href="https://github.com/git/git/compare/0b8d22fd4030832fa64933721fa162feaa9c69d9...435b076ceb6e42c2c4c66422c036a02982b36bd4">source</a>]</p></li> <li><p>Last but not least, <code>git add -p</code> (and <code>git add -i</code>) now work much more smoothly in <a href="https://github.blog/open-source/git/bring-your-monorepo-down-to-size-with-sparse-checkout/">sparse checkouts</a> by no longer having to expand the <a href="https://github.blog/open-source/git/make-your-monorepo-feel-small-with-gits-sparse-index/">sparse index</a>. This follows in a long line of work that has been gradually adding sparse-index compatibility to Git commands that interact with the index.</p><p>Now you can interactively stage parts of your changes before committing in a sparse checkout without having to wait for Git to populate the sparsified parts of your repository&rsquo;s index. Give it a whirl on your local sparse checkout today!</p><p>[<a href="https://github.com/git/git/compare/6b6c366e79a1e688526ece01cd1d6a2fa46d0071...ecf9ba20e35ded94d6b1f44f83bb9f7c32162654">source</a>]</p></li> </ul> <hr class="wp-block-separator has-alpha-channel-opacity"> <h3 class="wp-block-heading" id="h-the-rest-of-the-iceberg">The rest of the iceberg</h3> <p>That&rsquo;s just a sample of changes from the latest release. For more, check out the release notes for <a href="https://github.com/git/git/blob/v2.50.0/Documentation/RelNotes/2.50.0.adoc">2.50</a>, or <a href="https://github.com/git/git/tree/v2.50.0/Documentation/RelNotes">any previous version</a> in <a href="https://github.com/git/git">the Git repository</a>.</p> <div class="wp-block-group post-content-cta"> <p><strong>&#127881; Git turned 20 this year!</strong> Celebrate by watching <a href="https://github.blog/open-source/git/git-turns-20-a-qa-with-linus-torvalds/">our interview of Linus Torvalds</a>, where we discuss how it forever changed software development.</p> </div> <p class="has-small-font-size" id="footnote"><sup>1</sup>&nbsp;It&rsquo;s never <a href="https://github.blog/engineering/scaling-gits-garbage-collection/#mitigating-object-deletion-raciness">truly safe</a> to remove an unreachable object from a Git repository that is accepting incoming writes, because marking an object as unreachable can race with incoming reference updates, pushes, etc.
At GitHub, we use Git&rsquo;s <code>--expire-to</code> feature (which we wrote about in our <a href="https://github.blog/open-source/git/highlights-from-git-2-39/">coverage of Git 2.39</a>) in something we call &ldquo;<a href="https://github.blog/engineering/scaling-gits-garbage-collection/#limbo-repositories">limbo repositories</a>&rdquo; to quickly recover objects that shouldn&rsquo;t have been deleted, before deleting them for good. &nbsp;<a href="#return">&#8617;&#65039;</a></p> </body></html> <p>The post <a href="https://github.blog/open-source/git/highlights-from-git-2-50/">Highlights from Git 2.50</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> How the GitHub billing team uses the coding agent in GitHub Copilot to continuously burn down technical debt - The GitHub Blog https://github.blog/?p=88668 2025-06-12T16:00:00.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>One of the beautiful things about software is that it&rsquo;s always evolving. However, each piece carries the weight of past decisions made when it was created. Over time, quick fixes, &ldquo;temporary&rdquo; workarounds, and deadline compromises compound into tech debt. Like financial debt, the longer you wait to address it, the more expensive it becomes.</p> <p>It&rsquo;s challenging to prioritize tech debt fixes when deadlines loom and feature requests keep streaming in. Tech debt work feels like a luxury when you&rsquo;re constantly in reactive mode. Fixing what&rsquo;s broken today takes precedence over preventing something from possibly breaking tomorrow. Occasionally that accumulated tech debt even results in full system rewrites, which are time-consuming and costly, just to achieve parity with existing systems.</p> <p>Common approaches to managing tech debt, like gardening weeks (dedicated sprints for tech debt) and extended feature timelines, don&rsquo;t work well. Gardening weeks treat tech debt as an exception rather than ongoing maintenance, often leaving larger problems unaddressed while teams postpone smaller fixes. Extended timelines create unrealistic estimates that can break trust between engineering and product teams.</p> <p>The fundamental problem is treating tech debt as something that interrupts normal development flow. What if instead you could chip away at tech debt continuously, in parallel with regular work, without disrupting sprint commitments or feature delivery timelines?</p> <h2 class="wp-block-heading" id="h-using-ai-agents-to-routinely-tackle-tech-debt">Using AI agents to routinely tackle tech debt</h2> <p><strong>Managing tech debt is a big opportunity for AI agents like the coding agent in <a href="https://github.com/features/copilot">GitHub Copilot</a>.</strong></p> <p>With AI agents like the <a href="https://github.blog/ai-and-ml/github-copilot/agent-mode-101-all-about-github-copilots-powerful-mode/">coding agent in GitHub Copilot</a>, tech debt items no longer need to go into the backlog to die. While you&rsquo;re focusing on the new features and architectural changes that you need to bring to your evolving codebase, you can assign GitHub Copilot to complete tech debt tasks at the same time.&nbsp;</p> <p>Here are some examples of what the coding agent can do:</p> <ul class="wp-block-list"> <li><strong>Improve code test coverage</strong>: Have limited code testing coverage but know you&rsquo;ll never get the buy-in to spend time writing more tests?
Assign issues to GitHub Copilot to increase test coverage. The agent will take care of it and ping you when the tests are ready to review.</li> <li><strong>Swap out dependencies</strong>: Need to swap out a mocking library for a different one, but know it will be a long process? Assign the issue to swap out the library to GitHub Copilot. It can work through that swap while you&rsquo;re focusing your attention elsewhere.</li> <li><strong>Standardize patterns across codebases</strong>: Are there multiple ways to return and log errors in your codebase, making it hard to investigate issues when they occur and leading to confusion during development? Assign an issue to GitHub Copilot to standardize a single way of returning and logging errors.</li> <li><strong>Optimize frontend loading patterns</strong>: Is there an area where you are making more API calls than your application really needs? Ask GitHub Copilot to change the application to only make those API calls when the data is requested, instead of on every page load.</li> <li><strong>Identify and eliminate dead code</strong>: Is there anywhere in your project where you may have unused functions, outdated endpoints, or stale config hanging out? Ask GitHub Copilot to look for these and suggest ways to safely remove them.</li> </ul> <p>If those examples sound very specific, it&rsquo;s because they are. These are all real changes that my team has tackled using GitHub Copilot coding agent&mdash;and these changes probably wouldn&rsquo;t have occurred without it. <strong>The ability for us to tackle tech debt continuously while delivering features has grown exponentially</strong>, and working AI agents into our workflow has proven to be incredibly valuable. We&rsquo;ve been able to reduce the time it takes to remove tech debt from weeks of intermittent, split focus to a few minutes of writing an issue and a few hours reviewing and iterating on a pull request.</p> <aside data-color-mode="light" data-dark-theme="dark" data-light-theme="light_dimmed" class="wp-block-group post-aside--large p-4 p-md-6 is-style-light-dimmed has-global-padding is-layout-constrained wp-block-group-is-layout-constrained is-style-light-dimmed--1" style="border-top-width:4px"> <h3 class="wp-block-heading h5-mktg gh-aside-title is-typography-preset-h5" id="h-what-s-the-difference-between-agent-mode-and-coding-agent-in-github-copilot" style="margin-top:0">What&rsquo;s the difference between agent mode and coding agent in GitHub Copilot?</h3> <p>While they&rsquo;re both AI agents, they&rsquo;re tuned for different parts of your day-to-day workflows.&nbsp;<a href="https://github.blog/developer-skills/github/less-todo-more-done-the-difference-between-coding-agent-and-agent-mode-in-github-copilot/" target="_blank" rel="noreferrer noopener">See how to use them both</a>&nbsp;to work more efficiently.</p> </aside> <p>This isn&rsquo;t about replacing human engineers; it&rsquo;s about amplifying what we do best. While agents handle the repetitive, time-consuming work of refactoring legacy code, updating dependencies, and standardizing patterns across codebases, we can focus on architecture decisions, feature innovation, and solving complex business problems. 
The result is software that stays healthier over time, teams that ship faster, and engineers who spend their time on work that actually energizes them.</p> <h2 class="wp-block-heading" id="h-when-ai-is-your-copilot-you-still-have-to-do-the-work">When AI is your copilot, you still have to do the work</h2> <p>The more I learn about AI, the more I realize just how critical humans are in the entire process. AI agents excel at well-defined, repetitive tasks, the kind of tech debt work that&rsquo;s important but tedious. But when it comes to larger architectural decisions or complex business logic changes, human judgment is still irreplaceable.</p> <p>Since we are engineers, we know the careful planning and tradeoff considerations that come with our craft. One wrong semicolon, and the whole thing can come crashing down. This is why every prompt requires careful consideration and each change to your codebase requires thorough review.</p> <p>Think of it as working with a brilliant partner that can write clean code all day but needs guidance on what actually matters for your application. The AI agent brings speed and consistency; it never gets tired, never cuts corners because it&rsquo;s Friday afternoon, and can maintain focus across hundreds of changes. But you bring the strategic thinking: knowing which tech debt to tackle first, understanding the business impact of different approaches, and recognizing when a &ldquo;quick fix&rdquo; might create bigger problems down the line.</p> <p>The magic happens in the interaction between human judgment and AI execution. You define the problem, set the constraints, and validate the solution. The agent handles the tedious implementation details that would otherwise consume hours of your time. This partnership lets you operate at a higher level while still maintaining quality and control.</p> <h2 class="wp-block-heading" id="h-tips-to-make-the-most-of-the-coding-agent-in-github-copilot">Tips to make the most of the coding agent in GitHub Copilot</h2> <p>Here&rsquo;s what I&rsquo;ve learned from using the coding agent in GitHub Copilot for the past few months:</p> <ol class="wp-block-list"> <li><strong>Write </strong><a href="https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot"><strong>Copilot Instructions</strong></a><strong> for your repository.</strong> This results in a much better experience. You can even ask your agent to write the instructions for you to get started, which is how I did it! Include things like the scripts that you need to run during development to format and lint (looking at you, <code>go fmt</code>).</li> <li><strong>Work in digestible chunks.</strong> This isn&rsquo;t necessarily because the agent needs to work in small chunks. I learned the hard way that it will make some pretty ambitious, sweeping changes if you don&rsquo;t explicitly state which areas of your codebase you want changed. However, reviewing a 100+-file pull request is not my idea of a good time, so working in digestible chunks generally makes for a better experience for me as the reviewer. 
What this looks like for me: instead of writing a single issue that says &ldquo;Improve test coverage for this application&rdquo;, I create multiple issues for GitHub Copilot, such as &ldquo;improve test coverage for file X&rdquo; or &ldquo;improve test coverage for folder Y&rdquo;, to better scope the changes that I need to review.</li> <li><strong>Master the art of effective prompting.</strong> The quality of what you get from AI agents depends heavily on how well you communicate your requirements. Be specific about the context, constraints, and coding standards you want the agent to follow.</li> <li><strong>Always review the code thoroughly.</strong> While AI agents can handle repetitive tasks well, they don&rsquo;t understand business logic like you do. Making code review a central part of your workflow ensures quality while still benefiting from the automation. This is one of the reasons why I love the GitHub Copilot coding agent. It uses the same code review tools that I use every day to review code from my colleagues, making it incredibly easy to fit into my workflow.</li> </ol> <h2 class="wp-block-heading" id="h-the-future-belongs-to-software-engineers-who-embrace-ai-tools">The future belongs to software engineers who embrace AI tools</h2> <p>We&rsquo;re at a pivotal moment in software engineering. For too long, tech debt has been the silent productivity killer. It&rsquo;s the thing we all know needs attention but rarely gets prioritized until it becomes a crisis. AI coding agents are giving us the opportunity to change that equation entirely.</p> <p>The engineers who learn to effectively collaborate with AI agents&mdash;the ones who master the art of clear prompting, thoughtful code review, and strategic task delegation&mdash;will have a massive advantage. They&rsquo;ll be able to maintain codebases that their peers struggle with, tackle tech debt that others avoid, and potentially eliminate the need for those expensive, time-consuming rewrites that have plagued our industry for decades.</p> <p>But this transformation requires intentional effort. You need to experiment with these tools, learn their strengths and limitations, and integrate them into your workflow. The technology is ready; the question is whether you&rsquo;ll take advantage of it.</p> <p>If you haven&rsquo;t started exploring how AI agents can help with your tech debt, now is the perfect time to begin. Your future self, who is more productive, less frustrated, and focused on the creative aspects of engineering, will thank you.
More importantly, so will your users, who&rsquo;ll benefit from a more stable, well-maintained application that continues to evolve instead of eventually requiring significant downtime for a complete rebuild.</p> <div class="wp-block-group post-content-cta has-global-padding is-layout-constrained wp-block-group-is-layout-constrained"> <p>Assign your tech debt to <a href="https://docs.github.com/en/copilot/using-github-copilot/coding-agent/enabling-copilot-coding-agent">GitHub Copilot coding agent</a> in your repositories today!</p> </div> </body></html> <p>The post <a href="https://github.blog/ai-and-ml/github-copilot/how-the-github-billing-team-uses-the-coding-agent-in-github-copilot-to-continuously-burn-down-technical-debt/">How the GitHub billing team uses the coding agent in GitHub Copilot to continuously burn down technical debt</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix - Netflix TechBlog - Medium https://medium.com/p/6a6aee261d8d 2025-06-12T14:56:32.000Z <p>By <a href="https://www.linkedin.com/in/ahutter/">Alex Hutter</a>, <a href="https://www.linkedin.com/in/bertails/">Alexandre Bertails</a>, <a href="https://www.linkedin.com/in/clairezwang0612/">Claire Wang</a>, <a href="https://www.linkedin.com/in/haoyuan-h-98b587134/">Haoyuan He</a>, <a href="https://www.linkedin.com/in/kishore-banala/">Kishore Banala</a>, <a href="https://www.linkedin.com/in/peterroyal/">Peter Royal</a>, <a href="https://www.linkedin.com/in/shervinafshar/">Shervin Afshar</a></p><p>As Netflix’s offerings grow — across films, series, games, live events, and ads — so does the complexity of the systems that support them. Core business concepts like ‘actor’ or ‘movie’ are modeled in many places: in our Enterprise GraphQL Gateway powering internal apps, in our asset management platform storing media assets, in our media computing platform that powers encoding pipelines, to name a few. Each system models these concepts differently and in isolation, with little coordination or shared understanding. While they often operate on the same concepts, these systems remain largely unaware of that fact, and of each other.</p><figure><img alt="Spider-Man Pointing meme with each Spider-Man labelled as: “it’s a movie”, “it’s a tv show”, “it’s a game”." src="https://cdn-images-1.medium.com/max/1024/0*wNYAhebbErEdYROL" /></figure><p>As a result, several challenges emerge:</p><ul><li><strong>Duplicated and Inconsistent Models</strong> — Teams re-model the same business entities in different systems, leading to conflicting definitions that are hard to reconcile.</li><li><strong>Inconsistent Terminology</strong> — Even within a single system, teams may use different terms for the same concept, or the same term for different concepts, making collaboration harder.</li><li><strong>Data Quality Issues</strong> — Discrepancies and broken references are hard to detect across our many microservices. While identifiers and foreign keys exist, they are inconsistently modeled and poorly documented, requiring manual work from domain experts to find and fix any data issues.</li><li><strong>Limited Connectivity</strong> — Within systems, relationships between data are constrained by what each system supports. Across systems, they are effectively non-existent.</li></ul><p>To address these challenges, we need new foundations that allow us to define a model once, at the conceptual level, and reuse those definitions everywhere.
But it isn’t enough to just document concepts; we need to connect them to real systems and data. And more than just connect, we have to project those definitions outward, generating schemas and enforcing consistency across systems. The conceptual model must become part of the control plane.</p><p>These were the core ideas that led us to build UDA.</p><h3>Introducing UDA</h3><p><strong>UDA (Unified Data Architecture)</strong> is the foundation for connected data in <a href="https://netflixtechblog.com/netflix-studio-engineering-overview-ed60afcfa0ce">Content Engineering</a>. It enables teams to model domains once and represent them consistently across systems — powering automation, discoverability, and <a href="https://en.wikipedia.org/wiki/Semantic_interoperability">semantic interoperability</a>.</p><p><strong>Using UDA, users and systems can:</strong></p><p><strong>Register and connect domain models </strong>— formal conceptualizations of federated business domains expressed as data.</p><ul><li><strong>Why? </strong>So everyone uses the same official definitions for business concepts, which avoids confusion and stops different teams from rebuilding similar models in conflicting ways.</li></ul><p><strong>Catalog and map domain models to data containers</strong>, such as GraphQL type resolvers served by a <a href="https://netflixtechblog.com/open-sourcing-the-netflix-domain-graph-service-framework-graphql-for-spring-boot-92b9dcecda18">Domain Graph Service</a>, <a href="https://netflixtechblog.com/data-mesh-a-data-movement-and-processing-platform-netflix-1288bcab2873">Data Mesh sources</a>, or Iceberg tables, through their representation as a graph.</p><ul><li><strong>Why?</strong> To make it easy to find where the actual data for these business concepts lives (e.g., in which specific database, table, or service) and understand how it’s structured there.</li></ul><p><strong>Transpile domain models into schema definition languages</strong> like GraphQL, Avro, SQL, RDF, and Java, while preserving semantics.</p><ul><li><strong>Why? </strong>To automatically create consistent technical data structures (schemas) for various systems directly from the domain models, saving developers manual effort and reducing errors caused by out-of-sync definitions.</li></ul><p><strong>Move data faithfully between data containers</strong>, such as from federated GraphQL entities to <a href="https://netflixtechblog.com/data-mesh-a-data-movement-and-processing-platform-netflix-1288bcab2873">Data Mesh</a> (a general purpose data movement and processing platform for moving data between Netflix systems at scale), Change Data Capture (CDC) sources to joinable Iceberg Data Products.</p><ul><li><strong>Why? </strong>To save developer time by automatically handling how data is moved and correctly transformed between different systems. This means less manual work to configure data movement, ensuring data shows up consistently and accurately wherever it’s needed.</li></ul><p><strong>Discover and explore domain concepts </strong>via search and graph traversal.</p><ul><li><strong>Why? 
</strong>So anyone can more easily find the specific business information they’re looking for, understand how different concepts and data are related, and be confident they are accessing the correct information.</li></ul><p><strong>Programmatically introspect the knowledge graph</strong> using Java, GraphQL, or SPARQL.</p><ul><li><strong>Why?</strong> So developers can build smarter applications that leverage this connected business information, automate more complex data-dependent workflows, and help uncover new insights from the relationships in the data.</li></ul><p><strong>This post introduces the foundations of UDA</strong> as a knowledge graph, connecting domain models to data containers through mappings, and grounded in an in-house <a href="https://en.wikipedia.org/wiki/Metamodeling#:~:text=A%20metamodel%2F%20surrogate%20model%20is,representing%20input%20and%20output%20relations">metamodel</a>, or model of models, called Upper. Upper defines the language for domain modeling in UDA and enables projections that automatically generate schemas and pipelines across systems.</p><figure><img alt="Image of the UDA knowledge graph. A central node representing a domain model is connected to other nodes representing Data Mesh, GraphQL, and Iceberg data containers." src="https://cdn-images-1.medium.com/max/1024/1*j1I2cLD0vtfE9IQfNiUwVQ.png" /><figcaption>The same domain model can be connected to semantically equivalent data containers in the UDA knowledge graph.</figcaption></figure><p><strong>This post also highlights two systems</strong> that leverage UDA in production:</p><p><strong>Primary Data Management (PDM)</strong> is our platform for managing authoritative reference data and taxonomies. PDM turns domain models into flat or hierarchical taxonomies that drive a generated UI for business users. These taxonomy models are projected into Avro and GraphQL schemas, automatically provisioning data products in the Warehouse and GraphQL APIs in the <a href="https://netflixtechblog.com/how-netflix-scales-its-api-with-graphql-federation-part-1-ae3557c187e2">Enterprise Gateway</a>.</p><p><strong>Sphere</strong> is our self-service operational reporting tool for business users. Sphere uses UDA to catalog and relate business concepts across systems, enabling discovery through familiar terms like ‘actor’ or ‘movie.’ Once concepts are selected, Sphere walks the knowledge graph and generates SQL queries to retrieve data from the warehouse, no manual joins or technical mediation required.</p><h4>UDA is a Knowledge Graph</h4><p><strong>UDA needs to solve the </strong><a href="https://en.wikipedia.org/wiki/Data_integration"><strong>data integration</strong></a><strong> problem. </strong>We needed a data catalog unified with a schema registry, but with a hard requirement for <a href="https://en.wikipedia.org/wiki/Semantic_integration#:~:text=Semantic%20integration%20is%20the%20process,from%20diverse%20sources">semantic integration</a>. Connecting business concepts to schemas and data containers in a graph-like structure, grounded in strong semantic foundations, naturally led us to consider a <a href="https://en.wikipedia.org/wiki/Knowledge_graph">knowledge graph</a> approach.</p><p><strong>We chose RDF and SHACL as the foundation for UDA’s knowledge graph</strong>. 
But operationalizing them at enterprise scale surfaced several challenges:</p><ul><li><strong>RDF lacked a usable information model.</strong> While RDF offers a flexible graph structure, it provides little guidance on how to organize data into <a href="https://www.w3.org/TR/rdf12-concepts/#dfn-named-graph">named graphs</a>, manage ontology ownership, or define governance boundaries. Standard <a href="https://www.w3.org/2001/sw/wiki/Linking_patterns">follow-your-nose mechanisms</a> like owl:imports apply only to ontologies and don’t extend to named graphs; we needed a generalized mechanism to express and resolve dependencies between them.</li><li><strong>SHACL is not a modeling language for enterprise data.</strong> Designed to validate native RDF, SHACL assumes globally unique URIs and a single data graph. But enterprise data is structured around local schemas and typed keys, as in GraphQL, Avro, or SQL. SHACL could not express these patterns, making it difficult to model and validate real-world data across heterogeneous systems.</li><li><strong>Teams lacked shared authoring practices.</strong> Without strong guidelines, teams modeled their ontologies inconsistently, breaking semantic interoperability. Even subtle differences in style, structure, or naming led to divergent interpretations and made transpilation harder to define consistently across schemas.</li><li><strong>Ontology tooling lacked support for collaborative modeling.</strong> Unlike GraphQL Federation, ontology frameworks had no built-in support for modular contributions, team ownership, or safe federation. Most engineers found the tools and concepts unfamiliar, and available authoring environments lacked the structure needed for coordinated contributions.</li></ul><p><strong>To address these challenges, UDA adopts a named-graph-first information model.</strong> Each named graph conforms to a governing model, itself a named graph in the knowledge graph. This systematic approach ensures resolution and modularity, and enables governance across the entire graph. While a full description of UDA’s information infrastructure is beyond the scope of this post, the next sections explain how UDA bootstraps the knowledge graph with its metamodel and uses it to model data container representations and mappings.</p><h4>Upper is Domain Modeling</h4><p><strong>Upper is a language for formally describing domains — business or system — and their concepts</strong>. <a href="https://en.wikipedia.org/wiki/Conceptualization_(information_science)">These concepts are organized into domain models</a>: controlled vocabularies that define classes of keyed entities, their attributes, and their relationships to other entities, which may be keyed or nested, within the same domain or across domains. Keyed concepts within a domain model can be organized in taxonomies of types, which can be as complex as the business or the data system needs them to be. Keyed concepts can also be extended from other domain models — that is, new attributes and relationships can be <a href="https://tomgruber.org/writing/onto-design.pdf#page=4">contributed monotonically</a>. Finally, Upper ships with a rich set of datatypes for attribute values, which can also be customized per domain.</p><figure><img alt="Visualization of the UDA graph representation of a One Piece character. The Character node in the graph is connected to a Devil Fruit node. The Devil Fruit node is connected to a Devil Fruit Type node."
src="https://cdn-images-1.medium.com/max/1024/0*A_-GpZLvqbxuVdkH" /><figcaption><em>The graph representation of the onepiece: domain model from our UI. Depicted here you can see how Characters are related to Devil Fruit, and that each Devil Fruit has a type.</em></figcaption></figure><p><strong>Upper domain models are data</strong>. They are expressed as <a href="https://www.w3.org/TR/rdf12-concepts/">conceptual RDF</a> and organized into named graphs, making them introspectable, queryable, and versionable within the UDA knowledge graph. This graph unifies not just the domain models themselves, but also the schemas they transpile to — GraphQL, Avro, Iceberg, Java — and the mappings that connect domain concepts to concrete data containers, such as GraphQL type resolvers served by a <a href="https://netflixtechblog.com/open-sourcing-the-netflix-domain-graph-service-framework-graphql-for-spring-boot-92b9dcecda18">Domain Graph Service</a>, <a href="https://netflixtechblog.com/data-mesh-a-data-movement-and-processing-platform-netflix-1288bcab2873">Data Mesh sources</a>, or Iceberg tables, through their representations. Upper raises the level of abstraction above traditional ontology languages: it defines a strict subset of <a href="https://www.w3.org/2001/sw/wiki/Main_Page">semantic technologies</a> from the W3C tailored and generalized for domain modeling. It builds on ontology frameworks like RDFS, OWL, and SHACL so domain authors can model effectively without even needing to learn what an ontology is.</p><figure><img alt="Screenshot of UDA UI showing domain model for One Piece serialized as Turtle." src="https://cdn-images-1.medium.com/max/1024/1*SGMUpJucEWhdlZsd4blz3A.png" /><figcaption>UDA domain model for One Piece. <a href="https://github.com/Netflix-Skunkworks/uda/blob/9627a97fcd972a41ec910be3f928ea7692d38714/uda-intro-blog/onepiece.ttl">Link to full definition</a>.</figcaption></figure><p><strong>Upper is the metamodel for Connected Data in UDA — the model for all models</strong>. It is designed as a bootstrapping <a href="https://en.wikipedia.org/wiki/Upper_ontology">upper ontology</a>, which means that Upper is <em>self-referencing</em>, because it models itself as a domain model; <em>self-describing</em>, because it defines the very concept of a domain model; and <em>self-validating</em>, because it conforms to its own model. This approach enables UDA to bootstrap its own infrastructure: Upper itself is projected into a generated Jena-based Java API and GraphQL schema used in GraphQL service federated into Netflix’s Enterprise GraphQL gateway. These same generated APIs are then used by the projections and the UI. Because all domain models are <a href="https://en.wikipedia.org/wiki/Conservative_extension">conservative extensions</a> of Upper, other system domain models — including those for GraphQL, Avro, Data Mesh, and Mappings — integrate seamlessly into the same runtime, enabling consistent data semantics and interoperability across schemas.</p><figure><img alt="Screenshot of an IDE. It shows Java code using the generated API from the Upper metamodel to traverse and print terms from a domain domain in the top while the bottom contains the output of an execution." src="https://cdn-images-1.medium.com/max/1024/0*5tJcW2A6lLrNi257" /><figcaption>Traversing a domain model programmatically using the Java API generated from the Upper metamodel.</figcaption></figure><h4>Data Container Representations</h4><p><strong>Data containers are repositories of information. 
</strong>They contain instance data that conform to their own schema languages or type systems: federated entities from GraphQL services, Avro records from Data Mesh sources, rows from Iceberg tables, or objects from Java APIs. Each container operates within the context of a system that imposes its own structural and operational constraints.</p><figure><img alt="Screenshot of a UI showing details for a Data Mesh Source containing One Piece Characters." src="https://cdn-images-1.medium.com/max/1024/1*qUzAb6-TC2HL8qAWAW1Xlw.png" /><figcaption>A Data Mesh source is a data container.</figcaption></figure><p><strong>Data container </strong><a href="https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning"><strong>representations</strong></a><strong> are data.</strong> They are faithful interpretations of the members of data systems as graph data. UDA captures the definition of these systems as their own domain models, the system domains. These models encode both the information architecture of the systems and the schemas of the data containers within. They provide a blueprint for translating the systems into graph representations.</p><figure><img alt="Screenshot of an IDE showing two files open side by side. On the left is a system domain model for Data Mesh. On the right is a representation of a Data Mesh source containing One Piece Character data." src="https://cdn-images-1.medium.com/max/1024/0*6QzelmSRrIj1G881" /><figcaption><em>A data container schema shown side by side with its graph representation. </em><a href="https://github.com/Netflix-Skunkworks/uda/blob/9627a97fcd972a41ec910be3f928ea7692d38714/uda-intro-blog/onepiece_character_data_container.ttl"><em>Link to full data container representation</em></a><em>.</em></figcaption></figure><p><strong>UDA catalogs the data container representations into the knowledge graph.</strong> It records the coordinates and metadata of the underlying data assets, but unlike a traditional catalog, it only tracks assets that are semantically connected to domain models. This enables users and systems to connect concepts from domain models to the concrete locations where corresponding instance data can be accessed. Those connections are called <em>Mappings</em>.</p><h4>Mappings</h4><p><strong>Mappings are data that connect domain models to data containers.</strong> Every element in a domain model is addressable, from the domain model itself down to specific attributes and relationships. Likewise, data container representations make all components addressable, from an Iceberg table to an individual column, or from a GraphQL type to a specific field. A Mapping connects nodes in a subgraph of the domain model to nodes in a subgraph of a container representation. Visually, the Mapping is the set of arcs that link those two graphs together.</p><figure><img alt="Screenshot of UDA UI showing a mapping between a concept in UDA and a Data Mesh Source." src="https://cdn-images-1.medium.com/max/1024/1*it3X5Vu8plWX5QvN_AJkgw.png" /><figcaption><em>A mapping between a domain model and a Data Mesh Source from the UDA UI. 
</em><a href="https://github.com/Netflix-Skunkworks/uda/blob/9627a97fcd972a41ec910be3f928ea7692d38714/uda-intro-blog/onepiece_character_mappings.ttl"><em>Link to full mapping</em></a><em>.</em></figcaption></figure><p><strong>Mappings enable discovery.</strong> Starting from a domain concept, users and systems can walk the knowledge graph to find where that concept is materialized — in which data system, in which container, and even how a specific attribute or relationship is physically accessed. The inverse is also supported: given a data container, one can trace back to the domain concepts it participates in.</p><p><strong>Mappings shape UDA’s approach to semantic data integration.</strong> Most existing schema languages are not expressive enough in capturing richer semantics of a domain to address requirements for data integration (<a href="https://doi.org/10.1007/978-3-319-49340-4_8">for example</a>, “accessibility of data, providing semantic context to support its interpretation, and establishing meaningful links between data”). A trivial example of this could be seen in the lack of built-in facilities in Avro to represent foreign keys, making it very hard to express how entities relate across Data Mesh sources. Mappings, together with the corresponding system domain models, allow for such relationships, and many other constraints, to be defined in the domain models and used programmatically in actual data systems.</p><p><strong>Mappings enable intent-based automation.</strong> Data is not always available in the systems where consumers need it. Because Mappings encode both meaning and location, UDA can reason about how data should move, preserving semantics, without requiring the consumer to specify how it should be done. Beyond the cataloging use case, connecting to existing containers, UDA automatically derives <em>canonical Mappings</em> from registered domain models as part of the projection process.</p><h4>Projections</h4><p><strong>A projection produces a concrete data container.</strong> These containers, such as a GraphQL schema or a Data Mesh source, implement the characteristics derived from a registered domain model. Each projection is a concrete realization of Upper’s denotational semantics, ensuring <a href="https://en.wikipedia.org/wiki/Semantic_interoperability">semantic interoperability</a> across all containers projected from the same domain model.</p><p><strong>Projections produce consistent public contracts across systems.</strong> The data containers generated by projections encode data contracts in the form of schemas, derived by transpiling a domain model into the target container’s schema language. UDA currently supports transpilation to GraphQL and Avro schemas.</p><p>The GraphQL transpilation produces a schema that adheres to the <a href="https://spec.graphql.org/October2021/#sec-Overview">official GraphQL spec</a> with the ability to generate all GraphQL types defined in the spec. Given that the UDA domain model can be federated, it also supports generating federated graphQL schemas. Below is an example of a transpiled GraphQL schema.</p><figure><img alt="Screenshot of an IDE showing two files open side by side. On the left is the definition of a Character in UDA. On the right is transpiled GraphQL schema." src="https://cdn-images-1.medium.com/max/1024/0*NPXB3ujnUGSIklei" /><figcaption><em>Domain model on the left, with transpiled GraphQL schema on the right. 
</em><a href="https://github.com/Netflix-Skunkworks/uda/blob/9627a97fcd972a41ec910be3f928ea7692d38714/uda-intro-blog/onepiece.graphqls"><em>Link to full transpiled GraphQL schema</em></a><em>.</em></figcaption></figure><p>The Avro transpilation produces a schema that is a Data Mesh flavor of Avro, which includes some customization on top of the <a href="https://avro.apache.org/docs/1.12.0/specification/">official Avro spec</a>. This schema is used to automatically create a Data Mesh source container. Below is an example of a transpiled Avro schema.</p><figure><img alt="Screenshot of an IDE showing two files open side by side. On the left is the definition of a Devil Fruit in UDA. On the right is transpiled Avro schema." src="https://cdn-images-1.medium.com/max/1024/0*uVInkj5S3PYTqNA-" /><figcaption><em>Domain model on the left, with transpiled Avro schema on the right. </em><a href="https://github.com/Netflix-Skunkworks/uda/blob/9627a97fcd972a41ec910be3f928ea7692d38714/uda-intro-blog/onepiece.avro"><em>Link to full transpiled Avro schema</em></a><em>.</em></figcaption></figure><p><strong>Projections can automatically populate data containers. </strong>Some projections, such as those to GraphQL schemas or Data Mesh sources produce empty containers that require developers to populate the data. This might be creating GraphQL APIs or pushing events onto Data Mesh sources. Conversely, other containers, like Iceberg Tables, are automatically created and populated by UDA. For Iceberg Tables, UDA leverages the Data Mesh platform to automatically create data streams to move data into tables. This process utilizes much of the same infrastructure detailed in this blog post <a href="https://netflixtechblog.com/data-movement-in-netflix-studio-via-data-mesh-3fddcceb1059">here</a>.</p><p><strong>Projections have mappings. </strong>UDA automatically generates and manages mappings between the newly created data containers and the projected domain model.</p><h3>Early Adopters</h3><h4>Controlled Vocabularies (PDM)</h4><p>The full range of Netflix’s business activities relies on a sprawling data model that captures the details of our many business processes. Teams need to be able to coordinate operational activities to ensure that content production is complete, advertising campaigns are in place, and promotional assets are ready to deploy. We implicitly depend upon a singular definition of shared concepts, such as content production is complete. Multiple definitions create coordination challenges. Software (and humans) don’t know that the definitions mean the same thing.</p><p>We started the Primary Data Management (PDM) initiative to create unified and consistent definitions for the core concepts in our data model. These definitions form <strong>controlled vocabularies</strong>, standardized and governed lists for what values are permitted within certain fields in our data model.</p><p><strong>Primary Data Management (PDM) is a single place where business users can manage controlled vocabularies. </strong>Our data model governance has been scattered across different tools and teams creating coordination challenges. This is an information management problem relating to the definition, maintenance and consistent use of reference data and taxonomies. 
<p><strong>Projections can automatically populate data containers.</strong> Some projections, such as those to GraphQL schemas or Data Mesh sources, produce empty containers that developers must populate themselves, for example by implementing GraphQL APIs or pushing events onto Data Mesh sources. Conversely, other containers, like Iceberg tables, are automatically created and populated by UDA. For Iceberg tables, UDA leverages the Data Mesh platform to automatically create data streams that move data into tables. This process uses much of the same infrastructure detailed in <a href="https://netflixtechblog.com/data-movement-in-netflix-studio-via-data-mesh-3fddcceb1059">this blog post</a>.</p><p><strong>Projections have Mappings.</strong> UDA automatically generates and manages Mappings between the newly created data containers and the projected domain model.</p><h3>Early Adopters</h3><h4>Controlled Vocabularies (PDM)</h4><p>The full range of Netflix’s business activities relies on a sprawling data model that captures the details of our many business processes. Teams need to be able to coordinate operational activities to ensure that content production is complete, advertising campaigns are in place, and promotional assets are ready to deploy. We implicitly depend upon a singular definition of shared concepts, such as “content production is complete.” Multiple definitions create coordination challenges: software (and humans) cannot tell that the definitions mean the same thing.</p><p>We started the Primary Data Management (PDM) initiative to create unified and consistent definitions for the core concepts in our data model. These definitions form <strong>controlled vocabularies</strong>: standardized, governed lists of the values permitted within certain fields in our data model.</p><p><strong>Primary Data Management (PDM) is a single place where business users can manage controlled vocabularies.</strong> Our data model governance has been scattered across different tools and teams, creating coordination challenges. This is an information management problem relating to the definition, maintenance, and consistent use of reference data and taxonomies. The problem is not unique to Netflix, so we looked outward for existing solutions.</p><figure><img alt="Screenshot of PDM UI" src="https://cdn-images-1.medium.com/max/1024/0*GJMad4GU29YxPONf" /><figcaption>Managing the taxonomy of One Piece characters in PDM.</figcaption></figure><p><strong>PDM uses the Simple Knowledge Organization System (</strong><a href="https://www.w3.org/TR/skos-primer"><strong>SKOS</strong></a><strong>) model</strong>. It is a W3C data standard designed for modeling knowledge. Its terminology is abstract, with Concepts that can be organized into ConceptSchemes and properties that describe various types of relationships. Every system is hardcoded against <em>something</em>; that’s how software knows how to manipulate data. We want a system that can take a data model as its input, but we still need <em>something</em> concrete to build the software against. This is what SKOS provides: a generic basis for modeling knowledge that our system can understand.</p><p><strong>PDM uses Domain Models to integrate SKOS into the rest of Content Engineering’s ecosystem.</strong> A core premise of the system is that it takes a domain model as input, and everything that <em>can</em> be derived <em>is</em> derived from that model. PDM builds a user interface based upon the model definition and leverages UDA to project the model into type-safe interfaces for other systems to use. The system will provision a Domain Graph Service (DGS) within our federated GraphQL API environment using a GraphQL schema that UDA projects from the domain model. UDA is also used to provision data movement pipelines, which feed our <a href="https://netflixtechblog.com/how-netflix-content-engineering-makes-a-federated-graph-searchable-5c0c1c7d7eaf">GraphSearch</a> infrastructure and move data into the warehouse. The data movement systems use Avro schemas, and UDA creates a projection from the domain model to Avro.</p><p><strong>Consumers of controlled vocabularies never know they’re using SKOS.</strong> Domain models use terms that fit the domain. SKOS’s generic <em>broader</em> and <em>narrower</em> properties, which define a hierarchy, are hidden from consumers as super-properties within the model. This allows consumers to work with language that is familiar to them while enabling PDM to work with any model. The best of both worlds. A sketch of a small SKOS vocabulary follows.</p>
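<p>To ground the SKOS terminology, here is a minimal sketch of a controlled vocabulary built with Jena’s bundled SKOS vocabulary: one ConceptScheme, two Concepts, and a hierarchy expressed with skos:broader. The namespace and concept names are invented for illustration.</p><pre><code>import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.SKOS;

public class ToyControlledVocabulary {
    public static void main(String[] args) {
        Model vocab = ModelFactory.createDefaultModel();
        String ns = "https://example.netflix.com/vocab/onepiece/"; // hypothetical

        // A scheme holding the permitted values for one field.
        Resource scheme = vocab.createResource(ns + "characterTypes")
                .addProperty(RDF.type, SKOS.ConceptScheme);

        // Two concepts, with the hierarchy expressed via skos:broader.
        Resource pirate = vocab.createResource(ns + "Pirate")
                .addProperty(RDF.type, SKOS.Concept)
                .addProperty(SKOS.prefLabel, "Pirate")
                .addProperty(SKOS.inScheme, scheme);
        vocab.createResource(ns + "PirateCaptain")
                .addProperty(RDF.type, SKOS.Concept)
                .addProperty(SKOS.prefLabel, "Pirate Captain")
                .addProperty(SKOS.inScheme, scheme)
                .addProperty(SKOS.broader, pirate);

        vocab.write(System.out, "TTL");
    }
}</code></pre>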
<h4>Operational Reporting (Sphere)</h4><p><strong>Operational reporting serves the detailed day-to-day activities and processes of a business domain.</strong> It is a reporting paradigm specialized in high-resolution, low-latency data sets.</p><p><strong>Operational reporting systems should generate reports without relying on technical intermediaries.</strong> These systems must address the persistent challenge of empowering business users to explore and obtain the data they need, when they need it. Without such self-service systems, requests for new reports or data extracts often result in back-and-forth exchanges, where the initial query may not exactly meet business users’ expectations, requiring further clarification and refinement.</p><p><strong>Data discovery and query generation are two relevant aspects of data integration.</strong> Supplying end users with an accurate, contextual, and user-friendly data discovery experience provides the basis for a query generation mechanism that produces syntactically correct and semantically reliable queries.</p><p><strong>Operational reports are predominantly run on data hydrated from GraphQL services into the Data Warehouse.</strong> You can read about our journey from conventional data movement to streaming data pipelines based on CDC and GraphQL hydration in <a href="https://netflixtechblog.com/data-movement-in-netflix-studio-via-data-mesh-3fddcceb1059">this blog post</a>. One challenging byproduct of this approach is that a single, distinct data concept is now present in two places (GraphQL and the data warehouse), with some disparity in the semantic context available to guide and support the interpretation and connectivity of that data. To address this, we formulated a mechanism that uses the syntax and semantics captured in the federated schema from <a href="https://netflixtechblog.com/how-netflix-scales-its-api-with-graphql-federation-part-1-ae3557c187e2">Netflix’s Enterprise GraphQL</a> to populate <em>representational domain models</em> in UDA, preserving those details and adding more.</p><p><strong>Domain models enable the data discovery experience.</strong> Metadata aggregated from various data-producing systems is captured in UDA domain models using a unified vocabulary. This metadata is surfaced for users’ search and discovery needs; instead of specifying exact tables and join keys, users can simply search for familiar business concepts such as ‘actors’ or ‘movies’. We use UDA models to disambiguate and resolve the intended concepts and their related data entities.</p><p><strong>The UDA knowledge graph is the data landscape for query generation.</strong> Once concepts are discovered and their mappings to the corresponding data containers are located in the knowledge graph, we use them to establish join strategies (see the sketch below). Through graph traversal, we identify <em>boundaries</em> and <em>islands</em> within the data landscape. This ensures that only feasible, joinable combinations are selected, while semantically incorrect and non-executable query candidates are weeded out.</p><figure><img alt="Screenshot of Sphere’s UI" src="https://cdn-images-1.medium.com/max/1024/0*EFEfzwY-3Tb6521O" /><figcaption>Generating a report in Sphere.</figcaption></figure><p><strong>Sphere is a UDA-powered self-service operational reporting system.</strong> The knowledge-graph-based solution described above is called Sphere. Seeing self-service operational reporting through this lens improves business users’ agency in accessing operational data: they are empowered to explore, assemble, and refine reports at the conceptual level, while technical complexities are managed by the system.</p>
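<p>As a rough sketch of the first step, locating the containers a concept is mapped to, the SPARQL query below walks the hypothetical uda:mapsTo arcs introduced earlier. The UDA knowledge graph is served by internal services rather than a local file, and all IRIs are invented for illustration.</p><pre><code>import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class FindJoinableContainers {
    public static void main(String[] args) {
        // Local stand-in for the UDA knowledge graph; in reality this data
        // lives behind UDA's services, not in a file on disk.
        Model kg = ModelFactory.createDefaultModel();
        kg.read("knowledge-graph.ttl", "TTL");

        // Find every container a given business concept is materialized in,
        // via the hypothetical uda:mapsTo predicate from the earlier sketch.
        String sparql =
            "PREFIX uda: <https://example.netflix.com/uda/> " +
            "SELECT ?container WHERE { " +
            "  <https://example.netflix.com/onepiece/Character> uda:mapsTo ?container . " +
            "}";
        try (QueryExecution qe = QueryExecutionFactory.create(sparql, kg)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("container"));
            }
        }
    }
}</code></pre>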
<h3>Stay Tuned</h3><p>UDA marks a fundamental shift in how we approach data modeling within Content Engineering. By providing a unified knowledge graph composed of what we know about our various data systems and the business concepts within them, we’ve made information more consistent, connected, and discoverable across our organization. We’re excited about future applications of these ideas, such as:</p><ul><li>Supporting additional projections like Protobuf/gRPC</li><li>Materializing the knowledge graph of instance data for querying, profiling, and management</li><li>Finally solving some of the initial <a href="https://netflixtechblog.com/how-netflix-content-engineering-makes-a-federated-graph-searchable-5c0c1c7d7eaf">challenges</a> posed by Graph Search (which actually inspired some of this work)</li></ul><p>If you’re interested in this space, we’d love to connect — whether you’re exploring new roles down the road or just want to swap ideas.</p><p>Expect to see future blog posts exploring PDM and Sphere in more detail soon!</p><h4>Credits</h4><p>Thanks to <a href="https://www.linkedin.com/in/andreaslegenbauer/">Andreas Legenbauer</a>, <a href="https://www.linkedin.com/in/bernardo-g-4414b41/">Bernardo Gomez Palacio Valdes</a>, <a href="https://www.linkedin.com/in/czhao/">Charles Zhao</a>, <a href="https://www.linkedin.com/in/christopherchonguw/">Christopher Chong</a>, <a href="https://www.linkedin.com/in/deepa-krishnan-593b60/">Deepa Krishnan</a>, <a href="https://www.linkedin.com/in/gpesma/">George Pesmazoglou</a>, <a href="https://www.linkedin.com/in/jsilvax/">Jessica Silva</a>, <a href="https://www.linkedin.com/in/katherine-anderson-77074159/">Katherine Anderson</a>, <a href="https://www.linkedin.com/in/malikday/">Malik Day</a>, <a href="https://www.linkedin.com/in/ritabogdanovashapkina/">Rita Bogdanova</a>, <a href="https://www.linkedin.com/in/ruoyunzheng/">Ruoyun Zheng</a>, <a href="https://www.linkedin.com/in/shawn-s-b80821b0/">Shawn Stedman</a>, <a href="https://www.linkedin.com/in/suchitagoyal/">Suchita Goyal</a>, <a href="http://www.linkedin.com/in/utkarshshrivastava/">Utkarsh Shrivastava</a>, <a href="https://www.linkedin.com/in/yoomikoh/">Yoomi Koh</a>, <a href="https://www.linkedin.com/in/yuliashmeleva/">Yulia Shmeleva</a></p><hr><p><a href="https://netflixtechblog.com/uda-unified-data-architecture-6a6aee261d8d">Model Once, Represent Everywhere: UDA (Unified Data Architecture) at Netflix</a> was originally published in <a href="https://netflixtechblog.com">Netflix TechBlog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> GitHub Availability Report: May 2025 - The GitHub Blog https://github.blog/?p=88665 2025-06-11T23:24:41.000Z <p>In May, we experienced three incidents that resulted in degraded performance across GitHub services.</p> <p><strong>May 1 22:09 UTC (lasting 1 hour and 4 minutes)</strong></p> <p>On May 1, 2025, from 22:09 UTC to 23:13 UTC, the Issues service was degraded and users weren&#8217;t able to upload attachments. The root cause was a new feature that added a custom header to all client-side HTTP requests, causing CORS errors when uploading attachments to our provider. We estimate that ~130k users were impacted by the incident for ~45 minutes.</p> <p>We mitigated the incident by rolling back the feature flag that added the new header at 22:56 UTC. To prevent a recurrence, we are adding new metrics to monitor and ensure the safe rollout of changes to client-side requests.
We have since deployed an augmented version of the feature, based on learnings from this incident, which is performing well in production.</p> <p><strong>May 28 09:45 UTC (lasting 5 hours)</strong></p> <p>On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu-24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments, reducing overall capacity in the impacted hosted runner pools. Approximately 19.7% of Ubuntu-24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected.</p> <p>By 12:45 UTC, the configuration issue was fixed through updates to the backend cache. The pools were also scaled up to work through the backlog of queued jobs more quickly, until the queuing impact was fully mitigated at 14:45 UTC. We are improving failover resiliency and validation to reduce the likelihood of similar issues in the future.</p> <p><strong>May 30 08:10 UTC (lasting 7 hours and 50 minutes)</strong></p> <p>On May 30, 2025, between 08:10 UTC and 16:00 UTC, the Microsoft Teams GitHub integration service experienced a complete service outage.</p> <p>During this period, the integration was unable to process user requests or deliver notifications, resulting in a 100% error rate across all functionality, with the exception of link previews. The outage was caused by an authentication issue with our downstream authentication provider.</p> <p>While the appropriate monitoring was in place, the alerting thresholds were not sensitive enough to trigger a timely response, resulting in delayed incident detection and engagement. Once engaged, our team worked closely with the downstream provider to diagnose and resolve the authentication failure. However, longer-than-expected response times from the provider contributed to the extended duration of the outage.</p> <p>We mitigated the incident by working with our provider to restore service functionality, and we are working to migrate to more durable authentication methods to reduce the risk of similar issues in the future.</p> <hr class="wp-block-separator has-alpha-channel-opacity"/> <p>Please follow our <a href="https://www.githubstatus.com/">status page</a> for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the <a href="https://github.blog/category/engineering/">GitHub Engineering Blog</a>.</p> <p>The post <a href="https://github.blog/news-insights/company-news/github-availability-report-may-2025/">GitHub Availability Report: May 2025</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Google Pay inside sandboxed iframe for PCI DSS v4 compliance - Google Developers Blog https://developers.googleblog.com/en/google-pay-inside-sandboxed-iframe-for-pci-dss-v4-compliance/ 2025-06-10T16:45:58.000Z Use a sandboxed iframe to implement Google Pay on checkout pages, which helps comply with PCI DSS v4 requirements by isolating scripts. Shopify successfully implemented this method and passed the PCI DSS v4 audit.