FAANGineering - BlogFlock 2025-04-29T11:48:49.545Z BlogFlock Google Developers Blog, The GitHub Blog, Nextdoor Engineering - Medium, Engineering at Meta, Netflix TechBlog - Medium, Etsy Engineering | Code as Craft Usability and safety updates to Google Auth Platform - Google Developers Blog https://developers.googleblog.com/en/usability-and-safety-updates-to-google-auth-platform/ 2025-04-28T19:29:01.000Z Updates to the Google Auth Platform include changes to OAuth configuration, client secrets display, and automatic deletion of unused clients, making the platform more secure and easier to use. How Meta understands data at scale - Engineering at Meta https://engineering.fb.com/?p=22393 2025-04-28T16:30:19.000Z <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Managing and understanding large-scale data ecosystems is a significant challenge for many organizations, requiring innovative solutions to efficiently safeguard user data. Meta&#8217;s vast and diverse systems make it particularly challenging to comprehend its structure, meaning, and context at scale.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">To address these challenges, we made </span><a href="https://about.fb.com/news/2025/01/meta-8-billion-investment-privacy/" target="_blank" rel="noopener"><span style="font-weight: 400;">substantial</span></a><span style="font-weight: 400;"> investments in advanced </span><b>data understanding </b><span style="font-weight: 400;">technologies, as part of our </span><a href="https://engineering.fb.com/2025/01/22/security/how-meta-discovers-data-flows-via-lineage-at-scale/" target="_blank" rel="noopener"><span style="font-weight: 400;">Privacy Aware Infrastructure (PAI)</span></a><span style="font-weight: 400;">. Specifically, we have adopted a &#8220;shift-left&#8221; approach, integrating data schematization and annotations early in the product development process. 
We also created a </span><b>universal privacy taxonomy</b><span style="font-weight: 400;">, a standardized framework providing a common semantic vocabulary for data privacy management across Meta&#8217;s products that ensures quality data understanding and provides developers with reusable and efficient compliance tooling.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We discovered that a flexible and incremental approach was necessary to onboard the wide variety of systems and languages used in building Meta’s products. Additionally, continuous collaboration between privacy and product teams was essential to unlock the value of data understanding at scale.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We embarked on the journey of understanding data across Meta a decade ago, with millions of assets in scope ranging from structured to unstructured, processed by millions of flows across many of Meta&#8217;s app offerings. Over the past 10 years, Meta has cataloged millions of data assets and is classifying them daily, supporting numerous privacy initiatives across our product groups. Additionally, our continuous understanding approach ensures that privacy considerations are embedded at every stage of product development. </span></li> </ul> <p><span style="font-weight: 400;">At Meta, we have a deep responsibility to protect the privacy of our community. We’re upholding that by investing our vast engineering capabilities into building cutting-edge privacy technology. We believe that privacy drives product innovation. 
This led us to develop our </span><a href="https://engineering.fb.com/2024/08/27/security/privacy-aware-infrastructure-purpose-limitation-meta/" target="_blank" rel="noopener"><span style="font-weight: 400;">Privacy Aware Infrastructure (PAI)</span></a><span style="font-weight: 400;">, which integrates efficient and reliable privacy tools into Meta’s systems to address needs such as </span><a href="https://engineering.fb.com/2024/08/27/security/privacy-aware-infrastructure-purpose-limitation-meta/" target="_blank" rel="noopener"><span style="font-weight: 400;">purpose limitation</span></a><span style="font-weight: 400;">—restricting how data can be used while also unlocking opportunities for product innovation by ensuring transparency in data flows </span></p> <p><span style="font-weight: 400;">Data understanding is an early step in PAI. It involves capturing the structure and meaning of data assets, such as tables, logs, and AI models. Over the past decade, we have gained a deeper understanding of our data, by embedding privacy considerations into every stage of product development, ensuring a more secure and responsible approach to data management.</span></p> <p><img fetchpriority="high" decoding="async" class="alignnone size-large wp-image-22394" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?w=1024" alt="" width="1024" height="315" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=916,281 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=768,236 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=1024,315 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=1536,472 1536w, 
https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=96,29 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-1.png?resize=192,59 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">We embarked on our data understanding journey by employing heuristics and </span><a href="https://engineering.fb.com/2020/07/21/security/data-classification-system/"><span style="font-weight: 400;">classifiers</span></a><span style="font-weight: 400;"> to automatically detect semantic types from user-generated content. This approach has evolved significantly over the years, enabling us to scale to millions of assets. However, conducting these processes outside of developer workflows presented challenges in terms of accuracy and timeliness. Delayed classifications often led to confusion and unnecessary work, while the results were difficult to consume and interpret.</span></p> <h2><span style="font-weight: 400;">Data understanding at Meta using PAI</span></h2> <p><span style="font-weight: 400;">To address these shortcomings, we invested in </span><b>data understanding</b><span style="font-weight: 400;"> by capturing asset structure (</span><b>schematization</b><span style="font-weight: 400;">), describing meaning (</span><b>annotation</b><span style="font-weight: 400;">), and </span><b>inventorying</b><span style="font-weight: 400;"> it into </span><b>OneCatalog</b><span style="font-weight: 400;"> (Meta’s system that discovers, registers, and enumerates all data assets) across all Meta technologies. We developed tools and APIs for developers to organize assets, classify data, and auto-generate annotation code. 
Despite significant investment, the journey was not without challenges, requiring innovative solutions and collaboration across the organization.</span></p> <table border="1"> <tbody> <tr> <td style="vertical-align: top;" bgcolor="#cfe2f3"><b>Challenge</b></td> <td style="vertical-align: top;" bgcolor="#cfe2f3"><b>Approach</b></td> </tr> <tr> <td style="vertical-align: top;"><b><i>Understanding at scale </i></b><i><span style="font-weight: 400;">(lack of foundation)</span></i></p> <p><span style="font-weight: 400;">At Meta, we manage </span><b>hundreds of data systems and millions of assets</b><span style="font-weight: 400;"> across our family of apps.</span></p> <p><span style="font-weight: 400;">Each product features its own distinct data model, physical schema, query language, and access patterns. This diversity created a unique hurdle for offline assets: the inability to reuse schemas due to the limitations of physical table schemas in adapting to changing definitions. Specifically, renaming columns or making other modifications had far-reaching downstream implications, rendering schema evolution challenging; propagating changes required careful coordination to ensure consistency and accuracy across multiple systems and assets. </span></td> <td style="vertical-align: top;"><span style="font-weight: 400;">We introduced a </span><b>shared asset schema format </b><span style="font-weight: 400;">as a logical representation of the asset schema that can be translated back and forth with the system-specific format. 
Additionally, it offers tools to automatically </span><b>classify data</b><span style="font-weight: 400;"> and </span><b>send out annotation changes to asset owners for review</b><span style="font-weight: 400;">, effectively managing long-tail systems.</span></td> </tr> <tr> <td style="vertical-align: top;"><b><i>Inconsistent definitions </i></b><i><span style="font-weight: 400;">(lack of shared understanding)</span></i></p> <p><span style="font-weight: 400;">We encountered difficulties with </span><b>diverse data systems</b><span style="font-weight: 400;"> that store data in various formats, and </span><b>customized data labels</b><span style="font-weight: 400;"> that made it challenging to recognize identical data elements when they are stored across multiple systems.</span></td> <td style="vertical-align: top;"><span style="font-weight: 400;">We introduced a unified</span><b> taxonomy of semantic types</b><span style="font-weight: 400;">, which are compiled into different languages. This ensured that all systems can share the same canonical set of labels.</span></td> </tr> <tr> <td style="vertical-align: top;"><b><i>Missing annotations </i></b><i><span style="font-weight: 400;">(lack of quality)</span></i></p> <p><span style="font-weight: 400;">A solution that relied solely on data scanning and pattern matching was prone to false positives due to </span><b>limited contextual information</b><span style="font-weight: 400;">. For instance, a 64-bit integer could be misclassified as either a timestamp or a user identifier without additional context. 
Moreover, manual human labeling is </span><b>not feasible at scale</b><span style="font-weight: 400;"> because it relies heavily on individual developers&#8217; expertise and knowledge.</span></td> <td style="vertical-align: top;"><span style="font-weight: 400;">We shifted left by combining </span><b>schematization</b><span style="font-weight: 400;"> with </span><b>annotations in code</b><span style="font-weight: 400;">, in addition to improving and utilizing </span><b>multiple classification signals</b><span style="font-weight: 400;">. Strict measurements provided precision/recall guarantees. Protection was embedded in everything we built, without requiring every developer to be a privacy expert.</span></td> </tr> <tr> <td style="vertical-align: top;"><b><i>Organizational barriers</i></b><i><span style="font-weight: 400;"> (lack of a unified approach)</span></i></p> <p><span style="font-weight: 400;">Meta&#8217;s data systems, with their bespoke schematization and practices, posed significant challenges in understanding data across the company. As we navigated complex interactions and ever-evolving privacy requirements, it became clear that fragmented approaches to data understanding hindered our ability to grasp data comprehensively.</span></td> <td style="vertical-align: top;"><span style="font-weight: 400;">By collaborating with asset owners to develop intuitive tooling and improve coverage, we tackled adoption barriers such as poor developer experience and inaccurate classification. This effort laid the groundwork for a unified data understanding foundation, which was seamlessly integrated into the developer workflow. 
As a result, we drove a cultural shift towards reusable and efficient privacy practices, ultimately delivering value to product teams and fostering a more cohesive approach to data management.</span></td> </tr> </tbody> </table> <h2></h2> <h2>Walkthrough<span style="font-weight: 400;">: Understanding user data for the “Beliefs” feature in Facebook Dating </span></h2> <p><span style="font-weight: 400;">To illustrate our approach and dive into the technical solution, let’s consider a scenario involving structured user data. When creating a profile on the Facebook Dating app, users have the option to include their religious views to help match with others who share similar values.</span></p> <p><img decoding="async" class="alignnone size-large wp-image-22395" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?w=1024" alt="" width="1024" height="825" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=916,738 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=768,619 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=1024,825 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=1536,1237 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=96,77 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-2.png?resize=192,155 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">On Facebook Dating, religious views are subject to purpose limitation requirements. 
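</span></p>

<p><span style="font-weight: 400;">To make purpose limitation concrete, here is a minimal, hypothetical sketch of a policy check keyed on semantic-type annotations. The labels and allowed purposes below are invented for illustration and do not reflect Meta&#8217;s actual policies or APIs:</span></p>

```python
# Hypothetical purpose-limitation policy: map each semantic type to the set
# of purposes it may be used for. Labels and purposes are illustrative only.
ALLOWED_PURPOSES = {
    "faithSpirituality": {"dating_matching"},
    "identity_name": {"dating_matching", "profile_display"},
}

def check_purpose(semantic_type: str, purpose: str) -> bool:
    """Return True iff `purpose` is allowed for data of this semantic type."""
    return purpose in ALLOWED_PURPOSES.get(semantic_type, set())

# Religious views may power Dating matching, but nothing else.
print(check_purpose("faithSpirituality", "dating_matching"))
print(check_purpose("faithSpirituality", "ads_targeting"))
```

<p><span style="font-weight: 400;">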
</span><span style="font-weight: 400;">Our five-step approach to data understanding provides a precise, end-to-end view of how we track and protect sensitive data assets, including those related to religious views:</span></p> <p><img decoding="async" class="alignnone size-large wp-image-22396" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?w=1024" alt="" width="1024" height="281" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=916,251 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=768,211 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=1024,281 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=1536,421 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=96,26 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=192,53 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Even a simple feature can involve data being processed by dozens of heterogeneous systems, making end-to-end data protection critical. To ensure comprehensive protection, it is essential to apply the necessary steps to all systems that store or process data, including </span><b>distributed systems</b><span style="font-weight: 400;"> (web systems, chat, mobile and backend services) and </span><b>data warehouses</b><span style="font-weight: 400;">.</span></p> <p><span style="font-weight: 400;">Consider the data flow from online systems to the data warehouse, as shown in the diagram below. 
To ensure that religious belief data is identified across all these systems, we have implemented measures to </span><a href="https://engineering.fb.com/2024/08/27/security/privacy-aware-infrastructure-purpose-limitation-meta/#:~:text=Continuously%20enforce%20and%20monitor%20data%20flows" target="_blank" rel="noopener"><span style="font-weight: 400;">prevent its use for any purpose other than the stated one</span></a><span style="font-weight: 400;">.</span></p> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22412" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?w=1024" alt="" width="1024" height="575" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png 1770w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=916,514 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=768,431 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=1024,575 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=1536,863 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-4b.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <h3><span style="font-weight: 400;">Step 1 &#8211; Schematizing</span></h3> <p><span style="font-weight: 400;">As part of the PAI initiative, Meta developed DataSchema, a standard format used to capture the structure and relationships of all data assets, independent of system implementation, creating a canonical representation for compliance tools. 
Understanding DataSchema requires grasping </span><b>schematization</b><span style="font-weight: 400;">, which defines the logical structure and relationships of data assets, specifying field names, types, metadata, and policies.</span></p> <p><span style="font-weight: 400;">Implemented using the </span><a href="https://thrift.apache.org/docs/idl.html" target="_blank" rel="noopener"><span style="font-weight: 400;">Thrift Interface Description Language</span></a><span style="font-weight: 400;">, DataSchema is compatible with Meta systems and languages. It describes over 100 million schemas across more than 100 data systems, covering granular data units like database tables, key-value stores, data streams from distributed systems (such as those used for logging), processing pipelines, and AI models. Essentially, a data asset is like a class with annotated attributes. </span></p> <p><span style="font-weight: 400;">Let&#8217;s examine the source of truth (SoT) for a user&#8217;s dating profile schema, modeled in DataSchema. This schema includes the names and types of fields and subfields:</span></p> <pre class="line-numbers"><code class="language-none">- user_id (uint)
- name (string)
- age (uint)
- religious_views (enum)
- photos (array&lt;struct&gt;):
  - url (url)
  - photo (blob)
  - caption (string)
  - uploaded_date (timestamp)
</code><i>Dating profile DataSchema</i></pre> <p><span style="font-weight: 400;">The canonical SoT schema serves as the foundation for all downstream representations of the dating profile data. In practice, this schema is often translated into system-specific schemas (source of record &#8211; “SoR”), optimized for developer experience and system implementation in each environment. 
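</span></p>

<p><span style="font-weight: 400;">The SoT-to-SoR translation can be sketched as a small mapping exercise: take the canonical field list and render it in a system-specific dialect, here a Hive-style DDL string. The type mappings and function names below are illustrative assumptions, not Meta&#8217;s real tooling:</span></p>

```python
# Sketch: translate a canonical (SoT) schema into one system-specific (SoR)
# representation. The canonical fields mirror the dating profile example;
# the type mapping is an illustrative assumption.
CANONICAL_PROFILE = [
    ("user_id", "uint"),
    ("name", "string"),
    ("age", "uint"),
    ("religious_views", "enum"),
]

HIVE_TYPES = {"uint": "BIGINT", "string": "VARCHAR", "enum": "VARCHAR"}

def to_hive_ddl(table: str, fields: list[tuple[str, str]]) -> str:
    """Render the canonical schema as a Hive-style CREATE TABLE statement."""
    cols = ", ".join(f"{name} {HIVE_TYPES[dtype]}" for name, dtype in fields)
    return f"CREATE TABLE {table} ({cols})"

print(to_hive_ddl("dating_profile", CANONICAL_PROFILE))
```

<p><span style="font-weight: 400;">A translator per target system (one for the warehouse, one for logging, and so on) is what lets a single canonical schema stay authoritative while each environment keeps its native representation.</span></p>

<p><span style="font-weight: 400;">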
</span></p> <h3><span style="font-weight: 400;">Step 2 &#8211; Predicting metadata at scale</span></h3> <p><span style="font-weight: 400;">Building on this schematization foundation, we used annotations to describe data, enabling us to quickly and reliably locate user data, such as religious beliefs, across Meta&#8217;s vast data landscape. This is achieved through a universal</span><b> privacy taxonomy</b><span style="font-weight: 400;">, a framework that provides a common semantic vocabulary for data privacy management across Meta&#8217;s apps. It offers a consistent language for data description and understanding, independent of specific programming languages or technologies.</span></p> <p><span style="font-weight: 400;">The </span><b>u</b><span style="font-weight: 400;">niversal privacy taxonomy works alongside </span><b>data classification</b><span style="font-weight: 400;">, which scans systems across Meta&#8217;s product family to ensure compliance with privacy policies. These systems use taxonomy labels to identify and classify data elements, ensuring privacy commitments are met and data is handled appropriately according to its classification.</span></p> <p><b>Privacy annotations</b><span style="font-weight: 400;"> are represented by taxonomy </span><a href="https://en.wikipedia.org/wiki/Faceted_classification"><span style="font-weight: 400;">facets</span></a><span style="font-weight: 400;"> and their values. For example, an asset might pertain to an </span><span style="font-weight: 400; font-family: 'courier new', courier;">Actor.Employee</span><span style="font-weight: 400;">, with data classified as </span><span style="font-weight: 400;">S<span style="font-family: 'courier new', courier;">emanticType.Email</span></span><span style="font-weight: 400;"> and originating from </span><span style="font-weight: 400; font-family: 'courier new', courier;">DataOrigin.onsite</span><span style="font-weight: 400;">, not a third party. 
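</span></p>

<p><span style="font-weight: 400;">One hypothetical way to picture these facets is as enumerations whose values combine into an annotation. The facet values below are illustrative stand-ins, not the actual taxonomy:</span></p>

```python
from enum import Enum

# Hypothetical rendering of three taxonomy facets as enums. The real facet
# values are defined by Meta's universal privacy taxonomy; these are
# illustrative stand-ins only.
class Actor(Enum):
    USER = "user"
    EMPLOYEE = "employee"

class SemanticType(Enum):
    EMAIL = "email"
    FAITH_SPIRITUALITY = "faithSpirituality"

class DataOrigin(Enum):
    ONSITE = "onsite"
    THIRD_PARTY = "third_party"

# An annotation is a combination of facet values, e.g. the example above:
# an employee-related asset holding email data collected onsite.
annotation = (Actor.EMPLOYEE, SemanticType.EMAIL, DataOrigin.ONSITE)
print([facet.value for facet in annotation])
```

<p><span style="font-weight: 400;">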
The </span><span style="font-weight: 400; font-family: 'courier new', courier;">SemanticType</span><span style="font-weight: 400;"> annotation is our standard facet for describing the meaning, interpretation, or context of data, such as user names, email addresses, phone numbers, dates, or locations. </span></p> <p><span style="font-weight: 400;">Below, we illustrate the semantic type taxonomy node for our scenario, </span><span style="font-weight: 400; font-family: 'courier new', courier;">Faith Spirituality</span><span style="font-weight: 400;">:</span></p> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22398" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?w=1024" alt="" width="1024" height="490" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=916,438 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=768,367 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=1024,490 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=1536,735 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=96,46 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-5.png?resize=192,92 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">As data models and collected data evolve, annotations can become outdated or incorrect. Moreover, new assets may lack annotations altogether. 
To address this, PAI utilizes various techniques to continuously verify our understanding of data elements and maintain accurate, up-to-date annotations:</span></p> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22399" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?w=1024" alt="" width="1024" height="588" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=916,526 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=768,441 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=1024,588 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=1536,881 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=96,55 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-6.png?resize=192,110 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Our classification system leverages </span><b>machine learning models and heuristics</b><span style="font-weight: 400;"> to predict data types by sampling data, extracting features, and inferring annotation values. Efficient data sampling, such as Bernoulli sampling, and processing techniques enable scaling to billions of data elements with low-latency classifications. 
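</span></p>

<p><span style="font-weight: 400;">A toy version of this flow, assuming simple regex heuristics and Bernoulli sampling (the patterns, rates, and label names are illustrative, and the ML-model fallback is stubbed out):</span></p>

```python
import random
import re

# Toy classification flow: Bernoulli-sample rows, run cheap heuristic rules
# first, and fall back where no rule fires (a real system would invoke an
# ML model at that point). All patterns and thresholds are illustrative.

def bernoulli_sample(rows, p=0.1, seed=42):
    """Keep each row independently with probability p (deterministic seed)."""
    rng = random.Random(seed)
    return [r for r in rows if rng.random() < p]

HEURISTICS = [
    (re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"), "email"),
    (re.compile(r"^\+?\d{10,15}$"), "phone_number"),
]

def classify_value(value: str) -> str:
    for pattern, label in HEURISTICS:
        if pattern.match(value):
            return label
    return "unknown"  # stand-in for the ML-model fallback

sample = bernoulli_sample(["alice@example.com", "+14155550123", "hello"], p=1.0)
print([classify_value(v) for v in sample])
```

<p><span style="font-weight: 400;">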
</span></p> <p><span style="font-weight: 400;">Key components include:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Scheduling component</b><span style="font-weight: 400;">: manages the set of data assets to scan, accommodating different data system architectures by either pulling data via APIs or receiving data pushed directly into the scanning service.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Scanning service</b><span style="font-weight: 400;">: processes and analyzes data from various sources by accumulating samples in memory, deserializing rows (e.g., JSON) into fields and sub-fields, and extracting features using APIs available in multiple languages (C++, Python, Hack). It ensures comprehensive data capture, even for ephemeral data.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Classification service</b><span style="font-weight: 400;">: utilizes heuristic rules and machine learning models to classify data types with high accuracy.</span> <ul> <li style="font-weight: 400;" aria-level="2"><b>Heuristic rules</b><span style="font-weight: 400;">: handle straightforward, deterministic cases by identifying specific data formats like dates, phone numbers, and user IDs.</span></li> <li style="font-weight: 400;" aria-level="2"><b>Machine learning models</b><span style="font-weight: 400;">: trained on labeled datasets using supervised learning and improved through unsupervised learning to identify patterns and anomalies in unlabeled data.</span></li> <li style="font-weight: 400;" aria-level="2"><b>Ground truth calibration and verification</b><span style="font-weight: 400;">: ensures system accuracy and reliability, allowing for model fine-tuning and improved classification performance.</span></li> </ul> </li> <li style="font-weight: 400;" aria-level="1"><b>Lineage and propagation: </b><span style="font-weight: 400;">We integrate classification rules with high-confidence lineage signals to ensure accurate data tracking 
and management. Our propagation mechanism enables the seamless annotation of data as needed, ensuring that exact copies of data across systems receive equivalent classification. This approach not only maintains data integrity but also optimizes the developer experience by streamlining the process of managing data classifications across our diverse systems.</span></li> </ul> <h3><span style="font-weight: 400;">Step 3 &#8211; Annotating</span></h3> <p><span style="font-weight: 400;">The integration of metadata predictions and developer input creates a comprehensive picture of a data asset&#8217;s structure (schema) and its meaning (annotation). This is achieved by attaching these elements to individual fields in data assets, providing a thorough understanding of the data.</span></p> <p><span style="font-weight: 400;">Building on the</span> <span style="font-weight: 400;">predicting data at scale initiative (step 2), where we utilize the </span><b>u</b><span style="font-weight: 400;">niversal privacy taxonomy and classification systems to identify and classify data elements, the generated metadata predictions are then used to help developers annotate their data assets efficiently and correctly.</span></p> <p><b>Portable annotation APIs: </b><span style="font-weight: 400;">seamlessly integrate into developer workflows ensuring:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Consistent representation of data across all systems at Meta.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Accurate understanding of data, enabling the application of privacy safeguards at scale.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Efficient evidencing of compliance with regulatory requirements.</span></li> </ul> <p><b>Metadata predictions and developer input: </b><span style="font-weight: 400;">Two key components work together to create a comprehensive data 
asset picture:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Metadata predictions</b><span style="font-weight: 400;">: Classifiers generate predictions to aid developers in annotating data assets efficiently and correctly. If the confidence score exceeds a certain threshold, assignment can be automated, saving developer time.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Developer input</b><span style="font-weight: 400;">: Developers manually refine and verify annotations, ensuring that the data&#8217;s context and privacy requirements are accurately captured. Human oversight guarantees the accuracy and reliability of the data asset picture.</span></li> </ul> <pre class="line-numbers"><code class="language-none">- user_id (uint) → SemanticType::id_userID
- name (string) → SemanticType::identity_name
- age (uint) → SemanticType::age
- religious_views (enum) → SemanticType::faithSpirituality
- photos (array&lt;struct&gt;):
  - url (url) → SemanticType::electronicID_uri_mediaURI_imageURL
  - photo (blob) → SemanticType::media_image
  - caption (string) → SemanticType::media_text_naturalLanguageText
  - uploaded_date (timestamp) → SemanticType::uploadedTime
</code></pre> <p><b>Ensuring complete schemas with annotations: </b><span style="font-weight: 400;">To maintain a high standard of data understanding, we have integrated data understanding into our data model lifecycle. 
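</span></p>

<p><span style="font-weight: 400;">The interplay of these two components can be sketched as a confidence-threshold triage: high-confidence predictions are applied automatically, and everything else is queued for developer review. The threshold value and data shapes below are assumptions for illustration:</span></p>

```python
# Sketch of combining classifier predictions with developer input.
# Predictions at or above the threshold are auto-applied; the rest go to a
# human review queue. The threshold and tuple format are illustrative.
AUTO_APPLY_THRESHOLD = 0.95

def triage(predictions):
    """predictions: list of (field_name, semantic_type, confidence)."""
    applied, review_queue = {}, []
    for field_name, semantic_type, confidence in predictions:
        if confidence >= AUTO_APPLY_THRESHOLD:
            applied[field_name] = semantic_type      # saved developer time
        else:
            review_queue.append((field_name, semantic_type, confidence))
    return applied, review_queue

applied, queue = triage([
    ("religious_views", "faithSpirituality", 0.99),
    ("misc_id", "id_userID", 0.60),
])
print(applied, queue)
```

<p><span style="font-weight: 400;">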
This includes auto-generating code to represent the schema of newly created assets when missing, ensuring that no new assets are created without a proper schema.</span></p> <p><span style="font-weight: 400;">For example, in the context of the religious-views feature in Facebook Dating, we have defined the asset&#8217;s structure, including fields like ‘</span><span style="font-weight: 400; font-family: 'courier new', courier;">Name</span><span style="font-weight: 400;">,’ ‘</span><span style="font-weight: 400; font-family: 'courier new', courier;">EmailAddress</span><span style="font-weight: 400;">,’ and ‘</span><span style="font-weight: 400; font-family: 'courier new', courier;">Religion</span><span style="font-weight: 400;">.’ Furthermore, we have annotated the asset with </span><span style="font-weight: 400; font-family: 'courier new', courier;">Actor::user()</span><span style="font-weight: 400;">, signifying that the data pertains to a user of our products. This level of detail enables us to readily identify fields containing privacy-related data and implement appropriate protective measures, such as applying the applicable purpose limitation policy.</span></p> <p><span style="font-weight: 400;">In the case of the &#8220;dating profile&#8221; data asset, we have defined its structure, including fields like ‘</span><span style="font-weight: 400; font-family: 'courier new', courier;">Name</span><span style="font-weight: 400;">’: </span></p> <pre class="line-numbers"><code class="language-none">final class DatingProfileSchema extends DataSchemaDefinition {
  &lt;&lt;__Override&gt;&gt;
  public function configure(ISchemaConfig $config): void {
    $config-&gt;metadataConfig()-&gt;description('Represents a dating profile');
    $config-&gt;annotationsConfig()-&gt;annotations(Actor::user());
  }

  &lt;&lt;__Override&gt;&gt;
  public function getFields(): dict&lt;string, ISchemaField&gt; {
    return dict[
      'Name' =&gt; StringField::create("name")
        -&gt;annotations(SemanticType::identity_name())
-&gt;example('John Doe'), 'Age' =&gt; StringInt::create('age') -&gt;description(“The age of the user.”) -&gt;annotations(SemanticType::age()) -&gt;example('24'), 'ReligiousViews' =&gt; EnumStringField::create('religious_views') -&gt;annotations(SemanticType::faithSpirituality()) -&gt;example('Atheist'), ]; } } </code></pre> <p><span style="font-weight: 400;">In order to optimize for developer experience, the details of the schema representation differ in each environment. For example, in the data warehouse, it&#8217;s represented as a Dataset &#8211; an in-code Python class capturing the asset&#8217;s schema and metadata. Datasets provide a native API for creating data pipelines. </span></p> <p><span style="font-weight: 400;">Here is an example of such a schema:</span></p> <pre class="line-numbers"><code class="language-none">​​@hive_dataset( "dim_all_dating_users", // table name "dating", // namespace oncall="dating_analytics", description="This is the primary Dating user dimension table containing one row per Dating user per day along with their profile, visitation, and key usage information.", metadata=Metadata(Actor.User), ) class dim_all_dating_users(DataSet): ds: Varchar = Partition("datestamp") userid: DatingUserID = Column("User id of the profile") email: EmailAddress = Column("User's email address"), age: PersonAge = Column("User's stated age on date ds") religious_views: ReligionOptions = Column("User's provided religious views") </code></pre> <p><span style="font-weight: 400;">Our warehouse schema incorporates </span><a href="https://engineering.fb.com/2022/11/30/data-infrastructure/static-analysis-sql-queries/" target="_blank" rel="noopener"><b>rich types</b></a><span style="font-weight: 400;">, a privacy-aware type system designed to enhance data understanding and facilitate effective data protection. 
Rich types, such as </span><span style="font-weight: 400; font-family: 'courier new', courier; color: #008000;">DatingUserID</span><span style="font-weight: 400;">, </span><span style="font-weight: 400; font-family: 'courier new', courier; color: #008000;">EmailAddress</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;"><span style="font-family: 'courier new', courier; color: #008000;">PersonAge</span>,</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400; font-family: 'courier new', courier; color: #008000;">ReligionOptions</span><span style="font-weight: 400;">, are integrated into the schema, offering a comprehensive approach to data management while encoding privacy metadata. They provide a developer-friendly way to annotate data and enable the enforcement of data quality rules and constraints at the type level, ensuring data consistency and accuracy across the warehouse. For instance, they can </span><a href="https://engineering.fb.com/2022/11/30/data-infrastructure/static-analysis-sql-queries/#:~:text=Enhanced%20type%2Dchecking" target="_blank" rel="noopener"><span style="font-weight: 400;">detect issues like joining columns with different types of user IDs</span></a><span style="font-weight: 400;"> or mismatched enums before code execution. </span></p> <p><span style="font-weight: 400;">Here is an example definition:</span></p> <pre class="line-numbers"><code class="language-none">ReligionOptions = enum_from_items(
    "ReligionOptions",
    items=[
        EnumItem("Atheist", "Atheist"),
        EnumItem("Buddhist", "Buddhist"),
        EnumItem("Christian", "Christian"),
        EnumItem("Hindu", "Hindu"),
        EnumItem("Jewish", "Jewish"),
        EnumItem("Muslim", "Muslim"),
        ...
    ],
    annotations=(SemanticType.faithSpirituality,),
)
</code></pre> <h3><span style="font-weight: 400;">Step 4 &#8211; Inventorying assets and systems</span></h3> <p><span style="font-weight: 400;">A central inventory system is crucial for managing data assets and their metadata, offering capabilities like search and compliance tracking. Meta&#8217;s </span><b>OneCatalog</b><span style="font-weight: 400;"> is a comprehensive system that discovers, registers, and enumerates all data assets across Meta&#8217;s apps, providing an inventory for easier management and tracking. </span></p> <p><b>Key functions of OneCatalog:</b></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Registering all data systems</b><span style="font-weight: 400;">: OneCatalog defines a data system as a logical abstraction over resources that persist data for a common purpose. It exhaustively examines resources across Meta&#8217;s environments to discover and register all data systems hosting data assets.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Enumerating all data assets</b><span style="font-weight: 400;">: Eligible data systems must enumerate their assets through the asset enumeration platform, generating a comprehensive list of assets and their metadata in the central inventory. 
These assets are grouped by &#8220;asset classes&#8221; based on shared patterns, enabling efficient management and understanding of data assets.</span></li> </ul> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22413" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?w=1024" alt="" width="1024" height="472" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png 1926w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=916,422 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=768,354 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=1024,472 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=1536,707 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=96,44 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-7b.png?resize=192,88 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><span style="font-weight: 400;">Guarantees provided by OneCatalog:</span></p> <p><b>Completeness</b><span style="font-weight: 400;">: The system regularly checks for consistency between the data defined in its configuration and the actual data stored in the inventory. 
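</span></p>
<p><span style="font-weight: 400;">In essence, such a completeness check reduces to a set comparison between the assets a data system enumerates and the assets the inventory holds. A minimal sketch (the function and asset names here are illustrative, not OneCatalog&#8217;s actual API):</span></p>

```python
def completeness_gaps(enumerated: set, inventoried: set) -> dict:
    """Compare the assets a data system enumerates against the central inventory."""
    return {
        "missing_from_inventory": enumerated - inventoried,  # enumerated but not catalogued
        "stale_in_inventory": inventoried - enumerated,      # catalogued but no longer enumerated
    }

enumerated = {"asset://hive/dim_all_dating_users", "asset://hive/dim_users"}
inventoried = {"asset://hive/dim_all_dating_users", "asset://hive/old_table"}
gaps = completeness_gaps(enumerated, inventoried)
print(gaps["missing_from_inventory"])  # {'asset://hive/dim_users'}
print(gaps["stale_in_inventory"])      # {'asset://hive/old_table'}
```

<p><span style="font-weight: 400;">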
This ongoing comparison ensures that all relevant data assets are accurately accounted for and up-to-date.</span></p> <p><b>Freshness</b><span style="font-weight: 400;">: In addition to regularly scheduled pull-based enumeration, the system subscribes to changes in data systems and updates its inventory in real time.</span></p> <p><b>Uniqueness of asset ID (XID)</b><span style="font-weight: 400;">: Each asset is assigned a globally unique identifier, similar to URLs, which facilitates coordination between multiple systems and the exchange of information about assets by providing a shared key. The globally unique identifier follows a human-readable structure, e.g., </span><b>asset://[asset-class]/[asset-name].</b></p> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22414" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?w=1024" alt="" width="1024" height="347" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png 1977w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=916,310 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=768,260 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=1024,347 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=1536,520 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=96,32 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-8b.png?resize=192,65 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><b>Unified UI: </b><span style="font-weight: 400;">On top of the inventory, OneCatalog provides a unified user interface that consolidates all asset metadata, serving as the central hub for asset information. 
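</span></p>
<p><span style="font-weight: 400;">Because XIDs follow a URL-like structure, splitting one into its asset class and asset name is ordinary URL parsing. A small illustrative sketch (our own code, not OneCatalog&#8217;s actual API):</span></p>

```python
from urllib.parse import urlparse

def parse_xid(xid: str):
    """Split an XID like asset://hive/dim_all_dating_users into (asset_class, asset_name)."""
    parsed = urlparse(xid)
    if parsed.scheme != "asset":
        raise ValueError(f"not an asset XID: {xid}")
    return parsed.netloc, parsed.path.lstrip("/")

print(parse_xid("asset://hive/dim_all_dating_users"))  # ('hive', 'dim_all_dating_users')
```

<p><span style="font-weight: 400;">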
This interface offers a single point of access to view and manage assets, streamlining the process of finding and understanding data.</span></p> <p><span style="font-weight: 400;">For example, in the context of our &#8220;religious beliefs in the Dating app&#8221; scenario, we can use OneCatalog&#8217;s unified user interface to view the warehouse dating profile table asset, providing a comprehensive overview of its metadata and relationships.</span></p> <p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-22402" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?w=1024" alt="" width="1024" height="820" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png 1486w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?resize=916,734 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?resize=768,615 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?resize=1024,820 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?resize=96,77 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-9.png?resize=192,154 192w" sizes="auto, (max-width: 992px) 100vw, 62vw" /></p> <p><b>Compliance and privacy assurance: </b><span style="font-weight: 400;">OneCatalog&#8217;s central inventory is utilized by various privacy teams across Meta to ensure that data assets meet requirements. 
With its completeness and freshness guarantees, OneCatalog serves as a reliable source of truth for privacy and compliance efforts.</span></p> <p><span style="font-weight: 400;">By providing a single view of all data assets, OneCatalog enables teams to efficiently identify and address potential risks or vulnerabilities, such as unsecured data or unauthorized access.</span></p> <h3><span style="font-weight: 400;">Step 5 &#8211; Maintaining data understanding</span></h3> <p><span style="font-weight: 400;">To maintain high coverage and quality of schemas and annotations across Meta&#8217;s diverse apps, we employed a robust process that involves measuring precision and recall for both predicted metadata and developer-provided annotations. This enables us to guide the implementation of our privacy and security controls and ensure their effectiveness.</span></p> <p><span style="font-weight: 400;">By leveraging data understanding, tooling can quickly build end-to-end compliance solutions. With schema and annotations now front and center, we&#8217;ve achieved continuous understanding, enabling our engineers to easily track and protect user data, implement various security and privacy controls, and build new features at scale.</span></p> <p><span style="font-weight: 400;">Our strategy for maintaining data understanding over time includes:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Shifting left on creation time</b><span style="font-weight: 400;">: We provided intuitive APIs for developers to provide metadata at asset creation time, ensuring that schemas and annotations were applied consistently in downstream use cases.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Detecting and fixing annotation gaps</b><span style="font-weight: 400;">: We surfaced prediction signals to detect coverage and quality gaps and evolved our prediction and annotation capabilities to ensure new systems and workflows were covered.</span></li> <li 
style="font-weight: 400;" aria-level="1"><b>Collecting ground truth</b><span style="font-weight: 400;">: We established, with the help of subject matter experts, a baseline against which to continuously measure and improve our automated systems.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Providing canonical consumption APIs</b><span style="font-weight: 400;">: We developed canonical APIs for common compliance usage patterns, such as detecting user data, to ensure consistent interpretation of metadata and low entry barriers.</span></li> </ul> <h2><span style="font-weight: 400;">Putting it all together</span></h2> <p><span style="font-weight: 400;">Coming back to our scenario: As developers on the Facebook Dating team collect or generate new data, they utilize familiar APIs that help them schematize and annotate their data. These APIs provide a consistent and intuitive way to define the structure and meaning of the data.</span></p> <p><span style="font-weight: 400;">When collecting data related to &#8220;Faith Spirituality,&#8221; the developers use a data classifier that confirms their semantic type annotations once the data is scanned during testing. This ensures that the data is accurately labeled and can be properly handled by downstream systems.</span></p> <p><span style="font-weight: 400;">To ensure the quality of the classification system, ground truth created by subject matter experts is used to measure its accuracy. A feedback loop between the product and PAI teams keeps the unified taxonomy updated, ensuring that it remains relevant and effective.</span></p> <p><span style="font-weight: 400;">By using canonical and catalogued metadata, teams across Meta can implement privacy controls that are consistent and effective. 
This enables the company to maintain user trust and meet requirements.</span></p> <p><span style="font-weight: 400;">In this scenario, the developers on the Facebook Dating team are:</span></p> <p><img decoding="async" class="alignnone size-large wp-image-22396" src="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?w=1024" alt="" width="1024" height="281" srcset="https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png 1999w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=916,251 916w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=768,211 768w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=1024,281 1024w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=1536,421 1536w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=96,26 96w, https://engineering.fb.com/wp-content/uploads/2025/04/Data-understand-Meta_image-3.png?resize=192,53 192w" sizes="(max-width: 992px) 100vw, 62vw" /></p> <ul> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Schematizing and annotating their data using familiar APIs.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Using a data classifier to confirm semantic type annotations.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Leveraging ground truth to measure the quality of the classification system.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Utilizing a feedback loop to keep the unified taxonomy updated.</span></li> <li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Implementing privacy controls using canonical and catalogued metadata.</span></li> </ul> <h2><span 
style="font-weight: 400;">Learnings and takeaways</span></h2> <p><span style="font-weight: 400;">Building an understanding of all data at Meta was a monumental effort that not only required novel infrastructure but also the contribution of thousands of engineers across all teams at Meta, and years of investment.</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Canonical everything</b><span style="font-weight: 400;">: Data understanding at scale relies on a canonical catalog of systems, asset classes, assets, and taxonomy labels, each with globally unique identifiers. This foundation enables an ecosystem of compliance tooling, separating the concerns of data understanding from consuming canonical metadata.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Incremental and flexible approach</b><span style="font-weight: 400;">: To tackle the challenge of onboarding hundreds of systems across Meta, we developed a platform that supports pulling schemas from existing implementations. We layered </span><a href="https://developers.facebook.com/blog/post/2021/04/26/eli5-ent-schema-as-code-go/"><span style="font-weight: 400;">solutions to enhance existing untyped APIs</span></a><span style="font-weight: 400;">, meeting developers where they are—whether in code, configuration, or a UI defining their use case and data model. This incremental and flexible approach delivers value at every step.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Collaborating for data classification excellence</b><span style="font-weight: 400;">: Building the platform was just the beginning. The infrastructure and privacy teams also collaborated with subject matter experts to develop best-in-class classifiers for our data, addressing some of the most challenging problems. 
These include detecting user-generated content, classifying data embedded in blobs, and creating a governed taxonomy that allows every developer to describe their data with the right level of detail.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Community engagement with a tight feedback loop</b><span style="font-weight: 400;">: Our success in backfilling schemas and integrating with the developer experience was made possible by a strong partnership with product teams. By co-building solutions and establishing an immediate feedback loop, we refined our approach, addressed misclassifications, and improved classification quality. This collaboration is crucial to our continued evolution and refinement of data understanding. </span></li> </ul> <h2><span style="font-weight: 400;">The future of data understanding</span></h2> <p><span style="font-weight: 400;">Data understanding has become a crucial component of Meta&#8217;s PAI initiative, enabling us to protect user data in a sustainable and effective manner. By creating a comprehensive understanding of our data, we can address privacy challenges durably and more efficiently than traditional methods.</span></p> <p><span style="font-weight: 400;">Our approach to data understanding aligns closely with the developer workflow, involving the creation of typed data models, collection of annotated data, and processing under relevant policies. At Meta’s scale, this approach has saved significant engineering effort by automating annotation on millions of assets (i.e., fields, columns, tables) with specific labels from an inventory that are deemed commitment-critical. This automation has greatly reduced the manual effort required for annotation, allowing teams to focus on higher-priority tasks. </span></p> <p><span style="font-weight: 400;">As data understanding continues to evolve, it is expected to have a significant impact on various aspects of operations and product offerings. 
Here are some potential future use cases:</span></p> <ul> <li style="font-weight: 400;" aria-level="1"><b>Improved AI and machine learning</b><span style="font-weight: 400;">: Leveraging data understanding to improve the accuracy of AI-powered content moderation and recommendation systems.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Streamlined developer workflows</b><span style="font-weight: 400;">: Integrating data understanding into Meta&#8217;s internal development tools to provide clear data context and reduce confusion.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Operational and developer efficiency</b><span style="font-weight: 400;">: By automating data classification and annotation for millions of assets across Meta&#8217;s platforms, we can significantly improve operational efficiency. This automation enables us to leverage metadata for various use cases, such as accelerating product innovation. For instance, we&#8217;re now utilizing this metadata to help developers efficiently find the right data assets, streamlining their workflow and reducing the time spent on manual searches.</span></li> <li style="font-weight: 400;" aria-level="1"><b>Product innovation</b><span style="font-weight: 400;">: With a comprehensive understanding of data, Meta can drive product innovation by leveraging insights to create personalized and engaging user experiences.</span></li> </ul> <p><span style="font-weight: 400;">While there is still more work to be done, such as evolving taxonomies to meet future compliance needs and developing novel ways to schematize data, we are excited about the potential of data understanding. 
By harnessing canonical metadata, we can deepen our shared understanding of data, unlocking unprecedented opportunities for innovation not only at Meta, but across the industry.</span></p> <h2><span style="font-weight: 400;">Acknowledgements</span></h2> <p><i><span style="font-weight: 400;">The authors would like to acknowledge the contributions of many current and former Meta employees who have played a crucial role in developing data understanding over the years. In particular, we would like to extend special thanks to (in alphabetical order) Adrian Zgorzalek, Alex Gorelik, Alex Uslontsev, Andras Belokosztolszki, Anthony O’Sullivan, Archit Jain, Aygun Aydin, Ayoade Adeniyi, Ben Warren, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Bob Baldwin&quot;,&quot;per_e&quot;:&quot;bobbaldwin@fb.com&quot;,&quot;type&quot;:&quot;person&quot;}">Bob Baldwin</span></i><i><span style="font-weight: 400;">, Brani Stojkovic, Brian Romanko, Can Lin, Carrie (Danning) Jiang, Chao Yang, Chris Ventura, Daniel Ohayon, Danny Gagne, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;David Taieb&quot;,&quot;per_e&quot;:&quot;dtaieb@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">David Taieb</span></i><i><span style="font-weight: 400;">, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Dong Jia&quot;,&quot;per_e&quot;:&quot;djia@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Dong Jia</span></i><i><span style="font-weight: 400;">, Dong Zhao, Eero Neuenschwander, Fang Wang, Ferhat Sahinkaya, Ferdi Adeputra, Gayathri Aiyer, George Stasa, Guoqiang Jerry Chen, Haiyang Han, Ian Carmichael, Jerry Pan, Jiang Wu, Johnnie Ballentyne, Joanna Jiang, Jonathan Bergeron, Joseph Li, Jun Fang, Kaustubh Karkare, Komal Mangtani, Kuldeep Chaudhary, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Matthieu 
Martin&quot;,&quot;per_e&quot;:&quot;matthieu@fb.com&quot;,&quot;type&quot;:&quot;person&quot;}">Matthieu Martin</span></i><i><span style="font-weight: 400;">, Marc Celani, Max Mazzeo, Mital Mehta, Nevzat Sevim, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Nick Gardner&quot;,&quot;per_e&quot;:&quot;nikgardner@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Nick Gardner</span></i><i><span style="font-weight: 400;">, Lei Zhang, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Luiz Ribeiro&quot;,&quot;per_e&quot;:&quot;luizribeiro@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Luiz Ribeiro</span></i><i><span style="font-weight: 400;">, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Oliver Dodd&quot;,&quot;per_e&quot;:&quot;oliverdodd@fb.com&quot;,&quot;type&quot;:&quot;person&quot;}">Oliver Dodd</span></i><i><span style="font-weight: 400;">, Perry Stoll, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Prashanth Bandaru&quot;,&quot;per_e&quot;:&quot;pbandaru@fb.com&quot;,&quot;type&quot;:&quot;person&quot;}">Prashanth Bandaru</span></i><i><span style="font-weight: 400;">, Piyush Khemka, Rahul Nambiar, Rajesh Nishtala, Rituraj Kirti, Roger (Wei) Li, Rujin Cao, Sahil Garg, Sean Wang, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Satish Sampath&quot;,&quot;per_e&quot;:&quot;ssm@fb.com&quot;,&quot;type&quot;:&quot;person&quot;}">Satish Sampath</span></i><i><span style="font-weight: 400;">,  Seth Silverman, Shridhar Iyer, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Sriguru Chakravarthi&quot;,&quot;per_e&quot;:&quot;srigurunath@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Sriguru Chakravarthi</span></i><i><span style="font-weight: 400;">, Sushaant Mujoo, Susmit Biswas, </span></i><i><span style="font-weight: 
400;" data-rich-links="{&quot;per_n&quot;:&quot;Taha Bekir Eren&quot;,&quot;per_e&quot;:&quot;tahaeren@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Taha Bekir Eren</span></i><i><span style="font-weight: 400;">, Tony Harper, Vineet Chaudhary, Vishal Jain, </span></i><i><span style="font-weight: 400;" data-rich-links="{&quot;per_n&quot;:&quot;Vitali Haravy&quot;,&quot;per_e&quot;:&quot;vharavy@meta.com&quot;,&quot;type&quot;:&quot;person&quot;}">Vitali Haravy</span></i><i><span style="font-weight: 400;">, Vlad Fedorov, Vlad Gorelik, Wolfram Schuttle, Xiaotian Guo, Yatu Zhang, Yi Huang, Yuxi Zhang, Zejun Zhang and Zhaohui Zhang. We would also like to express our gratitude to all reviewers of this post, including (in alphabetical order) Aleksandar Ilic, Avtar Brar, Brianna O&#8217;Steen, Chloe Lu, Chris Wiltz, Imogen Barnes, Jason Hendrickson, Rituraj Kirti, Xenia Habekoss and Yuri Claure. We would like to especially thank Jonathan Bergeron for overseeing the effort and providing all of the guidance and valuable feedback, and Ramnath Krishna Prasad for pulling required support together to make this blog post happen.</span></i></p> <p>The post <a rel="nofollow" href="https://engineering.fb.com/2025/04/28/security/how-meta-understands-data-at-scale/">How Meta understands data at scale</a> appeared first on <a rel="nofollow" href="https://engineering.fb.com">Engineering at Meta</a>.</p> GitHub for Beginners: Building a REST API with Copilot - The GitHub Blog https://github.blog/?p=86980 2025-04-28T13:00:42.000Z <html><body><p>Welcome to the next episode in our GitHub for Beginners series, where we&rsquo;re diving into the world of <a href="https://www.youtube.com/watch?v=n0NlxUyA7FI&amp;list=PL0lo9MOBetEFcp4SCWinBdpml9B2U25-f&amp;index=6">GitHub Copilot</a>. 
This is our fifth episode, and we&rsquo;ve already talked about Copilot in general, some of its essential features, how to write good prompts, and a bit about security best practices. We have all the previous episodes on <a href="https://github.blog/tag/github-for-beginners/">our blog</a> and available <a href="https://www.youtube.com/playlist?list=PL0lo9MOBetEFcp4SCWinBdpml9B2U25-f">as videos</a>.</p> <p>Today we&rsquo;re diving a little deeper&mdash;we&rsquo;re using GitHub Copilot to help us build a backend REST API. We&rsquo;ll walk through how to build the backend for Planventure, a travel itinerary builder that helps users plan their trips. You can find a full description of what we&rsquo;ll be building in <a href="http://gh.io/planventure">this repository</a>.</p> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/CJUbQ1QiBUY?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <aside class="p-4 p-md-6 post-aside--large"><p class="h5-mktg gh-aside-title">For the demos in this series, we&rsquo;re using GitHub Copilot in Visual Studio Code</p><p>Copilot is available in other IDEs, but the available functionality may vary depending on your environment.</p> </aside> <h2 id="what-youll-need-and-what-were-building" >What you&rsquo;ll need and what we&rsquo;re building<a href="#what-youll-need-and-what-were-building" class="heading-link pl-2 text-italic text-bold" aria-label="What you&rsquo;ll need and what we&rsquo;re building"></a></h2> <p><strong>Before we get started, here&rsquo;s what you&rsquo;ll need to install:</strong></p> <ul> <li>A code editor like <a href="https://code.visualstudio.com/">VS Code</a> </li> <li><a 
href="https://www.python.org/downloads/">Python</a> </li> <li><a href="https://marketplace.visualstudio.com/items?itemName=qwtel.sqlite-viewer">SQLite Viewer</a> to view database tables </li> <li>An API client like <a href="https://github.com/usebruno/bruno">Bruno</a> to test your routes </li> <li>Access to <a href="http://gh.io/gfb-copilot">GitHub Copilot</a> &mdash; sign up for free! </li> </ul> <p><strong>What we&rsquo;re building:</strong></p> <p>We&rsquo;re creating a minimum viable product (MVP) for Planventure&rsquo;s backend API. In the next episode, we&rsquo;ll build the frontend to connect to the API.</p> <p><strong>Here&rsquo;s what we&rsquo;ll include:</strong></p> <ul> <li>REST API built with Flask </li> <li>Database support using SQLAlchemy </li> <li>User <strong>authentication</strong> (with password hashing and JWTs) </li> <li>Full CRUD functionality for trips</li> </ul> <p><strong>Each trip will include:</strong></p> <ul> <li>Destination </li> <li>Start and end dates </li> <li>A basic itinerary with location coordinates (perfect for maps)</li> </ul> <p>At the end of this, we want to have a working API that handles user authentication and basic trip management.</p> <h2 id="step-1-setting-up-the-environment" >Step 1: Setting up the environment<a href="#step-1-setting-up-the-environment" class="heading-link pl-2 text-italic text-bold" aria-label="Step 1: Setting up the environment"></a></h2> <p>Fork and clone <a href="http://gh.io/planventure">the planventure repository</a>.</p> <p>Once you&rsquo;ve done that, <code>cd</code> into the <em>planventure-api</em> directory in your code editor.</p> <pre><code class="language-plaintext">git clone https://github.com/github-samples/planventure
cd planventure-api
</code></pre> <p>Open up your terminal and run the following commands to create and activate a virtual environment.</p> <pre><code class="language-plaintext">python3 -m venv venv
source venv/bin/activate
</code></pre> <p>Assuming you&rsquo;re using VS Code, you&rsquo;ll see <code>venv</code> in the status bar at the bottom. If you&rsquo;re using another editor, be sure to verify that you&rsquo;re in your virtual environment.</p> <p>Staying in your terminal, run this command to install dependencies:</p> <pre><code class="language-plaintext">pip install -r requirements.txt
</code></pre> <p>Run the following command in your terminal to start the server:</p> <pre><code class="language-plaintext">flask run --debug
</code></pre> <p>You should see your server running on <code>127.0.0.1:5000</code>. Use Bruno to make a <code>GET</code> request to that URL and you should see the following welcome message:</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png"><img data-recalc-dims="1" fetchpriority="high" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png?resize=1024%2C571" alt='A screenshot of the welcome message. It says, "Welcome to PlanVenture API".' width="1024" height="571" class="alignnone size-full wp-image-86996 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/bruno-welcomeapi.png?w=1024 1024w" sizes="(max-width: 1000px) 100vw, 1000px" /></a></p> <p>Remember to add <code>venv/</code> to your <code>.gitignore</code>!</p> <h2 id="step-2-create-the-database" >Step 2: Create the Database<a href="#step-2-create-the-database" class="heading-link pl-2 text-italic text-bold" aria-label="Step 2: Create the Database"></a></h2> <p>Now that you&rsquo;ve got your environment created, it&rsquo;s time to start using Copilot to generate some code. 
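</p>
<p>Before prompting, it helps to have a concrete picture of the data we want Copilot to model. Here&rsquo;s the rough shape of a trip, sketched with Python&rsquo;s built-in <code>sqlite3</code> (the table and column names are our own guesses for illustration, not the repository&rsquo;s actual schema):</p>

```python
import sqlite3

# In-memory database just to illustrate the target shape of the trips data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trips (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        destination TEXT NOT NULL,
        start_date TEXT,
        end_date TEXT,
        latitude REAL,
        longitude REAL
    )
""")
conn.execute(
    "INSERT INTO trips (user_id, destination, start_date, end_date, latitude, longitude) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (1, "Lisbon", "2025-06-01", "2025-06-08", 38.7223, -9.1393),
)
row = conn.execute("SELECT destination FROM trips WHERE user_id = 1").fetchone()
print(row[0])  # Lisbon
```

<p>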
We&rsquo;ll use Copilot Chat and Copilot Edits to generate our database setup.</p> <p>Open the <em>app.py</em> file and Copilot Chat, choose the Ask option, and enter the following prompt:</p> <pre><code class="language-plaintext">@workspace Update the Flask app with SQLAlchemy and basic configurations </code></pre> <p>At the bottom of the chat window, click the model selector, select the Claude 3.5 Sonnet model, and send the prompt.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/model_picker.png"><img data-recalc-dims="1" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/model_picker.png?resize=867%2C360" alt="A screenshot showing the location of the model picker. It's located at the bottom right of the chat box." width="867" height="360" class="alignnone size-full wp-image-86997 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/model_picker.png?w=867 867w, https://github.blog/wp-content/uploads/2025/04/model_picker.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/model_picker.png?w=768 768w" sizes="(max-width: 867px) 100vw, 867px" /></a></p> <p>Copilot will provide a plan, a code block, and a summary of suggested changes.</p> <p>Whenever you receive code suggestions from Copilot, it&rsquo;s important to review them and understand the changes it&rsquo;s suggesting. After reviewing the changes, hover over the code block and click the <strong>Apply in editor</strong> button&mdash;the first button in the top-right of the suggested code.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png"><img data-recalc-dims="1" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png?resize=1024%2C571" alt="A screenshot showing the location of the &quot;Apply in editor&quot; button. It's in the top-right of the suggested code box."
width="1024" height="571" class="alignnone size-full wp-image-86998 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/apply_in_editor.png?w=1024 1024w" sizes="(max-width: 1000px) 100vw, 1000px" /></a></p> <p>Select the option to apply the changes into the active editor. Copilot will make these updates in the open file. Click <strong>Accept</strong> to add those changes.</p> <div style="width: 1426px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]--> <video class="wp-video-shortcode" id="video-86980-1" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/apply_in_editor_vid.mp4#t=0.001?_=1" /><a href="https://github.blog/wp-content/uploads/2025/04/apply_in_editor_vid.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/apply_in_editor_vid.mp4#t=0.001</a></video></div> <p>If Copilot provided suggestions that need to be in a new file, hover over the change and select the three dots, then select <strong>Insert into New File</strong> and save.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-2" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/insert_new_file.mp4#t=0.001?_=2" /><a href="https://github.blog/wp-content/uploads/2025/04/insert_new_file.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/insert_new_file.mp4#t=0.001</a></video></div> <p>Now we need to update our dependencies. 
Go back to Copilot Chat and enter the following prompt:</p> <pre><code class="language-plaintext">@workspace update requirements.txt with the necessary packages for Flask API with SQLAlchemy and JWT </code></pre> <p>Open the <code>requirements.txt</code> file. Hover over the suggested changes, click the <strong>Apply in editor</strong> button, and once again select the option to apply the changes into the active editor. Just like before, accept the changes and save the file.</p> <h2 id="step-3-time-to-create-some-models" id="step-3-time-to-create-some-models" >Step 3: Time to create some models<a href="#step-3-time-to-create-some-models" class="heading-link pl-2 text-italic text-bold" aria-label="Step 3: Time to create some models"></a></h2> <p>Now it&rsquo;s time to create our <code>User</code> and <code>Trip</code> models. <a href="https://github.com/github-samples/planventure/issues/2">This GitHub issue</a> describes our requirements. According to the issue, the User model needs an email, a password, and must include password hashing and timestamps. The Trip model needs to have a destination, start and end dates, coordinates, and an itinerary with a user relationship.</p> <p>To do all of this, we&rsquo;re going to use Copilot Edits.</p> <p>Click on the chat icon in your editor, and from the dropdown, select Edits.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/edits_dropdown.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/edits_dropdown.png?resize=578%2C210" alt='A screenshot showing the location of the "Edit" dropdown, near the model selector.' 
width="578" height="210" class="alignnone size-full wp-image-87001 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/edits_dropdown.png?w=578 578w, https://github.blog/wp-content/uploads/2025/04/edits_dropdown.png?w=300 300w" sizes="auto, (max-width: 578px) 100vw, 578px" /></a></p> <p>Open your <code>app.py</code> and <code>requirements.txt</code> file and drag them into the chat window. Then choose the Claude 3.5 Sonnet model from the model picker.</p> <p>Now send the following prompt to Copilot Edits:</p> <pre><code class="language-plaintext">Create SQLAlchemy User model with email, password_hash, and timestamps. Add code in new files. </code></pre> <p>Copilot Edits will then create a plan to do what you asked. It creates some new files and folders, as well as updates existing files. Accept the changes, review the code, and make any corrections necessary before saving the files.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-3" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/edits_prompt_flow.mp4#t=0.001?_=3" /><a href="https://github.blog/wp-content/uploads/2025/04/edits_prompt_flow.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/edits_prompt_flow.mp4#t=0.001</a></video></div> <p>Check the video for examples of things to look for, but remember that your changes might be different. Copilot can provide different suggestions for the same prompts.</p> <p>Now create a python script to create the database tables. Send Copilot Edits the following prompt:</p> <pre><code class="language-plaintext">Update code to be able to create the db tables with a python shell script. </code></pre> <p>As before, Copilot Edits will go to work and create or edit files as necessary. Accept the code and review the changes, making sure these changes address your needs. 
After saving the reviewed changes, go back to your terminal and initialize the database by running:</p> <pre><code class="language-plaintext">python3 init_db.py </code></pre> <p>If Copilot Edits created a different filename than <em>init_db.py</em>, use the filename in your project.</p> <p>After the script runs, you&rsquo;ll see a new <em>planventure.db</em> file in your project. However, you need to install an extension to see the tables. Navigate to the extensions tab and search for &ldquo;<a href="https://marketplace.visualstudio.com/items?itemName=qwtel.sqlite-viewer">SQLite Viewer</a>&rdquo;. Click <strong>Install</strong>, and then click on <em>planventure.db</em> to get a look at the table.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png?resize=1024%2C571" alt="A screenshot showing how to install SQLite Viewer." width="1024" height="571" class="alignnone size-full wp-image-87007 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/sqlite_viewer.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>Now that you&rsquo;ve created the User model, you need the Trip model. Navigate back to Copilot Edits and send:</p> <pre><code class="language-plaintext">Create SQLAlchemy Trip model with user relationship, destination, start date, end date, coordinates, and itinerary </code></pre> <p>By now you can probably guess what you need to do. It&rsquo;s time to accept the changes and review them, making sure to correct anything that wasn&rsquo;t exactly what you needed.
Go ahead and save the files after you&rsquo;re satisfied with the updates.</p> <p>Go back to your terminal and initialize the database again to add the trips table:</p> <pre><code class="language-plaintext">python3 init_db.py </code></pre> <p>Open <em>planventure.db</em> in your editor and verify that the trips table exists.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-4" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/init_db.mp4#t=0.001?_=4" /><a href="https://github.blog/wp-content/uploads/2025/04/init_db.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/init_db.mp4#t=0.001</a></video></div> <p>With both of these models added, it seems like a good time to commit these changes before moving on to the next step. Click the source control button in the left-hand bar in your editor. Hover over the <strong>Changes</strong> row, and click the plus icon to stage your changes for all of the files below.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-5" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/stage_commit.mp4#t=0.001?_=5" /><a href="https://github.blog/wp-content/uploads/2025/04/stage_commit.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/stage_commit.mp4#t=0.001</a></video></div> <p>In the box for a commit message, click the sparkles button. Copilot will then generate a commit message for you in the box, describing all of the changes! 
Don&rsquo;t forget to click the <strong>Commit</strong> button to commit your changes and update your repository.</p> <h2 id="step-4-adding-authentication" >Step 4: Adding authentication<a href="#step-4-adding-authentication" class="heading-link pl-2 text-italic text-bold" aria-label="Step 4: Adding authentication"></a></h2> <p>We all know that security is important, so it&rsquo;s time to add some authentication to our API. <a href="https://github.com/github-samples/planventure/issues/3">This issue</a> lays out our authentication requirements. According to the issue, we want to make sure we include password hashing and salt for extra security. To get started, we&rsquo;re once again going to count on Copilot Edits.</p> <p>Navigate back to Edits and send it the following prompt:</p> <pre><code class="language-plaintext">Create password hashing and salt utility functions for the User model. </code></pre> <p>As always, read the summary of what Copilot did and make sure to review each individual file for the relevant changes. Once you&rsquo;re satisfied with the updates, save the files.</p> <p>Next, you need to set up JWT token generation and validation. Send the following prompt to Copilot Edits:</p> <pre><code class="language-plaintext">Setup JWT token generation and validation functions. </code></pre> <p>Review the changes, make any adjustments as necessary, and then save the files. Remember that we can&rsquo;t predict what tweaks might be necessary, because Copilot might not give the exact same response to the same prompt. That&rsquo;s part of the nature of any generative AI tool, and part of what makes these tools so versatile. It&rsquo;s also why you always need to carefully review the suggestions and make sure you understand them.</p> <p>Now you need to create the actual route so that users can register. Copilot Edits comes to our aid again here.
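</p>

<p>Before wiring up the route, it helps to know roughly what those utility functions look like. A sketch, assuming Copilot reached for <code>werkzeug.security</code> for hashing and the PyJWT library for tokens (it may choose other libraries, names, or signatures):</p>

```python
# Sketch of the auth helpers. Library choices (werkzeug, PyJWT) and the
# SECRET_KEY handling are assumptions, not Copilot's exact output.
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from werkzeug.security import check_password_hash, generate_password_hash

SECRET_KEY = "change-me"  # in a real app, load this from configuration/environment

def hash_password(password: str) -> str:
    # generate_password_hash salts automatically; the salt is embedded in the hash
    return generate_password_hash(password)

def verify_password(password: str, password_hash: str) -> bool:
    return check_password_hash(password_hash, password)

def generate_token(user_id: int, expires_in_hours: int = 24) -> str:
    payload = {
        "sub": str(user_id),
        "exp": datetime.now(timezone.utc) + timedelta(hours=expires_in_hours),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def validate_token(token: str):
    """Return the user id from a valid token, or None if invalid or expired."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return int(payload["sub"])
    except jwt.InvalidTokenError:
        return None
```

<p>Because <code>generate_password_hash</code> salts each password automatically, the salt requirement from the issue is covered without extra code.</p>

<p>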
Send it the following prompt:</p> <pre><code class="language-plaintext">Create auth routes for user registration with email validation. </code></pre> <p>Go ahead and accept and review these changes. Don&rsquo;t forget to read the summary that Copilot provides in the Copilot Edits window to help with understanding the updates. Save the files.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-6" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/register_route.mp4#t=0.001?_=6" /><a href="https://github.blog/wp-content/uploads/2025/04/register_route.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/register_route.mp4#t=0.001</a></video></div> <p>And now you have enough code to go ahead and test the route! Head back to Bruno and create a <code>POST</code> request. You&rsquo;ll start with the same URL you used earlier, back when you sent the <code>GET</code> request to verify the server was up and running. Paste that URL into the <code>POST</code> box, and then add <code>/auth/register</code> to the end. So if your URL was <code>127.0.0.1:5000</code>, the full URL would be <code>127.0.0.1:5000/auth/register</code>. Click the <strong>Body</strong> tab and select <strong>JSON</strong> from the pull down menu at the top. Enter the following text into the large box for the body.</p> <pre><code class="language-plaintext">{ "email": "test@email.com", "password": "test1234" } </code></pre> <p>Such a secure password! Obviously, this is just for the demo. When you&rsquo;re creating actual accounts, make sure to use a stronger, more secure password. Note that you don&rsquo;t need to set up an auth token yet because you&rsquo;ll get that access token from the server when you register.</p> <p>Continuing with setting up our test, click the <strong>Headers</strong> tab. 
Add a header with the name <code>Content-Type</code> and a value of <code>application/json</code>.</p> <p>Now it&rsquo;s time to test the connection. Click the arrow at the end of the <code>POST</code> row. If everything is working and set up correctly, you&rsquo;ll receive a 200 response.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/200-response.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/200-response.png?resize=1024%2C571" alt="A screenshot showing the 200 response." width="1024" height="571" class="alignnone size-full wp-image-87014 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/200-response.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/200-response.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/200-response.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/200-response.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>Once you get that response, go back to VS Code and open the <em>planventure.db</em> file. Refresh the file and select the <strong>users</strong> table. The user you just added will appear in the table.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/users-table.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/users-table.png?resize=1024%2C571" alt="A screenshot of the users table."
width="1024" height="571" class="alignnone size-full wp-image-87015 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/users-table.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/users-table.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/users-table.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/users-table.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>With this piece complete, you now need to add the login route. In order to do this, you&rsquo;re going to keep using our good friend Copilot Edits. Send it the following prompt:</p> <pre><code class="language-plaintext">Create login route with JWT token generation. </code></pre> <p>Copilot will make suggestions to give you the full route that you can accept and test right away in Bruno. Switch back to Bruno, and hover over the <strong>POST register</strong> request in the left-hand window under <strong>Planventure</strong>. Click the <code>...</code> to show the context menu, and select <strong>Clone</strong>. Change the request name to <code>login</code> and click <strong>Clone</strong>. Click the arrow to send the command, and you should receive a message that the login was successful.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/login_route.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/login_route.png?resize=1024%2C571" alt="A screenshot of the message showing the login was successful." 
width="1024" height="571" class="alignnone size-full wp-image-87016 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/login_route.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/login_route.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/login_route.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/login_route.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>Congratulations, your login route is working! Users can now register and log in successfully.</p> <p>The next thing you need to do is add an auth middleware to protect your routes. Head back to Copilot Edits to continue coding using natural language and send it this prompt:</p> <pre><code class="language-plaintext">Create auth middleware to protect routes. </code></pre> <p>Review and accept the changes, then head back to Bruno to test the routes again. Navigate back to your <strong>POST login</strong> request, and click the arrow once again to send the command. If you receive a message that the login was successful, everything&rsquo;s working perfectly.</p> <p>Now that you&rsquo;ve finished adding authentication to your code, make sure to commit your changes and push them up to GitHub. Remember that you can use the sparkles button to have Copilot create a commit description for you! Just remember to review it before submitting.</p> <h2 id="step-5-adding-trips" id="step-5-adding-trips" >Step 5: Adding trips<a href="#step-5-adding-trips" class="heading-link pl-2 text-italic text-bold" aria-label="Step 5: Adding trips"></a></h2> <p>You&rsquo;ve got the users, but in order to use Planventure, users need to be able to add trips. The scope of this work is captured in <a href="https://github.com/github-samples/planventure/issues/4">this issue</a>. In order to cover our needs, the trip route needs full CRUD operations with a default itinerary template. 
Since we&rsquo;ve had so much success with it so far, let&rsquo;s keep using Copilot Edits.</p> <p>Send it the following prompt to get started:</p> <pre><code class="language-plaintext">Create Trip routes blueprint with CRUD operations. </code></pre> <p>The expectation is that you&rsquo;ll get the <code>CREATE</code>, <code>READ</code>, <code>UPDATE</code>, and <code>DELETE</code> routes with this prompt. Just like you did before when adding authentication, make sure to review the changes and that you understand what changed. Make any necessary adjustments, and then save the files.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/crud.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/crud.png?resize=1024%2C571" alt="A screenshot showing the full CRUD for trips route." width="1024" height="571" class="alignnone size-full wp-image-87017 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/crud.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/crud.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/crud.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/crud.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>Once you&rsquo;ve accepted these changes, use Copilot Chat to generate some sample data. It&rsquo;s much easier to have Copilot create it than to do so manually. Open up your chat window and send it the following prompt:</p> <pre><code class="language-plaintext">Create example json to test the trips route </code></pre> <p>Copilot will create some sample data that you can use to test your routes. Hover over the supplied JSON code and click the <strong>Copy</strong> button.</p> <p>Go back to Bruno, and clone the <strong>POST login</strong> request. Change the name to <strong>CREATE trip</strong> and click the <strong>Clone</strong> button. 
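</p>

<p>The sample data Copilot generates varies from run to run; it typically resembles something like this (all values here are hypothetical):</p>

```json
{
  "destination": "Lisbon, Portugal",
  "start_date": "2025-06-01",
  "end_date": "2025-06-07",
  "latitude": 38.7223,
  "longitude": -9.1393,
  "itinerary": {
    "day_1": ["Check in", "Explore Alfama"],
    "day_2": ["Day trip to Sintra"]
  }
}
```

<p>The exact field names need to match whatever your generated Trip route expects, so compare the sample against your own model before sending it.</p>

<p>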
Change the <strong>POST</strong> address from <code>&lt;IP&gt;/auth/login</code> to <code>&lt;IP&gt;/api/trips</code>. Click the <strong>Body</strong> tab and replace the body with the JSON you copied to the clipboard in the previous step.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-7" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/cloning_post_request.mp4#t=0.001?_=7" /><a href="https://github.blog/wp-content/uploads/2025/04/cloning_post_request.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/cloning_post_request.mp4#t=0.001</a></video></div> <p>Before sending this request, we need to add our authorization token. Navigate to the <strong>POST login</strong> request by clicking that tab at the top of the window. Click the arrow to send the command, and then copy the <code>token</code> value. Click the <strong>POST CREATE trip</strong> tab at the top, and then the <strong>Auth</strong> tab underneath the POST address. Make sure that the type of authorization is <strong>Bearer Token</strong>, and then replace any text in the <strong>Token</strong> field with the token you copied from the login request.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/bearer_token.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/bearer_token.png?resize=1024%2C571" alt="A screenshot showing the bearer token." 
width="1024" height="571" class="alignnone size-full wp-image-87018 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/bearer_token.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/bearer_token.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/bearer_token.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/bearer_token.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>If you run into an error, copy the error message and head back to VS Code to get Copilot&rsquo;s help debugging the error. Use the <code>/fix</code> slash command in the Copilot Chat window, and then paste the error message. To watch a demo of this process, you can check the video version of this <a href="https://youtu.be/CJUbQ1QiBUY">GitHub for Beginners episode</a>.</p> <p>After performing any necessary debugging, you should receive a response that the trip was successfully created. You can verify this by going back to VS Code, opening the <em>planventure.db</em> file, refreshing it, and looking at the <strong>trips</strong> table.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-8" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/post_debugging.mp4#t=0.001?_=8" /><a href="https://github.blog/wp-content/uploads/2025/04/post_debugging.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/post_debugging.mp4#t=0.001</a></video></div> <p>You can now add trips to your database! Now it&rsquo;s time to test getting a trip id.</p> <p>Go back to Bruno and clone the <strong>POST CREATE trip</strong> request. Rename it to <code>GET trip id</code> and click <strong>Clone</strong>. Click <strong>POST</strong> at the top to open a dropdown menu, and select <strong>GET</strong> to change the request to a GET request. Click the <strong>Body</strong> tab and delete the current body. 
Update the URL to include the ID of the trip you want to fetch. For example, <code>127.0.0.1:5000/api/trips/1</code>. Click the arrow to send the command, and the response should include all of the trip data in the database that matches that trip id. Just like before, if you run into any errors, you can use Copilot Chat to help you debug them until you&rsquo;re able to successfully receive a response.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/trip_by_id.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/trip_by_id.png?resize=1024%2C571" alt="A screenshot of a trip by id." width="1024" height="571" class="alignnone size-full wp-image-87023 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/trip_by_id.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/trip_by_id.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/trip_by_id.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/trip_by_id.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>With this working, it&rsquo;s time to add the default itinerary template. If you guessed that you&rsquo;d be using Copilot Edits to help you out here, you&rsquo;d be right! Go ahead and send it the following prompt:</p> <pre><code class="language-plaintext">Create function to generate default itinerary template. </code></pre> <p>As always, give the code changes a review and make sure you know what the code does. Save the files, and give it a test. Go back to Bruno, and select the <strong>POST CREATE trip</strong> request. Change something in the body, such as the destination, and send the request by clicking the arrow icon. 
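</p>

<p>For comparison, the itinerary helper usually ends up as a small pure function along these lines (the day structure and field names are assumptions; your generated version may differ):</p>

```python
# Sketch: build a default day-by-day itinerary covering a trip's date range.
from datetime import date, timedelta

def generate_default_itinerary(start_date: date, end_date: date) -> dict:
    """Return one empty plan per day, keyed day_1, day_2, and so on."""
    itinerary = {}
    num_days = (end_date - start_date).days + 1
    for i in range(num_days):
        day = start_date + timedelta(days=i)
        itinerary[f"day_{i + 1}"] = {
            "date": day.isoformat(),
            "morning": "Free time",
            "afternoon": "Free time",
            "evening": "Free time",
        }
    return itinerary
```

<p>A helper like this would be called from the trip-creation route whenever the request body omits an itinerary, so every new trip starts with a usable template.</p>

<p>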
If you run into any errors, use Copilot Chat to help you debug them and suggest code changes to address them.</p> <div style="width: 1938px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-9" width="1938" height="1080" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/debugging_copilot_edited.mp4#t=0.001?_=9" /><a href="https://github.blog/wp-content/uploads/2025/04/debugging_copilot_edited.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/debugging_copilot_edited.mp4#t=0.001</a></video></div> <p>When you successfully add a new trip, you then want to verify that it uses the default itinerary. Select the <strong>GET GET trip id</strong> request, and update the URL to match the id of the trip you just created. For example, if this was the second trip you added, change the last part of the URL to <code>/2</code>. Check the response, and verify that it&rsquo;s using the default itinerary. If it is, well done! You&rsquo;ve now addressed the requested changes for this issue! That means it&rsquo;s time to commit these changes.</p> <h2 id="dont-forget-about-cors" >Don&rsquo;t forget about CORS<a href="#dont-forget-about-cors" class="heading-link pl-2 text-italic text-bold" aria-label="Don&rsquo;t forget about CORS"></a></h2> <p>There&rsquo;s a lot more we can add to this API, but it&rsquo;s good enough for an MVP. Before declaring it done, let&rsquo;s configure cross-origin resource sharing (CORS) so the React frontend we&rsquo;ll build in the next episode can call the API. Head over to Copilot Edits, and send it the following prompt:</p> <pre><code class="language-plaintext">Setup CORS configuration for React frontend.
</code></pre> <p>Review the changes, then go ahead and accept them.</p> <p><a href="https://github.blog/wp-content/uploads/2025/04/cors_applied.png"><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/cors_applied.png?resize=1024%2C571" alt="A screenshot showing how to setup CORS configuration for React frontend." width="1024" height="571" class="alignnone size-full wp-image-87028 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/cors_applied.png?w=1426 1426w, https://github.blog/wp-content/uploads/2025/04/cors_applied.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/cors_applied.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/cors_applied.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a></p> <p>With this, you&rsquo;ve completed the MVP of the API. That means you just created an API with GitHub Copilot!</p> <h2 id="some-finishing-touches" id="some-finishing-touches" >Some finishing touches<a href="#some-finishing-touches" class="heading-link pl-2 text-italic text-bold" aria-label="Some finishing touches"></a></h2> <p>Before declaring this MVP complete, you should add some documentation to the README. Luckily, Copilot Chat helps speed up this process. Open up Copilot Chat and send it the following prompt:</p> <pre><code class="language-plaintext">@workspace create a detailed README about the Planventure API </code></pre> <p>Copilot will generate a README for you, describing how the API works. Hover over the text, select the <code>...</code> button, and select <strong>Insert into New File</strong>. Save the file as <code>README.md</code>, and you now have a valid README file for your project. Simple and easy! 
With this, there&rsquo;s no reason for anyone to have empty README files in their projects.</p> <div style="width: 1426px;" class="wp-video"><video class="wp-video-shortcode" id="video-86980-10" width="1426" height="794" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/readme_copilot.mp4#t=0.001?_=10" /><a href="https://github.blog/wp-content/uploads/2025/04/readme_copilot.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/readme_copilot.mp4#t=0.001</a></video></div> <p>You should also write some tests and include them in your project before declaring this fully complete, but that&rsquo;s beyond the scope of this episode. Just remember that it&rsquo;s something you should do, and you can use Copilot Chat to help you do it! Especially if you use the <code>/tests</code> slash command. Spoiler alert: we&rsquo;ll be covering this in a future episode!</p> <h2 id="your-next-steps" id="your-next-steps" >Your next steps<a href="#your-next-steps" class="heading-link pl-2 text-italic text-bold" aria-label="Your next steps"></a></h2> <p>That was a lot! But, you have now built an entire API and used the power of Copilot to do it. Not only that, but you used Copilot to help create documentation, making it easier for others to pick up, use, and contribute to your project.</p> <p>Check out <a href="https://gh.io/planventure">the repo</a> so you can build this project from scratch, and be sure to read the README so you know which branch to start from.</p> <p>Don&rsquo;t forget that you can <a href="https://gh.io/gfb-copilot">use GitHub Copilot for free</a>! If you have any questions, pop them in the <a href="https://github.com/orgs/community/discussions/152688">GitHub Community thread</a>, and we&rsquo;ll be sure to respond. 
Join us for the next part in this series, where we&rsquo;ll build a full app using this API we created.</p> <p>Happy coding!</p> </body></html> <p>The post <a href="https://github.blog/ai-and-ml/github-copilot/github-for-beginners-building-a-rest-api-with-copilot/">GitHub for Beginners: Building a REST API with Copilot</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> How the GitHub CLI can now enable triangular workflows - The GitHub Blog https://github.blog/?p=85920 2025-04-25T16:00:37.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>Most developers are familiar with the standard Git workflow. You create a branch, make changes, and push those changes back to the same branch on the main repository. Git calls this a centralized workflow. It&rsquo;s straightforward and works well for many projects.</p> <p>Sometimes, however, you might want to pull changes from a different branch directly into your feature branch, which helps you keep your branch updated without constantly needing to merge or rebase. At the same time, you&rsquo;ll still want to push local changes to your own branch. This is where triangular workflows come in.</p> <p>It&rsquo;s possible that some of you have already used triangular workflows, even without knowing it. When you fork a repo, contribute to your fork, then open a pull request back to the original repo, you&rsquo;re working in a triangular workflow. While this can work seamlessly on github.com, the process hasn&rsquo;t always been seamless with the <a href="https://cli.github.com/">GitHub CLI</a>.</p> <p>The GitHub CLI team has recently made improvements (released in <a href="https://github.com/cli/cli/releases/tag/v2.71.2">v2.71.2</a>) to better support these triangular workflows, ensuring that the <code>gh pr</code> commands work smoothly with your Git configurations.
So, whether you&rsquo;re working on a centralized workflow or a more complex triangular one, the GitHub CLI will be better equipped to handle your needs.</p> <p>If you&rsquo;re already familiar with how Git handles triangular workflows, feel free to skip ahead to learn about how to use <code>gh pr</code> commands with triangular workflows. Otherwise, let&rsquo;s get into the details of how Git and the GitHub CLI have historically differed, and how four-and-a-half years after it was first requested, we have finally unlocked managing pull requests using triangular workflows in the GitHub CLI.</p> <h2 id="first-a-lesson-in-git-fundamentals" id="first-a-lesson-in-git-fundamentals" >First, a lesson in Git fundamentals<a href="#first-a-lesson-in-git-fundamentals" class="heading-link pl-2 text-italic text-bold" aria-label="First, a lesson in Git fundamentals"></a></h2> <p>To provide a framework for what we set out to do, it&rsquo;s important to first understand some Git basics. Git, at its core, is a way to store and catalog changes on a repository and communicate those changes between copies of that repository. 
This workflow typically looks like the diagram below:</p> <figure id="attachment_86915" class="wp-caption alignnone mx-0"><a href="https://github.blog/wp-content/uploads/2025/03/triangular-image-1.png"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="457" height="512" src="https://github.blog/wp-content/uploads/2025/03/triangular-image-1.png?resize=457%2C512" alt="Figure 1: A typical git branch setup" class="width-fit size-full wp-image-86915 width-fit" srcset="https://github.blog/wp-content/uploads/2025/03/triangular-image-1.png?w=457 457w, https://github.blog/wp-content/uploads/2025/03/triangular-image-1.png?w=268 268w" sizes="(max-width: 457px) 100vw, 457px" /></a><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 1: A typical git branch setup</figcaption></figure> <p>The building blocks of this diagram illustrate two important Git concepts you likely use every day: a <strong>ref</strong> and <strong>push/pull</strong>.</p> <h3 id="refs" >Refs<a href="#refs" class="heading-link pl-2 text-italic text-bold" aria-label="Refs"></a></h3> <p>A <strong>ref</strong> is a reference to a repository and branch. It has two parts: the <strong>remote</strong>, usually a name like <em>origin</em> or <em>upstream</em>, and the <strong>branch</strong>. If the remote is the local repository, it is blank. So, in the example above, <em>origin/branch</em> in the purple box is a <strong>remote ref</strong>, referring to a branch named <em>branch</em> on the repository named <em>origin</em>, while <em>branch</em> in the green box is a <strong>local ref</strong>, referring to a branch named <em>branch</em> on the local machine.</p> <p>While working with GitHub, the remote ref is usually the repository you are hosting on GitHub.
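</p> <p>You can inspect both kinds of refs with everyday Git commands. The sketch below is self-contained and illustrative only: the <code>remote-repo</code> path, the <em>main</em> branch, and the identity settings are stand-ins, not details from this article:</p>

```shell
set -e
tmp=$(mktemp -d)
# A stand-in for a repository hosted on GitHub:
git init -q --initial-branch=main "$tmp/remote-repo"
git -C "$tmp/remote-repo" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
# Your local copy; cloning names the remote "origin" by default:
git clone -q "$tmp/remote-repo" "$tmp/work"
cd "$tmp/work"
git branch      # local refs,  e.g. "* main"
git branch -r   # remote refs, e.g. "origin/main"
```

<p>Here <em>main</em> is a local ref, while <em>origin/main</em> is the corresponding remote ref.</p> <p>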
In the diagram above, you can consider the purple box GitHub and the green box your local machine.</p> <h3 id="pushing-and-pulling" id="pushing-and-pulling" >Pushing and pulling<a href="#pushing-and-pulling" class="heading-link pl-2 text-italic text-bold" aria-label="Pushing and pulling"></a></h3> <p>A <strong>push</strong> and a <strong>pull</strong> refer to the same action, but from two different perspectives. Whether you are pushing or pulling is determined by whether you are sending or receiving the changes. I can push a commit to your repo, or you can pull that commit from my repo, and the references to that action would be the same.</p> <p>To disambiguate this, we will refer to different refs as the <strong>headRef</strong> or <strong>baseRef</strong>, where the <strong>headRef</strong> is sending the changes (<em>pushing</em> them) and the <strong>baseRef</strong> is receiving the changes (<em>pulling</em> them).</p> <figure id="attachment_85923" class="wp-caption alignnone mx-0"><img data-recalc-dims="1" decoding="async" width="864" height="128" src="https://github.blog/wp-content/uploads/2025/03/image2_e1d22b.png?resize=864%2C128" alt="Figure 2: Disambiguating headRef and baseRef for push/pull operations." class="width-fit size-full wp-image-85923 width-fit" srcset="https://github.blog/wp-content/uploads/2025/03/image2_e1d22b.png?w=864 864w, https://github.blog/wp-content/uploads/2025/03/image2_e1d22b.png?w=300 300w, https://github.blog/wp-content/uploads/2025/03/image2_e1d22b.png?w=768 768w" sizes="(max-width: 864px) 100vw, 864px" /><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 2: Disambiguating headRef and baseRef for push/pull operations</figcaption></figure> <p>When dealing with a branch, we&rsquo;ll often refer to the headRef of its pull operations as its <strong>pullRef</strong> and the baseRef of its push operations as its <strong>pushRef</strong>. 
That&rsquo;s because, in these instances, the working branch is the pull&rsquo;s baseRef and the push&rsquo;s headRef, so they&rsquo;re already disambiguated.</p> <h4 id="the-push-revision-syntax" >The <code>@{push}</code> revision syntax<a href="#the-push-revision-syntax" class="heading-link pl-2 text-italic text-bold" aria-label="The &lt;code&gt;@{push}&lt;/code&gt; revision syntax"></a></h4> <p>Turns out, Git has a handy built-in tool for referring to the pushRef for a branch: the <code>@{push}</code> revision syntax. You can usually determine a branch&rsquo;s pushRef by running the following command:</p> <p><code>git rev-parse --abbrev-ref @{push}</code></p> <p>This will result in a human-readable ref, like <strong>origin/branch</strong>, if one can be determined.</p> <h4 id="pull-requests" >Pull Requests<a href="#pull-requests" class="heading-link pl-2 text-italic text-bold" aria-label="Pull Requests"></a></h4> <p>On GitHub, a <strong>pull request</strong> is a proposal to integrate changes from one ref to another. In particular, it acts as a simple &ldquo;pause&rdquo; before performing the actual integration operation, often called a <strong>merge</strong>, when changes are being pushed from one ref to another. This pause allows for humans (code reviews) and robots (GitHub Copilot reviews and GitHub Actions workflows) to check the code before the changes are integrated. The name <em>pull request</em> came from this language specifically: You are requesting that a ref pulls your changes into itself.</p> <figure id="attachment_85924" class="wp-caption alignnone mx-0"><img data-recalc-dims="1" decoding="async" width="864" height="266" src="https://github.blog/wp-content/uploads/2025/03/image3_6e731a.png?resize=864%2C266" alt="Figure 3: Demonstrating how GitHub Pull Requests correspond to pushing and pulling."
class="width-fit size-full wp-image-85924 width-fit" srcset="https://github.blog/wp-content/uploads/2025/03/image3_6e731a.png?w=864 864w, https://github.blog/wp-content/uploads/2025/03/image3_6e731a.png?w=300 300w, https://github.blog/wp-content/uploads/2025/03/image3_6e731a.png?w=768 768w" sizes="(max-width: 864px) 100vw, 864px" /><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 3: Demonstrating how GitHub Pull Requests correspond to pushing and pulling</figcaption></figure> <h2 id="common-git-workflows" id="common-git-workflows" >Common Git workflows<a href="#common-git-workflows" class="heading-link pl-2 text-italic text-bold" aria-label="Common Git workflows"></a></h2> <p>Now that you understand the basics, let&rsquo;s talk about the workflows we typically use with Git every day.</p> <p>A <strong>centralized workflow</strong> is how most folks interact with Git and GitHub. In this configuration, any given branch is pushing and pulling from a remote ref with the same branch name. For most of us, this type of configuration is set up by default when we clone a repo and push a branch. It is the situation shown in Figure 1.</p> <p>In contrast, a <strong>triangular workflow</strong> pushes to and pulls from <em>different</em> refs. A common use case for this configuration is to pull directly from a remote repository&rsquo;s default branch into your local feature branch, eliminating the need to run commands like <code>git rebase &lt;default&gt;</code> or <code>git merge &lt;default&gt;</code> on your feature branch to ensure the branch you&rsquo;re working on is always up to date with the default branch. 
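</p> <p>Concretely, this kind of setup can be sketched with a couple of <code>git config</code> commands. The repository and the branch names <em>feature</em> and <em>main</em> below are hypothetical stand-ins for your feature branch and the default branch:</p>

```shell
set -e
tmp=$(mktemp -d)
# Throwaway repo; "feature" and "main" are illustrative branch names.
git init -q --initial-branch=main "$tmp/demo"
cd "$tmp/demo"
# Have "feature" pull from origin's default branch "main"...
git config branch.feature.remote origin
git config branch.feature.merge refs/heads/main
# ...and (as one way of doing it) keep `git push` targeting the
# same-named branch on the remote, i.e. origin/feature:
git config push.default current
git config branch.feature.merge   # prints: refs/heads/main
```

<p>With this in place, <code>git pull</code> on <em>feature</em> merges in changes from <em>origin/main</em>.</p> <p>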
However, when pushing changes, this configuration will typically push to a remote ref with the same branch name as the feature branch.</p> <figure id="attachment_86920" class="wp-caption alignnone mx-0"><a href="https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1600" height="703" src="https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?resize=1600%2C703" alt="Figure 4: juxtaposing centralized workflows with triangular workflows." class="width-fit size-full wp-image-86920 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?w=1600 1600w, https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/triangular-image-4.png?w=1536 1536w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 4: juxtaposing centralized workflows with triangular workflows.</figcaption></figure> <p>We complete the triangle when considering pull requests: the <strong>headRef</strong> is the <strong>pushRef</strong> for the local branch and the <strong>baseRef</strong> is the <strong>pullRef</strong> for the local branch:</p> <figure id="attachment_85926" class="wp-caption alignnone mx-0"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1549" height="1069" src="https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?resize=1549%2C1069" alt="Figure 5: a triangular workflow" class="width-fit size-full wp-image-85926 width-fit" srcset="https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?w=1549 1549w, https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?w=300 300w,
https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?w=768 768w, https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/03/image5_9b14d8.png?w=1536 1536w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 5: a triangular workflow</figcaption></figure> <p>We can go one step further and set up triangular workflows using <em>different</em> remotes as well. This most commonly occurs when you&rsquo;re developing on a fork. In this situation, you usually give the fork and source remotes different names. I&rsquo;ll use <em>origin</em> for the fork and <em>upstream</em> for the source, as these are common names used in these setups. This functions exactly the same as the triangular workflows above, but the <strong>remotes</strong> and <strong>branches</strong> on the <strong>pushRef</strong> and <strong>pullRef</strong> are different:</p> <figure id="attachment_86922" class="wp-caption alignnone mx-0"><a href="https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1600" height="704" src="https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?resize=1600%2C704" alt="Figure 6: juxtaposing triangular workflows and centralized workflows with different remotes such as with forks" class="width-fit size-full wp-image-86922 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?w=1600 1600w, https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/triangular-image-6.png?w=1536 1536w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a><figcaption class="text-mono 
color-fg-muted mt-14px f5-mktg">Figure 6: juxtaposing triangular workflows and centralized workflows with different remotes such as with forks</figcaption></figure> <h3 id="using-a-git-configuration-file-for-triangular-workflows" >Using a Git configuration file for triangular workflows<a href="#using-a-git-configuration-file-for-triangular-workflows" class="heading-link pl-2 text-italic text-bold" aria-label="Using a Git configuration file for triangular workflows"></a></h3> <p>There are two primary ways that you can set up a triangular workflow using the <a href="https://git-scm.com/docs/git-config">Git configuration &ndash; typically defined in a <code>.git/config</code> or <code>.gitconfig</code> file</a>. Before explaining these, let&rsquo;s take a look at what the relevant bits of a typical configuration look like in a repo&rsquo;s <code>.git/config</code> file for a centralized workflow:</p> <pre><code class="language-plaintext">[remote "origin"]
    url = https://github.com/OWNER/REPO.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "default"]
    remote = origin
    merge = refs/heads/default
[branch "branch"]
    remote = origin
    merge = refs/heads/branch
</code></pre> <p><em>Figure 7: A typical Git configuration setup found in .git/config</em></p> <p>The <code>[remote "origin"]</code> part assigns the name <em>origin</em> to the Git repository located at <code>github.com/OWNER/REPO.git</code>, so we can reference it elsewhere by that name. We can see that reference being used in the specific <code>[branch]</code> configurations for both the <em>default</em> and <em>branch</em> branches in their <code>remote</code> keys.
This key, in conjunction with the branch name, typically makes up the branch&rsquo;s <strong>pushRef</strong>: in this example, it is <em>origin/branch</em>.</p> <p>The <code>remote</code> and <code>merge</code> keys are combined to make up the branch&rsquo;s <strong>pullRef</strong>: in this example, it is <em>origin/branch</em>.</p> <h3 id="setting-up-a-triangular-branch-workflow" >Setting up a triangular branch workflow<a href="#setting-up-a-triangular-branch-workflow" class="heading-link pl-2 text-italic text-bold" aria-label="Setting up a triangular branch workflow"></a></h3> <p>The simplest way to assemble a triangular workflow is to set the branch&rsquo;s <code>merge</code> key to a different branch name, like so:</p> <pre><code class="language-plaintext">[branch "branch"]
    remote = origin
    merge = refs/heads/default
</code></pre> <p><em>Figure 8: a triangular branch&rsquo;s Git configuration found in .git/config</em></p> <p>This will result in the branch&rsquo;s <strong>pullRef</strong> being <em>origin/default</em>, but its <strong>pushRef</strong> remaining <em>origin/branch</em>, as shown in Figure 9.</p> <figure id="attachment_86923" class="wp-caption alignnone mx-0"><a href="https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1225" height="1066" src="https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png?resize=1225%2C1066" alt="Figure 9: A triangular branch workflow" class="width-fit size-full wp-image-86923 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png?w=1225 1225w, https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/triangular-image-9.png?w=1024 1024w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a><figcaption
class="text-mono color-fg-muted mt-14px f5-mktg">Figure 9: A triangular branch workflow</figcaption></figure> <h3 id="setting-up-a-triangular-fork-workflow" >Setting up a triangular fork workflow<a href="#setting-up-a-triangular-fork-workflow" class="heading-link pl-2 text-italic text-bold" aria-label="Setting up a triangular fork workflow"></a></h3> <p>Working with triangular forks requires a bit more customization than triangular branches because we are dealing with multiple remotes. Thus, our remotes in the Git config will look different than the one shown previously in Figure 7:</p> <pre><code class="language-plaintext">[remote "upstream"]
    url = https://github.com/ORIGINALOWNER/REPO.git
    fetch = +refs/heads/*:refs/remotes/upstream/*
[remote "origin"]
    url = https://github.com/FORKOWNER/REPO.git
    fetch = +refs/heads/*:refs/remotes/origin/*
</code></pre> <p><em>Figure 10: a Git configuration for a multi-remote Git setup found in .git/config</em></p> <p><em>Upstream</em> and <em>origin</em> are the most common names used in this construction, so I&rsquo;ve used them here, but they can be named anything you want<sup id="fnref-85920-1"><a href="#fn-85920-1" class="jetpack-footnote" title="Read footnote.">1</a></sup>.</p> <p>However, toggling a branch&rsquo;s <code>remote</code> key between <em>upstream</em> and <em>origin</em> won&rsquo;t actually set up a triangular fork workflow&mdash;it will just set up a centralized workflow with either of those remotes, like the centralized workflow shown in Figure 6.
Luckily, there are two common Git configuration options to change this behavior.</p> <h4 id="setting-a-branchs-pushremote" >Setting a branch&rsquo;s <code>pushremote</code><a href="#setting-a-branchs-pushremote" class="heading-link pl-2 text-italic text-bold" aria-label="Setting a branch&rsquo;s &lt;code&gt;pushremote&lt;/code&gt;"></a></h4> <p>A branch&rsquo;s configuration has a key called <code>pushremote</code> that does exactly what the name suggests: configures the remote that the branch will push to. A triangular fork workflow config using <code>pushremote</code> may look like this:</p> <pre><code class="language-plaintext">[branch "branch"]
    remote = upstream
    merge = refs/heads/default
    pushremote = origin
</code></pre> <p><em>Figure 11: a triangular fork&rsquo;s Git config using pushremote found in .git/config</em></p> <p>This assembles the triangular fork repo we see in Figure 12. The <strong>pullRef</strong> is <em>upstream/default</em>, as determined by combining the <code>remote</code> and <code>merge</code> keys, while the <strong>pushRef</strong> is <em>origin/branch</em>, as determined by combining the <code>pushremote</code> key and the branch name.</p> <figure id="attachment_86924" class="wp-caption alignnone mx-0"><a href="https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1226" height="1067" src="https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png?resize=1226%2C1067" alt="Figure 12: A triangular fork workflow" class="width-fit size-full wp-image-86924 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png?w=1226 1226w, https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/triangular-image-12.png?w=1024 1024w"
sizes="auto, (max-width: 1000px) 100vw, 1000px" /></a><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 12: A triangular fork workflow</figcaption></figure> <h4 id="setting-a-repos-remote-pushdefault" >Setting a repo&rsquo;s <code>remote.pushDefault</code><a href="#setting-a-repos-remote-pushdefault" class="heading-link pl-2 text-italic text-bold" aria-label="Setting a repo&rsquo;s &lt;code&gt;remote.pushDefault&lt;/code&gt;"></a></h4> <p>To configure all branches in a repository to have the same behavior as what you&rsquo;re seeing in Figure 12, you can instead set the repository&rsquo;s <code>pushDefault</code>. The config for this is below:</p> <pre><code class="language-plaintext">[remote]
    pushDefault = origin
[branch "branch"]
    remote = upstream
    merge = refs/heads/default
</code></pre> <p><em>Figure 13: a triangular fork&rsquo;s Git config using remote.pushDefault found in .git/config</em></p> <p>This assembles the same triangular fork repo as shown in Figure 12 above; however, this time the <strong>pushRef</strong> is determined by combining the <code>remote.pushDefault</code> key and the branch name, resulting in <em>origin/branch</em>.</p> <p>When using the branch&rsquo;s <code>pushremote</code> and the repo&rsquo;s <code>remote.pushDefault</code> keys together, Git will preferentially resolve the branch&rsquo;s configuration over the repo&rsquo;s, so the remote set on <code>pushremote</code> supersedes the remote set on <code>remote.pushDefault</code>.</p> <h2 id="updating-the-gh-pr-command-set-to-reflect-git" >Updating the <code>gh pr</code> command set to reflect Git<a href="#updating-the-gh-pr-command-set-to-reflect-git" class="heading-link pl-2 text-italic text-bold" aria-label="Updating the &lt;code&gt;gh pr&lt;/code&gt; command set to reflect Git"></a></h2> <p>Previously, the <code>gh pr</code> command set did not resolve
<strong>pushRefs</strong> and <strong>pullRefs</strong> in the same way that Git does. This was due to technical design decisions that made this change both difficult and complex. Instead of discussing that complexity&mdash;a big enough topic for a whole article in itself&mdash;I&rsquo;m going to focus here on what you can now <em>do</em> with the updated <code>gh pr</code> command set.</p> <p><strong>If you set up triangular Git workflows in the manner described above, we will automatically resolve <code>gh pr</code> commands in accordance with your Git configuration.</strong></p> <p>To be slightly more specific, when trying to resolve a pull request for a branch, the GitHub CLI will respect whatever <code>@{push}</code> resolves to first, if it resolves at all. Then it will fall back to respect a branch&rsquo;s <code>pushremote</code>, and if that isn&rsquo;t set, finally look for a repo&rsquo;s <code>remote.pushDefault</code> config settings.</p> <p>What this means is that the CLI is assuming your branch&rsquo;s <strong>pullRef</strong> is the pull request&rsquo;s <strong>baseRef</strong> and the branch&rsquo;s <strong>pushRef</strong> is the pull request&rsquo;s <strong>headRef</strong>. In other words, if you&rsquo;ve configured <code>git pull</code> and <code>git push</code> to work, then <code>gh pr</code> commands should just work.<sup id="fnref-85920-2"><a href="#fn-85920-2" class="jetpack-footnote" title="Read footnote.">2</a></sup> The diagram below, a general version of Figure 5, demonstrates this nicely:</p> <figure id="attachment_85930" class="wp-caption alignnone mx-0"><img data-recalc-dims="1" loading="lazy" decoding="async" width="965" height="719" src="https://github.blog/wp-content/uploads/2025/03/image9.png?resize=965%2C719" alt="Figure 14: the triangular workflow supported by the GitHub CLI with respect to a branch&rsquo;s pullRef and pushRef.
This is the generalized version of Figure 5" class="width-fit size-full wp-image-85930 width-fit" srcset="https://github.blog/wp-content/uploads/2025/03/image9.png?w=965 965w, https://github.blog/wp-content/uploads/2025/03/image9.png?w=300 300w, https://github.blog/wp-content/uploads/2025/03/image9.png?w=768 768w" sizes="auto, (max-width: 965px) 100vw, 965px" /><figcaption class="text-mono color-fg-muted mt-14px f5-mktg">Figure 14: the triangular workflow supported by the GitHub CLI with respect to a branch&rsquo;s pullRef and pushRef. This is the generalized version of Figure 5</figcaption></figure> <h2 id="conclusion" id="conclusion" >Conclusion<a href="#conclusion" class="heading-link pl-2 text-italic text-bold" aria-label="Conclusion"></a></h2> <p>We&rsquo;re constantly working to improve the GitHub CLI, and we&rsquo;d like the behavior of the GitHub CLI to reasonably reflect the behavior of Git. This was a team effort&mdash;everyone contributed to understanding, reviewing, and testing the code to enable this enhanced <code>gh pr</code> command set functionality.</p> <p>It also couldn&rsquo;t have happened without the support of our contributors, so we extend our thanks to them:</p> <ul> <li><code>@Frederick888</code> for opening the <a href="https://github.com/cli/cli/pull/9208">original pull request</a> </li> <li><code>@benknoble</code> for his support with pull request review and feedback </li> <li><code>@phil-blain</code> for <a href="https://github.com/cli/cli/issues/575#issuecomment-668213138">highlighting the configurations</a> we&rsquo;ve talked about here on the <a href="https://github.com/cli/cli/issues/575">original issue</a> </li> <li><code>@neutrinoceros</code> and <code>@rd-yan-farba</code> for reporting a <a href="https://github.com/search?q=repo%3Acli%2Fcli+10352+10346&amp;type=issues">couple of bugs</a> that the team fixed in <a href="https://github.com/cli/cli/releases/tag/v2.66.1">v2.66.1</a></li> <li><code>@pdunnavant</code> for <a 
href="https://github.com/cli/cli/issues/10857">reporting the bug</a> that we fixed in v2.71.1</li> <li><code>@cs278</code> for <a href="https://github.com/cli/cli/issues/10862">reporting the bug</a> that we fixed in v2.71.2.</li> </ul> <p>CLI native support for triangular workflows was 4.5 years in the making, and we&rsquo;re proud to have been able to provide this update for the community.</p> <p>The GitHub CLI Team<br> <code>@andyfeller</code>, <code>@babakks</code>, <code>@bagtoad</code>, <code>@jtmcg</code>, <code>@mxie</code>, <code>@RyanHecht</code>, and <code>@williammartin</code></p> <div class="footnotes"> <hr> <ol> <li id="fn-85920-1"> Some commands in gh are opinionated about remote names and will resolve remotes in this order: upstream, github, origin, <code>&lt;other remotes unstably sorted&gt;</code>. There is a convenience command you can run to supersede this: <code>gh repo set-default [&lt;repository&gt;]</code> overrides the default behavior above and preferentially resolves <code>&lt;repository&gt;</code> as the default remote repo.&nbsp;<a href="#fnref-85920-1" title="Return to main content.">&#8617;</a> </li> <li id="fn-85920-2"> If you find a Git configuration that doesn&rsquo;t work, please open an issue in the OSS repo so we can fix it.&nbsp;<a href="#fnref-85920-2" title="Return to main content.">&#8617;</a> </li> </ol> </div> </body></html> <p>The post <a href="https://github.blog/open-source/git/how-the-github-cli-can-now-enable-triangular-workflows/">How the GitHub CLI can now enable triangular workflows</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> A guide to deciding what AI model to use in GitHub Copilot - The GitHub Blog https://github.blog/?p=86942 2025-04-24T16:00:51.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>To ensure that you have access to the best technology available, we&rsquo;re continuously
adding support for new models to <a href="https://github.com/features/copilot">GitHub Copilot</a>. That being said, we know it can be hard to keep up with so many new models being released all the time.</p> <p>All of this raises an obvious question: Which model should you use?</p> <p>You can <a href="https://github.blog/ai-and-ml/github-copilot/which-ai-model-should-i-use-with-github-copilot/">read our recent blog post</a> for an overview of the models currently available in Copilot and their strengths, or <a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task">check out our documentation</a> for a deep dive comparing different models and tasks. But the AI landscape moves quickly. <strong>In this article we&rsquo;ll explore a framework&mdash;including a few strategies&mdash;for evaluating whether any given AI model is a good fit for <em>your</em> use, even as new models continue to appear at a rapid pace.</strong></p> <p>It&rsquo;s hard to go wrong with our base model, which has been fine-tuned specifically for programming-related tasks. But depending on what you&rsquo;re working on, you likely have varying needs and preferences. There&rsquo;s no single &ldquo;best&rdquo; model. Some may favor a more verbose model for chat, while others prefer a terse one, for example.</p> <p>We spoke with several developers about their model selection process. 
Keep reading to discover how to apply their strategies to your own needs.</p> <p><em>&#128161; Watch the video below for tips on prompt engineering to get the best results.</em></p> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/LAF-lACf2QY?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <h2 id="why-use-multiple-models" id="why-use-multiple-models" >Why use multiple models?<a href="#why-use-multiple-models" class="heading-link pl-2 text-italic text-bold" aria-label="Why use multiple models?"></a></h2> <p>There&rsquo;s no reason you have to pick one model and stick with it. Since you can easily switch between models for both <a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat">chat</a> and <a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion">code completion</a> with GitHub Copilot, you can use different models for different use cases.</p> <figure class="gh-full-blockquote mx-0 pl-6 mt-6 mt-md-7 mb-7 mb-md-8"><blockquote><p>It's kind of like dogfooding your own stack: You won&rsquo;t know if it really fits your workflow until you've shipped some real code with it.</p></blockquote><figcaption class="text-mono color-fg-muted f5-mktg mt-3"> - Anand Chowdhary, FirstQuadrant CTO and co-founder</figcaption></figure> <h3 id="chat-vs-code-completion" id="chat-vs-code-completion" >Chat vs. code completion<a href="#chat-vs-code-completion" class="heading-link pl-2 text-italic text-bold" aria-label="Chat vs. 
code completion"></a></h3> <p>Using one model for chat and another for autocomplete is one of the most common patterns we see among developers. Generally, developers prefer autocompletion models because they&rsquo;re fast and responsive, which they need if they&rsquo;re looking for suggestions as they think and type. Developers are more tolerant of latency in chat, where they&rsquo;re in more of an exploratory state of mind (like considering a complex refactoring job, for instance).</p> <h3 id="reasoning-models-for-certain-programming-tasks" >Reasoning models for certain programming tasks<a href="#reasoning-models-for-certain-programming-tasks" class="heading-link pl-2 text-italic text-bold" aria-label="Reasoning models for certain programming tasks"></a></h3> <p>Reasoning models like OpenAI o1 often respond more slowly than traditional LLMs such as GPT-4o or Claude 3.5 Sonnet. That&rsquo;s in large part because these models break a prompt down into parts and consider multiple approaches to a problem. That introduces latency in their response times, but makes them more effective at completing complex tasks. Many developers prefer these more deliberative models for particular tasks.</p> <p>For instance, Fatih Kadir Ak&#305;n, a developer relations manager, uses o1 when starting new projects from scratch. &ldquo;Reasoning models better &lsquo;understand&rsquo; my vision and create more structured projects than non-reasoning models,&rdquo; he explains.</p> <p>FirstQuadrant CTO and co-founder Anand Chowdhary favors reasoning models for large-scale code refactoring jobs. &ldquo;A model that rewrites complex backend code without careful reasoning is rarely accurate the first time,&rdquo; he says. 
&ldquo;Seeing the thought process also helps me understand the changes.&rdquo;</p> <p>When creating technical interview questions for her newsletter, Cassidy Williams, GitHub&rsquo;s Senior Director of Developer Advocacy, mixes models for certain tasks. When she writes a question, she uses GPT-4o to refine the prose, and then Claude 3.7 Sonnet Thinking to verify code accuracy. &ldquo;Reasoning models help ensure technical correctness because of their multi-step process,&rdquo; she says. &ldquo;If they initially get something wrong, they often correct themselves in later steps so the final answer is more accurate.&rdquo;</p> <figure class="gh-full-blockquote mx-0 pl-6 mt-6 mt-md-7 mb-7 mb-md-8"><blockquote><p>There&rsquo;s some subjectivity, but I compare model output based on the code structure, patterns, comments, and adherence to best practices.</p></blockquote><figcaption class="text-mono color-fg-muted f5-mktg mt-3"> - Portilla Edo, cloud infrastructure engineering lead</figcaption></figure> <h2 id="what-to-look-for-in-a-new-ai-model" >What to look for in a new AI model<a href="#what-to-look-for-in-a-new-ai-model" class="heading-link pl-2 text-italic text-bold" aria-label="What to look for in a new AI model"></a></h2> <p>Let&rsquo;s say a new model just dropped and you&rsquo;re ready to try it out. Here are a few things to consider before making it your new go-to.</p> <h3 id="recentness" >Recency<a href="#recentness" class="heading-link pl-2 text-italic text-bold" aria-label="Recency"></a></h3> <p>Different models use different training data. That means one model might have more recent data than another, and therefore might be trained on newer versions of the programming languages, frameworks, and libraries you use.</p> <p>&ldquo;When I&rsquo;m trying out a new model, one of the first things I do is check how up to date it is,&rdquo; says Xavier Portilla Edo, a cloud infrastructure engineering lead. 
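</p>
<p>One concrete way to run that check (a hypothetical sketch, not from the article): open a dependency manifest that lists package names but leaves the versions blank, and see what autocomplete fills in. In an npm project, that could be a bare-bones <code>package.json</code>:</p>

```json
{
  "name": "model-recency-probe",
  "private": true,
  "dependencies": {
    "astro": "",
    "react": "",
    "tailwindcss": ""
  }
}
```

<p>If the versions the model suggests trail the current releases by a major version or more, its training data is likely out of date.</p>
<p>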
He typically does this by creating a manifest file for the project and checking which version numbers Copilot autocomplete suggests. &ldquo;If the versions are quite old, I&rsquo;ll move on,&rdquo; he says.</p> <h3 id="speed-and-responsiveness" >Speed and responsiveness<a href="#speed-and-responsiveness" class="heading-link pl-2 text-italic text-bold" aria-label="Speed and responsiveness"></a></h3> <p>As mentioned, developers tend to tolerate more latency in chat than in autocomplete. But responsiveness is still important in chat. &ldquo;I enjoy bouncing ideas off a model and getting feedback,&rdquo; says Rishab Kumar, a staff developer evangelist at Twilio. &ldquo;For that type of interaction, I need fast responses so I can stay in the flow.&rdquo;</p> <h3 id="accuracy" >Accuracy<a href="#accuracy" class="heading-link pl-2 text-italic text-bold" aria-label="Accuracy"></a></h3> <p>Naturally, you need to evaluate which models produce the best code. &ldquo;There&rsquo;s some subjectivity, but I compare model output based on the code structure, patterns, comments, and adherence to best practices,&rdquo; Portilla Edo says. &ldquo;I also look at how readable and maintainable the code is&mdash;does it follow naming conventions? Is it modular? Are the comments helpful or just restating what the code does? These are all signals of quality that go beyond whether the code simply runs.&rdquo;</p> <h2 id="how-to-test-an-ai-model-in-your-workflow" >How to test an AI model in your workflow<a href="#how-to-test-an-ai-model-in-your-workflow" class="heading-link pl-2 text-italic text-bold" aria-label="How to test an AI model in your workflow"></a></h2> <p>OK, so now you know what to look for in a model. But how do you actually evaluate it for responsiveness and correctness? 
You use it, of course.</p> <h3 id="start-with-a-simple-app" >Start with a simple app<a href="#start-with-a-simple-app" class="heading-link pl-2 text-italic text-bold" aria-label="Start with a simple app"></a></h3> <p>Ak&#305;n will generally start with a simple todo app written in vanilla JavaScript. &ldquo;I just check the code, and how well it&rsquo;s structured,&rdquo; he says. Similarly, Kumar will start with a websocket server in Python. The idea is to start with something that you understand well enough to evaluate, and then layer on more complexity. &ldquo;Eventually I&rsquo;ll see if it can build something in 3D using Three.js,&rdquo; Ak&#305;n says.</p> <p>Portilla Edo starts by prompting a new model he wants to evaluate in Copilot Chat. &ldquo;I usually ask it for simple things, like a function in Go, or a simple HTML file,&rdquo; he says. Then he moves on to autocompletion to see how the model performs there.</p> <h3 id="use-it-as-a-daily-driver-for-a-while" >Use it as a &ldquo;daily driver&rdquo; for a while<a href="#use-it-as-a-daily-driver-for-a-while" class="heading-link pl-2 text-italic text-bold" aria-label="Use it as a &ldquo;daily driver&rdquo; for a while"></a></h3> <p>Chowdhary prefers to just jump in and start using a model. &ldquo;When a new model drops, I swap it into my workflow as my daily driver and just live with it for a bit,&rdquo; he says. &ldquo;Available benchmarks and tests only tell you part of the story. I think the real test is seeing if it actually improves your day to day.&rdquo;</p> <p>For example, he checks to see if it actually speeds up his debugging jobs or produces cleaner refactors. &ldquo;It&rsquo;s kind of like dogfooding your own stack: You won&rsquo;t know if it really fits your workflow until you&rsquo;ve shipped some real code with it,&rdquo; he says. 
&ldquo;After evaluating it for a bit, I decide whether to stick with the new model or revert to my previous choice.&rdquo;</p> <h2 id="take-this-with-you" id="take-this-with-you" >Take this with you<a href="#take-this-with-you" class="heading-link pl-2 text-italic text-bold" aria-label="Take this with you"></a></h2> <p>What just about everyone agrees on is that the best way to evaluate a model is to use it.</p> <p>The important thing is to keep learning. &ldquo;You don&rsquo;t need to be switching models all the time, but it&rsquo;s important to know what&rsquo;s going on,&rdquo; Chowdhary says. &ldquo;The state of the art is moving quickly. It&rsquo;s easy to get left behind.&rdquo;</p> <h3 id="additional-resources" id="additional-resources" >Additional resources<a href="#additional-resources" class="heading-link pl-2 text-italic text-bold" aria-label="Additional resources"></a></h3> <ul> <li><a href="http://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task">Choosing the right AI model for your task</a> </li> <li><a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/comparing-ai-models-using-different-tasks">Examples for AI model comparison</a> </li> <li><a href="https://github.blog/ai-and-ml/github-copilot/which-ai-model-should-i-use-with-github-copilot/">Which AI models should I use with GitHub Copilot?</a></li> </ul> <div class="post-content-cta"><p><a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task">Learn more about AI models.</a></p> </div> </body></html> <p>The post <a href="https://github.blog/ai-and-ml/github-copilot/a-guide-to-deciding-what-ai-model-to-use-in-github-copilot/">A guide to deciding what AI model to use in GitHub Copilot</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Achieve real-time interaction: Build with the Live API - Google Developers Blog 
https://developers.googleblog.com/en/achieve-real-time-interaction-build-with-the-live-api/ 2025-04-23T21:24:02.000Z Explore real world applications for the Live API for Gemini models, now updated to include enhanced features for real-time audio, video, and text processing, improved session management, control over interactions, and richer output options. Get ready for Google I/O: Program lineup revealed - Google Developers Blog https://developers.googleblog.com/en/google-io-program-lineup-revealed/ 2025-04-23T17:04:04.000Z Google I/O's agenda is live, with keynotes and sessions scheduled for May 20-21, focusing on AI advancements, Android development, and web technologies. Register now to explore the full program, join us during the event for livestreams, on-demand sessions, and codelabs. From prompt to production: Building a landing page with Copilot agent mode - The GitHub Blog https://github.blog/?p=86884 2025-04-23T16:06:05.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>GitHub Copilot has quickly become an integral part of how I build. Whether I&rsquo;m exploring new ideas or scaffolding full pages, using Copilot&rsquo;s agent mode in my IDE helps me move faster&mdash;and more confidently&mdash;through each step of the development process.</p> <p><a href="https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/#agent-mode-in-vs-code">GitHub Copilot agent mode</a> is an interactive chat experience built right into your IDE that turns Copilot into an active participant in your development workflow. 
After you give it a prompt, agent mode streamlines complex coding tasks by autonomously iterating on its own code, identifying and fixing errors, suggesting and executing terminal commands, and resolving runtime issues with self-healing capabilities.</p> <p>And here&rsquo;s the best part: You can attach images, reference files, and give natural language instructions, and Copilot will generate and modify code directly in your project!</p> <p>In this post, I&rsquo;ll walk you through how I built a <strong>developer-focused landing page&mdash;</strong>from product requirements to code&mdash;using GitHub Copilot agent mode and the Claude 3.5 Sonnet model. This kind of build could easily take a few hours if I did it all by myself. But with Copilot, I had a working prototype in <strong>under 30 minutes!</strong> You&rsquo;ll see how I used design artifacts, inline chat, and Copilot&rsquo;s awareness of context to go from idea &rarr; design &rarr; code, with minimal friction.</p> <p><strong>You can also watch the full build in the video above!</strong></p> <aside class="p-4 p-md-6 post-aside--large"><p class="h5-mktg gh-aside-title">Not sure how to use agent mode in GitHub Copilot?</p><p>Don&rsquo;t sweat it&mdash;we have a guide for you on everything you need to know to get started (plus, details on how to use other GitHub Copilot features, too). <a href="https://github.blog/ai-and-ml/github-copilot/mastering-github-copilot-when-to-use-ai-agent-mode/">Learn more &gt;</a></p> </aside> <h2 id="designing-with-ai-from-prd-to-ui" id="designing-with-ai-from-prd-to-ui" >Designing with AI: From PRD to UI<a href="#designing-with-ai-from-prd-to-ui" class="heading-link pl-2 text-italic text-bold" aria-label="Designing with AI: From PRD to UI"></a></h2> <p>Before I wrote a single line of code, I needed a basic product vision. 
I started by <a href="https://github.com/copilot">using GitHub Copilot on GitHub.com</a> to generate a lightweight product requirements document (PRD) using GPT-4o. Here was my prompt:</p> <p><em>&gt; &ldquo;Describe a landing page for developers in simple terms.&rdquo;</em></p> <p>Copilot returned a structured but simple <a href="https://github.com/copilot/share/0a631304-4100-80a1-a951-3e0864370837">outline of a PRD</a> for a developer-focused landing page. I then passed this PRD into Claude 3.5 Sonnet and asked it to generate a design based on that prompt.</p> <p><img data-recalc-dims="1" fetchpriority="high" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?resize=1024%2C576" alt='An image showing the interface for GitHub Copilot responding to the prompt "Describe a landing page for developers in simple terms" with a structured but simple outline of a PRD for a developer-focused landing page.' width="1024" height="576" class="alignnone size-full wp-image-86885 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?w=1920 1920w, https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/copilot-claude-prd-design.png?w=1536 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></p> <p>Claude gave me a clean, organized layout with common landing page sections: a hero, feature list, API examples, a dashboard preview, and more. 
This was more than enough for me to get started.</p> <p>You can explore the full design that Claude <a href="https://gh.io/devflow-design">built here</a>; it&rsquo;s pretty cool.</p> <h2 id="setting-up-the-project" id="setting-up-the-project" >Setting up the project<a href="#setting-up-the-project" class="heading-link pl-2 text-italic text-bold" aria-label="Setting up the project"></a></h2> <p>For the tech stack, I chose <a href="https://astro.build/">Astro</a> because of its performance and flexibility. I paired it with Tailwind CSS and React for styling and component architecture. I started in a blank directory and ran the following commands:</p> <pre><code class="language-plaintext">npm create astro@latest npx astro add react npx astro add tailwind </code></pre> <p>I initialized the project, configured Tailwind, and opened it in VS Code with GitHub Copilot agent mode enabled (<a href="https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode">learn how to enable it with our docs</a>!). Once the server was running, I was ready to start building.</p> <h2 id="building-section-by-section-with-copilot-agent-mode" id="building-section-by-section-with-copilot-agent-mode" ><strong>Building section by section with Copilot agent mode</strong><a href="#building-section-by-section-with-copilot-agent-mode" class="heading-link pl-2 text-italic text-bold" aria-label="&lt;strong&gt;Building section by section with Copilot agent mode&lt;/strong&gt;"></a></h2> <p>Copilot agent mode really shines when translating visual designs into production-ready code because it understands both image and code context in your project. 
By attaching a screenshot and specifying which file to edit, I could prompt it to scaffold new components, update layout structure, and even apply Tailwind styles&mdash;all without switching tabs or writing boilerplate manually.</p> <p>For our project here, this meant I could take screenshots of each section from Claude&rsquo;s design and drop them directly into Copilot&rsquo;s context window.</p> <div class="content-table-wrap"><table style="border: 1px black"> <tbody> <tr> <td>&#128161; <strong>Pro tip:</strong> When building from a visual design like this, I recommend working on one section at a time. This not only keeps the context manageable for the model, but also makes it easier to debug if something goes off track. You&rsquo;ll know exactly where to look! </td> </tr> </tbody> </table></div> <h3 id="creating-the-hero-and-navigation-section" id="creating-the-hero-and-navigation-section" >Creating the hero and navigation section<a href="#creating-the-hero-and-navigation-section" class="heading-link pl-2 text-italic text-bold" aria-label="Creating the hero and navigation section"></a></h3> <p>I opened <code>index.astro</code>, attached the design screenshot, and typed the following prompt:</p> <p><em>&gt; &ldquo;Update index.astro to reflect the attached design. 
Add a new navbar and hero section to start the landing page.&rdquo;</em></p> <p>Copilot agent mode then returned the following:</p> <ul> <li>Created <code>Navbar.astro</code> and <code>Hero.astro</code> </li> <li>Updated <code>index.astro</code> to render them </li> <li>Applied Tailwind styling based on the visual layout</li> </ul> <p>And here&rsquo;s what I got:</p> <div style="width: 1706px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]--> <video class="wp-video-shortcode" id="video-86884-1" width="1706" height="1096" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing.mp4#t=0.001?_=1" /><a href="https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing.mp4#t=0.001</a></video></div> <p>Now, this is beautiful! Though it doesn&rsquo;t have the image on the right <a href="https://gh.io/devflow-design">per the design</a>, it did a very good job of getting the initial design down. We&rsquo;ll go back in later to update the section to be exactly what we want.</p> <h2 id="commit-early-and-often" id="commit-early-and-often" >Commit early and often<a href="#commit-early-and-often" class="heading-link pl-2 text-italic text-bold" aria-label="Commit early and often"></a></h2> <div class="content-table-wrap"><table style="border: 1px black"> <tbody> <tr> <td>&#128161; <strong>Pro tip:</strong> When building with AI tools, <strong>commit early and often</strong>. I&rsquo;ve seen too many folks lose progress when a prompt goes sideways. </td> </tr> </tbody> </table></div> <p>And in case you didn&rsquo;t know, GitHub Copilot can help here too. After staging your changes in the Source Control panel, click the &#10024; sparkles icon to automatically generate a commit message. 
It&rsquo;s a small step that can save you a lot of time (and heartache).</p> <div style="width: 1614px;" class="wp-video"><video class="wp-video-shortcode" id="video-86884-2" width="1614" height="908" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/dev-flow-commit-code.mp4#t=0.001?_=2" /><a href="https://github.blog/wp-content/uploads/2025/04/dev-flow-commit-code.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/dev-flow-commit-code.mp4#t=0.001</a></video></div> <h2 id="improve-accuracy-with-copilot-custom-instructions" id="improve-accuracy-with-copilot-custom-instructions" >Improve accuracy with Copilot custom instructions<a href="#improve-accuracy-with-copilot-custom-instructions" class="heading-link pl-2 text-italic text-bold" aria-label="Improve accuracy with Copilot custom instructions"></a></h2> <p>One of the best ways to improve the quality of GitHub Copilot&rsquo;s suggestions&mdash;especially in multi-file projects&mdash;is by providing it with <a href="https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot"><strong>custom instructions</strong></a>. These are short, structured notes that describe your tech stack, project structure, and any conventions or tools you&rsquo;re using.</p> <p>Instead of repeatedly adding this contextual detail to your chat questions, you can create a file in your repository that automatically adds this information for you. 
The additional information won&rsquo;t be displayed in the chat, but is available to Copilot&mdash;allowing it to generate higher-quality responses.</p> <p>To give Copilot better context, I created a <code>.github/copilot-instructions.md</code> file describing my tech stack:</p> <ul> <li>Astro v5 </li> <li>Tailwind CSS v4 </li> <li>React </li> <li>TypeScript</li> </ul> <p>When Copilot agent mode referenced this file while making suggestions, I noticed the results became more accurate and aligned with my setup.</p> <p>Here&rsquo;s what some of the file looked like:</p> <pre><code class="language-plaintext"># GitHub Copilot Project Instructions ## Project Overview This is an Astro project that uses React components and Tailwind CSS for styling. When making suggestions, please consider the following framework-specific details and conventions. ## Tech Stack - Astro v5.x - React as UI library - Tailwind CSS for styling (v4.x) - TypeScript for type safety ## Project Structure ``` &#9500;&#9472;&#9472; src/ &#9474; &#9500;&#9472;&#9472; components/ # React and Astro components &#9474; &#9500;&#9472;&#9472; layouts/ # Astro layout components &#9474; &#9500;&#9472;&#9472; pages/ # Astro pages and routes &#9474; &#9500;&#9472;&#9472; styles/ # Global styles &#9474; &#9492;&#9472;&#9472; utils/ # Utility functions &#9500;&#9472;&#9472; public/ # Static assets &#9492;&#9472;&#9472; astro.config.mjs # Astro configuration ``` ## Component Conventions ### Astro Components - Use `.astro` extension - Follow kebab-case for filenames - Example structure: ```astro --- // Imports and props interface Props { title: string; } const { title } = Astro.props; --- &lt;div class="component-wrapper"&gt; &lt;h1&gt;{title}&lt;/h1&gt; &lt;slot /&gt; &lt;/div&gt; &lt;style&gt; /* Scoped styles if needed */ &lt;/style&gt; ``` </code></pre> <p>You can explore the full instructions <a href="https://github.com/LadyKerr/devflow-landing/blob/main/.github/copilot-instructions.md">file in my repo</a>, along with 
the full code, setup instructions, and a link to the deployed landing page.</p> <h2 id="iterating-on-your-designs-by-prompting-copilot" id="iterating-on-your-designs-by-prompting-copilot" >Iterating on your designs by prompting Copilot<a href="#iterating-on-your-designs-by-prompting-copilot" class="heading-link pl-2 text-italic text-bold" aria-label="Iterating on your designs by prompting Copilot"></a></h2> <p>I then repeated the same process to build each new section. Here&rsquo;s what this looked like in practice:</p> <h3 id="built-by-developers-section" id="built-by-developers-section" >&ldquo;Built by Developers&rdquo; section<a href="#built-by-developers-section" class="heading-link pl-2 text-italic text-bold" aria-label="&ldquo;Built by Developers&rdquo; section"></a></h3> <p><em>&gt; &ldquo;Add a new section to the landing page called &lsquo;By Developers&rsquo; and follow the attached design.&rdquo;</em></p> <p>Copilot generated a reusable component with feature cards structured in a Tailwind-styled grid.</p> <p><img data-recalc-dims="1" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?resize=1024%2C445" alt="An image showing a reusable component with feature cards structured in a Tailwind-styled grid." 
width="1024" height="445" class="alignnone size-full wp-image-86891 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?w=1614 1614w, https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/by-devs-section.png?w=1536 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></p> <h3 id="api-development-section" id="api-development-section" ><strong>&ldquo;API development&rdquo; section</strong><a href="#api-development-section" class="heading-link pl-2 text-italic text-bold" aria-label="&lt;strong&gt;&ldquo;API development&rdquo; section&lt;/strong&gt;"></a></h3> <p><em>&gt; &ldquo;Add the API development section based on the design.&rdquo;</em></p> <p>This section featured interactive code samples in tabs. Copilot interpreted that from the screenshot and added UI logic to switch between examples&mdash;<em>without me asking.</em></p> <div style="width: 1900px;" class="wp-video"><video class="wp-video-shortcode" id="video-86884-3" width="1900" height="1068" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/api-design-section.mp4#t=0.001?_=3" /><a href="https://github.blog/wp-content/uploads/2025/04/api-design-section.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/api-design-section.mp4#t=0.001</a></video></div> <h3 id="dashboard-preview-section" id="dashboard-preview-section" >&ldquo;Dashboard preview&rdquo; section<a href="#dashboard-preview-section" class="heading-link pl-2 text-italic text-bold" aria-label="&ldquo;Dashboard preview&rdquo; section"></a></h3> <p><em>&gt; &ldquo;Now add the dashboard management section on the landing page based on the design.&rdquo;</em></p> <p>I uploaded a screenshot of my editor as a placeholder 
image, and Copilot added it seamlessly to the new component.</p> <p><img data-recalc-dims="1" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/dashboard_api.png?resize=512%2C339" alt="A screenshot of the dashboard management section." width="512" height="339" class="alignnone size-full wp-image-86893 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/dashboard_api.png?w=512 512w, https://github.blog/wp-content/uploads/2025/04/dashboard_api.png?w=300 300w" sizes="(max-width: 512px) 100vw, 512px" /></p> <p>It&rsquo;s so amazing how fast we&rsquo;re building this landing page. Look at the progress we&rsquo;ve already made!</p> <h2 id="smart-suggestions-fast-results" id="smart-suggestions-fast-results" >Smart suggestions, fast results<a href="#smart-suggestions-fast-results" class="heading-link pl-2 text-italic text-bold" aria-label="Smart suggestions, fast results"></a></h2> <p>Even with sections like &ldquo;Trusted by Developers&rdquo; and &ldquo;Try it Yourself,&rdquo; Copilot created placeholder images, added semantic HTML, and applied Tailwind styling&mdash;all based on a single image and prompt. &#129327;</p> <p><img data-recalc-dims="1" loading="lazy" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?resize=1024%2C693" alt='A screenshot of the "Trusted by developers worldwide" section of the landing page.' 
width="1024" height="693" class="alignnone size-full wp-image-86894 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?w=1621 1621w, https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/trusted-by-devs.png?w=1536 1536w" sizes="auto, (max-width: 1000px) 100vw, 1000px" /></p> <p>When I updated the final hero section to match the layout more closely, Copilot flagged and fixed TypeScript issues without being prompted.</p> <p>That might sound small, but <strong>it&rsquo;s a big deal</strong>. It means Copilot agent mode wasn&rsquo;t just taking instructions&mdash;it was actively understanding my codebase, looking at my terminal, identifying problems, and <strong>resolving them in real time.</strong> This reduced my need to context switch, so I could focus on shipping!</p> <p>This wasn&rsquo;t just a series of generated components. It was a <strong>fully structured landing page</strong> built with modern best practices baked in. 
And I didn&rsquo;t have to build it alone!</p> <h2 id="wrapping-up" id="wrapping-up" >Wrapping up:<a href="#wrapping-up" class="heading-link pl-2 text-italic text-bold" aria-label="Wrapping up:"></a></h2> <p>With GitHub Copilot agent mode and Claude working together, I was able to:</p> <ul> <li>Generate a usable PRD and design mockup with a single prompt </li> <li>Build a responsive Astro-based landing page in less than thirty minutes </li> <li>Scaffold, test, and iterate on each section with minimal manual coding </li> <li>Use natural language to stay in the flow as I developed</li> </ul> <div style="width: 1706px;" class="wp-video"><video class="wp-video-shortcode" id="video-86884-4" width="1706" height="1096" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing_64d2b2.mp4#t=0.001?_=4" /><a href="https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing_64d2b2.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/dev-flow-final-landing_64d2b2.mp4#t=0.001</a></video></div> <h2 id="whats-next" id="whats-next" >What&rsquo;s next?<a href="#whats-next" class="heading-link pl-2 text-italic text-bold" aria-label="What&rsquo;s next?"></a></h2> <p>To complete this project, I updated the README with a clear project structure, added instructions for getting started, and staged it for deployment. 
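</p>
<p>If you take the GitHub Pages route, the deployment workflow suggested in Astro&rsquo;s documentation looks roughly like this (a sketch, not part of the original build; pin the action versions you actually want):</p>

```yaml
name: Deploy to GitHub Pages

on:
  push:
    branches: [main]

# Permissions required by actions/deploy-pages
permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Builds the Astro site and uploads it as a Pages artifact
      - uses: withastro/action@v3

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - id: deployment
        uses: actions/deploy-pages@v4
```

<p>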
From here, you can:</p> <ul> <li>Deploy it with GitHub Pages, Netlify, or your host of choice </li> <li>Set up GitHub Actions for CI/CD </li> <li>Add unit tests or accessibility checks </li> <li>Replace placeholder content with real data (like logos, dashboard, and profile images) </li> <li>Add new pages based on the Navbar</li> </ul> <p>Want to explore it yourself?</p> <ul> <li><a href="https://github.com/LadyKerr/devflow-landing">View the repository</a> </li> <li><a href="https://ladykerr.github.io/devflow-landing/">View the live demo</a></li> </ul> <h2 id="take-this-with-you" id="take-this-with-you" ><strong>Take this with you</strong><a href="#take-this-with-you" class="heading-link pl-2 text-italic text-bold" aria-label="&lt;strong&gt;Take this with you&lt;/strong&gt;"></a></h2> <p>AI tools like GitHub Copilot agent mode are transforming how we build, but like any tool, their power depends on how well we use them. Adding context, being explicit, and committing often made building this web page smooth and successful.</p> <p>If you&rsquo;re thinking about building with GitHub Copilot, give this workflow a try:</p> <ol> <li>Start with a PRD using Copilot on GitHub.com </li> <li>Generate a design from your PRD with Claude </li> <li>Use Copilot Agent in your IDE to code it, step by step.</li> </ol> <p>Until next time, happy coding!</p> </body></html> <p>The post <a href="https://github.blog/ai-and-ml/github-copilot/from-prompt-to-production-building-a-landing-page-with-copilot-agent-mode/">From prompt to production: Building a landing page with Copilot agent mode</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Exploring GitHub CLI: How to interact with GitHub’s GraphQL API endpoint - The GitHub Blog https://github.blog/?p=86433 2025-04-22T16:00:01.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>You might have heard of the <a 
href="https://cli.github.com/">GitHub CLI</a> and <a href="https://github.blog/developer-skills/github/how-to-level-up-your-git-game-with-github-cli/">all of the awesome things</a> you can do with it. However, one of its hidden superpowers is the ability to execute complex queries and mutations through GitHub&rsquo;s GraphQL API. This post will walk you through what GitHub&rsquo;s GraphQL API endpoint is and how to query it with the GitHub CLI.</p> <h2 id="what-is-graphql" id="what-is-graphql" >What is GraphQL?<a href="#what-is-graphql" class="heading-link pl-2 text-italic text-bold" aria-label="What is GraphQL?"></a></h2> <p>Let&rsquo;s start with the basics: <a href="https://graphql.org/learn">GraphQL</a> is a query language for APIs and a runtime for executing those queries against your data. Unlike traditional REST APIs that provide fixed data structures from predefined endpoints, GraphQL allows clients to request exactly the data they need in a single request. This single-request approach reduces network overhead, speeds up application performance, and simplifies client-side logic by eliminating the need to reconcile multiple API responses&mdash;a capability that has been openly available since the specification was <a href="https://spec.graphql.org/">open sourced in 2015</a>.</p> <p>GraphQL operations come in two primary types: queries and mutations. <strong>Queries</strong> are read-only operations that retrieve data without making any changes&mdash;similar to GET requests in REST. <strong>Mutations</strong>, on the other hand, are used to modify server-side data (create, update, or delete)&mdash;comparable to POST, PATCH, PUT, and DELETE in REST APIs. 
This clear separation between reading and writing operations makes GraphQL interactions predictable while maintaining the flexibility to precisely specify what data should be returned after a change is made.</p> <h2 id="how-is-graphql-used-at-github" id="how-is-graphql-used-at-github" >How is GraphQL used at GitHub?<a href="#how-is-graphql-used-at-github" class="heading-link pl-2 text-italic text-bold" aria-label="How is GraphQL used at GitHub?"></a></h2> <p>GitHub <a href="https://github.blog/developer-skills/github/the-github-graphql-api/">implemented GraphQL in 2016</a> to address limitations of RESTful APIs. This adoption has significantly enhanced the developer experience when working with GitHub data. With the GraphQL endpoint, you can retrieve a repository&rsquo;s issues, its labels, assignees, and comments with a single GraphQL query. Using our REST APIs, this would have otherwise taken several sets of nested calls.</p> <p>Some GitHub data and operations are only accessible through the GraphQL API (such as discussions, projects, and some enterprise settings), others exclusively through REST APIs (such as querying actions workflows, runners, or logs), and some using either endpoint (such as repositories, issues, pull requests, and user information). GitHub&rsquo;s GraphQL endpoint is accessible at <a href="https://api.github.com/graphql"><code>api.github.com/graphql</code></a> and you can explore the full schema in our <a href="https://docs.github.com/graphql/overview/about-the-graphql-api">GraphQL documentation</a> or through the interactive <a href="https://docs.github.com/graphql/overview/explorer">GraphQL Explorer</a>.</p> <p>A key consideration when choosing between the REST API and the GraphQL API is how the rate limits are calculated. 
As a quick summary for how this is implemented:</p> <ul> <li><strong>REST API</strong>: Limited by number of requests (typically 5,000 requests per hour for authenticated users and up to 15,000 for GitHub Apps installed in an Enterprise)</li> <li><strong>GraphQL API</strong>: Limited by &ldquo;points&rdquo; (typically 5,000 points per hour for authenticated users but can go up to 10,000-12,500 points per hour for GitHub Apps)</li> </ul> <p>Each GraphQL query costs at least one point, but the cost increases based on the complexity of your query (number of nodes requested, connections traversed, etc.). The GraphQL API provides a <code>rateLimit</code> field you can include in your queries to check your current limit status.</p> <p>For scenarios where you need to fetch related data that would otherwise require multiple REST calls, GraphQL is often more rate limit friendly because:</p> <ul> <li>One complex GraphQL query might cost 5-10 points but replace 5-10 separate REST API calls.</li> <li>You avoid &ldquo;over-fetching&rdquo; data you don&rsquo;t need, which indirectly helps with rate limits.</li> <li>The GraphQL API allows for more granular field selection, potentially reducing the complexity and point cost.</li> </ul> <p>However, poorly optimized GraphQL queries that request large amounts of nested data could potentially use up your rate limit faster than equivalent REST requests&mdash;and quickly run into <a href="https://docs.github.com/graphql/overview/rate-limits-and-node-limits-for-the-graphql-api#secondary-rate-limits">secondary rate limit</a> issues.</p> <p>A quick rule of thumb on deciding between which to use:</p> <ul> <li>For querying relational objects, such as GitHub Projects and their issues, GraphQL is often more effective, especially if it&rsquo;s a discrete number of items.</li> <li>For bulk data of one type or single data points, such as pulling in a list of repository names in an organization, the REST API is often preferred.</li> </ul> 
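<p>Before committing to one API or the other, it can help to inspect your current point budget directly via the <code>rateLimit</code> field mentioned above. The sketch below uses plain <code>curl</code> (the CLI approach is covered later in this post) and assumes a personal access token exported in a <code>GITHUB_TOKEN</code> environment variable, which is a placeholder name rather than something this post sets up for you:</p> <pre><code class="language-sh"># Query the rateLimit object to see your point budget for the current window.
# Assumes GITHUB_TOKEN holds a personal access token (placeholder variable name).
curl -s https://api.github.com/graphql \
  -H "Authorization: bearer $GITHUB_TOKEN" \
  -d '{"query":"query { rateLimit { limit cost remaining resetAt } }"}'
</code></pre> <p>Here, <code>cost</code> reports how many points this particular query consumed (typically one for a query this small), while <code>remaining</code> and <code>resetAt</code> tell you how much budget is left and when it replenishes.</p>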
<p>Sometimes there isn&rsquo;t a right or wrong answer; so as long as the object exists, try one out!</p> <h2 id="why-use-github-cli-for-graphql" id="why-use-github-cli-for-graphql" >Why use GitHub CLI for GraphQL?<a href="#why-use-github-cli-for-graphql" class="heading-link pl-2 text-italic text-bold" aria-label="Why use GitHub CLI for GraphQL?"></a></h2> <p>While many developers start with <a href="https://docs.github.com/graphql/overview/explorer">GitHub&rsquo;s GraphQL Explorer</a> on the web, <code>curl</code>, or other API querying tools, there&rsquo;s a more streamlined approach: using built-in GraphQL support in the GitHub CLI. Before diving into the how-to, let&rsquo;s understand why GitHub CLI is often my go-to tool for GraphQL queries and mutations:</p> <ol> <li>Authentication is handled automatically: No need to manage personal access tokens manually.</li> <li>Streamlined syntax: Simpler than crafting <code>curl</code> commands.</li> <li>Local development friendly: Run queries and mutations right from your terminal.</li> <li>JSON processing: Built-in options for filtering and formatting results.</li> <li>Pagination support: Ability to work with cursor-based pagination in GraphQL responses.</li> <li>Consistent experience: Same tool you&rsquo;re likely using for other GitHub tasks.</li> </ol> <h2 id="how-to-get-started-with-gh-api-graphql" id="how-to-get-started-with-gh-api-graphql" >How to get started with <code>gh api graphql</code><a href="#how-to-get-started-with-gh-api-graphql" class="heading-link pl-2 text-italic text-bold" aria-label="How to get started with &lt;code&gt;gh api graphql&lt;/code&gt;"></a></h2> <p>First, ensure you have <a href="https://cli.github.com">GitHub CLI installed</a> and <a href="https://cli.github.com/manual/gh_auth_login">authenticated</a> with <code>gh auth login</code>. 
The basic syntax for making a GraphQL query with <a href="https://cli.github.com/manual/gh_api"><code>gh api graphql</code></a> is:</p> <pre><code class="language-sh">gh api graphql -H X-Github-Next-Global-ID:1 -f query=' query { viewer { login name bio } } ' </code></pre> <p>This simple query returns your GitHub username, the name you have defined in your profile, and your bio. The <code>-f</code> flag defines form variables, with <code>query=</code> being the GraphQL query itself.</p> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "data": { "viewer": { "login": "joshjohanning", "name": "Josh Johanning", "bio": "DevOps Architect | GitHub" } } } </code></pre> <h2 id="running-queries-and-mutations" id="running-queries-and-mutations" >Running queries and mutations<a href="#running-queries-and-mutations" class="heading-link pl-2 text-italic text-bold" aria-label="Running queries and mutations"></a></h2> <h3 id="basic-query-example" id="basic-query-example" >Basic query example<a href="#basic-query-example" class="heading-link pl-2 text-italic text-bold" aria-label="Basic query example"></a></h3> <p>Let&rsquo;s try something more practical&mdash;fetching information about a repository. To get started, we&rsquo;ll use the following query:</p> <pre><code class="language-sh">gh api graphql -H X-Github-Next-Global-ID:1 -f query=' query($owner:String!, $repo:String!) 
{ repository(owner:$owner, name:$repo) { name description id stargazerCount forkCount issues(states:OPEN) { totalCount } } } ' -F owner=octocat -F repo=Hello-World </code></pre> <p>The <code>-F</code> flag sets variable values that are referenced in the query with <code>$variable</code>.</p> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "data": { "repository": { "name": "Hello-World", "description": "My first repository on GitHub!", "id": "R_kgDOABPHjQ", "stargazerCount": 2894, "forkCount": 2843, "issues": { "totalCount": 1055 } } } } </code></pre> <div class="content-table-wrap"><table style="border: 1px black"> <tbody> <tr> <td> &#128161; <strong>Tip</strong>: The <code>-H X-Github-Next-Global-ID:1</code> parameter sets an HTTP header that instructs GitHub&rsquo;s GraphQL API to use the <a href="https://docs.github.com/graphql/guides/migrating-graphql-global-node-ids">new global node ID format</a> rather than the legacy format. While your query will function without this header, including it prevents deprecation warnings when referencing node IDs (such as when passing <code>repository.ID</code> in subsequent operations). GitHub recommends adopting this format for all new integrations to ensure long-term compatibility. </td> </tr> </tbody> </table></div> <h3 id="running-mutations" id="running-mutations" >Running mutations<a href="#running-mutations" class="heading-link pl-2 text-italic text-bold" aria-label="Running mutations"></a></h3> <p>Mutations work similarly. Here&rsquo;s how to create a new issue:</p> <pre><code class="language-sh">gh api graphql -H X-Github-Next-Global-ID:1 -f query=' mutation($repositoryId:ID!, $title:String!, $body:String) { createIssue(input:{repositoryId:$repositoryId, title:$title, body:$body}) { issue { url number title body state } } } ' -F repositoryId="R_kgDOABPHjQ" -F title="Creating issue with GraphQL" -F body="Issue body created via GraphQL\!" 
</code></pre> <p>Make sure to update the <code>repositoryId</code> parameter with the actual repository&rsquo;s GraphQL ID (an example of returning a repository&rsquo;s ID is shown in the basic query above!).</p> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "data": { "createIssue": { "issue": { "url": "https://github.com/octocat/Hello-World/issues/3706", "number": 3706, "title": "Creating issue with GraphQL", "body": "Issue body created via GraphQL!", "state": "OPEN" } } } } </code></pre> <h2 id="filtering-graphql-results" id="filtering-graphql-results" >Filtering GraphQL results<a href="#filtering-graphql-results" class="heading-link pl-2 text-italic text-bold" aria-label="Filtering GraphQL results"></a></h2> <p>GitHub CLI supports <a href="https://github.com/jqlang/jq">JQ</a>-style filtering for extracting specific parts of the response, which is invaluable when you need to parse just the repository names or URLs from a query for use in automation scripts. Here is an example of using the <code>--jq</code> flag:</p> <pre><code class="language-sh">gh api graphql -H X-Github-Next-Global-ID:1 -f query=' query($owner:String!, $repo:String!) { repository(owner:$owner, name:$repo) { issues(first:3, states:OPEN) { nodes { number title url } } } } ' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[]' </code></pre> <p>The <code>--jq</code> flag accepts JQ expressions to process JSON output. 
This query returns just the array of issues, without the surrounding GraphQL response structure.</p> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "number": 26, "title": "test issue", "url": "https://github.com/octocat/Hello-World/issues/26" } { "number": 27, "title": "just for test", "url": "https://github.com/octocat/Hello-World/issues/27" } { "number": 28, "title": "Test", "url": "https://github.com/octocat/Hello-World/issues/28" } </code></pre> <p>We could have modified the <code>--jq</code> flag to just return the issue URLs, like so:</p> <pre><code class="language-sh">gh api graphql -H X-Github-Next-Global-ID:1 -f query=' query($owner:String!, $repo:String!) { repository(owner:$owner, name:$repo) { issues(first:3, states:OPEN) { nodes { number title url } } } } ' -F owner=octocat -F repo=Hello-World --jq '.data.repository.issues.nodes[].url' </code></pre> <p>Here&rsquo;s our example output:</p> <pre><code class="language-plaintext">https://github.com/octocat/Hello-World/issues/26 https://github.com/octocat/Hello-World/issues/27 https://github.com/octocat/Hello-World/issues/28 </code></pre> <h2 id="handling-pagination" id="handling-pagination" >Handling pagination<a href="#handling-pagination" class="heading-link pl-2 text-italic text-bold" aria-label="Handling pagination"></a></h2> <p>GitHub&rsquo;s GraphQL API limits results to a maximum of <a href="https://docs.github.com/graphql/guides/using-pagination-in-the-graphql-api#about-pagination">100 items per page</a>, which means you&rsquo;ll need pagination to retrieve larger datasets.</p> <p>Pagination in GraphQL works by returning a &ldquo;cursor&rdquo; with each page of results, which acts as a pointer to where the next set of results should begin. 
When you request the next page, you provide this cursor to indicate where to start.</p> <p>The easiest way to handle this pagination in the GitHub CLI is with the <code>--paginate</code> flag, which automatically collects all pages of results for you by managing these cursors behind the scenes. Here&rsquo;s what that looks like in a query:</p> <pre><code class="language-sh">gh api graphql --paginate -H X-Github-Next-Global-ID:1 -f query=' query($owner:String!, $repo:String!, $endCursor:String) { repository(owner:$owner, name:$repo) { issues(first:100, after:$endCursor, states:OPEN, orderBy:{field:CREATED_AT, direction:DESC}) { pageInfo { hasNextPage endCursor } nodes { number title createdAt } } } } ' -F owner=octocat -F repo=Hello-World </code></pre> <p>The pageInfo object with its <code>hasNextPage</code> and <code>endCursor</code> fields is essential for pagination. When you use the <code>--paginate</code> flag, GitHub CLI automatically uses these fields to fetch all available pages for your query, combining the results into a single response.</p> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "data": { "repository": { "issues": { "pageInfo": { "hasNextPage": true, "endCursor": "Y3Vyc29yOnYyOpK5MjAyNC0xMi0zMFQxNDo0ODo0NC0wNjowMM6kunD3" }, "nodes": [ { "number": 3708, "title": "Creating issue with GraphQL once more", "createdAt": "2025-04-02T18:15:11Z", "author": { "login": "joshjohanning" } }, { "number": 3707, "title": "Creating issue with GraphQL again", "createdAt": "2025-04-02T18:15:02Z", "author": { "login": "joshjohanning" } }, { "number": 3706, "title": "Creating issue with GraphQL", "createdAt": "2025-04-02T18:14:37Z", "author": { "login": "joshjohanning" } }, &hellip; and so on ] } } } } </code></pre> <p>This approach works great for moderate amounts of data, but keep in mind that GitHub&rsquo;s GraphQL API has <a href="https://docs.github.com/graphql/overview/rate-limits-and-node-limits-for-the-graphql-api">rate limits</a>, 
so extremely large queries might need to implement delays between requests.</p> <div class="content-table-wrap"><table style="border: 1px black"> <tbody> <tr> <td>&#128161; <strong>Important limitation</strong>: The <code>--paginate</code> flag can only handle pagination for a single connection at a time. For example, when listing repository issues as shown above, it can paginate through all issues, but cannot simultaneously paginate through each issue&rsquo;s comments. For nested pagination, you&rsquo;ll need to implement custom logic.</td> </tr> </tbody> </table></div> <h2 id="building-complex-scripts-chaining-graphql-queries-together" id="building-complex-scripts-chaining-graphql-queries-together" >Building complex scripts: Chaining GraphQL queries together<a href="#building-complex-scripts-chaining-graphql-queries-together" class="heading-link pl-2 text-italic text-bold" aria-label="Building complex scripts: Chaining GraphQL queries together"></a></h2> <p>When working with GitHub&rsquo;s GraphQL API, you often need to connect multiple queries to accomplish a complex task. Let&rsquo;s look at how to chain GraphQL calls together using the GitHub CLI:</p> <pre><code class="language-sh">ISSUE_ID=$(gh api graphql -H X-Github-Next-Global-ID:1 -f query=' query($owner: String!, $repo: String!, $issue_number: Int!) { repository(owner: $owner, name: $repo) { issue(number: $issue_number) { id } } } ' -F owner=joshjohanning -F repo=graphql-fun -F issue_number=1 --jq '.data.repository.issue.id') gh api graphql -H GraphQL-Features:sub_issues -H X-Github-Next-Global-ID:1 -f query=' query($issueId: ID!) { node(id: $issueId) { ... 
on Issue { subIssuesSummary { total completed percentCompleted } } } }' -F issueId="$ISSUE_ID" </code></pre> <p>Here&rsquo;s what this shell script is doing:</p> <ol> <li>The first query captures an issue&rsquo;s ID using the repository name and issue number</li> <li>The <code>--jq</code> flag extracts just the ID value and stores it in a variable</li> <li>The second query passes this ID to retrieve a summary of sub-issues</li> </ol> <p>Here&rsquo;s our example output:</p> <pre><code class="language-json">{ "data": { "node": { "subIssuesSummary": { "total": 3, "completed": 1, "percentCompleted": 33 } } } } </code></pre> <h2 id="take-this-with-you" id="take-this-with-you" >Take this with you<a href="#take-this-with-you" class="heading-link pl-2 text-italic text-bold" aria-label="Take this with you"></a></h2> <p>The <a href="https://cli.github.com/manual/gh_api"><code>gh api graphql</code></a> command provides a convenient way to interact with GitHub&rsquo;s GraphQL API directly from your terminal. It eliminates the need for token management, simplifies query syntax and formatting, and handles basic pagination that would otherwise be complex to implement. Whether you&rsquo;re running complex queries or simple mutations, this approach offers a streamlined developer experience.</p> <p>Next time you need to interact with GitHub&rsquo;s GraphQL API, skip the GraphQL Explorer on the web and try the GitHub CLI approach. It might just become your preferred method for working with GitHub&rsquo;s powerful GraphQL API capabilities.</p> <aside class="p-4 p-md-6 post-aside--large"><p class="h5-mktg gh-aside-title">Install GitHub CLI</p><p>GitHub CLI is a versatile tool to help you build your workflows. 
<strong><a href="https://cli.github.com/">Install</a> the latest version today.</strong> If you come up with something awesome, please <strong>share it in the <a href="https://github.com/cli/cli/discussions">CLI Discussions</a> section.</strong> We&rsquo;d love to see it!</p> </aside> </body></html> <p>The post <a href="https://github.blog/developer-skills/github/exploring-github-cli-how-to-interact-with-githubs-graphql-api-endpoint/">Exploring GitHub CLI: How to interact with GitHub&#8217;s GraphQL API endpoint</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Racing into 2025 with new GitHub Innovation Graph data - The GitHub Blog https://github.blog/?p=86572 2025-04-21T17:00:13.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>We launched the <a href="https://innovationgraph.github.com/">GitHub Innovation Graph</a> to give developers, researchers, and policymakers an easy way to analyze trends in <a href="https://github.com/github/innovationgraph/blob/main/docs/datasheet.md">public software collaboration activity</a> around the world. With today&rsquo;s quarterly<sup id="fnref-86572-1"><a href="#fn-86572-1" class="jetpack-footnote" title="Read footnote.">1</a></sup> release, updated through December 2024, we now have five full years of data.</p> <p>To help us celebrate, we&rsquo;ve created some animated bar charts showcasing the growth in developers and pushes of some of the top economies around the world over time. 
Enjoy!</p> <h2 id="animated-bar-charts" id="animated-bar-charts" >Animated bar charts<a href="#animated-bar-charts" class="heading-link pl-2 text-italic text-bold" aria-label="Animated bar charts"></a></h2> <div style="width: 2288px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]--> <video class="wp-video-shortcode" id="video-86572-1" width="2288" height="1712" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_git_pushes_desktop.mp4#t=0.001?_=1" /><a href="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_git_pushes_desktop.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_git_pushes_desktop.mp4#t=0.001</a></video></div> <p>What a photo finish! The European Union surpassing the United States in cumulative git pushes was certainly a highlight, but we&rsquo;d also note the significant movements of Brazil and Korea in climbing up the rankings.</p> <div style="width: 2286px;" class="wp-video"><video class="wp-video-shortcode" id="video-86572-2" width="2286" height="1712" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_repositories_desktop.mp4#t=0.001?_=2" /><a href="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_repositories_desktop.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/bar_chart_race_global_with_eu_repositories_desktop.mp4#t=0.001</a></video></div> <p>Another close race, this time showing India outpacing the European Union in repositories between Q2 and Q3 2024.</p> <div style="width: 2288px;" class="wp-video"><video class="wp-video-shortcode" id="video-86572-3" width="2288" height="1712" preload="metadata" controls="controls"><source type="video/mp4" 
src="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_apac_developers_desktop.mp4#t=0.001?_=3" /><a href="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_apac_developers_desktop.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/bar_chart_race_apac_developers_desktop.mp4#t=0.001</a></video></div> <p>Zooming into economies in APAC, we can appreciate the speed of developer growth in India, more than quadrupling in just 5 years.</p> <div style="width: 2288px;" class="wp-video"><video class="wp-video-shortcode" id="video-86572-4" width="2288" height="1712" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_emea_developers_desktop.mp4#t=0.001?_=4" /><a href="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_emea_developers_desktop.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/bar_chart_race_emea_developers_desktop.mp4#t=0.001</a></video></div> <p>Flying over to EMEA, we saw very impressive growth from Nigeria, which rose up from rank 20 in Q1 2020 to rank 11 in Q4 2024.</p> <div style="width: 2288px;" class="wp-video"><video class="wp-video-shortcode" id="video-86572-5" width="2288" height="1712" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_latam_developers_desktop.mp4#t=0.001?_=5" /><a href="https://github.blog/wp-content/uploads/2025/04/bar_chart_race_latam_developers_desktop.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/bar_chart_race_latam_developers_desktop.mp4#t=0.001</a></video></div> <p>Finally, in LATAM, it was exciting to see how close most of the economies are in developer counts (with the exception of Brazil), with frequent back-and-forth swaps in rankings between economies like Argentina and Colombia, or Guatemala and Bolivia.</p> <p>Want to explore more? 
Dive into the <a href="https://github.com/github/innovationgraph">datasets</a> yourself. We can&rsquo;t wait to check out what you build.</p> <h2 id="global-line-charts" id="global-line-charts" >Global line charts<a href="#global-line-charts" class="heading-link pl-2 text-italic text-bold" aria-label="Global line charts"></a></h2> <p>We&rsquo;ve also made a feature update that will enable you to quickly understand the global scale of some of the metrics we publish, including the numbers of public git pushes, repositories, developers, and organizations on GitHub worldwide.</p> <p>Simply follow the installation steps for our newly released <a href="https://github.com/github/github-mcp-server">GitHub MCP Server</a>, and you&rsquo;ll be able to prompt GitHub Copilot in agent mode within VS Code to retrieve the CSVs from the data repo using the <strong>get_file_contents</strong> tool. Then, you can have the agent sum up the latest values for you.</p> <p><img data-recalc-dims="1" fetchpriority="high" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?resize=1024%2C295" alt="A portion of the README.md file for the github/github-mcp-server repo which contains the easy installation buttons for installing the GitHub MCP server onto VS Code." 
width="1024" height="295" class="alignnone size-full wp-image-86578 width-fit" srcset="https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?w=1600 1600w, https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?w=300 300w, https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?w=768 768w, https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?w=1024 1024w, https://github.blog/wp-content/uploads/2025/04/GitHub-MCP-Server.png?w=1536 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></p> <p>Afterward, you can double-check its results with these handy charts that we&rsquo;ve added to their respective global metrics pages for <a href="https://innovationgraph.github.com/global-metrics/git-pushes">git pushes</a>, <a href="https://innovationgraph.github.com/global-metrics/repositories">repositories</a>, <a href="https://innovationgraph.github.com/global-metrics/developers">developers</a>, and <a href="https://innovationgraph.github.com/global-metrics/organizations">organizations</a>. 
Check them out below.</p> <a href="https://github.blog/news-insights/policy-news-and-insights/racing-into-2025-with-new-github-innovation-graph-data/#gallery-86572-1-slideshow">Click to view slideshow.</a> <div class="footnotes"> <hr> <ol> <li id="fn-86572-1"> The GitHub Innovation Graph reports metrics according to calendar year quarters, which correspond to the following: Q1: January 1 to March 31; Q2: April 1 to June 30; Q3: July 1 to September 30; and Q4: October 1 to December 31.&nbsp;<a href="#fnref-86572-1" title="Return to main content.">&#8617;</a> </li> </ol> </div> </body></html> <p>The post <a href="https://github.blog/news-insights/policy-news-and-insights/racing-into-2025-with-new-github-innovation-graph-data/">Racing into 2025 with new GitHub Innovation Graph data</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> How to take climate action with your code - The GitHub Blog https://github.blog/?p=86529 2025-04-21T13:00:35.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>Climate change is one of the most pressing issues of this century. We are working with developers to leverage technology to create a greener world. So, this Earth Day, we&rsquo;re excited to launch the <a href="https://github.com/social-impact/focus-areas/environmental-sustainability/climate-action-plan-for-developers">Climate Action Plan for Developers</a>.</p> <p>We&rsquo;ve curated tools and projects to help you kick-start your climate action journey and contribute to achieving net zero carbon emissions. Explore over 60,000 green software and climate-focused repositories on GitHub.</p> <p>Not sure where to start?
Take a look below at a few highlights that can help you start to green your code today.</p> <h2 id="%f0%9f%9a%80-speed-scale" id="%f0%9f%9a%80-speed-scale" >&#128640; Speed &amp; Scale<a href="#%f0%9f%9a%80-speed-scale" class="heading-link pl-2 text-italic text-bold" aria-label="&#128640; Speed &amp; Scale"></a></h2> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/QyEVrNqFr_c?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <p>Speed &amp; Scale is a global initiative to move leaders to act on the climate crisis. Their team has developed a net zero action plan, with 10 objectives and 49 key results that track yearly progress.</p> <p><a href="https://speedandscale.com/?utm_source=github&amp;utm_medium=referral&amp;utm_campaign=github_april2025">Learn about their action plan</a></p> <h2 id="%e2%9a%a1%ef%b8%8f-electricity-maps" id="%e2%9a%a1%ef%b8%8f-electricity-maps" >&#9889;&#65039; Electricity Maps<a href="#%e2%9a%a1%ef%b8%8f-electricity-maps" class="heading-link pl-2 text-italic text-bold" aria-label="&#9889;&#65039; Electricity Maps"></a></h2> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/geOYLVdtmQQ?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <p>Electricity Maps is the leading electricity grid API, offering a single source for accessing carbon intensity and energy mix globally. 
As a developer you can go beyond just viewing the maps to pull data from their API, download data files, and even contribute to their open source project.</p> <p><a href="https://www.electricitymaps.com/free-tier-api?utm_source=github&amp;utm_campaign=CAP4D">Access the Electricity Maps API</a></p> <h2 id="%f0%9f%96%a5%ef%b8%8f-codecarbon" id="%f0%9f%96%a5%ef%b8%8f-codecarbon" >&#128421;&#65039; CodeCarbon<a href="#%f0%9f%96%a5%ef%b8%8f-codecarbon" class="heading-link pl-2 text-italic text-bold" aria-label="&#128421;&#65039; CodeCarbon"></a></h2> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/Tki1yfdDOdE?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <p>CodeCarbon is a lightweight software package that allows for integration into any Python project to track and reduce CO2 emissions from your computing. 
Get started with using the software package and check out the opportunities to help support this open source project.</p> <p><a href="https://mlco2.github.io/codecarbon/">Get started with the software package</a></p> <h2 id="%f0%9f%8c%b3-climatetriage-by-opensustain-tech" id="%f0%9f%8c%b3-climatetriage-by-opensustain-tech" >&#127795; ClimateTriage, by OpenSustain.Tech<a href="#%f0%9f%8c%b3-climatetriage-by-opensustain-tech" class="heading-link pl-2 text-italic text-bold" aria-label="&#127795; ClimateTriage, by OpenSustain.Tech"></a></h2> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/1AgOJl93ywY?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <p>ClimateTriage helps developers discover a meaningful way to contribute to open source projects focused on climate technology and sustainability. Harness the power of open source collaboration to tackle environmental challenges such as climate change, clean energy, biodiversity, and natural resource conservation. 
Whether you&rsquo;re an experienced developer, a scientist, or a newcomer looking to contribute, ClimateTriage connects you with opportunities to use your skills to create a sustainable future.</p> <p><a href="https://climatetriage.com/?utm_source=CAP4D">Get started with a Good First Issue</a></p> <h2 id="%f0%9f%92%aa-use-github-copilot-and-codecarbon-for-greener-code" id="%f0%9f%92%aa-use-github-copilot-and-codecarbon-for-greener-code" >&#128170; Use GitHub Copilot and CodeCarbon for greener code<a href="#%f0%9f%92%aa-use-github-copilot-and-codecarbon-for-greener-code" class="heading-link pl-2 text-italic text-bold" aria-label="&#128170; Use GitHub Copilot and CodeCarbon for greener code"></a></h2> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/eEr0BR1PbLA?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <p>Computational tasks, especially in AI, have a growing carbon footprint. Learn how CodeCarbon, an open-source Python library, helps measure CO2 emissions from your code. 
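As a minimal sketch of that workflow, here is what wrapping a piece of work with CodeCarbon&rsquo;s <code>EmissionsTracker</code> can look like (the <code>simulate_workload</code> function is a hypothetical stand-in for your own computation):

```python
def simulate_workload(n: int) -> int:
    """Hypothetical CPU-bound stand-in for the code whose footprint you measure."""
    return sum(i * i for i in range(n))


def tracked_run(n: int) -> float:
    """Run the workload under CodeCarbon and return the estimated kg of CO2-eq."""
    from codecarbon import EmissionsTracker  # pip install codecarbon

    tracker = EmissionsTracker(project_name="demo")
    tracker.start()
    try:
        simulate_workload(n)
    finally:
        # stop() returns the estimated emissions for the tracked interval
        emissions_kg = tracker.stop()
    return emissions_kg
```

CodeCarbon also writes its measurements to a CSV file by default, which makes it easy to compare runs before and after an optimization.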
Together with GitHub Copilot, integrate CodeCarbon into your projects, allowing you to track energy use and optimize for sustainability.</p> <p><a href="https://github.com/features/copilot?ef_id=_k_Cj0KCQjwzYLABhD4ARIsALySuCQDO3q9088nASNBx-pfdW2D2uaXHlCHjv18m4xUQp1-jPLbkt7hEqIaAu8nEALw_wcB_k_&amp;OCID=AIDcmmb150vbv1_SEM__k_Cj0KCQjwzYLABhD4ARIsALySuCQDO3q9088nASNBx-pfdW2D2uaXHlCHjv18m4xUQp1-jPLbkt7hEqIaAu8nEALw_wcB_k_&amp;gad_source=1&amp;gbraid=0AAAAADcJh_uFADnlHvlqUd_QugGUmmiQP&amp;gclid=Cj0KCQjwzYLABhD4ARIsALySuCQDO3q9088nASNBx-pfdW2D2uaXHlCHjv18m4xUQp1-jPLbkt7hEqIaAu8nEALw_wcB">Get started with GitHub Copilot for free today</a></p> <div class="post-content-cta"><p><strong>Learn more</strong> about how you can <a href="https://github.com/social-impact/focus-areas/environmental-sustainability/climate-action-plan-for-developers">take climate action today.</a></p> </div> </body></html> <p>The post <a href="https://github.blog/open-source/social-impact/how-to-take-climate-action-with-your-code/">How to take climate action with your code</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> How to make your images in Markdown on GitHub adjust for dark mode and light mode - The GitHub Blog https://github.blog/?p=86598 2025-04-18T19:30:46.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>GitHub supports dark mode and light mode, and as developers, we can make our README images look great in both themes. 
Here&rsquo;s a quick guide to using the <code>&lt;picture&gt;</code> element in your GitHub Markdown files to dynamically switch images based on the user&rsquo;s color scheme.</p> <p>When <a href="https://docs.github.com/en/get-started/accessibility/managing-your-theme-settings">developers switch to GitHub&rsquo;s dark mode (or vice versa)</a>, standard images can look out of place, with bright backgrounds or clashing colors.</p> <p>Instead of forcing a one-size-fits-all image, you can tailor your visuals to blend seamlessly with the theme. It&rsquo;s a small change, but it can make your project look much more polished.</p> <h2 id="one-snippet-two-themes" id="one-snippet-two-themes" ><strong>One snippet, two themes!</strong><a href="#one-snippet-two-themes" class="heading-link pl-2 text-italic text-bold" aria-label="&lt;strong&gt;One snippet, two themes!&lt;/strong&gt;"></a></h2> <p>Here&rsquo;s the magic snippet you can copy into your README (or any Markdown file):</p> <pre><code>&lt;picture&gt;
  &lt;source media="(prefers-color-scheme: dark)" srcset="dark-mode-image.png"&gt;
  &lt;source media="(prefers-color-scheme: light)" srcset="light-mode-image.png"&gt;
  &lt;img alt="Fallback image description" src="default-image.png"&gt;
&lt;/picture&gt;
</code></pre> <p>Now, we say it&rsquo;s magic, but let&rsquo;s take a peek behind the curtain to show how it works:</p> <ul> <li>The <code>&lt;picture&gt;</code> tag lets you define multiple image sources for different scenarios. </li> <li>The <code>media</code> attribute on each <code>&lt;source&gt;</code> element matches the user&rsquo;s color scheme. <ul> <li>When GitHub is in dark mode, <code>media="(prefers-color-scheme: dark)"</code> matches, and the browser loads that source&rsquo;s <code>srcset</code> image. </li> <li>Similarly, when GitHub is in light mode, <code>media="(prefers-color-scheme: light)"</code> matches, and its <code>srcset</code> image is loaded. 
</li> </ul> </li> <li>If the browser doesn&rsquo;t support the <code>&lt;picture&gt;</code> element, or the user&rsquo;s system doesn&rsquo;t match any defined media queries, the fallback <code>&lt;img&gt;</code> tag will be used.</li> </ul> <p>You can use this approach in your repo README files, documentation hosted on GitHub, and any other Markdown files rendered on GitHub.com!</p> <h2 id="demo" id="demo" ><strong>Demo</strong><a href="#demo" class="heading-link pl-2 text-italic text-bold" aria-label="&lt;strong&gt;Demo&lt;/strong&gt;"></a></h2> <p>What&rsquo;s better than a demo to help you get started? Here&rsquo;s what this looks like in practice:<br> <div style="width: 1920px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]--> <video class="wp-video-shortcode" id="video-86598-1" width="1920" height="1080" preload="metadata" controls="controls"><source type="video/mp4" src="https://github.blog/wp-content/uploads/2025/04/Toggle-Dark-and-Light-Mode-on-GitHub-.mp4#t=0.001?_=1" /><a href="https://github.blog/wp-content/uploads/2025/04/Toggle-Dark-and-Light-Mode-on-GitHub-.mp4#t=0.001">https://github.blog/wp-content/uploads/2025/04/Toggle-Dark-and-Light-Mode-on-GitHub-.mp4#t=0.001</a></video></div></p> </body></html> <p>The post <a href="https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/">How to make your images in Markdown on GitHub adjust for dark mode and light mode</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Cracking the code: How to wow the acceptance committee at your next tech event - The GitHub Blog https://github.blog/?p=86554 2025-04-18T16:48:20.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p><a href="https://githubuniverse.com/">GitHub Universe</a> returns to San Francisco on October 28 and 29&mdash;bringing together the 
builders, dreamers, and changemakers shaping the future of software. From first-time speakers with big ideas to DevRel pros with demos to share and business leaders rethinking workflows with AI, we believe that a diverse range of voices belong on our stage.</p> <p>But writing a compelling conference session submission can feel like decoding a complex algorithm. What makes your idea stand out? How do you grab the content committee&rsquo;s attention? And what if you&rsquo;ve never done this before?</p> <p>Good news: we&rsquo;ve cracked the code, and we&rsquo;re sharing it with you.</p> <p>Here are four proven tips to help you put together a proposal that&rsquo;s clear, compelling, and uniquely you.</p> <p><a href="https://reg.githubuniverse.com/flow/github/universe25/cfs/page/cfs-landing"><strong>Apply to speak or nominate a speaker</strong></a> to take the stage at GitHub Universe by Friday, May 2 at 11:59 pm PT to be considered.</p> <h2 id="1-find-something-youre-truly-passionate-about-%f0%9f%92%a1" id="1-find-something-youre-truly-passionate-about-%f0%9f%92%a1" >1. Find something you&rsquo;re truly passionate about &#128161;<a href="#1-find-something-youre-truly-passionate-about-%f0%9f%92%a1" class="heading-link pl-2 text-italic text-bold" aria-label="1. Find something you&rsquo;re truly passionate about &#128161;"></a></h2> <p><img data-recalc-dims="1" fetchpriority="high" decoding="async" src="https://github.blog/wp-content/uploads/2025/04/Screenshot-2025-04-17-at-11.01.17%E2%80%AFAM.png?resize=1024%2C546" alt="A Venn diagram titled 'Signature talk formula' showing the intersection of three circles labeled 'What you know', 'What you are passionate about', and 'What the audience cares about'. The diagram is displayed on a dark background with the circles in blue, teal, and purple, illustrating how effective talks should combine knowledge, passion, and audience relevance." 
width="1024" height="546" class="alignnone size-full wp-image-86556 width-fit"></p> <p>Here&rsquo;s the truth: passion is magnetic. If you&rsquo;re excited about your topic, it <em>shows</em>. It pulses through your proposal, powers your delivery onstage, and pulls in your audience&mdash;content committee included.</p> <p>Instead of chasing the latest trends, talk about something that <em>lights you up</em>. Maybe it&rsquo;s a story from building an open source project in your off-hours. Maybe it&rsquo;s how your team shipped something new using GitHub Copilot. Or maybe it&rsquo;s the unexpected way you quickly scaled developer experience across a global org. Your unique perspective is your superpower.</p> <p>Content committees can sense authenticity. They&rsquo;re not just looking for polished buzzwords. They&rsquo;re looking for people who care deeply and can teach others something meaningful.</p> <p>&#127908; <strong>Pro tip:</strong> If it&rsquo;s a topic you&rsquo;d talk about over lunch with a teammate or geek out about on a podcast, it&rsquo;s probably a great fit.</p> <h2 id="2-write-a-title-they-cant-ignore-%e2%9c%8d%ef%b8%8f" id="2-write-a-title-they-cant-ignore-%e2%9c%8d%ef%b8%8f" >2. Write a title they can&rsquo;t ignore &#9997;&#65039;<a href="#2-write-a-title-they-cant-ignore-%e2%9c%8d%ef%b8%8f" class="heading-link pl-2 text-italic text-bold" aria-label="2. Write a title they can&rsquo;t ignore &#9997;&#65039;"></a></h2> <p>Think of your session title like an email subject line&mdash;it&rsquo;s your chance to make a strong first impression, and it needs to do the heavy lifting for you. A strong title shouldn&rsquo;t just sound good. It should clearly communicate <em>what</em> your talk is about and <em>why</em> it matters.</p> <p><strong>Let&rsquo;s take our title as an example:</strong></p> <ul> <li>&#9989; <strong>Engaging</strong>: &ldquo;Cracking the Code&rdquo; suggests there&rsquo;s an inside strategy, and it sparks curiosity. 
</li> <li> <p>&#9989; <strong>Clear</strong>: &ldquo;How to wow the acceptance committee at your next tech event&rdquo; leaves no doubt about the topic.</p> </li> <li> <p>&#9989; <strong>Action-oriented</strong>: It promises practical takeaways, not just theory.</p> </li> <li> <p>&#9989; <strong>Balanced</strong>: It walks the line between fun and professional.</p> </li> </ul> <p>Avoid vague titles (&ldquo;A new approach to software&rdquo;) or clickbait (&ldquo;This one trick will fix your codebase&rdquo;). Instead, aim for clarity <em>with flair</em>. Give the content committee a reason to want to learn more along with the confidence that your talk can deliver.</p> <p><strong>&#127908; Pro tip:</strong> After you write your title, ask yourself&mdash;would I attend this session? Would I understand what I&rsquo;m getting from it in five seconds?</p> <div class="mod-yt position-relative" style="height: 0; padding-bottom: calc((9 / 16)*100%);"> <iframe loading="lazy" class="position-absolute top-0 left-0 width-full height-full" src="https://www.youtube.com/embed/HR2dzmykMyk?feature=oembed" title="YouTube video player" allow="accelerometer; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe> </div> <h2 id="3-make-it-easy-for-the-content-committee-to-say-yes-%e2%9c%85" id="3-make-it-easy-for-the-content-committee-to-say-yes-%e2%9c%85" >3. Make it easy for the content committee to say yes &#9989;<a href="#3-make-it-easy-for-the-content-committee-to-say-yes-%e2%9c%85" class="heading-link pl-2 text-italic text-bold" aria-label="3. Make it easy for the content committee to say yes &#9989;"></a></h2> <p>The content committee is rooting for you, but you&rsquo;ve got to help them out. The best submissions remove all ambiguity and make a strong case for why this session matters<em>.</em></p> <p><strong>Here&rsquo;s how:</strong></p> <ul> <li><strong>Be specific about your audience</strong>: Who is this for? 
Senior engineers? OSS maintainers? Platform teams? Product leads? </li> <li> <p><strong>Spell out the takeaways</strong>: What will people learn? Tools, frameworks, fresh mindsets?</p> </li> <li> <p><strong>Tie it to the event</strong>: Why does this belong at GitHub Universe? How does it support the event&rsquo;s themes?</p> </li> </ul> <p>Also, show that your content has a life beyond the stage:</p> <ul> <li>Can your session be turned into a blog, case study, or video? </li> <li> <p>Is your abstract compelling enough to be featured in a marketing email or keynote recap?</p> </li> <li> <p>Will attendees be able to apply what they learned the next day?</p> </li> </ul> <p><strong>&#127908; Hot tip:</strong> Think beyond the talk itself. That&rsquo;s pure gold for event organizers.</p> <h2 id="4-seal-the-deal-with-your-online-presence-%f0%9f%8c%90" id="4-seal-the-deal-with-your-online-presence-%f0%9f%8c%90" >4. Seal the deal with your online presence &#127760;<a href="#4-seal-the-deal-with-your-online-presence-%f0%9f%8c%90" class="heading-link pl-2 text-italic text-bold" aria-label="4. Seal the deal with your online presence &#127760;"></a></h2> <p>Yes, your session submission is the star, but reviewers on the content committee can also look you up. Your online presence helps us understand:</p> <ul> <li>Your credibility and expertise </li> <li> <p>Your speaking experience (or potential!)</p> </li> <li> <p>How easy it will be to promote you as a speaker</p> </li> </ul> <p>You don&rsquo;t need a massive following. But you <em>do</em> want a strong, relevant footprint. 
Here are a few tips to consider:</p> <hr> <p><strong>On LinkedIn:</strong></p> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/linkedin_headline-bio/'><img data-recalc-dims="1" decoding="async" width="300" height="268" src="https://github.blog/wp-content/uploads/2025/04/Linkedin_Headline-bio.png?fit=300%2C268&#038;resize=300%2C268" class="attachment-medium size-medium" alt="A LinkedIn profile card showing professional information. The profile belongs to Cassidy Williams who uses She/Her pronouns. Her title lists multiple roles: Developer advocate, educator, advisor, software engineer, and memer. She&#039;s based in Chicago, Illinois, United States. The profile shows she has 17,124 followers and over 500 connections. The card includes a circular profile photo and partial view of keyboard keys in the upper right corner." srcset="https://github.blog/wp-content/uploads/2025/04/Linkedin_Headline-bio.png?w=594 594w, https://github.blog/wp-content/uploads/2025/04/Linkedin_Headline-bio.png?w=300 300w" sizes="(max-width: 300px) 100vw, 300px" /></a> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/linkedin_speaking-experience/'><img data-recalc-dims="1" decoding="async" width="300" height="267" src="https://github.blog/wp-content/uploads/2025/04/LinkedIn_Speaking-experience.png?fit=300%2C267&#038;resize=300%2C267" class="attachment-medium size-medium" alt="Two social media posts from Joseph Katsioloudes, a Tech Speaker in cyber security. The left post shows a selfie taken at a conference in Seattle with an audience visible in the background. The post mentions CyberWeek by ThinkCyber Foundation with GitHub Security Lab as a sponsor. A tag indicates he&#039;s with Nancy G. The right post mentions returning to London for a guest lecture, showing what appears to be a lecture hall. 
Both posts show profile pictures and engagement information." srcset="https://github.blog/wp-content/uploads/2025/04/LinkedIn_Speaking-experience.png?w=596 596w, https://github.blog/wp-content/uploads/2025/04/LinkedIn_Speaking-experience.png?w=300 300w" sizes="(max-width: 300px) 100vw, 300px" /></a> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/linkedin_engage-network/'><img data-recalc-dims="1" loading="lazy" decoding="async" width="300" height="268" src="https://github.blog/wp-content/uploads/2025/04/LinkedIn_Engage-network.png?fit=300%2C268&#038;resize=300%2C268" class="attachment-medium size-medium" alt="A social media profile and post from Jeffrey Berthiaume, Technology Innovator. Left side shows his profile with specialties in iOS, tvOS, Vision Pro, IoT, and Emerging Tech, including Connect and Message buttons. Right side displays his post about creating an app called &#039;nanglish&#039; with his kids during the holiday season. The post includes screenshots of the colorful app interface showing a grid of different colored squares. The post has engagement options below it and indicates a repost from BrainXchange which has 4,364 followers." srcset="https://github.blog/wp-content/uploads/2025/04/LinkedIn_Engage-network.png?w=594 594w, https://github.blog/wp-content/uploads/2025/04/LinkedIn_Engage-network.png?w=300 300w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a> <hr> <ul> <li>Use a headline that highlights your expertise, not just your title. 
</li> <li> <p>Make your &ldquo;About&rdquo; section shine with links to talks, blogs, and projects.</p> </li> <li> <p>Add speaking experience under &ldquo;Experience&rdquo; or &ldquo;Featured.&rdquo;</p> </li> </ul> <hr> <p><strong>On GitHub:</strong></p> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/github_readme/'><img data-recalc-dims="1" loading="lazy" decoding="async" width="300" height="268" src="https://github.blog/wp-content/uploads/2025/04/GitHub_README.png?fit=300%2C268&#038;resize=300%2C268" class="attachment-medium size-medium" alt="A GitHub profile page for Kedasha Kerr (username LadyKerr). The profile has a dark theme with a circular profile picture on the left showing a person with long braided hair, glasses, and red lipstick against an orange background. The profile introduction starts with &#039;Hey, I&#039;m Kedasha!&#039; followed by a partial bio mentioning she&#039;s a Software Engineer passionate about creation and learning. She describes herself as a Developer Advocate @github and Technical Content Creator. The profile includes a Follow button, a pinned repository called &#039;mealmetrics-copilot&#039; that was forked from another repository, and a small cartoon avatar wearing a red cap. Her Instagram handle @itsthatladdydev is also mentioned." 
srcset="https://github.blog/wp-content/uploads/2025/04/GitHub_README.png?w=594 594w, https://github.blog/wp-content/uploads/2025/04/GitHub_README.png?w=300 300w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/github_contributions/'><img data-recalc-dims="1" loading="lazy" decoding="async" width="300" height="267" src="https://github.blog/wp-content/uploads/2025/04/GitHub_Contributions.png?fit=300%2C267&#038;resize=300%2C267" class="attachment-medium size-medium" alt="GitHub contribution activity chart showing 2,593 contributions in the last year. The chart displays a grid of contribution squares organized by day of week (Monday, Wednesday, Friday) and month (March through September). Each square is colored in varying shades of green indicating different levels of activity, with darker green representing more contributions on those days. Below the chart are links to GitHub profiles (@github, @github-samples, @octobooth) and an activity overview section showing contributions to repositories including github/devrel and github/gh-skyline. A small note says &#039;Learn how we count contributions&#039; under the chart." srcset="https://github.blog/wp-content/uploads/2025/04/GitHub_Contributions.png?w=596 596w, https://github.blog/wp-content/uploads/2025/04/GitHub_Contributions.png?w=300 300w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a> <a href='https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/attachment/github_recent-activity/'><img data-recalc-dims="1" loading="lazy" decoding="async" width="300" height="268" src="https://github.blog/wp-content/uploads/2025/04/GitHub_Recent-activity.png?fit=300%2C268&#038;resize=300%2C268" class="attachment-medium size-medium" alt="GitHub profile pinned repositories section on a dark theme. 
Six repositories are displayed: &#039;octolamp&#039; (a 3D printed, GitHub infused smart light with 689 stars and 34 forks), &#039;DasDeployer&#039; (a Raspberry Pi powered manual release approval gate for Azure Pipelines written in Python with 95 stars and 5 forks), &#039;rpi-cluster&#039; (brief instructions about a Raspberry Pi Cluster visible in background on calls), &#039;PumpkinPi&#039; (spooky build status indicator with 76 stars, written in Python), &#039;smart-xmas&#039; (repository for adding something with 203 stars and 6 forks), and &#039;Camera Setup&#039; (with numbered instructions visible in a readme file)." srcset="https://github.blog/wp-content/uploads/2025/04/GitHub_Recent-activity.png?w=594 594w, https://github.blog/wp-content/uploads/2025/04/GitHub_Recent-activity.png?w=300 300w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a> <hr> <ul> <li>Update your profile README with your focus areas and links. </li> <li> <p>Pin key repos or projects you&rsquo;ve contributed to.</p> </li> <li> <p>Be active in discussions, even if most of your code is private.</p> </li> </ul> <p><strong>&#127908; Hot tip:</strong> Post about your submission journey! Sharing your process helps you engage with the community and might even inspire someone else to apply.</p> <h2 id="ready-to-take-the-stage" id="ready-to-take-the-stage" >Ready to take the stage?<a href="#ready-to-take-the-stage" class="heading-link pl-2 text-italic text-bold" aria-label="Ready to take the stage?"></a></h2> <p>You&rsquo;ve got the ideas. Now you&rsquo;ve got the blueprint. If you&rsquo;ve made it this far, we hope you feel ready&mdash;and excited&mdash;to throw your hat in the ring. Let&rsquo;s recap:</p> <ol> <li><strong>Lead with passion</strong> to find a topic you care deeply about. 
</li> <li> <p><strong>Craft a clear, compelling title</strong> that grabs attention and gives the content committee an immediate idea of your session topic and takeaways.</p> </li> <li> <p><strong>Make your submission a no-brainer</strong> by showing how it aligns with the event and adds value.</p> </li> <li> <p><strong>Polish your online presence</strong>&mdash;it might just tip the scale in your favor.</p> </li> </ol> <p>Whether you&rsquo;re a seasoned speaker or stepping into the spotlight for the first time, we can&rsquo;t wait to hear from you. And if you don&rsquo;t have a session idea this year, you can also nominate a speaker who deserves to take the stage. Submit a session proposal or a speaker nomination from now until Friday, May 2 at 11:59 pm PT to be considered!</p> <aside class="p-4 p-md-6 post-aside--large"><p><a href="https://reg.githubuniverse.com/flow/github/universe25/cfs/page/cfs-landing"><strong>Apply to speak at GitHub Universe or nominate a speaker &gt;</strong></a></p> <p>&#127903;&#65039; Registration for the main event isn&rsquo;t open yet, but if you want to be the first to know when tickets go on sale, <a href="https://githubuniverse.com/">sign up here to get notified</a>.</p> </aside> </p><p>Let&rsquo;s build the future together&mdash;one session at a time. 
&#128171;</p> </body></html> <p>The post <a href="https://github.blog/developer-skills/career-growth/cracking-the-code-how-to-wow-the-acceptance-committee-at-your-next-tech-event/">Cracking the code: How to wow the acceptance committee at your next tech event</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Gemma 3 QAT Models: Bringing state-of-the-Art AI to consumer GPUs - Google Developers Blog https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/ 2025-04-18T13:34:01.000Z The release of int4 quantized versions of Gemma 3 models, optimized with Quantization Aware Training (QAT) brings significantly reduced memory requirements, allowing users to run powerful models like Gemma 3 27B on consumer-grade GPUs such as the NVIDIA RTX 3090. Which AI model should I use with GitHub Copilot? - The GitHub Blog https://github.blog/?p=86536 2025-04-17T21:19:31.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p><em>This was originally published on our developer newsletter, GitHub Insider, which offers tips and tricks for devs at every level. If you&rsquo;re not subscribed, <a href="https://resources.github.com/newsletter/">go do that now</a>&mdash;you won&rsquo;t regret it (we promise).</em></p> <p>If you&rsquo;ve ever wondered which AI model is the best fit for your <a href="https://github.com/features/copilot">GitHub Copilot</a> project, you&rsquo;re not alone. Since each model has its own strengths, picking the right one can feel somewhat mysterious.</p> <aside class="p-4 p-md-6 post-aside--large"><p class="h5-mktg gh-aside-title">Big disclaimer!</p><p>AI moves fast, so these recommendations are subject to change. It&rsquo;s mid-April 2025 right now, though things will probably be different within a week of posting. 
Zoom zoom zoom.</p> </aside> <p>With models that prioritize speed, depth, or a balance of both, it helps to know what each one brings to the table. Let&rsquo;s break it down together. &#128071;</p> <h2 id="the-tldr" id="the-tldr" >The TL;DR<a href="#the-tldr" class="heading-link pl-2 text-italic text-bold" aria-label="The TL;DR"></a></h2> <ul> <li>&#128179; <strong>Balance between cost and performance</strong>: Go with GPT-4.1, GPT-4o, or Claude 3.5 Sonnet. </li> <li><strong>&#129689; Fast, lightweight tasks</strong>: o4-mini or Claude 3.5 Sonnet are your buddies. </li> <li><strong>&#128142; Deep reasoning or complex debugging</strong>: Think Claude 3.7 Sonnet, o3, or GPT 4.5. </li> <li><strong>&#128444;&#65039; Multimodal inputs (like images)</strong>: Check out Gemini 2.0 Flash or GPT-4o.</li> </ul> <p>Your mileage may vary and it&rsquo;s always good to try things yourself before taking someone else&rsquo;s word for it, but this is how these models were designed to be used. All that being said&hellip;</p> <p><strong>Let&rsquo;s talk models.</strong></p> <h2 id="%f0%9f%8f%8e%ef%b8%8f-ai-models-designed-for-coding-speed" id="%f0%9f%8f%8e%ef%b8%8f-ai-models-designed-for-coding-speed" >&#127950;&#65039; AI models designed for coding speed<a href="#%f0%9f%8f%8e%ef%b8%8f-ai-models-designed-for-coding-speed" class="heading-link pl-2 text-italic text-bold" aria-label="&#127950;&#65039; AI models designed for coding speed"></a></h2> <h3 id="o4-mini-and-o3-mini-the-speed-demons-%f0%9f%98%88" id="o4-mini-and-o3-mini-the-speed-demons-%f0%9f%98%88" >o4-mini and o3-mini: The speed demons &#128520;<a href="#o4-mini-and-o3-mini-the-speed-demons-%f0%9f%98%88" class="heading-link pl-2 text-italic text-bold" aria-label="o4-mini and o3-mini: The speed demons &#128520;"></a></h3> <p>Fast, efficient, and cost-effective, o4-mini and o3-mini are ideal for simple coding questions and quick iterations. 
If you&rsquo;re looking for a no-frills model, use these.</p> <p><strong>&#9989; Use them for:</strong></p> <ul> <li>Quick prototyping. </li> <li>Explaining code snippets. </li> <li>Learning new programming concepts. </li> <li>Generating boilerplate code.</li> </ul> <p><strong>&#128064; You may prefer another model:</strong> If your task spans multiple files or calls for deep reasoning, a higher&#8209;capacity model such as <strong>GPT&#8209;4.5</strong> or <strong>o3</strong> can keep more context in mind. Looking for extra expressive flair? Try <strong>GPT&#8209;4o</strong>.</p> <hr> <h2 id="%e2%9a%96%ef%b8%8f-ai-models-designed-for-balance" id="%e2%9a%96%ef%b8%8f-ai-models-designed-for-balance" >&#9878;&#65039; AI models designed for balance<a href="#%e2%9a%96%ef%b8%8f-ai-models-designed-for-balance" class="heading-link pl-2 text-italic text-bold" aria-label="&#9878;&#65039; AI models designed for balance"></a></h2> <h3 id="claude-3-5-sonnet-the-budget-friendly-helper-%f0%9f%98%8a" id="claude-3-5-sonnet-the-budget-friendly-helper-%f0%9f%98%8a" >Claude 3.5 Sonnet: The budget-friendly helper &#128522;<a href="#claude-3-5-sonnet-the-budget-friendly-helper-%f0%9f%98%8a" class="heading-link pl-2 text-italic text-bold" aria-label="Claude 3.5 Sonnet: The budget-friendly helper &#128522;"></a></h3> <p>Need solid performance but watching your costs? Claude 3.5 Sonnet is like a dependable sidekick. It&rsquo;s great for everyday coding tasks without burning through your monthly usage.</p> <p><strong>&#9989; Use it for:</strong></p> <ul> <li>Writing documentation. </li> <li>Answering language-specific questions. 
</li> <li>Generating code snippets.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> For elaborate multi&#8209;step reasoning or big&#8209;picture planning, consider stepping up to <strong>Claude 3.7 Sonnet</strong> or <strong>GPT&#8209;4.5</strong>.</p> <h3 id="gpt-4o-and-gpt-4-1-the-all-rounders-%f0%9f%8c%8e" id="gpt-4o-and-gpt-4-1-the-all-rounders-%f0%9f%8c%8e" >GPT-4o and <a href="https://github.blog/changelog/2025-04-14-openai-gpt-4-1-now-available-in-public-preview-for-github-copilot-and-github-models/">GPT-4.1</a>: The all-rounders &#127758;<a href="#gpt-4o-and-gpt-4-1-the-all-rounders-%f0%9f%8c%8e" class="heading-link pl-2 text-italic text-bold" aria-label="GPT-4o and &lt;a href=&quot;https://github.blog/changelog/2025-04-14-openai-gpt-4-1-now-available-in-public-preview-for-github-copilot-and-github-models/&quot;&gt;GPT-4.1&lt;/a&gt;: The all-rounders &#127758;"></a></h3> <p>These are your go-to models for general tasks. Need fast responses? Check. Want to work with text *and* images? Double check. GPT-4o and GPT-4.1 are like the Swiss Army knives of AI models: flexible, dependable, and cost-efficient.</p> <p><strong>&#9989; Use them for:</strong></p> <ul> <li>Explaining code blocks. </li> <li>Writing comments or docs. </li> <li>Generating small, reusable snippets. 
</li> <li>Multilingual prompts.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> Complex architectural reasoning or multi&#8209;step debugging may land more naturally with <strong>GPT&#8209;4.5</strong> or <strong>Claude 3.7 Sonnet</strong>.</p> <hr> <h2 id="%f0%9f%a7%a0-ai-models-designed-for-deep-thinking-and-big-projects" id="%f0%9f%a7%a0-ai-models-designed-for-deep-thinking-and-big-projects" >&#129504; AI models designed for deep thinking and big projects<a href="#%f0%9f%a7%a0-ai-models-designed-for-deep-thinking-and-big-projects" class="heading-link pl-2 text-italic text-bold" aria-label="&#129504; AI models designed for deep thinking and big projects"></a></h2> <h3 id="claude-3-7-sonnet-the-architect-%f0%9f%8f%a0" id="claude-3-7-sonnet-the-architect-%f0%9f%8f%a0" >Claude 3.7 Sonnet: The architect &#127968;<a href="#claude-3-7-sonnet-the-architect-%f0%9f%8f%a0" class="heading-link pl-2 text-italic text-bold" aria-label="Claude 3.7 Sonnet: The architect &#127968;"></a></h3> <p>This one&rsquo;s the power tool for large, complex projects. From multi-file refactoring to feature development across front end and back end, Claude 3.7 Sonnet shines when context and depth matter most.</p> <p><strong>&#9989; Use it for:</strong></p> <ul> <li>Refactoring large codebases. </li> <li>Planning complex architectures. </li> <li>Designing algorithms. 
</li> <li>Combining high-level summaries with deep analysis.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> For quick iterations or straightforward tasks, <strong>Claude 3.5 Sonnet</strong> or <strong>GPT&#8209;4o</strong> may deliver results with less overhead.</p> <h3 id="gemini-2-5-pro-the-researcher-%f0%9f%94%8e">Gemini 2.5 Pro: The researcher &#128270;<a href="#gemini-2-5-pro-the-researcher-%f0%9f%94%8e" class="heading-link pl-2 text-italic text-bold" aria-label="Gemini 2.5 Pro: The researcher &#128270;"></a></h3> <p>Gemini 2.5 Pro is the powerhouse for advanced reasoning and coding. It&rsquo;s built for complex tasks (think: deep debugging, algorithm design, and even scientific research). With its long-context capabilities, it can handle extensive datasets or documents with ease.</p> <p><strong>&#9989; Use it for:</strong></p> <ul> <li>Writing full functions, classes, or multi-file logic. </li> <li>Debugging complex systems. </li> <li>Analyzing scientific data and generating insights. </li> <li>Processing long documents, datasets, or codebases.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> For cost-sensitive tasks, <strong>o4-mini</strong> or <strong>Gemini 2.0 Flash</strong> are more budget-friendly options.</p> <h3 id="gpt-4-5-the-thinker-%f0%9f%92%ad">GPT-4.5: The thinker &#128173;<a href="#gpt-4-5-the-thinker-%f0%9f%92%ad" class="heading-link pl-2 text-italic text-bold" aria-label="GPT-4.5: The thinker &#128173;"></a></h3> <p>Got a tricky problem? Whether you&rsquo;re debugging multi-step issues or crafting full-on systems architectures, GPT-4.5 thrives on nuance and complexity.</p> <p><strong>&#9989; Use it for:</strong></p> <ul> <li>Writing detailed README files. </li> <li>Generating full functions or multi-file solutions. </li> <li>Debugging complex errors. 
</li> <li>Making architectural decisions.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> When you just need a quick iteration on something small&mdash;or you&rsquo;re watching tokens&mdash;<strong>GPT&#8209;4o</strong> can finish faster and cheaper.</p> <h3 id="o3-and-o1-the-deep-diver-%f0%9f%a5%bd">o3 and o1: The deep divers &#129405;<a href="#o3-and-o1-the-deep-diver-%f0%9f%a5%bd" class="heading-link pl-2 text-italic text-bold" aria-label="o3 and o1: The deep divers &#129405;"></a></h3> <p>These models are perfect for tasks that need precision and logic. Whether you&rsquo;re optimizing performance-critical code or refactoring a messy codebase, o3 and o1 excel in breaking down problems step by step.</p> <p><strong>&#9989; Use them for:</strong></p> <ul> <li>Code optimization. </li> <li>Debugging complex systems. </li> <li>Writing structured, reusable code. </li> <li>Summarizing logs or benchmarks.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> During early prototyping or lightweight tasks, a nimble model such as <strong>o4&#8209;mini</strong> or <strong>GPT&#8209;4o</strong> may feel snappier.</p> <hr> <h2 id="%f0%9f%96%bc%ef%b8%8f-multimodal-ai-models-designed-to-handle-it-all">&#128444;&#65039; Multimodal AI models designed to handle it all<a href="#%f0%9f%96%bc%ef%b8%8f-multimodal-ai-models-designed-to-handle-it-all" class="heading-link pl-2 text-italic text-bold" aria-label="&#128444;&#65039; Multimodal AI models designed to handle it all"></a></h2> <h3 id="gemini-2-0-flash-the-visual-thinker-%f0%9f%a4%94">Gemini 2.0 Flash: The visual thinker &#129300;<a href="#gemini-2-0-flash-the-visual-thinker-%f0%9f%a4%94" class="heading-link pl-2 text-italic text-bold" aria-label="Gemini 2.0 Flash: The visual thinker &#129300;"></a></h3> <p>Got visual 
inputs like UI mockups or diagrams? Gemini 2.0 Flash lets you bring images into the mix, making it a great choice for front-end prototyping or layout debugging.</p> <p><strong>&#9989; Use it for:</strong></p> <ul> <li>Analyzing diagrams or screenshots. </li> <li>Debugging UI layouts. </li> <li>Generating code snippets. </li> <li>Getting design feedback.</li> </ul> <p>&#128064; <strong>You may prefer another model:</strong> If the job demands step&#8209;by&#8209;step algorithmic reasoning, <strong>GPT&#8209;4.5</strong> or <strong>Claude 3.7 Sonnet</strong> will keep more moving parts in scope.</p> <hr> <h2 id="so-which-model-do-i-choose">So&hellip; which model do I choose?<a href="#so-which-model-do-i-choose" class="heading-link pl-2 text-italic text-bold" aria-label="So&hellip; which model do I choose?"></a></h2> <p>Here&rsquo;s the rule of thumb: Match the model to the task. Practice really does make perfect, and as you work with different models, it&rsquo;ll become clearer which ones work best for different tasks. The more I&rsquo;ve personally used certain models, the more I&rsquo;ve learned, &ldquo;oh, I should switch for this particular task,&rdquo; and &ldquo;this one will get me there.&rdquo;</p> <p>And because I enjoy staying employed, I would love to cheekily mention that you can (and should!) 
use these models with&hellip;</p> <ul> <li><a href="https://github.com/features/copilot">GitHub Copilot in your favorite IDE</a> </li> <li><a href="https://github.com/copilot">GitHub Copilot on GitHub.com</a> </li> <li><a href="https://github.blog/ai-and-ml/github-copilot/mastering-github-copilot-when-to-use-ai-agent-mode/">With agent mode or Copilot Edits</a> <ul> <li><a href="https://github.blog/changelog/2025-04-11-vscode-copilot-agent-mode-in-codespaces/">With agent mode in Codespaces</a> </li> <li><a href="https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/">With agent mode in VS Code</a></li> </ul> </li> </ul> <p>Good luck, go forth, and happy coding!</p> <div class="post-content-cta"><p><a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task">Learn more about AI models.</a></p> </div> </body></html> <p>The post <a href="https://github.blog/ai-and-ml/github-copilot/which-ai-model-should-i-use-with-github-copilot/">Which AI model should I use with GitHub Copilot?</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Start building with Gemini 2.5 Flash - Google Developers Blog https://developers.googleblog.com/en/start-building-with-gemini-25-flash/ 2025-04-17T19:09:03.000Z Gemini 2.5 Flash is in preview, offering improved reasoning capabilities through a "thinking" process that developers can control for cost and latency tradeoffs. This updated version aims to provide a cost-effective solution for complex tasks, balancing performance and price. 
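The Gemini 2.5 Flash announcement above says developers can control the model's "thinking" process to trade quality against cost and latency. As a minimal sketch of what that control looks like against the Gemini REST `generateContent` endpoint: the preview model name `gemini-2.5-flash-preview-04-17` and the `thinkingConfig.thinkingBudget` field are assumptions based on the preview release, so verify both against the current Gemini API reference before using them.

```python
import json
import os
import urllib.request

# Assumed preview model name; check the Gemini API model list before relying on it.
MODEL = "gemini-2.5-flash-preview-04-17"
API_KEY = os.environ.get("GEMINI_API_KEY", "")


def build_request(prompt: str, thinking_budget: int) -> dict:
    """Build a generateContent payload capping the model's internal 'thinking' tokens.

    thinking_budget=0 turns thinking off for the lowest latency and cost;
    larger budgets let the model reason longer on harder tasks.
    """
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }


def generate(prompt: str, thinking_budget: int = 0) -> str:
    """Send the request and return the first candidate's text."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{MODEL}:generateContent?key={API_KEY}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(prompt, thinking_budget)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

The official SDKs expose the same knob; the takeaway is simply that a single per-request integer lets you pick a spot on the cost/latency/quality curve instead of switching models.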
Making it easier to build with the Gemini API in Google AI Studio - Google Developers Blog https://developers.googleblog.com/en/making-it-easier-to-build-with-the-gemini-api-in-google-ai-studio/ 2025-04-16T22:34:01.000Z Google AI Studio now has an expanded gallery of starter apps, along with other updates including a more intuitive prompting experience, native code editing, various interactive examples demonstrating Gemini model capabilities, and a refreshed UI, so you can continue building with the Gemini API. GitHub Availability Report: March 2025 - The GitHub Blog https://github.blog/?p=86491 2025-04-16T21:02:57.000Z <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html><body><p>In March, we experienced one incident that resulted in degraded performance across GitHub services.</p> <p><strong>March 29 7:00 UTC (lasting 58 hours)</strong></p> <p>Between March 29 7:00 UTC and March 31 17:00 UTC, GitHub experienced service degradation due to two separate but related incidents. On March 29, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025, from 7:00 UTC to 16:40 UTC, users were unable to submit ebook and event registration forms on resources.github.com, also due to a service outage.</p> <p>The March 29 incident occurred due to expired credentials used for an internal service, preventing customers from being able to unsubscribe directly from marketing/sales topics through the <a href="http://github.com/settings/emails">github.com/settings/emails</a> UI and from performing the double opt-in step required by some countries. A similar credential expiry on March 31 resulted in users experiencing degradation accessing resources.github.com.</p> <p>The cause of the incident was traced to an issue in the automated alerting for monitoring upcoming credential expirations. 
The alerting bug meant the credentials were not flagged until after they had expired, leading to two incidents before we could deploy a durable fix. We mitigated it by renewing the credentials and redeploying the affected services.</p> <p>To improve future response times and prevent similar issues, we have enhanced our credential expiry detection, alerting, and rotation processes, and are working on improving on-call observability.</p> <hr> <p>Please follow our <a href="https://www.githubstatus.com/">status page</a> for real-time updates on status changes and post-incident recaps. To learn more about what we&rsquo;re working on, check out the <a href="https://github.blog/category/engineering/">GitHub Engineering Blog</a>.</p> </body></html> <p>The post <a href="https://github.blog/news-insights/company-news/github-availability-report-march-2025/">GitHub Availability Report: March 2025</a> appeared first on <a href="https://github.blog">The GitHub Blog</a>.</p> Three MarTech solutions putting generative AI in marketing - Google Developers Blog https://developers.googleblog.com/en/google-martech-solutions-putting-generative-ai-in-marketing/ 2025-04-16T17:09:02.000Z Three generative AI-powered MarTech solutions from Google designed to help developers streamline marketing material creation, personalize campaigns, and enhance ad performance. Discover ViGenAiR for video ad creation, Adios for image asset management and generation, and Copycat for on-brand ad copy generation. Bring your ideas to life: Veo 2 video generation available for developers - Google Developers Blog https://developers.googleblog.com/en/veo-2-video-generation-now-generally-available/ 2025-04-15T21:39:02.000Z Generate high-quality videos from text and image prompts with Veo 2, a video generation model, now generally available in the Gemini API and Google AI Studio to enhance your content creation and marketing efforts.