GitHub Status - Incident History
https://www.githubstatus.com
Last updated: Tue, 01 Jul 2025 02:15:59 +0000

Disruption with Claude 3.7 Sonnet in Copilot Chat
Jun 30, 19:55 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 30, 19:55 UTC Update - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains). We will continue monitoring to ensure stability, but mitigation is complete.
Jun 30, 19:14 UTC Update - We are experiencing degraded availability for the Claude 3.7 Sonnet model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Jun 30, 19:13 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/kkm7hd89m0yt

Incident With Actions
Jun 30, 19:00 UTC Resolved - Due to a degradation of one instance of our internal message delivery service, a percentage of jobs started between 06/30/2025 19:18 UTC and 06/30/2025 19:50 UTC failed and are no longer retryable. Runners assigned to these jobs will automatically recover within 24 hours, but deleting and recreating the runner will free it up immediately.
https://www.githubstatus.com/incidents/crd2y6xy6knn
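The runner cleanup suggested above can be scripted against the GitHub REST API. The sketch below is illustrative only; the owner, repository, token, and runner name are hypothetical placeholders rather than values from the incident, and organization-level runners have equivalent endpoints under /orgs/{org}/actions/runners.

```python
# Illustrative sketch only: list a repository's self-hosted runners and delete
# the one stuck on a non-retryable job so it can be re-registered right away.
# OWNER, REPO, TOKEN, and the runner name are hypothetical placeholders.
import requests

OWNER, REPO = "example-org", "example-repo"
TOKEN = "ghp_example_token"  # needs administration access to the repository
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runners"

# Find the stuck runner by name.
runners = requests.get(BASE, headers=HEADERS).json().get("runners", [])
stuck = next((r for r in runners if r["name"] == "stuck-runner-01"), None)

if stuck is not None:
    # Removing the runner frees it immediately; re-register it afterwards with a
    # fresh registration token (for example via config.sh on the runner host).
    requests.delete(f"{BASE}/{stuck['id']}", headers=HEADERS).raise_for_status()
```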
Disruption with some GitHub services
Jun 26, 23:33 UTC Resolved - On June 26, 2025, between 17:10 UTC and 23:30 UTC, around 40% of attempts to create a repository from a template repository failed. The failures were an unexpected result of a gap in testing and observability. We mitigated the incident by rolling back the deployment. We are working to improve our testing and automatic detection of errors associated with failed template repository creation.
Jun 26, 23:32 UTC Update - We identified an internal change that was causing errors when creating a repository from a template. This change has now been rolled back, and customers should no longer encounter errors when creating repositories from templates.
Jun 26, 23:05 UTC Update - Users may experience errors when creating a repository from a template. The error message may prompt the user to delete the repository; however, this deletion attempt will not be successful. We are investigating the cause of these errors.
Jun 26, 23:05 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/3l5g70d16ldz
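For context, the operation that was failing is repository creation from a template, which can also be exercised through the REST API. The snippet below is a minimal sketch with hypothetical owner, template, and token values, not a reproduction of the failing requests.

```python
# Hypothetical sketch: create a repository from a template repository via the
# REST API (the operation that was failing roughly 40% of the time above).
import requests

TOKEN = "ghp_example_token"  # placeholder personal access token
resp = requests.post(
    "https://api.github.com/repos/example-org/example-template/generate",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "owner": "example-org",        # account that will own the new repository
        "name": "repo-from-template",  # name of the repository to create
        "private": True,
        "include_all_branches": False,
    },
)
resp.raise_for_status()
print(resp.json()["full_name"])
```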
GitHub Enterprise Importer delays
Jun 26, 18:05 UTC Resolved - On June 26th, between 14:42 UTC and 18:05 UTC, the GitHub Enterprise Importer (GEI) service was in a degraded state, during which customers of the service experienced extended repository migration durations. Our investigation found that the combined effect of several database updates resulted in severe throttling of GEI to preserve overall database health. We have taken steps to prevent additional impact and are working to implement additional safeguards to prevent similar incidents from occurring in the future.
Jun 26, 18:04 UTC Update - The earlier delays affecting GitHub Enterprise Importer queries and jobs have been resolved, and the service is operating normally. Thank you for your patience while we investigated and addressed the issue.
Jun 26, 16:51 UTC Update - We're continuing to investigate delays with GitHub Enterprise Importer, including potential delays with queries and jobs. Next update in 60 minutes.
Jun 26, 15:19 UTC Update - We're continuing to investigate delays with GitHub Enterprise Importer, including potential delays with infrastructure. Next update in 60 minutes.
Jun 26, 14:43 UTC Update - GitHub Enterprise Importer is experiencing degraded throughput, resulting in significant slowdowns in migration processes and extended wait times for customers.
Jun 26, 14:42 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/hw33b7tc1lv2

Repository Navigation Bar Missing in GitHub Enterprise Cloud
Jun 24, 12:26 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 24, 11:00 UTC Update - We have identified that the navigation bar is missing on repository-related pages in GitHub Enterprise Cloud instances with data residency, and we are currently attempting a mitigation.
Jun 24, 10:55 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/p7mzl65jm7q3

Disruption with the GitHub mobile Android application
Jun 20, 11:20 UTC Resolved - Between June 19th, 2025 11:35 UTC and June 20th, 2025 11:20 UTC, the GitHub Mobile Android application was unable to log in new users. The iOS app was unaffected. This was due to a new GitHub App feature being tested internally, which was inadvertently enforced for all GitHub-owned applications, including GitHub Mobile. A mismatch in client and server expectations caused by this feature made logins fail. We mitigated the incident by disabling the feature flag controlling the feature. We are working to improve our time to detection and put in place stronger guardrails that reduce impact from internal testing on applications used by all customers.
Jun 20, 10:53 UTC Update - We are investigating reports that some users are unable to sign in to the GitHub app on Android. Normal functionality is otherwise available. Our team is actively working to identify the cause.
Jun 20, 10:49 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/sd4v95zxm3np
Disruption with some GitHub services
Jun 18, 23:13 UTC Resolved - On June 18, 2025, between 22:20 UTC and 23:00 UTC, the Claude Sonnet 3.7 and Claude Sonnet 4 models for GitHub Copilot Chat experienced degraded performance. During the impact, some users would receive an immediate error when making a request to a Claude model. This was due to upstream errors with one of our model providers, which have since been resolved. We mitigated the impact by disabling the affected provider endpoints and redirecting Claude Sonnet requests to additional partners. We are working to update our incident response playbooks for infrastructure provider outages and improve our monitoring and alerting systems to reduce our time to detection and mitigation of issues like this one in the future.
Jun 18, 22:42 UTC Update - We are experiencing degraded availability for the Claude 4 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected. We recommend using Claude 3.7 as an alternative.
Jun 18, 22:40 UTC Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 18, 22:39 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/z8pt03f02ddv

Partial Actions Cache degradation
Jun 18, 18:47 UTC Resolved - On June 18, 2025, between 08:21 UTC and 18:47 UTC, some Actions jobs experienced intermittent failures downloading from the Actions Cache service. During the incident, 17% of workflow runs experienced cache download failures, resulting in a warning message in the logs and performance degradation. The disruption was caused by a network issue in our database systems that led to a database replica getting out of sync with the primary. We mitigated the incident by routing cache download URL requests to bypass the out-of-sync replica until it was fully restored. To prevent this class of incidents, we are developing the capability in our database system to more robustly bypass out-of-sync replicas. We are also implementing improved monitoring to help us detect similar issues more quickly going forward.
Jun 18, 18:11 UTC Update - We are continuing to roll out a mitigation and are progressing towards having it in place for all customers.
Jun 18, 17:22 UTC Update - We are currently deploying a mitigation for this issue and will be rolling it out shortly. We will update our progress as we monitor the deployment.
Jun 18, 17:03 UTC Update - We are actively investigating and working on a mitigation for database instability leading to replication lag in the Actions Cache service. We will continue to post updates on progress towards mitigation.
Jun 18, 16:46 UTC Update - The Actions Cache service is experiencing degradation in a number of regions, causing cache misses when attempting to download cache entries. This is not causing workflow failures, but workflow runtime might be elevated for certain runs.
Jun 18, 16:46 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/9qcwpy3ckdrf
Partial Degradation in Issues Experience
Jun 18, 17:42 UTC Resolved - On June 18, 2025, between 15:15 UTC and 19:29 UTC, the Issues service was degraded, and certain GraphQL queries accessing the `ReactionGroup.reactors` field returned errors. Our query routing infrastructure was impacted by exceptions from a particular database migration, resulting in errors for an average of 0.0097% of overall GraphQL requests (peaking at 0.02%). We mitigated the incident by reverting the migration. We continue to investigate the cause of the exceptions and are holding off on similar migrations until the underlying issue is understood and resolved.
Jun 18, 17:41 UTC Update - We have confirmed that we are currently within SLA for the Issues experience. Remaining clean-up will complete over the next few hours to fully restore the ability to search Issues by reaction as well as related GraphQL API queries.
Jun 18, 17:07 UTC Update - We have confirmed that impact is restricted to failing to display reactions on some issues and searching issues by reaction. Mitigation is in progress to restore these features and should be fully rolled out to all customers in the next few hours.
Jun 18, 16:25 UTC Update - Some users are seeing errors when accessing issues on GitHub. We have identified the problem and are working on a revert to restore full functionality.
Jun 18, 16:21 UTC Investigating - We are investigating reports of degraded performance for Issues.
https://www.githubstatus.com/incidents/7kltzm6r774q
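For reference, a minimal sketch of a GraphQL query that reads the affected `ReactionGroup.reactors` field is shown below; the repository owner, name, issue number, and token are hypothetical placeholders, and the query itself is only an example of the kind of request that returned errors during the incident.

```python
# Hypothetical sketch: query reaction groups on an issue, including the
# ReactionGroup.reactors field mentioned in the incident summary above.
import requests

TOKEN = "ghp_example_token"  # placeholder token with read access
QUERY = """
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    issue(number: $number) {
      reactionGroups {
        content
        reactors { totalCount }
      }
    }
  }
}
"""
resp = requests.post(
    "https://api.github.com/graphql",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "query": QUERY,
        "variables": {"owner": "example-org", "name": "example-repo", "number": 1},
    },
)
resp.raise_for_status()
for group in resp.json()["data"]["repository"]["issue"]["reactionGroups"]:
    print(group["content"], group["reactors"]["totalCount"])
```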
Incident with multiple GitHub services
Jun 17, 20:22 UTC Resolved - On June 17, 2025, between 19:32 UTC and 20:03 UTC, an internal routing policy deployment to a subset of network devices caused reachability issues for certain network address blocks within our datacenters. Authenticated users of the github.com UI experienced 3-4% error rates for the duration. Authenticated callers of the API experienced 40% error rates. Unauthenticated requests to the UI and API experienced nearly 100% error rates for the duration. The Actions service experienced 2.5% of runs being delayed for an average of 8 minutes and 3% of runs failing. Large File Storage (LFS) requests experienced 0.978% errors. At 19:54 UTC, the deployment was rolled back, and network availability for the affected systems was restored. At 20:03 UTC, we fully restored normal operations. To prevent similar issues, we are expanding our validation process for routing policy changes.
Jun 17, 20:15 UTC Update - Actions is operating normally.
Jun 17, 20:14 UTC Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Jun 17, 20:13 UTC Update - Webhooks is operating normally.
Jun 17, 20:12 UTC Update - Pull Requests is operating normally.
Jun 17, 20:10 UTC Update - API Requests is operating normally.
Jun 17, 20:06 UTC Update - Issues is operating normally.
Jun 17, 20:05 UTC Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 20:04 UTC Update - We experienced problems with multiple services, causing disruptions for some users. We have identified the cause and are rolling out changes to restore normal service. Many services are recovering, but full recovery is ongoing.
Jun 17, 20:04 UTC Update - Copilot is operating normally.
Jun 17, 20:03 UTC Update - Pages is operating normally.
Jun 17, 20:01 UTC Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:55 UTC Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:55 UTC Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:54 UTC Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:54 UTC Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:53 UTC Update - API Requests is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:53 UTC Update - We are investigating reports of issues with many services impacting segments of customers. We will continue to keep users updated on progress towards mitigation.
Jun 17, 19:51 UTC Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:49 UTC Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:49 UTC Update - Copilot is experiencing degraded availability. We are continuing to investigate.
Jun 17, 19:47 UTC Update - Issues is experiencing degraded performance. We are continuing to investigate.
Jun 17, 19:42 UTC Investigating - We are investigating reports of degraded performance for Copilot.
https://www.githubstatus.com/incidents/y7lb2rg4btd7
Some Copilot chat models are failing requests
Jun 12, 21:07 UTC Resolved - On June 12, 2025, between 17:55 UTC and 21:07 UTC, the GitHub Copilot service was degraded and experienced unavailability for Gemini models and reduced availability for Claude models. Users experienced significantly elevated error rates for code completions, slow response times, timeouts, and chat functionality interruptions across VS Code, JetBrains IDEs, and GitHub Copilot Chat. This was due to an outage affecting one of our model providers. We mitigated the incident by temporarily disabling the affected provider endpoints to reduce user impact. We are working to update our incident response playbooks for infrastructure provider outages and improve our monitoring and alerting systems to reduce our time to detection and mitigation of issues like this one in the future.
Jun 12, 21:07 UTC Update - All impacted chat models have recovered, and users should no longer experience reduced availability.
Jun 12, 20:39 UTC Update - We are seeing recovery in success rates for the impacted Claude models (Sonnet 4 and Opus 4), and limited recovery in the Gemini models (2.5 Pro and 2.0 Flash). We will continue to monitor and provide updates until full recovery.
Jun 12, 20:21 UTC Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 12, 20:05 UTC Update - Claude Sonnet 4 and Opus 4 models continue to have degraded availability in Copilot Chat, VS Code, and other Copilot products. Gemini 2.5 Pro and 2.0 Flash are currently unavailable. Our upstream model provider has indicated that they have identified the problem and are applying mitigations.
Jun 12, 19:14 UTC Update - The Gemini (2.5 Pro and 2.0 Flash) and Claude (Sonnet 4 and Opus 4) chat models in Copilot are still experiencing reduced availability. We are actively communicating with our upstream model provider to resolve the issue and restore full service. We will provide another update by 20:15 UTC.
Jun 12, 18:37 UTC Update - We redirected requests for Claude 3.7 Sonnet to additional partners, and users should see recovery when using that model. We are still experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products.
Jun 12, 18:23 UTC Update - We are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 3.7, Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Jun 12, 18:19 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/j46wj670px33
Incident with Actions
Jun 12, 20:26 UTC Resolved - Multiple services critical to GitHub's attestation infrastructure experienced an outage which prevented Fulcio from issuing signing certificates. During the outage, GitHub customers who use the "actions/attest-build-provenance" action from public repositories were not able to generate attestations.
Jun 12, 18:56 UTC Update - Customers are currently unable to generate attestations from public repositories due to a broader outage with our partners.
Jun 12, 18:50 UTC Investigating - We are investigating reports of degraded performance for Actions.
https://www.githubstatus.com/incidents/d9xd9k1j6sl0

Disruption with some GitHub services
Jun 11, 01:51 UTC Resolved - Between 2025-06-10 12:25 UTC and 2025-06-11 01:51 UTC, GitHub Enterprise Cloud (GHEC) customers with approximately 10,000 or more users saw performance degradation and 5xx errors when loading the Enterprise Settings' People management page. Less than 2% of page requests resulted in an error. The issue was caused by a database change that replaced an index required for the page load. The issue was resolved by reverting the database change. To prevent similar incidents, we are improving the testing and validation process for replacing database indexes.
Jun 11, 01:08 UTC Update - The fix is currently rolling out to production. We will update here once we have verified it.
Jun 10, 23:32 UTC Update - We are working to deploy the fix for this issue. We will update again once it is deployed and as we monitor recovery.
Jun 10, 22:42 UTC Update - We have the fix ready; once it is deployed, we will provide another update confirming that it has resolved the issue.
Jun 10, 21:04 UTC Update - We have identified the solution to the performance issue and are working on the mitigation. Impact continues to be limited to very large enterprise customers when viewing the People page.
Jun 10, 20:09 UTC Update - The mitigation to add a supporting index to improve the performance of the People page did not resolve the issue, and we are continuing to investigate a solution.
Jun 10, 18:57 UTC Update - We are working on the mitigation and anticipate recovery within an hour.
Jun 10, 18:35 UTC Update - Large enterprise customers may encounter issues loading the People page.
Jun 10, 18:17 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/gj9d2m9x4mff
Codespaces billing is delayed
Jun 10, 19:08 UTC Resolved - On June 10, 2025, between 12:15 UTC and 19:04 UTC, Codespaces billing data processing experienced delays due to capacity issues in our worker pool. Approximately 57% of codespaces were affected during this incident, during which some customers may have observed incomplete or delayed billing usage information in their dashboards and usage reports, and may not have received timely notifications about approaching usage or spending limits. The incident was caused by an increase in the number of jobs in our worker pool without a corresponding increase in capacity, resulting in a backlog of unprocessed Codespaces billing jobs. We mitigated the issue by scaling up worker capacity, allowing the backlog to clear and billing data to catch up. We started seeing recovery immediately at 17:40 UTC and were fully caught up by 19:04 UTC. To prevent recurrence, we are moving critical billing jobs into a dedicated worker pool monitored by the Codespaces team, and are reviewing alerting thresholds to ensure more rapid detection and mitigation of delays in the future.
Jun 10, 18:21 UTC Update - We've increased capacity to process the Codespaces billing jobs and are seeing recovery; we expect full mitigation within the hour.
Jun 10, 17:47 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/6nxmxxqcgmh7
Incident with Pull Requests
Jun 10, 14:46 UTC Resolved - On June 10, 2025, between 14:28 UTC and 14:45 UTC, the pull request service experienced a period of degraded performance, resulting in merge error rates exceeding 1%. The root cause was an overloaded host in our Git infrastructure. We mitigated the incident by removing this host from the set of valid replicas until it was healthy again. We are working to improve the mechanisms in our existing infrastructure that should protect us from such problems, and we will be revisiting why they did not protect us as expected in this particular scenario.
Jun 10, 14:28 UTC Investigating - We are investigating reports of degraded performance for Pull Requests.
https://www.githubstatus.com/incidents/5n51wd9mnkz0

Incident With Copilot
Jun 6, 23:00 UTC Resolved - On June 6, 2025, an update to mitigate a previous incident led to automated scaling of the database infrastructure used by Copilot Coding Agent. The clients of the service were not built to handle an additional partition automatically, so they were unable to retrieve data across partitions, resulting in unexpected 404 errors. As a result, approximately 17% of coding sessions displayed an incorrect final state, such as sessions appearing in progress when they were actually completed. Additionally, some Copilot-authored pull requests were missing timeline events indicating task completion. Importantly, this did not affect Copilot Coding Agent's ability to finish code tasks and submit pull requests. To prevent similar issues in the future, we are taking steps to improve our systems and monitoring.
https://www.githubstatus.com/incidents/5g8smlrj5ynp

Incident with Copilot
Jun 6, 12:40 UTC Resolved - On June 6, 2025, between 00:21 UTC and 12:40 UTC, the Copilot service was degraded and a subset of Copilot Free users were unable to sign up for or use the Copilot Free service on github.com. This was due to a change in licensing code that resulted in some users losing access despite being eligible for Copilot Free. We mitigated this through a rollback of the offending change at 11:39 UTC, after which users were once again able to use their Copilot Free access. As a result of this incident, we have improved monitoring of Copilot changes during rollout. We are also working to reduce our time to detect and mitigate issues like this one in the future.
Jun 6, 12:40 UTC Update - Copilot is operating normally.
Jun 6, 12:18 UTC Update - We are continuing to monitor recovery and expect a complete resolution very shortly.
Jun 6, 11:31 UTC Update - The changes have been reverted and we are seeing signs of recovery. We expect impact to be largely mitigated, but are continuing to monitor and will update further as progress continues.
Jun 6, 10:39 UTC Update - We have identified changes that may be causing the issue and are working to revert the offending changes. We will continue to keep users updated as we work toward mitigation.
Jun 6, 10:04 UTC Update - We are investigating reports of users being unable to use Copilot Free after a trial subscription for Copilot Pro has ended. We will continue to keep users updated on progress towards mitigation.
Jun 6, 09:58 UTC Investigating - We are investigating reports of degraded performance for Copilot.
https://www.githubstatus.com/incidents/wqrqgd9gyvz5
Incident with Actions
Jun 5, 19:29 UTC Resolved - On June 5th, 2025, between 17:47 UTC and 19:20 UTC, the Actions service was degraded, leading to run start delays and intermittent job failures. During this period, 47.2% of runs had delayed starts and 21.0% of runs failed. The impact extended beyond Actions itself: 60% of Copilot Coding Agent sessions were cancelled, and all Pages sites using branch-based builds failed to deploy (though Pages serving remained unaffected). The issue was caused by a spike in load between internal Actions services exposing a misconfiguration that caused throttling of requests in the critical path of run starts. We mitigated the incident by correcting the service configuration to prevent throttling and have updated our deployment process to ensure the correct configuration is preserved moving forward.
Jun 5, 19:02 UTC Update - We have applied a mitigation and are beginning to see recovery. We are continuing to monitor.
Jun 5, 18:35 UTC Update - Actions is experiencing degraded availability. We are continuing to investigate.
Jun 5, 18:30 UTC Update - Users of Actions will see delays in jobs starting or job failures. Users of Pages will see slow or failed deployments.
Jun 5, 18:01 UTC Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jun 5, 18:00 UTC Investigating - We are investigating reports of degraded performance for Actions.
https://www.githubstatus.com/incidents/ry1gsyjqj4qh
Incident with Actions
Jun 4, 15:55 UTC Resolved - On June 4, 2025, between 14:35 UTC and 15:50 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 15.4% of all workflow runs were delayed by an average of 16 minutes. An unexpected load pattern revealed a scaling issue in our backend infrastructure. We mitigated the incident by blocking the requests that triggered this pattern. We are improving our rate limiting mechanisms to better handle unexpected load patterns while maintaining service availability. We are also strengthening our incident response procedures to reduce the time to mitigate for similar issues in the future.
Jun 4, 15:39 UTC Update - We have applied mitigations and are monitoring for recovery.
Jun 4, 15:19 UTC Update - We are currently investigating delays with Actions triggering for some users.
Jun 4, 15:15 UTC Investigating - We are investigating reports of degraded performance for Actions.
https://www.githubstatus.com/incidents/v7vmwf4pyx6y

Codespaces Scheduled Maintenance
May 31, 04:30 UTC Completed - The scheduled maintenance has been completed.
May 29, 21:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 29, 21:01 UTC Scheduled - Codespaces will be undergoing global maintenance from May 29, 2025 21:30 UTC to May 31, 2025 04:30 UTC. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete. During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones. To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.
https://www.githubstatus.com/incidents/nwjwwhj118sf
Disruption with some GitHub services
May 30, 15:57 UTC Resolved - On May 30, 2025, between 08:10 UTC and 16:00 UTC, the Microsoft Teams GitHub integration service experienced a complete service outage. During this period, the service was unable to deliver notifications or process user requests, resulting in a 100% error rate for all integration functionality except link previews. This outage was due to an authentication issue with our downstream provider. We mitigated the incident by working with our provider to restore service functionality and are working to migrate to more durable authentication methods to reduce the risk of similar issues in the future.
May 30, 14:47 UTC Update - Our team is continuing to work to mitigate the source of the disruption affecting a small set of customers using the GitHub Microsoft Teams integration.
May 30, 12:29 UTC Update - We are experiencing a disruption with our Microsoft Teams integration. Investigations are underway and we will provide further updates as we progress.
May 30, 11:20 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/p1rf575rlqml

Codespaces Scheduled Maintenance
May 29, 16:30 UTC Completed - The scheduled maintenance has been completed.
May 28, 16:30 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 22, 15:26 UTC Scheduled - Codespaces will be undergoing global maintenance from 16:30 UTC on Wednesday, May 28 to 16:30 UTC on Thursday, May 29. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete. During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones. To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.
https://www.githubstatus.com/incidents/67vdd3b7d1zq
Disruption with some GitHub services
May 28, 14:43 UTC Resolved - On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu 24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments and reduced available capacity. Approximately 19.7% of Ubuntu 24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected. By 12:45 UTC, we mitigated the issue by redeploying backend components to reset state and scaling up available resources to work through the backlog of queued jobs more quickly. We are working to improve our deployment and failover resiliency and validation to reduce the likelihood of similar issues in the future.
May 28, 14:35 UTC Update - We are continuing to monitor the affected Actions runners to ensure a smooth recovery.
May 28, 13:42 UTC Update - We are observing indications of recovery with the affected Actions runners. The team will continue monitoring systems to ensure a return to normal service.
May 28, 12:41 UTC Update - We're continuing to investigate delays in Actions runners for hosted Ubuntu 24. We will provide further updates as more information becomes available.
May 28, 11:49 UTC Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 28, 11:42 UTC Update - Actions is experiencing high wait times for obtaining standard hosted runners for Ubuntu 24. Other hosted labels and self-hosted runners are not impacted.
May 28, 11:11 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/l5jqk83qnfzd

Incident with Actions
May 27, 13:31 UTC Resolved - On May 27, 2025, between 09:31 UTC and 13:31 UTC, some Actions jobs experienced failures uploading to and downloading from the Actions Cache service. During the incident, 6% of all workflow runs couldn't upload or download cache entries from the service, resulting in a non-blocking warning message in the logs and performance degradation. The disruption was caused by an infrastructure update related to the retirement of a legacy service, which unintentionally impacted Cache service availability. We resolved the incident by reverting the change and have since implemented a permanent fix to prevent recurrence. We are improving our configuration change processes by introducing additional end-to-end tests to cover the identified gaps, and implementing deployment pipeline improvements to reduce mitigation time for similar issues in the future.
May 27, 13:03 UTC Update - Mitigation is applied and we're seeing signs of recovery. We're monitoring the situation until the mitigation is applied to all affected repositories.
May 27, 12:27 UTC Update - We are experiencing degradation with the GitHub Actions cache service and are working on applying the appropriate mitigations.
May 27, 12:26 UTC Investigating - We are investigating reports of degraded performance for Actions.
https://www.githubstatus.com/incidents/9hzy25gws8vh
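As an aside not drawn from the incident report: repository administrators who want to confirm which cache entries exist after a cache-service degradation like the ones above can list them through the REST API. The sketch below uses hypothetical owner, repository, and token values.

```python
# Hypothetical sketch: list a repository's Actions cache entries, e.g. to check
# which cache keys are present and when they were last accessed.
import requests

OWNER, REPO = "example-org", "example-repo"  # placeholders
TOKEN = "ghp_example_token"                  # placeholder token
resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/caches",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
for cache in resp.json().get("actions_caches", []):
    print(cache["key"], cache["size_in_bytes"], cache["last_accessed_at"])
```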
Disruption with some GitHub services
May 27, 12:41 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 27, 12:20 UTC Investigating - We are currently investigating this issue.
https://www.githubstatus.com/incidents/2fk03fzv6zk0