GitHub Status - Incident History (last updated 2025-10-23T01:37:45Z)

Incident with API Requests (Incident 26848498)
Oct 22, 15:53 UTC: Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 22, 15:53 UTC: Update - API Requests is operating normally.
Oct 22, 15:17 UTC: Update - We have identified a possible source of the issue and there is currently no user impact, but we are continuing to investigate and will not resolve this incident until we have more confidence in our mitigations and investigation results.
Oct 22, 14:37 UTC: Update - Some users may see slow or timed-out requests, or "not found" errors, when browsing repos. We have identified slowness in our platform and are investigating.
Oct 22, 14:29 UTC: Investigating - We are investigating reports of degraded performance for API Requests

Disruption with some GitHub services (Incident 26837586)
Oct 21, 17:39 UTC: Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 21, 17:18 UTC: Update - Mitigation continues; the impact is limited to Enterprise Cloud customers who have configured SAML at the organization level.
Oct 21, 17:11 UTC: Update - We are continuing to work on mitigation of this issue.
Oct 21, 16:33 UTC: Update - We've identified the issue affecting some users with SAML/OIDC authentication and are actively working on mitigation. Some users may not be able to authenticate during this time.
Oct 21, 16:03 UTC: Update - We're seeing issues for a small number of customers with SAML/OIDC authentication for GitHub.com users. We are investigating.
Oct 21, 16:00 UTC: Investigating - We are currently investigating this issue.

Incident with Actions (Incident 26833707)
Oct 21, 12:28 UTC: Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 21, 11:59 UTC: Update - We were able to apply a mitigation and we are now seeing recovery.
Oct 21, 11:37 UTC: Update - We are seeing about 10% of Actions runs taking longer than 5 minutes to start. We're still investigating and will provide an update by 12:00 UTC.
Oct 21, 09:59 UTC: Update - We are still seeing delays in starting some Actions runs and are currently investigating. We will provide updates as we have more information.
Oct 21, 09:25 UTC: Update - We are seeing delays in starting some Actions runs and are currently investigating.
Oct 21, 09:12 UTC: Investigating - We are investigating reports of degraded performance for Actions

Disruption with Grok Code Fast 1 in Copilot (Incident 26820913)
Oct 20, 16:40 UTC: Resolved - From October 20th at 14:10 UTC until 16:40 UTC, the Copilot service experienced degradation due to an infrastructure issue which impacted the Grok Code Fast 1 model, leading to a spike in errors affecting 30% of users. No other models were impacted. The incident was caused by an outage at an upstream provider.
Oct 20, 16:39 UTC: Update - The issues with our upstream model provider continue to improve, and Grok Code Fast 1 is once again stable in Copilot Chat, VS Code and other Copilot products.
Oct 20, 16:07 UTC: Update - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1, which is impacting 6% of users. We've been informed they are implementing fixes, but users can expect some requests to intermittently fail until all issues are resolved.
Oct 20, 14:47 UTC: Update - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Oct 20, 14:46 UTC: Investigating - We are investigating reports of degraded performance for Copilot

Codespaces creation failing (Incident 26815001)
Oct 20, 11:01 UTC: Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 20, 10:56 UTC: Update - We are now seeing sustained recovery. As we continue to make our final checks, we hope to resolve this incident in the next 10 minutes.
Oct 20, 10:15 UTC: Update - We are seeing early signs of recovery for Codespaces. The team will continue to monitor and keep this incident active as a line of communication until we are confident of full recovery.
Oct 20, 09:34 UTC: Update - We are continuing to monitor Codespaces error rates and will report further as we have more information.
Oct 20, 09:01 UTC: Update - We are seeing increased error rates with Codespaces generally. This is due to a third-party provider experiencing problems. This impacts both creation of new Codespaces and resumption of existing ones. We continue to monitor and will report with more details as we have them.
Oct 20, 08:56 UTC: Investigating - We are investigating reports of degraded availability for Codespaces

Disruption with push notifications (Incident 26788390)
Oct 17, 14:12 UTC: Resolved - On October 17th, 2025, between 12:51 UTC and 14:01 UTC, mobile push notifications failed to be delivered for a total duration of 70 minutes. This affected github.com and GitHub Enterprise Cloud in all regions. The disruption was related to an erroneous configuration change to cloud resources used for mobile push notification delivery. We are reviewing our procedures and management of these cloud resources to prevent such an incident in the future.
Oct 17, 14:01 UTC: Update - We're investigating an issue with mobile push notifications. All notification types are affected, but notifications remain accessible in the app's inbox. For 2FA authentication, please open the GitHub mobile app directly to complete login.
Oct 17, 13:11 UTC: Investigating - We are currently investigating this issue.

Disruption with some GitHub services (Incident 26754510)
Oct 14, 18:57 UTC: Resolved - On October 14th, 2025, between 18:26 UTC and 18:57 UTC a subset of unauthenticated requests to the commit endpoint for certain repositories received 503 errors. During the event, the average error rate was 3%, peaking at 3.5% of total requests. This event was triggered by a recent configuration change and some traffic pattern shifts on the service. We were alerted of the issue immediately and made changes to the configuration in order to mitigate the problem. We are working on automatic mitigation solutions and better traffic handling in order to prevent issues like this in the future.
Oct 14, 18:26 UTC: Investigating - We are currently investigating this issue.

Disruption with GPT-5-mini in Copilot (Incident 26752063)
Oct 14, 16:00 UTC: Resolved - On Oct 14th, 2025, between 13:34 UTC and 16:00 UTC the Copilot service was degraded for the GPT-5 mini model. On average, 18% of the requests to GPT-5 mini failed due to an issue with our upstream provider. We notified the upstream provider of the problem as soon as it was detected and mitigated the issue by failing over to other providers. The upstream provider has since resolved the issue. We are working to improve our failover logic to mitigate similar upstream failures more quickly in the future.
Oct 14, 16:00 UTC: Update - GPT-5-mini is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
Oct 14, 15:42 UTC: Update - We are continuing to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue. Other models continue to be available and working as expected.
Oct 14, 14:50 UTC: Update - We continue to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue. Other models continue to be available and working as expected.
Oct 14, 14:07 UTC: Update - We are experiencing degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Oct 14, 14:05 UTC: Investigating - We are investigating reports of degraded performance for Copilot

Incident with Webhooks (Incident 26702024)
Oct 9, 16:40 UTC: Resolved - On October 9th, 2025, between 14:35 UTC and 15:21 UTC, a network device in maintenance mode that was undergoing repairs was brought back into production before repairs were completed. Network traffic traversing this device experienced significant packet loss. Authenticated users of the github.com UI experienced increased latency during the first 5 minutes of the incident.
API users experienced error rates of up to 7.3%, which then stabilized at about 0.05% until the issue was mitigated. The Actions service saw 24% of runs delayed for an average of 13 minutes. Large File Storage (LFS) requests experienced a minimally increased error rate, with 0.038% of requests erroring. To prevent similar issues, we are enhancing the validation process for device repairs of this category.
Oct 9, 16:39 UTC: Update - All services have fully recovered.
Oct 9, 16:27 UTC: Update - Actions has fully recovered but Notifications is still experiencing delays. We will continue to update as the system is fully restored to normal operation.
Oct 9, 16:24 UTC: Update - Actions is operating normally.
Oct 9, 16:08 UTC: Update - Pages is operating normally.
Oct 9, 16:04 UTC: Update - Git Operations is operating normally.
Oct 9, 16:02 UTC: Update - Actions and Notifications are still experiencing delays as we process the backlog. We will continue to update as the system is fully restored to normal operation.
Oct 9, 15:51 UTC: Update - Pull Requests is operating normally.
Oct 9, 15:48 UTC: Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:44 UTC: Update - We are seeing full recovery in many of our systems, but delays are still expected for Actions. We will continue to update as the system is fully restored to normal operation.
Oct 9, 15:43 UTC: Update - Webhooks is operating normally.
Oct 9, 15:40 UTC: Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:39 UTC: Update - Issues is operating normally.
Oct 9, 15:38 UTC: Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:26 UTC: Update - API Requests is operating normally.
Oct 9, 15:25 UTC: Update - We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
Oct 9, 15:20 UTC: Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:20 UTC: Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:17 UTC: Update - Actions is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:11 UTC: Update - We are investigating widespread reports of delays and increased latency in various services. We will continue to keep users updated on progress toward mitigation.
Oct 9, 15:09 UTC: Update - Issues is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:09 UTC: Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:09 UTC: Update - Pages is experiencing degraded performance. We are continuing to investigate.
Oct 9, 14:50 UTC: Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 9, 14:45 UTC: Investigating - We are investigating reports of degraded availability for Webhooks

Multiple GitHub API endpoints are experiencing errors (Incident 26701366)
Oct 9, 13:56 UTC: Resolved - Between 13:39 UTC and 13:42 UTC on Oct 9, 2025, around 2.3% of REST API calls and 0.4% of Web traffic were impacted due to the partial rollout of a new feature that had more impact on one of our primary databases than anticipated. When the feature was partially rolled out, it performed an excessive number of writes per request, which caused excessive latency for writes from other API and Web endpoints and resulted in 5xx errors to customers. The issue was identified by our automatic alerting and reverted by turning down the percentage of traffic to the new feature, which led to recovery of the data cluster and services. We are working to improve the way we roll out new features like this and to move the specific writes from this incident to a storage solution more suited to this type of activity. We have also optimized this particular feature to prevent its rollout from having future impact on other areas of the site. We are also investigating how we can even more quickly identify issues like this.
Oct 9, 13:54 UTC: Update - A feature was partially rolled out that had high impact on one of our primary databases, but we were able to roll it back. All services are recovered but we will monitor for recovery before statusing green.
Oct 9, 13:52 UTC: Investigating - We are currently investigating this issue.
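
The mitigation described above, turning down the percentage of traffic to the new feature, is a standard percentage-based rollout control. As a rough illustration only (GitHub has not published details of its feature-flag system; the function name, flag name, and bucketing scheme below are hypothetical), such a gate can be sketched as a stable hash bucket per actor, so dialing the percentage down immediately shrinks the share of requests reaching the new code path:

```python
import hashlib

# Hypothetical sketch of percentage-based rollout gating; not GitHub's actual system.
def feature_enabled(actor_id: str, rollout_percent: float, flag_name: str = "new-write-path") -> bool:
    """Return True if this actor falls inside the current rollout percentage."""
    digest = hashlib.sha256(f"{flag_name}:{actor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # stable bucket in [0.0, 100.0)
    return bucket < rollout_percent

# "Turning down the percentage" is the roll-back step described in the incident summary.
print(feature_enabled("user-42", 25.0))  # enabled for roughly 25% of actors
print(feature_enabled("user-42", 0.0))   # rolled back: disabled for everyone
```

Hashing the actor together with the flag name keeps each actor in a stable bucket, so the same user gets a consistent answer while the percentage is dialed up or down.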

Disruption with some GitHub services (Incident 26679334)
Oct 8, 00:05 UTC: Resolved - On October 7, 2025, between 7:48 PM UTC and October 8, 12:05 AM UTC (approximately 4 hours and 17 minutes), the audit log service was degraded, creating a backlog and delaying availability of new audit log events. The issue originated in a third-party dependency. We mitigated the incident by working with the vendor to identify and resolve the issue. Write operations recovered first, followed by the processing of the accumulated backlog of audit log events. We are working to improve our monitoring and alerting for audit log ingestion delays and strengthen our incident response procedures to reduce our time to detection and mitigation of issues like this one in the future.
Oct 7, 22:45 UTC: Update - We are seeing recovery of audit log ingestion and continue to monitor recovery.
Oct 7, 21:51 UTC: Update - We are seeing recovery of audit log ingestion and continue to monitor recovery.
Oct 7, 21:17 UTC: Update - We continue to apply mitigations and monitor for recovery.
Oct 7, 20:33 UTC: Update - We have identified an issue causing delayed audit log event ingestion and are working on a mitigation.
Oct 7, 19:48 UTC: Update - Ingestion of new audit log events is delayed
Oct 7, 19:48 UTC: Investigating - We are currently investigating this issue.

Incident with Copilot (Incident 26633369)
Oct 3, 03:47 UTC: Resolved - On October 3rd, between approximately 10:00 PM and 11:30 PM Eastern, the Copilot service experienced degradation due to an issue with our upstream provider. Users encountered elevated error rates when using the following Claude models: Claude Sonnet 3.7, Claude Opus 4, Claude Opus 4.1, Claude Sonnet 4, and Claude Sonnet 4.5. No other models were impacted. The issue was mitigated by temporarily disabling affected endpoints while our provider resolved the upstream issue. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.
Oct 3, 03:47 UTC: Update - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 3, 03:04 UTC: Update - The upstream provider is implementing a fix. Services are recovering. We are monitoring the situation.
Oct 3, 02:42 UTC: Update - We're seeing degraded experience across Anthropic models. We're working with our partners to restore service.
Oct 3, 02:41 UTC: Investigating - We are investigating reports of degraded performance for Copilot

Degraded Gemini 2.5 Pro experience in Copilot (Incident 26615316)
Oct 2, 22:33 UTC: Resolved - Between October 1st, 2025 at 1 AM UTC and October 2nd, 2025 at 10:33 PM UTC, the Copilot service experienced a degradation of the Gemini 2.5 Pro model due to an issue with our upstream provider. Before 15:53 UTC on October 1st, users experienced higher error rates with large context requests while using Gemini 2.5 Pro. After 15:53 UTC and until 10:33 PM UTC on October 2nd, requests were restricted to smaller context windows when using Gemini 2.5 Pro. No other models were impacted. The issue was resolved by a mitigation put in place by our provider. GitHub is collaborating with our provider to enhance communication and improve the ability to reproduce issues, with the aim of reducing resolution time.
Oct 2, 22:26 UTC: Update - We have confirmed that the fix for the lower token input limit for Gemini 2.5 Pro is in place and are currently testing our previous higher limit to verify that customers will experience no further impact.
Oct 2, 17:13 UTC: Update - The underlying issue for the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.
Oct 2, 02:52 UTC: Update - We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.
Oct 1, 18:16 UTC: Update - We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.
Oct 1, 17:37 UTC: Update - We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors for this sudden decrease in input limits.
Oct 1, 16:49 UTC: Update - We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. The maximum input length of Gemini 2.5 prompts has been decreased. Long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Oct 1, 16:43 UTC: Investigating - We are investigating reports of degraded performance for Copilot

Degraded Performance for GitHub Actions MacOS Runners (Incident 26610255)
Oct 1, 16:55 UTC: Resolved - On October 1, 2025 between 07:00 UTC and 17:20 UTC, Mac hosted runner capacity for Actions was degraded, leading to timed-out jobs and long queue times. On average, the error rate was 46% and peaked at 96% of requests to the service. XL and Intel runners recovered by 10:10 UTC, with the other types taking longer to recover. The degraded capacity was triggered by a scheduled event at 07:00 UTC that led to a permission failure on Mac runner hosts, blocking reimage operations. The permission issue was resolved by 9:41 UTC, but the recovery of available runners took longer than expected due to a combination of backoff logic slowing backend operations and some hosts needing state resets. We deployed changes immediately following the incident to address the scheduled event and ensure that similar failures will not block critical operations in the future. We are also working to reduce the end-to-end time for self-healing of offline hosts for quicker full recovery from future capacity or host events.
Oct 1, 16:27 UTC: Update - We are seeing some recovery for image queueing and continuing to monitor.
Oct 1, 14:41 UTC: Update - We are continuing work to restore capacity for our MacOS ARM runners.
Oct 1, 13:58 UTC: Update - Our team continues to work hard on restoring capacity for the Mac runners.
Oct 1, 13:12 UTC: Update - Work continues on restoring capacity on the Mac runners.
Oct 1, 12:32 UTC: Update - MacOS ARM runners continue to be at reduced capacity, causing queuing of jobs. Investigation is ongoing.
Oct 1, 11:51 UTC: Update - Work continues to bring the full runner capacity back online. Resources are focused on improving the recovery of certain runner types.
Oct 1, 11:11 UTC: Update - We are continuing to see recovery of some runner capacity and investigating slow recovery of certain runner types.
Oct 1, 10:30 UTC: Update - We are seeing recovery of some runner capacity, while also investigating slow recovery of certain runner types.
Oct 1, 09:44 UTC: Update - MacOS runners are coming back online and starting to process queued work.
Oct 1, 08:59 UTC: Update - We are continuing to deploy the necessary changes to restore MacOS runner capacity.
Oct 1, 08:27 UTC: Update - We have identified the cause and are deploying a change to restore MacOS runner capacity.
Oct 1, 08:17 UTC: Update - Customers using GitHub Actions MacOS runners are experiencing job start delays and failures. We are aware of this issue and actively investigating.
Oct 1, 08:09 UTC: Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 1, 07:59 UTC: Investigating - We are currently investigating this issue.

Disruption with Gemini 2.5 Pro and Gemini 2.0 Flash in Copilot (Incident 26592339)
Sep 29, 19:12 UTC: Resolved - On September 29, 2025, between 17:53 and 18:42 UTC, the Copilot service experienced a degradation of the Gemini 2.5 model due to an issue with our upstream provider. Approximately 24% of requests failed, affecting 56% of users during this period. No other models were impacted. GitHub notified the upstream provider of the problem as soon as it was detected. The issue was resolved after the upstream provider rolled back a recent change that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.
Sep 29, 19:12 UTC: Update - The upstream model provider has resolved the issue and we are seeing full availability for Gemini 2.5 Pro and Gemini 2.0 Flash.
Sep 29, 18:40 UTC: Update - We are experiencing degraded availability for the Gemini 2.5 Pro and Gemini 2.0 Flash models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Sep 29, 18:39 UTC: Investigating - We are currently investigating this issue.

Disruption with some GitHub services (Incident 26591278)
Sep 29, 17:33 UTC: Resolved - On September 29, 2025 between 16:26 UTC and 17:33 UTC the Copilot API experienced a partial degradation causing intermittent erroneous 404 responses for an average of 0.2% of GitHub MCP server requests, peaking at times around 2% of requests. The issue stemmed from an upgrade of an internal dependency which exposed a misconfiguration in the service. We resolved the incident by rolling back the upgrade to address the misconfiguration. We fixed the configuration issue and will improve documentation and the rollout process to prevent similar issues.
Sep 29, 17:28 UTC: Update - Customers are getting 404 responses when connecting to the GitHub MCP server. We have reverted a change we believe is contributing to the impact, and are seeing resolution in deployed environments.
Sep 29, 16:45 UTC: Investigating - We are currently investigating this issue.

Disruption with some GitHub services (Incident 26554560)
Sep 25, 17:36 UTC: Resolved - On September 26, 2025 between 16:22 UTC and 18:32 UTC raw file access was degraded for a small set of four repositories. On average, the raw file access error rate was 0.01% and peaked at 0.16% of requests. This was due to a caching bug exposed by excessive traffic to a handful of repositories. We mitigated the incident by resetting the state of the cache for raw file access and are working to improve cache usage and testing to prevent issues like this in the future.
Sep 25, 17:06 UTC: Update - We are seeing issues related to our ability to serve raw file access across a small percentage of our requests.
Sep 25, 17:00 UTC: Investigating - We are currently investigating this issue.

Disruption with some GitHub services (Incident 26542071)
Sep 24, 15:36 UTC: Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC and also on September 24, 2025 between 15:02 UTC and 15:12 UTC, email deliveries were delayed up to 50 minutes, which resulted in significant delays for most types of email notifications.
This occurred due to an unusually high volume of traffic which caused resource contention on some of our outbound email servers. We have updated the configuration we use to better allocate capacity when there is a high volume of traffic and are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.
Sep 24, 14:55 UTC: Update - We are seeing delays in email delivery, which is impacting notifications and user signup email verification. We are investigating and working on mitigation.
Sep 24, 14:46 UTC: Investigating - We are currently investigating this issue.

Claude Opus 4 is experiencing degraded performance (Incident 26538990)
Sep 24, 09:18 UTC: Resolved - On September 24th, 2025, between 08:02 UTC and 09:11 UTC the Copilot service was degraded for Claude Opus 4 and Claude Opus 4.1 requests. On average, 22% of requests failed for Claude Opus 4 and 80% of requests for Claude Opus 4.1. This was due to an upstream provider returning elevated errors on Claude Opus 4 and Opus 4.1. We mitigated the issue by directing users to select other models and by monitoring recovery. To resolve the issue, we are expanding failover capabilities by integrating with additional infrastructure providers.
Sep 24, 09:16 UTC: Update - Between around 8:16 UTC and 8:51 UTC we saw elevated errors on Claude Opus 4 and Opus 4.1, with up to 49% of requests failing. This has recovered to around 4% of requests failing, and we are monitoring recovery.
Sep 24, 09:08 UTC: Investigating - We are currently investigating this issue.

Incident with Copilot (Incident 26534607)
Sep 24, 00:26 UTC: Resolved - Between 20:06 UTC September 23 and 04:58 UTC September 24, 2025, the Copilot service experienced degraded availability for Claude Sonnet 4 and 3.7 model requests. During this period, 0.46% of Claude 4 requests and 7.83% of Claude 3.7 requests failed. The reduced availability resulted from Copilot disabling routing to an upstream provider that was experiencing issues and reallocating capacity to other providers to manage requests for Claude Sonnet 3.7 and 4. We are continuing to investigate the source of the issues with this provider and will provide an update as more information becomes available.
Sep 24, 00:26 UTC: Update - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 and Claude Sonnet 4 are once again available in Copilot Chat, VS Code and other Copilot products. We will continue monitoring to ensure stability, but mitigation is complete.
Sep 23, 22:22 UTC: Update - We are experiencing degraded availability for the Claude Sonnet 3.7 and Claude Sonnet 4 models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Sep 23, 22:22 UTC: Investigating - We are investigating reports of degraded performance for Copilot

Incident with Pages and Actions (Incident 26532100)
Sep 23, 17:41 UTC: Resolved - On September 23, between 17:11 and 17:40 UTC, customers experienced failures and delays when running workflows on GitHub Actions and building or deploying GitHub Pages. The issue was caused by a faulty configuration change that disrupted service-to-service communication in GitHub Actions. During this period, in-progress jobs were delayed and new jobs would not start due to a failure to acquire runners, and about 30% of all jobs failed. GitHub Pages users were unable to build or deploy their Pages during this period. The offending change was rolled back within 15 minutes of its deployment, after which Actions workflows and Pages deployments began to succeed. Actions customers continued to experience delays for about 15 minutes after the rollback was completed while services worked through the backlog of queued jobs.
We are planning to implement additional rollout checks to help detect and prevent similar issues in the future.
Sep 23, 17:33 UTC: Update - We are investigating delays in Actions Workflows.
Sep 23, 17:28 UTC: Investigating - We are investigating reports of degraded performance for Actions and Pages

Disruption with some GitHub services (Incident 26531614)
Sep 23, 17:40 UTC: Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC and also on September 24, 2025 between 14:02 UTC and 15:12 UTC, email deliveries were delayed up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic which caused resource contention on some of our outbound email servers. We have updated the configuration we use to better allocate capacity when there is a high volume of traffic and are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.
Sep 23, 16:50 UTC: Update - We're seeing delays related to outbound emails and are investigating.
Sep 23, 16:46 UTC: Investigating - We are currently investigating this issue.

Incident with Codespaces (Incident 26473036)
Sep 17, 17:55 UTC: Resolved - On September 17, 2025 between 13:23 and 16:51 UTC some users in West Europe experienced issues with Codespaces that had shut down due to network disconnections and subsequently failed to restart. Codespace creations and resumes were failed over to another region at 15:01 UTC. While many of the impacted instances self-recovered after mitigation efforts, approximately 2,000 codespaces remained stuck in a "shutting down" state while the team evaluated possible methods to recover unpushed data from the latest active session of affected codespaces. Unfortunately, recovery of that data was not possible. We unblocked shutdown of those codespaces, with all instances either shut down or available by 8:26 UTC on September 19. The disconnects were triggered by an exhaustion of resources in the network relay infrastructure in that region, but the lack of self-recovery was caused by an unhandled error impacting the local agent, which led to an unclean shutdown. We are improving the resilience of the local agent to disconnect events to ensure shutdown of codespaces is always clean without data loss. We have also addressed the exhausted resources in the network relay and will be investing in improved detection and resilience to reduce the impact of similar events in the future.
Sep 17, 17:55 UTC: Update - We have confirmed the original mitigation to failover has resolved the issue causing Codespaces to become unavailable.
We are evaluating if there is a path to recover unpushed data from the approximately 2000 Codespaces that are currently in the shutting down state. We will be resolving this incident and will detail the next steps in our public summary.
Sep 17, 16:51 UTC: Update - For Codespaces that were stuck in the shutting down state and have been resumed, we've identified an issue that is causing the contents of the Codespace to be irrecoverably lost, which has impacted approximately 250 Codespaces. We are actively working on a mitigation to prevent any more Codespaces currently in this state from being forced to shut down, to prevent the potential data loss.
Sep 17, 16:07 UTC: Update - We're continuing to see improvement with Codespaces that were stuck in the shutting down state and we anticipate the remaining should self-resolve in about an hour.
Sep 17, 15:31 UTC: Update - Some users with Codespaces in West Europe were unable to connect to Codespaces. We have failed over that region and users should be able to create new Codespaces. If a user has a Codespace in a shutting down state, we are still investigating potential fixes and mitigations.
Sep 17, 15:04 UTC: Investigating - We are investigating reports of degraded performance for Codespaces

Unauthenticated LFS requests for public repos are returning unexpected 401 errors (Incident 26462594)
Sep 16, 18:30 UTC: Resolved - Between 16:26 UTC on September 15th and 18:30 UTC on September 16th, anonymous REST API calls to approximately 20 endpoints were incorrectly rejected because they were not authenticated. While this caused unauthenticated requests to be rejected by these endpoints, all authenticated requests were unaffected, and no protected endpoints were exposed. This resulted in 100% of requests to these endpoints failing at peak, representing less than 0.1% of GitHub's overall request volume. On average, the error rate for these endpoints was less than 50% and peaked at 100% for about 26 hours over September 16th. API requests to the impacted endpoints were rejected with a 401 error code. This was due to a mismatch in authentication policies, for specific endpoints, during a system migration. The failure to detect the errors was the result of the issue occurring for a low percentage of traffic. We mitigated the incident by reverting the policy in question, and correcting the logic associated with the degraded endpoints. We are working to improve our test suite to further validate mismatches, and refining our monitors for proactive detection.
Sep 16, 18:29 UTC: Update - We have mitigated the issue and are monitoring the results
Sep 16, 18:02 UTC: Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Sep 16, 17:55 UTC: Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for LFS requests. We're in the process of fixing the change, but in the interim retrying should eventually succeed.
Sep 16, 17:55 UTC: Investigating - We are currently investigating this issue.

Creating GitHub apps using the REST API will fail with a 401 error (Incident 26462194)
Sep 16, 17:45 UTC: Resolved - Between 16:26 UTC on September 15th and 18:30 UTC on September 16th, anonymous REST API calls to approximately 20 endpoints were incorrectly rejected because they were not authenticated. While this caused unauthenticated requests to be rejected by these endpoints, all authenticated requests were unaffected, and no protected endpoints were exposed. This resulted in 100% of requests to these endpoints failing at peak, representing less than 0.1% of GitHub's overall request volume. On average, the error rate for these endpoints was less than 50% and peaked at 100% for about 26 hours over September 16th. API requests to the impacted endpoints were rejected with a 401 error code. This was due to a mismatch in authentication policies, for specific endpoints, during a system migration. The failure to detect the errors was the result of the issue occurring for a low percentage of traffic. We mitigated the incident by reverting the policy in question, and correcting the logic associated with the degraded endpoints. We are working to improve our test suite to further validate mismatches, and refining our monitors for proactive detection.
Sep 16, 17:27 UTC: Update - We have mitigated the issue and are monitoring the results
Sep 16, 17:15 UTC: Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for creating GitHub apps. We're in the process of fixing the change, but in the interim retrying should eventually succeed.
Sep 16, 17:14 UTC: Investigating - We are currently investigating this issue.
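
Both of these 401 incidents advised that, while the routing fix rolled out, retrying the anonymous request should eventually succeed. A minimal sketch of that guidance (hypothetical client code, not an official GitHub example; it assumes the third-party requests package is installed) retries an unauthenticated call with exponential backoff:

```python
import time

import requests  # third-party HTTP client, assumed to be installed


def get_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0) -> requests.Response:
    """Retry an anonymous GET that is transiently rejected, backing off exponentially."""
    response = None
    for attempt in range(attempts):
        response = requests.get(url, timeout=10)
        # During the incident window, 401 on these anonymous routes was a transient
        # mis-routing rather than a real authentication failure, so it is retried here.
        if response.status_code not in (401, 500, 502, 503):
            return response
        time.sleep(base_delay * (2 ** attempt))
    return response


# Hypothetical example of an anonymous REST API call:
resp = get_with_retry("https://api.github.com/repos/octocat/Hello-World")
print(resp.status_code)
```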