AWS status (made simple)

Current status - Jan 27, 2020 PST

North America

All services are operating normally

South America

All services are operating normally

Europe

All services are operating normally

Asia Pacific

All services are operating normally

Service interruptions for the past week

North America

Amazon Route 53 Jan 24, 2020 PST [RESOLVED] Route 53 DNS Change Issues

12:50 PM PST We are investigating increased propagation times of DNS edits to the Route 53 DNS servers. Queries to existing DNS records are not affected by this issue.
1:21 PM PST We are still investigating increased propagation times of DNS edits to the Route 53 DNS servers. This will affect provisioning of new resources that rely on Route 53 for DNS, such as EFS and PrivateLink. Queries to existing DNS records are not affected by this issue.
1:38 PM PST We are still investigating increased propagation times of DNS edits to the Route 53 DNS servers. To help accelerate recovery, the Route 53 API is temporarily not accepting MakeChange or CreateHostedZone requests. This will also affect provisioning of new resources that rely on Route 53 for DNS, such as EFS, PrivateLink, Amazon MQ, Amazon Managed Streaming for Apache Kafka and API Gateway. Queries to existing DNS records are not affected by this issue.
2:58 PM PST We are still investigating increased propagation times of DNS edits to the Route 53 DNS servers. To help accelerate recovery, the Route 53 API is temporarily not accepting MakeChange or CreateHostedZone requests. Queries to existing DNS records are not affected by this issue. This will also affect provisioning of new resources that rely on Route 53 for DNS, such as EFS, PrivateLink, Amazon MQ, Amazon Managed Streaming for Apache Kafka, API Gateway, DocumentDB, FSx for Lustre, Certificate Manager, Transfer for SFTP, EKS, CloudFormation and Chime Voice Connector.
3:08 PM PST We have identified the root cause of the increased propagation times of DNS edits to the Route 53 DNS servers. The Route 53 API is temporarily not accepting MakeChange or CreateHostedZone requests in order to help accelerate recovery. Queries to existing DNS records are not affected by this issue. This will also affect provisioning of new resources that rely on Route 53 for DNS, such as EFS, PrivateLink, Amazon MQ, Amazon Managed Streaming for Apache Kafka, API Gateway, DocumentDB, FSx for Lustre, Certificate Manager, Transfer for SFTP, EKS, CloudFormation, Chime Voice Connector and Global Accelerator.
3:45 PM PST We have identified the root cause of the increased propagation times of DNS edits to the Route 53 DNS servers and are working towards recovery. The Route 53 API is now accepting changes again, though these changes are still experiencing propagation delays as there is a significant backlog of changes to process. Queries to existing DNS records are not affected by this issue. This will also affect provisioning of new resources that rely on Route 53 for DNS, such as EFS, PrivateLink, Amazon MQ, Amazon Managed Streaming for Apache Kafka, API Gateway, DocumentDB, FSx for Lustre, Certificate Manager, Transfer for SFTP, EKS, CloudFormation, Chime Voice Connector, Global Accelerator, RDS, SageMaker Ground Truth, Amazon Managed Blockchain and Directory Service.
5:21 PM PST Between 12:07 PM and 5:15 PM PST, customers experienced delays propagating changes submitted to the Route 53 API, as well as increased API error rates from 1:55 PM until 3:20 PM. This also affected provisioning of new resources that rely on Route 53 DNS, such as EFS, PrivateLink, Amazon MQ, Amazon Managed Streaming for Apache Kafka, API Gateway, DocumentDB, FSx for Lustre, Certificate Manager, Transfer for SFTP, EKS, CloudFormation, Chime Voice Connector, Global Accelerator, RDS, SageMaker Ground Truth, Amazon Managed Blockchain, Directory Service and Elastic Inference. The Route 53 API is now operating normally, and all changes that were accepted by the Route 53 API have been propagated. Queries for all existing records were answered normally during this time.
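If you were waiting on a stalled DNS change during an event like this, the sketch below shows one way to check propagation yourself. It is a minimal example assuming boto3 with valid AWS credentials: ChangeResourceRecordSets is the public boto3 call for submitting a record change, and GetChange reports whether the change has reached INSYNC (fully propagated). The hosted zone ID, record name and IP address are placeholders.

```python
# Minimal sketch: submit a Route 53 record change and poll until it propagates.
# Assumes boto3 credentials are configured; zone ID and record name are placeholders.
import time
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1EXAMPLE"          # hypothetical hosted zone
RECORD_NAME = "api.example.com."      # hypothetical record

def upsert_record(ip_address):
    """Submit an UPSERT and return the change ID that Route 53 assigns."""
    response = route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip_address}],
                },
            }]
        },
    )
    return response["ChangeInfo"]["Id"]

def wait_until_insync(change_id, poll_seconds=10, timeout_seconds=900):
    """Poll GetChange until the change reports INSYNC (propagated) or we time out."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = route53.get_change(Id=change_id)["ChangeInfo"]["Status"]
        if status == "INSYNC":
            return True
        time.sleep(poll_seconds)
    return False  # still PENDING; propagation is delayed, as in the event above

if __name__ == "__main__":
    change_id = upsert_record("203.0.113.10")
    print("propagated" if wait_until_insync(change_id) else "still pending")
```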

South America

Amazon Route 53 Jan 24, 2020 PST [RESOLVED] Route 53 DNS Change Issues

Same global incident as above; see the full timeline under North America.

Europe

Amazon Route 53 Jan 24, 2020 PST [RESOLVED] Route 53 DNS Change Issues

Same global incident as above; see the full timeline under North America.

Amazon Elastic Compute Cloud (Paris) Jan 22, 2020 PST [RESOLVED] Network Connectivity

12:04 PM PST Between 10:00 AM and 11:28 AM PST, we experienced network connectivity issues affecting EC2 instances in a single Availability Zone in the EU-WEST-3 Region. Instances in the affected Availability Zone were able to connect to the Internet but were unable to resolve DNS records during this time. New instance launches into the affected Availability Zone were also affected by the event. The issue has been resolved and the service is operating normally.
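A quick way to confirm the symptom described above (Internet reachable, DNS resolution failing) from inside an instance is sketched below. It is a minimal stdlib-only example; the hostname and literal IP address are placeholders.

```python
# Minimal sketch: distinguish a DNS-resolution failure from a loss of raw
# connectivity, the symptom described above. Hostname and IP are placeholders.
import socket

TEST_HOSTNAME = "example.com"   # hypothetical name to resolve
TEST_IP = "93.184.216.34"       # a literal IP to reach without DNS

def dns_resolves(hostname):
    """True if the system resolver can turn the name into an address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

def ip_reachable(ip, port=443, timeout=3):
    """True if a TCP connection to a literal IP succeeds (no DNS involved)."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # During the event above, the second check would pass while the first failed.
    print("DNS resolution:", dns_resolves(TEST_HOSTNAME))
    print("Direct IP connectivity:", ip_reachable(TEST_IP))
```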

AWS Internet Connectivity (Paris) Jan 22, 2020 PST [RESOLVED] Network Connectivity

10:25 AM PST We are investigating an issue which is affecting internet connectivity to a single Availability Zone in the EU-WEST-3 Region.
11:05 AM PST We have identified the root cause of the issue that is affecting connectivity to a single Availability Zone in the EU-WEST-3 Region and continue to work towards resolution.
11:45 AM PST Between 10:00 AM and 11:28 AM PST, we experienced an issue affecting network connectivity to AWS services in a single Availability Zone in the EU-WEST-3 Region. The issue has been resolved and connectivity has been restored.

Asia Pacific

Amazon Route 53 Jan 24, 2020 PST [RESOLVED] Route 53 DNS Change Issues

Same global incident as above; see the full timeline under North America.

Amazon WorkSpaces (Sydney) Jan 22, 2020 PST [RESOLVED] Increased API Error Rates

7:31 PM PST We are investigating increased Amazon WorkSpaces API error rates and provisioning times for Amazon WorkSpaces in the AP-SOUTHEAST-2 Region.
8:39 PM PST We have identified the cause of increased Amazon WorkSpaces API error rates and provisioning times for Amazon WorkSpaces in the AP-SOUTHEAST-2 Region and continue working towards resolution.
9:02 PM PST We continue to experience increased API error rates and provisioning times for Amazon WorkSpaces due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. Existing Amazon WorkSpaces sessions will continue to operate.
11:32 PM PST Between 4:04 PM and 10:55 PM PST, we experienced increased Amazon WorkSpaces API error rates and provisioning times due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is now operating normally.

AWS Lambda (Sydney) Jan 22, 2020 PST [RESOLVED] Increased API Error Rates

7:17 PM PST We can confirm increased API error rates in the AP-SOUTHEAST-2 Region for functions that are configured with VPC settings. Functions that are not configured with VPC settings are unaffected.
9:02 PM PST We continue to experience increased API error rates in the AP-SOUTHEAST-2 Region for functions that are configured with VPC settings due to the issue affecting EC2 in the AP-SOUTHEAST-2 Region. We continue to work towards full resolution.
11:31 PM PST Between 4:50 PM and 11:00 PM PST, we experienced increased API error rates for functions due to an issue affecting EC2 in the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally.
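Since only VPC-configured functions were affected, it can help to know which of your functions those are. Below is a minimal boto3 sketch, assuming configured credentials, that lists the functions in a region whose configuration includes a VPC attachment; the region is a placeholder.

```python
# Minimal sketch: list which Lambda functions in a region are VPC-configured,
# since only those were affected by the event above. Region is a placeholder,
# and the VpcConfig field may be absent on non-VPC functions.
import boto3

lam = boto3.client("lambda", region_name="ap-southeast-2")

def vpc_configured_functions():
    affected = []
    paginator = lam.get_paginator("list_functions")
    for page in paginator.paginate():
        for fn in page["Functions"]:
            vpc = fn.get("VpcConfig") or {}
            if vpc.get("VpcId"):            # non-empty VpcId means VPC-attached
                affected.append(fn["FunctionName"])
    return affected

if __name__ == "__main__":
    for name in vpc_configured_functions():
        print(name)
```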

Amazon ElastiCache (Sydney) Jan 22, 2020 PST [RESOLVED] Increased API Error Rates

7:01 PM PST We are experiencing increased latencies while provisioning new ElastiCache nodes and elevated API error rates in the AP-SOUTHEAST-2 AWS Region. Existing ElastiCache clusters are not impacted and are continuing to serve traffic. We are working to resolve the issue.
9:32 PM PST We continue to experience increased latencies for ElastiCache cluster creation, modification and deletion operations, and elevated API error rates due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. We continue to work towards full resolution. Existing clusters are operating normally.
11:55 PM PST Between 4:09 PM and 11:44 PM PST, we experienced increased latencies for ElastiCache cluster creation, modification and deletion operations, and elevated API error rates due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally.

Amazon AppStream 2.0 (Sydney) Jan 22, 2020 PST [RESOLVED] Increased Instance Provisioning Error Rates

6:08 PM PST We are currently experiencing an issue provisioning new image builder and fleet streaming instances in the AP-SOUTHEAST-2 Region.
7:20 PM PST We are continuing to investigate an increase in instance provisioning error rates in the AP-SOUTHEAST-2 Region.
8:32 PM PST We have identified the cause of the increased provisioning error rates in the AP-SOUTHEAST-2 Region and continue working towards resolution.
8:59 PM PST We continue to experience increased instance provisioning error rates due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. We continue to work towards full resolution. Existing streaming sessions and instances will continue to operate.
Jan 23, 12:41 AM PST We continue to experience increased instance provisioning error rates within the AP-SOUTHEAST-2 Region. We continue to work towards full resolution. Existing streaming sessions and instances will continue to operate.
Jan 23, 1:51 AM PST We are continuing to work towards resolution of increased instance provisioning error rates within the AP-SOUTHEAST-2 Region. Existing streaming sessions and instances will continue to operate.
Jan 23, 2:38 AM PST We recently experienced increased instance provisioning errors within the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally.

Amazon Elastic Load Balancing (Sydney) Jan 22, 2020 PST [RESOLVED] Increased Provisioning Latencies

5:33 PM PST We are investigating increased provisioning times and ELB API error rates for load balancers in the AP-SOUTHEAST-2 Region. Connectivity to existing load balancers is not affected.
6:13 PM PST We can confirm increased provisioning/scaling latencies and ELB API error rates for load balancers in the AP-SOUTHEAST-2 Region and continue to work towards resolution. Traffic remains unaffected on running load balancers.
7:23 PM PST We are continuing to work towards resolution of increased provisioning/scaling latencies and ELB API error rates for load balancers in the AP-SOUTHEAST-2 Region. Traffic remains unaffected on running load balancers.
8:59 PM PST We continue to experience increased provisioning/scaling latencies and ELB API error rates for load balancers due to the issue affecting EC2 in the AP-SOUTHEAST-2 Region. We continue to work towards full resolution. Traffic remains unaffected on running load balancers.
Jan 23, 12:22 AM PST Between 4:10 PM and 11:40 PM PST, we experienced increased provisioning/scaling latencies and ELB API error rates for load balancers due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally.
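For reference, here is a minimal boto3 sketch (credentials, region and the load balancer ARN are placeholders) of how to check whether a newly created load balancer has finished provisioning, the stage that was delayed in this event.

```python
# Minimal sketch: confirm that a newly provisioned Application/Network Load
# Balancer has finished provisioning. ARN and region are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-southeast-2")
LB_ARN = "arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:loadbalancer/app/example/0123456789abcdef"

state = elbv2.describe_load_balancers(LoadBalancerArns=[LB_ARN])["LoadBalancers"][0]["State"]
print(state["Code"])   # "provisioning" while being created, "active" when ready

# Or block until it is ready using the built-in waiter:
elbv2.get_waiter("load_balancer_available").wait(LoadBalancerArns=[LB_ARN])
```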

Amazon Relational Database Service (Sydney) Jan 22, 2020 PST [RESOLVED] Increased API Error Rates

5:29 PM PST We are investigating increased API error rates and latencies in the AP-SOUTHEAST-2 Region.
6:20 PM PST We can confirm increased API error rates and latencies in the AP-SOUTHEAST-2 Region and continue to work towards resolution. Connectivity to existing instances remains unaffected.
7:35 PM PST We are continuing to work towards resolution of increased API error rates and latencies in the AP-SOUTHEAST-2 Region. Connectivity to existing instances remains unaffected.
9:00 PM PST We continue to experience increased API error rates and latencies due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. We continue to work towards full resolution.
11:38 PM PST Between 4:41 PM and 11:35 PM PST, we experienced increased API error rates and latencies due to the issue affecting EC2 within the AP-SOUTHEAST-2 Region. The issue has been resolved and the service is operating normally.

Amazon Elastic Compute Cloud (Sydney) Jan 22, 2020 PST [RESOLVED] Increased API Error Rates

4:41 PM PST We are investigating increased API error rates and latencies in the AP-SOUTHEAST-2 Region. Connectivity to existing instances is not impacted.
5:18 PM PST We have identified the root cause of the issue causing increased API error rates and latencies in the AP-SOUTHEAST-2 Region and continue working towards resolution. This issue mainly affects EC2 RunInstances and VPC related API requests. Customers using the EC2 Management Console will also experience error rates for instance and network related functions. Connectivity to existing instances remains unaffected.
6:25 PM PST We continue to experience increased API error rates for the EC2 APIs in the AP-SOUTHEAST-2 Region. We have confirmed the root cause, and are working on multiple paths toward recovering the subsystem that is impaired, which is responsible for networking related API calls. This issue mainly affects EC2 RunInstances and VPC related API requests. Customers using the EC2 Management Console may experience errors when describing resources, as well as when making mutating API requests. Connectivity to existing instances in the AP-SOUTHEAST-2 Region remains unaffected.
8:49 PM PST We wanted to provide you with more details on the issue causing increased API error rates and latencies in the AP-SOUTHEAST-2 Region. A data store used by a subsystem responsible for the configuration of Virtual Private Cloud (VPC) networks is currently offline and the engineering team is working to restore it. While the investigation into the issue was started immediately, it took us longer to understand the full extent of the issue and determine a path to recovery. We determined that the data store needed to be restored to a point before the issue began. In order to do this restore, we needed to disable writes. Error rates and latencies for the networking-related APIs will continue until the restore has been completed and writes re-enabled. We are working through the recovery process now. With issues like this, it is always difficult to provide an accurate ETA, but we expect to complete the restore process within the next 2 hours and begin to allow API requests to proceed once again. We will continue to keep you updated if that ETA changes. Connectivity to existing instances is not impacted. Also, launch requests that refer to regional objects like subnets that already exist will succeed at this stage, as they do not depend on the affected subsystem. If you know the subnet ID, you can use that to launch instances within the region (see the launch sketch after this timeline). We apologize for the impact and continue to work towards full resolution.
10:10 PM PST We continue to make steady progress towards the restoration of the affected data store and are currently within the 2-hour ETA published above.
10:55 PM PST We have completed the restoration of the affected data store but are still working towards re-enabling writes. We have seen an improvement in successful launches over the last 20 minutes and expect that to continue as we work towards full recovery.
11:45 PM PST We can confirm that all error rates and latencies have returned to normal levels. The issue has been resolved and the service is operating normally.
Jan 23, 12:30 AM PST Now that we are fully recovered, we wanted to provide a brief summary of the issue. Starting at 4:07 PM PST, customers began to experience increased error rates and latencies for the network-related APIs in the AP-SOUTHEAST-2 Region. Launches of new EC2 instances also experienced increased failure rates as a result of this issue. Connectivity to existing instances was not affected by this event. We immediately began investigating the root cause and identified that the data store used by the subsystem responsible for the Virtual Private Cloud (VPC) regional state was impaired. While the investigation into the issue was started immediately, it took us longer to understand the full extent of the issue and determine a path to recovery. We determined that the data store needed to be restored to a point before the issue began. We began the data store restoration process, which took a few hours, and by 10:50 PM PST we had fully restored the primary node in the affected data store. At this stage, we began to see recovery in instance launches within the AP-SOUTHEAST-2 Region, restoring many customer applications and services to a healthy state. We continued to bring the data store back to a fully operational state and by 11:20 PM PST, all API error rates and latencies had fully recovered. Other AWS services, including AppStream, Elastic Load Balancing, ElastiCache, Relational Database Service, Amazon WorkSpaces and Lambda, were also affected by this event. We apologize for any inconvenience this event may have caused, as we know how critical our services are to our customers. We are never satisfied with anything less than perfect operational performance, and we will do everything we can to learn from this event and drive improvement across our services.
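The 8:49 PM PST update above suggests launching into an already-existing subnet by ID as a workaround. Here is a minimal boto3 sketch of that, assuming configured credentials; the AMI ID, subnet ID and key pair name are placeholders.

```python
# Minimal sketch of the workaround described at 8:49 PM PST: launch into a
# known, already-existing subnet by ID so the request does not depend on the
# impaired networking subsystem. AMI, subnet and key name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical AMI ID
    InstanceType="t3.micro",
    SubnetId="subnet-0123456789abcdef0",  # pre-existing subnet, known in advance
    KeyName="my-key",                     # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```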

Why this site? Because it is frustrating to scroll through the endless AWS status page looking for issues in the AWS infrastructure. This site shows only the issues; everything else can be assumed to be operating normally.

Warning: This site is not maintained by or affiliated with Amazon in any way. Information is scraped from the AWS status site, and the data shown is not guaranteed to be accurate or current. Last Update: 01/27/2020 20:05 PST