AWS status (made simple)

Current status - Aug 24, 2019 PDT

North America

All services are operating normally

South America

All services are operating normally

Europe

All services are operating normally

Asia Pacific

All services are operating normally

Service interruptions for the past week

North America

Amazon Elastic Compute Cloud (N. Virginia) Aug 19, 2019 PDT [RESOLVED] Increased API Error Rates and Latencies

6:26 PM PDT We are investigating increased API error rates and latencies for the EC2 APIs in the US-EAST-1 Region.
6:48 PM PDT Between 6:00 PM and 6:41 PM PDT we experienced increased API error rates and latencies for the EC2 APIs in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.
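
If your own tooling was hit by elevated EC2 API error rates like these, the usual client-side mitigation is to retry with backoff. A minimal sketch, assuming boto3 and treating the attempt count as an illustrative choice rather than AWS guidance:

    import boto3
    from botocore.config import Config

    # Raise botocore's built-in retry limit; retries use exponential backoff.
    retry_config = Config(
        region_name="us-east-1",
        retries={"max_attempts": 10},  # illustrative value, not a recommendation
    )

    ec2 = boto3.client("ec2", config=retry_config)

    # Throttling and transient 5xx errors are retried automatically by botocore.
    response = ec2.describe_instances(MaxResults=5)
    for reservation in response.get("Reservations", []):
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])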

Amazon Connect (N. Virginia) Aug 19, 2019 PDT [RESOLVED] Increased Call Failures

10:29 AM PDT We are investigating call failures and issues accessing Amazon Connect in the US-EAST-1 Region.
10:58 AM PDT Call handling has recovered for agents who can access Amazon Connect in the US-EAST-1 Region. We continue to investigate problems accessing the Amazon Connect Console.
11:46 AM PDT Between 10:02 AM and 11:13 AM PDT, some Amazon Connect users experienced issues logging in or performing actions in the Connect application in the US-EAST-1 Region. Some calls may have failed during this time. The issue has been resolved and the service is operating normally.

South America

All services were operating normally

Europe

All services were operating normally

Asia Pacific

Amazon Relational Database Service (Tokyo) Aug 22, 2019 PDT [RESOLVED] Instance Availability

10:22 PM PDT We are investigating connectivity issues affecting some instances in a single Availability Zone in the AP-NORTHEAST-1 Region.
11:25 PM PDT We have identified the root cause of instance connectivity issues within a single Availability Zone in the AP-NORTHEAST-1 Region and are working toward recovery.
Aug 23, 12:01 AM PDT We are starting to see recovery for instance connectivity issues within a single Availability Zone in the AP-NORTHEAST-1 Region. We continue to work towards recovery for all affected instances.
Aug 23, 2:16 AM PDT We continue to see recovery for instance connectivity issues within a single Availability Zone in the AP-NORTHEAST-1 Region and are working towards recovery for all affected instances.
Aug 23, 6:19 AM PDT Between August 22 8:36 PM and August 23 6:05 AM PDT, some RDS instances experienced connectivity issues within a single Availability Zone in the AP-NORTHEAST-1 Region. The issue has been resolved and the service is operating normally.
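
Single-AZ incidents like this one are what Multi-AZ RDS deployments are designed to ride out. As a minimal, read-only sketch (the region matches the incident; nothing else here comes from the status updates), this lists which instances could fail over away from a troubled Availability Zone:

    import boto3

    rds = boto3.client("rds", region_name="ap-northeast-1")

    # Walk every DB instance in the region and report its AZ and Multi-AZ flag.
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            mode = "Multi-AZ" if db["MultiAZ"] else "single-AZ"
            print(db["DBInstanceIdentifier"], db["AvailabilityZone"], mode)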

Amazon Elastic Compute Cloud (Tokyo) Aug 22, 2019 PDT [RESOLVED] Instance Availability

9:18 PM PDT We are investigating connectivity issues affecting some instances in a single Availability Zone in the AP-NORTHEAST-1 Region.
9:47 PM PDT We can confirm that some instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the AP-NORTHEAST-1 Region. Some EC2 APIs are also experiencing increased error rates and latencies. We are working to resolve the issue.
10:27 PM PDT We have identified the root cause and are working toward recovery for the instance impairments and degraded EBS volume performance within a single Availability Zone in the AP-NORTHEAST-1 Region.
11:40 PM PDT We are starting to see recovery for instance impairments and degraded EBS volume performance within a single Availability Zone in the AP-NORTHEAST-1 Region. We continue to work towards recovery for all affected instances and EBS volumes.
Aug 23, 1:54 AM PDT Recovery is in progress for instance impairments and degraded EBS volume performance within a single Availability Zone in the AP-NORTHEAST-1 Region. We continue to work towards recovery for all affected instances and EBS volumes.
Aug 23, 2:39 AM PDT The majority of impaired EC2 instances and EBS volumes experiencing degraded performance have now recovered. We continue to work on recovery for the remaining EC2 instances and EBS volumes that are affected by this issue. This issue affects EC2 instances and EBS volumes in a single Availability Zone in the AP-NORTHEAST-1 region.
Aug 23, 4:18 AM PDT Beginning at 8:36 PM PDT, a small percentage of EC2 servers in a single Availability Zone in the AP-NORTHEAST-1 Region shut down due to overheating. This resulted in impaired EC2 instances and degraded EBS volume performance for resources in the affected area of the Availability Zone. The overheating was caused by a control system failure that caused multiple, redundant cooling systems to fail in parts of the affected Availability Zone. The chillers were restored at 11:21 PM PDT and temperatures in the affected areas began to return to normal. As temperatures returned to normal, power was restored to the affected instances. By 2:30 AM PDT, the vast majority of instances and volumes had recovered. We have been working to recover the remaining instances and volumes. A small number of remaining instances and volumes are hosted on hardware which was adversely affected by the loss of power. We continue to work to recover all affected instances and volumes. For immediate recovery, we recommend replacing any remaining affected instances or volumes if possible. Some of the affected instances may require action from customers and we will be reaching out to those customers with next steps.
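
The final update recommends replacing any remaining affected instances or volumes. For an EBS-backed EC2 instance, a stop/start cycle (not a reboot, which keeps the instance on the same host) moves it to healthy hardware. A hedged sketch of that idea, which cycles running instances whose system status check reports impaired; do not run something like this blindly against production:

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-northeast-1")

    # Sketch only: no pagination, no dry-run, no confirmation prompts.
    statuses = ec2.describe_instance_status(IncludeAllInstances=True)
    for status in statuses["InstanceStatuses"]:
        instance_id = status["InstanceId"]
        impaired = status["SystemStatus"]["Status"] == "impaired"
        if impaired and status["InstanceState"]["Name"] == "running":
            print(instance_id, "failing system status check; stop/start cycling")
            ec2.stop_instances(InstanceIds=[instance_id])
            ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
            ec2.start_instances(InstanceIds=[instance_id])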

Why this site? Because scrolling the endless AWS status page looking for issues anywhere in the AWS infrastructure is frustrating. This site shows only the issues; anything not listed is operating normally.
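
The scraping itself can stay small. At the time, the AWS status page exposed per-service RSS feeds; the sketch below assumes one such feed URL (swap in whichever service and region you follow) and prints only the entries not yet marked resolved:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed feed path; the status page published one RSS feed per service/region.
    FEED_URL = "https://status.aws.amazon.com/rss/ec2-us-east-1.rss"

    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)

    # Each <item> is one status update; skip anything already marked resolved.
    for item in tree.iterfind(".//item"):
        title = item.findtext("title", default="")
        published = item.findtext("pubDate", default="")
        if "[RESOLVED]" not in title:
            print(published, "-", title)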

Warning: This site is not maintained by or affiliated with Amazon in any way. Information is scraped from the AWS status site, and the data shown is not guaranteed to be accurate or current. Last Update: 08/24/2019 14:09 PDT