Resolved -
We have fully processed the accumulated backlog on C102, and all queries and activity on C102 are functioning normally. If you continue to experience any issues, please contact Support.
Mar 17, 12:21 PDT
Monitoring -
Engineering has restored C102 to a "green" state, and ingestion and API calls are succeeding again on this cluster. We will continue to monitor until the accumulated backlog is fully processed. Next update at 1:00 PM PDT or sooner.
Mar 17, 12:00 PDT
Update -
Engineering is continuing to investigate this issue. At this time there is no improvement yet for customers on C102. Next update at 12:15 PDT or sooner.
Mar 17, 11:43 PDT
Investigating -
We are currently investigating an issue with one of our ES clusters, C102. The cluster is in a red state, leading to very slow ingestion and widespread API 500 errors.
Customers are experiencing failed searches and errors on multiple API endpoints. Ingestion rates are near zero, with lag of up to 20 minutes and increasing across the internal, realtime, and bulk data pipelines.
Mar 17, 11:07 PDT