Ingestion delay across multiple clusters
Incident Report for Iterable
Resolved
This incident has been resolved.
Posted Nov 03, 2022 - 12:14 PDT
Update
We are continuing to monitor for any further issues.
Posted Nov 02, 2022 - 19:43 PDT
Monitoring
Our engineering team is currently working through suggested fixes to fully alleviate the lag. Previously backed-up topics have returned to normal for all but a few customers. Next update at 7:41 PDT or sooner.
Posted Nov 02, 2022 - 18:48 PDT
Update
Our engineering team has taken steps to alleviate ingestion lag for some clusters while continuing to work with a downstream vendor and to investigate the root cause. Customers may continue to experience delays in the previously stated areas. Next update at 6:41 PDT or sooner.
Posted Nov 02, 2022 - 17:40 PDT
Update
Our engineering team has taken steps to alleviate ingestion lag for some clusters while continuing to work with a downstream vendor. Customers may continue to experience delays in the previously stated areas. Next update at 5:41 PDT or sooner.
Posted Nov 02, 2022 - 16:45 PDT
Identified
Our engineering team is working with a downstream vendor on the next steps. Customers will continue to experience ingestion delays in the previously stated areas. Next update at 4:41 PDT or sooner.
Posted Nov 02, 2022 - 15:47 PDT
Investigating
Multiple clusters within the Iterable platform are experiencing data ingestion delays. This includes clusters c12, c24, c6, and c22. Data is not being dropped, only delayed. This impacts the following areas: list uploads, user updates, webhooks, custom event calls, and event triggers. Our engineering team has identified the issue and is taking steps to mitigate the delays and improve processing. Next update at 3:41 PDT or sooner.
Posted Nov 02, 2022 - 14:47 PDT
This incident affected: Global System Webhooks, Cluster 6 (User Updates, List Uploads), Cluster 13 (User Updates, List Uploads), and Cluster 18 (User Updates, List Uploads).