Message Processing Delay
Incident Report for Redox Engine
Postmortem

Executive Summary

Root Cause

One of our internal routes between our firewall and interface relied on a DNS entry, and the DNS service was experiencing significant latency; as a result, message processing along that route was delayed.

Impact on Customers

Approximately 150 Redox customers were affected by this latency, beginning at 08:33 AM CT and ending at 10:27 AM CT. During this time, processing was slow but not halted.

What Happened?

We were alerted to the system latency at approximately 08:40 AM CT, which began our investigation. We soon discovered that one of our internal routes between our firewall and interface relied on a DNS entry, and that the DNS service was experiencing significant latency, slowing message processing along that route. We replaced the DNS entry with the actual hostname to resolve the issue.
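For context on the diagnosis, DNS-related latency of this kind typically becomes visible when the name lookup is timed separately from the connection itself. The sketch below is illustrative only; the hostname and port are placeholders, not Redox's internal route configuration.

```python
import socket
import time

def time_route(hostname: str, port: int = 443) -> None:
    """Time the DNS lookup and the TCP connect separately, so slow
    name resolution stands out from slow network connects."""
    # Time the DNS lookup on its own. When the resolver is healthy this
    # is usually a few milliseconds; resolver latency like the incident
    # described above would show up here rather than in the connect step.
    start = time.monotonic()
    addrinfo = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    dns_ms = (time.monotonic() - start) * 1000

    # Connect to the already-resolved address so no further lookup is needed.
    resolved_ip = addrinfo[0][4][0]
    start = time.monotonic()
    with socket.create_connection((resolved_ip, port), timeout=5):
        connect_ms = (time.monotonic() - start) * 1000

    print(f"{hostname}: DNS {dns_ms:.1f} ms, TCP connect {connect_ms:.1f} ms")

if __name__ == "__main__":
    # Placeholder hostname for illustration; substitute the internal
    # route's endpoint in a real check.
    time_route("example.com")
```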

Learnings / Follow-ups

Redox engineers are determining how we can be alerted when the system detects that customer/CHO feeds are backing up.
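One possible shape for such an alert is a periodic check of per-feed queue depth and oldest-message age against thresholds. The sketch below is a minimal illustration under assumed metric names and thresholds; it does not describe Redox's actual monitoring.

```python
from dataclasses import dataclass

@dataclass
class QueueStats:
    customer: str
    depth: int                 # messages currently waiting
    oldest_age_seconds: float  # age of the oldest unprocessed message

# Hypothetical thresholds; real values would be tuned per feed.
MAX_DEPTH = 1_000
MAX_AGE_SECONDS = 300

def backing_up(stats: QueueStats) -> bool:
    """Return True when a customer/CHO feed looks backed up."""
    return stats.depth > MAX_DEPTH or stats.oldest_age_seconds > MAX_AGE_SECONDS

def check_feeds(all_stats: list[QueueStats]) -> list[str]:
    """Return the customers whose feeds should trigger an alert."""
    return [s.customer for s in all_stats if backing_up(s)]

if __name__ == "__main__":
    sample = [
        QueueStats("customer-a", depth=42, oldest_age_seconds=8.0),
        QueueStats("customer-b", depth=5_300, oldest_age_seconds=640.0),
    ]
    for customer in check_feeds(sample):
        # In a real deployment this would page on-call instead of printing.
        print(f"ALERT: feed for {customer} appears to be backing up")
```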

Posted Nov 03, 2021 - 09:20 CDT

Resolved
This incident has been resolved.
Posted Oct 26, 2021 - 12:56 CDT
Monitoring
A fix has been implemented and we are monitoring the results.
Posted Oct 26, 2021 - 10:27 CDT
Update
We are continuing to investigate this issue.
Posted Oct 26, 2021 - 10:04 CDT
Investigating
We are currently investigating a message processing delay - queues for a small group of customers may be processing slowly at this time. We will update you as we discover more.
Posted Oct 26, 2021 - 09:44 CDT
This incident affected: redoxengine.com.