2026-01-21 Connection Failures on Outbound Calls
Affected Services
Bandwidth Outbound Call Routing - DFW Data Center
Event Summary
Beginning at approximately 10:20 AM ET on January 21, 2026, Bandwidth, our primary outbound carrier, experienced call completion failures due to a route processing failure at their DFW (Dallas/Fort Worth) data center, which prevented outbound calls from completing the connection process.
Outbound call routing was temporarily redirected to an alternate carrier while Bandwidth resolved the route processing failures.
Event Timeline
January 21, 2026
10:20 AM ET – We became aware that outbound calls through our primary outbound carrier, Bandwidth, were failing to complete.
10:31 AM ET – Bandwidth acknowledged call completion failures at their DFW data center.
10:34 AM ET – Call routing was manually switched to an alternate carrier to restore service.
11:20 AM ET – Bandwidth reported the incident as resolved. Routing through our alternate carrier remained in place for continued monitoring to confirm stable outbound call completion.
8:05 PM ET – Call routing was switched back to Bandwidth as the primary carrier. A test call was performed and confirmed successful routing through Bandwidth.
January 22, 2026
10:15 AM ET – After confirming 24 hours of sustained stability, the incident was marked as resolved.
Root Cause
At approximately 10:20 AM ET, Bandwidth began experiencing outbound call failures through their DFW data center. While Bandwidth confirmed resolution of the call completion failures, the carrier did not provide detailed technical root cause information.
During the incident, SIP trace analysis revealed that Bandwidth was responding to our outbound call INVITEs with "100 Trying". This response indicates their servers received and acknowledged the requests but could not complete call processing. Because "100 Trying" signals successful acknowledgment rather than failure, our automatic failover mechanisms were not triggered: each call appeared to be successfully initiated, and manual intervention was ultimately required to route calls through an alternate carrier.
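To make the signaling gap concrete, below is a minimal sketch of a per-call response tracker under a simplified model; the class, field names, and the 8-second timeout are illustrative assumptions, not our production failover code.

```python
import time

# Illustrative sketch (names and timeout are assumptions, not production code):
# how a call that only ever receives "100 Trying" slips past failover logic
# that keys on hard failure responses.

PROGRESS_RESPONSES = {180, 183, 200}   # Ringing / Session Progress / OK

class OutboundCall:
    def __init__(self, call_id):
        self.call_id = call_id
        self.invite_sent_at = time.monotonic()
        self.saw_trying = False          # received "100 Trying"
        self.saw_progress = False        # received ringing/answer
        self.final_failure = False       # received a 4xx/5xx/6xx response

    def on_response(self, status_code):
        if status_code == 100:
            self.saw_trying = True
        elif status_code in PROGRESS_RESPONSES:
            self.saw_progress = True
        elif status_code >= 400:
            self.final_failure = True

def should_fail_over_legacy(call):
    """Pre-incident behavior: only hard failures trigger failover, so a call
    stuck on "100 Trying" looks successfully initiated and is left alone."""
    return call.final_failure

def is_stalled(call, stall_timeout=8.0):
    """Gap exposed by the incident: "100 Trying" received, but no ringing or
    answer followed within the timeout (timeout value is illustrative)."""
    waited = time.monotonic() - call.invite_sent_at
    return call.saw_trying and not call.saw_progress and waited > stall_timeout
```

In this model, a call that only ever receives "100 Trying" satisfies neither the legacy hard-failure condition nor a progress condition, which is exactly the state that required manual rerouting during this incident.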
Impact Summary
Outbound Calling Interruption - Approximately 15 minutes of customer impact
All outbound calls routed through Bandwidth experienced complete service disruption during this period
To minimize customer impact, our team manually redirected call routing to an alternate carrier, restoring outbound calling while Bandwidth addressed the processing failures within their infrastructure.
Outbound call routing was returned to Bandwidth at 8:05 PM ET the same day, and after 24 hours of confirmed sustained stability the incident was closed with no further issues reported.
Follow-up Actions
Immediate Actions:
- Added monitoring and alerts for earlier detection of and response to outbound call completion failures where a "100 Trying" response is not followed by a ringing or acknowledgment response (a sketch of one such detection rule follows these actions).
Long-term Actions:
- Investigate third-party alternatives to determine the feasibility of automatic failover when a "100 Trying" is not promptly followed by a ringing or acknowledgment response.
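As a rough illustration of the detection rule referenced under Immediate Actions, the sketch below raises an alert when stalled calls dominate a recent window of outbound attempts; the window size, threshold, and function names are assumptions for illustration, not our actual monitoring configuration.

```python
from collections import deque

# Illustrative alerting sketch: window size, threshold, and function names are
# assumptions, not our actual monitoring configuration.

WINDOW_SIZE = 50            # most recent outbound call attempts considered
STALL_RATE_THRESHOLD = 0.5  # alert when half the window stalled in "100 Trying"

recent_outcomes = deque(maxlen=WINDOW_SIZE)  # True = stalled, False = progressed

def record_call_outcome(stalled: bool) -> None:
    """Record whether a call stalled ("100 Trying" with no ring/answer)."""
    recent_outcomes.append(stalled)

def stall_alert_needed() -> bool:
    """Return True when stalled calls dominate the recent window."""
    if len(recent_outcomes) < WINDOW_SIZE:
        return False
    return sum(recent_outcomes) / len(recent_outcomes) >= STALL_RATE_THRESHOLD
```

A rate-over-window check like this avoids alerting on a single slow call while still catching a carrier-wide stall within a handful of attempts.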