Canfor
25-Minute Talk
Corruption in PostgreSQL is rare, but when it does happen in managed environments, the investigation feels like a detective mystery. Over the past year, we encountered cases ranging from PostgreSQL bugs to operator mistakes: one bug was fixed upstream, one case remains unexplained to this day, and one was caused by operator error.
While best practices for preventing corruption are well documented, real-world incidents still occur, sometimes in unexpected ways. Instead of repeating generic prevention strategies, this session focuses on the actual challenges we faced in large-scale managed PostgreSQL deployments and the investigative process that led us toward the root causes.
I'll cover:
- Customer Impact: How corruption manifested and what symptoms were observed.
- Detection Signals: What clues pointed us toward corruption.
- Root Cause Journey: The step-by-step process from high-level symptoms to pinpointing the issue.
- Lessons Learned: Practical insights for engineers and DBAs handling similar scenarios.
This session is ideal for:
- PostgreSQL Hackers and Core Developers: If you want to understand the corruption cases that remain unexplained and take the investigation further to improve PostgreSQL.
- DBAs and PostgreSQL Developers: To learn from these cases and turn them into a practical checklist for your own databases.

