So far I've talked about the irritation I was seeing on the C++/Win32 parent process side. However, we also saw weirdness on the Haskell side. Everything is done inside a System.Timeout.timeout call, so the process really should never wedge. The parent process, being buggy, was doing a blocking read on stdout while the child process was writing to stderr. We saw weird behaviour, depending on how much the child wrote:

- Up to 4096 bytes: the child completed normally.
- Between 4097 and 4608 bytes: the child hung.
- Over 4608 bytes: the timeout fired.
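As a reminder of the semantics involved: 'timeout' takes a budget in microseconds, and if the action overruns it, the action is interrupted and Nothing comes back. A minimal sketch (the durations here are illustrative, not from the real system):

```haskell
import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

-- Run one action that finishes in time and one that overruns.
-- 'timeout' returns 'Just' the result, or 'Nothing' on expiry.
demo :: IO (Maybe String, Maybe String)
demo = do
  fast <- timeout 200000 (pure "done")                        -- completes
  slow <- timeout 200000 (threadDelay 500000 >> pure "done")  -- interrupted
  pure (fast, slow)
```

Any interruptible blocking operation inside the action, including a write to a full pipe, is subject to the same cancellation.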
The first piece of behaviour is expected... but what's going on in the other two cases? In the last case, the OS buffer fills, the write call blocks, and as the write is inside the timeout block the timeout fires. We might expect this, but why only over 4608 bytes? Haskell internally buffers output in 512-byte blocks. So, if you write over 4608 bytes, you fill the internal buffer nine times; the first eight flushes write out 4096 bytes (filling the OS pipe buffer), and the ninth flush then blocks, inside the 'timeout' block. The process times out.
Between 4097 and 4608 bytes, the internal buffer fills and drains eight times, writing out 4096 bytes to the OS and filling its pipe buffer, but leaving the internal buffer non-empty. We leave the 'timeout' block and start to shut the process down. Our stderr handle's internal buffer needs flushing, so we perform a blocking write, this time outside the 'timeout' block. The parent never reads, so the write never completes. Deadlock.
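The byte accounting above can be captured in a tiny model. This is a sketch under the post's assumptions: a 512-byte internal buffer, a 4096-byte OS pipe buffer, and a full internal buffer only being flushed once a further byte arrives (which is what the 4608-byte boundary implies):

```haskell
data Outcome = Fine | DeadlocksOnFlush | TimesOutInside
  deriving (Eq, Show)

internalBuf, pipeBuf :: Int
internalBuf = 512   -- Haskell's internal buffer size (assumption from the post)
pipeBuf     = 4096  -- OS pipe buffer size (assumption from the post)

-- Full drains of the internal buffer during the write itself; the buffer
-- only flushes once a further byte arrives, hence the (n - 1).
drains :: Int -> Int
drains n = max 0 ((n - 1) `div` internalBuf)

-- What happens when the child writes n bytes to a pipe nobody reads.
outcome :: Int -> Outcome
outcome n
  | drains n * internalBuf > pipeBuf  = TimesOutInside   -- a drain blocks inside 'timeout'
  | drains n * internalBuf == pipeBuf = DeadlocksOnFlush -- leftover flushed outside 'timeout'
  | otherwise                         = Fine             -- everything fits; shutdown flush succeeds
```

The boundaries come out where the post observed them: 4097 is the first byte count that deadlocks, and 4609 the first that times out.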
What's the upshot of this? If you want to know what's going on, you need to understand the details of your system, as abstractions leak. Oh, and if you want 'timeout' to work quite as you expect, you might want to flush your handles before the end of the block.
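A minimal sketch of that advice, with the flush as the last step inside the 'timeout' block, so a blocked flush is cancelled by the timeout rather than wedging later at handle shutdown (the function name and one-second budget are illustrative):

```haskell
import System.IO (hFlush, hPutStr, stderr)
import System.Timeout (timeout)

-- Write a message to stderr with a one-second budget. The explicit
-- hFlush runs inside 'timeout', so if the pipe is full the flush is
-- interrupted and we get Nothing back, instead of deadlocking in the
-- implicit flush at shutdown.
chatterWithBudget :: String -> IO (Maybe ())
chatterWithBudget msg = timeout 1000000 $ do
  hPutStr stderr msg
  hFlush stderr
```

With a well-behaved reader on the other end this returns Just (); with a full pipe it returns Nothing after a second, which is the behaviour we wanted all along.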