Yep, I have Gino's book and I'm referring to that for my implementation.
I'm using double precision and still seeing issues. I'm hesitant to blame precision, both for that reason and because the failure cases are so far off geometrically, especially compared to the apparent accuracy of the successes. There doesn't seem to be an in-between. It almost seems like there is a fundamentally bogus assumption in the algorithm itself, which would explain why people report robustness problems-- but of course I'd have to rule out an error on my part first.
I'm already handling the zero-determinant case; in general it means that GJK has failed to "escape" the previous simplex and has added another point (or a duplicate vertex) on the same face. I've found that in this case it is generally safe to terminate with a non-intersecting result. In cases where a sub-simplex "contains" the origin, this can be detected when the distance to the simplex is exactly zero, which lets us exit before the degenerate case emerges. When precision leaves the distance not-quite-zero, we would again reach a degenerate simplex in the following step and report either no intersection-- technically consistent, since the distance is finite-- or an intersection, once we proceed toward the interior of the Minkowski volume and just barely envelop the origin. In any case, these are not the situations where I'm seeing Johnson's algorithm fail. The failures are much more egregious, like a nearly-zero distance measurement (1e-30) to a simplex several units away.
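To make the discussion concrete in the meantime, here is a minimal, dimension-agnostic Python sketch of Johnson's recursive determinant scheme as I understand it from the book-- treat it as a sketch of the technique, not my actual code, and the names (`closest_point`, `delta`) are just for this post:

```python
from itertools import combinations

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_point(simplex):
    """Closest point of conv(simplex) to the origin via Johnson's
    recursive determinants. Vertices are plain tuples, so this works
    in any dimension. Returns (v, supporting_sub_simplex), or None
    when no subset passes the sign tests (the degenerate case)."""
    n = len(simplex)
    idx = range(n)
    # delta[(X, i)]: Johnson's cofactor for vertex i within subset X,
    # where X is a sorted tuple of vertex indices.
    delta = {}
    for i in idx:
        delta[((i,), i)] = 1.0  # base case: singletons

    # Build determinants for larger subsets from their facets:
    # delta_j(X) = sum_{i in X\{j}} delta_i(X\{j}) * (y_i.y_k - y_i.y_j),
    # with k any fixed member of X\{j}.
    for size in range(2, n + 1):
        for X in combinations(idx, size):
            for j in X:
                Xj = tuple(i for i in X if i != j)
                k = Xj[0]
                delta[(X, j)] = sum(
                    delta[(Xj, i)]
                    * (dot(simplex[i], simplex[k]) - dot(simplex[i], simplex[j]))
                    for i in Xj
                )

    # The supporting subset X has all-positive cofactors, and each
    # excluded vertex j must have a non-positive cofactor in X + {j}.
    for size in range(1, n + 1):
        for X in combinations(idx, size):
            if all(delta[(X, i)] > 0.0 for i in X):
                supported = True
                for j in idx:
                    if j in X:
                        continue
                    Xj = tuple(sorted(X + (j,)))
                    if delta[(Xj, j)] > 0.0:
                        supported = False
                        break
                if supported:
                    total = sum(delta[(X, i)] for i in X)
                    dim = len(simplex[0])
                    v = tuple(
                        sum(delta[(X, i)] * simplex[i][d] for i in X) / total
                        for d in range(dim)
                    )
                    return v, [simplex[i] for i in X]
    # No subset qualified: affinely dependent simplex or numerically
    # inconsistent determinants -- the failure case discussed above.
    return None
```

The final `return None` is where the zero-determinant situation surfaces: if no subset passes both sign tests, the simplex is degenerate and I bail out as described above. Note it's exactly the `dot(...) - dot(...)` differences in the recursion that are vulnerable to cancellation, which is where I'd expect precision trouble to enter-- though not on the scale I'm observing.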
I'm using Johnson's algorithm because the rest of my code is dimension-agnostic. It's appealing because it is (at least procedurally) extremely simple and coordinate-free. I agree that its lack of a geometric abstraction is a big detriment, and that's certainly why I'm having trouble.
I'll post some code (and maybe some images) here a bit later as soon as I have access to them.