No: as long as you start out with a well-conditioned X (for example, an orthogonal one), it should stay that way.
The basic reason for this is similar to optimizing on a sphere by changing variables to x/\Vert x \Vert: because \nabla_x f(x/\Vert x \Vert) is orthogonal to x (i.e., tangent to the sphere of radius \Vert x \Vert), a gradient step leaves \Vert x \Vert unchanged to first order, so gradient-based optimization should never make x stray very far towards the origin, where the normalization is ill-behaved. In the same way, the gradient of g(X) = f(X(X^T X)^{-1/2}) is tangent to the manifold of orthogonal matrices, so optimization steps will keep X pretty close to that manifold (i.e., well-conditioned).
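As a quick numerical sanity check of the sphere analogy (a sketch with a hypothetical test function f; the names here are not from any particular library), one can verify that the numerical gradient of f(x/\Vert x \Vert) is orthogonal to x:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)

def f(v):
    # arbitrary smooth test function (hypothetical, for illustration only)
    return a @ v + np.sin(v[0])

def h(x):
    # f composed with normalization onto the unit sphere
    return f(x / np.linalg.norm(x))

def num_grad(func, x, eps=1e-6):
    # central-difference numerical gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (func(x + e) - func(x - e)) / (2 * eps)
    return g

x = rng.standard_normal(5)
g = num_grad(h, x)
print(abs(x @ g))  # ~0: the gradient is orthogonal to x, i.e. tangent to the sphere
```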
(In particular, you can easily show that if X^T X = I, then X^T \nabla g is anti-symmetric. It follows that if you take a small gradient step \delta X = \epsilon \nabla g, then the new Gram matrix is (X + \delta X)^T (X + \delta X) = I + \epsilon (X^T \nabla g + \nabla g^T X) + \epsilon^2 \nabla g^T \nabla g; the first-order term vanishes by anti-symmetry, so the change in X^T X is O(\epsilon^2), i.e. X stays orthogonal to first order.)
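Both claims are easy to check numerically. Here is a sketch (again with a hypothetical test function f, and (X^T X)^{-1/2} computed via an eigendecomposition): at an orthogonal X, the finite-difference gradient G of g satisfies X^T G \approx -(X^T G)^T, and a step of size \epsilon perturbs X^T X only at order \epsilon^2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))

def inv_sqrt(S):
    # (X^T X)^{-1/2} via eigendecomposition of the symmetric PD matrix S
    w, V = np.linalg.eigh(S)
    return (V / np.sqrt(w)) @ V.T

def f(M):
    # arbitrary smooth matrix function (hypothetical, for illustration only)
    return np.sum(A * M) + np.sum(np.sin(M))

def g(X):
    # f evaluated at the orthogonal polar factor X (X^T X)^{-1/2}
    return f(X @ inv_sqrt(X.T @ X))

def num_grad(func, X, eps=1e-6):
    # entrywise central-difference numerical gradient
    G = np.zeros_like(X)
    for idx in np.ndindex(*X.shape):
        E = np.zeros_like(X)
        E[idx] = eps
        G[idx] = (func(X + E) - func(X - E)) / (2 * eps)
    return G

X, _ = np.linalg.qr(rng.standard_normal((n, n)))  # start at an orthogonal X
G = num_grad(g, X)

S = X.T @ G
print(np.linalg.norm(S + S.T))  # ~0: X^T grad g is anti-symmetric

step = 1e-3
X1 = X + step * G               # small gradient step
print(np.linalg.norm(X1.T @ X1 - np.eye(n)))  # O(step^2) departure from X^T X = I
```

Shrinking `step` by 10 shrinks the second printed quantity by roughly 100, consistent with the O(\epsilon^2) claim.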