Of note, when looking through the output from the cgtrust function: the variation in the trust region radius is quite interesting. In most cases, the trust region started off fairly large (in particular, for the starting point (-1.9,20) the trust region initially had a radius of 20). In the (0,0) case (or at least my approximation thereof), the trust region radius began very small but eventually grew, allowing the algorithm to move a little faster toward the minimum. For both the (-1.9,2) and (-1.9,20) cases, the norm of the gradient was not monotonically decreasing (in fact, it increased for several iterations); I believe this is related to the "negative curvature" messages. I also believe that using (0,0) as a starting point makes the initial trust region radius 0, which in turn causes the "divide by zero" error.
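To make the "negative curvature" messages and the divide-by-zero concrete, here is a minimal sketch of a Steihaug-type truncated-CG solver for the trust-region subproblem. This is not cgtrust's actual implementation, just an illustration of the standard algorithm: the solver tests the curvature d'Hd along each CG direction, and both that ratio and the boundary-intersection formula involve divisions that blow up in degenerate cases (e.g. a zero gradient or a zero radius).

```python
import numpy as np

def boundary_tau(p, d, delta):
    # Solve ||p + tau*d||^2 = delta^2 for tau >= 0.
    # Note: divides by d'd, which is zero if the direction d is zero.
    a = d @ d
    b = 2 * (p @ d)
    c = p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(g, H, delta, tol=1e-8, max_iter=50):
    """Truncated CG for: min g'p + 0.5 p'Hp  subject to ||p|| <= delta.
    Returns the step p and a status string."""
    p = np.zeros_like(g)
    r = g.copy()      # residual of the model gradient, starts at g
    d = -r            # first direction: steepest descent
    if np.linalg.norm(r) < tol:
        return p, "gradient already small"
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:
            # Negative (or zero) curvature: the model is unbounded along d,
            # so follow d all the way to the trust-region boundary.
            tau = boundary_tau(p, d, delta)
            return p + tau * d, "negative curvature"
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # Step would leave the trust region: stop on the boundary.
            tau = boundary_tau(p, d, delta)
            return p + tau * d, "hit boundary"
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol * np.linalg.norm(g):
            return p_next, "converged"
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p, "max iterations"
```

With delta = 0 every trial step "exceeds" the radius, and with a zero gradient the first direction is the zero vector, so boundary_tau divides by zero; either degeneracy is a plausible source of the crash observed at (0,0).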
Comments: The following is the output from the program. It crashed on the point (0,0) because it starts off with the TR radius = 0. Other than that, we notice that the function values are nonincreasing, which means we have descent directions. We also notice that the TR radius first increases and then shrinks as we approach the minimum (i.e., as we start to converge).
Notes: for x0=[0;0], the program had problems (dividing by zero); the gradients and the function evaluations did not change, and the method did not seem to converge. The program also took about twice as many iterations when x0 was significantly farther from the optimal solution. The function values always seemed to decrease, so the program appeared to choose descent directions each time. The trust region radius generally decreased over the run, though not necessarily from one iteration to the next.