The CUTEr test set is a testing environment for nonlinear optimization solvers that contains more than 1,000 academic and applied nonlinear problems. It is widely used to verify the robustness and performance of nonlinear optimization solvers. In this paper, we perform a quantitative analysis of the CUTEr test set. Our analysis confirms some paradigms of nonlinear optimization and automatic differentiation, while calling others into question. Furthermore, we show that the CUTEr test set is probably biased: solvers that use exact derivatives and sparse linear algebra are likely to perform better on it than solvers employing directional derivatives and low-rank updating.