Choosing the numerical optimization algorithm that will perform best on a given problem is a task researchers often face. Benchmarking experiments let researchers compare the performance of different algorithms across a range of problems and thereby indicate which algorithm to use for a given problem. We benchmarked two prototypical iterative optimization algorithms, gradient descent and BFGS, on a suite of test problems using the COCO benchmarking software. Our results indicate that the relative performance of gradient descent and BFGS varies with problem dimension, problem class, and target solution accuracy. Based on these results, we offer recommendations for improving solution accuracy while reducing computational cost.
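The kind of head-to-head comparison described above can be sketched in a few lines. This is an illustrative example only, not the paper's COCO experiment: it pits a fixed-step gradient descent loop (the step size 1e-3 and iteration budget are arbitrary choices for this sketch) against SciPy's BFGS implementation on the 2-D Rosenbrock test function.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Common starting point for both methods (a conventional choice for Rosenbrock).
x0 = np.array([-1.2, 1.0])

# Plain gradient descent with a small fixed step size (hypothetical settings).
x = x0.copy()
for _ in range(20000):
    x = x - 1e-3 * rosen_der(x)

# BFGS via SciPy's quasi-Newton implementation, using the analytic gradient.
res = minimize(rosen, x0, jac=rosen_der, method="BFGS")

print("gradient descent: x =", x, " f =", rosen(x))
print("BFGS:             x =", res.x, " f =", res.fun, " nfev =", res.nfev)
```

On ill-conditioned problems like Rosenbrock, BFGS typically reaches a much tighter tolerance in far fewer function evaluations than fixed-step gradient descent, which mirrors the dependence on problem class and target accuracy noted above.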