
Building Custom Optimizers

Here we look at the steepest descent algorithm for unconstrained problems.

For a general unconstrained optimization problem stated as:

$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad f(x)$$

the steepest descent algorithm computes each new iterate recursively, taking a unit step along the negative gradient, using the formula

$$x_{k+1} = x_k - \nabla f(x_k) .$$
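Before wrapping the algorithm in a class, a plain NumPy sketch of the update rule may help. The quadratic below is an assumed toy example (not part of modOpt), chosen so that full gradient steps converge:

import numpy as np

# Toy quadratic (illustration only): f(x) = 0.5*x1^2 + 0.25*x2^2,
# whose gradient is (x1, 0.5*x2).
def grad(x):
    return np.array([x[0], 0.5 * x[1]])

x = np.array([3., 4.])      # initial guess x_0
for k in range(20):
    x = x - grad(x)         # x_{k+1} = x_k - grad f(x_k)

print(x)                    # approaches the minimizer [0., 0.]

Because there is no line search, the iteration only converges when the full gradient step is short enough; for a quadratic this means the Hessian eigenvalues must lie below 2, which is why the toy problem above is scaled the way it is.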

Given an initial guess $x_0$, we can write a steepest descent optimizer using the Optimizer() class in modOpt as follows:

import numpy as np
import time

from modopt import Optimizer


class SteepestDescent(Optimizer):
    def initialize(self):
        # Name your algorithm
        self.solver = 'steepest_descent'

        self.obj = self.problem.compute_objective
        self.grad = self.problem.compute_objective_gradient

        # Declare the options used by this optimizer
        self.options.declare('maxiter', types=int)
        self.options.declare('opt_tol', types=float)
        # self.declare_outputs(x=2, f=1, opt=1, time=1)

    def solve(self):
        nx = self.problem.nx
        x0 = self.problem.x0
        opt_tol = self.options['opt_tol']
        maxiter = self.options['maxiter']

        obj = self.obj
        grad = self.grad

        start_time = time.time()

        # Setting initial values for current iterates
        x_k = x0 * 1.
        f_k = obj(x_k)
        g_k = grad(x_k)

        itr = 0

        # Optimality measure: 2-norm of the gradient
        opt = np.linalg.norm(g_k)

        # Initializing outputs
        self.update_outputs(itr=0,
                            x=x0,
                            obj=f_k,
                            opt=opt,
                            time=time.time() - start_time)

        while (opt > opt_tol and itr < maxiter):
            itr_start = time.time()
            itr += 1

            # Take a full step along the negative gradient direction
            p_k = -g_k
            x_k += p_k
            f_k = obj(x_k)
            g_k = grad(x_k)

            opt = np.linalg.norm(g_k)

            # Append outputs dict with new values from the current iteration
            self.update_outputs(itr=itr,
                                x=x_k,
                                obj=f_k,
                                opt=opt,
                                time=time.time() - itr_start)

        end_time = time.time()
        self.total_time = end_time - start_time

Through the update_outputs() calls above, the Optimizer() class records all the data needed in its outputs dictionary.
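As a rough usage sketch, assuming prob is a modOpt problem that exposes the x0, nx, compute_objective, and compute_objective_gradient members used above, and assuming option values can be passed to the Optimizer constructor as keyword arguments (both are assumptions here, not guarantees of the API):

# `prob` is an assumed, already-constructed modOpt problem object
optimizer = SteepestDescent(prob, opt_tol=1e-6, maxiter=200)
optimizer.solve()
print(optimizer.total_time)   # total wall time recorded by solve()

The run stops as soon as the gradient norm drops below opt_tol or the iteration count reaches maxiter, whichever happens first.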