# Jonathan M. Borwein

# Commemorative Conference

## 25–29 September 2017

# Applied Analysis, Optimisation & Convex Functions

Theme chaired by Regina Burachik and Guoyin Li

**Keynote talk:** Local Elicitation of Maximal Monotonicity in Solving Generalized Equations and Problems of Optimization

A wide range of problems can be formulated in terms of a closed subspace
$L$ of a Hilbert space $H$ and a set-valued mapping $T: H \rightrightarrows H$ as
determining a pair $(\bar x,\bar w)\in L\times L^\perp$ such that
$\bar w \in T(\bar x)$. Spingarn showed in 1983 that if $T$ is maximal
monotone the proximal point algorithm can be applied from any starting point
to generate a sequence of pairs $(x^\nu,w^\nu) \in L\times L^\perp$ that will
surely converge to a particular solution, as long as one exists.
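
The device behind this is Spingarn's *partial inverse*; in the notation above, its standard definition (following Spingarn 1983) can be sketched as follows, writing $u = u_L + u_{L^\perp}$ for the orthogonal decomposition of $u \in H$:

```latex
% Spingarn's partial inverse T_L of T with respect to L:
v_L + u_{L^\perp} \,\in\, T_L\!\left(u_L + v_{L^\perp}\right)
\quad\Longleftrightarrow\quad
v \in T(u).
% Zeros of T_L encode solutions: 0 \in T_L(z) exactly when
% z = \bar x + \bar w with \bar x \in L,\; \bar w \in L^\perp,\; \bar w \in T(\bar x),
% so the proximal point algorithm applied to T_L generates the pairs above.
```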

This approach can in fact be extended to an important setting in which $T$ is not maximal monotone, but a partial-inverse version of local maximal monotonicity can be elicited at a solution $(\bar x,\bar w)$ by a kind of augmentation. Local convergence is then guaranteed if the starting point is close enough to that solution. In optimization this is intimately related to the convexifying properties of augmented Lagrangian functions.

Uniform continuity of the product of real functions defined on a metric space

As is well-known, the pointwise product of two bounded real-valued uniformly continuous functions $f$ and $g$ defined on a metric space is uniformly continuous, but this condition is
hardly necessary: let $f(x) = g(x) = \sqrt{x}$ for $x \ge 0$. In joint work with Som Naimpally that appeared in Real Analysis Exchange, we produced necessary and sufficient conditions
on a pair of uniformly continuous real-valued functions $(f,g)$ defined on an arbitrary metric space so that their pointwise product is uniformly continuous. Our conditions are
sufficient for any pair of real-valued functions, and are necessary for a class of pairs properly containing the uniformly continuous pairs. We precisely identify this class and
describe it in various ways.
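
As a quick numerical illustration of the $\sqrt{x}$ example (a sketch, not from the paper): $|\sqrt{x}-\sqrt{y}|\le\sqrt{|x-y|}$ supplies a uniform modulus of continuity even though the factors are unbounded, and the pointwise product is the identity.

```python
import math
import random

random.seed(0)
pairs = [(random.uniform(0.0, 1e6), random.uniform(0.0, 1e6)) for _ in range(10_000)]

# f = g = sqrt is unbounded on [0, oo) yet uniformly continuous:
# |sqrt(x) - sqrt(y)| <= sqrt(|x - y|) for all x, y >= 0.
assert all(abs(math.sqrt(x) - math.sqrt(y)) <= math.sqrt(abs(x - y)) + 1e-9
           for x, y in pairs)

# Their pointwise product (fg)(x) = x is also uniformly continuous
# (modulus delta -> delta), so boundedness of the factors is not necessary.
```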

A different but equally interesting question is this: what are necessary and sufficient conditions on the internal structure of a metric space so that the product of each pair of uniformly continuous real-valued functions remains uniformly continuous?

In joint work with Maribel Garrido and Ana Meroño appearing in Set-Valued and Variational Analysis and dedicated to Jon Borwein, we show that this occurs exactly when the familiar bornology of Bourbaki bounded subsets coincides with a new larger bornology.

An approach for the convex feasibility problem via Monotropic Programming

We use recent zero duality gap results arising from the Monotropic Programming problem for analyzing consistency of the convex feasibility problem in Hilbert spaces. We characterize
consistency in terms of the lower semicontinuity of the infimal convolution of the associated support functions.
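
For closed convex sets $A$ and $B$ with support functions $\sigma_A$ and $\sigma_B$, the infimal convolution in question is the standard one from convex analysis (the paper's precise hypotheses may differ):

```latex
(\sigma_A \,\square\, \sigma_B)(x^{*})
  \;=\; \inf_{y^{*} \in H}\bigl\{ \sigma_A(y^{*}) + \sigma_B(x^{*} - y^{*}) \bigr\},
\qquad
\sigma_A(x^{*}) \;=\; \sup_{a \in A}\,\langle x^{*}, a\rangle .
```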

Authors: Regina Burachik, Victoria Martín-Márquez

Optimal Control with Minimum Total Variation

We consider optimal control problems where the aim is to minimize the total variation in the control variables in addition to a general objective functional, resulting in a multi-objective optimal control problem.
We study an illustrative convex problem, stated in a simple form, and obtain asymptotic results for the challenging case in which only the total variation is to be minimized. We also discuss problems in a more general form.
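
For concreteness, over a time grid with piecewise-constant controls $u_0,\dots,u_{N-1}$, the total-variation term typically discretizes as follows (a standard formulation sketched here; the talk's exact setup may differ):

```latex
\operatorname{TV}(u) \;=\; \sum_{k=0}^{N-2} \bigl| u_{k+1} - u_k \bigr|,
\qquad
\min_{u}\; \bigl( J(x,u),\; \operatorname{TV}(u) \bigr)
\quad \text{(multi-objective)} .
```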

Computing Radius of Robust Feasibility of Uncertain Linear Conic Programs via Semidefinite Programs

The radius of robust feasibility provides a numerical value for the largest possible uncertainty set that guarantees robust feasibility of an uncertain linear conic program. This determines when the robust feasible set is non-empty.
Otherwise the robust counterpart of an uncertain program is not well-defined as a robust optimization problem. In this talk, we address a key fundamental question of robust optimization: How to compute the radius of robust feasibility of uncertain linear conic programs, including linear programs? We first provide computable lower and upper bounds for the radius of robust feasibility for general uncertain linear conic programs under the commonly used ball uncertainty set. We then provide classes of linear conic programs where the bounds are calculated by finding the optimal values of related semidefinite linear programs (SDPs). The classes, in particular, include the important classes of uncertain SDPs and second-order cone programs. Then, we present an exact formula for the radius of robust feasibility for a class of uncertain linear conic programs. We show that in the case of an uncertain linear program the formula allows us to calculate the radius by finding the optimal value of an associated second-order cone program.
As an application, we show how the well-studied distance to ill-posedness of parametric linear conic systems can be calculated using the radius of robust feasibility. This is done by providing formulas for the lower and upper bounds and a new exact formula for the distance to ill-posedness.
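
Schematically, for an uncertain conic system with data perturbed within a ball $\mathcal{U}_{\alpha}$ of radius $\alpha$ around the nominal data, the radius of robust feasibility can be written as (a schematic form consistent with the abstract; the paper's precise definition may differ):

```latex
\rho \;=\; \sup\Bigl\{\, \alpha \ge 0 \;:\;
  \bigl\{ x : b(u) - A(u)x \in K \ \ \forall\,(A(u),b(u)) \in \mathcal{U}_{\alpha} \bigr\}
  \neq \emptyset \,\Bigr\}.
```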

Meta-optimisation: Lower bounds for higher faces

Polytopes are the feasible regions of many optimisation problems, and many algorithms proceed by moving along the edges from one vertex to another. So it is interesting to find bounds on the number of edges.
We have previously announced such lower bounds for arbitrary $d$-dimensional polytopes with $V$ vertices, when $V\le 2d$, thereby confirming a 1967 conjecture of Grünbaum. We can now also give precise lower bounds for the number of $m$-dimensional faces, at least for $m\ge 0.618d$. We also present the corresponding results for polytopes with $2d + 1$ and $2d + 2$ vertices; in some of these cases the answer depends on number theory.

On the Subdifferential and Recession Function of the Fitzpatrick Function

Some years ago during a sabbatical to Nova Scotia, Jon Borwein introduced
me to the Fitzpatrick function and its relevance to monotone operator theory.
At that time he gave me the project of trying to find a way to use the Fitzpatrick
function to study the single-valuedness of monotone operators. We made some progress (which I will discuss), but the fundamental goal of being able, at the very least, to reproduce the classical results about the single-valuedness of a maximal monotone operator on Asplund spaces still remains elusive. This stands in stark contrast to the impact the study of the Fitzpatrick function has had on the understanding, and the simplification of proofs, of many other properties of monotone operators. Similarly, the recession function associated with the Fitzpatrick function appears to have been little studied. In this talk I will discuss what is known and introduce a number of interesting observations, as well as some conjectures and open problems.

Optimal Backward Error for ODE

Backward Error Analysis for the numerical solution of differential equations has a very long history. Recently it has been discovered that finding an
"optimal" backward error (for a given problem and a given numerical method) is not only possible, using techniques from optimal control, but also
informative. This talk discusses recent progress in the area.

Joint work with Yalcin Kaya, Robert HC Moir, and Julia Jankowski.

Come to Order

Jon Borwein had a long and sustained interest in the fixed point theory of nonexpansive maps. If $(X,d_{X})$ and $(Y,d_{Y})$ are metric spaces (typically with $X\subseteq Y$ and $d_{Y}|_{X} = d_{X}$), recall that $T:X\rightarrow Y$ is *nonexpansive* if $d_{Y}(Tx,Ty)\leq d_{X}(x,y)$ for all $x,y \in X$. In the Hilbert space context, examples include projections and reflections in closed convex sets, and resolvents of monotone mappings.
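
As a concrete instance of the projection example (a numerical sketch, not from the talk): the metric projection onto the closed Euclidean unit ball has the closed form $x \mapsto x/\max(1,\|x\|)$, and its nonexpansiveness can be checked on random pairs.

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed Euclidean unit ball."""
    return x / max(1.0, float(np.linalg.norm(x)))

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=3) * 5.0
    y = rng.normal(size=3) * 5.0
    # Nonexpansiveness: ||Px - Py|| <= ||x - y||.
    assert np.linalg.norm(proj_ball(x) - proj_ball(y)) <= np.linalg.norm(x - y) + 1e-12
```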

Upon learning of the early 1980s results by Maurey [3] and Alspach [1] concerning the spaces $c_0$ and $L_1[0,1]$, Jon sensed underlying order theoretic arguments were in play [2]. I will briefly describe my interactions with Jon and survey the results for Banach lattices that his insight led to, as well as the new geometric conditions that these in turn inspired.

- Dale E. Alspach, A fixed point free nonexpansive map, Proc. Amer. Math. Soc. 82 (1981), 423-424.
- J. Borwein and B. Sims, Non-expansive mappings on Banach lattices and related topics, Houston J. Math. 10 (1984), 339-355.
- B. Maurey, Points fixes des contractions sur un convexe fermé de $L_1$, Séminaire d'Analyse Fonctionnelle 1980-81, Exposé No. VIII, École Polytechnique, Palaiseau.

A Lyapunov-type approach to convergence of the Douglas--Rachford algorithm

The Douglas--Rachford projection algorithm is an iterative method for finding a point in the intersection of closed constraint sets. The algorithm has been observed experimentally to solve various nonconvex feasibility problems which current theory cannot sufficiently explain. In this talk, we discuss convergence of the Douglas--Rachford algorithm in a potentially nonconvex setting. Our analysis relies on the existence of a Lyapunov-type functional whose convexity properties are not tied to convexity of the original constraint sets. Benoist
(2015) solved a conjecture of Borwein and Sims (2011) on the existence of such a functional for a specific nonconvex feasibility problem. Our work can be viewed as a generalization of
Benoist's approach. This is based on joint work with Matthew Tam (University of Göttingen).
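
For readers less familiar with the method, here is a minimal convex instance in $\mathbb{R}^2$ (sets chosen purely for illustration): with reflectors $R_A = 2P_A - I$, the governing iteration is $z \mapsto \tfrac12(z + R_B R_A z)$, and the shadow sequence $P_A z$ approaches the intersection.

```python
import numpy as np

def proj_ball(z, r=1.0):
    """Projection onto the closed ball of radius r centred at the origin."""
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

def proj_line(z):
    """Projection onto the line {(x, y) : x + y = 1}."""
    a = np.array([1.0, 1.0])
    return z - ((a @ z - 1.0) / (a @ a)) * a

z = np.array([3.0, -2.0])
for _ in range(200):
    ra = 2.0 * proj_ball(z) - z                  # reflect in the ball (set A)
    z = 0.5 * (z + (2.0 * proj_line(ra) - ra))   # Douglas--Rachford step

x = proj_ball(z)  # shadow point: (near) the intersection of ball and line
```

In this convex setting the iterates settle quickly on a point of the chord where the line crosses the ball, so `x` satisfies both constraints to high accuracy.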

Detecting the Convexity of a Function and Related Issues

The idea of writing something on detecting the convexity of a function started after one of us (J. Dutta) visited Jon Borwein in Newcastle in 2014. In the celebrated book by Borwein and Lewis, *Convex Analysis and Nonlinear Optimization*, there are two exercise problems in which one is asked to prove the convexity of two different functions. Proving the convexity of those two functions by hand was extremely cumbersome, and one had to resort to the mathematical package MAPLE in order to actually compute the second derivative and draw its graph to gain more confidence about the nature of each function. In fact it took a good amount of analysis to finally conclude that the functions were convex. This was the first time we got the idea that proving the convexity of a function is not at all easy, but is exciting at the same time. Thus we provide some more examples of functions for which one needs computational tools to detect convexity, including the problems in Borwein and Lewis (2006).

Recently, Ahmadi and Parrilo (2013) also answered a long-standing conjecture of Shor, which said that it is NP-hard to detect the convexity of a polynomial function of degree four and above. They showed the NP-hardness of detecting the convexity of polynomials by reducing a biquadratic optimization problem to this problem. We use an approach based on algebraic geometry to discuss this issue.

In the last section we collect some other notions of convexity, such as Schur convexity, geodesic convexity and matrix convexity. We consider this article/talk as a survey on the issue of convexity detection and the exciting challenges it throws at us.

J. M. Borwein and A. S. Lewis, *Convex Analysis and Nonlinear Optimization*, 2nd Edition, Springer, 2006.

Ahmadi, Amir Ali; Olshevsky, Alex; Parrilo, Pablo A.; Tsitsiklis, John N., NP-hardness of deciding convexity of quartic polynomials and related problems, Math. Program. 137 (2013), no. 1-2, Ser. A, 453-476.
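
In practice, one can get the kind of computational confidence described above by sampling the Hessian and testing positive semidefiniteness: a negative eigenvalue at any sample point certifies non-convexity, while nonnegative eigenvalues on a grid are only numerical evidence. A sketch (the quartic below is chosen for illustration, not one of the book's exercises):

```python
import itertools
import numpy as np

def hess(x, y):
    """Hessian of f(x, y) = x**4 + y**4 + (x - y)**2 (illustrative quartic)."""
    return np.array([[12.0 * x**2 + 2.0, -2.0],
                     [-2.0, 12.0 * y**2 + 2.0]])

# Smallest Hessian eigenvalue over a sample grid in [-2, 2]^2.
min_eig = min(
    np.linalg.eigvalsh(hess(a, b)).min()
    for a, b in itertools.product(np.linspace(-2.0, 2.0, 21), repeat=2)
)
# min_eig < 0 would certify non-convexity; min_eig >= 0 on the grid is
# numerical evidence (not a proof) of convexity.  Here f is a sum of
# convex terms, so min_eig >= 0 up to rounding (it is 0 at the origin).
```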

An Abstract Variational Theorem

In this talk we will present a proof of the following theorem:

**Theorem** Let $f:X \to \mathbb{R}\cup\{\infty\}$ be a proper function on a Banach space $(X, \|\cdot\|)$. If there exists a nonempty open subset $A$ of $\mathrm{Dom}(f^*)$ such that $\mathrm{argmax}(x^*-f) \not= \emptyset$ for each $x^* \in A$, then there exists a dense and $G_\delta$ subset $R'$ of $A$ such that $(x^*-f):X \to \mathbb{R} \cup\{-\infty\}$ has a strong maximum for each $x^* \in R'$. In addition, if $0 \in A$ and $\varepsilon >0$ then there exists an $x_0^* \in X^*$ with $\|x^*_0\| < \varepsilon$ such that $(x_0^*-f):X \to \mathbb{R} \cup\{-\infty\}$ has a strong maximum.

We will also present some applications of this theorem and compare it to Stegall's variational principle.

Numerical methods for nonmonotone contact problems in continuum mechanics

We present several robust and efficient numerical methods for non-convex, non-smooth variational problems in nonmonotone contact. Examples include nonmonotone friction and adhesive
contact problems, delamination and crack propagation in adhesive bonding of composite structures. A challenging problem is adhesive bonding in the case of contamination. The non-smoothness comes from the non-smooth data of the problems themselves, in particular from the nonmonotone, multivalued physical laws involved in the boundary conditions. The nonmonotone laws can be expressed by means of the Clarke subdifferential of maximum, minimum, or nested maximum-minimum functions. The weak formulation of the resulting boundary value problems leads to a class of non-smooth variational inequalities, the so-called hemivariational inequalities (HVIs). The latter may be viewed as a first-order condition of a non-convex, non-smooth minimization problem. These problems are much harder to analyze and solve than classical variational inequality problems such as Signorini contact or Tresca friction problems. The resulting HVI problem is first regularized and then discretized by either finite element or boundary element methods. Another approach to solving non-smooth variational problems is the strategy: first discretize by finite elements, then optimize using finite-dimensional non-smooth optimization methods. Various numerical experiments illustrate the behavior, the strength and the limitations of the proposed approximation schemes.

On lexicographic tangents

Lexicographic differentiation was introduced by Yurii Nesterov in 1987. Lexicographic derivatives have some useful properties, such as exact calculus rules. Lexicographic tangents are the geometric counterpart of this construction, and allow one to characterise some desirable properties of convex sets.

I will introduce the notions of lexicographic differentiation and tangents, and focus on characterisations of the facial dual completeness property of convex cones.

The talk is based on joint work with Prof. Levent Tunçel, University of Waterloo.

Innovations in AskConstants version 2

The AskConstants program was inspired by the free online *Inverse Symbolic Calculator* program developed by a team including Jon and Peter Borwein, Simon Plouffe, David Bailey, Nathan Singer, Andrew Shouldice, Lingyun Ye, Tomas Daske, Peter Dobcsanyi, Dante Manna, and O-Yeat Chan. If you have not experienced this program, then visit that website for a delightful surprise.

The reason for developing another such program was that I needed a *function* rather than a program, and I wanted it written in *Mathematica* so that I could easily use it for another *Mathematica* package. AskConstants version 1 was based on the integer null vector method described in Alan Meichsner's M.S. thesis and later transformed by Kevin Hare into the *Maple* *identify* function.

Since then I have added the table lookup method that the Inverse Symbolic Calculator uses in addition to integer null vectors. But rather than using a few hundred binary searches in one very large sorted table, AskConstants uses bidirectional search using two smaller tables of approximately the same length $n$. The backward table is an *uninstantiated* set of univariate expressions together with one of their inverses with respect to one of the operators or functions therein. Given an input float, the inverse expressions and corresponding float values are instantiated, then sorted, followed by a collinear search. This permits assessment of $n^{2}$ expressions in time $\Theta(n\log n)$ and space $\Theta(n)$.

Also, sorting the tables by their unsigned base-2 significands permits the program to omit all but the simplest of entries that differ in magnitude by a power of 2, then infer the appropriate sign and power of 2 to include in the result, thus making the coverage much larger than $n^{2}$ expressions.

The presentation will demonstrate AskConstants and describe its techniques. I plan to post at AskConstants.org shortly before the conference a freely downloadable version 2 including these performance improvements and many interface improvements. Meanwhile, please email me challenging non-float constants and floats of unknown identity to try!
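
The bidirectional idea can be sketched as follows: a forward table of $n$ known constants and a backward table of $n$ invertible transforms; applying each inverse transform to the input float and searching the forward table covers $n^2$ candidate expressions with roughly $O(n\log n)$ work. This is a toy reconstruction of the principle, not AskConstants' actual code; all tables and transforms below are illustrative.

```python
import bisect
import math

# Forward table: known constants (value, name), kept sorted by value.
forward = sorted([(math.pi, "pi"), (math.e, "e"),
                  (math.sqrt(2), "sqrt(2)"), (math.log(2), "log(2)")])
values = [v for v, _ in forward]

# Backward table: expression templates with their inverse transforms;
# matching inv(x) against a constant c identifies x = template(c).
inverse_transforms = [("x", lambda x: x),
                      ("2*x", lambda x: x / 2),
                      ("x**2", lambda x: math.sqrt(x)),
                      ("exp(x)", lambda x: math.log(x))]

def identify(x, tol=1e-12):
    """Return (template, constant) candidates with template(constant) ~ x."""
    hits = []
    for template, inv in inverse_transforms:
        try:
            target = inv(x)          # pull x back through the template
        except ValueError:           # e.g. log or sqrt of a negative number
            continue
        i = bisect.bisect_left(values, target - tol)
        if i < len(values) and abs(values[i] - target) <= tol:
            hits.append((template, forward[i][1]))
    return hits

print(identify(2 * math.pi))   # candidate: ('2*x', 'pi')
```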