
# Gradient Descent & Convex Optimization

2021-08-02

Tags: #MachineLearning #ConvexOptimization #Math

Here (and in the quoted passage below), the loss function of this particular linear regression setup is guaranteed to be a convex function. But what about other cases: is the linear regression loss always convex, and can a linear regression problem get stuck in a local optimum? (See the short check and sketch after the quotation.)

> Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.

> *(Figure omitted in this copy: a contour plot of a quadratic function with the steps of gradient descent overlaid.)*
>
> The ellipses shown above are the contours of a quadratic function.[^1]
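
For ordinary least squares, the question above has a clean answer. A short check, using the standard notation of the course ($J$ is the cost, $X$ the design matrix with $m$ rows, $y$ the targets, $\theta$ the parameters; none of these are defined in the post itself):

$$
J(\theta) = \frac{1}{2m}\,\lVert X\theta - y\rVert^2,
\qquad
\nabla J(\theta) = \frac{1}{m}\,X^\top (X\theta - y),
\qquad
\nabla^2 J(\theta) = \frac{1}{m}\,X^\top X \succeq 0.
$$

The Hessian $\frac{1}{m}X^\top X$ is positive semidefinite for *any* data matrix $X$, so the least-squares cost is convex no matter what the data look like; if $X$ has full column rank it is strictly convex and the minimizer is unique. In other words, linear regression with the squared-error loss never has spurious local minima. Convexity can be lost once the model or the loss changes, e.g. with a nonlinear model such as a neural network.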
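To make the convergence claim concrete, here is a minimal sketch (not from the original post): batch gradient descent on a least-squares cost, compared against the closed-form normal-equation solution. The synthetic data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def cost(theta, X, y):
    """Least-squares cost J(theta) = ||X @ theta - y||^2 / (2m)."""
    r = X @ theta - y
    return (r @ r) / (2 * len(y))

def gradient(theta, X, y):
    """Gradient of J: (1/m) * X^T (X @ theta - y)."""
    return X.T @ (X @ theta - y) / len(y)

# Made-up synthetic data: y ≈ 3 + 2x plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
X = np.column_stack([np.ones_like(x), x])   # bias column + feature
y = 3.0 + 2.0 * x + rng.normal(0.0, 0.1, 100)

# Batch gradient descent from the origin.
theta = np.zeros(2)
alpha = 0.5
for _ in range(2000):
    theta -= alpha * gradient(theta, X, y)

print("gradient descent: ", theta)
print("normal equations: ", np.linalg.lstsq(X, y, rcond=None)[0])
print("final cost:       ", cost(theta, X, y))
```

With a sufficiently small learning rate (here $\alpha = 0.5$ suffices, since the Hessian's largest eigenvalue is about 1 for this data), the iterates approach the same $\theta$ that `np.linalg.lstsq` returns, which is exactly what the quoted passage predicts for a convex quadratic.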


[^1]: https://www.coursera.org/learn/machine-learning/supplement/U90DX/gradient-descent-for-linear-regression