This algorithm performs gradient descent by empirically measuring the gradient: it probes a small distance out along each dimension to estimate the slope. For efficiency, it measures the gradient along only one dimension per iteration (cycling through the dimensions round-robin) and uses the remembered gradient components for the other dimensions.
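The idea above can be sketched in a small standalone class. This is an illustrative reimplementation under assumed details (forward-difference probing, a fixed probe step, a plain function-pointer critic), not the GClasses source:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal sketch of empirical (finite-difference) gradient descent with
// round-robin dimension probing. Names and details are illustrative.
class EmpiricalGradientDescent {
public:
    EmpiricalGradientDescent(double (*critic)(const std::vector<double>&),
                             std::vector<double> start,
                             double learningRate = 0.1,
                             double probeStep = 1e-4)
        : m_critic(critic), m_x(std::move(start)),
          m_gradient(m_x.size(), 0.0),
          m_learningRate(learningRate), m_probeStep(probeStep), m_dim(0) {}

    // Measures the gradient along one dimension (round-robin), reuses the
    // remembered components for the others, then takes a descent step.
    double iterate() {
        double base = m_critic(m_x);
        m_x[m_dim] += m_probeStep;        // feel a small distance out
        double probed = m_critic(m_x);
        m_x[m_dim] -= m_probeStep;        // restore the probed coordinate
        m_gradient[m_dim] = (probed - base) / m_probeStep;
        m_dim = (m_dim + 1) % m_x.size(); // next dimension next iteration
        for (std::size_t i = 0; i < m_x.size(); ++i)
            m_x[i] -= m_learningRate * m_gradient[i];
        return m_critic(m_x);             // current error
    }

    const std::vector<double>& currentVector() const { return m_x; }

private:
    double (*m_critic)(const std::vector<double>&);
    std::vector<double> m_x;
    std::vector<double> m_gradient;
    double m_learningRate, m_probeStep;
    std::size_t m_dim;
};
```

Because only one gradient component is remeasured per call, each `iterate()` costs two evaluations of the critic rather than one per dimension; the trade-off is that the other components may be slightly stale.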
Public member functions:

    GEmpiricalGradientDescent(GTargetFunction *pCritic, GRand *pRand)

    virtual ~GEmpiricalGradientDescent()

    virtual const GVec &currentVector()
        Returns the best vector yet found.

    virtual double iterate()
        Performs a little more optimization. (Call this in a loop until
        acceptable results are found.)

    void setLearningRate(double d)
        Sets the learning rate.

    void setMomentum(double d)
        Sets the momentum value.
Public member functions inherited from GOptimizer:

    GOptimizer(GTargetFunction *pCritic)

    virtual ~GOptimizer()

    void basicTest(double minAccuracy, double warnRange = 0.001)
        This is a helper method used by the unit tests of several model
        learners.

    double searchUntil(size_t nBurnInIterations, size_t nIterations, double dImprovement)
        First calls iterate() nBurnInIterations times, then repeatedly calls
        iterate() in blocks of nIterations. If the error heuristic has not
        improved by the ratio dImprovement after a block of iterations, it
        stops. (For example, if the error before a block is 50 and the error
        after is 49, training stops if dImprovement > 0.02.) If the error
        heuristic is not stable, nIterations should be large.
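The stopping rule of searchUntil can be sketched as a standalone function. This is an illustrative reimplementation of the behavior described above, assuming `iterate` is any callable that returns the current error heuristic; it is not the GClasses source:

```cpp
#include <cstddef>

// Sketch of the searchUntil stopping rule: burn in, then run blocks of
// iterations until the relative improvement over a block falls below
// dImprovement (e.g. 50 -> 49 is an improvement ratio of 0.02).
template <typename IterateFn>
double searchUntil(IterateFn iterate, std::size_t nBurnInIterations,
                   std::size_t nIterations, double dImprovement) {
    double err = 0.0;
    for (std::size_t i = 0; i < nBurnInIterations; ++i)
        err = iterate();                  // burn-in phase
    while (true) {
        double before = err;
        for (std::size_t i = 0; i < nIterations; ++i)
            err = iterate();              // one block of iterations
        if (before - err < dImprovement * before)
            return err;                   // improvement ratio too small: stop
    }
}
```

Note that if the error heuristic fluctuates from call to call, a single small block can trigger a premature stop, which is why the description recommends a large nIterations when the heuristic is not stable.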