This is essentially a multi-dimensional version of binary search. Each iteration performs a binary (divide-and-conquer) search within the unit hypercube. (Your target function should scale the candidate vectors as necessary to cover the desired space.) It greedily probes the best choices first, but then tries the opposite choices at the higher divisions so that it can also handle non-monotonic target functions. Because the high-level divisions are typically less correlated with the quality of the final result than the low-level divisions, it searches the space of possible "probes" by toggling choices in order from high level to low level. In low-dimensional spaces, this algorithm tends to find good solutions quickly, especially if the target function is somewhat smooth. In high-dimensional spaces, the number of iterations needed to find a good solution appears to grow exponentially.
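As a rough usage sketch (not taken from the library's examples): the critic is assumed to be constructed elsewhere as a GTargetFunction subclass, the header paths and the meaning of iterate()'s return value are assumptions, and the sample count, stab depth, and iteration budget are arbitrary choices; only members listed on this page are used.

#include <GClasses/GOptimizer.h>
#include <GClasses/GVec.h>
#include <cstddef>
#include <iostream>

using namespace GClasses;

// Drives a probing search against a caller-supplied critic and returns
// the value reported by the last call to iterate().
double probeSearchExample(GTargetFunction* pCritic)
{
    GProbeSearch search(pCritic);
    search.setSampleCount(64); // vectors sampled on each side of a binary split
    search.setStabDepth(25);   // 20 to 30 divisions is usually sufficient

    double err = 0.0;
    for(size_t i = 0; i < 1000; i++)
        err = search.iterate(); // each call does a little more work;
                                // assumed here to report the current error heuristic
    // Alternatively, the inherited helper could drive the loop:
    // search.searchUntil(100, 50, 0.02);

    // currentVector() returns the best vector found so far; a real caller
    // would copy or inspect it here.
    const GVec& best = search.currentVector();
    (void)best; // unused in this sketch

    std::cout << "stabs completed: " << search.stabCount()
              << ", last reported error: " << err << "\n";
    return err;
}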
GProbeSearch (GTargetFunction *pCritic)

virtual ~GProbeSearch ()

virtual const GVec & currentVector ()
    Returns the best vector yet found.

virtual double iterate ()
    Do a little bit more work toward finding a good vector.

void setSampleCount (size_t n)
    Specify the number of vectors to use to sample each side of a binary-split division.

void setStabDepth (size_t n)
    Specify the number of times to divide the space before satisfactory accuracy is obtained. Larger values will result in more computation, but will find more precise values. For most problems, 20 to 30 should be sufficient.

size_t stabCount ()
    Returns the total number of completed stabs.

Inherited from GOptimizer:

GOptimizer (GTargetFunction *pCritic)

virtual ~GOptimizer ()

void basicTest (double minAccuracy, double warnRange=0.001)
    This is a helper method used by the unit tests of several model learners.

double searchUntil (size_t nBurnInIterations, size_t nIterations, double dImprovement)
    This will first call iterate() nBurnInIterations times, then it will repeatedly call iterate() in blocks of nIterations calls. If the error heuristic has not improved by the specified ratio after a block of iterations, it will stop. (For example, if the error before the block of iterations was 50, and the error after is 49, then training will stop if dImprovement is > 0.02.) If the error heuristic is not stable, then the value of nIterations should be large. A stand-alone sketch of this stopping rule follows this member list.
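To make the stopping rule described for searchUntil() concrete, here is a small stand-alone sketch. It is not the library's implementation: the decaying error sequence is a toy stand-in for iterate(), and only the block-wise criterion from the description above is reproduced (burn in, then run blocks of iterations and stop once the relative improvement over a block falls below dImprovement).

#include <cstddef>
#include <cmath>
#include <iostream>

int main()
{
    const size_t nBurnInIterations = 100;
    const size_t nIterations = 50;
    const double dImprovement = 0.02;

    // Toy stand-in for the optimizer: the error decays toward an asymptote,
    // so the per-block improvement ratio shrinks over time.
    size_t t = 0;
    double error = 100.0;
    auto iterateOnce = [&]() { ++t; error = 50.0 + 50.0 * std::pow(0.999, (double)t); };

    for(size_t i = 0; i < nBurnInIterations; i++) // burn-in phase
        iterateOnce();

    while(true)
    {
        double before = error;
        for(size_t i = 0; i < nIterations; i++)   // one block of iterations
            iterateOnce();
        double ratio = (before - error) / before; // e.g. 50 -> 49 yields 0.02
        if(ratio < dImprovement)                  // improvement too small: stop
            break;
    }
    std::cout << "stopped at error " << error << "\n";
    return 0;
}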