The methods used by Next Level Solutions for rapid analysis
are based on a genetic algorithm (GA), in combination with two other
general discrete search algorithms: hill climbing and grid search.
First, a search space of all biologically
plausible solutions is defined. Linear and non-linear models can
be used, with an arbitrary number of compartments. Any structural
model that can be coded in NONMEM can be included in the search
space. In addition to structural models, any covariate relationship,
components of inter-individual variability (including off-diagonal
elements of the OMEGA matrix), residual variability and different initial
estimates can be
searched. Next, decisions must be made regarding the purpose of
the analysis, and appropriate penalties applied for
parsimony. The Akaike information criterion (AIC) can be used to
determine penalties for additional parameters, or the log-likelihood
ratio test for hierarchical models.
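As a minimal sketch of how such a parsimony penalty works (the numbers and helper function below are illustrative, not part of the actual software), the AIC adds 2 points per estimated parameter to the -2 log-likelihood, so an added parameter must improve -2LL by more than 2 points to be worthwhile:

```python
# Hedged sketch (illustrative values only): comparing candidate models by AIC.
# AIC = -2*log-likelihood + 2*k, where k is the number of estimated parameters;
# the model with the lower AIC is preferred.

def aic(minus_two_ll, n_params):
    """Akaike information criterion from a NONMEM-style -2LL value."""
    return minus_two_ll + 2 * n_params

# A 2-compartment model with 2 extra parameters must improve -2LL by more
# than 4 points to win on AIC; here it improves by 5 and wins narrowly.
one_cmt = aic(minus_two_ll=1000.0, n_params=5)   # 1010.0
two_cmt = aic(minus_two_ll=995.0, n_params=7)    # 1009.0
best = min((one_cmt, "1-compartment"), (two_cmt, "2-compartment"))
print(best)  # (1009.0, '2-compartment')
```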
Machine learning methods (a hybrid of GA, hill climbing and grid search)
are then used to search the specified solution space for the optimal model. The process is initiated with
a GA search for 5 generations. At this point, hill climbing and grid search are alternated. A number of the "best"
models are selected (typically 4), and a hill-climbing search is done on each. In hill climbing, each
feature of the model is systematically changed, one at a time, and the resulting model is evaluated. For example, one candidate would change 1
compartment to 2; another, no lag time to a lag time; a third, no effect of weight on volume
to an effect of weight on volume; a fourth, no inter-individual variability on clearance to
inter-individual variability on clearance; a fifth, the second set of initial estimates to the third set; and so on.
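The one-feature-at-a-time candidate generation can be sketched as follows (the feature names and option lists are hypothetical stand-ins for the actual search-space encoding; in practice each candidate would be scored by running it in NONMEM):

```python
# Hedged sketch: generate hill-climbing candidates by changing exactly one
# model feature at a time. Feature names and options are illustrative.

BASE = {"compartments": 1, "lag_time": False, "wt_on_v": False,
        "iiv_cl": False, "init_set": 2}
OPTIONS = {"compartments": [1, 2], "lag_time": [False, True],
           "wt_on_v": [False, True], "iiv_cl": [False, True],
           "init_set": [1, 2, 3]}

def neighbors(model):
    """Yield every model that differs from `model` in exactly one feature."""
    for feature, choices in OPTIONS.items():
        for value in choices:
            if value != model[feature]:
                yield dict(model, **{feature: value})

candidates = list(neighbors(BASE))
print(len(candidates))  # one candidate per alternative value of each feature
```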
After a single hill climbing step is done (in which each of the models with a single change is run), all models that
are better than the original are identified, and a grid search of those better features (i.e., all possible combinations
of the effects that improved the model, up to 6 effects) is undertaken. The single best model derived from each of the ~4 initial models
is identified, and the hill-climbing search is run once again, generating models with each feature changed, followed by another grid search.
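Enumerating that grid is a standard combinations problem; a minimal sketch (with hypothetical feature names) shows how many candidate models the grid step produces from a set of improving changes:

```python
# Hedged sketch: after a hill-climbing step, enumerate all combinations of
# the single-feature changes that improved the fit, up to 6 at a time.
# Feature names are illustrative.
from itertools import combinations

improved = ["two_compartments", "lag_time", "wt_on_volume", "iiv_on_cl"]

grid = [set(combo)
        for r in range(1, min(len(improved), 6) + 1)
        for combo in combinations(improved, r)]
print(len(grid))  # 4 singles + 6 pairs + 4 triples + 1 quadruple = 15
```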
This process is repeated until no further improvement is seen, at which point the GA is once again run
for 5 generations. The entire cycle repeats until no further improvement is found. Even with this very
computationally intensive method (typically ~10,000 models are evaluated), large-scale parallel computing means that
a typical analysis of linear models can be completed in a few days, rather than the weeks usually required.
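The overall alternation described above can be sketched with a toy objective (a bit-string "model" and a made-up fitness, so the control flow runs in seconds; the real system scores each candidate by fitting it in NONMEM, and the GA and hill-climbing operators here are minimal stand-ins, not the production implementation):

```python
# Hedged sketch of the GA / hill-climbing alternation on a toy problem.
import random

N_BITS = 8

def fitness(m):
    """Toy objective: lower is better; optimum is all-ones (fitness 0)."""
    return N_BITS - sum(m)

def ga_step(pop):
    """Stand-in GA generation: mutate each model, keep the better half."""
    kids = [[b ^ (random.random() < 0.1) for b in m] for m in pop]
    return sorted(pop + kids, key=fitness)[:len(pop)]

def hill_climb(m):
    """Flip one feature at a time; keep any change that improves the fit."""
    improved = True
    while improved:
        improved = False
        for i in range(N_BITS):
            cand = m[:i] + [1 - m[i]] + m[i + 1:]
            if fitness(cand) < fitness(m):
                m, improved = cand, True
    return m

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(4)]
best = float("inf")
while True:
    for _ in range(5):                      # GA for 5 generations
        pop = ga_step(pop)
    pop = [hill_climb(m) for m in pop[:4]]  # refine the ~4 best models
    score = min(fitness(m) for m in pop)
    if score >= best:                       # stop when no further improvement
        break
    best = score
print(best)  # prints 0: hill climbing reaches the optimum on this toy objective
```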