In this article we present a GA with a distributed hierarchical architecture that confronts the problems arising when developing ANNs by simultaneously optimizing the activation function of each processing element in addition to the connection weights adjusted by traditional training processes. Previous experiments have shown that simultaneous optimization of these parameters with a single-population GA is not feasible: the different convergence speeds of the different aspects of ANN development cause the process to fall into local minima from which it cannot escape.
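The multi-population idea can be illustrated as a cooperative coevolution between two separate populations evolved at different rates: one for the fast-converging connection weights and one for the slow-converging per-neuron activation choices. The following is a minimal sketch under assumed parameters (the tiny XOR network, the population sizes, and the 10:1 rate ratio are all illustrative choices, not the architecture actually proposed in this paper):

```python
import math
import random

# Illustrative sketch only: two separate GA populations, one for connection
# weights and one for per-neuron activation functions, evolved at different
# speeds so the slow component does not drag the fast one into local minima.
random.seed(0)

ACTIVATIONS = [
    math.tanh,
    lambda x: max(0.0, x),                                         # ReLU
    lambda x: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x)))),   # clamped sigmoid
]

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR toy task
N_HIDDEN = 3
N_WEIGHTS = 3 * N_HIDDEN + N_HIDDEN + 1  # (2 weights + bias) per hidden unit, then output layer

def forward(weights, acts, x):
    """One hidden layer; each hidden neuron uses its own evolved activation."""
    hidden = []
    for h in range(N_HIDDEN):
        s = weights[3 * h] * x[0] + weights[3 * h + 1] * x[1] + weights[3 * h + 2]
        hidden.append(ACTIVATIONS[acts[h]](s))
    off = 3 * N_HIDDEN
    return sum(w * v for w, v in zip(weights[off:off + N_HIDDEN], hidden)) + weights[-1]

def error(weights, acts):
    return sum((forward(weights, acts, x) - y) ** 2 for x, y in DATA)

def evolve(pop, fitness, mutate, keep):
    """Elitist step: keep the best individuals, refill with mutated survivors."""
    pop = sorted(pop, key=fitness)
    survivors = pop[:keep]
    children = [mutate(random.choice(survivors)) for _ in range(len(pop) - keep)]
    return survivors + children

def mutate_weights(w):
    return [wi + random.gauss(0.0, 0.3) for wi in w]

def mutate_acts(a):
    a = list(a)
    a[random.randrange(len(a))] = random.randrange(len(ACTIVATIONS))
    return a

weight_pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(30)]
act_pop = [[random.randrange(len(ACTIVATIONS)) for _ in range(N_HIDDEN)] for _ in range(10)]

best_acts = act_pop[0]
init_err = min(error(w, best_acts) for w in weight_pop)

for gen in range(200):
    # Fast population: weights evolve every generation against the current
    # best activation assignment.
    weight_pop = evolve(weight_pop, lambda w: error(w, best_acts), mutate_weights, keep=5)
    # Slow population: activation choices evolve only every 10th generation,
    # decoupling the two convergence speeds.
    if gen % 10 == 9:
        best_w = min(weight_pop, key=lambda w: error(w, best_acts))
        act_pop = evolve(act_pop, lambda a: error(best_w, a), mutate_acts, keep=3)
        best_acts = act_pop[0]

best_w = min(weight_pop, key=lambda w: error(w, best_acts))
final_err = error(best_w, best_acts)
print(f"error: {init_err:.3f} -> {final_err:.3f}")
```

Because both `evolve` steps are elitist (the previous best individual always survives), the joint error is non-increasing across generations; the key design point is only the rate separation between the two populations, which stands in for the hierarchical decoupling motivated above.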