Optimizing network learning

New tune, News / Monday, December 11th, 2017

Optimizing the learning process is key. To increase the effective size of the tunes database, I transposed all available tunes of a given type (i.e. reels, jigs, hornpipes) into a single scale, taking modes into account (Dorian, major, and so on). The scale of D was chosen arbitrarily. The very first attempt at training on the reels database with the same parameters I used before produced a rather over-fitted model. However, the percentage of listenable tunes has definitely increased. Here's one example (transposed back to Edor, though):
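The mode-aware transposition described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual code: the idea is that each mode is reduced to its relative major, and the tune is then shifted so its note set matches the D-major scale (so, e.g., Edor needs no shift at all). All names here are hypothetical.

```python
# Hypothetical sketch of mode-aware transposition into the D-major note set.

# Semitones from a mode's tonic down to the tonic of its relative major
# (e.g. Dorian sits a whole tone above its relative major: Edor ~ Dmaj).
MODE_TO_MAJOR = {
    "maj": 0, "ion": 0,   # major / Ionian
    "dor": 2,             # Dorian
    "phr": 4,             # Phrygian
    "lyd": 5,             # Lydian
    "mix": 7,             # Mixolydian
    "min": 9, "aeo": 9,   # minor / Aeolian
    "loc": 11,            # Locrian
}

# Pitch classes, 0 = C.
NOTES = {
    "C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
    "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
    "A#": 10, "Bb": 10, "B": 11,
}

def transpose_shift(tonic: str, mode: str) -> int:
    """Semitones to shift a tune so its notes fall in the D-major scale.

    Returns the smallest shift (between -6 and +6 semitones).
    """
    rel_major = (NOTES[tonic] - MODE_TO_MAJOR[mode]) % 12
    shift = (NOTES["D"] - rel_major) % 12
    return shift if shift <= 6 else shift - 12

# Examples:
# Edor shares the D-major note set, so no shift is needed.
# Gmaj and Ador both move down a fourth (5 semitones) to Dmaj / Edor.
print(transpose_shift("E", "dor"))  # 0
print(transpose_shift("G", "maj"))  # -5
print(transpose_shift("A", "dor"))  # -5
```

With this normalization, every tune of a given type is written over the same twelve pitch classes, so the network sees one shared "vocabulary" of notes instead of one per key.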

ABC and MIDI versions are posted here:

Currently, network training for one set of parameters takes around 30 hours. The next step will be an attempt to avoid over-fitting the model.