VB Neural Net: Tech Details


This page lists resolved bugs, known technical issues, and experiments with the VB neural net code:

  •     Batch mode needs a bit of correcting. 
  •     VB Backprop update: QuickProp weight updating is on its way...
  •     VB Backprop may not be fully localized for national versions of Excel.
  •     Batch mode (epoch learning)

Unlike what is generally prescribed in the neural network literature, we apply the average error over the epoch, rather than the sum, at the end of the training-set scan.  VB programmers are welcome to experiment with this and discuss it with us.  Very pragmatically, it just seems to work better (see the sketch below).
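
As a minimal sketch of that idea, with hypothetical names (nPatterns, nWeights, Gradient, Weights, LearnRate, ForwardPass and BackwardPass are illustrative, not the actual VB Backprop identifiers), the per-pattern gradients are accumulated over the whole epoch and divided by the pattern count before the weights are updated:

    ' Batch (epoch) learning sketch: average the accumulated gradient
    Sub TrainOneEpoch()
        Dim p As Long, w As Long
        Dim SumGrad() As Double
        ReDim SumGrad(1 To nWeights)
        For p = 1 To nPatterns
            ForwardPass p                     ' compute node outputs for pattern p
            BackwardPass p                    ' fill Gradient() for pattern p
            For w = 1 To nWeights
                SumGrad(w) = SumGrad(w) + Gradient(w)
            Next w
        Next p
        For w = 1 To nWeights                 ' average, not sum, over the epoch
            Weights(w) = Weights(w) - LearnRate * SumGrad(w) / nPatterns
        Next w
    End Sub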

  •     Selecting the right activation function

The tanh activation is, surprisingly (or not), slower to compute than the plain logistic sigmoid and makes the neural net converge very slowly.  Three tanh implementations are included in the code.  Performance/accuracy tests have not been completed.

The Inverse Abs function is quicker to compute, but has a very sharp derivative around zero.  It behaves like the logistic or tanh with a larger gain setting, and will therefore be very reactive to patterns in the data set.

Tanh also tends to saturate with large learning rates, and is apparently more sensitive to initial weight settings.

Known issue with tanh: tanh is a sigmoid and is very similar to the logistic function.  The main difference is that tanh outputs range over [-1, 1], whereas logistic outputs range over [0, 1].  It is often said that when predicting a target with a zero mean, tanh is likely to be more suitable.  Our experiments suggest the contrary:
Tanh saturates faster.
Tanh is slower to converge.
Tanh seems to behave better when data is mapped to the [0, 1] range, like the other sigmoid.

We attribute that to sensitivity to initial weights.  Keep them very low, even down to zero, and train your net again.
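
For reference, here is a sketch of the three activations discussed above (hypothetical helper functions, not the exact code shipped with VB Backprop; the x / (1 + |x|) form of Inverse Abs is our assumption, and since VBA has no built-in Tanh it is expressed via Exp):

    Function Logistic(ByVal x As Double) As Double
        Logistic = 1# / (1# + Exp(-x))                     ' output in [0, 1]
    End Function

    Function TanhAct(ByVal x As Double) As Double
        TanhAct = (Exp(2# * x) - 1#) / (Exp(2# * x) + 1#)  ' output in [-1, 1]
    End Function

    Function InvAbs(ByVal x As Double) As Double
        InvAbs = x / (1# + Abs(x))                         ' cheap; steep slope near 0
    End Function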

  •     VB Backprop only allows for one output node, which restricts the neural net to single-value predictions. You can still build classification nets by associating classes with different target values, but this option cannot separate classes as well as one output node per class.

The VB code indexes output nodes even though only one node is used in the current version.  This allows looping over the output layer without any major changes to the existing code, so adding output nodes should be no problem for VB programmers.
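
Illustratively (all names hypothetical), the output-layer error computation can then be written as a loop that simply runs once in the current single-output version:

    For o = 1 To nOutputs                         ' nOutputs = 1 in this version
        OutErr(o) = Target(o) - OutVal(o)         ' per-node error
        OutDelta(o) = OutErr(o) * ActDeriv(OutNet(o))
    Next o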

  •     Multiple Learning Rates

The data structures allow very easy modification with regard to the learning rate. This version only uses one learning rate at network level. One can easily move it to layer level, or even down to node level, in order to implement adaptive learning techniques like Delta-Bar-Delta, Silva-Almeida, SuperSAB, etc.

We might implement some of these in a future version, but we can also assist you if you wish to program them yourself from our source code.
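
As a hedged sketch of the idea (all names hypothetical): replace the single network-level LearnRate with a per-weight Rate() array and apply a simplified Silva-Almeida-style sign rule, growing the rate while the gradient keeps its sign and shrinking it when the sign flips:

    If Gradient(w) * PrevGradient(w) > 0 Then
        Rate(w) = Rate(w) * UpFactor              ' e.g. 1.2: same sign, speed up
    Else
        Rate(w) = Rate(w) * DownFactor            ' e.g. 0.5: sign flip, slow down
    End If
    Weights(w) = Weights(w) - Rate(w) * Gradient(w)
    PrevGradient(w) = Gradient(w)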

  •     Neural Net Info

The best-trained neural net is saved in a separate worksheet called "Net Info".  It is possible to generate VB code from it for other applications, but we have instead decided to leave it within Excel.  VB Backprop reads the "Net Info" sheet and recreates the neural net accordingly.  The data min-max array (used in data scaling) is saved in the MinMax sheet.

You can of course choose to prune the net, apply a weight threshold, or use any other technique to enhance the generalization or speed of your neural net.  Attention should, however, be given to the general data layout: the program reads the layer info and then reads all neuron weights in sequence, so unpredictable results may occur if you, for instance, delete one line without changing the layer's node count.
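
To illustrate why the layout matters, here is a hypothetical sketch of such a sequential reader (the sheet layout and all names below are assumptions for illustration, not the actual "Net Info" format):

    Sub LoadNetInfo()
        Dim ws As Worksheet
        Dim r As Long, l As Long, n As Long, w As Long
        Set ws = ThisWorkbook.Worksheets("Net Info")
        r = 1
        For l = 1 To nLayers                    ' layer node counts first
            NodeCount(l) = ws.Cells(r, 1).Value
            r = r + 1
        Next l
        For l = 2 To nLayers                    ' then every neuron weight, strictly in order
            For n = 1 To NodeCount(l)
                For w = 0 To NodeCount(l - 1)   ' index 0 = bias weight
                    Weight(l, n, w) = ws.Cells(r, 1).Value
                    r = r + 1
                Next w
            Next n
        Next l
    End Sub

Deleting one weight row without updating the matching node count shifts every subsequent read, which is exactly the kind of unpredictable result mentioned above.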

  •     The Noise issue

Since the purpose is to show the correctness of the algorithms, the provided sample is a fairly simple, noise-free quadratic function.  In financial predictions, it is obviously essential to first select your inputs properly.

In addition, one must always keep in mind that the neural net tries to minimize an error function (often called a cost function in the literature). In doing so, it performs a regression-type analysis.  A regression is more sensitive to movements in your data than to absolute values; consequently, you may want to aim for a good correlation of fluctuations rather than an exact prediction.

In an example on daily AOL data, we used 5 simple inputs, namely:
Close-Open, High-Low, Open-PreviousClose, Close-PreviousClose, and EMA(Close,3), and one output, NextClose-Close.

These inputs are probably too highly correlated and could therefore be significantly improved, but the neural net does a fair job of ignoring redundant inputs.  The neural net output is at first sight not exceptional, with an error level still quite high after a long training period.  However, the percentage of correct signs in the prediction is 61%, which is already quite valuable information.
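
For reference, that sign-accuracy figure is simply the share of test patterns where prediction and target have the same sign; a minimal sketch (names hypothetical):

    Function SignHitRate() As Double
        Dim i As Long, hits As Long
        For i = 1 To nTest
            If Sgn(Predicted(i)) = Sgn(Actual(i)) Then hits = hits + 1
        Next i
        SignHitRate = 100# * hits / nTest       ' percentage of correct signs
    End Function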
