Neural nets? What? Why? Huh?
With real estate, my current problem is that I don’t have enough capital to begin investing. The algorithm to solve this, for now, is simple enough: save up money. Since I can only work so many hours a day, I am left with oodles of free time (even after reading and watching TV and the like). Because of this, I’ve started getting back into something that I can do while the “save money” process is going on: foreign exchange.
The random-entry system that I’ve gotten from Van K. Tharp’s books is coming along well, with varying results. It built up a 4% return in a month’s time, but has lost half of that in the last few weeks. So right now I’m at a 2% return; in other words, if I do nothing for the rest of the year I’m keeping up with inflation. This is something I could get quite comfortable with: if I’m going to average 2% per 1.5 months, that gives me a yearly return of 16%. Not quite the jaw-dropping 48% I would have with 4% per month, but still more than double what I’d get from a municipal or provincial bond, and quite respectable.
I’ve known for the last couple of years that neural networks are a kind of artificial-intelligence programming technique used in trading systems. For the past week I’ve been trying to figure them out. Apparently there are more than 20 algorithms that can be used in the neural-network family of AI programming. Talk about biting off more than I can chew.
Okay, let’s get started
While I believe I’ll eventually just use a pre-made library from someone else, I would like to understand how these things work to satisfy my curiosity. So I started writing a program to solve a problem that I do understand, in the field of linear regression.
I started out by looking for programming code that implements neural nets, and found that the documentation on that code was beyond my level. Now, neural nets aren’t that difficult as theoretical constructs, but the devil is in the implementation. I couldn’t figure out what the code was doing.
So I went through the real learning process, the one defined by patience and frustration. I walked my way through the code, and that didn’t help much. Then I went through some university’s online lecture notes, and that didn’t help much. Then I kept breaking down the problem that I would try to solve with my very first neural net (lowering my expectations, essentially). Then I worked through it on paper, and that didn’t help much. But with all of the little bits of help I got from each stage, along with a lot of frustration and hypothetically hitting my head against the wall, I came up with the following program.
My first neural net (well, a neural line actually)
The following program is given a set of coordinates that lie on a line. The program then cycles over the coordinates and figures out what the slope of the line is. This isn’t a neural net; it’s a neural line. There are only 2 nodes, an output one and an input one. I kept scaling back the problems I wanted to solve until I worked my way down to this simplest-of-all one. I figure that the next step is either to build up from this program, add functionality, and create an actual neural net, or to just stop here and get on with my objective of devising a net that will help me figure out a more profitable trading strategy.
I hope that the comments I’ve provided with the code are satisfactory. I’ve had difficulty following the code of other people who have put up their neural net code, and I hope that I’ve done a better job of it. I’ve put in comments that not only explain what the code does, but that also try to explain the motivation behind the steps, along with an overview of what I’m trying to do. I hope that it is helpful.
The code
Just copy the code below and paste it into your favourite text editor (like Notepad or WordPad on Windows).
;;;; This is the simplest net possible (I think). It's not even a net, but a line. I'm just trying to understand how things work.
;; I've worked through this on paper, and now I will program it.
;;;; This program is going to work as a simple linear regression. It will be given a number of points that can be plotted on a
;; simple graph. The points are going to be referred to as 'x' and 'y' (just to keep the terminology in line with what is taught in
;; schools).
;;;; The goal of this program will be to find the slope that best describes the relationship between 'x' and 'y'. At the moment,
;; I'm not programming a constant. If you recall high-school math, a line is described as a function of 'x'. The formula is
;; 'y = a * x + b'. This program will be given the 'x' and the 'y', and will try to figure out what the 'a' is equal to.
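;; For example, if 'a' were 2 and 'x' were 3, then (ignoring the constant 'b') the formula would give y = 2 * 3 = 6; the program's
;; job is to work backwards from a bunch of such (x, y) pairs to the 'a' that produced them.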
;; This variable determines the rate at which the "weight" (which is the same as 'a') is modified to bring it closer to the correct
;; answer. The reason for keeping this number small is that if the weight is changed at too high a rate, the program might just
;; keep bouncing around the correct answer. If the rate is too low, the program will take a long time to get to the correct answer.
;; It really is a trial-and-error number, and I don't know of any formula that will tell us how to get the best rate of learning.
(defparameter rate 1/10)
;; This parameter is the same as 'a' in the formula of a line. This concept in neural net terminology is called the weight, and
;; I'll be using that terminology. Just remember that 'weight' here is another way of saying 'the thing we want to find the right answer for'.
;; We are just going to give it a random starting number, from which the net will try to work its way to the correct answer.
(defparameter weight (random 50))
;; This is the input neuron. It is just set to 0 (zero) now to give it a starting value, but it will cycle through the array of
;; test information that we will provide the program. This is the 'x' in the formula of a line.
(defparameter input-neuron 0)
;; This is the array of test information that is provided to the program. It is a set of correct answers to a select number of
;; inputs. The program will use this to get to the correct answer.
(defparameter test (make-array '(4 2) :initial-contents
                               '((1 5)
                                 (2 10)
                                 (3 15)
                                 (4 20))))
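;; Note that every pair above satisfies y = 5 * x, so the answer the program should converge to is a weight of 5.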
;; This is the output neuron that will be calculated by the program. The output neuron is the formula that is given in our problem
;; statement. Since 'weight' is 'a' and 'input-neuron' is 'x', the output neuron is simply the calculation 'a * x'.
(defun output-neuron ()
  (* input-neuron weight))
;; This is just a little function that provides output to let the user know what is going on in the program.
(defun display (output actual)
  (if (= output actual)
      (format t "~&~A should be ~A ***" output actual)
      (format t "~&~A should be ~A" output actual)))
;;; Here comes the heavy lifting of the program.
;; We want the body below to keep running over and over and over again.
(loop
  ;; We want the inner body to keep looping over the numbers zero to 3, so the index 'i' can be used to access the 'test'
  ;; array in sequence.
  (dotimes (i 4)
    ;; Let's begin by reading the x-value in the 'test' array.
    (setf input-neuron (aref test i 0))
    ;; Print out the current weight value.
    (format t "~&Beginning: weight = ~A" weight)
    ;; Make a variable 'out' that is the calculation of the output-neuron given the present input and weight.
    (let ((out (output-neuron)))
      ;; Increment the current weight value by the product of the error (the difference between the output that was calculated and
      ;; the output it should have been) and the rate of learning ('rate').
      ;; This is the part of the neural network that does the actual learning. This is where the magic of neural-net learning
      ;; actually happens. In plain English, the output, given the current weight, is checked against what the output should have
      ;; been (the correct output), and the weight is then modified ever so slightly so that it will (in the next iteration) provide
      ;; an output that is closer to what it should be.
      (incf weight (* rate
                      (- (aref test i 1) out)))
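      ;; A worked example (assuming, just for illustration, that the weight happened to start at 0; it actually starts at a random
      ;; value): with input x = 1 and correct answer 5, 'out' would be 0, the error would be 5 - 0 = 5, and the new weight would be
      ;; 0 + 1/10 * 5 = 1/2, a small step towards the correct weight of 5.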
      ;; And here we show what the output was found to be and what it should have been.
      (display out (aref test i 1)))
    ;; And here we display the new weight, given the slight modification that has been done.
    (format t "~&New weight is ~A" weight)))
; This line was originally placed to have the program take breaks of one-second duration to slow down the output for human consumption.
; Without it, the program gets to straight-line answers in a matter of 2 seconds or so, if that. It goes so fast that I've never
; bothered to check how fast it is.
; (sleep 1)))
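If you want to run it, one way (assuming you have a Common Lisp implementation installed; the code doesn’t depend on any particular one, I’ve just seen it work in SBCL) is to save the code into a file, say neural-line.lisp, start your Lisp, and evaluate (load "neural-line.lisp"). Keep in mind that the outer loop runs forever, so once the weight has settled on the right answer you’ll have to interrupt the program yourself (for example, Ctrl+C at the terminal).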
Thanks for reading,
Ravi Desai