Elements of Artificial Neural Networks

(http://mitpress.mit.edu/book-home.tcl?isbn=0262133288)

Published by MIT Press, 1997

Check it out, you'll like it; it's introductory, covers many recent topics, presents algorithms using pseudo-code (Pascal/C style), and emphasizes general principles.

Transparency masters are ready (19 July 1997); they include abbreviated text and figures for the entire book, in a large bold font, ready for classroom use. I'll be glad to send these (and Answers to Exercises) to instructors using the book as a textbook in a course; just let me know your (physical) mailing address and the institution where the course is taught.

The rest of this page contains errata, information about useful web sites, and program and data files directly related to the book.

p.42, Exercise 7a., first line, replace ``Present a 2-class classification example to illustrate that minimizing mean squared error'' with: ``For a two-class classification problem, discuss whether minimizing the Euclidean distance measure (p.36), averaged over all input data'' -- there is a corresponding error in the solutions manual mailed to instructors before Sept. '97.

p.50, section 2.3.1, the remark ``Termination is assured if ...'' should be taken to mean ``Termination in a reasonably short period of time is assured if ...''; the proof in p.53 holds even if the learning rate is large.
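To illustrate the clarified remark: with zero initial weights, the perceptron learning rule terminates on linearly separable data for any positive learning rate (a larger rate merely rescales the weight vector). The sketch below is my own illustration, not code from the book; the function and variable names are invented.

```python
import numpy as np

def train_perceptron(X, y, eta, max_epochs=10000):
    """Perceptron learning rule; entries of y are -1 or +1.
    Returns the weight vector once every sample is classified
    correctly, or None if max_epochs is exhausted."""
    X = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in zip(X, y):
            if t * (w @ x) <= 0:      # sample misclassified (or on boundary)
                w = w + eta * t * x   # perceptron weight update
                mistakes += 1
        if mistakes == 0:
            return w                  # termination: all samples correct
    return None

# Linearly separable data (logical AND); termination occurs for a
# small learning rate and a large one alike.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
print(train_perceptron(X, y, eta=0.01) is not None)   # True
print(train_perceptron(X, y, eta=100.0) is not None)  # True
```
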

p.58, line 9, replace (

p.58, equation in the last line, omit

p.58, fourth and fifth lines from bottom (equation lines), insert 2 before first left parenthesis.

p.59, lines 6, 8 and 10, clarity would be enhanced if we omit the subscript `

p.84, line 10, change `a value closer to 1' to `a large value'.

p.96, ninth line, fifth word, replace ``are'' by ``is''; tenth line from bottom, replace ``equating'' by ``setting''.

p.97, first line, replace ``derivative'' by ``derivatives''; second paragraph, fourth line, replace ``The secant method, which'' by ``A simple method that''; replace the equation after the second paragraph by

(in this equation, `d' is used instead of the partial derivative symbol, `t' is used instead of `tau', and `=~' is used to mean ``approximately equals''). Delete the sentence after next, ``By substitution...''.

p.115, line 12 in Figure 4.5, replace `

p.125, Figure 4.13, to the right of the horizontal line, change the subscript of beta from 2 to 0.

p.131, Figure 4.19, all the connections leading into the output node should be shown using solid lines, in all three figures. Also, in the last figure, the connection between the two hidden nodes should be shown using a solid line. A better figure is the following: corrected Cascade Correlation figure (4.19)

p.138, algorithm in Figure 4.25, in the right hand side of the equation on the third line before the caption, replace `f

p.146: replace each

p.147, Equation 4.21: multiply the right-hand-side by a factor of 1/sigma

p.147, fourth paragraph (beginning ``Various''), the reference in line 4, replace `Alkeson (1993)' by `Atkeson (1991)'. Relevant references for that paper and the one that occurs two lines earlier (missing from the Bibliography):

S. M. Botros and C. G. Atkeson, `Generalization Properties of Radial Basis Functions,' in Advances in Neural Information Processing Systems, Vol. 3, pp. 707-713, Morgan Kaufmann Publishers, Inc., 1991.

A. Saha and J. D. Keeler, `Algorithms for Better Representation and Faster Learning in Radial Basis Function Networks,' in Advances in Neural Information Processing Systems, Vol. 2 (ed. D. Touretzky), pp. 482-489, Morgan Kaufmann Publishers, Inc., 1990.

p.155, ex.3c: an extra pair of parentheses is needed in the right-hand-side: it should read exp(-(d/c)

p.166, end of first line: under the square-root sign, change the subscript of the summation sign to `

p.176, lines 5-6: replace ``. In other words,'' by ``, e.g., ''

p.178, Fig. 5.12, Phase 2, item 2(c), second line: replace v

p.179, equation in sixth line: replace both occurrences of w

p.196, first line (after figure): delete `then'

p.198, last line: replace `Lampinen (1993)' by `Lampinen and Oja (1992)'

p.201, lines 4-5: replace `has a fixed number of neighbors' by `belongs to some hyper-tetrahedron'

p.203, equation in fifth line from the bottom, replace `

p.209, third line of second paragraph: replace `inverse' by `pseudo-inverse'.

p.210, top half of the page: the eigen-values should be one-fifth of the values mentioned (for gamma1, gamma2, gamma3); also, the eigen-vectors should be listed in exactly the opposite order from that in the text, so that the first row of W1 and W2 should be (-0.126 -0.542 -0.162).
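As background for the factor-of-five correction above: averaging an autocorrelation matrix over 5 samples, rather than merely summing, shrinks every eigen-value by a factor of 5 while leaving the eigen-vectors unchanged. A small sketch (the data matrix here is invented for illustration; it is not the book's data):

```python
import numpy as np

# Hypothetical data: 5 samples, 3 input dimensions.
X = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.5, 1.0, 0.0],
              [0.5, 1.5, 2.0]])

R_sum = X.T @ X          # summed autocorrelation matrix
R_avg = R_sum / len(X)   # averaged over the 5 samples

vals_sum = np.linalg.eigvalsh(R_sum)
vals_avg = np.linalg.eigvalsh(R_avg)

# Eigen-values scale by 1/5; eigen-vectors are unaffected by the scaling.
print(np.allclose(vals_avg, vals_sum / 5))  # True
```
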

p.225, Example 6.3, the weight matrix Omega should have been calculated in a different manner, using equation 6.11 (bottom of p.223), instead of as shown.

p.250, end of first paragraph, last sentence should be shortened, to read `The state change is accepted with probability 1/(1+exp(Delta E / tau)).' [where Delta and tau refer to the greek letters].

p.251, replace expression `min(...)' in lines 12 and 27 by `1/(1+exp(Delta E / tau))', [where Delta and tau refer to the greek letters].
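The corrected acceptance probability in the two items above is a one-line function. This sketch is my own (the function name is invented); it adds a guard against floating-point overflow when Delta E / tau is large and positive:

```python
import math

def acceptance_probability(delta_e, tau):
    """Probability 1/(1 + exp(delta_e / tau)) of accepting a state
    change with energy change delta_e at temperature tau > 0."""
    z = delta_e / tau
    if z > 700:          # exp(z) would overflow; probability is ~0
        return 0.0
    return 1.0 / (1.0 + math.exp(z))

# An energy-neutral change is accepted with probability 1/2;
# energy-decreasing changes are accepted with probability > 1/2.
print(acceptance_probability(0.0, 1.0))  # 0.5
```
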

p.263, ex.1b: replace `generalized inverse' by `pseudo-inverse'.

p.282, first line, insert parentheses surrounding the expression following `exp', so that it reads `(exp((

p.284, line 16, change `100011' to `010011'.

[Thanks to Prof. Jim Reggia, Prof. Bob Keller, Jeff Miller, Bing Chen, Ayed Salman and William W. Simons for catching some of these errors.]

The following are some computer programs and data files related to the text. ALL the files should be readable as of 19 May 1997: please send me email (ckmohan@syr.edu) if there's a file you're looking for but are unable to access. All the following files have also been tarred into a single file and compressed/zipped; they can be extracted from bf.tar.gz or bookfiles.tar.Z

[Note: On Aug. 27, 1997, I generalized the genetic algorithm program to many-bin graph partitioning.]