Comments on Data Miners Blog: "Thoughts on Understanding Neural Networks" (blog by Michael J. A. Berry)

This is really cool! I am just a student right now with some experience with neural networks, but I have always thought of them the way you describe at the beginning of your article: as a black box, or as a mathematical formula (really more a collection of mathematical formulas than a black box, but anyway).
I know this post is pretty old, but I got so excited about it that I am posting anyway, even though it may have no scientific value for you. Honestly, I had never thought of any way of visualizing a neural network, or of simply analyzing which hidden unit contributes the most to the output. I am not familiar with all of the current advances in neural network meta-learning, but something like what you describe could help researchers and data scientists explain their findings more clearly. Beyond that, researchers may better understand what is going on in their own models if they analyze them through what you called "clustering" (for lack of a better word :) ).
Thank you so much for this post!
Posted by Anonymous, 2016-04-17

Here is a link to the best visualization of a neural network that I've ever seen.
http://webgl-ann-experiment.appspot.com/
Posted by Anonymous, 2013-07-20

Hi, one way I find useful for understanding neural networks is to look at various applications from a time-series regression perspective.
I picture a linear regression as simply the best fit to a noisy line. For any other scatterplot that is best represented by a non-linear relationship, a neural network can usually find that relationship.
I wrote up a brief tutorial on learning financial time series that you may find of interest:
http://intelligenttradingtech.blogspot.com/
Posted by Anonymous, 2010-02-03

I remember a publication by Le Cun, from the early 90s I think, that described how a neural network doing OCR behaved. If I remember correctly, some hidden-layer neurons were effectively horizontal-line detectors, others vertical-line detectors, and others edge detectors (that last one generated a lot of excitement, because the human eye does edge detection, among many other things!).
However, I also think there is value in understanding overall trends in a neural network (overall sensitivity), much like one sees in many neural net applications such as Clementine or Neuralware Predict.
These are akin to examining coefficients in a regression model: they give overall average trends, but not much else.
Posted by Dean Abbott, 2009-08-18

I would also point out the Neural Networks chapter in Tom Mitchell's "Machine Learning" book. He goes through some cases of ANNs applied to image recognition, and these are quite telling as to what hidden nodes *could* learn.
Regards!
Posted by Max Khesin, 2009-03-24

Gordon,
I spent four years of my life during my doctorate trying to understand how neural networks do what they do. It was only after I started drawing plots of the outputs of the hidden neurons that the penny finally dropped and everything became crystal clear.
What I would recommend is to start off with simple made-up functions such as y = x^2. It quickly becomes apparent why two sigmoidal neurons are required. Then try y = log(x), and then a function such as y = x^2 + log(x).
The ability to 'prune' a network, forcing particular inputs to be processed by selected hidden neurons, is a must; you can then essentially decompose the model into y = f(x1, x2) + f(x3) + f(x4), etc. This gives you a great ability to extract underlying trends from data. For example, my doctorate was on electric load prediction, and this method enables the load to be decomposed into f(temperature) + f(time of day) + f(day of week), and so on.
More recently, I extracted the premium of LPG vehicles over unleaded petrol vehicles over time for cars sold at auction.
The results did reflect the difference between the two fuel costs (and the government subsidies for converting your car) over time.
I wrote a little application in Excel that lets you view the outputs of the hidden neurons as the network is being trained:
http://www.philbrierley.com/code/vba.html
Go to the train tab and select the 'decomposition' graph; it can be informative to watch. The default data is something like y = x^2, but you can paste in any data you want.
I also wrote an application called Tiberius that lets you do a similar thing, but also lets you manually 'prune' the network to force the form of the model.
www.tiberius.biz
You can see some older screenshots of the decomposition at
www.philbrierley.com/tiberius/slideshow.html
Slide 8 of the y = x^2 demo will give you the idea.
Hope my work helps you on your quest to understand neural networks. I would be more than willing to link up on Skype to talk you through what can be achieved.
Regards,
Phil Brierley
Posted by Anonymous, 2009-01-21

I added a few pictures to my blog in case you don't want to run the vb.net executable, etc.
http://timmanns.blogspot.com/2009/01/re-thoughts-on-understanding-neural.html
cheers
Tim
Posted by Tim Manns, 2009-01-19

Nice!
I tried to do something like this a few years ago. I like yours more.
The best idea I could come up with was to use colour and transparency to represent the positive or negative effect and the strengths of the NN weights and hidden nodes.
I went to the effort of creating a vb.net application that loads PMML into data grids and then builds the graphic from the data grids. You might be able to use the data grids for your own graphics.
It should work with a neural net saved as PMML from SAS EM. I tested it a little with Clementine and SAS PMML, but only with simple single-hidden-layer NNs.
See:
http://www.kdkeys.net/forums/thread/6495.aspx
You can simply drag and drop the NN PMML onto the .exe and it will display the graph. I supplied the code and everything; use it as you wish. Completely free!
I'm not sure it's much use, but I did it for fun to learn a bit of VB.NET (I'm certainly no programmer :) ).
Cheers
Tim
Posted by Tim Manns, 2009-01-19
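[Editor's note] Phil Brierley's y = x^2 exercise above can be sketched in a few lines of plain NumPy. This is a minimal illustrative toy, not his Excel or Tiberius implementation: a 1-input network with two sigmoid hidden units and a linear output, trained by full-batch gradient descent, after which each hidden neuron's output can be inspected on its own. All names and hyperparameters (seed, learning rate, iteration count) are arbitrary choices for the sketch.

```python
import numpy as np

# Tiny 1 -> 2 -> 1 network: two sigmoid hidden units, linear output,
# trained on y = x^2 so the two hidden units can be examined separately.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-2.0, 2.0, 41).reshape(-1, 1)   # inputs
y = x ** 2                                       # target function

W1 = rng.normal(0.0, 1.0, (1, 2))  # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(0.0, 1.0, (2, 1))  # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    h = sigmoid(x @ W1 + b1)       # hidden-unit outputs
    return h, h @ W2 + b2          # linear output layer

_, pred0 = forward(x)
initial_mae = float(np.mean(np.abs(pred0 - y)))

lr = 0.1
for _ in range(30000):             # full-batch gradient descent
    h, pred = forward(x)
    err = (pred - y) / len(x)      # gradient of MSE/2 w.r.t. pred
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)   # backprop through the sigmoids
    gW1, gb1 = x.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

hidden, pred = forward(x)
final_mae = float(np.mean(np.abs(pred - y)))
print("initial MAE:", round(initial_mae, 3), "final MAE:", round(final_mae, 3))

# Phil's trick: look at each hidden unit on its own rather than only
# at the combined output.
for j in range(hidden.shape[1]):
    print(f"hidden unit {j}: output ranges "
          f"{hidden[:, j].min():.3f} .. {hidden[:, j].max():.3f}")
```

Plotting each column of `hidden` against `x` (rather than just printing its range) shows the two roughly mirror-image sigmoids whose weighted sum approximates the parabola, which is the "penny drops" picture the comment describes.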