by Joseph Rickert

Few of us have enough time to read, and most of us already have depressingly deep stacks of material that we would like to get through. However, sometimes a random encounter with something interesting is all that it takes to regenerate enthusiasm. Just in case you are not going to get to a book store with a good technical section this weekend, here are a few not-quite-random reads.

Deep Learning by Goodfellow, Bengio and Courville is a solid, self-contained introduction to Deep Learning that begins with Linear Algebra and ends with discussions of research topics such as Autoencoders, Representation Learning, and Boltzmann Machines. The online layout extends an invitation to click anywhere and begin reading. Sampling the chapters, I found the text engaging, much more interesting and lucid than one might expect from an online resource. For some Deep Learning practice with R and H2O, have a look at the post Deep Learning in R by Kutkina and Feuerriegel.

However, if you are under the impression that getting a handle on Deep Learning will get you totally up to speed with neural network buzzwords, you may be disappointed. Extreme Learning Machines, which “aim to break the barriers between the conventional artificial learning techniques and biological learning mechanisms”, are sure to take you even deeper into the abyss. For a succinct introduction to ELMs with an application to handwritten digit classification, have a look at the recent paper by Pang and Yang. For more than an afternoon’s worth of reading, browse through the IEEE Intelligent Systems issue on Extreme Learning Machines here, and the other resources collected here. See the announcement of the 2014 conference for the full context of the quote above.
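The basic ELM idea is simpler than the buzzword suggests: fix a hidden layer with random weights and fit only the output weights by least squares. Here is a minimal sketch in R of that textbook formulation (my own illustration, not code from the cited papers), using the built-in iris data and a small ridge term for numerical stability:

```r
# Minimal Extreme Learning Machine sketch: random hidden layer,
# output weights solved by (ridge-stabilized) least squares.
set.seed(42)
X <- as.matrix(iris[, 1:4])
Y <- model.matrix(~ Species - 1, iris)   # one-hot class targets (150 x 3)

n_hidden <- 50
W <- matrix(rnorm(ncol(X) * n_hidden), ncol = n_hidden)  # random input weights
b <- rnorm(n_hidden)                                     # random biases

# Hidden-layer activations (sigmoid); these are never trained
H <- 1 / (1 + exp(-(X %*% W + matrix(b, nrow(X), n_hidden, byrow = TRUE))))

# Solve for output weights in one shot: beta = (H'H + lambda I)^-1 H'Y
beta <- solve(t(H) %*% H + diag(1e-6, n_hidden), t(H) %*% Y)

pred <- max.col(H %*% beta)              # predicted class index per row
train_acc <- mean(pred == as.integer(iris$Species))
train_acc
```

The appeal, and the source of the speed claims in the ELM literature, is that the only "training" is a single linear solve, in contrast to the iterative gradient descent of conventional neural networks.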

For something a little lighter and closer to home, Christopher Gandrud’s page on the networkD3 package is sure to set you browsing through Sankey Diagrams and Force Directed Drawing Algorithms.

library(networkD3)
data(MisLinks)
data(MisNodes)

# Plot a force-directed network of Les Misérables character co-occurrences
forceNetwork(Links = MisLinks, Nodes = MisNodes,
             Source = "source", Target = "target",
             Value = "value", NodeID = "name",
             Group = "group", opacity = 0.8)