This is the final set of presentations in my series ‘Elements of Neural Networks and Deep Learning’. This set follows the two earlier sets of presentations, namely

1. My presentations on ‘Elements of Neural Networks & Deep Learning’ – Parts 1, 2 and 3

2. My presentations on ‘Elements of Neural Networks & Deep Learning’ – Parts 4 and 5

In this final set of presentations I discuss initialization methods and regularization techniques, including dropout. Next I discuss gradient descent optimization methods such as momentum, RMSProp and Adam. Lastly, I briefly touch on hyper-parameter tuning approaches. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

1. **Elements of Neural Networks and Deep Learning – Part 6**

This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed.

The corresponding implementations in vectorized R, Python and Octave of the methods discussed above are available in my post Deep Learning from first principles in Python, R and Octave – Part 6.
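
As a rough illustration, here is a minimal sketch in R of He initialization and inverted dropout. The layer sizes, the `keep_prob` value and the helper names are illustrative, not the exact code from the book or the post.

```r
set.seed(7)

# He initialization: scale random weights by sqrt(2 / fan-in), suited to ReLU units
# (Xavier initialization would use sqrt(1 / fan-in) instead)
heInit <- function(numUnits, numInputs) {
  matrix(rnorm(numUnits * numInputs), numUnits, numInputs) * sqrt(2 / numInputs)
}

# Inverted dropout: zero out activations at random and rescale by keep_prob,
# so the expected magnitude of the activations is unchanged
dropout <- function(A, keep_prob = 0.8) {
  mask <- matrix(runif(length(A)), nrow(A), ncol(A)) < keep_prob
  (A * mask) / keep_prob
}

W1 <- heInit(4, 3)                   # 4 hidden units, 3 input features
X  <- matrix(rnorm(3 * 5), 3, 5)     # 5 training samples
A1 <- pmax(W1 %*% X, 0)              # ReLU activations
A1 <- dropout(A1, keep_prob = 0.8)   # applied only during training
```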

2. **Elements of Neural Networks and Deep Learning – Part 7**

This presentation introduces the exponentially weighted moving average and shows how it is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam.

The equivalent implementations of these gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7.
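
To make the update rules concrete, here is a minimal sketch in R of the momentum, RMSProp and Adam updates for a single parameter matrix `W` with gradient `dW`. The function names are illustrative, and the hyper-parameter defaults are the commonly used values, not necessarily those in the post.

```r
# Momentum: exponentially weighted moving average (EWMA) of past gradients
updateMomentum <- function(W, dW, v, lr = 0.01, beta = 0.9) {
  v <- beta * v + (1 - beta) * dW
  list(W = W - lr * v, v = v)
}

# RMSProp: EWMA of squared gradients damps oscillations along steep directions
updateRMSProp <- function(W, dW, s, lr = 0.001, beta = 0.9, eps = 1e-8) {
  s <- beta * s + (1 - beta) * dW^2
  list(W = W - lr * dW / (sqrt(s) + eps), s = s)
}

# Adam: combines both EWMAs, with bias correction for early iterations t
updateAdam <- function(W, dW, v, s, t, lr = 0.001,
                       beta1 = 0.9, beta2 = 0.999, eps = 1e-8) {
  v <- beta1 * v + (1 - beta1) * dW      # first moment (mean of gradients)
  s <- beta2 * s + (1 - beta2) * dW^2    # second moment (uncentered variance)
  vHat <- v / (1 - beta1^t)              # bias-corrected estimates
  sHat <- s / (1 - beta2^t)
  list(W = W - lr * vHat / (sqrt(sHat) + eps), v = v, s = s)
}
```

Learning rate decay simply shrinks `lr` over epochs, e.g. `lr <- lr0 / (1 + decayRate * epoch)`.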

3. **Elements of Neural Networks and Deep Learning – Part 8**

This last part touches upon hyper-parameter tuning in Deep Learning networks.
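
As an illustration, here is a minimal sketch in R of random search over two hyper-parameters, sampling the learning rate on a log scale. `trainAndScore` is a hypothetical function standing in for training a network and returning a validation metric.

```r
set.seed(42)
nTrials <- 20
results <- data.frame(lr = numeric(nTrials),
                      keepProb = numeric(nTrials),
                      score = numeric(nTrials))

for (i in seq_len(nTrials)) {
  lr       <- 10^runif(1, min = -4, max = -1)  # learning rate in [1e-4, 1e-1], log scale
  keepProb <- runif(1, min = 0.5, max = 1.0)   # dropout keep probability
  results[i, ] <- c(lr, keepProb, trainAndScore(lr, keepProb))  # hypothetical trainer
}

best <- results[which.max(results$score), ]    # trial with the best validation score
```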

This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’.

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way up to a generic L-layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as a paperback ($18.99) and in a Kindle version ($9.99/Rs 449).

See also

1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

2. Big Data-1: Move into the big league: Graduate from Python to Pyspark

3. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)

4. Revisiting crimes against women in India

5. Introducing cricket package yorkr: Part 1- Beaten by sheer pace!

6. Deblurring with OpenCV: Wiener filter reloaded

7. Taking a closer look at Quantum gates and their operations

To see all posts click Index of posts

This post is a continuation of my earlier post Big Data-1: Move into the big league: Graduate from Python to Pyspark. While the earlier post discussed parallel constructs in Python and Pyspark, this post elaborates on similar key constructs in R and SparkR. While this post focuses on the programming aspects of R and SparkR, it is essential to understand and fully grasp the concepts of Spark, RDDs and how data is distributed across the cluster. Like the earlier post, this one shows that if you already have a good handle on R, you can easily graduate to Big Data with SparkR.
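
As a flavour of what follows, here is a minimal sketch contrasting an aggregation in plain R with the equivalent in SparkR. The file name and column names are placeholders, and a local Spark installation is assumed.

```r
library(SparkR)
sparkR.session(master = "local[*]")              # start a local Spark session

# Plain R: mean of a column by group, computed in memory on one machine
df <- read.csv("data.csv")                       # placeholder file
aggregate(value ~ group, data = df, FUN = mean)  # placeholder columns

# SparkR: the same aggregation on a distributed Spark DataFrame
sdf <- read.df("data.csv", source = "csv",
               header = "true", inferSchema = "true")
showDF(agg(groupBy(sdf, "group"), avg(sdf$value)))

sparkR.session.stop()
```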

Note 1: This notebook has also been published at the Databricks community site: Big Data-2: Move into the big league: Graduate from R to SparkR

Note 2: You can download this RMarkdown file from GitHub at Big Data – Python to Pyspark and R to SparkR