Hi Sam, glad you liked it!

“How should this cost gradient ‘connect’ to the softmax backward gradient you posted here?” → Do you mean the chain rule?

Maybe this post will help? https://medium.com/@aerinykim/derive-the-gradients-w-r-t-the-inputs-to-an-one-hidden-layer-neural-network-fb24ed1ed05f

It’s not a complete explanation of backprop, but it shows how to connect the two gradients using the chain rule.
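To make the connection concrete, here’s a minimal NumPy sketch (variable names are my own) that multiplies the cross-entropy gradient dL/dp by the softmax Jacobian dp/dz — exactly the chain-rule step in question — and checks numerically that the product collapses to the familiar shortcut p − y:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=5)           # logits
y = np.zeros(5)
y[2] = 1.0                       # one-hot target

p = softmax(z)

# Cost gradient w.r.t. the softmax output p, for L = -sum(y * log(p)):
dL_dp = -y / p

# Softmax Jacobian dp/dz = diag(p) - p p^T (symmetric):
dp_dz = np.diag(p) - np.outer(p, p)

# Chain rule: dL/dz = (dp/dz) @ (dL/dp)
dL_dz = dp_dz @ dL_dp

# The chain-rule product simplifies to the well-known form p - y:
assert np.allclose(dL_dz, p - y)
```

So the “softmax backward gradient” p − y isn’t a separate formula — it’s what falls out when the chain rule combines the two pieces above.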

If you want a step-by-step walkthrough of backprop, this article might help: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

I’m an Engineering Manager at Scale AI and this is my notepad for Applied Math / CS / Deep Learning topics. Follow me on Twitter for more!
