Hi Sam, glad you liked it!

“How should this cost gradient ‘connect’ to the softmax backward gradient you posted here?” → Do you mean the chain rule?

Maybe this post will help? https://medium.com/@aerinykim/derive-the-gradients-w-r-t-the-inputs-to-an-one-hidden-layer-neural-network-fb24ed1ed05f

It’s not a complete explanation of backprop, but it shows how to connect the two gradients using the chain rule.
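Here’s a tiny sketch of what I mean (my own illustration, not from either linked post): the chain rule multiplies the cost gradient dL/dp by the softmax Jacobian to get the gradient w.r.t. the logits, assuming cross-entropy loss with a one-hot target.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])        # example logits (made up)
y = np.array([1.0, 0.0, 0.0])        # one-hot target
p = softmax(z)

# Cost gradient: dL/dp for cross-entropy L = -sum(y * log(p))
dL_dp = -y / p

# Softmax Jacobian: dp_i/dz_j = p_i * (delta_ij - p_j)
J = np.diag(p) - np.outer(p, p)

# Chain rule connects them: dL/dz = J^T @ dL/dp
dL_dz = J.T @ dL_dp

# The product collapses to the familiar softmax-with-cross-entropy
# backward gradient, p - y
print(np.allclose(dL_dz, p - y))     # True
```

That final simplification (dL/dz = p − y) is why the two gradients look so different on paper but are really the same computation chained together.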

If you want a step-by-step walkthrough of backprop, this article might help: https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

I’m an Engineering Manager at Scale AI and this is my notepad for Applied Math / CS / Deep Learning topics. Follow me on Twitter for more!