Wednesday, May 9, 2018

Ebook Grokking Deep Learning


There is no doubt that the book Grokking Deep Learning will keep you motivated. Although this post features Grokking Deep Learning in particular, you can find many genres and kinds of books here: from entertainment to adventure to politics and science, all from well-known authors and publishers around the world. Grokking Deep Learning is one of those collections. Interested? Get it now. How? Read on!

Ebook Grokking Deep Learning

After you finish reading a book, is that enough? For a book lover, one book will never be enough. Keep going! That is how you keep improving and developing your understanding. A good book is one that gets you hooked, in the positive sense. Find the books that will bring positive improvement for you now.

While other people may still find it hard to locate this book, you won't have that problem. Using your internet connection to visit this site is the right move. Here you can find Grokking Deep Learning as a source that will never run out. Making good use of it is one of the ways modern technology can work for you.

To know what a book will be like, look at how it is presented and what it covers. The subject of the book you want to read should be connected to a topic you need or a topic you like. Reading an ordinary book will not hold your interest, even when you have it in your hands. That is a common problem. But here, with Grokking Deep Learning as a reference, you no longer need to worry.

Why this online edition of Grokking Deep Learning? You do not have to go anywhere to read it. You can read Grokking Deep Learning anytime and anywhere you want, whether in your spare time or when you are bored with your work at the office. Get Grokking Deep Learning right now and be the quickest person to finish reading it.

Grokking Deep Learning

About the Author

Andrew Trask is a PhD student at Oxford University, funded by the Oxford-DeepMind Graduate Scholarship, where he researches Deep Learning approaches with special emphasis on human language. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning where he trained the world's largest artificial neural network with over 160 billion parameters, and helped guide the analytics roadmap for the Synthesys cognitive computing platform which tackles some of the most complex analysis tasks across government intelligence, finance, and healthcare industries.


Product details

Paperback: 336 pages
Publisher: Manning Publications; 1st edition (January 25, 2019)
Language: English
ISBN-10: 1617293709
ISBN-13: 978-1617293702
Product Dimensions: 7.2 x 0.8 x 9.2 inches
Shipping Weight: 1.3 pounds
Average Customer Review: 4.2 out of 5 stars (6 customer reviews)
Amazon Best Sellers Rank: #27,426 in Books

Just arrived and diving in this week. First impressions are that this is a deep dive into the mechanisms of deep learning, but exceptional in the way the material is made accessible to those without a classical math background. You just need to devote some effort and basic reasoning and you should get plenty out of this book. Bon appetit! I will update this if my assessment changes; this study effort will take a few weeks. Peace.

This is a wonderful, plain-English discussion of the mechanics that go on under the hood of neural networks - from data flow to the updating of weights. Written specifically without the usual slant toward wonky math, the concepts are presented and then advanced at a digestible pace for anyone. It makes a wonderful textbook for a course, and should be required reading for product managers and marketing people getting into deep learning alike.

TL;DR
Pros: the book helps you grasp the basic math concepts that form the foundation of deep learning, gradually increasing their complexity and using lots of simple examples.
Cons: some critical code implementations of deep learning networks are wrong and create misconstrued notions about how they work.
Recommendation: not worth the price (~$45 for the print book as of Feb 2019); the deep networks' code snippets must be treated with suspicion.

Full review: This review is based on the first 9 chapters; if anything changes significantly when I'm done with the rest of the book, I will update the review. My first impressions of 'Grokking Deep Learning' were very positive. I can agree with many reviewers here that the book has a very cool concept of starting with some easy and accessible math and gradually building up the reader's understanding of deep learning's inner workings. However, chapters 8 and 9 were so bad that they marred the whole experience for me. In these chapters the reader is supposed to take their first steps beyond the simplest neural networks and learn about dropout and batching (chapter 8), as well as some other activation functions (chapter 9). The problem is that the code snippets meant to introduce these concepts contain serious mistakes. These are not just typos or leaked draft versions - every tech book contains some amount of those. I am talking about implementation errors that can distort a reader's comprehension of why and how enhancements to basic neural networks could and should be made. It seems that even the text flow in those chapters was affected by the implementation flaws, further confusing the reader. To substantiate these claims, I collected some examples.

Chapter 8.
1. The very first code snippet is a simple network that learns to decipher handwritten digits from the MNIST dataset. Its test accuracy is ~70% at iteration 349, and this low number is supposed to show how easily a neural network overfits. The real problem, though, is that this low test accuracy is the result of an incorrect implementation of the 'relu2deriv' function used in backpropagation (a sketch of the corrected pieces follows after this list). With the function fixed, test accuracy of the same network reaches ~82%. Overfitting is still there, but 82% for a fairly simple example is not that bad. Unfortunately, all the other examples in chapters 8 and 9 that use the relu and relu2deriv functions share the same erroneous implementation and are not valid.
2. To reduce overfitting, the dropout technique is applied. While the high-level arguments in the book are clear and valid, the dropout implementation is problematic on several levels. First, the dropout example is not comparable to the non-dropout one, since they have different layouts - the hidden layer size was increased from 40 to 100 nodes and the iteration count was decreased from 350 to 300. Moreover, the last logged iteration is now 290 instead of 349. For some reason, these changes are not announced in the text. Secondly, even if we do compare the two runs, the boost of ~10% that a reader sees should be explained as the result of the incorrect 'relu2deriv' implementation. Despite what the text says, dropout in this example is just trying to curb the effects of that mistake. If you implement relu2deriv correctly, set the hidden layer to 100 nodes and the iteration count to 290, you will not see much difference between the non-dropout (88.88% peak test accuracy, 87.6% final test accuracy) and dropout (88.09% and 87.34% respectively) versions. So either a reader will be misguided by the book's examples into thinking that dropout gives an enormous boost, or, disillusioned, they will be left wondering and searching elsewhere for dropout's real efficiency and whether this implementation is correct.
3. The batch example is just plain wrong. The book says that the batched network will update its weights once per batch, but the implementation keeps updating them with every row of input data. What is really perplexing is that the network would not work with such loop logic; instead of doing the right thing and fixing the loop, the code divides the values of layer_2_delta by the batch size. And still neither accuracy nor speed improves. If we implement it correctly, with the proper looping logic and a proper 'relu2deriv', we do not get much improvement in test accuracy, but training runs much faster.
4. So the bottom line is that all the improvements in chapter 8 were made just to battle the 'original sin' - the incorrect 'relu2deriv' function. With this function fixed and the number of nodes in the hidden layer increased, the very first simple network with the relu activation function performs as well as the network with dropout and batching. The only gain was in speed, and only with properly implemented batching logic. Of course, all this says nothing about how good dropout is or what the real power of batching is - a reader has to go and find the answers in other sources.
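For readers who want a concrete picture of the fixes described in points 1 and 2, here is a minimal sketch in the spirit of the book's numpy code. It is not the book's exact listing: the variable names (weights_0_1, layer_0, and so on), the layer sizes, and the dropout scheme shown here are assumptions. It illustrates a ReLU derivative that passes gradient only where the unit was active, plus an inverted-dropout mask that rescales the surviving hidden nodes.

import numpy as np

np.random.seed(1)
alpha, hidden_size, pixels, num_labels = 0.005, 100, 784, 10  # illustrative values

# weight matrices for a 784 -> 100 -> 10 network (sizes are assumptions)
weights_0_1 = 0.2 * np.random.random((pixels, hidden_size)) - 0.1
weights_1_2 = 0.2 * np.random.random((hidden_size, num_labels)) - 0.1

def relu(x):
    return (x >= 0) * x        # zero out negative pre-activations

def relu2deriv(output):
    return output >= 0         # gradient flows only where relu was active

def forward_with_dropout(layer_0):
    layer_1 = relu(np.dot(layer_0, weights_0_1))
    dropout_mask = np.random.randint(2, size=layer_1.shape)  # drop roughly half the hidden nodes
    layer_1 = layer_1 * dropout_mask * 2  # rescale kept nodes so the expected input to layer_2 is unchanged
    layer_2 = np.dot(layer_1, weights_1_2)
    return layer_1, layer_2, dropout_mask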
Chapter 9. The biggest part of the chapter is an explanation of how different activation functions can be used in different contexts for greater accuracy. While on the theoretical level everything looks valid, the code implementation is again misleading.
1. As in chapter 8, it uses a divisor when calculating layer_2_delta. While in chapter 8 it was necessary because of the incorrect loop implementation, here it is absolutely uncalled for. The loop this time is correct, but the divisor has actually become larger, since it is now the product of the batch size and layer_2.shape[0], or 100 * 100 = 10,000. Why? No explanation. Also, alpha is set unusually high, at 2. So, to fix this, one has to get rid of the divisor and divide alpha by the same value of 10,000. Test accuracy will not change (0.8701 at iteration 290), but the network logic will no longer depend on the batch size (see the sketch after this list for what a cleaner per-batch update can look like).
2. Still, whether it is the book's version or the fixed one, accuracy does not go above 87%. That is still less than the 88% of the 'plain vanilla' relu example, provided relu2deriv is implemented properly and the hidden node count is 100. So all the improvements were in vain - and yes, I realize that test accuracy is not the only parameter to look at. But in the book it is the decisive one, and there is no gain in it from the more complex tanh/softmax example.
3. Just out of curiosity, I tried using properly implemented relu/relu2deriv functions in the fixed example while keeping the softmax function in place. This is the first time we can see any gain in accuracy: it centers around 89%, which is a 1% gain over the plain vanilla version. Well, it seems that softmax can help. Is it just 1%, or could it be more? What about tanh? Why didn't it show any gain in accuracy compared to relu? Is it because we implemented it wrong? Is it because the context is not right? Maybe the parameter setup was wrong - alpha, hidden layers/nodes, and so on? Again, we have to look for answers elsewhere. It is the same outcome I had with chapter 8.
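To make the batching and scaling points above concrete, here is a rough sketch, again not the book's listing, of a single training step that updates the weights once per batch, averages the output delta over the batch size rather than dividing by an unexplained constant, and uses tanh on the hidden layer with softmax on the output. The function name, shapes, and learning rate are illustrative assumptions.

import numpy as np

def tanh2deriv(output):
    return 1 - output ** 2     # derivative of tanh expressed in terms of its output

def softmax(x):
    temp = np.exp(x - x.max(axis=1, keepdims=True))  # shift for numerical stability
    return temp / temp.sum(axis=1, keepdims=True)

def train_one_batch(batch_images, batch_labels, weights_0_1, weights_1_2, alpha=0.02):
    """One gradient step per batch: batch_images is (batch_size, 784),
    batch_labels is one-hot with shape (batch_size, 10)."""
    batch_size = len(batch_images)

    layer_1 = np.tanh(np.dot(batch_images, weights_0_1))
    layer_2 = softmax(np.dot(layer_1, weights_1_2))

    # average the softmax/cross-entropy delta over the batch; the divisor is the
    # batch size, not batch_size * layer_2.shape[0]
    layer_2_delta = (layer_2 - batch_labels) / batch_size
    layer_1_delta = np.dot(layer_2_delta, weights_1_2.T) * tanh2deriv(layer_1)

    # weights are touched once per batch, not once per input row
    weights_1_2 -= alpha * np.dot(layer_1.T, layer_2_delta)
    weights_0_1 -= alpha * np.dot(batch_images.T, layer_1_delta)
    return weights_0_1, weights_1_2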
Conclusion. After the first 7 chapters I would have gladly recommended this book to everyone interested in learning about deep networks from scratch. After chapters 8 and 9, I am not so positive about it. If you like forensic analysis - maybe. If trusting what you read is a must - the book is probably not for you. Either way, I don't think the price is justifiable. With a good technical editor and/or reviewer, this book could perhaps have become a real gem. If you see it on a 50% sale at Manning or Amazon and you feel adventurous - go for it. If not - go elsewhere.

What is the book about?
- This book aims to teach you everything you need to know about Deep Learning in detail. The author teaches you how to write Deep Learning algorithms from scratch.
Who is this book for?
- For everyone who wants to learn Deep Learning from scratch.
What I loved
- The book simplifies deep learning a lot. It gives you great insight into how Deep Learning learns and why it works.
- The book starts very small and builds up your knowledge in detail, chapter after chapter. It teaches you how to write your Deep Learning algorithms better with each chapter while answering the questions about the algorithms that you face along the way.
What could be improved
- Chapter 10 teaches you how to write a program that identifies symbols/numbers in images. The chapter is too short and does not describe the code in enough detail. It would be better if the author wrote a bit more, to save the reader the time of figuring out Convolutional Neural Network algorithms from scratch. However, the knowledge gained from the book gives you an advantage in figuring this out in the end.
Round up
- If you want to start with Deep Learning, this is the perfect introduction. This book gives you a solid and important foundation to fully understand and build your own Deep Learning algorithms and frameworks using only Python.
- The best way to learn Deep Learning is by writing the algorithms by hand. This book is an excellent choice for doing just that.
- The book was extremely exciting and fun.
- The book has made it easier to fully understand the deep learning frameworks that are available today.
- Thanks to this book, Deep Learning is not a black box anymore.

Excellent book. I really like the analogies the author uses to explain some complex concepts.

The author does an excellent job of gently taking the reader through a series of learning exercises, steadily building up a deeper understanding and a broader view of Deep Learning.

Grokking Deep Learning PDF
Grokking Deep Learning EPub
Grokking Deep Learning Doc
Grokking Deep Learning iBooks
Grokking Deep Learning rtf
Grokking Deep Learning Mobipocket
Grokking Deep Learning Kindle
