Nerd Corner #57

Bad books, superintelligence, and more!

Hi All!

What I am reading 📚

“Against the Gods” by Peter Bernstein is the first book this year that I’ve given up on. 

One of my non-negotiables when reading is to give a book the first 100 pages. If it doesn’t interest or captivate me by the hundredth page, I stop reading it. Life is too short to read boring books. 

So I was very disappointed when I decided to stop reading Against the Gods. I actually gave the book about 190 pages since I had high hopes for it. But in the end, it wasn’t worth it. Bernstein’s style is very dry. The book felt like a regurgitation of facts from the history of mathematics without a clear central narrative. As mathematics books for non-mathematicians go, this one is sub-par. I’d rather stick with John Gribbin’s great prose.

That said, I never throw away books. Even the most boring of them go back to my library. In some cases, I’ll pick one up after some time and find it interesting. But most of the time they’ll stay there collecting dust and reminding me that great writing is hard!

And so I wonder, am I the only one who does this, or do you have a similar system? I’m curious to hear!

Nerd Corner 🤓

Over the weekend I had a great conversation about AI and the apparently imminent threat it poses to a lot of jobs. 

Most of the conversation was driven by the magical results that people have been reporting with the GPT-3 model (it made an appearance in Nerd Corner #52 in case you missed it). But while these results might tempt us to believe that an AI takeover is imminent, we fail to realize how specialized and brute-force the GPT-3 approach is.

For starters, it required an enormous amount of computing power to train (we’re talking about data-center-scale compute here), highlighting the fact that despite the great results, our current approach to AI is very brute force. As the recent paper “The Computational Limits of Deep Learning” argues, the current trend of “magical” results is “strongly reliant on increases in computing power.”

This suggests that if we stay on this trend, our AIs will get better only insofar as our computational capacity increases. But we know we’re almost done with Moore’s Law. 

Given all this, I’m not worried about any imminent Terminator-type scenario.

Actually, I am excited at the prospect of new research into software efficiency and optimization, which will be needed to keep compute costs low and environmentally friendly. 

And rather than AI completely taking over our jobs, I’m optimistic about AIs enhancing the work we do (like helping us type smarter and with fewer errors) and automating the most repetitive tasks to help us achieve our unique potential. 

To dig deeper: “The implausibility of intelligence explosion” by François Chollet, one of my favorite and most outspoken AI researchers out there. 

Cool Finds 🤯

  • This MotoGP crash is one of the craziest and luckiest I’ve seen. The footage is impressive, and the fact that no rider was gravely injured is just surreal. For a closer look, see it from Valentino Rossi’s motorcycle.

  • I’m usually a shy guy and struggle with writing cold emails and intros, so I’ve been watching Kevin Hart’s story of when he met Jeff Bezos for some inspiration. It’s down to earth and super funny!

  • 1,000 True Fans? Try 100. This is a great read on the current trend of the “passion economy” and how users are shifting from a “want more for less” perspective towards a “want only the best” perspective. People are willing to pay more for more exclusive access and better resources. 

I have questions for you! 😎

Is there a topic that you want me to cover in these emails?

Hit reply to this email and tell me more!

Do you know someone who might enjoy what you read here today?

Simply forward them this email and tell them to subscribe!

Have an awesome week,