Just some links to sites and articles I like—you might also enjoy them.
- Colah: Understanding LSTM Networks
- Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks
- Chuan Li and Michael Wand – Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis (Paper) (Code)
- Sketch-based images
- Generating Abstract Patterns with TensorFlow (Code)
- Generating Large Images from Latent Vectors (Code)
- Google AI – Looking Back at 2019
- Andrej Karpathy – A Recipe for Training Neural Networks
- Matt H / Daniel R (BYU) – Practical Advice for Building Deep Neural Networks
- Chris Yiu – DeepIndex – some of the coolest applications of AI out there, organized by industry/application space.
- Arvind Narayanan – How to recognize AI snake oil
- Michael I. Jordan – Artificial Intelligence – The Revolution Hasn’t Happened Yet
- Backing off towards simplicity – why baselines need more love
- "Take an existing baseline or write it yourself, and ensure it remains simple and fast. Then take this simple and fast baseline and push it as far as possible. This means tuning hyperparameters extensively, trying a variety of regularization techniques, and sanity checking against bugs and potentially flawed assumptions… As it is fast, you can spend many runs tuning your hyperparameters. As it is simple, bugs and flawed assumptions become easier to find, since the model isn't powerful enough to hide them from you."
- Shreya Shankar’s blog
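The baseline-tuning advice quoted a couple of bullets up can be sketched as an exhaustive sweep over a cheap model, where each run is fast enough to try every configuration. A minimal illustration (the `train_and_score` function, hyperparameter names, and grid values below are all hypothetical stand-ins, not from any of the linked posts):

```python
import itertools

def train_and_score(lr, weight_decay):
    """Stand-in for a real training run that returns a validation score.

    A real baseline would fit a small, simple model here; because it is
    fast, we can afford to call it once per grid point.
    """
    # Hypothetical score surface peaking at lr=0.01, weight_decay=1e-4.
    return -((lr - 0.01) ** 2) - ((weight_decay - 1e-4) ** 2)

# A small grid: simple and fast means we can try every combination.
grid = {
    "lr": [0.001, 0.01, 0.1],
    "weight_decay": [0.0, 1e-4, 1e-2],
}

# Exhaustively evaluate every (lr, weight_decay) pair and keep the best.
best = max(
    itertools.product(grid["lr"], grid["weight_decay"]),
    key=lambda cfg: train_and_score(*cfg),
)
print(best)  # the (lr, weight_decay) pair with the highest validation score
```

Because the "model" is trivial, any bug in the sweep (a swapped argument, a grid typo) shows up immediately in the scores, which is the point of keeping the baseline simple.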
For someone who doesn't even work in software, it seems I read an awful lot about software engineering…
- Jeff Dean and Luiz André Barroso – The Tail at Scale
- Bob Martin – Clean Code
- Bob Martin – Agile Software Development
- Julia Evans – What’s a Senior Engineer’s Job?
- John Allspaw – On Being a Senior Engineer
- Stephen Bush – The Ten Commandments of Egoless Programming
- OmniTI – Writing Readable Code
- And the related Your Code May Be Elegant, but Mine Fucking Works
- Sage Sharp – The Gentle Art of Patch Review
- Lara Hogan – What sponsorship looks like
- Anything from Terence Tao’s blog
- Matt Might – 12 resolutions for grad students
- And check out the Related pages at the bottom of this page, too.
- Optimize for your learning rate
My most frequent listens:
- Lex Fridman: Artificial Intelligence
- Lex tends a bit philosophical for my taste (I'm more interested in the engineering side), but it's still a great podcast. As one of the more mainstream AI podcasts, it has plenty of content, and he brings in some big names for the discussions.
- NVIDIA: The AI Podcast
- Software Engineering Daily (although most of this goes over my head)
- Jocko Podcast
- Linear Digressions
- Katie, a co-host of the show, recently gave a talk called Community-Building for Data Scientists as part of the MSiA Seminar Series. It was very interesting!