NOAH
JACOBS

TABLE OF CONTENTS
2025.02.09-On-Overengineering
2025.02.02-On-Autocomplete
2025.01.26-On-The-Automated-Turkey-Problem
2025.01.19-On-Success-Metrics
2025.01.12-On-Being-the-Best
2025.01.05-On-2024
2024.12.29-On-Dragons-and-Lizards
2024.12.22-On-Being-a-Contrarian
2024.12.15-On-Sticky-Rules
2024.12.08-On-Scarcity-&-Abundance
2024.12.01-On-BirdDog
2024.11.24-On-Focus
2024.11.17-On-The-Curse-of-Dimensionality
2024.11.10-On-Skill-as-Efficiency
2024.11.03-On-Efficiency
2024.10.27-On-Binary-Goals
2024.10.20-On-Commitment
2024.10.13-On-Rules-Vs-Intuition
2024.10.06-On-Binding-Constraints
2024.09.29-On-Restrictive-Rules
2024.09.22-On-Conflicting-Ideas
2024.09.15-On-Vectors
2024.09.08-On-Perfection
2024.09.01-On-Signal-Density
2024.08.25-On-Yapping
2024.08.18-On-Wax-and-Feather-Assumptions
2024.08.11-On-Going-All-In
2024.08.04-On-Abstraction
2024.07.28-On-Naming-a-Company
2024.07.21-On-Coding-in-Tongues
2024.07.14-On-Sufficient-Precision
2024.07.07-On-Rewriting
2024.06.30-On-Hacker-Houses
2024.06.23-On-Knowledge-Graphs
2024.06.16-On-Authority-and-Responsibility
2024.06.09-On-Personal-Websites
2024.06.02-On-Reducing-Complexity
2024.05.26-On-Design-as-Information
2024.05.19-On-UI-UX
2024.05.12-On-Exponential-Learning
2024.05.05-On-School
2024.04.28-On-Product-Development
2024.04.21-On-Communication
2024.04.14-On-Money-Tree-Farming
2024.04.07-On-Capital-Allocation
2024.03.31-On-Optimization
2024.03.24-On-Habit-Trackers
2024.03.17-On-Push-Notifications
2024.03.10-On-Being-Yourself
2024.03.03-On-Biking
2024.02.25-On-Descoping-Uncertainty
2024.02.18-On-Surfing
2024.02.11-On-Risk-Takers
2024.02.04-On-San-Francisco
2024.01.28-On-Big-Numbers
2024.01.21-On-Envy
2024.01.14-On-Value-vs-Price
2024.01.07-On-Running
2023.12.31-On-Thriving-&-Proactivity
2023.12.24-On-Surviving-&-Reactivity
2023.12.17-On-Sacrifices
2023.12.10-On-Suffering
2023.12.03-On-Constraints
2023.11.26-On-Fear-Hope-&-Patience
2023.11.19-On-Being-Light
2023.11.12-On-Hard-work-vs-Entitlement
2023.11.05-On-Cognitive-Dissonance
2023.10.29-On-Poetry
2023.10.22-On-Gut-Instinct
2023.10.15-On-Optionality
2023.10.08-On-Walking
2023.10.01-On-Exceeding-Expectations
2023.09.24-On-Iterative-Hypothesis-Testing
2023.09.17-On-Knowledge-&-Understanding
2023.09.10-On-Selfishness
2023.09.03-On-Friendship
2023.08.27-On-Craftsmanship
2023.08.20-On-Discipline-&-Deep-Work
2023.08.13-On-Community-Building
2023.08.05-On-Decentralized-Bottom-Up-Leadership
2023.07.29-On-Frame-Breaks
2023.07.22-On-Shared-Struggle
2023.07.16-On-Self-Similarity
2023.07.05-On-Experts
2023.07.02-The-Beginning

WRITING

"if you have to wait for it to roar out of you, then wait patiently."

- Charles Bukowski

Writing is one of my oldest skills; I started when I was very young, and have not stopped since. 

Ages 13-16 - My first recorded journal entry was at 13 | Continued journaling, on and off.

Ages 17-18 - Started writing a bit more poetry, influenced heavily by Charles Bukowski | Shockingly, some of my rather lewd poetry was featured at a county-wide youth arts event | Self-published my first poetry book.

Age 19 - Self-published another poetry book | Self-published a short story collection with a narrative woven through it | Wrote a novel in one month; after considerable edits, it was longlisted for the DCI Novel Prize, although that's not a big deal; I think that contest was discontinued.

Age 20 - Published the GameStop book I mention on the investing page | Self-published an original poetry collection that was dynamically generated based on reader preferences | Also created a collection of public domain poems with some of my friends' and my own mixed in; I was going to publish it with the same dynamic generation, but never did.

Age 21 - Started writing letters to our hedge fund investors, see investing.

Age 22 - Started a weekly personal blog | Letters to company investors, unpublished.

Age 23 - Coming up on the one-year anniversary of consecutive weekly blog publications | Letters to investors, unpublished.

You can use the table of contents to the left or click here to check out my blog posts.

Last Updated 2024.06.10

Join my weekly blog to learn about learning

2025.01.26

LXXXIV

If you only look at things that are happening around you to determine how the world works, you’ve fallen victim to the turkey problem.

Watching what is going on around you is important, but unless you are also asking why things happen the way they do, you can quickly jump to a lot of bad conclusions.

Because it is easy to jump to such bad conclusions and because AI makes it easy to automate things, we’re seeing a lot of things get automated simply because they exist, not because they should exist.

Asking “why” and thinking logically can help you avoid this trap.

Subscribe

-------------------

The Turkey Problem 

There exists a turkey who lives on a farm. A human has been feeding it every day for the last 99 days.

The turkey thinks that the human is quite fond of him, a sort of benevolent patron or friend. After all, ALL of the evidence supports this! Every day so far, the farmer has fed him! 



Caption: Gobble, gobble.

As a matter of fact, for every day that passes, the turkey feels MORE sure that the farmer will feed him on the next day—there’s MORE evidence now!

And then, on day 100, when the turkey is at the peak of its confidence about the owner’s benevolence, he is taken to a shed and slaughtered. 

The problem could be comically reformulated as the "oats" problem with pigs. Please only click that link if you would normally spend 3 minutes watching two pigs argue with each other about oats.

Induction

Unfortunately for the turkey, it was only looking at a pattern, without any consideration of cause and effect.

Meaning, it saw the man feed it on day one and day two and day three, so it decided that it was likely that the man would feed it on day four. Every time this pattern held, the turkey grew more confident that it would continue to hold. 

This is the problem of induction: if you rely on past examples alone, you do not actually know anything with certainty.
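The turkey's growing confidence can even be put into numbers. One classic way to do this (my framing, not the post's) is Laplace's rule of succession: after n straight days of being fed, the estimated probability of being fed tomorrow is (n + 1) / (n + 2). A minimal sketch:

```python
def confidence_after(n_fed_days: int) -> float:
    """Estimated probability of food tomorrow after n consecutive fed days,
    via Laplace's rule of succession: (n + 1) / (n + 2)."""
    return (n_fed_days + 1) / (n_fed_days + 2)

# The turkey's confidence climbs toward 1.0 as the pattern holds...
for day in (1, 10, 99):
    print(f"day {day}: {confidence_after(day):.3f}")

# ...and it is highest on the eve of day 100. The model has no term for
# the farmer's intentions, only for the observed pattern.
```

Running this shows confidence rising from roughly 0.667 on day 1 to about 0.990 on day 99, which is exactly the trap: the estimate is most confident at the moment it is most wrong.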

The Human Problem

Before you say the turkey problem doesn't apply to humans, I’d check out this collection of famous last words:

  • “The last two guys I got into bar fights with didn’t have knives!”

  • “Sears has been around for over a hundred years, of course it’s a great stock to buy!”

  • “The last time I drank a bottle of wine and went on a joy ride, I didn’t hit anyone!”

  • “I’ve always been able to raise more capital!”

  • “The last frog I touched wasn’t poisonous!”

  • “OpenAI has always lowered the cost of its API!”*

  • “I’ve never seen a cop on this road, so I can go 2x the speed limit!”**

  • “Well, I mean I’ve never died before! So I guess I can’t die in the future!”

As you can see, there are a lot of ways we can easily draw absurd conclusions from past evidence alone. 

*I am not actively betting on the cost of the API going up, but I'm certainly not building a business that relies on it continuing to go down or even staying as low as it is, either. In September, OpenAI said they would lose $5B in 2024.

**This is not an endorsement for speed limits.

Automate Everything

One trap inductive thinking will lead you into is believing that you should automate anything that currently exists.

With AI making it easier to automate things, it does not seem like a bad bet to pick an industry and slap AI on it to make the thing go faster. People already do the task; why wouldn't they want to do it faster?

This is natural and not always wrong, but it’s not always right, either. The issue arises if you don’t ask why the thing exists in the first place. Maybe it doesn’t need to exist.

In my head, I’m picturing a Rube Goldberg machine of increasingly absurd complexity being constructed to do something like dispense a particular amount of water for a cat every 4 hours when there is a constantly flowing fountain two feet away. 

The AI SDR

As an example of automating potentially the wrong thing in BirdDog’s space, a bunch of people who don’t know anything about sales* are building AI SDRs.

An SDR (sales development rep) is a sales position at a company that is tasked with booking and/or qualifying** meetings for a more senior salesperson, typically an Account Executive (AE) who does the bulk of the sales process and closes the contract.

The SDR position sucks because it involves cold calling and emailing and doing anything you can to get meetings.

What’s not obvious, though, is that the position doesn’t necessarily need to exist in the first place. A sales team exists to generate revenue for a company. A sales team does not exist to have an SDR team. 

The first sales team I worked with in early 2024 had either just fired their SDRs or promoted them to full-cycle AEs, meaning the AEs did their own prospecting and closing without any SDRs. Here is a Reddit post from over a year ago debating the merits of not even having the SDR role. Here is a post from well before the AI SDR phenomenon was on the radar (2021) discussing the drawbacks of the SDR role. And here is an article from 2018 explaining the issues of the position.

SDRs typically optimize for meetings booked and have a notoriously bad reputation for booking meetings that don't convert into revenue. Might it make more sense to take a step back, realize that the sales team cares about revenue, not meetings booked, and help them optimize for that?

In short, while the position is common and therefore inductively justifiable, it is not obvious that it needs to exist or even should exist in all cases, let alone be optimized and automated.

*Jack and I also fall into the “don’t know anything about sales” category, but I’d like to think we are good listeners.

**Making sure that the person might actually buy the product or service

An Aside on LLMs

A recursive aside on LLMs: they suffer from the same issue as the people automating things that shouldn't exist. They are pattern recognizers, not knowledge gatherers. Perhaps more on this later.

Tumors & Bubbles

A lot of things shouldn’t exist. 

However, if instead of asking “why,” you think like a turkey and go exclusively off of the past, you will not realize that these things should not exist. You will find yourself optimizing the status quo, not the ideal state.

Since AI makes it easier to automate things, we will continue to see more and more automations that should not exist.

Luckily, unlike the growth of a cancer that terminates with the death of the host, such things will likely be more inclined to pop like a bubble.

Live Deeply,