Before I moved to the Bay Area, I half expected winter not to exist. The temperate climate, I figured, would make every season feel the same.
Yeah, I was wrong.
The temperature here may not make your chest feel hollow, but the vibes are similar. The weather’s still colder than you’d like, and the sun’s down when you get off work.
Now I realize — winter is when it gets dark early, no matter the weather.[1]
In Chicago, winter was always a time to grind. Come summer, you’ll want to be outside. The energy around you will take you away from that keyboard. Lock in now and get it done, whatever it is.
In California, winter isn’t much different. So while I haven’t been writing newsletters, I have been locked in.
Beyond my day-to-day work, three things have been occupying my time:
All-nighters
Hackathons
Research publications
Each has changed how I think about artificial intelligence. The constant learn-and-apply feedback loop has immersed me in my work. Here’s an attempt at making sense of the picture-in-progress for folks who aren’t living with AI every day.
If you want to skip through recent history, scroll down to the insights section.
All-nighters
A few Fridays ago, some of the Heyday crew pulled an all-day, all-night, all-dayer to win Product Hunt’s #1 Product of the Day.
I even posted on the capital letter website, so you know it’s real. For those of you unfamiliar with Product Hunt, I added details in the footnotes.[2]
This comes after a year of narrowing our audience and focusing on delivering real f’ing value so we can operate a profitable business. We’ve been doing it for coaches, as I’ve noted. Now we’re looking for folks with similar professional problems.
Why?
When OpenAI launched GPT-3 into an accessible chat interface AND made it stupid easy to build using their API, every general-use AI tool saw its life flash before its eyes. That includes OG Heyday.
There are simply not enough coaches! That’s a shame, but it means we’re looking to help folks in other fields…and we need more coaches.
One of Heyday’s co-founders, Sam, outlined it extremely well in this post.
I won’t rehash more of Sam’s notes. They’re awesome, and he’s an incredibly concise writer. If you’re curious about the why, go read it.
For the curious builders, I’ll leave the how in the footnotes.[3] It’s a familiar tale.
Hackathons
A month ago, I was poised to write y’all this note before a welcome invite came in for an AI for Thought hackathon. Up in a mansion atop Twin Peaks in SF, ~100 folks built and presented 15 different projects layering AI on tools for thought. I met a bunch of lovely people and shared snacks. Someone played the piano masterfully.
We built a set of thinking agents with different synthesis methods that talked to each other about the prompt you wrote. So, one agent would use an analogy to explain their take on “What’s ailing humanity?”, another would use a narrative, another an antagonistic view, and so on. The goal was to provide multiple perspectives to improve sensemaking. I’d like to think Hofstadter would be proud.
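For the curious, here’s a minimal sketch of the shape of the idea (not our actual hackathon code): each agent is just a synthesis style paired with an instruction, and `generate()` stands in for a real model call.

```python
# Sketch of the multi-perspective agent idea, for illustration only.
# Each "agent" pairs a synthesis style with an instruction; a real
# version would send both to an LLM, so generate() is a stub here.

STYLES = {
    "analogy": "Explain your take using an analogy.",
    "narrative": "Explain your take as a short story.",
    "antagonist": "Argue against the most common take.",
}

def generate(style_instruction: str, prompt: str) -> str:
    # Stand-in for a chat-model call; a real agent would return
    # the model's completion instead of this formatted string.
    return f"[{style_instruction}] on: {prompt}"

def panel(prompt: str) -> dict[str, str]:
    """Collect one response per synthesis style for the same prompt."""
    return {name: generate(instr, prompt) for name, instr in STYLES.items()}

responses = panel("What's ailing humanity?")
for name, text in responses.items():
    print(f"{name}: {text}")
```

The design choice that mattered was keeping every agent’s input identical and varying only the synthesis method, so the differences in perspective come from the framing, not the question.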
It’s hard to build something useful in a day, even with existing frameworks to build upon. It’s near impossible to build something valuable in that time.
People on Twitter like to debate this, but the benefits of a hackathon feel much more aligned with connection and takeaway ideas, even for really strong projects. That was reinforced here. I’d like to be proven wrong.
Research publications
The single conference paper we’re working on is turning into more — a standalone publication and a potential workshop at this year’s ACM CHI. Not counting chickens before they hatch, but hopefully there’s fun news next month.
The work is a continuation of the research from a few years back. We’re writing about synthesis, how people work together, hypertext, and how our current tech changes what we’re capable of.
The funny thing about having your hands in so many different things is your brain defaults to pattern recognition. What’s similar about these things? What’s different? What’s novel in surprising ways?
Ongoing insights about AI
AI is poorly named
There are artificial aspects to artificial intelligence, but if we’re being honest, those are mostly the bad parts — the hallucinations, the wrong math, the moments when the chatbot sounds like a computer.
It’s kind of like talking to a brilliant physicist nearing the end of their tenure. You’re gonna hear a lot – some will be magic, the rest is going to make you scratch your head and hope the same doesn’t happen to you. Next time you try an AI writing tool, think about that.
That’s not to say AI isn’t useful. Today, the most useful AI augments what you can already do. Take two simple cases I use it for:
Voice transcription
I take a ton of notes. I’ve talked about this ad nauseam, but when you’re creating things, you have lots of ideas. Not all of these ideas will work out, but the further they are from being usable (i.e. chicken scratch in your pocket notebook), the less likely they ever are to come to life. You want to be able to use your creativity, not let it become exhaust.
It’s easier for me to talk through something than to write it out, mostly because of speed. My fingers take longer to type than my mouth does to speak. ChatGPT’s mobile app has done a great job with this, aside from the occasional interruption. Hit the chat button and start riffing. You’ll get the transcript, and you can take the next step from there.
Writing SQL queries
I mean, are you kidding me? I don’t have to memorize this syntax!? I can just enter details of my database and get insights? This is valuable.
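To make that concrete, here’s a sketch of the kind of query an AI tool hands back. The `events` schema is hypothetical, invented for illustration; the point is that describing your database in plain English gets you working SQL like this.

```python
import sqlite3

# Hypothetical schema, for illustration only: a table of customer events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (customer TEXT, action TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("ada", "summarize_call", "2024-02-01"),
     ("ada", "ask_question", "2024-02-02"),
     ("bob", "ask_question", "2024-02-02")],
)

# The kind of SQL an AI tool returns when you ask, in plain English:
# "which actions are most popular, and how many distinct customers use each?"
query = """
SELECT action,
       COUNT(*) AS uses,
       COUNT(DISTINCT customer) AS customers
FROM events
GROUP BY action
ORDER BY uses DESC
"""
for row in conn.execute(query):
    print(row)  # ('ask_question', 2, 2) then ('summarize_call', 1, 1)
```

None of that syntax lives in my head anymore; the database details go in, the GROUP BY comes out.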
SQL is just one facet. In June, GitHub’s CEO reported that 46% of the code written by developers using Copilot is AI-generated. Programming languages are tailor-made for AI.
We’re better off thinking of AI as augmented intelligence. You gain abilities by using it.
Our metaphors for AI limit what we believe is possible
Think grandly about your life for a moment. If you could have any ‘helper’, would it be an assistant? Probably not.
If you did say that, I’d suggest you think bigger. How about a copilot? They can take the wheel from you; an assistant can’t. That’s better! Go further, though. How about a digital clone? A soul?[4] A god?!
Still, most AI tools roll out as assistants.
As designers, we need to consider the metaphor our tools take on. Assistants can only do so much. Why not use the tools we’re building to frame a stronger idea in people’s minds? You will make a massive difference in what people believe is possible.
Most folks treat AI like a toy
This becomes incredibly clear any time I’m not around San Francisco. When I talk to folks outside that bubble about their experience with AI, the looks I get are reminiscent of me talking about Titus Andronicus. That’s to say, there’s confusion. Sometimes pushback, even — and it’s warranted if you haven’t seen the value yet.
Most people have been exposed to ChatGPT. Some have played with DALL•E to generate funny pictures. Maybe they’ve faceswapped to see what they’ll look like as an old person. The investor Chris Dixon has said it well, “The next big thing will start out looking like a toy.”
The broad, horizontal tools that succeed use accessible UX approaches, like chat. We already text all day, so working with a chatbot feels natural.
Successful vertical tools augment what you’re currently doing, like Heyday with coaches’ calls. The SQL example is easy to think about. As a product maker, I need to understand what customers are doing in Heyday. I’m saved hours of hassle and confusion every time I need to write a SQL query.
So if you’re mostly seeing value from your kids generating visuals of a T-Rex eating a thousand blue fish in an ocean-forest hybrid, that’s okay, actually. Playing with things helps us learn their bounds.
But just wait until you use it over your own knowledge. Then (!) wait until you find a tool that augments your current workflows. AI will stop feeling like a toy.
Not generative, but preventative
My guess is that this struggle stems from AI’s assistant metaphor and its first mainstream act — generative AI.
Generative AI seems like it’s fallen prey to society’s obsession with more.
By design, generative AI tools create content. Because out-of-the-box tools aren’t tailored to your preferences, much of that content is noise. That’s not great.
I’d be so much happier with short answers in clear language than extensive responses with flowery language. That’s changing with time and fine-tuning, but consider why this is — AI’s evals are measured against existing tests.
In education, existing tests largely still reward length of responses (write me a 5-page essay, write at least 1000 words) more than clarity.
Until we revise our aims for AI and the evals that measure their performance, the Large Language Model approach will keep generating noise.
But beyond all of this, I don’t necessarily want generative AI to be the primary use. I want AI to help me. Namely, I want it to help me not fuck something up.
Perhaps this is a different angle — preventative AI.[5] What might that look like in a model?
That’s the right question.
Somewhat coincidentally, I’ve also been watching the new season of True Detective over the past few weeks. I’m not done, so I’ll spare you a thinkpiece, but safe to say, it’s dark. The season is set in an Alaskan town around the winter solstice, when the sun doesn’t come out and a bunch of freaky shit happens.
Two things have dawned on me while watching. First, this is basically a horror show: pointless and very damaging to my sleep. I’m not a fan of scary for scary’s sake.
Second, and more importantly for us here, the lead detective (played by Jodie Foster) has an incredible ability to get people asking the right questions. In each episode, she angrily demands something like, “That’s the wrong question! Ask again.” I get a kick out of it. It can be somewhat annoying to the other detectives, but what a gift!
For a show that’s left me longing, I can’t stop thinking about it, and it’s a great continuation from Season 1’s masterpiece.
A question narrows perspective. Ask a broad question, get a broad answer. Ask a specific question about the wrong thing, get a specific answer…about the wrong thing.
Something about the old lesson “you have to learn for yourself” rings true here. A good question forces you out of your headspace by showing you a different perspective — one that you come to. Even when it’s prompted of you, you are the one taking the walk to that answer.
Overall, I’m left with a sense that we’re asking the wrong questions about artificial intelligence. What can we do with AI? Will AI replace our jobs? What can an assistant do for me?
Those are the wrong questions. They narrow our perspectives to those initial frames. This makes sense, because we’re still largely considering AI a toy.
I don’t know what the right questions are yet, but we won’t scratch the surface of understanding AI’s lasting impact without a change of perspective.
Here’s to asking the right questions.
The good news is we’re close to winter’s end. Just look to Daylight Savings, a mere 10 days away.
[2] If Product Hunt is completely out of your awareness, a brief primer:
Companies, typically startups, post new tech products on Product Hunt.
People use the products, add comments, and support as they see fit.
In short, it’s a gathering space for parts of the tech community to stay up-to-date on new tech.
[3] Heyday today does two things really well:
Summarize your calls based on the work you do.
Let you ask anything about your data.
The breakthrough we had internally came a few months back, when we started using our internal assistant over all of our notes, emails and calls. Samiur’s used it to frame our pitch story for our raise. Sam to write cold emails (they’re nails). It helps me figure out what’s driving our customers nuts.
Great products are often born from internal use. You don’t have to look far to learn about Slack coming from an internal communication tool built while making the game Glitch, or Basecamp coming from 37signals’ internal project management system.
The Heyday experience ends up feeling like you have a copilot with you. If I have a critique, it’s that the copilot is not quite the presidential aide you dream of, presenting you with a dossier of detail about what’s next. If you ask the right questions, you’ll be led nicely. But we still have plenty to identify within your workflow to make it default. It’s early. We’ll get there.
[4] There’s room for souls. By programming the cognitive processes we want our AI to take, we’re closer to tracking (and poking holes in) AI’s reasoning. This is an awesome project.
[5] There’s no way that’s the right word. It’s something like prescient, preparative, or inverting. But you get the gist — what’s most likely to go wrong here? Help me avoid that.