I would hope that you, my dear reader, have not had the extreme displeasure of flying on a United Airlines flight recently. If you have, perhaps you should turn away now, lest you trigger any latent traumatic memories. This story is about a wonderful adventure with the ever-helpful airline, which commenced on Christmas Eve: December 24th, 2023.
Sleepwalking into a horrible fate
My alarm rang at 4 A.M.
I often drink matcha. Matcha is a bright green powder ground from green tea leaves. Swirling the powder into water yields a luxuriantly smooth tea.
A cup of matcha, as imagined by Midjourney.
This tea is something of substance — a suspension that would not be the same without its constituent components — a solid and a liquid blended into something entirely new.
This is a bundle of somewhat unstructured notes from a Twitter Spaces event about value investing. The Spaces event was put on by a good friend of mine, Jason Wong, almost a year ago (May 21st, 2022). While going through my notes recently, I realized that these bullets might be worth publishing if I could turn them into a slightly more readable document. This is about the best I could do.
The killer use case for large language models (LLMs) is clearly summarization. At least today, in my limited experience, LLMs are incapable of generating unique insights. While LLMs are good at creatively regurgitating text from a given input or writing generally about a topic, they’re unlikely to “think” something unique. However, LLMs appear to be quite good at knowing what they do and don’t know, and this is especially true when they are given a clear chunk of information or text to summarize.
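To make that concrete, here is a minimal sketch of this kind of grounded summarization, assuming the OpenAI Python SDK; the `summarize` helper, the model name, and the prompt wording are my own illustrative choices rather than anything prescribed above.

```python
# Minimal summarization sketch (assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; model name and prompt are illustrative).
from openai import OpenAI

client = OpenAI()


def summarize(text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to summarize only the text it is given."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the provided text in three bullet points. "
                    "Use only information contained in the text."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0.2,  # keep the output close to the source material
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    notes = "Paste the chunk of text you want summarized here."
    print(summarize(notes))
```

The point of the constrained system prompt is the claim above: the model is at its best when it is handed a clear chunk of text and asked to stay within it, rather than asked to produce something genuinely new.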