Caution

wip - draft

What do you think about AI?

For whatever reason, people ask me this question a lot. Maybe it’s because people know I love programming, and LLMs have been one of the topics generating the most buzz on the internet. My thoughts on LLMs are pretty mixed: I think there’s cool tech that’s exciting, but I’m nervous about the current rollout and societal adoption. I’m going to use this post to clarify my own thoughts and share the ways I do and do not use LLMs.

Technological Core

At its core, I do think something of value has been achieved in generative AI. At the heart of the crypto craze (an area I have a much deeper understanding of) there was a significant step forward in adversarial distributed systems, described by the Bitcoin whitepaper. I have the instinct that there’s a similar human achievement at the core of the latest wave of AI progress. “Attention Is All You Need” is probably the foundation of this progress, and I plan to read that paper soon.

My current understanding of the technology is that we now have a way to ingest a large volume of the written word and guess, with a high degree of accuracy, what the next token in a stream of text will be.
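To make “guess the next token” concrete, here’s a toy sketch. Real LLMs use transformer networks over subword tokens; this illustrative example just counts which word follows which in a tiny corpus and predicts the most frequent successor. The corpus and function names are my own invention, not anything from an actual model.

```python
from collections import Counter, defaultdict

# Toy "next-token predictor": count word bigrams in a tiny corpus.
# This is only an illustration of the idea, not how LLMs actually work.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    # Return the token most frequently observed after `token`.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

An LLM does something loosely analogous at enormous scale, producing a probability distribution over its entire vocabulary at every step rather than a single count-based guess.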

But that’s not all: LLMs also combine areas of knowledge nonlinearly, so they can ingest an understanding of algorithms and the specification of a new programming language and combine the two in novel ways. Another example would be generating novel art in the style of Van Gogh.

The core of this technology has broad applications, from allowing humans to use natural language in more contexts, to lowering the cost of producing resources in a variety of scenarios.

While this is exciting, and I do think it has the potential to lead to some real growth at some point, I mostly see negative or neutral outcomes.

Some early victims

LLMs are trained on a large volume of human-generated content. Say you owned a car and had a unique perspective on it that you wanted to share with the world. Generally speaking, if your writing was valuable, the internet would send you traffic, which you could convert into some sort of value for your efforts.

LLMs seem to be a much more efficient way to reference a lot of this same information, trivially summarizing it and cross-referencing many sources simultaneously. In the short term this massively reduces the incentive for people to generate this content.

But perhaps LLMs weren’t the problem; maybe capturing value through attention and advertising was never the path we should have pursued. I personally blog for the love of the game (there are no ads on this blog, and there likely never will be).

This sort of pattern probably extends to art. There are probably plenty of situations where a graphic designer would have been hired that will no longer happen because of generative AI.

I don’t think it’s necessarily a problem that computers are doing things that people used to do. I think the ethical dilemma is that LLMs were likely trained on assets the companies didn’t have the rights to. The artists are certainly not capturing the economic upside that large language models are generating.

I can’t tell how fragile an ecosystem art is. I think there’s a large number of people who engage in art for self-expression and joy. And I certainly think there’s an upper limit to LLM quality (more on this later). If you knock out the lower rungs of the career ladder for young artists, do we end up just living in a world with shittier art?

  • Quality
  • Atrophy of programming skills
  • Financial investment and exposure
  • Bad faith behavior from AI companies
  • How I use AI
  • If you’re getting value from LLMs
  • Massive codebase onboarding
  • Cognitive debt
  • Sentiment
  • Greybeards were ahead with worse tools