2 Comments
Matthew Gertner

The discussion around AGI is far too binary and frankly comes across as a bit naive. There seems to be some expectation that AGI will happen all at once and that it will be clearly recognizable when it does. It is much more plausible that we will continue to add capabilities incrementally and that AGI, if it ever does occur, will sneak up on us over time.

In the meantime, labeling capabilities in a more granular way, as you suggest, makes total sense.

Kevin Thuot

Yes! Thanks for your comment. I feel like I'm going crazy watching people argue about whether a model is "truly thinking" or "not really intelligent". We have to define these terms in detail before we even know if we disagree or not.
