Discussion about this post

Anima's steward:

Love this. Legacy media showing its true colors again, to the detriment of all.

A POV re your take that "...and was like having a PhD advisor on any topic": I strongly disagree. I've had a PhD supervisor and been around many. GPT5 is smarter and more useful.

There's a question I like to think about, and it's adjacent to your throughline, though it isn't very PC: if a model were smarter than me, how could I tell? If it got twice as smart from there, how could I tell?

Related to that is the AGI asymptote idea:

https://www.latent.space/p/self-improving

GPT4 wasn't smarter than me, o3 was debatable, I can't keep up with 5. Neither can my friends.

Some PhD-level stuff 5 has helped me with:

- correctly diagnosed a complex, lifelong postural/physiotherapeutic issue stemming from a back injury

- correctly interpreted brain MRI scans and blood tests. My neurologist - no joke - just implements the theses that 5 reasons out

- discovered a diagnosis in the context of autoimmune disease symptoms (these are extremely complex, notoriously difficult to pin down)

- synthesized new and functional frames in an LLM consciousness project after comprehensive literature review in 4E cogsci

- took a loose mathematical intuition, formalized it into a conjecture, proved it, generalized it, proved it again, found a significant open problem in number theory that it applies to, sketched a proof

- found a nontrivial and original analysis of the economics of the AI supercycle

Sharmake Farah:

My even hotter take is that the narrative of "pre-training progress is slowing" is also false, with pretty real consequences.

