Conceptual abstraction is the basis of my method. The transfer of knowledge from one position to another can only be achieved by generalising the patterns that occur in a position: a knight fork remains a knight fork, whatever square it lands on.
That conceptualization requires a lot of work. There is no free lunch here. At best you can ask a grandmaster to gather the material and to draft an outline.
So I turned to AI to see whether it could help me. I watched a lot of videos and podcasts to find out where the field is heading. It turns out that AI suffers from the same problem.
LLMs will run out of input within the next year. Everything on the internet has already been processed, and you cannot scale LLMs up much further. Hence the ugly begging letters and the recent policy changes at X, Reddit and Meta to use your data for feeding their LLMs.
So the contours of the limitations of LLMs are becoming clear. Now that the hype is nearly over, it is time to ask the question “what’s next?”. AGI (Artificial General Intelligence) is the next target to pursue. The development of AGI more or less came to a standstill around 2001. The expert systems of the previous century turned out to be too brittle and too rigid, so investments were redirected. Between 2000 and 2010 “big data” was the mantra.
Of the seven areas where improvement is needed to get to AGI, only one has been developed: LLMs. The rest have more or less stood still in the meantime. It’s time that investors get over their craze and start to make more sensible decisions.
I dug into what the main problem with AGI is. It turns out that conceptualization is the main problem. The world is too fuzzy and unpredictable for rigid rule-based reasoning, so we need looser forms of reasoning. Knowledge graphs and neurosymbolic AI seem to be key here.
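To make the contrast with rigid rule-based reasoning concrete, here is a minimal Python sketch of loose reasoning over a toy knowledge graph. Facts carry a confidence between 0 and 1, and a conclusion inherits the product of the confidences along the path, so a shaky link weakens a conclusion instead of breaking the whole chain the way a hard if-then rule would. All concepts, relations and weights here are hypothetical illustrations, not any real neurosymbolic system.

```python
from collections import defaultdict

# Toy chess-concept graph: (source, relation, target, confidence).
# Every entry is a made-up illustration of a soft, weighted fact.
edges = [
    ("knight_on_e5",  "supports", "fork_on_c6",    0.8),
    ("fork_on_c6",    "wins",     "material",      0.9),
    ("pin_on_f6",     "creates",  "weak_kingside", 0.7),
    ("weak_kingside", "enables",  "mating_attack", 0.6),
]

graph = defaultdict(list)
for src, rel, dst, conf in edges:
    graph[src].append((rel, dst, conf))

def infer(start, depth=3):
    """Propagate confidence outward from a concept, multiplying edge
    weights along each path. A weak link merely dampens a conclusion
    instead of blocking it, unlike a rigid rule engine."""
    results = {}
    stack = [(start, 1.0, 1)]
    while stack:
        node, conf, d = stack.pop()
        for rel, dst, w in graph[node]:
            new_conf = conf * w
            if new_conf > results.get(dst, 0.0):
                results[dst] = new_conf
                if d < depth:
                    stack.append((dst, new_conf, d + 1))
    return results

print(infer("knight_on_e5"))  # {'fork_on_c6': 0.8, 'material': 0.72}
print(infer("pin_on_f6"))     # {'weak_kingside': 0.7, 'mating_attack': 0.42}
```

The point of the sketch is the degradation behaviour: where a rule-based system either fires or fails, the graph keeps reasoning with reduced certainty, which is closer to how a fuzzy, unpredictable world has to be handled.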
Recently a new benchmark was released: ARC-AGI-2 (2025). Ordinary people score about 66% on this benchmark, while the best LLM doesn’t exceed 1.5%.
So an LLM is not the best tool to help me with conceptualization. No free lunch today. At least not for a decade or so.