Is AI a new compression-abstraction layer?


A Step Back: What Did Computational Systems Give Humanity?

Let's begin with a fundamental question: what did computational systems actually give to humanity? The answer goes beyond mere speed or efficiency. They expanded our comprehension limits by acting as enablers to perform, prove, and discover concepts that were previously intractable when relying only on our minds. Computational systems introduced a new compression-abstraction layer that takes enormous quantities of "BRICKS" (raw data, observations, theories, experimental results, mathematical relationships) and compresses them into condensed concepts: "WALLS". But here's the insight: these "WALLS", once created, can themselves become "BRICKS". When filtered through new compression-abstraction layers alongside other data, theories, and condensed concepts, they generate new "ROOMS" at higher levels of abstraction. This recursive process, where outputs become inputs and "WALLS" become "BRICKS" to build "ROOMS", is how humanity expands its comprehension limits.

Kenneth Appel and Wolfgang Haken's 1976 proof of the Four Color Theorem marked the first major mathematical theorem proved using computers (1), requiring the examination of nearly 2,000 configurations, amounting to roughly a billion individual cases, a check that could not have been performed by hand.

Similarly, Edward Lorenz's 1963 discovery of deterministic chaos emerged from computer simulations when he noticed that rounding a single variable from .506127 to .506 produced drastically different weather predictions (2), revealing the sensitive dependence on initial conditions that spawned the modern field of chaos theory.
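
To make this concrete, here is a minimal Python sketch, using the standard Lorenz-63 equations with their classic parameters and a simple Euler integrator (not Lorenz's original weather model), showing how two trajectories that differ only in the sixth decimal place of one variable rapidly diverge:

```python
# A minimal sketch of sensitive dependence on initial conditions,
# using the Lorenz-63 system with its classic parameters.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two initial conditions differing only in the sixth decimal place,
# mirroring Lorenz's rounding of 0.506127 to 0.506.
a = np.array([1.0, 1.0, 0.506127])
b = np.array([1.0, 1.0, 0.506])

for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.1f}   separation = {np.linalg.norm(a - b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation between the two trajectories grows by orders of magnitude: the rounding error does not stay small, it takes over the forecast.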

The compression-abstraction layer provided by computation also enabled entirely new paradigms: Ron Rivest, Adi Shamir, and Leonard Adleman's 1977 RSA cryptosystem relied on the computational intractability of factoring large composite numbers into their prime factors (3), creating a form of secure communication that was mathematically provable yet practically implementable, only because computers could perform the necessary modular exponentiations efficiently while factorization remained computationally infeasible.
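
The asymmetry at the heart of RSA fits in a few lines of Python. This is a toy sketch with deliberately tiny primes, purely for illustration; real RSA uses primes hundreds of digits long plus padding schemes:

```python
# Toy RSA: encryption and decryption are cheap modular exponentiations,
# while recovering the private key requires factoring n.
# (Tiny primes for illustration only; never use such parameters in practice.)
p, q = 61, 53             # the secret primes
n = p * q                 # public modulus (3233); factoring n breaks the scheme
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # fast: modular exponentiation
recovered = pow(ciphertext, d, n)  # fast, but only with the private key

print(ciphertext, recovered)       # recovered == 42
```

With 61 and 53, anyone can factor 3233 by hand; with primes hundreds of digits long, no known classical algorithm can factor n in practical time, while the exponentiations stay fast.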

In each case, computers didn't merely accelerate human calculation; they compressed complex phenomena into tractable condensed abstractions. The pattern is clear: compression-abstraction layers take "BRICKS" (data, theories, observations) and compress-abstract them into "WALLS" (condensed concepts, discoveries, theorems). But these "WALLS" don't just sit static; they become "BRICKS" for the next layer of compression-abstraction.


Discovering New Limits: When Bricks Revealed the Need for New Compression-Abstraction Layers

The computational compression-abstraction layer that emerged in the mid-20th century revealed its own limits while simultaneously providing new "BRICKS" to transcend them, carrying us toward the development of artificial intelligence systems.

Claude Shannon's 1948 formalization of information theory provided a mathematical framework for quantifying information as bits (4), creating the conceptual foundation for representing knowledge computationally.
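
Shannon's central quantity, the entropy H(X) = -Σ p(x) log₂ p(x), is simple enough to compute directly. A minimal sketch:

```python
# A minimal sketch of Shannon entropy: the average number of bits
# needed to encode outcomes drawn from a probability distribution.
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a biased coin is more predictable
print(entropy_bits([0.25] * 4))   # 2.0 bits: one of four equally likely symbols
```

The more predictable the source, the fewer bits it takes: quantifying information and compressing it are two views of the same act.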

Richard Karp's 1972 demonstration that 21 diverse computational problems were NP-complete (5) revealed fundamental limits to what classical algorithms could achieve: problems whose solutions can be verified quickly but that seemingly require exponential time to solve optimally.
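
Subset sum, one of Karp's 21 problems, shows this asymmetry in a few lines. A minimal sketch (brute force is chosen deliberately; no known general algorithm avoids the worst-case exponential blow-up):

```python
# NP in miniature: verifying a proposed subset-sum solution is instant,
# but finding one may require examining up to 2^n subsets.
from itertools import combinations

def verify(numbers, subset, target):
    # Polynomial time: a membership check plus one sum.
    return sum(subset) == target and all(x in numbers for x in subset)

def solve_brute_force(numbers, target):
    # Exponential time in the worst case: every subset is a candidate.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve_brute_force(nums, 9)
print(solution)                    # e.g. (4, 5)
print(verify(nums, solution, 9))   # True, checked instantly
```

Each extra number doubles the candidate subsets, so brute force becomes impractical at a few dozen elements, while verification stays trivial at any size.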

This paradox created a necessity: as data volumes exploded and computational ambitions grew, many real-world optimization and pattern-recognition problems fell into this computationally intractable space. Yet the computational era itself provided the conceptual tools that would point toward the solution: Gregory Chaitin's algorithmic information theory, developed in the 1960s in parallel with the work of Kolmogorov and Solomonoff, defined information content as the size of the smallest program needed to generate a string (6), establishing compression as the fundamental measure of structure and meaning, a principle suggesting that the ability to solve complex problems might be understood as the discovery of optimal compression methods.
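
Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a rough practical proxy for it. A minimal sketch, using zlib as the stand-in compressor:

```python
# Structured data has a short "program" (description); random data does not.
import random
import zlib

structured = ("0123456789" * 1000).encode()                   # highly regular
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(10_000))   # no structure

print(len(structured), "->", len(zlib.compress(structured)))  # 10,000 -> tiny
print(len(noise), "->", len(zlib.compress(noise)))            # 10,000 -> ~10,000
```

The regular string collapses to a few dozen bytes because the pattern is its description, while the random bytes are essentially incompressible: compression length tracks structure.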

Paul Thagard's computational philosophy of science demonstrated that computational approaches could introduce entirely new representational schemes beyond formal logic, including prototypical concepts, concept hierarchies, and causal networks (7), while Paul Humphreys argued in "Extending Ourselves" that computational methods create a new kind of empiricism, where machines can extend human epistemological reach beyond what our cognitive abilities alone could achieve (8).

Computational science had expanded our comprehension limits beyond what human cognition alone could directly access. The breakthrough probably came when researchers realized that the 1986 backpropagation algorithm by Rumelhart, Hinton, and Williams enabled neural networks to learn internal representations through gradient-based weight adjustments (9), offering a fundamentally different approach: not seeking optimal solutions through exhaustive search, but discovering useful compressions through learned statistical patterns across vast datasets, effectively enhancing the search for compression-abstraction layers that make complex phenomena tractable.
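
A minimal sketch of that idea: a tiny two-layer network learning XOR with hand-written backpropagation (illustrative hyperparameters and initialization; depending on the random seed, more steps may be needed):

```python
# Backpropagation in miniature: forward pass, error gradient,
# gradient-based weight adjustments. The hidden layer learns an
# internal representation that makes XOR separable.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    h = sigmoid(X @ W1)        # forward: hidden representation
    out = sigmoid(h @ W2)      # forward: prediction
    d_out = (out - y) * out * (1 - out)   # gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient propagated backward
    W2 -= 0.5 * h.T @ d_out    # weight adjustments down the gradient
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))            # approaches [[0], [1], [1], [0]]
```

No rule for XOR is programmed anywhere; the network discovers a compressed internal representation of the problem purely from examples.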

The computational layer had given us the language to measure information, shown us the boundaries of algorithmic tractability, formalized compression as the essence of understanding, and demonstrated that machines could extend human epistemological reach. These very concepts and limits revealed the necessity for a new compression-abstraction layer capable of discovering compressed representations in data, transforming the intractable into the learnable and expanding our comprehension limits from what could be computed through explicit algorithms to what could be discovered through learned abstractions.


The AI-LLM Compression-Abstraction Layer: When Walls Become Bricks for Building Rooms

AI-LLM systems represent a new compression-abstraction layer that takes both raw "BRICKS" (massive datasets, theories, observations) and new "WALLS" (computational algorithms, mathematical theorems, scientific knowledge) as input, filtering them together to create even more condensed concepts: what we might call "ROOMS", entire spaces of understanding built from progressive layers of compression-abstraction.

DeepMind's AlphaFold 2 solved the 50-year-old protein folding problem in 2020, predicting protein structures with near-atomic accuracy (10). It demonstrated how neural networks trained on vast datasets can compress and abstract previously intractable "BRICKS" and "WALLS" (data, theories, observations, condensed concepts) via learned representations, regularly predicting protein structures with atomic accuracy even when no similar structure is known: a form of generalization that computational methods alone could not achieve. These predictions are now "ROOMS" that will become "BRICKS" for engineering, manufacturing, and technology development. Where human biochemists alone saw overwhelming complexity requiring decades of experimental work per protein, the artificial neural architecture discovered compressed geometric representations in high-dimensional space that made the incomprehensible tractable.

The power extends beyond solving known problems to discovering what we may call "new problem spaces": Google DeepMind's GNoME system predicted 2.2 million new crystal structures in 2023, equivalent to nearly 800 years of knowledge, including 380,000 stable materials (11,12), materials that escaped previous human chemical intuition. The AI revealed an entire landscape of viable materials humanity didn't know existed. Even more provocatively, the Ramanujan Machine uses algorithms to automatically generate mathematical conjectures about fundamental constants without prior knowledge of the underlying mathematical structure, discovering previously unknown formulas for π, e, Catalan's constant, and the Riemann zeta function that mathematicians must then prove (13). The system proposes mathematical truths before humans understand why they're true.

This raises a deep epistemological question: when AI systems operating at higher abstraction layers generate patterns or solutions we cannot initially comprehend, we face the challenge of distinguishing genuine insight from confabulation without dismissing as "hallucination" what might be valid discovery at compression-abstraction layers beyond current human conceptual frameworks. It is a new kind of empiricism where, as philosopher Paul Humphreys argued, human cognitive abilities may no longer serve as the ultimate epistemic standard, and AI doesn't merely accelerate known discovery methods but introduces fundamentally new modes of compressing complexity into meaning.

The recursive pattern is undeniable: our minds, computational systems, and AI/LLMs all function as compression-abstraction layers. They take "BRICKS" (raw data, observations, theories) and compress them into "WALLS" (condensed concepts, discoveries). But crucially, these "WALLS" simultaneously serve as "BRICKS" when filtered through new or higher compression-abstraction layers, creating new "ROOMS" that later could become new "BRICKS" to build "BUILDINGS": entire integrated spaces of knowledge built from multiple layers of recursive compression-abstraction.


To navigate this new compression-abstraction layer era, where "WALLS" continuously become "BRICKS" and each abstraction layer enables the next, we must cultivate three essential capacities:

First, we must embrace what philosophers call intellectual humility: the capacity to appreciate the limits and fallibility of one's own knowledge without this posing an intellectual threat. We must be willing to call into question what we know, because what we may consider an immutable "WALL" might actually function better as a "BRICK", a component for building something more elegant through a different compression-abstraction layer. As Socrates hinted with his "I know that I know nothing", true wisdom involves recognizing that our condensed concepts might need to be reconsidered, joined, and recompressed through new frameworks. Epistemological humility means questioning our foundational assumptions, recognizing that today's "WALLS" are tomorrow's "BRICKS", and that paradigm shifts often require treating established knowledge as raw material for new abstractions. It means recognizing that the human observer is inseparable from what is being observed, holding conclusions as provisional and subject to re-evaluation in light of new evidence or broader perspectives.

Second, chase simplicity as the path to understanding. Paradoxically, the art of simplification may mean accepting that some AI-discovered patterns initially appear complex, precisely because we lack the corresponding compression-abstraction layer or are simply not used to thinking that way. Occam's razor and compression theory suggest that an ideal data compressor would also be a scientific explanation generator: the best explanations are those that compress the most information into the simplest form (14). Simplicity preferences reflect a core principle of cognition: when faced with noisy observations, we naturally favor explanations that are simple yet adequate, rather than complex models that might merely be fitting random noise.
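
One way to make this preference concrete is the minimum-description-length idea: score each candidate model by model bits plus error bits and pick the smallest total. A minimal sketch with a deliberately crude encoding (32 bits per coefficient, a Gaussian code for residuals; error bits can be negative, and only the differences between totals matter):

```python
# Occam's razor as compression: among polynomial "explanations" of noisy
# linear data, the two-part description length (model + residual error)
# is minimized by the simplest adequate model, degree 1.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05, size=x.size)  # truly linear + noise

for degree in range(6):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = 32 * (degree + 1)                          # crude model code
    error_bits = 0.5 * x.size * np.log2(np.var(residuals))  # Gaussian residual code
    print(degree, round(model_bits + error_bits, 1))        # degree 1 wins
```

Degree 0 pays heavily in error bits; degrees 2 and above pay in model bits while barely shrinking the residuals: the simplest adequate explanation compresses best.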

Third, find new ways to learn in a world where data has begun reaching sizes not manageable by the human mind alone. We keep building new compression-abstraction layers, each one necessary because the previous layer's outputs, "WALLS" and "BRICKS", have become too numerous and too complex to manage without further compression. Computational systems emerged because human minds alone couldn't process certain "BRICKS" together. AI/LLM systems emerged because computational outputs, "WALLS" and "BRICKS", had themselves become too numerous and complex, requiring a new layer to compress them into higher-order understanding.

Crucially, we must recognize that AI may not just help us discover more, but actively participate in constructing new sciences and frameworks for modeling reality that may seem alien to human intuition. This requires us to develop the ability to distinguish genuine discoveries (valid new "ROOMS") from confabulation (random patterns) without prematurely rejecting what we don't yet understand. Here lies the ultimate challenge: can we build frameworks that make this distinction reliably? Can we maintain what one philosopher termed unbounded optimism about what is possible, while exercising rigorous skepticism?


The Recursive Nature of the Architecture of Knowledge

The fundamental insight is this: knowledge grows through recursive compression-abstraction layers, where each layer's outputs become the next layer's inputs. Raw observations ("BRICKS") → compressed through human cognition → theories ("WALLS") → which become "BRICKS" → compressed through computational systems → algorithms and proven theorems ("ROOMS") → which become "BRICKS" → compressed through AI/LLMs → learned representations and discoveries ("BUILDINGS").

Each compression-abstraction layer (human mind, computational systems, AI/LLMs) acts as both tool and enabler, expanding our comprehension limits by transforming what was previously overwhelming into something tractable. But the process never stops: today's "BUILDINGS" become tomorrow's "BRICKS" for building even more sophisticated knowledge structures.

Our ability to expand the comprehension limits of our universe depends on our willingness to learn new languages and frameworks of compression-abstraction that compress today's overwhelming complexity into tomorrow's elegant simplicity. We must be comfortable with a world where:

  • What we worked hard to understand ("ROOMS") might need to become mere components ("BRICKS") for new understandings
  • Compression happens at scales and speeds beyond individual human cognition alone
  • The "BUILDINGS" we build today will be the BRICKS others will use tomorrow
  • Each layer reveals new limits while providing the tools to transcend them

In this context, we are the drivers, while computation and AI are the enablers. Each compression-abstraction layer extends what we can comprehend, each "ROOM" potentially becoming a "BRICK", each limit revealing the necessity for the next layer, in an ongoing recursive process of human knowledge expansion.


Disclaimer

This article reflects my personal thoughts developed during my own research. None of it should be taken as absolute truth, but rather as a starting point for discussion. Remember that artificial intelligence can make mistakes, so its output likewise must not be taken as absolute truth.


Sources

  1. https://celebratio.org/Haken_W/article/794/
  2. https://www.technologyreview.com/2011/02/22/196987/when-the-butterfly-effect-took-flight/
  3. https://dl.acm.org/doi/10.1145/359340.359342
  4. https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
  5. https://en.wikipedia.org/wiki/Karp's_21_NP-complete_problems
  6. https://en.wikipedia.org/wiki/Kolmogorov_complexity
  7. https://direct.mit.edu/books/monograph/2813/Computational-Philosophy-of-Science
  8. https://www.abebooks.com/9780195158700/Extending-Computational-Science-Empiricism-Scientific-0195158709/plp
  9. https://www.nature.com/articles/323533a0
  10. https://www.nature.com/articles/s41586-021-03819-2
  11. https://pmc.ncbi.nlm.nih.gov/articles/PMC10700131/
  12. https://deepmind.google/blog/millions-of-new-materials-discovered-with-deep-learning/
  13. https://en.wikipedia.org/wiki/Ramanujan_machine
  14. https://arxiv.org/abs/2506.23194