Friday, January 19, 2024

Deep Learning and Solving NP-Complete Problems

This recent article about AlphaGeometry solving Olympiad geometry proof problems brought to my attention that there has been active research in recent years combining deep learning with theorem proving. The novelty of AlphaGeometry is that it specializes in constructing geometry proofs, which previous approaches have struggled with, using synthetic training data generated under the guidance of a theorem prover.

Like type checking in programming languages, theorem proving is a formal method: it overcomes the ambiguity of mathematical proofs written in natural language by formalizing proof terms in an algebraic language with rigorous semantics (e.g. Lean), so that proofs written in the language can be verified algorithmically for correctness. In the past, proofs had to be tediously written by humans, as computers had great difficulty constructing them. Computers can do exhaustive proof search, but the search space blows up exponentially in the number of terms, which renders exhaustive search intractable. However, once a proof is constructed, verification is quick. Problems whose solutions can be verified quickly form the complexity class NP; the hardest among them, for which no efficient solving algorithm is known, are called NP-complete.
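The solve-versus-verify asymmetry can be seen concretely in boolean satisfiability, the canonical NP-complete problem. This is a toy sketch, not a real solver: `clauses`, `verify`, and `solve` are illustrative names, and the brute-force loop is exactly the exponential exhaustive search described above.

```python
from itertools import product

# A toy SAT instance: each clause is a list of literals, where a
# positive integer i means variable i and -i means "not variable i".
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]

def verify(assignment, clauses):
    """Checking a proposed solution is fast: linear in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def solve(clauses, n_vars):
    """Exhaustive search tries all 2^n assignments: exponential blow-up."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(assignment, clauses):
            return assignment
    return None

solution = solve(clauses, 3)
print(solution)                    # a satisfying assignment
print(verify(solution, clauses))   # True, checked in linear time
```

Doubling the number of variables squares the work done by `solve`, while `verify` stays cheap; this gap is what makes proof search hard and proof checking easy.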

Traditional proof search limits the search space to simple deductions, which offloads some of the tedium of writing proofs manually. In recent years, machine learning has been used to prune or prioritize the search space, making it possible to write longer proofs and solve larger problems. It is now possible for computers to construct complete mathematical proofs for a selection of high-school-level math problems. The idea is a natural extension of how AlphaGo prunes the game's search space, which I have written about before.
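The pruning idea can be sketched with a toy forward-chaining prover. Everything here is hypothetical: the `rules` are made up, and `scorer` is a stand-in for a learned model that ranks which deduction to try next instead of enumerating blindly.

```python
# A toy forward-chaining prover. Rules are (premises, conclusion)
# pairs; a fact is proven when all premises of some rule are known.
rules = [
    ({"a", "b"}, "c"),
    ({"c"}, "d"),
    ({"a"}, "e"),
    ({"d", "e"}, "goal"),
]

def scorer(conclusion):
    # Stand-in for a learned policy: lower score = try first.
    # A real system would score with a trained neural network.
    return 0 if conclusion == "goal" else len(conclusion)

def prove(facts, rules, goal, max_steps=100):
    """Greedy best-first deduction guided by the scorer."""
    known = set(facts)
    steps = []
    for _ in range(max_steps):
        if goal in known:
            return steps
        # Rules whose premises are all proven and whose conclusion is new.
        applicable = [(p, c) for p, c in rules
                      if p <= known and c not in known]
        if not applicable:
            return None  # stuck: no deduction applies
        premises, conclusion = min(applicable, key=lambda pc: scorer(pc[1]))
        known.add(conclusion)
        steps.append((premises, conclusion))
    return None

print(prove({"a", "b"}, rules, "goal"))
```

In a real guided prover the branching factor is enormous and the scorer's rankings decide which subtrees ever get explored; here the greedy choice merely orders a handful of deductions, but the division of labor is the same.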

This is quite promising for a programming language that embeds theorem proving in its type system to express more elaborate safety properties, such as Applied Type System (ATS), designed by my advisor, which I have written about before. Programs written in ATS enjoy a high degree of safety: think of it as a super-charged Rust, even though ATS predates Rust. Now that generative AI can help people write Rust, a logical next step would be a generative AI for ATS.

I would be interested to see how machine learning can be applied to other NP-complete problems in general, such as bin packing, job shop scheduling, or the traveling salesman problem. However, machine learning will face fierce competition here, as these problems are traditionally solved using approximation algorithms, which come up with close-enough solutions using fast heuristics. Also, only a few of them have real-world applications. Admittedly, none of them is as eye-catching as "AI is able to solve math problems," which gives the impression of achieving general intelligence; in reality, much of that intelligence is built into the theorem-proving engine used in conjunction with machine learning.
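To show what such a fast heuristic looks like, here is the classic first-fit decreasing rule for bin packing. The item sizes are made-up example data; the algorithm itself is standard and is known to use at most roughly 11/9 of the optimal number of bins.

```python
def first_fit_decreasing(items, capacity):
    """Pack items into as few bins as possible, heuristically:
    sort largest-first, then drop each item into the first bin
    that still has room, opening a new bin only when none does."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packing = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(packing)       # [[8, 2], [4, 4, 1, 1]]
print(len(packing))  # 2
```

Finding a provably optimal packing is NP-complete, but this heuristic runs in near-linear time after sorting; any learned approach has to beat that speed-to-quality trade-off to be worth adopting.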

However, I think proof construction is a very promising direction for machine learning. Traditionally, machine learning models are black boxes: even though they can "do stuff," they cannot rationalize how they do it, let alone teach a human or share the same insight with a different machine learning model. With proof construction, at least the proofs are human-readable and have a universal meaning, which means we can actually learn something from the new insights gained by machine learning. It is still not general intelligence, and the symbolic insight to be gained might very well be a mere probabilistic coincidence, but it does allow machine learning to begin contributing to human knowledge.
