Science aims to discover concise, explanatory formulae that align with background theory and experimental data. Traditionally, scientists have derived natural laws through manual equation manipulation and experimental verification, a slow and labor-intensive process. The scientific method has advanced our understanding, but the rate of discovery and its economic impact have stagnated, partly because the most easily accessible scientific insights have already been found. To address this, integrating background knowledge with experimental data is essential for discovering more complex natural laws. Recent advances in global optimization methods, driven by improvements in computational power and algorithms, offer promising tools for scientific discovery.
Researchers from Imperial College Business School, Samsung AI, and IBM propose an approach to scientific discovery that models axioms and laws as polynomials. Using binary variables and logical constraints, they solve polynomial optimization problems via mixed-integer linear or semidefinite optimization, validating the results with Positivstellensatz certificates. Their method can derive well-known laws, such as Kepler's third law and the radiated gravitational wave power equation, from hypotheses and data. This approach ensures consistency with background theory and experimental data and provides formal proofs. Unlike deep learning methods, which can produce unverifiable results, their technique yields scalable and verifiable discovery of new scientific laws.
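To make "laws as polynomials" concrete, Kepler's third law for a two-body system can be written in implicit polynomial form as G(m1 + m2)T² − 4π²d³ = 0. The sketch below (illustrative variable names, not the paper's code) checks that this polynomial residual vanishes on synthetic orbital data generated to satisfy the law:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def kepler_residual(m1, m2, d, T):
    """Residual of the polynomial form of Kepler's third law:
    G*(m1 + m2)*T^2 - 4*pi^2*d^3 = 0 for an exact two-body orbit."""
    return G * (m1 + m2) * T**2 - 4 * math.pi**2 * d**3

# Synthetic data: approximate Earth-Sun values, with the period chosen
# to satisfy the law exactly.
m_sun, m_earth = 1.989e30, 5.972e24   # kg
d = 1.496e11                          # semi-major axis, m
T = 2 * math.pi * math.sqrt(d**3 / (G * (m_sun + m_earth)))  # period, s

# The residual is negligible relative to the magnitude of either term.
scale = 4 * math.pi**2 * d**3
print(abs(kepler_residual(m_sun, m_earth, d, T)) / scale)
```

A candidate law is thus an algebraic identity that must hold on the data, rather than an explicit function y = f(x) to be regressed.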
The study establishes fundamental definitions and notations, including scalars, vectors, matrices, and sets. Key symbols include b for scalars, x for vectors, A for matrices, and Z for sets, and various norms and cones from the sum-of-squares (SOS) optimization literature are defined. Putinar's Positivstellensatz is introduced as the tool for deriving new laws from existing ones. AI-Hilbert aims to discover a low-complexity polynomial model q(x) = 0 that is consistent with the axiom sets G and H, fits the experimental data, and satisfies a degree bound. The resulting optimization problem balances fidelity to the data against fidelity to the hypotheses via a hyperparameter λ.
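The core task of finding an implicit polynomial q(x) = 0 that holds on the data can be illustrated with a deliberately simplified, data-only sketch (the actual method additionally enforces background-theory constraints, the degree bound, and the λ-weighted tradeoff via mixed-integer and semidefinite optimization). Here, a hidden law y − 2x² − 3 = 0 is recovered as a null vector of a monomial data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data obeying a hidden implicit law: y - 2*x**2 - 3 = 0.
x = rng.uniform(-2, 2, 50)
y = 2 * x**2 + 3

# Candidate monomial basis for q(x, y); the true law is a combination of these.
basis = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
names = ["1", "x", "y", "x^2", "x*y", "y^2"]

# q(x, y) = 0 on all data points  <=>  basis @ c = 0, so the coefficient
# vector c spans the null space of the data matrix. The right-singular
# vector for the smallest singular value gives that null vector.
_, s, vt = np.linalg.svd(basis, full_matrices=False)
c = vt[-1]
c = c / c[2]  # normalize so the y coefficient is 1 (fine for this toy law)

print(dict(zip(names, np.round(c, 6))))
```

The recovered coefficients correspond to y − 2x² − 3 = 0. In AI-Hilbert proper, binary variables select which monomials enter the model, which is how the degree/complexity constraint and best subset selection are imposed.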
AI-Hilbert is a paradigm for scientific discovery that identifies polynomial laws consistent with experimental data and a background knowledge base of polynomial equalities and inequalities. Inspired by David Hilbert's work on the relationship between sum-of-squares and non-negative polynomials, AI-Hilbert ensures that discovered laws are axiomatically correct given the background theory. When the background theory is inconsistent, the approach identifies the sources of inconsistency through best subset selection, determining which hypotheses best explain the data. This methodology contrasts with current data-driven approaches, which can produce spurious results in limited-data settings and fail to differentiate between valid and invalid discoveries or to explain their derivations.
AI-Hilbert integrates data and theory to formulate hypotheses, using the theory to reduce the search space and compensate for noisy or sparse data; conversely, the data helps compensate for inconsistent or incomplete theory. The approach formulates a polynomial optimization problem from the background theory and data, reduces it to a semidefinite optimization problem, and solves it to obtain a candidate formula together with its formal derivation. The method incorporates hyperparameters to control model complexity and defines a distance metric to quantify the relationship between the background theory and the discovered law. Experimental validation demonstrates AI-Hilbert's ability to derive correct symbolic expressions from complete and consistent background theories without numerical data, to handle inconsistent axioms, and to outperform other methods in various test cases.
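The semidefinite step rests on sum-of-squares certificates: a polynomial p is SOS precisely when p(x) = z(x)ᵀQz(x) for some positive semidefinite Gram matrix Q over a monomial vector z, and finding such a Q is a semidefinite feasibility problem. A hand-checked toy certificate (not the paper's SDP pipeline) for p(x) = x² + 2x + 2 with z = (1, x):

```python
import numpy as np

# p(x) = x^2 + 2x + 2 is a sum of squares: p = (x + 1)^2 + 1.
# Equivalently, with monomial vector z = (1, x), p(x) = z^T Q z for:
Q = np.array([[2.0, 1.0],    # Q[0,0] -> constant term 2
              [1.0, 1.0]])   # off-diagonals -> 2x; Q[1,1] -> x^2

# Sanity check: z^T Q z reproduces p at sample points.
for xv in (-3.0, 0.0, 2.5):
    z = np.array([1.0, xv])
    assert np.isclose(z @ Q @ z, xv**2 + 2 * xv + 2)

# Q being positive semidefinite certifies p(x) >= 0 for every real x:
# a miniature Positivstellensatz-style proof of nonnegativity.
eigvals = np.linalg.eigvalsh(Q)
print("Gram eigenvalues:", eigvals)
print("certified nonnegative:", bool(eigvals.min() >= -1e-12))
```

In the full method, an SDP solver searches over such Gram matrices subject to the background-theory constraints, so the solution itself constitutes the formal derivation of the discovered law.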
The study introduces an innovative method for scientific discovery that integrates real algebraic geometry and mixed-integer optimization to derive new scientific laws from incomplete axioms and noisy data. Unlike traditional methods relying solely on theory or data, this approach combines both, enabling discoveries in data-scarce and theory-limited contexts. The AI-Hilbert system identifies implicit polynomial relationships among variables, offering advantages in handling non-explicit representations common in science. Future directions include extending the framework to non-polynomial contexts, automating hyperparameter tuning, and improving scalability by optimizing the underlying computational techniques.
Check out the Paper and Details. All credit for this research goes to the researchers of this project.
The post IBM Researchers Introduce AI-Hilbert: An Innovative Machine Learning Framework for Scientific Discovery Integrating Algebraic Geometry and Mixed-Integer Optimization appeared first on MarkTechPost.