Faculty Advisor: Brendan Juba
Can computers manipulate high-level ideas
to solve problems, as well as they can push symbols and
crunch numbers? A "yes" answer would further suggest that computers might achieve a kind of understanding in some areas of knowledge. For today's computers this does not appear to be the case, and some philosophers, such as Searle, hold that it never could be. While we do not claim to answer or even address this question, it has thoroughly inspired our work. We aim to give a formal treatment of strategies as high-level ideas for problem-solving, and to provide a way to view strategies through the lens of complexity theory.
Prima facie, formalizing 'high-level idea' looks like a self-defeating proposition: if a 'high-level idea' is made formal, it would seem to cease to be merely an idea. An idea is inherently intuitive, and perhaps even vague; this is exactly the kind of object that a computer cannot manipulate. However, the indeterminacy of a process is not an impediment to its formalization. For example, in the definition of a nondeterministic machine, one defines the machine's behavior via a relation over all of its potential "deterministic" behaviors. This is the sort of maneuver we make here. The meaning of a collection of strategies for a computational problem will be tantamount to a partition of the space of instance-solution pairs, admitting certain nice structural and computational properties.
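For concreteness, here is a minimal sketch of the intended object in our own notation; the precise structural and computational conditions are assumptions for illustration and are left open in this abstract. View a computational problem as a relation $R \subseteq \Sigma^* \times \Sigma^*$ of instance-solution pairs. A collection of strategies is then identified with a partition
\[
R = S_1 \cup S_2 \cup \cdots \cup S_k, \qquad S_i \cap S_j = \emptyset \ \text{for } i \neq j,
\]
where each block $S_i$ gathers the pairs $(x, y)$ whose solutions are, informally, found by the same strategy, and where each $S_i$ is required to satisfy the structural and computational conditions alluded to above (for instance, being efficiently recognizable or efficiently searchable).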
To this end, we are investigating how such a partition may be realized in the domain of theorem proving. That is, we are considering methods for clustering theorem-proof pairs to see whether a simple notion exists that indeed groups proofs by the similarity of the strategies they use. Such a method would be of particular practical value to theorem provers based on "proof planning" [1], which use high-level plans both to guide the search for proofs and to produce proofs that are more understandable to a human mathematician, since work has already begun on automatically extracting these high-level plans from sets of well-chosen example proofs [2], which are precisely what we aim to produce.
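As a rough illustration of the kind of clustering experiment involved, the sketch below groups toy proofs by counting the inference rules they apply. The feature representation, the use of k-means, and every name in the snippet are illustrative assumptions, not the method under investigation.

# Illustrative sketch only: cluster proofs by crude feature vectors
# (counts of inference-rule names) to see whether proofs that use
# similar strategies land in the same block of the partition.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

# Hypothetical toy data: each proof is a list of inference-rule applications.
proofs = {
    "thm_A": ["induction", "rewrite", "rewrite", "simplify"],
    "thm_B": ["induction", "rewrite", "simplify"],
    "thm_C": ["case_split", "contradiction", "simplify"],
    "thm_D": ["case_split", "contradiction", "rewrite"],
}

names = list(proofs)
features = [Counter(steps) for steps in proofs.values()]  # rule-count dicts

X = DictVectorizer(sparse=False).fit_transform(features)  # dicts -> feature matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for name, label in zip(names, labels):
    print(f"{name}: cluster {label}")

On this toy data, the proofs sharing an inductive strategy should separate from those built on case analysis; the open question in practice is whether any comparably simple representation separates real proofs along strategy lines.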
[1] A. Bundy. "The use of explicit plans to guide inductive proofs." In Ninth Conference on Automated Deduction, Lecture Notes in Computer Science, Vol. 203, pp. 111-120, Springer-Verlag, 1988.
[2] M. Jamnik, M. Kerber, M. Pollet and C. Benzmüller. "Automatic learning of proof methods in proof planning." Logic Journal of the IGPL, 11(6):647-674, 2003.