See PyETR to LLM: What Question to Ask? for more details.
Convert existing cases into a format that can be easily run through an LLM engine.
See https://github.com/dreamingspires/PyETR/blob/master/pyetr/cases.py for a list of cases.
Each case has its data on it as class variables, and you can call to_str() or to_fol() on each one to get a logical string.
Here’s an example:
class e17(DefaultInference, BaseExample):
    """
    Example 17, p83

    P1 There is a king in the hand and there is not an ace in the hand, or else there is an ace in the hand and there is not a king in the hand.
    P2 There is a king in the hand.
    C There isn't an ace in the hand.
    """

    # v stands for view
    v: tuple[View, View] = (
        ps("{~King()Ace(),King()~Ace()}"),
        ps("{King()}"),
    )
    # c stands for conclusion
    c: View = ps("{~Ace()}")
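To make the to_str() / to_fol() point concrete, here is a usage sketch. The notes above don't pin down exactly where these methods live; the sketch assumes they are on the View objects the case holds, which is an assumption about PyETR's API rather than something confirmed here.

from pyetr.cases import e17

# Render each premise view of the case as a logical string.
for view in e17.v:
    print(view.to_str())  # PyETR's native view notation (assumed method)
    print(view.to_fol())  # first-order-logic style string (assumed method)

# And the expected conclusion.
print(e17.c.to_fol())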
Any class there which subclasses DefaultInference is a test with an assertion that checks v against c.
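Putting those two facts together, the conversion step could be a small script that enumerates every DefaultInference case in pyetr.cases and renders its premises and conclusion as strings for an LLM prompt. This is a minimal sketch: the prompt wording is made up, and it assumes DefaultInference is importable from pyetr.cases and that each View exposes to_fol().

import inspect

import pyetr.cases as cases


def cases_to_prompts() -> list[tuple[str, str, str]]:
    """Return (case name, question, expected answer) triples."""
    prompts = []
    for name, cls in inspect.getmembers(cases, inspect.isclass):
        # Keep only the concrete DefaultInference test cases.
        if issubclass(cls, cases.DefaultInference) and cls is not cases.DefaultInference:
            premises = "; ".join(view.to_fol() for view in cls.v)
            question = f"Premises: {premises}. What follows?"
            answer = cls.c.to_fol()  # the conclusion the test asserts
            prompts.append((name, question, answer))
    return prompts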
Create a harness for running questions through LLMs. See PyETR to LLM: What Question to Ask? for details about that, but the upshot is that I'm going to use LM Evaluation Harness, which should do what we want.
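On the harness side, lm-evaluation-harness has a Python entry point, lm_eval.simple_evaluate, alongside its CLI. A minimal sketch of running a custom task through it; the task name pyetr_inference is hypothetical and would need to be registered ourselves from the prompts above, and the model id is just a placeholder.

import lm_eval

# "pyetr_inference" is a hypothetical custom task built from the PyETR
# cases; it does not ship with the harness.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",  # any HF model id
    tasks=["pyetr_inference"],
)
print(results["results"])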
Difficulty