Cooperative Discourse Search
This is an open investigation into whether we can use adversarial search (e.g. minimax, alpha-beta pruning) to predict how to respond in dialog or debate, how our interlocutor would respond in turn, and whether all dialog/debate is adversarial in the first place.
P1: I want to make an appointment.
P2: (what should I say such that the world is better for both of us?)
That is, would it be possible / useful to model the dialog between two (or more?) agents as adversaries (say, like in chess) trying to maximize their goals (say, their positive propositional attitudes like desires/wants)? Does the Gricean cooperative principle make us re-evaluate this as cooperative rather than adversarial? Does the cooperative principle even appear in debate (e.g. political debate)?
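The adversarial-versus-cooperative framing above can be sketched as a single search procedure where only the scoring rule changes. Everything here is an illustrative assumption: `DialogState`, the stub `moves`/`apply` functions, and the desire-counting score are made-up stand-ins, not an established formalism.

```python
from dataclasses import dataclass

@dataclass
class DialogState:
    # utterances so far, as (agent, text) pairs
    utterances: tuple = ()
    # desires satisfied so far, one frozenset per agent (hypothetical)
    satisfied: tuple = (frozenset(), frozenset())

def moves(state, agent):
    """Candidate utterances for `agent` (stub: a fixed menu)."""
    return ["offer a slot", "ask a question", "decline"]

def apply(state, agent, utterance):
    """Transition stub: just appends the utterance; a real model would
    also update the satisfied-desire sets from the utterance's effects."""
    return DialogState(state.utterances + ((agent, utterance),),
                       state.satisfied)

def score(state, agent, cooperative):
    # Adversarial: my satisfied desires minus yours (zero-sum, as in chess).
    # Cooperative (the Gricean reading): the sum over both agents.
    mine = len(state.satisfied[agent])
    theirs = len(state.satisfied[1 - agent])
    return mine + theirs if cooperative else mine - theirs

def search(state, agent, depth, cooperative=False):
    """Negamax-style minimax over dialog turns; with cooperative=True both
    agents maximize the same shared score, so it becomes joint planning."""
    if depth == 0:
        return score(state, agent, cooperative), None
    best_value, best_move = float("-inf"), None
    for utterance in moves(state, agent):
        child = apply(state, agent, utterance)
        value, _ = search(child, 1 - agent, depth - 1, cooperative)
        # In the adversarial case the opponent's gain is my loss.
        value = value if cooperative else -value
        if value > best_value:
            best_value, best_move = value, utterance
    return best_value, best_move
```

With the stub transition the score never changes, so the search just returns the first candidate; the point is only that the cooperative case differs from the adversarial one by a single line in `score`.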
If so, what would the evaluation criteria be? Whether the end state of the conversation leads to more or fewer desires/wants being accomplished, or at least gets us closer to them? Is there an equivalent of reinforcement learning for adversarial discourse search, so that an agent could learn the evaluation criteria on its own?
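One minimal version of the "more or fewer desires accomplished" criterion is just the fraction of an agent's desires the conversation has satisfied; the "gets us closer to them" reading would replace set membership with partial credit per desire. The desire names below are hypothetical.

```python
def evaluate(desires, satisfied):
    """Fraction of an agent's desires that the conversation satisfied.
    An empty desire set counts as fully satisfied by convention."""
    if not desires:
        return 1.0
    return len(desires & satisfied) / len(desires)

# e.g. P1 wanted an appointment and a morning slot, but only got the former
evaluate({"appointment", "morning slot"}, {"appointment"})  # -> 0.5
```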
Is there an equivalent of a stalemate in dialog? That is, are circular arguments effectively stalemates ("Eat your vegetables" -> "I want candy" -> "Eat your vegetables!" -> ...)? If so, is there a dialog analogue of chess's threefold-repetition or 50-move rule?
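The circular-argument case above maps most directly onto chess's repetition rule: declare the exchange drawn once the same (speaker, utterance) pair has occurred some fixed number of times. The limit of 3 is an arbitrary assumption borrowed from threefold repetition.

```python
from collections import Counter

def is_stalemate(utterances, repetition_limit=3):
    """Dialog analogue of the threefold-repetition rule: the conversation
    is declared stuck once any (speaker, utterance) pair has occurred
    `repetition_limit` times."""
    counts = Counter(utterances)
    return any(n >= repetition_limit for n in counts.values())

# The vegetables/candy loop from above, gone around three times:
loop = [("P1", "Eat your vegetables"), ("P2", "I want candy")] * 3
is_stalemate(loop)  # -> True
```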
Is there something proactive that should be part of the dialog? In chess, you choose a move based on a prediction of what your opponent will do; by analogy, could you utter a response that also announces what you expect them to say back?
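That proactive idea amounts to a one-ply lookahead where the predicted reply is made part of the utterance itself. The reply model and its scores below are made-up stand-ins for a learned response predictor, purely to illustrate the shape of the move.

```python
# Hypothetical response predictor and value of each anticipated reply:
reply_model = {
    "Eat your vegetables": "I want candy",
    "Eat two bites, then candy": "Okay, deal",
}
reply_score = {"I want candy": -1, "Okay, deal": +1}

def proactive_move(candidates):
    """One-ply lookahead: choose the utterance whose anticipated reply
    scores best, and state the expectation up front."""
    best = max(candidates, key=lambda u: reply_score[reply_model[u]])
    return f"{best} (I expect you'll answer: '{reply_model[best]}')"

proactive_move(list(reply_model))
# -> "Eat two bites, then candy (I expect you'll answer: 'Okay, deal')"
```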