Concept
While standard information retrieval (IR) systems present the results of a query as a ranked list of relevant documents, question answering
(QA) systems attempt to return answers in the form of sentences (or paragraphs, or phrases), responding more precisely to the user's request.
However, in most state-of-the-art QA systems the output remains independent of the questioner's characteristics, goals and needs. In other words,
there is a lack of user modelling: a 10-year-old and a university history student would get the same answer to the question "When did the Middle
Ages begin?". Secondly, most of the effort in current QA is devoted to factoid questions, i.e. questions concerning people, dates, etc., which can generally be
answered by a short sentence or phrase.
The main QA evaluation campaign, TREC-QA, has long focused on this type of question, under the simplifying assumption
that there exists only one correct answer. Even recent TREC campaigns do not move sufficiently beyond the factoid approach. They account for two types of non-factoid
questions, list and definitional, but not for non-factoid answers. In fact, a) TREC defines list questions as questions requiring multiple factoid
answers, and b) it is clear that a definition question may be answered by spotting definitional passages (what is not clear is how to spot them). However,
accounting for the fact that some simple questions may have complex or controversial answers (e.g. "What were the causes of World War II?") remains
an unsolved problem. We argue that in such situations returning a short paragraph or text snippet is more appropriate than exact answer spotting. Finally,
QA systems rarely interact with the user: the typical session involves the user submitting a query and the system returning a result; the session
is then concluded.
To address these deficiencies of existing QA systems, we propose an adaptive system in which a QA module interacts with a user model and a dialogue
interface. The dialogue interface provides the query terms to the QA module, and the user model (UM) provides criteria
for adapting query results to the user's needs. Given such information, the goal of the QA module is to discriminate between simple/factoid answers
and more complex answers, presenting them in a TREC-style manner in the first case and more appropriately in the second.
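The adaptation logic described above can be sketched in a few lines. This is a minimal illustrative sketch, not the system's actual implementation: the class and field names (UserModel, expertise, the style labels) are hypothetical, standing in for whatever criteria a real UM component would supply.

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    """Hypothetical user model: the criteria the UM supplies to the QA module."""
    age: int
    expertise: str  # e.g. "novice" or "expert"

def answer_style(is_factoid: bool, user: UserModel) -> str:
    """Choose a presentation style for the answer, adapted to the user.

    Factoid questions get a TREC-style exact answer; complex questions
    get a passage, simplified for young or novice users.
    """
    if is_factoid:
        return "exact"                 # short, TREC-style answer spotting
    if user.age < 14 or user.expertise == "novice":
        return "simplified passage"    # e.g. for the 10-year-old asker
    return "passage"                   # snippet for the history student

# The two users from the Middle Ages example receive different treatments
# only when the question calls for a complex answer:
child = UserModel(age=10, expertise="novice")
student = UserModel(age=21, expertise="expert")
```

Under this sketch, `answer_style(True, child)` and `answer_style(True, student)` both yield "exact", while a complex question yields "simplified passage" for the child and "passage" for the student.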