Instead of typing keywords into a search bar, we would rather chat with a (virtual) librarian and express our ‘information need’ or interest in certain content in the archive through conversation. At the Netherlands Institute for Sound and Vision, we investigate several use cases in the area of Spoken Conversational Search as a means to guide access to our rich, but also large and heterogeneous, audiovisual archive. In our LABS environment, we offer search APIs built on top of indices of rich metadata; these can be connected to a dialogue system that uses keyboard or speech input to communicate with users online and in our museum. Research questions we envision include:
- how to represent the ‘librarian’s knowledge’ of the contents of the archive so that it can be used in a dialogue system?
- how to interpret the user’s intent?
- what are the ethical and legal aspects of conversational AI?
- how to encode multimodal dialogue state and model the evolving ‘information need’ of a user who communicates with the system and receives information from it?
- would it already be possible to develop a ‘convincing system’ for museum visitors with a simple approach that uses a search API and its metadata (e.g., facets from the result list)?
- how to personalise a dialogue based on additional information sources (e.g., a camera that spots a child)?
- how to improve the natural language processing of such a dialogue system (e.g., in a museum environment)?
- how to implement a search dialogue in a VR environment with the librarian as omnipresent oracle?
- could we enrich the information of archival content by using information we obtain from the human-computer dialogue?
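To make the ‘simple approach’ question above concrete, here is a minimal sketch of one dialogue turn driven purely by a faceted search API: the system searches, and if the result list is still large, it uses the facet counts to ask a clarifying question. All names, the toy index, and the `search`/`next_prompt` functions are illustrative assumptions, not the actual LABS API.

```python
# Hypothetical sketch: a facet-driven clarifying turn over a toy archive.
# The records, facet fields, and function names are illustrative only.
from collections import Counter

# Toy stand-in for an archive index: each record has a title and facet fields.
ARCHIVE = [
    {"title": "Polygoon newsreel 1952", "genre": "news", "decade": "1950s"},
    {"title": "Children's puppet show", "genre": "youth", "decade": "1970s"},
    {"title": "Election night broadcast", "genre": "news", "decade": "1980s"},
    {"title": "Nature documentary: the Wadden Sea", "genre": "documentary", "decade": "1980s"},
]

def search(query_terms, filters=None):
    """Return matching records plus facet counts, like a faceted search API would."""
    filters = filters or {}
    hits = [
        r for r in ARCHIVE
        if all(r.get(k) == v for k, v in filters.items())
        and (not query_terms or any(t.lower() in r["title"].lower() for t in query_terms))
    ]
    facets = {field: Counter(r[field] for r in hits) for field in ("genre", "decade")}
    return hits, facets

def next_prompt(hits, facets):
    """Turn the result list into the 'librarian' move: answer or ask to narrow down."""
    if len(hits) <= 2:
        return "Here is what I found: " + "; ".join(r["title"] for r in hits)
    # Ask about the facet that splits the current results into the most options.
    field, values = max(facets.items(), key=lambda kv: len(kv[1]))
    options = ", ".join(sorted(values))
    return f"I found {len(hits)} items. Which {field} are you interested in: {options}?"

hits, facets = search([])                     # user: "show me everything"
print(next_prompt(hits, facets))              # system asks a clarifying facet question
hits, facets = search([], {"genre": "news"})  # user: "news, please"
print(next_prompt(hits, facets))              # system lists the remaining items
```

Even this crude loop exhibits the core conversational-search pattern: the system’s questions are grounded in the metadata of the live result set rather than in a hand-written script.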
Please contact us if you are interested in engaging with our data and APIs for your research or development project.