Recently, there has been growing interest in combining knowledge bases with multiple modalities such as language, vision, and speech. These combinations have led to improvements on various downstream tasks, including question answering, image classification, object detection, and link prediction. The objective of the KBMM workshop is to bring together researchers interested in (a) combining knowledge bases with other modalities to improve downstream tasks, (b) improving the completion and construction of knowledge bases from multiple modalities, and, more generally, in sharing state-of-the-art approaches, best practices, and future directions.
The workshop on Knowledge Bases and Multiple Modalities (KBMM) will consist of contributed posters and invited talks on a wide variety of methods and problems in this area. We invite extended abstract submissions for presentation at the workshop.
We invite submissions of extended abstracts related to Knowledge Bases and Multiple Modalities (KBMM). Since the workshop will not have proceedings comprising full versions of the papers, concurrent submissions to other venues, as well as previously accepted work, are allowed, provided that the concurrent submission or intention to submit is declared to all venues, including KBMM. Accepted work will be presented as posters during the workshop and listed on this website.
Submissions will be refereed on the basis of technical quality, potential impact, and clarity. A subset of the submitted extended abstracts will be accepted for poster presentation at the workshop. At least one author of each accepted submission must attend the workshop to present the work.
1). Prepare a 1-page extended abstract.
2). Upload your submission via the following Google form (PDF only):
3). For any queries, please send an email to email@example.com (remove underscore).
The following speakers have confirmed talks at KBMM so far (more to be added):