{"id":177,"date":"2019-11-21T10:30:58","date_gmt":"2019-11-21T09:30:58","guid":{"rendered":"https:\/\/multi3generation.eu\/?page_id=177"},"modified":"2019-11-21T10:30:58","modified_gmt":"2019-11-21T09:30:58","slug":"wg1","status":"publish","type":"page","link":"https:\/\/multi3generation.inesc-id.pt\/?page_id=177","title":{"rendered":"WG 1 &#8211; Grounded Multimodal Reasoning and Generation"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<p>Linguistic expressions are called <em>grounded<\/em> when they are linked to non-linguistic, especially perceptual data (such as information from modalities like vision and audition); grounding is, in essence, a key aspect of acquiring meaning. This is a long-standing challenge for Artificial Intelligence.<\/p>\n\n\n\n<p>WG1 focuses on grounded representations for AI systems that, amongst other things, use multimodal information to reason, learn, and generate natural language. The central themes for WG1 are the following:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Explainability and transparency in multimodal models;<\/li><li>Complementarity \/ redundancy among data sources or modalities;<\/li><li>Interaction between symbolic &amp; sub-symbolic (e.g. 
neural) representations in models;<\/li><li>The role of commonsense and other knowledge;<\/li><li>Situated reasoning and language generation.<\/li><\/ul>\n\n\n\n<p>WG1 will be working towards:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>Drawing up standards for multimodal data sources;<\/li><li>Defining a research roadmap, through an appraisal of existing work and identification of gaps to be addressed in future work.<\/li><\/ol>\n\n\n\n<p><em>Individuals interested in joining WG1 should contact the chair<\/em>, <strong><a rel=\"noreferrer noopener\" href=\"https:\/\/mehulbhatt.org\/\" target=\"_blank\">Mehul Bhatt<\/a><\/strong> (\u00d6rebro University, Sweden)  \/  mehul.bhatt {at} oru.se<\/p>\n<\/div><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>SELECT PUBLICATIONS<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>L. Parcalabescu, A. Gatt, A. Frank and I. Calixto (2021). Seeing past words: Testing the cross-modal capabilities of pretrained V&amp;L models on counting tasks. <em>In: Beyond Language: Multimodal Semantic Representations Workshop (MMSR) 2021.<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\" start=\"2\"><li>H. Alberts, T. Huang, Y. Deshpande, Y. Liu, K. Cho, C. Vania, I. Calixto (2021). VisualSem: A high-quality knowledge graph for vision and language. <em>In: Multilingual Representation Learning Workshop (MRL) 2021.<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\" start=\"3\"><li>J. Suchan, M. Bhatt, S. Varadarajan (2021). Commonsense Visual Sensemaking for Autonomous Driving: On Generalised Neurosymbolic Online Abduction Integrating Vision and Semantics. <em>In: Artificial Intelligence Journal (AIJ), October 2021, Volume 299.<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li>M. Cafagna, K. van Deemter and A. Gatt (2021). What Vision-Language models \u2018see\u2019 when they see scenes. 
arXiv preprint arXiv:2109.07301<\/li><\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<h2 class=\"wp-block-heading\">CA18231 Meeting<\/h2>\n\n\n\n<div class=\"wp-block-file\"><a href=\"https:\/\/multi3generation.inesc-id.pt\/wp-content\/uploads\/2021\/01\/WG1-meeting-summary-Lisbon-online-Dec2020.docx\">WG1-meeting-summary-Lisbon-online-Dec2020<\/a><a href=\"https:\/\/multi3generation.inesc-id.pt\/wp-content\/uploads\/2021\/01\/WG1-meeting-summary-Lisbon-online-Dec2020.docx\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n\n\n\n<div class=\"wp-block-file\"><a href=\"https:\/\/multi3generation.inesc-id.pt\/wp-content\/uploads\/2021\/01\/WG1-video-Presentation.pptx\">WG1 video-Presentation<\/a><a href=\"https:\/\/multi3generation.inesc-id.pt\/wp-content\/uploads\/2021\/01\/WG1-video-Presentation.pptx\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n<\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Linguistic expressions are called grounded when they are linked to non-linguistic, especially perceptual data (such as information from modalities like vision and audition); 
&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":175,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-177","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/pages\/177","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=177"}],"version-history":[{"count":0,"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/pages\/177\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=\/wp\/v2\/pages\/175"}],"wp:attachment":[{"href":"https:\/\/multi3generation.inesc-id.pt\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}