The Internet of Things (IoT) provides streaming, large-volume, multimodal data (e.g., natural language, speech, image, video, audio, virtual reality, WiFi, GPS, RFID, vibration) over time. The statistical properties of these data often differ significantly across sensing modalities and temporal traits, and such differences are hard to capture with conventional learning methods. Continual and multimodal learning enables the integration, adaptation, and generalization of knowledge learnt from previous, heterogeneous experiential data to new situations. Continual and multimodal learning is therefore an important step toward improving the estimation, utilization, and security of real-world data from IoT devices.
This workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling, with applications in the Internet of Things. The workshop welcomes work addressing these issues across applications and domains, such as natural language processing, computer vision, human-centric sensing, smart cities, health, etc. We aim to bring together researchers from different areas to establish a multidisciplinary community and share the latest research.
We focus on novel learning methods that can be applied to streaming multimodal data:
We also welcome continual learning methods that target:
Novel applications or interfaces for streaming multimodal data are also relevant topics.
Example data modalities include, but are not limited to: natural language, speech, image, video, audio, virtual reality, biochemistry, WiFi, GPS, RFID, vibration, accelerometer, pressure, temperature, and humidity.
Please submit papers using the IJCAI author kit. We invite papers of varying length, from 2 to 6 pages, plus additional pages for references; i.e., reference pages do not count toward the 6-page limit. The reviewing process is double-blind. Qualifying accepted papers will be invited to submit an extended version to Frontiers in Big Data.