3rd Workshop on Continual and Multimodal Learning for Internet of Things

August 21, 2021 • Online

Co-located with IJCAI 2021

About CML-IOT (previous editions: CML-IOT'20, CML-IOT'19)

The Internet of Things (IoT) generates streaming, large-scale, multimodal data (e.g., natural language, speech, image, video, audio, virtual reality, WiFi, GPS, RFID, vibration) over time. The statistical properties of these data often differ significantly across sensing modalities and temporal patterns, and are hard to capture with conventional learning methods. Continual and multimodal learning allows the knowledge learnt from previously collected, heterogeneous experiential data to be integrated, adapted, and generalized to new situations. Continual and multimodal learning is therefore an important step toward improving the estimation, utilization, and security of real-world data from IoT devices.



Call for Papers

This workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling, with applications in the Internet of Things. The workshop welcomes work addressing these challenges in different applications and domains, such as natural language processing, computer vision, human-centric sensing, smart cities, and health. We aim to bring together researchers from different areas to establish a multidisciplinary community and share the latest research.

We focus on novel learning methods that can be applied to streaming multimodal data:

  • continual learning
  • transfer learning
  • federated learning
  • few-shot learning
  • multi-task learning
  • reinforcement learning
  • learning without forgetting
  • privacy-preserving learning (individual and/or institutional)
  • managing high-volume data flows

We also welcome continual learning methods that target:

  • data distribution changes caused by fast-changing, dynamic physical environments
  • missing, imbalanced, or noisy data in multimodal scenarios

Novel applications and interfaces for streaming multimodal data are also relevant topics.


Data modalities of interest include, but are not limited to: natural language, speech, image, video, audio, virtual reality, biochemistry, WiFi, GPS, RFID, vibration, accelerometer, pressure, temperature, and humidity.



Important Dates

  • Submission deadline: May 12, 2021
  • Notification of acceptance: May 29, 2021
  • Deadline for camera ready version: June 12, 2021
  • Workshop: August 21, 2021



Submission Guidelines

Please submit papers using the IJCAI author kit. We invite papers of varying length from 2 to 6 pages, plus additional pages for references; i.e., the reference page(s) do not count toward the 6-page limit. The reviewing process is double-blind. Qualified accepted papers will be invited to extend their work for journal submission to Frontiers in Big Data.



Invited Speakers (TBA)

Organizers

Workshop Chairs (feel free to contact us at cmliot2021@gmail.com if you have any questions)
  • Tong Yu (Adobe Research)
  • Susu Xu (Stony Brook University)
  • Handong Zhao (Adobe Research)
  • Ruiyi Zhang (Adobe Research)
  • Shijia Pan (UC Merced)


Advising Committee
  • Nicholas Lane (University of Cambridge and Samsung AI)
  • Jennifer Healey (Adobe Research)
  • Branislav Kveton (Google Research)
  • Zheng Wen (DeepMind)
  • Changyou Chen (University at Buffalo)


Technical Program Committee
  • Bang An (University of Maryland)
  • Guan-Lin Chao (Carnegie Mellon University)
  • Jonathon Fagert (Baldwin Wallace University)
  • Gao Tang (University of Illinois at Urbana-Champaign)
  • Ajinkya Kale (Adobe)
  • Chuanyi Li (Nanjing University)
  • Kunpeng Li (Northeastern University)
  • Wei Ma (The Hong Kong Polytechnic University)
  • Mostafa Mirshekari (Searchable.ai)
  • Xidong Pi (Aurora)
  • Can Qin (Northeastern University)
  • Shijing Si (Ping An Technology AI Center)
  • Rui Wang (Duke University)
  • Yikun Xian (Rutgers University)
  • Yifan Zhou (University at Buffalo)
  • Ming Zeng (Facebook)


Agenda (TBA)
