DARE Workshop
The 2nd Workshop on
Distributed AI for REsource-Constrained Platforms (DARE)
Program committee
- Luis Almeida – CISTER / University of Porto, Portugal
- Ali Balador – RISE, Sweden
- Veselka Boeva – Blekinge Institute of Technology, Sweden
- Barış Bulut – Enforma, Turkey
- Xenofon Fafoutis – Technical University of Denmark, Denmark
- Nicolás González-Deleito – Sirris, Belgium
- Anna Hristoskova – Sirris, Belgium
- Pedro Santos – CISTER / Polytechnic of Porto, Portugal
- Sima Sinaei – RISE, Sweden
- Pavel Smrz – Brno University of Technology, Czech Republic
- Joana Sousa – NOS Inovação, Portugal
Description
For the last two decades, the usual approach for IoT applications has been to rely on cloud infrastructures to overcome the computational and storage constraints of end and edge nodes. However, offloading processing to the cloud requires transferring data from edge devices to a backend cloud infrastructure, coping with the limitations imposed by the underlying communication channels, depending on the availability of both those channels and the cloud infrastructure, establishing proper mechanisms to secure data in transit and data at rest at the backend, and bearing the costs incurred by the transmission of data and the usage of the backend cloud infrastructure.
The advent of end and edge nodes with increased computational and storage capabilities makes it possible to perform a wide range of increasingly resource-demanding computations (including AI-based tasks) locally at the edge, and to rely on a backend cloud only for communicating results or for computations that combine and further process results from different edge devices or historical data sources. In addition, this increase in computational capabilities enables neighboring edge devices to perform tasks collaboratively, leveraging each other's available resources, before offloading computations to a cloud backend.
Managing the complexity and heterogeneity of IoT systems is a major challenge for the future of edge computing, as data is to be collected and analyzed on a (potentially) large network of different devices that may change at run-time. Only with an open and technology-agnostic approach can this challenge be addressed for a broad set of applications. In addition, as data becomes more valuable, security and privacy concerns will play an important role: in a networked IoT system, a single vulnerable device can be an entry point for cyberattacks.
This workshop will focus on AI and ML techniques, edge computing systems, and security and privacy approaches for data sharing, in order to enable the smart and sustainable planning and operation of resource-constrained IoT and edge computing applications. The workshop is organized within the scope of the European ITEA3 MIRAI (https://itea3.org/project/mirai.html) and ECSEL DAIS (https://dais-project.eu/) R&D projects. It welcomes innovative contributions, early results, and position papers addressing one or more of the topics listed below, and intends to foster informal discussions and cross-fertilization on the convergence of AI and edge computing.
- Open and interoperable ad-hoc architectures for on-demand computation
- Scalability of edge/fog computing IoT applications
- Run-time adaptation of edge/fog computing infrastructures
- Advanced AI algorithms and techniques for continuous learning under uncertainty and noise
- Distributed and composable ML models and techniques guaranteeing high-quality decision-making
- Security and privacy at the edge, including secure data sharing between edge nodes
- Deployment and operation of low-latency IoT applications on edge computing platforms
- Benchmarking of AI solutions for edge computing
- Industrial experiences and showcases of AI-enabled edge computing platform management, orchestration, and operation
- Industrial experiences and showcases of distributed AI applications running on edge computing infrastructures
- Real-world implementations of on-demand computation infrastructures
- Development and refinement of AI/ML algorithms executing on resource-constrained devices, with example cases in predictive maintenance
- Distributed management and orchestration of IoT applications on edge infrastructures, including federated learning approaches
Important dates
Please visit the AIAI 2022 important dates page to be informed about the submission deadlines.
Submission instructions
Submission details can be found on the AIAI 2022 conference submission page.
All papers should be submitted in doc/docx or pdf format. Contributing authors must follow the AIAI 2022 paper format guidelines, namely the IFIP AICT format.
Papers will be peer reviewed by at least two members of the workshop's program committee.
Accepted papers will be published in Volume 2 of the AIAI 2022 proceedings, in the Springer IFIP AICT series.
You can submit your DARE paper at http://www.easyacademia.org/