
Multi-Agent Reinforcement Learning for Workload Distribution in FaaS-Edge Computing Systems

Petriglia E.;Filippini F.;Ciavotta M.;Savi M.
2025

Abstract

Edge Computing has emerged as a response to the growing adoption of the Internet of Things, reducing latency and enabling real-time data processing by shifting computation from centralized cloud servers to the network edge. Adopting the Function-as-a-Service model at edge nodes would further increase flexibility and cost-efficiency, but comes with several challenges related to resource constraints and unpredictable traffic patterns. In addition, efficient task offloading approaches are critical in distributed edge environments. Reinforcement Learning (RL) can be beneficial in this context, outperforming traditional heuristic methods due to its ability to dynamically optimize workload distribution. In this preliminary study, we investigate the potential of multi-agent RL for workload management in federated FaaS-Edge environments. By comparing the widely used Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) RL methods with two heuristic baselines, we highlight how PPO holds promise in increasing the processed load by exploiting request forwarding to neighboring nodes.
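
To make the setting described above more concrete, the minimal Python sketch below models a ring of FaaS edge nodes in which each per-node agent observes its own queue and its neighbors' queues and routes every incoming function invocation either to itself or to a neighbor. The simulator, the least-loaded-neighbor heuristic used as the acting policy, and all class, function, and parameter names are assumptions introduced purely for illustration; they are not the paper's simulator, its heuristic baselines, or its PPO/SAC agents, and the amount of processed (versus dropped) load simply stands in for the reward signal an RL agent would maximize.

import random

class FaaSEdgeSim:
    """Toy multi-node FaaS-Edge simulator (illustrative assumption, not the paper's model).

    Each node has a bounded queue of pending function invocations and a fixed
    per-step service capacity. At every step each node receives new requests
    and its agent chooses, per request, to keep it locally or forward it to a
    neighbor. Requests that do not fit in the chosen queue are dropped.
    """

    def __init__(self, n_nodes=4, queue_cap=20, capacity=3, arrival_max=5, seed=0):
        self.n = n_nodes
        self.queue_cap = queue_cap
        self.capacity = capacity          # invocations a node can serve per step
        self.arrival_max = arrival_max    # max new requests per node per step
        self.rng = random.Random(seed)
        self.queues = [0] * n_nodes       # pending invocations per node

    def neighbors(self, i):
        # Ring topology: each node can forward to its two adjacent nodes.
        return [(i - 1) % self.n, (i + 1) % self.n]

    def observation(self, i):
        # Per-agent observation: own queue plus neighbor queues.
        return {"own": self.queues[i],
                "neighbors": {j: self.queues[j] for j in self.neighbors(i)}}

    def step(self, actions):
        """actions[i] is a list of target node ids, one per new request at node i.

        Returns per-node counts of processed and dropped requests; the processed
        load plays the role of the reward signal an RL agent would maximize.
        """
        # Route new arrivals according to the chosen actions.
        dropped = [0] * self.n
        for i, targets in enumerate(actions):
            for t in targets:
                if self.queues[t] < self.queue_cap:
                    self.queues[t] += 1
                else:
                    dropped[i] += 1
        # Serve up to `capacity` queued invocations at every node.
        processed = [min(self.queues[i], self.capacity) for i in range(self.n)]
        self.queues = [self.queues[i] - processed[i] for i in range(self.n)]
        return processed, dropped

    def arrivals(self):
        # New requests arriving at each node in this step.
        return [self.rng.randint(0, self.arrival_max) for _ in range(self.n)]


def least_loaded_policy(sim, i, n_requests):
    """Simple heuristic (assumed, not taken from the paper): send each new
    request to whichever of {self, neighbors} currently has the shortest queue."""
    targets = []
    for _ in range(n_requests):
        candidates = [i] + sim.neighbors(i)
        targets.append(min(candidates, key=lambda j: sim.queues[j]))
    return targets


if __name__ == "__main__":
    sim = FaaSEdgeSim()
    total_processed = total_dropped = 0
    for _ in range(100):
        new = sim.arrivals()
        actions = [least_loaded_policy(sim, i, new[i]) for i in range(sim.n)]
        processed, dropped = sim.step(actions)
        total_processed += sum(processed)
        total_dropped += sum(dropped)
    print(f"processed={total_processed} dropped={total_dropped}")

In a learned variant, the per-request routing decision made here by least_loaded_policy would instead come from each agent's policy network, trained with PPO or SAC on the observations returned by observation().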
Type: paper
Keywords: Edge Computing; Function as a Service; Load Balancing; Reinforcement Learning
Language: English
Conference: 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025, 03-07 June 2025
Proceedings: 2025 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
ISBN: 9798331526436
Publication date: 13 Aug 2025
Year: 2025
Pages: 1128-1131
IEEE Xplore document number: 11106134
URL: https://ieeexplore.ieee.org/abstract/document/11106134
Access: open
Citation: Petriglia, E., Filippini, F., Ciavotta, M., Savi, M. (2025). Multi-Agent Reinforcement Learning for Workload Distribution in FaaS-Edge Computing Systems. In 2025 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (pp. 1128-1131). IEEE [10.1109/IPDPSW66978.2025.00176].
Files in this product:
File: Petriglia-2025-IPDPSW-preprint.pdf (open access)
Attachment type: Submitted Version (Pre-print)
License: Creative Commons
Size: 280.07 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/568203
Citations
  • Scopus: 0
  • Web of Science (ISI): N/A