The ALLEGRO project introduces the MLFO (Machine Learning Function Orchestrator), a dedicated orchestrator for managing distributed AI/ML pipelines across the network. Unlike traditional intent-based systems that focus on a single network entity, the MLFO enables global, scalable, and reconfigurable AI/ML deployment.
🌐 Key Capabilities of MLFO:
- Dynamically computes and reconfigures AI/ML pipelines.
- Places functions intelligently across the network to optimize performance and resilience (see the placement sketch after this list).
- Supports diverse use cases that require data from heterogeneous sources, such as failure localization.
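To make the placement idea concrete, here is a minimal, hypothetical sketch (not ALLEGRO/MLFO code): it greedily assigns each pipeline function to the lowest-latency node that still has enough free capacity. The node names, latencies, and CPU demands are invented for illustration.

```python
# Illustrative sketch only: greedy placement of AI/ML pipeline functions
# onto candidate network nodes. All node names, capacities, and demands
# are assumptions, not ALLEGRO project data.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float    # available compute (arbitrary units)
    latency_ms: float  # latency from the data source to this node

# Hypothetical hosting locations: edge, regional, and central cloud.
NODES = [
    Node("edge-1", cpu_free=4, latency_ms=1.0),
    Node("regional-1", cpu_free=16, latency_ms=5.0),
    Node("cloud-1", cpu_free=64, latency_ms=20.0),
]

# Pipeline functions and their assumed compute demands.
DEMANDS = {"Co": 1, "Ag": 2, "Pr": 8, "DB": 4, "UI": 1}

def place(demands: dict[str, float], nodes: list[Node]) -> dict[str, str]:
    """For each function, pick the lowest-latency node with enough free CPU."""
    placement: dict[str, str] = {}
    for func, cpu in demands.items():
        candidates = [n for n in nodes if n.cpu_free >= cpu]
        if not candidates:
            raise RuntimeError(f"no node can host {func}")
        best = min(candidates, key=lambda n: n.latency_ms)
        best.cpu_free -= cpu
        placement[func] = best.name
    return placement

if __name__ == "__main__":
    for func, node in place(DEMANDS, NODES).items():
        print(f"{func} -> {node}")
```

A real orchestrator would treat this as a joint optimization over performance and resilience objectives under capacity constraints, but the greedy loop shows the shape of the decision the MLFO automates.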
🧠 Example AI/ML Pipeline Components:
- Collector (Co) – Gathers monitoring data (e.g., from optical nodes).
- Aggregator (Ag) – Computes basic statistics (min, max, average).
- Processor (Pr) – Executes intensive data analysis tasks.
- Time Series DB (DB) – Stores and organizes temporal data.
- User Interface (UI) – Visualizes pipeline insights for users.
This architecture is modeled as a directed graph T = (V, A), where the vertices V are the AI/ML functions and the arcs A represent the data flows between them. The MLFO enables intelligent orchestration of these components for agile, intent-driven service management.
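As a minimal illustration of that model, the sketch below builds the pipeline graph with networkx; the specific arcs (collect → aggregate → process → store → visualize) are an assumed flow for the example, not a definition taken from the project.

```python
# Minimal sketch of the pipeline graph T = (V, A): vertices are the
# AI/ML functions, arcs are data flows. The edge set is an assumption
# for illustration, not the ALLEGRO specification.
import networkx as nx

T = nx.DiGraph()

# V: the functions listed above.
T.add_nodes_from(["Co", "Ag", "Pr", "DB", "UI"])

# A: assumed data flows from collection to visualization.
T.add_edges_from([
    ("Co", "Ag"),  # raw monitoring samples -> basic statistics
    ("Ag", "Pr"),  # aggregated statistics -> intensive analysis
    ("Pr", "DB"),  # analysis results -> time-series storage
    ("DB", "UI"),  # stored series -> visualization
])

# A topological order gives one valid deployment/activation sequence
# an orchestrator could follow.
print(list(nx.topological_sort(T)))  # ['Co', 'Ag', 'Pr', 'DB', 'UI']
```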
#AI #ML #IntentBasedNetworking #EdgeAI #DataOrchestration #HorizonEurope #ALLEGRO #SmartNetworks #FutureInternet
