Tesi etd-12302022-180039
Thesis type
Doctorate
Author
ARA, GABRIELE
URN
etd-12302022-180039
Title
OS Mechanisms for Energy-Efficient Real-Time and High-Performance Networking Applications
Scientific-disciplinary sector
ING-INF/05
Degree program
Istituto di Tecnologie della Comunicazione, dell'Informazione e della Percezione - PHD IN EMERGING DIGITAL TECHNOLOGIES
Committee
Supervisor Prof. CUCINOTTA, TOMMASO
Member Dott. SOJKA, MICHAL
Chair Prof. LIPARI, GIUSEPPE
Member Prof. BUTTAZZO, GIORGIO CARLO
Keywords
- embedded systems
- energy-aware scheduling
- high-performance networking
- HPC
- NFV
- power monitoring
- real-time
- system simulation
Defense date
16/06/2023
Availability
partial
Abstract
Over the last decade, many applications shifted from centralized approaches to distributed computing paradigms. Thanks to the widespread availability of high-speed Internet connections, cloud computing services experienced stable growth, both in sheer size and in the number of services provided to end-users. At the same time, mobile devices like smartphones, tablets, and other battery-powered appliances have become ubiquitous in our everyday lives, following us everywhere we go and helping us accomplish the most diverse kinds of tasks, from the most trivial to the most complex. Portable and embedded devices evolved in various directions to support this trend: some evolved into highly specialized designs, while others were packed with increasingly powerful hardware components (from more capable CPUs to hardware accelerators such as GPUs or FPGAs). The latter are used today to support an unprecedented variety of fields, many characterized by real-time requirements, such as real-time control for factory automation and Industry 4.0, autonomous driving, and robotics.
Starting from the Internet of Things trend, distributed computing systems moved beyond this concept by tightening the closed loop between the users, the embedded devices that they interact with (usually sensors/actuators or other portable devices, typically characterized by real-time timing constraints), and more processing-oriented high-end systems, located either at the edge of the network or in cloud data centers. This design pattern for massively distributed computing systems is known as “fog computing” (and later, also “edge computing”).
The typical design of a distributed application in edge computing consists of several intercommunicating components, each characterized by wildly different requirements. For software components deployed into cloud instances, one of the most critical factors is the capability to squeeze every last bit of performance from the machine.
For some industries, high throughput or ultra-low latency may even be requirements imposed by standards, such as the fifth generation of broadband cellular networks (5G), which dictates sub-millisecond latencies among some components of the network backbone. On the other hand, software components deployed on battery-powered devices must account for the fact that pushing the device to its limits negatively affects its battery life. When designing these components, the focus is on energy efficiency rather than raw performance.
Historically, embedded systems relied on specialized Operating Systems (OSes) or custom software stacks developed on bare metal. More recently, however, the widespread popularity of Linux, driven in part by the Android project, demonstrated that the versatility of a General Purpose OS (GPOS) is no longer limited to desktop or server applications. Today, Linux provides out-of-the-box support for virtually any computing system, from giant HPC machines to the smallest embedded ones.
Supporting such a wide range of devices and applications is challenging. In particular, any modern GPOS must efficiently support different performance and energy requirements depending on the use case. One key OS component that plays a critical role in determining the performance and energy efficiency of a system is the kernel: the central part of the OS that manages resources such as memory, CPU time, and device access, and that mediates communication between hardware and software. As such, the kernel's design and implementation can significantly impact the performance and energy efficiency of the applications running on the system. For instance, the Linux kernel was not initially designed with real-time or high-performance networking applications in mind, which poses some challenges when using it to support them.
In this Ph.D. thesis, we address these challenges by exploring mechanisms involving (or bypassing) the OS to satisfy the performance and energy requirements of high-performance networking applications for HPC and real-time applications executing on embedded devices. We also examine the use of these mechanisms in the context of different application domains, highlighting the contrast between some enabling techniques for HPC/NFV and soft real-time applications running on embedded systems. Overall, this Ph.D. thesis contributes to various aspects of application design for cloud and edge computing.
File
File name | Size |
---|---|
1 file is restricted at the author's request. |