CHORS: hardening high-assurance security systems with trusted computing

Conference paper
06 May 2022
This paper presents a novel defense against the cuckoo attack and formally proves its security.

Performance prediction of deep learning applications training in GPU as a service systems

Journal paper
31 March 2022
This paper proposes performance models to predict the training time of neural networks (NNs) deployed on GPUs. The proposed approach is based on machine learning and exploits two main sets of features, capturing both NN properties and hardware characteristics.
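The two-feature-set idea can be sketched as a toy linear performance model. The feature names, numbers, and the choice of a least-squares regressor below are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Toy performance model: predict seconds per training batch from NN
# properties (parameter count, batch size) and hardware characteristics
# (GPU peak TFLOPS). All numbers below are synthetic.
X = np.array([
    # [params_M, batch_size, gpu_tflops]
    [11.7, 32, 14.1],
    [25.6, 64, 14.1],
    [11.7, 64, 10.6],
    [25.6, 32, 10.6],
])
y = np.array([0.10, 0.30, 0.25, 0.22])  # seconds/batch (made up)

# Fit y ~ X_aug @ w, with a bias column, via least squares.
X_aug = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def predict(params_m, batch_size, gpu_tflops):
    """Predicted seconds per training batch for a given configuration."""
    return float(np.array([params_m, batch_size, gpu_tflops, 1.0]) @ w)

print(f"{predict(25.6, 48, 12.0):.3f} s/batch")
```

A real model would be trained on profiled runs and would likely use a nonlinear regressor; the point here is only the structure of the feature vector.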

A serverless gateway for event-driven machine learning inference in multiple clouds

Journal paper
15 December 2021
This paper presents a serverless web-based scientific gateway to execute the inference phase of previously trained machine learning and artificial intelligence models.

A Randomized Greedy Method for AI Applications Component Placement and Resource Selection in Computing Continua

Conference paper
13 October 2021
This paper shows how efficient component placement and resource selection algorithms are crucial to orchestrating computing continuum resources effectively.

TaScaaS: A Multi-Tenant Serverless Task Scheduler and Load Balancer as a Service

Journal paper
03 September 2021
This work introduces TaScaaS, a highly scalable and fully serverless service deployed on AWS that distributes loosely coupled jobs among several computing infrastructures and load-balances them using a completely asynchronous approach, coping with performance fluctuations with minimal impact on execution time.
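One way to picture throughput-aware load balancing across infrastructures is the proportional split below. This is an illustrative sketch under assumed backend names and rates, not TaScaaS's actual algorithm.

```python
# Split loosely coupled jobs across backends in proportion to their
# recently observed throughput, so slower backends receive less work.
def split_jobs(num_jobs, throughput):
    """throughput: mapping backend -> observed jobs/s (assumed measured)."""
    total = sum(throughput.values())
    shares = {b: int(num_jobs * t / total) for b, t in throughput.items()}
    # Hand any integer-rounding leftover to the currently fastest backends.
    leftover = num_jobs - sum(shares.values())
    for b in sorted(throughput, key=throughput.get, reverse=True)[:leftover]:
        shares[b] += 1
    return shares

print(split_jobs(100, {"aws-lambda": 8.0, "on-prem": 2.0}))
# → {'aws-lambda': 80, 'on-prem': 20}
```

In an asynchronous design the throughput map would be refreshed continuously from completion reports rather than fixed up front.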
Colony: Parallel Functions as a Service on the Cloud-Edge Continuum

Conference paper
01 September 2021
This paper proposes a solution to organise the devices within the Cloud-Edge Continuum so that each one, as an autonomous individual (an Agent), processes events/data on its embedded compute resources while offering its computing capacity to the rest of the infrastructure in a Function-as-a-Service manner.
ADAM-CS: Advanced Asynchronous Monotonic Counter Service

Conference paper
06 August 2021
ADAM-CS is an asynchronous monotonic counter service that protects high-traffic applications against rollback attacks. Leveraging a set of distributed monotonic counters and specific algorithms, ADAM-CS minimizes the maximum vulnerability window (MVW), i.e., the number of transactions an adversary could successfully roll back.
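The basic idea of rollback detection with a monotonic counter can be sketched as follows; this is a minimal illustration of the general technique, with hypothetical helper names, not ADAM-CS itself.

```python
# Rollback detection with a monotonic counter: each persisted snapshot
# embeds the counter value, and any snapshot whose version lags the
# trusted counter is rejected on restore.
class MonotonicCounter:
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1
        return self._value

    def read(self):
        return self._value

def persist(counter, state):
    return {"state": state, "version": counter.increment()}

def restore(counter, snapshot):
    if snapshot["version"] < counter.read():
        raise ValueError("rollback detected: stale snapshot")
    return snapshot["state"]

ctr = MonotonicCounter()
old = persist(ctr, {"balance": 100})  # version 1
new = persist(ctr, {"balance": 50})   # version 2
print(restore(ctr, new))              # latest snapshot restores fine
# restore(ctr, old) would raise: a replayed old snapshot is detected
# because its version (1) lags the trusted counter (2).
```

With a single synchronous counter every transaction pays a counter update; distributing and batching the counters asynchronously, as the abstract describes, trades that cost against a bounded window of rollback exposure.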

Network Function Decomposition and Offloading on Heterogeneous Networks With Programmable Data Planes

Journal paper
02 August 2021
This work presents a framework for the automatic deployment of disaggregated and decomposed network functions. The framework comprises an orchestrator capable of deploying the decomposed network functions on programmable network hardware and software switches running in containers.
Pareto-Optimal Progressive Neural Architecture Search

Conference paper
14 July 2021
This paper addresses Neural Architecture Search (NAS), the process of automating architecture engineering by searching for the best deep learning configuration.

PERUN: Confidential Multi-stakeholder Machine Learning Framework with Hardware Acceleration Support

Journal paper
14 July 2021
PERUN is a framework for confidential multi-stakeholder machine learning that allows users to make a trade-off between security and performance. PERUN executes ML training on hardware accelerators (e.g., GPUs) while providing security guarantees using trusted computing technologies, such as the Trusted Platform Module (TPM) and Integrity Measurement Architecture (IMA).