ASPLOS
2023
Going beyond the Limits of SFI: Flexible and Secure Hardware-Assisted In-Process Isolation with HFI
SFI, Wasm, sandboxing, hardware-based isolation
Background of Wasm
Wasm context switches are very fast (in the low tens of cycles), cheaper than a hardware context switch. Yet Wasm has limitations:
- Performance: 40% overhead on code execution.
- Scaling: relies on an ad-hoc system of guard regions for memory isolation, consuming large amounts of virtual memory (8 GiB of virtual address space per sandbox).
- Backwards compatibility: cannot run unmodified binaries, code that directly accesses hardware, or JIT applications.
- Spectre safety: can speculate past security checks.
HFI
Goal:
- Replace SFI with efficient hardware primitives.
- Provide backwards-compatible in-process isolation.
Choices:
- HFI operates purely in userspace.
- Regions support both coarse-grained isolation (e.g., heaps) and fine-grained sharing (e.g., individual objects).
- Only the on-chip state of the currently executing sandbox is kept (for scalability).
HFI mediates all memory accesses through regions, of which there are two kinds: implicit and explicit. Implicit regions apply checks to every memory access and grant access on a first-match basis. An explicit region acts as a handle to a memory range and follows the usual (base, bound) style of addressing; see the sketch below.
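To make the region semantics concrete, here is a small conceptual model in C++ of how implicit first-match checking and explicit (base, bound) addressing could resolve an access. The struct layout, the table size of four, and the function names are illustrative assumptions, not the hardware definition from the paper.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// A region grants access to one contiguous range of the address space.
struct Region {
    std::uint64_t base;   // start of the permitted range
    std::uint64_t bound;  // size of the permitted range in bytes
    bool          write;  // whether stores are allowed
};

// Implicit regions: every ordinary load/store is checked against the region
// table, and the first matching region grants the access.
inline bool implicit_check(const std::array<Region, 4>& regions,
                           std::uint64_t addr, std::size_t len, bool is_store) {
    for (const Region& r : regions) {
        if (addr >= r.base && addr + len <= r.base + r.bound)
            return !is_store || r.write;  // first match wins
    }
    return false;  // no region matched: the access faults
}

// Explicit regions: an instruction (e.g. hmov) names a region directly and
// supplies an offset, which is translated against that (base, bound) pair.
inline std::optional<std::uint64_t> explicit_addr(const Region& r,
                                                  std::uint64_t offset,
                                                  std::size_t len) {
    if (offset + len > r.bound) return std::nullopt;  // out of bounds
    return r.base + offset;
}
```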
HFI has two sandbox types: hybrid and native. Hybrid sandboxes are trusted and are generally used for Wasm applications. Native sandboxes are for unmodified, untrusted code.
HFI interacts with sandboxes through enter and exit instructions and, for native sandboxes only, syscall redirection.
HFI Instructions
Regions:
hfi_set_region
: set up a region
hfi_clear_region
: clear a region
hfi_get_region
: read a region's configuration
hfi_clear_all_regions
: clear all regions
Sandboxing:
hfi_enter
: start the sandbox
hfi_reenter
: re-enter a previously exited sandbox
hfi_exit
: exit the sandbox
Memory access:
hmov
: region-based addressing
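A hedged sketch of how a host runtime might drive these instructions, with each instruction modeled as a C++ stub so the control flow is visible. The signatures, operands, and stub bodies are assumptions for illustration; the paper defines the actual ISA encoding and operands.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Stubs standing in for the hardware instructions; a real implementation
// would emit the corresponding HFI instructions instead of printing.
void hfi_set_region(int idx, void* base, std::size_t bound, bool writable) {
    std::printf("set region %d: base=%p bound=%zu writable=%d\n",
                idx, base, bound, (int)writable);
}
void hfi_clear_all_regions() { std::printf("clear all regions\n"); }
void hfi_enter() { std::printf("enter sandbox\n"); }  // confine memory accesses
void hfi_exit()  { std::printf("exit sandbox\n"); }   // restore the host's view

// Host flow: configure a region over the sandbox heap, enter the sandbox,
// run the untrusted code, exit, and tear the region state down.
void run_sandboxed(void (*guest_entry)(), void* heap, std::size_t heap_size) {
    hfi_set_region(/*idx=*/0, heap, heap_size, /*writable=*/true);
    hfi_enter();        // loads/stores are now limited to configured regions
    guest_entry();      // out-of-region accesses would fault here
    hfi_exit();
    hfi_clear_all_regions();
}

int main() {
    std::vector<unsigned char> heap(64 * 1024);
    run_sandboxed([] { std::printf("guest running\n"); },
                  heap.data(), heap.size());
}
```

The stubs only show the ordering a host would follow; the real instructions operate on architectural region state rather than C++ objects.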
2022
CARAT CAKE: Replacing Paging via Compiler/Kernel Cooperation
virtual memory, memory management, kernel, runtime
FlexOS: Towards Flexible OS Isolation
Operating Systems, Security, Isolation
TODO
Memory-Harvesting VMs in Cloud Platforms
Cloud computing, memory management, resource harvesting
TODO
IOCost: Block IO Control for Containers in Datacenters
Datacenters, Operating Systems, I/O, Containers
TODO
TMO: Transparent Memory Offloading in Datacenters
Datacenters, Operating Systems, Memory Management, Non-volatile Memory
TODO
IceBreaker: Warming Serverless Functions Better with Heterogeneity
Serverless Computing, Cloud Computing, Cold Start, Keep-alive Cost, Heterogeneous Hardware
TODO
Serverless Computing on Heterogeneous Computers
Cloud computing, serverless computing, heterogeneous computers, function-as-a-service, operating system
TODO
2021
TODO
Cloud computing, serverless computing, function-as-a-service, microservices
TODO
CubicleOS: A Library OS with Software Componentisation for Practical Isolation
2017
Black-box Concurrent Data Structures for NUMA Architectures
NR combines ideas from two disciplines: distributed systems and shared-memory algorithms. NR maintains per-node replicas of an arbitrary data structure and synchronizes them via a shared log (an idea from distributed systems). The shared log is realized by a hierarchical, NUMA-aware design that uses flat combining within nodes and lock-free appending across nodes (ideas from shared-memory algorithms). With this interdisciplinary approach, only a handful of threads need to synchronize across nodes, so most synchronization occurs efficiently within each node.
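A minimal sketch of the shared-log idea, assuming a trivial counter as the replicated data structure and a per-node mutex in place of real flat combining; the names and structure are illustrative, not NR's implementation.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Fixed-capacity shared log; the 20,000 appends in the demo stay well below this.
constexpr std::size_t LOG_CAPACITY = 1 << 20;

struct LogEntry {
    std::int64_t      delta{0};    // the operation: add `delta` to the counter
    std::atomic<bool> ready{false};
};

// Shared log (the distributed-systems idea): appends are lock-free via a
// fetch_add on the tail index, so threads on any node can publish operations.
struct SharedLog {
    std::vector<LogEntry>    entries;
    std::atomic<std::size_t> tail{0};

    SharedLog() : entries(LOG_CAPACITY) {}

    void append(std::int64_t delta) {
        std::size_t idx = tail.fetch_add(1, std::memory_order_acq_rel);
        entries[idx].delta = delta;
        entries[idx].ready.store(true, std::memory_order_release);
    }
};

// One replica per NUMA node (the shared-memory idea): a local copy of the
// data structure plus the log index it has applied so far. A plain mutex
// stands in for the per-node flat-combining lock.
struct Replica {
    std::int64_t value{0};
    std::size_t  applied{0};
    std::mutex   combiner;

    // Replay any log entries this replica has not yet seen.
    void sync(SharedLog& log) {
        std::lock_guard<std::mutex> guard(combiner);
        std::size_t end = log.tail.load(std::memory_order_acquire);
        while (applied < end) {
            LogEntry& e = log.entries[applied];
            // Spin briefly if the slot is reserved but not yet published.
            while (!e.ready.load(std::memory_order_acquire)) {}
            value += e.delta;
            ++applied;
        }
    }
};

int main() {
    SharedLog log;
    Replica node0, node1;  // pretend these live on two different NUMA nodes

    auto worker = [&log](Replica& local, std::int64_t delta, int ops) {
        for (int i = 0; i < ops; ++i) {
            log.append(delta);  // mutating op: publish to the shared log
            local.sync(log);    // then bring the local replica up to date
        }
    };

    std::thread t0(worker, std::ref(node0), 1, 10000);
    std::thread t1(worker, std::ref(node1), 2, 10000);
    t0.join();
    t1.join();

    // After a final sync, both replicas converge to the same value (30000).
    node0.sync(log);
    node1.sync(log);
    std::cout << node0.value << " == " << node1.value << "\n";
}
```

Because every replica replays the same totally ordered log, most synchronization stays within a node (under the combiner lock), and cross-node communication reduces to appending to and reading the shared log.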