| cloudlab | c6525-25g @ utah.cloudlab.us |
| host | node0.ljx-295304.advosuwmadison-pg0.utah.cloudlab.us |
| cpu | AMD EPYC 7302P 16-Core Processor |
| memory | 125 GB |
| storage | MTFDDAK480TDN — SSD (/dev/sda3) |
| filesystem | ext4 — 294 GB total, 236 GB free |
| kernel | 6.8.0-106-generic |
| distro | Ubuntu 24.04.4 LTS |
| repo | c2c43b (user: clean, kmod: clean) |
| data | results.json |
Summary: Sequential buffered read benchmark with cold page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
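The fixture-creation code itself is not shown in this document; a minimal sketch of writing a deterministic patterned file, with an illustrative 4 KiB repeating pattern and a hypothetical helper name (the harness's actual pattern and layout may differ):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::Path;

/// Write `size` bytes of a repeating, deterministic pattern to `path`.
/// The pattern makes file contents reproducible across runs; buffered
/// writes keep the syscall count low. Hypothetical helper, not the
/// harness's own code.
fn write_patterned_file(path: &Path, size: u64) -> std::io::Result<()> {
    // 4 KiB block with a non-power-of-two modulus so the pattern does not
    // align with the block size.
    let block: Vec<u8> = (0..4096u32).map(|i| (i % 251) as u8).collect();
    let mut out = BufWriter::new(File::create(path)?);
    let mut written = 0u64;
    while written < size {
        let n = (size - written).min(block.len() as u64) as usize;
        out.write_all(&block[..n])?;
        written += n as u64;
    }
    out.flush()
}
```

For the workloads above the size would be 1 GiB (`1 << 30`).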
Harness: create the sandbox-local backing file before the timed run, then have the parent/backend drop the page cache before fio starts.
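The cache-drop step is described but not shown. One common mechanism on Linux is writing `3` to `/proc/sys/vm/drop_caches` after a `sync`; a sketch with the control-file path parameterized (helper name is illustrative, and the harness's actual mechanism may differ):

```rust
use std::path::Path;
use std::process::Command;

/// Ask the kernel to drop reclaimable caches by writing to the given
/// control file. Hypothetical helper: in real use `ctl` would be
/// /proc/sys/vm/drop_caches and the process must run as root.
fn drop_page_cache(ctl: &Path) -> std::io::Result<()> {
    // Flush dirty pages first; only clean pages can be reclaimed.
    Command::new("sync").status()?;
    // "3" drops page cache plus dentries and inodes; "1" would drop
    // only the page cache.
    std::fs::write(ctl, b"3\n")
}
```

Parameterizing the path lets the write logic be exercised against an ordinary file without privileges.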
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-seq-read-cold]
filename=<dest>/testfile.dat
rw=read
bs=4k
filesize=1g
io_size=256m
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_seq_read_cold.rs
Summary: Sequential buffered read benchmark with warm page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
Harness: pre-read `<dest>/testfile.dat` once before launching fio to warm the page cache.
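The pre-read step can be sketched as one full sequential read that discards the data, leaving the file's pages resident in the page cache (hypothetical helper; the harness's own pre-read may differ):

```rust
use std::fs::File;
use std::io::Read;
use std::path::Path;

/// Read the whole file once, discarding the bytes, so subsequent buffered
/// reads are served from the page cache. Returns the total bytes read.
fn warm_page_cache(path: &Path) -> std::io::Result<u64> {
    let mut f = File::open(path)?;
    let mut buf = vec![0u8; 1 << 20]; // 1 MiB read buffer
    let mut total = 0u64;
    loop {
        let n = f.read(&mut buf)?;
        if n == 0 {
            return Ok(total);
        }
        total += n as u64;
    }
}
```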
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-seq-read-warm]
filename=<dest>/testfile.dat
rw=read
bs=4k
filesize=1g
io_size=1g
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_seq_read_warm.rs
Summary: Sequential buffered write benchmark over a 1 GiB logical file space.
Fixture: No pre-existing file is required; fio creates `<dest>/testfile.dat` inside the workload directory.
Harness: no extra cold/warm cache preparation is applied.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-seq-write]
filename=<dest>/testfile.dat
rw=write
bs=4k
filesize=1g
io_size=256m
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_seq_write.rs
Summary: Random buffered read benchmark with cold page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
Harness: create the sandbox-local backing file before the timed run, then have the parent/backend drop the page cache before fio starts.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-rand-read-cold]
filename=<dest>/testfile.dat
rw=randread
bs=4k
filesize=1g
io_size=256m
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_rand_read_cold.rs
Summary: Random buffered read benchmark with warm page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
Harness: pre-read `<dest>/testfile.dat` once before launching fio to warm the page cache.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-rand-read-warm]
filename=<dest>/testfile.dat
rw=randread
bs=4k
filesize=1g
io_size=1g
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_rand_read_warm.rs
Summary: Random buffered write benchmark over a 1 GiB logical file space.
Fixture: No pre-existing file is required; fio creates `<dest>/testfile.dat` inside the workload directory.
Harness: no extra cold/warm cache preparation is applied.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-rand-write]
filename=<dest>/testfile.dat
rw=randwrite
bs=4k
filesize=1g
io_size=256m
direct=0
invalidate=0
ioengine=psync
Source: src/workloads/fio_rand_write.rs
Summary: Mixed random buffered benchmark with cold page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
Harness: create the sandbox-local backing file before the timed run, then have the parent/backend drop the page cache before fio starts.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-randrw-cold]
filename=<dest>/testfile.dat
rw=randrw
bs=4k
filesize=1g
io_size=256m
direct=0
invalidate=0
ioengine=psync
rwmixread=70
Source: src/workloads/fio_randrw_cold.rs
Summary: Mixed random buffered benchmark with warm page cache.
Fixture: The workload creates `<dest>/testfile.dat` inside the mounted sandbox as a 1 GiB deterministic patterned file before running fio.
Harness: pre-read `<dest>/testfile.dat` once before launching fio to warm the page cache.
Execution: Command:
fio --output-format=json <dest>/job.fio
Jobfile:
[fio-randrw-warm]
filename=<dest>/testfile.dat
rw=randrw
bs=4k
filesize=1g
io_size=1g
direct=0
invalidate=0
ioengine=psync
rwmixread=70
Source: src/workloads/fio_randrw_warm.rs
Summary: Op benchmark for file creation throughput and latency.
Fixture: No pre-existing files required. The work directory is created on demand inside the mounted session.
Execution: Rust code:
for i in 0..count {
    let t0 = Instant::now();
    File::create(dest.join(format!("f-{i:06}.dat")))?;
    latencies.push(t0.elapsed());
}
Source: src/workloads/meta_create.rs
Summary: Append 4 KiB to each of 10,000 pre-existing files, measuring per-operation latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let buf = vec![0xAB; workloads::OP_FILE_SIZE];
    let mut latencies = Vec::with_capacity(count);
    for i in 0..count {
        let path = dest.join(format!("file-{i:06}.dat"));
        let t0 = Instant::now();
        std::fs::OpenOptions::new()
            .append(true)
            .open(&path)
            .with_context(|| format!("opening {}", path.display()))?
            .write_all(&buf)
            .with_context(|| format!("appending {}", path.display()))?;
        latencies.push(t0.elapsed());
    }
    Ok(latencies)
}
Source: src/workloads/meta_append.rs
Summary: Open 10,000 files from warm dcache/icache, measuring per-operation latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let mut latencies = Vec::with_capacity(count);
    for i in 0..count {
        let path = dest.join(format!("file-{i:06}.dat"));
        let t0 = Instant::now();
        let f = std::fs::File::open(&path)
            .with_context(|| format!("open {}", path.display()))?;
        std::hint::black_box(f);
        latencies.push(t0.elapsed());
    }
    Ok(latencies)
}
Source: src/workloads/meta_open_warm.rs
Summary: Stat 10,000 files from warm dcache/icache, measuring per-operation latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let mut latencies = Vec::with_capacity(count);
    for i in 0..count {
        let path = dest.join(format!("file-{i:06}.dat"));
        let t0 = Instant::now();
        let meta = std::fs::metadata(&path)
            .with_context(|| format!("stat {}", path.display()))?;
        std::hint::black_box(meta);
        latencies.push(t0.elapsed());
    }
    Ok(latencies)
}
Source: src/workloads/meta_stat_warm.rs
Summary: Enumerate one warm directory containing 100 or 10,000 files, measuring per-readdir latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let t0 = Instant::now();
    let mut count = 0usize;
    for entry in std::fs::read_dir(dest)
        .with_context(|| format!("reading {}", dest.display()))?
    {
        let entry = entry.with_context(|| format!("iterating {}", dest.display()))?;
        std::hint::black_box(entry.file_name());
        count += 1;
    }
    if count != expected {
        bail!("expected {expected} entries in {}, found {count}", dest.display());
    }
    Ok(vec![t0.elapsed()])
}
Source: src/workloads/meta_readdir_warm.rs
Summary: Rename 10,000 files, measuring per-operation latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let mut latencies = Vec::with_capacity(count);
    for i in 0..count {
        let from = dest.join(format!("file-{i:06}.dat"));
        let to = dest.join(format!("renamed-{i:06}.dat"));
        let t0 = Instant::now();
        std::fs::rename(&from, &to)
            .with_context(|| format!("renaming {} -> {}", from.display(), to.display()))?;
        latencies.push(t0.elapsed());
    }
    Ok(latencies)
}
Source: src/workloads/meta_rename.rs
Summary: Delete 10,000 files, measuring per-operation latency.
Fixture: Fixture varies by source: base populates before mount; stage creates inside the mount; checkpoint snapshots after creation.
Execution: Rust code:
{
    let mut latencies = Vec::with_capacity(count);
    for i in 0..count {
        let path = dest.join(format!("file-{i:06}.dat"));
        let t0 = Instant::now();
        std::fs::remove_file(&path)
            .with_context(|| format!("unlinking {}", path.display()))?;
        latencies.push(t0.elapsed());
    }
    Ok(latencies)
}
Source: src/workloads/meta_unlink.rs
Summary: Session microbenchmark that creates N new 4 KiB files to exercise create + write behavior through each backend.
Fixture: No external fixture. The workload runs in a fresh work directory created inside the backend session.
Execution: Rust code:
let buf = vec![0u8; 4096];
for i in 0..count {
    fs::write(dest.join(format!("file-{i:06}.dat")), &buf)?;
}
Source: src/workloads/write_files.rs
Summary: Session microbenchmark for copy-on-write / copy-up behavior on existing files.
Fixture: Populates the backend base layer with N 4 KiB files before timing so each write targets an existing file.
Execution: Rust code:
let buf = vec![0xFFu8; 4096];
for i in 0..count {
    fs::write(dest.join(format!("file-{i:06}.dat")), &buf)?;
}
Source: src/workloads/overwrite_files.rs
Summary: Session microbenchmark for rename-heavy directory operations on existing files.
Fixture: Populates the backend base layer with N 4 KiB files before timing.
Execution: Rust code:
for i in 0..count {
    fs::rename(
        dest.join(format!("file-{i:06}.dat")),
        dest.join(format!("renamed-{i:06}.dat")),
    )?;
}
Source: src/workloads/rename_files.rs
Summary: Session microbenchmark for delete-heavy operations on existing files.
Fixture: Populates the backend base layer with N 4 KiB files before timing.
Execution: Rust code:
for i in 0..count {
    fs::remove_file(dest.join(format!("file-{i:06}.dat")))?;
}
Source: src/workloads/unlink_files.rs
Summary: Checkpoint-depth scaling workload (fixed-size create/read probes).
Fixture: Creates synthetic files under a single workload directory; no external fixture required.
Harness: Uses backend checkpoint protocol between setup phases. Configure with CHECKPOINT_SCALING_MODE={create|read|commit|status} and CHECKPOINT_SCALING_DEPTH=<N>. Checkpoint creation is trivial: 10 seed files are overwritten at each layer.
Execution: Builds a checkpoint chain (overwriting 10 seed files per layer), then measures 100 create or 100 read operations (or exits for commit-time measurement) and emits OpResult.
Source: src/workloads/checkpoint_scaling.rs
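The two environment knobs named above could be parsed along these lines (the defaults shown are illustrative assumptions, not the binary's documented defaults):

```rust
/// Read CHECKPOINT_SCALING_MODE and CHECKPOINT_SCALING_DEPTH from the
/// environment. Falls back to mode "create" and depth 1 when unset or
/// unparsable (illustrative defaults; a hypothetical sketch, not the
/// workload's actual parsing code).
fn checkpoint_scaling_config() -> (String, u32) {
    let mode = std::env::var("CHECKPOINT_SCALING_MODE")
        .unwrap_or_else(|_| "create".to_string());
    let depth: u32 = std::env::var("CHECKPOINT_SCALING_DEPTH")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(1);
    (mode, depth)
}
```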
Summary: Session macrobenchmark that replays a real overlayfs patch series as a search/edit/build/commit workflow on a pinned Linux base commit.
Fixture: Ensures `~/.cache/yolo-bench/linux` exists as the source repo/object store and reuses checked-in workflow fixtures under `bench/fixtures/dev-workflow/`.
Execution: Runs `git worktree add --detach <dest> <base-commit>`, `make tinyconfig`, a clean build, then per-commit search/read/edit command lists, incremental build, git status/diff/add/commit, and a backend-managed checkpoint after each edit command.
Source: src/workloads/dev_workflow.rs
Summary: Session macrobenchmark that untars a Linux source release into the mounted destination.
Fixture: Caches one Linux source tarball under ~/.cache/yolo-bench/linux-tar/ and reuses it across runs.
Execution: Runs `tar -xJf <cached-tarball> -C <dest> --strip-components=1`.
Source: src/workloads/linux_untar.rs