Max: A Multicore-Accelerated File System for Flash Storage

Xiaojian Liao, Youyou Lu, Erci Xu, Jiwu Shu

Tsinghua University

Page 1

Max: A Multicore-Accelerated File System for Flash Storage

Xiaojian Liao, Youyou Lu, Erci Xu, Jiwu Shu


Tsinghua University

Page 2


Agenda

• Background

• Motivation

• Design and Implementation

• Evaluation

• Conclusion

Page 3

Background: Storage trend

            HDD        SATA SSD    NVMe SSD
Bandwidth   ~80 MB/s   ~500 MB/s   ~5 GB/s

Modern storage hardware delivers higher write bandwidth.

Year              2014                   2020                          2021
Product (media)   Intel DC P3700 (MLC)   Samsung 980pro (V-NAND MLC)   Intel D5-P5316 (QLC)
Seq. bandwidth    1900 MB/s              5 GB/s                        3.6 GB/s
Rand. bandwidth   600 MB/s               3.2 GB/s                      31 MB/s
Seq. / Rand.      3.2                    1.6                           116

Flash-based SSDs provide far higher sequential write bandwidth than random write bandwidth.

Page 4

Background: Storage system

[Diagram: with a classic HDD, the drive is 100% utilized while the CPU running the system software is only ~10% utilized; with an array of SSDs, the CPU is 100% utilized while the SSDs are only ~40% utilized.]

The CPU or the system software becomes the bottleneck.

Page 5

Background: Storage system

[Diagram: a single CPU running the system software saturates (100%) while the SSD array sits at ~40% utilization; adding more CPU cores to the system software can close the gap.]

Employ multicore CPUs in the system software.

Page 6

Motivation: Multicore scalability

[Figure: throughput (K IOPS) vs. # of processes (1 to 72) on a SATA SSD (left) and an NVMe SSD (right), for SpanFS, ideal, Ext4, F2FS, and XFS; on the NVMe SSD, ideal achieves about 5X the throughput of the other file systems.]

• Workload: concurrent I/Os including create(), write(), fsync() and unlink().
• ideal: partition the drive and the system software; run an independent F2FS atop each partition.
• Other Linux file systems: multiple CPU cores share a single drive partition.

Existing Linux file systems fail to scale on high-performance SSDs.
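The workload above can be sketched as a per-process loop. Below is a minimal Python sketch of one worker; the file-naming scheme, iteration count, and write size are illustrative assumptions, not the benchmark code used in the paper.

```python
# Sketch of one worker of the scalability microbenchmark: each process
# repeatedly issues create(), write(), fsync() and unlink() on its own files.
import os
import tempfile

def run_worker(directory, worker_id, n_ops=100, size=4096):
    """Issue n_ops create/write/fsync/unlink sequences; return the op count."""
    ops = 0
    payload = b"\0" * size
    for i in range(n_ops):
        path = os.path.join(directory, f"w{worker_id}-{i}")
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # create()
        os.write(fd, payload)                                # write()
        os.fsync(fd)                                         # fsync()
        os.close(fd)
        os.unlink(path)                                      # unlink()
        ops += 4
    return ops
```

In the benchmark setting, one such worker is pinned per CPU core (1 to 72 processes) and aggregate throughput is measured.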

Page 7

Motivation: Analysis

[Diagram: concurrent I/Os from multiple CPU cores pass through the file system and the multi-queue block device layer to the NVMe controller and its flash chips.]

• The block layer and the device driver have a multicore-friendly design.
• The SSD offers high internal data parallelism.
• The file system becomes the scalability bottleneck.

Page 8

Motivation: Analysis

Use F2FS, an existing LFS designed for flash-based SSDs, as an example. Locking costs show up across its concurrency control, in-memory data structures (FS metadata, file metadata, file data) and space allocation:

• 51% of CPU cycles spent on locking during write()
• 10~37% of CPU cycles spent on locking during create() and unlink()
• 45% of CPU cycles spent on locking during fsync()

The CPU becomes the bottleneck; the I/O device is underutilized.

Page 9


Agenda

• Background

• Motivation

• Design and Implementation

• Evaluation

• Conclusion

Page 10

Max overview

[Diagram: application I/Os flow through three sharded layers: concurrency control, the in-memory data structures (file cells), and space allocation (mlogs).]

• Reader Pass-through Semaphore to scale concurrency control (CC)
• File cell to scale access to the in-memory data structures (IMDS)
• Mlog for concurrent space allocation (SA) and persistence

Key idea: sequential I/O (log structure) + efficient CPU use (sharding)

Page 11

LFS’s concurrency control

11

User operations (syscalls)

1. Operations and concurrency control of classic LFS

Read operations Write operations

File-level reader-writer lock

LFS’s internal operations (checkpoint)

Global reader-writer lock

2. Concurrency control among write ops and a checkpoint

checkpoint requires an exclusive-mode lock

core 0 core 1 core N…

Sharedcounter

checkpoint

+2^32Sharedcounter

Sharedcounter

Page 12

LFS’s concurrency control

• A write acquires the global lock in shared mode: each write increments the same shared counter by +1 from its core (core 0 … core N).

Expensive cache coherence traffic.

Page 13

Reader Pass-through Semaphore

Key idea: per-core counters + the CPU scheduler’s free rides

[Diagram: each core (core 0 … core N) increments its own per-core counter (+1) on a write; the checkpoint must answer "any in-flight write ops?" across all the per-core counters.]

How should the checkpoint wait for in-flight writes to drain?
• Aggressive polling (ns-scale): low latency, but interferes with other cores (e.g., via IPIs).
• Lazy wait (ms-scale): high latency, slows down the checkpoint.
• Scheduler-assisted check (us-scale): consistent with SSDs’ latency.
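The per-core counter idea can be modeled in user space as a toy sketch. The real mechanism is an in-kernel semaphore whose writer waits with help from the CPU scheduler; here the per-core cache lines are approximated with per-slot locks, the scheduler-assisted wait is a bounded retry loop, and all names are my own.

```python
# Toy model of a reader pass-through semaphore: readers (write() syscalls)
# touch only their own core's counter; the writer (checkpoint) stops new
# readers, then waits until every per-core counter drains to zero.
import threading

class ReaderPassThroughSemaphore:
    def __init__(self, ncores):
        self.counters = [0] * ncores                          # one reader count per core
        self.locks = [threading.Lock() for _ in range(ncores)]
        self.writer_waiting = False

    def read_enter(self, core):
        """Fast path: increment only this core's counter (no shared line)."""
        with self.locks[core]:
            if self.writer_waiting:
                return False          # would block on the real semaphore
            self.counters[core] += 1
            return True

    def read_exit(self, core):
        with self.locks[core]:
            self.counters[core] -= 1

    def writer_enter(self, retries=1_000_000):
        """Checkpoint path: gate new readers, then drain per-core counters.
        The retry loop stands in for the us-scale scheduler-assisted check."""
        self.writer_waiting = True
        for _ in range(retries):
            if not any(self.counters):
                return True
        return False

    def writer_exit(self):
        self.writer_waiting = False
```

The point of the sketch: on the reader side there is no single shared counter, so concurrent writes on different cores do not bounce a cache line between them; only the rare checkpoint has to look at all the slots.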

Page 14

Reader Pass-through Semaphore

The file system runs in kernel space and frequently triggers the CPU scheduler, especially when a syscall finishes.

[Diagram: timeline across core 0 … core N. While no writer is in the critical section, write ops acquire the lock in shared mode and pass through. Once a checkpoint requests the exclusive-mode lock, subsequent write ops are blocked via a traditional reader-writer lock until the checkpoint completes; the scheduler invocations at syscall exit give the checkpoint free rides to re-check the per-core counters.]

Page 15

File Cell

Key idea: partition and reorganize the in-memory data structures

[Diagram: file cells are partitioned into groups by inode number (file cell group 0: inode number % N = 0, …, file cell group N: inode number % N = N-1). A file cell packs together a file’s inode-table entry (9 B), its 4 KB inode, and its 4 KB data pages.]

write(), read(), stat(), fsync():
1. lock the group/indexing structure in shared mode
2. lock the file cell of the target file
3. perform the file operations

Page 16

File Cell

creat(), unlink(), mkdir():
1. lock the group/indexing structure in exclusive mode
2. lock the file cells of the target files and the directory
3. perform the file operations
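The two-level locking above can be sketched as a user-space toy. This is not Max’s in-kernel code: the group reader-writer lock is simplified to a mutex, the file cell is a dict rather than the 9 B + 4 KB + 4 KB layout on the slide, and all names are my own.

```python
# Toy sharded file-cell index: cells are grouped by inode number % N;
# file ops lock the group (shared mode in the real system), then only the
# target file's cell; create/unlink take the group lock exclusively.
import threading

class FileCellGroups:
    def __init__(self, ngroups):
        self.ngroups = ngroups
        self.group_locks = [threading.Lock() for _ in range(ngroups)]
        self.groups = [dict() for _ in range(ngroups)]   # ino -> file cell

    def group_of(self, ino):
        return ino % self.ngroups                        # sharding rule

    def create(self, ino):
        """creat()-style path: group lock held 'exclusively' for the insert."""
        g = self.group_of(ino)
        with self.group_locks[g]:
            cell = {"ino": ino, "lock": threading.Lock(), "data": b""}
            self.groups[g][ino] = cell
        return cell

    def write(self, ino, data):
        """write()-style path: find the cell under the group lock, then
        lock only that file's cell for the actual operation."""
        g = self.group_of(ino)
        with self.group_locks[g]:
            cell = self.groups[g][ino]
        with cell["lock"]:
            cell["data"] += data

    def lookup(self, ino):
        g = self.group_of(ino)
        with self.group_locks[g]:
            return self.groups[g].get(ino)
```

Because unrelated files hash to different groups, concurrent write() calls on different files rarely contend on the same group lock, and never on the same cell lock.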

Page 17

Mlog

Key idea: multiple logs (mlogs), each serving atomic file operations

[Diagram: write(file1) and create(file2) are dispatched to different mlogs (mlog 1, mlog 2, mlog 3).]
• Dispatch atomic file operations to mlogs in a round-robin fashion.
• Each mlog maintains its internal consistency.

How to keep consistency across multiple mlogs?

[Diagram: write(file1) on mlog 1 and rename(file1, file2) on mlog 2 both update file1’s inode.]
Use a global version number to decide the persistence ordering.
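Round-robin dispatch plus global versioning can be sketched as follows. This is a toy model, not Max’s implementation: each atomic operation is stamped with a version drawn from one global counter before being appended to the next mlog, and recovery merges the mlogs by version to reconstruct a single consistent order.

```python
# Toy mlog set: operations are spread across logs round robin; a global
# version number orders operations that span different mlogs (e.g., a
# write and a rename that touch the same inode).
import itertools
import threading

class MlogSet:
    def __init__(self, nlogs):
        self.logs = [[] for _ in range(nlogs)]
        self.next_log = itertools.cycle(range(nlogs))
        self.version = 0
        self.vlock = threading.Lock()

    def append(self, op):
        """Stamp the atomic op with a global version, then dispatch it
        to the next mlog in round-robin order."""
        with self.vlock:
            self.version += 1
            v = self.version
        self.logs[next(self.next_log)].append((v, op))
        return v

    def replay_order(self):
        """Recovery: merge all mlogs by version number to obtain one
        consistent persistence ordering across logs."""
        return [op for _, op in sorted(rec for log in self.logs for rec in log)]
```

Within one mlog, appends are already ordered; the global version only has to break ties between operations that landed on different mlogs.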

Page 18


Agenda

• Background

• Motivation

• Design and Implementation

• Evaluation

• Conclusion

Page 19

Evaluation

CPU: 4 Intel Xeon Gold 6140 CPUs, each with 18 cores (72 physical CPU cores in total)
SSD: Intel DC P3700 2 TB SSD
Compared systems: Linux vanilla kernel 4.19.11; ext4, XFS, F2FS [FAST’15], SpanFS [ATC’15]
Workloads:
• Data and metadata scalability
• Varmail and RocksDB
• Upper-bound evaluation against tmpfs

Page 20

Data and metadata scalability

[Figure: throughput (M ops/s) vs. # of threads (1 to 72) for SpanFS, Max, F2FS, Ext4, and XFS. Left: Data Overwrite (parallel data-intensive ops). Right: Metadata Create (parallel metadata-intensive ops).]

Max-72threads = 56 x Max-1thread
Max = 2.8 x SpanFS
Max = 18.6 x F2FS

Page 21

Application scalability

[Figure: throughput (K ops/s) vs. # of threads (1 to 72) for SpanFS, Max, F2FS, Ext4, and XFS. Left: Varmail (create, unlink and fsync intensive). Right: RocksDB overwrite (compaction intensive, value size = 8 KB).]

Varmail: Max = 2.9 x SpanFS = 2.1 x F2FS
RocksDB: Max > SpanFS = 1.5 x F2FS

Page 22

Upper bound evaluation

[Figure: throughput (M ops/s) vs. # of threads (1 to 72) for Max-mem and tmpfs. Left: Data append write. Right: Metadata create.]

tmpfs: a memory-based file system, a simple wrapper over Linux VFS.
Max-mem: Max with fsync and page cache flushes disabled, to avoid a duplicate on-disk copy.
Tested device: 20 GB memory-backed RAMdisk.

The throughput of Max-mem comes close to that of tmpfs.

Page 23

Conclusion

⚫ Max: A Multicore-Accelerated File System for Flash Storage

➢ Reader Pass-through Semaphore to improve concurrency control

➢ File cell to scale accesses to the in-memory data structures

➢ Mlog to parallelize the persistence functions

⚫ Performance evaluation

➢ Performs significantly better than existing Linux file systems

➢ Achieves near-optimal performance for some file operations (i.e., append write and create)

Source code: https://github.com/thustorage/max

Page 24

Q&A


Max: A Multicore-Accelerated File System for Flash Storage

Xiaojian Liao, Youyou Lu, Erci Xu, Jiwu Shu

Tsinghua University

[email protected]

Thank You!