
OpenCAS Excessive Memory Consumption


1. Symptoms

  • OpenCAS was initialized with a 3.5 TB SSD on a host with 120 GB of memory. Initialization succeeded, but after running for a while all of the host's memory was consumed.

2. OpenCAS Version Information

OpenCAS version: 19.9
Linux kernel: 3.10.0-957.el7.x86_64
CentOS Linux release 7.6.1810 (Core)

3. OpenCAS Logs When the Problem Occurred

Command used: grep vmalloc /proc/vmallocinfo | grep cas_cache
0xffffae2a4dffe000-0xffffae2a4e000000 8192 ocf_metadata_hash_ctrl_init+0x23/0xe0 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebb2000-0xffffae2a4ebb4000 8192 ocf_metadata_hash_ctrl_init+0x23/0xe0 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebb4000-0xffffae2a4ebb6000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebb6000-0xffffae2a4ebb8000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebb8000-0xffffae2a4ebc5000 53248 _raw_ram_init+0x2d/0x70 [cas_cache] pages=12 vmalloc N0=12
0xffffae2a4ebc5000-0xffffae2a4ebc7000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebc7000-0xffffae2a4ebd1000 40960 raw_dynamic_init+0x38/0x90 [cas_cache] pages=9 vmalloc N0=9
0xffffae2a4ebd1000-0xffffae2a4ebd3000 8192 _cache_mngt_cache_priv_init+0x2e/0x60 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ebf1000-0xffffae2a4ebf3000 8192 _ocf_mngt_attach_cache_device+0x23/0x1b0 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4ebf3000-0xffffae2a4ebf5000 8192 ocf_volume_init+0x99/0x100 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4ebfc000-0xffffae2a4ebfe000 8192 ocf_freelist_init+0x25/0x100 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4ebfe000-0xffffae2a4ec00000 8192 ocf_freelist_init+0x64/0x100 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4edda000-0xffffae2a4edfb000 135168 _raw_ram_init+0x2d/0x70 [cas_cache] pages=32 vmalloc N0=32
0xffffae2a4edfb000-0xffffae2a4edfd000 8192 ocf_freelist_init+0x74/0x100 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4edfe000-0xffffae2a4ee00000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4ef01000-0xffffae2a4ef7b000 499712 _raw_ram_init+0x2d/0x70 [cas_cache] pages=121 vmalloc N0=121
0xffffae2a4ef7b000-0xffffae2a4ef87000 49152 _raw_ram_init+0x2d/0x70 [cas_cache] pages=11 vmalloc N1=11
0xffffae2a4ef87000-0xffffae2a4ef8f000 32768 _raw_ram_init+0x2d/0x70 [cas_cache] pages=7 vmalloc N1=7
0xffffae2a4ef8f000-0xffffae2a4ef9b000 49152 _raw_ram_init+0x2d/0x70 [cas_cache] pages=11 vmalloc N1=11
0xffffae2a4ef9b000-0xffffae2a4efab000 65536 _raw_ram_init+0x2d/0x70 [cas_cache] pages=15 vmalloc N1=15
0xffffae2a4efab000-0xffffae2a4efb9000 57344 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=13 vmalloc N1=13
0xffffae2a4efbe000-0xffffae2a4efc0000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4efc0000-0xffffae2a4efe1000 135168 _raw_ram_init+0x2d/0x70 [cas_cache] pages=32 vmalloc N0=32
0xffffae2a4efe1000-0xffffae2a4efee000 53248 _raw_ram_init+0x2d/0x70 [cas_cache] pages=12 vmalloc N0=12
0xffffae2a4efee000-0xffffae2a4eff0000 8192 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4eff0000-0xffffae2a4effa000 40960 raw_dynamic_init+0x38/0x90 [cas_cache] pages=9 vmalloc N0=9
0xffffae2a4effa000-0xffffae2a4effc000 8192 _cache_mngt_cache_priv_init+0x2e/0x60 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f101000-0xffffae2a4f17b000 499712 _raw_ram_init+0x2d/0x70 [cas_cache] pages=121 vmalloc N0=121
0xffffae2a4f17b000-0xffffae2a4f17d000 8192 _ocf_mngt_attach_cache_device+0x23/0x1b0 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f17d000-0xffffae2a4f17f000 8192 ocf_volume_init+0x99/0x100 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f17f000-0xffffae2a4f18b000 49152 _raw_ram_init+0x2d/0x70 [cas_cache] pages=11 vmalloc N0=11
0xffffae2a4f192000-0xffffae2a4f19a000 32768 _raw_ram_init+0x2d/0x70 [cas_cache] pages=7 vmalloc N0=7
0xffffae2a4f19a000-0xffffae2a4f1a6000 49152 _raw_ram_init+0x2d/0x70 [cas_cache] pages=11 vmalloc N0=11
0xffffae2a4f1a6000-0xffffae2a4f1b6000 65536 _raw_ram_init+0x2d/0x70 [cas_cache] pages=15 vmalloc N0=15
0xffffae2a4f1b6000-0xffffae2a4f1c4000 57344 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=13 vmalloc N0=13
0xffffae2a4f1c4000-0xffffae2a4f1c6000 8192 ocf_freelist_init+0x25/0x100 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f1c6000-0xffffae2a4f1c8000 8192 ocf_freelist_init+0x64/0x100 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f1c8000-0xffffae2a4f1ca000 8192 ocf_freelist_init+0x74/0x100 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f1ca000-0xffffae2a4f1d4000 40960 ocf_cache_line_concurrency_init+0x48/0x190 [cas_cache] pages=9 vmalloc N0=9
0xffffae2a4f1d4000-0xffffae2a4f1d9000 20480 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=4 vmalloc N0=4
0xffffae2a4f1d9000-0xffffae2a4f1de000 20480 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=4 vmalloc N1=4
0xffffae2a4f1df000-0xffffae2a4f1e1000 8192 ocf_promotion_init+0x2e/0xc0 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f1e5000-0xffffae2a4f1ef000 40960 ocf_cache_line_concurrency_init+0x48/0x190 [cas_cache] pages=9 vmalloc N1=9
0xffffae2a4f1f5000-0xffffae2a4f1f7000 8192 ocf_promotion_init+0x2e/0xc0 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4f301000-0xffffae2a4f34b000 303104 ocf_metadata_concurrency_attached_init+0x57/0x190 [cas_cache] pages=73 vmalloc N0=73
0xffffae2a4f34b000-0xffffae2a4f395000 303104 ocf_metadata_concurrency_attached_init+0x57/0x190 [cas_cache] pages=73 vmalloc N1=73
0xffffae2a4f503000-0xffffae2a4f505000 8192 _raw_dynamic_get_item.isra.10+0xbc/0x160 [cas_cache] pages=1 vmalloc N0=1
0xffffae2a4f5fa000-0xffffae2a4f5fc000 8192 _raw_dynamic_get_item.isra.10+0xbc/0x160 [cas_cache] pages=1 vmalloc N1=1
0xffffae2a4fe90000-0xffffae2a4ff9a000 1089536 ocf_mngt_cache_start+0x1b0/0x7a0 [cas_cache] pages=265 vmalloc N0=265
0xffffae2a50b02000-0xffffae2a50c28000 1204224 _raw_ram_init+0x2d/0x70 [cas_cache] pages=293 vmalloc N0=293
0xffffae2a50c28000-0xffffae2a50d32000 1089536 ocf_mngt_cache_start+0x1b0/0x7a0 [cas_cache] pages=265 vmalloc N0=265
0xffffae2a50d32000-0xffffae2a50e58000 1204224 _raw_ram_init+0x2d/0x70 [cas_cache] pages=293 vmalloc N0=293
0xffffae2a50e58000-0xffffae2a52313000 21737472 _raw_ram_init+0x2d/0x70 [cas_cache] pages=5306 vmalloc vpages N0=5306
0xffffae2a52313000-0xffffae2a530e2000 14479360 _raw_ram_init+0x2d/0x70 [cas_cache] pages=3534 vmalloc vpages N0=3534
0xffffae2a530e2000-0xffffae2a5459d000 21737472 _raw_ram_init+0x2d/0x70 [cas_cache] pages=5306 vmalloc vpages N0=5306
0xffffae2a5459d000-0xffffae2a56311000 30883840 _raw_ram_init+0x2d/0x70 [cas_cache] pages=7539 vmalloc vpages N0=7539
0xffffae2a56311000-0xffffae2a564cc000 1814528 _raw_ram_init+0x2d/0x70 [cas_cache] pages=442 vmalloc N0=442
0xffffae2a564cc000-0xffffae2a57cf6000 25337856 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=6185 vmalloc vpages N0=6185
0xffffae2a57cf6000-0xffffae2a58cf8000 16785408 ocf_cache_line_concurrency_init+0x48/0x190 [cas_cache] pages=4097 vmalloc vpages N0=4097
0xffffae2a58cf8000-0xffffae2a593e0000 7241728 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=1767 vmalloc vpages N0=1767
0xffffae2a5c278000-0xffffae2a5d733000 21737472 _raw_ram_init+0x2d/0x70 [cas_cache] pages=5306 vmalloc vpages N1=5306
0xffffae2a5d733000-0xffffae2a5e502000 14479360 _raw_ram_init+0x2d/0x70 [cas_cache] pages=3534 vmalloc vpages N1=3534
0xffffae2a5e502000-0xffffae2a5f9bd000 21737472 _raw_ram_init+0x2d/0x70 [cas_cache] pages=5306 vmalloc vpages N1=5306
0xffffae2a5f9bd000-0xffffae2a5fb78000 1814528 _raw_ram_init+0x2d/0x70 [cas_cache] pages=442 vmalloc N1=442
0xffffae2a70001000-0xffffae2d073a0000 11127091200 _raw_ram_init+0x2d/0x70 [cas_cache] pages=2716574 vmalloc vpages N0=2716574
0xffffae2d073a0000-0xffffae2ec0f22000 7410819072 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1809281 vmalloc vpages N0=1809281
0xffffae2ec0f22000-0xffffae31582c1000 11127091200 _raw_ram_init+0x2d/0x70 [cas_cache] pages=2716574 vmalloc vpages N0=2716574
0xffffae31582c1000-0xffffae3506818000 15809736704 _raw_ram_init+0x2d/0x70 [cas_cache] pages=3859798 vmalloc vpages N0=3859798
0xffffae3506818000-0xffffae353db8a000 926359552 _raw_ram_init+0x2d/0x70 [cas_cache] pages=226161 vmalloc vpages N0=226161
0xffffae353db8a000-0xffffae3842bac000 12968927232 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=3166241 vmalloc vpages N0=3166241
0xffffae3842bac000-0xffffae384bcc2000 152133632 ocf_metadata_concurrency_attached_init+0x57/0x190 [cas_cache] pages=37141 vmalloc vpages N0=37141
0xffffae384bcc2000-0xffffae3928a84000 3705413632 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=904641 vmalloc vpages N0=904641
0xffffae3928a84000-0xffffae3bbfe23000 11127091200 _raw_ram_init+0x2d/0x70 [cas_cache] pages=2716574 vmalloc vpages N1=2716574
0xffffae3bbfe23000-0xffffae3d799a5000 7410819072 _raw_ram_init+0x2d/0x70 [cas_cache] pages=1809281 vmalloc vpages N1=1809281
0xffffae3d799a5000-0xffffae4010d44000 11127091200 _raw_ram_init+0x2d/0x70 [cas_cache] pages=2716574 vmalloc vpages N1=2716574
0xffffae4010d44000-0xffffae43bf29b000 15809736704 _raw_ram_init+0x2d/0x70 [cas_cache] pages=3859798 vmalloc vpages N1=3859798
0xffffae43bf29b000-0xffffae43c100f000 30883840 _raw_ram_init+0x2d/0x70 [cas_cache] pages=7539 vmalloc vpages N1=7539
0xffffae43c100f000-0xffffae43f8381000 926359552 _raw_ram_init+0x2d/0x70 [cas_cache] pages=226161 vmalloc vpages N1=226161
0xffffae43f8381000-0xffffae46fd3a3000 12968927232 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=3166241 vmalloc vpages N1=3166241
0xffffae46fd3a3000-0xffffae46febcd000 25337856 ocf_metadata_concurrency_attached_init+0x3c/0x190 [cas_cache] pages=6185 vmalloc vpages N1=6185
0xffffae46febcd000-0xffffae4707ce3000 152133632 ocf_metadata_concurrency_attached_init+0x57/0x190 [cas_cache] pages=37141 vmalloc vpages N1=37141
0xffffae4707ce3000-0xffffae4708ce5000 16785408 ocf_cache_line_concurrency_init+0x48/0x190 [cas_cache] pages=4097 vmalloc vpages N1=4097
0xffffae4708ce5000-0xffffae47e5aa7000 3705413632 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=904641 vmalloc vpages N1=904641
0xffffae47e5aa7000-0xffffae47e618f000 7241728 _ocf_realloc_with_cp+0x158/0x1b0 [cas_cache] pages=1767 vmalloc vpages N1=1767
0xffffae47e6eb3000-0xffffae47e6eec000 233472 cleaning_policy_acp_initialize+0x3e/0x330 [cas_cache] pages=56 vmalloc N1=56
0xffffae47e6eec000-0xffffae47e6ef3000 28672 cleaning_policy_acp_add_core+0x7c/0x160 [cas_cache] pages=6 vmalloc N1=6
0xffffae47e712a000-0xffffae47e7163000 233472 cleaning_policy_acp_initialize+0x3e/0x330 [cas_cache] pages=56 vmalloc N1=56
0xffffae47e7163000-0xffffae47e7169000 24576 cleaning_policy_acp_add_core+0x7c/0x160 [cas_cache] pages=5 vmalloc N1=5
0xffffae47e7692000-0xffffae47e8238000 12214272 cleaning_policy_acp_add_core+0x7c/0x160 [cas_cache] pages=2981 vmalloc vpages N1=2981
0xffffae47e8238000-0xffffae47e8af5000 9162752 cleaning_policy_acp_add_core+0x7c/0x160 [cas_cache] pages=2236 vmalloc vpages N1=2236

Command used: grep vmalloc /proc/vmallocinfo | grep cas_cache | awk '{total+=$2}; END {print total}'
126764556288
[root@szdpl1491 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           125G        123G        967M        5.1M        864M        194M
Swap:           31G        7.9G         24G

cas_cache is using 118 GB on this host. Another server that has been running OpenCAS for a month shows cas_cache using 59 GB.
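The raw total above is in bytes. A slightly extended version of the same pipeline (a minimal sketch; only the awk formatting differs from the command above) prints the total in GiB so it can be compared with the free output directly:

# grep vmalloc /proc/vmallocinfo | grep cas_cache | awk '{total+=$2} END {printf "%.1f GiB\n", total/1024/1024/1024}'
118.1 GiB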

4. OpenCAS Configuration

  • cache mode: WT

  • cache config

    # casadm -L

    type    id   disk       status    write policy   device
    cache   1    /dev/sda   Running   wt             -
    └core   1    /dev/sdd   Active    -              /dev/cas1-1
    cache   2    /dev/sdb   Running   wt             -
    └core   1    /dev/sde   Active    -              /dev/cas2-1

    # casadm -P -i 1

    Cache Id                    1
    Cache Size                  926351414 [4KiB Blocks] / 3533.75 [GiB]
    Cache Device                /dev/sda
    Core Devices                1
    Inactive Core Devices       0
    Write Policy                wt
    Eviction Policy             lru
    Cleaning Policy             alru
    Promotion Policy            always
    Cache line size             4 [KiB]
    Metadata Memory Footprint   46.7 [GiB]
    Dirty for                   0 [s] / Cache clean
    Metadata Mode               normal
    Status                      Running

    ╔══════════════════╤═══════════╤══════╤═════════════╗
    ║ Usage statistics │   Count   │  %   │    Units    ║
    ╠══════════════════╪═══════════╪══════╪═════════════╣
    ║ Occupancy        │ 500156640 │ 54.0 │ 4KiB blocks ║
    ║ Free             │ 426194774 │ 46.0 │ 4KiB blocks ║
    ║ Clean            │ 500156640 │ 54.0 │ 4KiB blocks ║
    ║ Dirty            │         0 │  0.0 │ 4KiB blocks ║
    ╚══════════════════╧═══════════╧══════╧═════════════╝

    ╔══════════════════════╤════════════╤═══════╤══════════╗
    ║ Request statistics   │   Count    │   %   │  Units   ║
    ╠══════════════════════╪════════════╪═══════╪══════════╣
    ║ Read hits            │ 2628479218 │  61.6 │ Requests ║
    ║ Read partial misses  │          0 │   0.0 │ Requests ║
    ║ Read full misses     │         64 │   0.0 │ Requests ║
    ║ Read total           │ 2628479282 │  61.6 │ Requests ║
    ╟──────────────────────┼────────────┼───────┼──────────╢
    ║ Write hits           │ 1579075690 │  37.0 │ Requests ║
    ║ Write partial misses │    4237509 │   0.1 │ Requests ║
    ║ Write full misses    │   58309934 │   1.4 │ Requests ║
    ║ Write total          │ 1641623133 │  38.4 │ Requests ║
    ╟──────────────────────┼────────────┼───────┼──────────╢
    ║ Pass-Through reads   │          0 │   0.0 │ Requests ║
    ║ Pass-Through writes  │          0 │   0.0 │ Requests ║
    ║ Serviced requests    │ 4270102415 │ 100.0 │ Requests ║
    ╟──────────────────────┼────────────┼───────┼──────────╢
    ║ Total requests       │ 4270102415 │ 100.0 │ Requests ║
    ╚══════════════════════╧════════════╧═══════╧══════════╝

    ╔══════════════════════════════════╤═════════════╤═══════╤═════════════╗
    ║ Block statistics                 │    Count    │   %   │    Units    ║
    ╠══════════════════════════════════╪═════════════╪═══════╪═════════════╣
    ║ Reads from core(s)               │         356 │   0.0 │ 4KiB blocks ║
    ║ Writes to core(s)                │  2954907028 │ 100.0 │ 4KiB blocks ║
    ║ Total to/from core(s)            │  2954907384 │ 100.0 │ 4KiB blocks ║
    ╟──────────────────────────────────┼─────────────┼───────┼─────────────╢
    ║ Reads from cache                 │           0 │   0.0 │ 4KiB blocks ║
    ║ Writes to cache                  │           0 │   0.0 │ 4KiB blocks ║
    ║ Total to/from cache              │           0 │   0.0 │ 4KiB blocks ║
    ╟──────────────────────────────────┼─────────────┼───────┼─────────────╢
    ║ Reads from exported object(s)    │ 12708231250 │  81.1 │ 4KiB blocks ║
    ║ Writes to exported object(s)     │  2954907028 │  18.9 │ 4KiB blocks ║
    ║ Total to/from exported object(s) │ 15663138278 │ 100.0 │ 4KiB blocks ║
    ╚══════════════════════════════════╧═════════════╧═══════╧═════════════╝

    ╔════════════════════╤═══════╤═════╤══════════╗
    ║ Error statistics   │ Count │  %  │  Units   ║
    ╠════════════════════╪═══════╪═════╪══════════╣
    ║ Cache read errors  │     0 │ 0.0 │ Requests ║
    ║ Cache write errors │     0 │ 0.0 │ Requests ║
    ║ Cache total errors │     0 │ 0.0 │ Requests ║
    ╟────────────────────┼───────┼─────┼──────────╢
    ║ Core read errors   │     0 │ 0.0 │ Requests ║
    ║ Core write errors  │     0 │ 0.0 │ Requests ║
    ║ Core total errors  │     0 │ 0.0 │ Requests ║
    ╟────────────────────┼───────┼─────┼──────────╢
    ║ Total errors       │     0 │ 0.0 │ Requests ║
    ╚════════════════════╧═══════╧═════╧══════════╝
  • cache line size

    # dmesg |grep "Cache line size"
    2751 [ 1505.783016] cache1: Hash offset : 44427904 kiB
    2752 [ 1505.783017] cache1: Hash size : 904644 kiB
    2753 [ 1505.783018] cache1: Cache line size: 4 kiB
    2754 [ 1505.783019] cache1: Metadata capacity: 47803 MiB
    2755 [ 1521.649327] cache1: OCF metadata self-test PASSED
    2756 [ 1527.389763] Thread cas_clean_cache1 started

    2893 [ 1823.699193] cache2: Hash offset : 44427904 kiB
    2894 [ 1823.699194] cache2: Hash size : 904644 kiB
    2895 [ 1823.699195] cache2: Cache line size: 4 kiB
    2896 [ 1823.699197] cache2: Metadata capacity: 47803 MiB
    2897 [ 1839.660385] cache2: OCF metadata self-test PASSED
    2898 [ 1845.359600] Thread cas_clean_cache2 started

5. Solution

  • When starting OpenCAS, casadm should be given the --cache-line-size parameter; the default is 4 KiB. Host memory consumption can be estimated as: memory consumption = SSD_size / cache_line_size * 70 bytes (see the worked numbers after the quote below).
  • The brief explanation from the maintainers:
    It looks quite normal to me. You have two huge cache devices (2 x 3.5TB) and size of CAS metadata is proportional to number of cache lines. CAS allocates about 70 bytes of metadata per cache line, so in your case it is about 60GiB of metadata per single cache, giving ~120GiB in total. That matches pretty well with your numbers.

    You can decrease memory consumption by choosing bigger cache line size. You can select cache line size up to 64kiB, which would decrease memory usage by factor of 16.

    I'd also recommend you, if it's possible, to switch to CAS v20.3.
    CAS v19.9 was tested only with basic set of tests, while v20.3 was thoroughly validated with extensive set of tests, thus it's much more stable than any previous version.
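Plugging the numbers from the casadm -P output above into that rule of thumb reproduces the observed footprint (a back-of-the-envelope sketch; the 70-bytes-per-cache-line factor is taken from the maintainer's comment, not measured). 926351414 is the "Cache Size" in 4 KiB blocks, i.e. the number of 4 KiB cache lines in one cache:

# echo $(( 926351414 * 70 / 2**30 )) GiB         # metadata per cache at 4 KiB lines
60 GiB
# echo $(( 926351414 / 16 * 70 / 2**30 )) GiB    # at 64 KiB lines there are 16x fewer lines
3 GiB

Two such caches at the default 4 KiB line size need roughly 120 GiB of metadata, which is exactly what exhausted the 125 GiB host.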

6. Official Notes on the OpenCAS Cache Line

Why does Open CAS Linux use some DRAM space?

Open CAS Linux uses a portion of system memory for metadata, which tells us where data resides. The amount of memory needed is proportional to the size of the cache space. This is true for any caching software solution. However, with Open CAS Linux this memory footprint can be decreased by using a larger cache line size, set with the --cache-line-size parameter, which may be useful in high-density servers with many large HDDs.

Configuration Tool Details

The Open CAS Linux product includes a user-level configuration tool that provides complete control of the caching software. The commands and parameters available with this tool are detailed in this chapter.

To access help from the CLI, type the -H or --help parameter for details. You can also view the man page for this product by entering the following command:

# man casadm

Usage: casadm --start-cache --cache-device <DEVICE> [option…]

Example:

# casadm --start-cache --cache-device /dev/sdc

or

# casadm -S -d /dev/sdc

Description: Prepares a block device to be used as a device for caching other block devices. Typically the cache devices are SSDs or other NVM block devices or RAM disks. The process starts a framework for device mappings pertaining to a specific cache ID. The cache can be loaded with an old state when using the -l or --load parameter (previous cache metadata will not be marked as invalid) or with a new state as the default (previous cache metadata will be marked as invalid).

Required Parameters:

[-d, --cache-device <DEVICE>]: Caching device to be used. This is an SSD or any NVM block device or RAM disk shown in the /dev directory. <DEVICE> needs to be the complete path describing the caching device to be used, for example /dev/sdc.

Optional Parameters:

[-i, --cache-id <ID>]: Cache ID to create; <1 to 16384>. The ID may be specified, or by default the command will use the lowest available number first.

[-l, --load]: Load existing cache metadata from the caching device. If the cache device has been used previously and then disabled (as in a reboot), and it is determined that the data in the core device has not changed since the cache device was used, this option allows continuing the use of the data in the cache device without the need to re-warm the cache with data.

  • Caution: You must ensure that the last shutdown followed the instructions in section Stopping Cache Instances. If there was any change in the core data prior to enabling the cache, the data would not be synced correctly and would be corrupted.
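For example (a hypothetical invocation assembled from the parameters above, reusing the cache device from this post), restarting a cleanly stopped cache with its previous contents would be:

# casadm --start-cache --cache-device /dev/sda --load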

[-f, --force]: Forces creation of a cache even if a file system exists on the cache device. This is typically used for devices that have been previously utilized as a cache device.

  • Caution: This will delete the file system and any existing data on the cache device.

[-c, --cache-mode <MODE>]: Sets the cache mode for a cache instance the first time it is started or created. The mode can be one of the following:

wt: (default mode) Turns write-through mode on. When using this parameter, the write-through feature is enabled which allows the acceleration of only read intensive operations.

wb: Turns write-back mode on. When using this parameter, the write-back feature is enabled which allows the acceleration of both read and write intensive operations.

  • Caution: A failure of the cache device may lead to the loss of data that has not yet been flushed to the core device.

wa: Turns write-around mode on. When using this parameter, the write-around feature is enabled, which allows the acceleration of reads only. All write locations that do not already exist in the cache (i.e. the locations have not been read yet or have been evicted) are written directly to the core drive, bypassing the cache. If the location being written already exists in the cache, then both the cache and the core drive will be updated.

pt: Starts the cache in pass-through mode. Caching is effectively disabled in this mode. This allows the user to associate all their desired core devices to be cached prior to actually enabling caching. Once the core devices are associated, the user can dynamically switch to the desired caching mode (see '-Q | --set-cache-mode' for details).
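A typical sequence for this workflow might look as follows (hypothetical device names; -A/--add-core attaches a core device and -Q/--set-cache-mode switches the mode of a running cache):

# casadm -S -d /dev/sdc -c pt    # start the cache in pass-through mode
# casadm -A -i 1 -d /dev/sdd     # associate a core device while caching is disabled
# casadm -Q -c wt -i 1           # switch the running cache to write-through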

wo: Turns write-only mode on. When using this parameter, the write-only feature is enabled which allows the acceleration of write intensive operations primarily.

  • Caution: A failure of the cache device may lead to the loss of data that has not yet been flushed to the core device.

[-x, --cache-line-size <SIZE>]: Set the cache line size in KiB: {4 (default), 8, 16, 32, 64}. The cache line size can only be set when starting the cache and cannot be changed after the cache is started.
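Applied to the problem in this post (device names as in the configuration above), fixing the memory consumption means stopping each cache and starting it again with a larger line size, since -x cannot be changed on a running cache; -f may additionally be needed if old metadata is detected on the device:

# casadm -T -i 1                 # stop cache instance 1 (WT and clean, so nothing to flush)
# casadm -S -d /dev/sda -x 64    # start it again with 64 KiB cache lines
# casadm -A -i 1 -d /dev/sdd     # re-attach the core device

With 64 KiB lines, the ~60 GiB of per-cache metadata shrinks by a factor of 16, to roughly 4 GiB, at the cost of coarser-grained caching.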