Using the command below to sum the cas_cache vmalloc allocations:
$ grep vmalloc /proc/vmallocinfo | grep cas_cache | awk '{total+=$2}; END {print total}'
126764556288
[root@szdpl1491 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           125G        123G        967M        5.1M        864M        194M
Swap:           31G        7.9G         24G
cas_cache uses 118 GiB here. I have another server that has been running Open CAS for a month, where cas_cache uses 59 GiB.
It looks quite normal to me. You have two huge cache devices (2 x 3.5TB), and the size of CAS metadata is proportional to the number of cache lines. CAS allocates about 70 bytes of metadata per cache line, so in your case that is about 60GiB of metadata per single cache, giving ~120GiB in total. That matches your numbers pretty well.
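The estimate above can be reproduced with quick shell arithmetic. The 70-bytes-per-line figure and the 3.5TB device size come from the discussion; the 4KiB cache line is the documented default and is an assumption here:

```shell
# Rough metadata estimate for one 3.5 TiB cache device with the
# default 4 KiB cache line and ~70 bytes of metadata per line.
CACHE_BYTES=$((3584 * 1024 * 1024 * 1024))  # 3.5 TiB cache device
LINE_BYTES=4096                             # default 4 KiB cache line
META_PER_LINE=70                            # approx. metadata bytes per cache line
LINES=$((CACHE_BYTES / LINE_BYTES))
META_GIB=$((LINES * META_PER_LINE / 1024 / 1024 / 1024))
echo "cache lines: $LINES, metadata: ~${META_GIB} GiB"
# prints: cache lines: 939524096, metadata: ~61 GiB
```

Doubling this for two cache devices gives the ~120GiB observed above.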
You can decrease memory consumption by choosing a bigger cache line size. The cache line size can be set as high as 64KiB, which would decrease metadata memory usage by a factor of 16 relative to the default 4KiB.
I'd also recommend, if possible, switching to CAS v20.3. CAS v19.9 was tested only with a basic set of tests, while v20.3 was thoroughly validated with an extensive set of tests, so it is much more stable than any previous version.
6. Official Open CAS documentation on the cache line
Why does Open CAS Linux use some DRAM space?
Open CAS Linux uses a portion of system memory for metadata, which tells us where data resides. The amount of memory needed is proportional to the size of the cache space. This is true for any caching software solution. However, with Open CAS Linux this memory footprint can be decreased using a larger cache line size, set by the parameter --cache-line-size, which may be useful in high density servers with many large HDDs.
Configuration Tool Details
The Open CAS Linux product includes a user-level configuration tool that provides complete control of the caching software. The commands and parameters available with this tool are detailed in this chapter.
To access help from the CLI, type the -H or --help parameter for details. You can also view the man page for this product by entering the following command:
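The command itself is omitted in this excerpt; for reference, the configuration tool is casadm, and assuming a standard installation the help and man page are reached as follows:

```shell
# Built-in help for the Open CAS Linux configuration tool:
casadm -H        # equivalently: casadm --help
# Man page (assumes the man page was installed with the package):
man casadm
```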
Description: Prepares a block device to be used as a device for caching other block devices. Typically the cache devices are SSDs, other NVM block devices, or RAM disks. The process starts a framework for device mappings pertaining to a specific cache ID. The cache can be loaded with an old state when using the -l or --load parameter (previous cache metadata will not be marked as invalid), or with a new state as the default (previous cache metadata will be marked as invalid).
Required Parameters:
[-d, --cache-device]: Caching device to be used. This is an SSD or any NVM block device or RAM disk shown in the /dev directory. The value must be the complete path describing the caching device to be used, for example /dev/sdc.
Optional Parameters:
[-i, --cache-id]: Cache ID to create; <1 to 16384>. If the ID is not specified, the command uses the lowest available number by default.
[-l, --load]: Load existing cache metadata from the caching device. If the cache device has been used previously and then disabled (as in a reboot), and it is determined that the data in the core device has not changed since the cache device was used, this option allows continuing the use of the data in the cache device without the need to re-warm the cache with data.
Caution: You must ensure that the last shutdown followed the instructions in section Stopping Cache Instances. If there was any change in the core data prior to enabling the cache, the data will not be synced correctly and will be corrupted.
[-f, --force]: Forces creation of a cache even if a file system exists on the cache device. This is typically used for devices that have been previously utilized as a cache device.
Caution: This will delete the file system and any existing data on the cache device.
[-c, --cache-mode]: Sets the cache mode for a cache instance the first time it is started or created. The mode can be one of the following:
wt: (default mode) Turns write-through mode on. When using this parameter, the write-through feature is enabled, which allows the acceleration of read-intensive operations only.
wb: Turns write-back mode on. When using this parameter, the write-back feature is enabled, which allows the acceleration of both read- and write-intensive operations.
Caution: A failure of the cache device may lead to the loss of data that has not yet been flushed to the core device.
wa: Turns write-around mode on. When using this parameter, the write-around feature is enabled, which allows the acceleration of reads only. All write locations that do not already exist in the cache (i.e., locations that have not been read yet or have been evicted) are written directly to the core drive, bypassing the cache. If the location being written already exists in the cache, then both the cache and the core drive will be updated.
pt: Starts cache in pass-through mode. Caching is effectively disabled in this mode. This allows the user to associate all their desired core devices to be cached prior to actually enabling caching. Once the core devices are associated, the user would dynamically switch to their desired caching mode (see '-Q | --set-cache-mode' for details).
wo: Turns write-only mode on. When using this parameter, the write-only feature is enabled, which primarily allows the acceleration of write-intensive operations.
Caution: A failure of the cache device may lead to the loss of data that has not yet been flushed to the core device.
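The pass-through workflow described under pt might look like the sketch below. The device paths and cache ID are hypothetical, and -A (--add-core, for associating a core device) is a casadm option not covered in this excerpt:

```shell
# Hypothetical devices: /dev/nvme0n1 as cache, /dev/sdb as core.
casadm -S -i 1 -d /dev/nvme0n1 -c pt   # start cache 1 in pass-through mode
casadm -A -i 1 -d /dev/sdb             # associate a core device with cache 1
casadm -Q -i 1 -c wb                   # dynamically switch to the desired mode
```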
[-x, --cache-line-size]: Sets the cache line size in KiB: {4 (default), 8, 16, 32, 64}. The cache line size can only be set when starting the cache and cannot be changed after the cache is started.
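Tying this back to the memory discussion above, a cache started with the maximum 64KiB cache line needs roughly 1/16 of the metadata of the 4KiB default. A sketch, with a hypothetical device path:

```shell
# Start a cache on /dev/nvme0n1 with a 64 KiB cache line to shrink
# the in-DRAM metadata footprint (must be chosen at start time):
casadm -S -d /dev/nvme0n1 -x 64
```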