ZFS Blocksize (recordsize) and primarycache issue
On this website: http://www.patpro.net/blog/index.php/2014/03/19/2628-zfs-primarycache-all-versus-metadata/

the author shows that switching primarycache to all or to metadata gives wildly different read performance when running an antivirus scan. However, it also shows that the read bandwidth itself has a vast difference too.
I create 2 brand new datasets, both with primarycache=none and compression=lz4, and I copy into each one a 4.8GB file (2.05x compressratio). Then I set primarycache=all on the first one, and primarycache=metadata on the second one. I cat the first file to /dev/null with zpool iostat running in another terminal. And finally, I cat the second file the same way.
The sum of the read bandwidth column is (almost) exactly the physical size of the file on disk (du output) for the dataset with primarycache=all: 2.44GB. For the other dataset, with primarycache=metadata, the sum of the read bandwidth column is ...wait for it... 77.95GB.
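For reference, the setup described there would look roughly like the following. This is only a sketch under stated assumptions: the pool name "tank", the dataset names, and the file name are placeholders, not taken from the blog post.

    # two fresh datasets, with data caching disabled while the test file is copied in
    zfs create -o compression=lz4 -o primarycache=none tank/cache-all
    zfs create -o compression=lz4 -o primarycache=none tank/cache-meta
    cp bigfile.bin /tank/cache-all/
    cp bigfile.bin /tank/cache-meta/

    # set the two cache policies being compared
    zfs set primarycache=all tank/cache-all
    zfs set primarycache=metadata tank/cache-meta

    # in a second terminal, watch the pool:
    #   zpool iostat tank 3
    # then read each copy back and compare the accumulated read bandwidth
    cat /tank/cache-all/bigfile.bin > /dev/null
    cat /tank/cache-meta/bigfile.bin > /dev/null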
He says that an anonymous user explained it this way:
clamscan reads the file, gets 4k (pagesize?) of data and processes it, then reads the next 4k, etc.

ZFS, however, cannot read just 4k. It reads 128k (the recordsize) by default. Since there is no cache (you've turned it off), the rest of the data is thrown away.

128k / 4k = 32
32 x 2.44GB = 78.08GB
I don't quite understand the anonymous user's explanation. I'm still confused about why there is such a big difference in read bandwidth.
So why does the ZFS experiment show that when primarycache is all, the read bandwidth is 2.44GB, but when it's metadata, it's 77.95GB? And what are the implications for tuning ZFS? If the person had, say, reduced the recordsize, would he have gotten a different result?
What does this say about ZFS's recordsize variable?
The test the blogger, Patrick, ran was to cat a 4.8 GB file (compressed to 2.44 GB on disk) to /dev/null and watch how long it took to read the file.
the key "primarycache=metadata" might mean "cache=off," because none of actual file stored in cache. when "primarycache=all," system reads whole file once , stores in cache (typically ram, , l2 ssd cache when fills up). when "cat" or "clamscan" file, can find there, , doesn't need read again disk.
As cat writes the file out to /dev/null, it doesn't write it in a single 2.44 GB chunk; it writes a little bit at a time, checks the cache for the next bit, writes a little more, and so on.
With the cache off, the file has to be re-read from disk a ridiculous number of times as it's written out to /dev/null (or stdout, wherever) -- that's the logic of "128k / 4k = 32".
ZFS writes files to disk in 128k blocks, but the forum posters found that "clamscan" (and "cat", at least on this user's FreeBSD box) processes data in 4k chunks. So, without a cache, each 128k block has to be served up 32 times instead of once. (clamscan pulls block #1, which is 128k large, and uses the first 4k of it; then it needs block #1 again, and since there's no cache it reads the block from disk again, takes the second 4k, throws the rest out; and so on.)
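The 4k read size is something you can verify rather than assume. On Solaris-style systems, truss can log each read(2) a program issues so you can see the requested size; the file path here is just an example:

    # log every read() that cat makes; the size argument of each call shows the chunk size
    truss -t read -o /tmp/cat-reads.txt cat /tank/data/bigfile.bin > /dev/null

    # on Linux, the rough equivalent would be:
    # strace -e trace=read -o /tmp/cat-reads.txt cat /tank/data/bigfile.bin > /dev/null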
The upshot is:

[1] Maybe never use "primarycache=metadata", for any reason.

[2] When block sizes are mismatched like that, performance problems can result. If clamscan read in 128k blocks, there would be no (significant?) difference on the read of a single file. On the other hand, if you need the file again shortly afterwards, the cache would still hold its data blocks and it wouldn't need to be pulled from disk again. (A sketch of matching the record size to the application follows below.)
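If point [2] applies to you and you control the dataset, the relevant knob is the recordsize property. One caveat worth knowing: recordsize only affects files written after the change, so existing files keep their old 128k records until they are rewritten. The dataset and file names below are examples:

    # match the record size to an application that reads in 4k chunks
    zfs set recordsize=4k tank/scandata

    # files that were already on the dataset keep their old record size;
    # rewrite them (e.g. copy and rename) so they are re-blocked at 4k
    cp /tank/scandata/big.db /tank/scandata/big.db.tmp
    mv /tank/scandata/big.db.tmp /tank/scandata/big.db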
...
Here are some tests, inspired by the forum post, to illustrate. The examples take place on a ZFS dataset with the record size set to 128k (the default) and primarycache set to metadata; a 1G dummy file is then copied at different block sizes, 128k first, then 4k, then 8k. (The copy commands are shown interleaved with the iostat readout below.)

Notice how dramatically, when the block sizes are mismatched, the ratio of reads to writes balloons and the read bandwidth takes off.
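For context, the dataset the readout came from would have been prepared along these lines. This is a sketch: only the pool name (rpool) and the dd/mkfile commands that appear in the readout are taken from the output itself; the dataset name is a placeholder.

    # record size left at the 128k default, data caching turned off
    zfs create -o recordsize=128k -o primarycache=metadata rpool/test

    # 1G dummy file, then recopied with different dd block sizes
    mkfile 1g test1.tst
    dd if=test1.tst of=test1.2 bs=128k
    dd if=test1.tst of=test1.3 bs=4k
    dd if=test8k.tst of=test8k.2 bs=8k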
root@zone1:~# zpool iostat 3
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool        291G   265G      0     21  20.4K   130K
rpool        291G   265G      0      0      0      0
rpool        291G   265G      0    515      0  38.9M

ajordan@zone1:~/mnt/test$ mkfile 1g test1.tst

rpool        291G   265G      0  1.05K      0   121M
rpool        292G   264G      0    974      0   100M
rpool        292G   264G      0    217      0  26.7M
rpool        292G   264G      0    516      0  58.0M
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0     96      0   619K
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G    474      0  59.3M      0

ajordan@zone1:~/mnt/test$ dd if=test1.tst of=test1.2 bs=128k

rpool        292G   264G    254    593  31.8M  67.8M
rpool        292G   264G    396    230  49.6M  27.9M
rpool        293G   263G    306    453  38.3M  45.2M
8192+0 records in
rpool        293G   263G    214    546  26.9M  62.0M
8192+0 records out
rpool        293G   263G    486      0  60.8M      0
rpool        293G   263G    211    635  26.5M  72.9M
rpool        293G   263G    384    235  48.1M  29.2M
rpool        293G   263G      0    346      0  37.2M
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G  1.05K     70   134M  3.52M

ajordan@zone1:~/mnt/test$ dd if=test1.tst of=test1.3 bs=4k

rpool        293G   263G  1.45K      0   185M      0
rpool        293G   263G  1.35K    160   173M  10.0M
rpool        293G   263G  1.44K      0   185M      0
rpool        293G   263G  1.31K    180   168M  9.83M
rpool        293G   263G  1.36K    117   174M  9.20M
rpool        293G   263G  1.42K      0   181M      0
rpool        293G   263G  1.26K    120   161M  9.48M
rpool        293G   263G  1.49K      0   191M      0
rpool        293G   263G  1.40K    117   179M  9.23M
rpool        293G   263G  1.36K    159   175M  9.98M
rpool        293G   263G  1.41K     12   180M   158K
rpool        293G   263G  1.23K    167   157M  9.63M
rpool        293G   263G  1.54K      0   197M      0
rpool        293G   263G  1.36K    158   175M  9.70M
rpool        293G   263G  1.42K    151   181M  9.99M
rpool        293G   263G  1.41K     21   180M   268K
rpool        293G   263G  1.32K    132   169M  9.39M
rpool        293G   263G  1.48K      0   189M      0
rpool        294G   262G  1.42K    118   181M  9.32M
rpool        294G   262G  1.34K    121   172M  9.73M
rpool        294G   262G    859      2   107M  10.7K
rpool        294G   262G  1.34K    135   171M  6.83M
rpool        294G   262G  1.43K      0   183M      0
rpool        294G   262G  1.31K    120   168M  9.44M
rpool        294G   262G  1.26K    116   161M  9.11M
rpool        294G   262G  1.52K      0   194M      0
rpool        294G   262G  1.32K    118   170M  9.44M
rpool        294G   262G  1.48K      0   189M      0
rpool        294G   262G  1.23K    170   157M  9.97M
rpool        294G   262G  1.41K    116   181M  9.07M
rpool        294G   262G  1.49K      0   191M      0
rpool        294G   262G  1.38K    123   176M  9.90M
rpool        294G   262G  1.35K      0   173M      0
rpool        294G   262G  1.41K    114   181M  8.86M
rpool        294G   262G  1.29K    155   165M  10.3M
rpool        294G   262G  1.50K      7   192M  89.3K
rpool        294G   262G  1.43K    116   183M  9.03M
rpool        294G   262G  1.52K      0   194M      0
rpool        294G   262G  1.39K    125   178M  10.0M
rpool        294G   262G  1.28K    119   164M  9.52M
rpool        294G   262G  1.54K      0   197M      0
rpool        294G   262G  1.39K    120   178M  9.57M
rpool        294G   262G  1.45K      0   186M      0
rpool        294G   262G  1.37K    133   175M  9.60M
rpool        294G   262G  1.38K    173   176M  10.1M
rpool        294G   262G  1.61K      0   207M      0
rpool        294G   262G  1.47K    125   189M  10.2M
rpool        294G   262G  1.56K      0   200M      0
rpool        294G   262G  1.38K    124   177M  10.2M
rpool        294G   262G  1.37K    145   175M  9.95M
rpool        294G   262G  1.51K     28   193M   359K
rpool        294G   262G  1.32K    171   169M  10.1M
rpool        294G   262G  1.55K      0   199M      0
rpool        294G   262G  1.29K    119   165M  9.48M
rpool        294G   262G  1.11K    110   142M  8.36M
rpool        294G   262G  1.43K      0   183M      0
rpool        294G   262G  1.36K    118   174M  9.32M
rpool        294G   262G  1.49K      0   190M      0
rpool        294G   262G  1.35K    118   173M  9.32M
rpool        294G   262G  1.32K    146   169M  10.1M
rpool        294G   262G  1.07K     29   137M   363K
262144+0 records in
rpool        294G   262G      0     79      0  4.65M
262144+0 records out
rpool        294G   262G      0      0      0      0
rpool        294G   262G      0      0      0      0
rpool        294G   262G      0      0      0      0
rpool        294G   262G      0      0      0      0
rpool        294G   262G      0      0      0      0
rpool        294G   262G      0      0      0      0

root@zone1:~# zpool iostat 3
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool        292G   264G      0     21  22.6K   130K
rpool        292G   264G      0      0      0      0
rpool        292G   264G      0      0      0      0
rpool        292G   264G  1.03K      0   131M      0

ajordan@zone1:~/mnt/test$ dd if=test8k.tst of=test8k.2 bs=8k

rpool        292G   264G  1.10K    202   141M  16.4M
rpool        292G   264G  1.25K     25   161M   316K
rpool        292G   264G    960    215   120M  15.5M
rpool        292G   264G  1.25K      0   160M      0
rpool        292G   264G     1K    210   128M  14.8M
rpool        292G   264G   1010    159   126M  14.3M
rpool        292G   264G  1.28K      0   164M      0
rpool        292G   264G  1.08K    169   138M  15.6M
rpool        292G   264G  1.25K      0   161M      0
rpool        292G   264G  1.00K    166   128M  15.3M
rpool        293G   263G    998    201   125M  15.1M
rpool        293G   263G  1.19K      0   153M      0
rpool        293G   263G    655    161  82.0M  14.2M
rpool        293G   263G  1.27K      0   162M      0
rpool        293G   263G  1.02K    230   130M  12.7M
rpool        293G   263G  1.02K    204   130M  15.5M
rpool        293G   263G  1.23K      0   157M      0
rpool        293G   263G  1.11K    162   142M  14.8M
rpool        293G   263G  1.26K      0   161M      0
rpool        293G   263G  1.01K    168   130M  15.5M
rpool        293G   263G  1.04K    215   133M  15.5M
rpool        293G   263G  1.30K      0   167M      0
rpool        293G   263G  1.01K    210   129M  16.1M
rpool        293G   263G  1.24K      0   159M      0
rpool        293G   263G  1.10K    214   141M  15.3M
rpool        293G   263G  1.07K    169   137M  15.6M
rpool        293G   263G  1.25K      0   160M      0
rpool        293G   263G  1.01K    166   130M  15.0M
rpool        293G   263G  1.25K      0   160M      0
rpool        293G   263G    974    230   122M  15.8M
rpool        293G   263G  1.11K    160   142M  14.4M
rpool        293G   263G  1.26K      0   161M      0
rpool        293G   263G  1.06K    172   136M  15.8M
rpool        293G   263G  1.27K      0   162M      0
rpool        293G   263G  1.07K    167   136M  15.4M
rpool        293G   263G   1011    217   126M  15.8M
rpool        293G   263G  1.22K      0   156M      0
rpool        293G   263G    569    160  71.2M  14.6M
131072+0 records in
rpool        293G   263G      0      0      0      0
131072+0 records out
rpool        293G   263G      0     98      0  1.09M
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0
rpool        293G   263G      0      0      0      0