I can't understand the whole machine translation, but here's my experience with flashcache.
The hardware configuration at the time I tested flashcache was:
- 16 GB RAM
- RAID 1, 1 TB 7200 rpm disks
- 4-core Xeon from 2008 (model L5420)
Yacy searches were too slow, with the disks overloaded by the sheer number of I/O operations, so I thought of setting up flashcache.
Unfortunately, I was virtualizing Yacy under OpenVZ on kernel 2.6.32; flashcache was incompatible with that, so I had to move to KVM instead and upgrade the kernel.
After that, I bought a cheap 64 GB Kingspec SSD and configured flashcache in writethrough mode, with the SSD caching the RAID 1 array (/dev/md0).
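For reference, the setup was roughly the following (a sketch: the SSD device name and the mount point are examples, not necessarily what I used):

```sh
# flashcache_create ships with flashcache; -p thru selects
# writethrough mode, so every write hits both the SSD and md0.
flashcache_create -p thru cachedev /dev/sdc /dev/md0

# The cached volume appears under the device mapper and gets
# mounted in place of /dev/md0 (mount point is an example):
mount /dev/mapper/cachedev /srv/yacy
```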
It worked very well for the first few days, with read speeds on the order of hundreds of MB/s, but after less than two weeks of intense abuse on this overloaded server the SSD looked damaged: write performance had dropped to 300-500 kB/s. Writes to the SSD were so slow that the md0 device would have been faster on its own than behind the flashcache writethrough layer. So I pulled the SSD out and stopped using flashcache.
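A quick way to measure raw write speed like the figures above is a direct write to the device (a sketch: this overwrites and destroys whatever is on the device, and the device name is an example):

```sh
# oflag=direct bypasses the page cache, so dd reports the
# device's actual sustained write speed.
dd if=/dev/zero of=/dev/sdc bs=1M count=100 oflag=direct
```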
I then found out that, apparently, no Kingspec SSD model supports the TRIM command, so they lose write performance as soon as every flash block has been written at least once: from that point on, any write to the SSD must be preceded by an erase to prepare the block, and this kills the performance. Once a Kingspec is full, its write performance is not recoverable.
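For anyone checking a drive before buying, hdparm reports whether a disk claims TRIM support (device name is an example):

```sh
# A TRIM-capable SSD prints a "Data Set Management TRIM supported"
# line among its capabilities.
hdparm -I /dev/sdc | grep -i trim
```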
Also, flashcache didn't support TRIM at the time I tested it, so even a branded SSD (is Kingspec a real brand?) with TRIM support would have lost its performance, unless a regular trim of the cache SSD was scheduled, e.g. at boot; see the sketch below.
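A sketch of what such a boot-time trim could look like, assuming a writethrough cache (which holds no dirty data, so the whole SSD can be discarded safely); the device names are examples:

```sh
#!/bin/sh
# Run once at boot, before the cached device is set up.
# blkdiscard is part of util-linux and issues TRIM for a whole
# block device, resetting the SSD's write performance.
blkdiscard /dev/sdc
flashcache_create -p thru cachedev /dev/sdc /dev/md0
```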
All in all, flashcache looked quite impractical to me.