- cross-posted to:
- [email protected]
If you aren’t already using the mover tuning plug-in, now is a good time to have a look at it.
The latest update allows per-share settings override for detailed control over how your caches are used.
I use this plug-in to keep files in cache based on their age. For example, on a media server you can keep the last 14 days of TV shows in cache while still running the mover regularly.
It can also do a full move if the disk is above a certain threshold, so if your cache is getting full it can dump all files to the array as normal.
That way you always keep the most important recent files on the cache, with a greatly reduced risk of running into a full-cache situation and the problems that causes.
Now, with the latest update, you can tune these settings PER SHARE rather than across the whole system.
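To make that concrete, here's a minimal Python sketch of the age-plus-threshold logic described above. The paths, the 14-day window and the 90% threshold are made-up examples, and this is not the plug-in's actual code; it just shows the two rules working together: move everything when the cache is nearly full, otherwise only move files older than the retention window.

```python
import os
import shutil
import time

# Hypothetical paths and settings, not the plug-in's real config.
CACHE = "/mnt/cache/tv"      # per-share cache path
ARRAY = "/mnt/disk1/tv"      # array destination
MAX_AGE_DAYS = 14            # keep the last 14 days on cache
FULL_THRESHOLD = 0.90        # dump everything once cache is 90% full

def cache_usage(path: str) -> float:
    """Fraction of the cache filesystem currently in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total

def run_mover() -> None:
    over_threshold = cache_usage(CACHE) >= FULL_THRESHOLD
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for root, _dirs, files in os.walk(CACHE):
        for name in files:
            src = os.path.join(root, name)
            # Full cache: move everything. Otherwise keep anything
            # newer than the retention window on the cache.
            if over_threshold or os.path.getmtime(src) < cutoff:
                dst = os.path.join(ARRAY, os.path.relpath(src, CACHE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

if __name__ == "__main__":
    run_mover()
```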
Ahh right, I see, so you're using the mover as a means of reducing power usage. That's interesting; power is super expensive where I live, so that's a factor for me too. I have spin-down set reasonably tightly, but not insane, I think it's 1 hour idle at the mo.
Originally yes: lower power, heat and noise. These days I don't worry too much about the heat and noise as the servers are in the garage, but my power bill is more than double that of the average 5-person house… and we're only 3 people.
I prefer to have my daily reads and writes hit the SSDs though, with the HDD array being more of a "warm" archive. Once 6.13 rolls around and allows more flexible pool/cache assignments, I'll add a 4-disk ZFS array as a bulk cache. That will mean only 4 disks spinning 24/7, and the main array can stay spun down for weeks on end.
I set up my server years ago with separate shares for downloads and multiple separate media shares for different types of content (I didn't realise that one share would allow for hardlinks and the flexibility that comes with that), so I like having control over how each of those is cached and what sort of retention is used on each one.
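For anyone wondering why one combined share matters for hardlinks: a hardlink is just a second directory entry for the same file, so it only works when source and destination sit on the same filesystem. A quick sketch (the paths are hypothetical, and this assumes hardlink support within the share):

```python
import os

# Both paths live under one user share, so the link can succeed.
src = "/mnt/user/data/downloads/show.s01e01.mkv"
dst = "/mnt/user/data/media/tv/Show/Season 01/show.s01e01.mkv"

os.makedirs(os.path.dirname(dst), exist_ok=True)
os.link(src, dst)  # second directory entry, no extra disk space used

# Across separate shares the destination can land on a different disk,
# and os.link fails with OSError (EXDEV: invalid cross-device link).
```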