Storage Pooling is a technology that assembles several hard drives (or other storage devices) and turns them into a single, massive virtual storage device.
Put another way, Storage Pooling is an aggregation mechanism allowing you to turn independent hard drives of various sizes, makes, and models into a unified storage space, even if they already contain data.
Both RAID-F and tRAID provide storage pooling features:
- Combine your drives into a single massive drive (JBOD-like, but safer)
- Automatic data management (write data without worrying about which drive has space)
- Supports drives with existing data (no need to copy off your data)
- Drives can be read outside of the FlexRAID solution used (you can remove FlexRAID and still access your data)
- Drives can remain spun down until their content is actually accessed (sleep mode)
- Supports automatic and manual merge modes
- Support for Virtual Views (RAID-F only)
- No performance impact – as fast as if you were reading or writing to the disk
- Independent from RAID features (you can use the Storage Pool without RAID’ing your data)
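Conceptually, the pooled volume is a union of the drives' directory trees. The sketch below illustrates that idea in Python; the mount points are hypothetical, and real pooling happens at the filesystem-driver level rather than in user code like this:

```python
import os

# Hypothetical mount points for three independent drives (each may
# already be formatted and contain data, as noted above).
DRIVES = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]

def pool_listdir(relpath="."):
    """Merge the directory listings of all drives into one virtual view,
    the way a storage pool presents independent disks as a single volume."""
    seen = {}
    for drive in DRIVES:
        path = os.path.join(drive, relpath)
        if os.path.isdir(path):
            for name in os.listdir(path):
                # First drive wins on duplicate names, like a simple union mount.
                seen.setdefault(name, os.path.join(path, name))
    return seen
```

Because each file still lives whole on one physical drive, any drive can be pulled out and read on its own, which is why the pool remains accessible even without the pooling software.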
Storage Pool Merge Modes
While the storage pooling feature included in Transparent RAID only has a single mode of operation, the storage pooling feature in RAID-F supports several merge modes including:
- an automatic mode with balanced space across drives
- an automatic mode reducing folder split across drives
- a manual mode where you explicitly configure the merge
Auto Merge with Balanced Space Priority
With this mode, everything is automatic and there is no configuration to deal with. All folders and files already on the drives will be available in the pool, and folders and files added to the pool will be written to the different drives depending on their available space.
This mode is very similar to what you may be used to from WHS (Windows Home Server), with the difference that it balances the disk space so that the drives are used evenly.
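The balanced-space behavior described above amounts to routing each new write to the emptiest drive. A minimal sketch of that decision (assumed logic for illustration, not FlexRAID's actual algorithm):

```python
def pick_drive_balanced(free_space):
    """Balanced-space merge: given a mapping of drive name -> bytes free,
    choose the drive with the most free space so the disks fill evenly."""
    return max(free_space, key=free_space.get)
```

For example, with `{"disk1": 100, "disk2": 500, "disk3": 50}` the next file would land on `disk2`.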
Auto Merge with Minimized Folder Split Priority
As above, this mode is automatic and there is no configuration to deal with. Again, all folders and files already on the drives will be shown in the pool, and folders and files added to the pool will be written to the drives so as to minimize folder splits: rather than balancing the data based on free space, this mode tries to keep each folder on as few drives as possible.
This mode fills up each drive one by one and, as such, can cause certain drives to be used more than others. It is, however, more energy efficient, since drives that are not being written to can remain spun down.
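One plausible way to picture this mode: keep a folder on the drive that already holds it while that drive has room, and otherwise fall back to the first drive with enough space, filling drives in order. This is an assumed sketch of the behavior described above, not FlexRAID's actual implementation:

```python
def pick_drive_folder_split(folder, placements, free_space, needed):
    """Minimized-folder-split merge (illustrative logic only).
    placements: folder name -> drive currently holding it
    free_space: drive name -> bytes free (insertion order = fill order)
    needed:     bytes required for the new file
    """
    drive = placements.get(folder)
    if drive is not None and free_space[drive] >= needed:
        # Keep the folder together on its current drive.
        return drive
    for drive in free_space:  # fill drives one by one, in order
        if free_space[drive] >= needed:
            placements[folder] = drive  # the folder now (also) lives here
            return drive
    raise OSError("pool is full")
```

Note how a folder only splits across drives when its current drive runs out of room, which is exactly what lets idle drives stay asleep.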
Manual Merge
This is the mode you may be familiar with from older versions of FlexRAID. It allows for a lot of flexibility and really intricate configurations.
It allows you to treat your source data as a database, and to create Views into that data (no need to mess with symbolic/hard links).
Additionally, it provides the greatest energy saving feature.
See this tutorial for more information about this mode, and how to configure it.
Virtual Views
Ever wanted to organize your data into various views without actually moving the data around or creating hard links or shortcuts?
How about showing your music files based on their tag attributes?
A “Year” folder that shows all the songs in your collection in one folder based on the year specified or an “Artist” folder that shows all the songs a given artist either performed or was featured on in one folder?
How about showing all the files that were modified during a date range in one folder?
How about combining one folder with the contents of a number of other folders and showing some additional folders and files as children to create a custom view of your data?
All that without messing with links (which clutter your filesystem) or moving your data around? Virtual folders also support full Windows permissions and ACLs.
Basically, have you ever longed for Flexibility?
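The "Year" folder idea above can be pictured as a query over file metadata: the view groups paths by an attribute without touching the files themselves. A minimal sketch, assuming a simple path-to-tags mapping (real Virtual Views read actual file tags and attributes):

```python
def build_year_view(files):
    """Group file paths by their 'year' tag to form a virtual 'Year' folder.
    files: path -> dict of metadata tags (hypothetical structure).
    Files are never moved; the view is just an index over existing data."""
    view = {}
    for path, tags in files.items():
        view.setdefault(tags.get("year", "Unknown"), []).append(path)
    return view
```

An "Artist" or "modified between dates" view would work the same way, only keyed on a different attribute.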