===== Storage Setup =====
During the Gentoo [[gentoo:installation|installation]] on the home server, you should have configured the operating system disk. This page will show you how to set up the data storage: the arrays where you will be storing your files, media and such. In other words, your storage space.
The key idea is to keep the disk/partition where the operating system is installed separate from where you store your data and your services: if you ever need to migrate your services and data to a new server, after an upgrade or a failure, all it will take is reinstalling the OS and re-plugging the storage drives.
Your storage must be located on at least two different disks, maybe even more. You should plan your data allocation and decide how many disks you want or need. Please keep in mind that keeping data and services on the same filesystem ensures that you can use hard links, which in some cases are mandatory to avoid data duplication (I am talking about torrents).
The storage disks will need to be used within a specific RAID array. The use of a RAID array ensures redundancy and fault tolerance, something that will protect your data from one (or more) failing hard drives. Please note that **RAID is not a backup**.
You can read more about RAID and RAID levels [[https://en.wikipedia.org/wiki/Standard_RAID_levels|here]], and you should do so before proceeding.
I chose a RAID-1 array. This is because RAID-1 has a few advantages for me:
* Fast enough on read (reads are balanced across both disks; writes take slightly longer)
* Solid enough to survive a single disk failure (provided you monitor the RAID status and replace failed disks)
Its main drawback is the wasted space: 50% is a big waste, but it buys peace of mind. Just remember: **RAID does NOT mean BACKUP**, so **do your backups** anyway. More on this [[selfhost:backup|here]].
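The 50% figure is simple arithmetic: a mirror is only as large as its smallest member, regardless of how much raw space you bought. A quick sketch, with made-up disk sizes:

```shell
# RAID-1 usable capacity equals the smallest member disk.
# The sizes below are made-up example values, in GB.
disk1=4000
disk2=4000
raw=$((disk1 + disk2))                      # total raw capacity you paid for
usable=$((disk1 < disk2 ? disk1 : disk2))   # what the mirror actually offers
echo "raw=${raw}GB usable=${usable}GB"      # prints raw=8000GB usable=4000GB
```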
(note: I will always say //disk//, but that can be an SSD; it doesn't necessarily mean a mechanical hard drive)
==== Software or Hardware RAID? ====
RAID can be implemented in hardware, with a dedicated controller, or in software, using the Linux software RAID implementation.
I have been using the software RAID approach for more than two decades and it has never let me down:
* It's solid.
* It's simple.
* It works and it's efficient.
* Each disk can still be mounted as a single drive, without the RAID array.
If you choose to use a commercial external RAID solution, skip the RAID part ahead.
==== Disks Preparations ====
I will assume you have two external drives called **/dev/sdb** and **/dev/sdc**, and that **/dev/sda** is the drive where Gentoo is installed.
The size of the two disks is not important: get the biggest ones you can afford.
If you can afford them, I suggest SSDs: they are silent and consume less power, which is a plus for a home server, but they are still more expensive than traditional drives.
Anyway, it doesn't matter whether you choose expensive data-center grade HDDs or cheap SSDs of dubious origin (well, I assume you factor in the value of losing your data, of course).
A good approach to add more drives, when you run out of internal slots in your server, is to use USB-3/USB-C external drives. You can buy a JBOD box (Just a Bunch Of Disks) that holds 2, 4 or even 8 or 16 disks sharing one USB plug. I have been using this type of setup for the better part of the last 15 years without any data loss or corruption. Speed-wise, you will be streaming your data over your home network, which more often than not means WiFi. A good USB-3 SSD is more than capable of keeping up with any streamed media today, even 4K, so there is no need to worry that external disks or USB-3 might be a bottleneck.
Note: I will refer to //two// disks, but you can create more RAID arrays if you have //four// disks!
===== Partitioning =====
To create a software RAID you first need to partition the two drives; for this job you can use good old //fdisk//:
su
fdisk /dev/sdb
... do the partitioning ...
fdisk /dev/sdc
... do the partitioning ...
You will need to be root for fdisk to work. You should be root at this point, though, so the //su// command might be redundant. How to use fdisk? I think you can find out easily. Given that these should be new and clean disks, there is not much risk of messing up. Create a GPT partition table, for future-proof support, and one single partition filling up the disk, unless you want a more complex setup.
Using //fdisk//, create one partition on each drive, filling it; these will be called **/dev/sdb1** and **/dev/sdc1**, and both partitions need to be of type //Linux RAID//. I assume the two drives are of the same size. If not, create partitions of identical sizes on both drives: the bigger drive will be left with spare space that you can partition again as a non-RAID partition.
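If you prefer a scripted, non-interactive approach, //sfdisk// (shipped with util-linux, like fdisk) can apply a layout from a small text file. A minimal sketch of such a layout, matching the single whole-disk partition described above:

```
label: gpt
# one partition spanning the whole disk;
# the GUID below is the GPT partition type for "Linux RAID"
type=A19D880F-05FC-4D3B-A006-743F0F84911E
```

Save it as, say, //raid-layout.sfdisk// (a made-up name) and apply it with //sfdisk /dev/sdb < raid-layout.sfdisk//, then again for **/dev/sdc**. Triple-check the device names first: sfdisk rewrites the partition table without much ceremony.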
Save the changes and quit fdisk. Since the disks are not being used yet, you will not need to reboot the server.
Remember that using Linux Software RAID you can create more than one partition and create more than one RAID-1 from two disks. For example, if you have a huge disk and want to separate two areas (one for data and one for webcam storage, for example) you can create **two** RAID-1 arrays by splitting both disks in two partitions each and mixing them up. Just don't create a RAID from two partitions on the **same** disk as that would be, at best, dumb.
(if you need to retain your data and you have only two disks, you can create the RAID on just one of the two, which will be wiped, mount the array with its single drive, copy the data over from the other disk, then format that other disk and hot-add it to the RAID-1. The details are not difficult to figure out, but be careful not to lose your data in the process)
===== Creating the RAID array =====
You need to create a new software RAID array out of **/dev/sdb1** and **/dev/sdc1**, for this you will use the //mdadm// command we have installed previously:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
Since this is the first RAID array of the server (and probably the only one), it's called **/dev/md0**. If you have more than one RAID array, the naming will reflect that (**/dev/md1** and so on). You can follow the state of the array, including the initial synchronization, in **/proc/mdstat**.
If you want to do the trick of copying existing data from one of the two disks, at this point you can create a RAID-1 array with only one drive, by replacing the drive you do not want to add yet with the word //missing//. You can then add the disk at a later time with the //--add// option of the //mdadm// tool.
===== Format and mount =====
You need to format and mount your newly created RAID array, and for that you need to choose a filesystem. A common choice on Linux, and probably the most straightforward, is EXT4. It might not be the best choice if you have SSDs, or if you want to leverage the error correction and balancing of an advanced filesystem like ZFS or Btrfs, but I like to go simple, so I will choose EXT4 for you here. I have been running my software RAID-1 on EXT4 since it became stable well over a decade ago and, again, I have never lost any data to bugs or corruption.
Format the RAID array:
mkfs.ext4 /dev/md0
mkdir /data
mount /dev/md0 /data
The newly formatted drive needs to be automatically mounted at every boot, so you need to add a line like this to **/etc/fstab**:
/dev/md0 /data ext4 noatime 0 0
The //noatime// option reduces disk traffic and wear-and-tear, which is especially welcome on USB drives. Of course, change the filesystem type to whatever you chose.
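As an alternative sketch, if you would rather not depend on the **/dev/md0** name in fstab at all, you can mount by filesystem UUID instead; read it with //blkid /dev/md0//. The UUID below is a placeholder that you must replace with your own:

```
# same mount as above, keyed on the filesystem UUID instead of the device name
# (placeholder value: substitute the UUID printed by `blkid /dev/md0`)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /data ext4 noatime 0 0
```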
===== Automate RAID at boot =====
You also want to automate Linux RAID startup, so that everything still works after a reboot. To do so, the **mdraid** service needs to be started in the //boot// runlevel. Do NOT start it in the //default// runlevel or things will break badly after the first reboot.
rc-update add mdraid boot
The //mdadm// service is not required, unless you want monitoring of your RAID array (including email reporting), which is a **TODO**.
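If you want basic monitoring anyway, //mdadm// in monitor mode can email you when an array degrades. A minimal sketch for **/etc/mdadm.conf**, assuming local mail delivery already works on your server:

```
# who gets notified when mdadm --monitor spots a degraded or failed array
MAILADDR root
```

With that in place, starting the //mdadm// service (for example with //rc-update add mdadm default//) runs the monitor over the arrays listed in the config file.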
You also want to ensure the **/dev/md0** device doesn't change name upon reboot (this can happen when the USB drives change order at boot, for example, or because you plugged/unplugged them), so put this line into your **/etc/mdadm.conf**:
ARRAY /dev/md0 UUID=1758bcfa:67af3a42:d3df2d83:ecbb0728
where the UUID can be read from the output of the command:
mdadm --detail /dev/md0
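Instead of copying the UUID by hand, //mdadm --detail --scan// prints a ready-made ARRAY line for every running array, which you can append to **/etc/mdadm.conf** directly (as root: //mdadm --detail --scan >> /etc/mdadm.conf//). If you only want the UUID field, here is a small sketch working on a sample scan line (the UUID is the example value from above):

```shell
# A sample line as printed by `mdadm --detail --scan` (extra fields may vary):
scan_line='ARRAY /dev/md0 metadata=1.2 UUID=1758bcfa:67af3a42:d3df2d83:ecbb0728'

# Extract only the UUID field, e.g. to build the ARRAY config line yourself:
uuid=$(printf '%s\n' "$scan_line" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
echo "$uuid"    # prints 1758bcfa:67af3a42:d3df2d83:ecbb0728
```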
One last bit that might be required if you use USB storage is to give the drives more time at boot. USB drives might be slow to spin up and be recognized (even SSDs), so this trick might be needed if you find that after a reboot your RAID array has not been assembled properly.
Add these lines (marked below) to your **/etc/init.d/mdraid** script:
start() {
    local output

    ebegin "Waiting a little bit longer for USB stuff to pop up..."  # line added
    sleep 10                                                         # line added

    ebegin "Starting up RAID devices"
Adapt the ten seconds to your liking.