I recently upgraded my desktop PC at home to a more modern system. This made my previous desktop redundant, but it continued to serve as a media server with its meager 8 TB of storage until a fateful PSU fan failure. After that day, it sat inactive in my room and we only watched Netflix. Almost two years after the failure, I realized I was running out of space on the 2 TB SSD in my “new” desktop and decided I needed a solution for storing data long-term. Instead of buying an expensive and inflexible off-the-shelf NAS, it made sense to repurpose the old beast into a DIY NAS server.

I had a few requirements for this storage solution:

  • I wanted to use smaller drives in a RAID array instead of a single large-capacity one to safeguard against disk failures.
  • I wanted to abstract away the hardware to easily extend the storage in the future.
  • I wanted peak performance rather than enterprise-level flexibility.
  • I wanted easy access to the files from the various systems in my home.

In order to achieve these, I decided to build a Linux system that ran RAID, LVM, an XFS file system and a Samba file server. I had set up something very similar for a laboratory build years ago, so I thought it would be quite straightforward. It wasn’t. The setup procedure is documented in the rest of this article.

RAID Array

I wanted to go for a RAID solution to protect against hardware failures. The reasonable options were RAID1, RAID5 or RAID6 arrays. I found the best compromise between disk usage and safety to be the RAID5 array, which spends one disk’s worth of capacity on parity (distributed across all drives) to safeguard against a single drive failure.
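
As a quick sanity check on capacity: a RAID5 array of n drives yields (n − 1) drives’ worth of usable space. With the three 8 TB drives used here, that is (3 − 1) × 8 TB = 16 TB, which matches the Array Size reported by mdadm below.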

Before setting up RAID, we first need to create partitions on the drives that will make up the array.

➜  ~ sudo fdisk /dev/sda

Inside fdisk, follow these steps:

  1. Press g to create a new empty GPT partition table.
  2. Press n to create a new partition. Press Enter through the prompts to accept the defaults, which will use the entire disk.
  3. Press t to change the partition type.
  4. Type raid (for “Linux RAID”) and press Enter.
  5. Press w to write the changes to the disk and exit.

Repeat this for all disks you want in the array. In my case, I was building with three drives: /dev/sda, /dev/sdb and /dev/sdc. You can verify that the partitions have been created by checking that /dev/sda1, /dev/sdb1 and /dev/sdc1 now exist.
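
For a quick check, lsblk should now show a partition under each of the drives:

➜  ~ lsblk -o NAME,SIZE,TYPE /dev/sda /dev/sdb /dev/sdc

With the partitions in place, the array itself is created with mdadm: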

➜  ~ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

You can now check the RAID array status with:

➜  ~ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Aug  3 15:04:36 2025
        Raid Level : raid5
        Array Size : 15627786240 (14.55 TiB 16.00 TB)
...
       Update Time : Mon Aug  4 00:04:07 2025
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1
...
    Rebuild Status : 78% complete

              Name : fedora:0  (local to host fedora)
              UUID : c89b4882:2eb89e2e:b46483e8:52939c5c
            Events : 6190

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       49        2      spare rebuilding   /dev/sdc1

At this point the array is ready to use, but it’s in a clean, degraded, recovering state. This means it will not perform at full speed until the rebuilding process is finished, and the data on it is not yet safe, as a drive failure at this stage can cause a total loss.

The rebuilding process takes a long time: in my case, over 10 hours. During this time the disks see heavy access, so it is a good idea to ensure they have good ventilation. Halfway through the process, I realized the temperature of my disks had climbed to 60°C because I had all the case fans disconnected.
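
The rebuild progress (with an estimated finish time) can be monitored through /proc/mdstat, and, assuming the smartmontools package is installed, the drive temperatures can be kept in check with smartctl:

➜  ~ watch cat /proc/mdstat
➜  ~ sudo smartctl -A /dev/sda | grep -i temperature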

There are a couple more steps to ensure that the RAID array comes online with the system. I am not sure how much of the following is required, as there are reports that Fedora is able to detect the array and bring it online without any intervention. However, running the commands below worked for me.

➜  ~ sudo mdadm --detail --scan | sudo tee /etc/mdadm.conf
➜  ~ sudo dracut -f

LVM

The next layer in the storage stack is the Logical Volume Manager. LVM provides a flexible way to manage disk space, acting as a layer of abstraction between the physical disks and the actual partitions the operating system uses. It has three components:

  1. Physical Volumes: PVs are the actual disks or partitions, or in our case, the RAID array.
  2. Volume Groups: VGs are the storage pools that combine one or more PVs into a large storage space.
  3. Logical Volumes: LVs are the actual virtual partitions that get formatted with a file system and mounted to be used by the operating system.

The benefit of using LVM is the flexibility it provides beyond the physical constraints of the actual disks. In this case, we might later want to add a completely new RAID array and have LVs that span across both, or perhaps move an LV from one RAID array to another.

For the flexibility it provides, it is also extremely easy to set up. First, label the RAID array as a PV:

➜  ~ sudo pvcreate /dev/md0

Then, to create the VG, which will only contain /dev/md0 for now:

➜  ~ sudo vgcreate vg_storage /dev/md0

Finally, to create the LV that spans across the entire free space on /dev/md0:

➜  ~ sudo lvcreate -l 100%FREE -n lv_storage vg_storage

This makes the LV available at /dev/vg_storage/lv_storage and /dev/mapper/vg_storage-lv_storage. At this point, this is a partition, very much like /dev/sda1, that we can format and mount.
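
Each layer can be verified with the LVM reporting commands:

➜  ~ sudo pvs
➜  ~ sudo vgs
➜  ~ sudo lvs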

XFS File system

Finally, we get to format our virtual partition and mount it. An important choice here is the file system. The ext4 file system is a good choice, but xfs is reported to have a slight edge in performance on RAID arrays, due to its awareness of the underlying RAID geometry. The important parameters to know when creating an xfs file system are the Stripe Unit (su) and Stripe Width (sw). We use the information from the RAID array to derive these:

➜  ~ sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Sun Aug  3 15:04:36 2025
        Raid Level : raid5
...
        Chunk Size : 512K
...
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       3       8       49        2      active sync   /dev/sdc1
  1. Stripe Unit (su) is the Chunk Size of the array. In our case, this is 512K.
  2. Stripe Width (sw) is the multiplier of su: the number of data disks in the array. Data disks exclude parity, which takes up one disk’s worth of capacity in a RAID5 configuration, so in our case sw is 2.

With these values, the file system is created by:

➜  ~ sudo mkfs.xfs -f -d su=512k,sw=2 /dev/vg_storage/lv_storage

Unfortunately, setting the RAID geometry is a one-time decision made at file system creation. Extending the RAID array with another disk will change the geometry (sw will increase to 3) and the underlying file system will no longer be optimally aligned to it.
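
Once the file system is mounted (see the next section), you can verify that the geometry was picked up by checking the sunit and swidth values reported by xfs_info:

➜  ~ sudo xfs_info /mnt/storage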

Mounting the File system

First, find the UUID of the new file system:

➜  ~ sudo blkid /dev/vg_storage/lv_storage
/dev/vg_storage/lv_storage: UUID="582846f1-c651-4794-a1d4-064ca3fc8550" BLOCK_SIZE="4096" TYPE="xfs"

Add an entry with this UUID to /etc/fstab, mounting the file system at /mnt/storage.

#
# /etc/fstab
# Created by anaconda on Sun Aug  3 00:57:19 2025
...
UUID=582846f1-c651-4794-a1d4-064ca3fc8550 /mnt/storage xfs defaults,noatime 0 2

The noatime option tells the system not to update the access time of files when they are read, which offers a slight performance boost.
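
Before mounting, make sure the mount point itself exists:

➜  ~ sudo mkdir -p /mnt/storage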

Finally, mount everything listed in /etc/fstab by running:

➜  ~ sudo mount -a
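
If all went well, the new file system should now show up with its full capacity:

➜  ~ df -h /mnt/storage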

Samba Server

The last piece of the puzzle is the Samba server. With it, the Windows machines on the network can access the files hosted on the Linux NAS. I wanted to set up two types of shares: a public one that anyone on my home network could access, and a private one that would require authentication.

Samba Installation

Setting up Samba is fairly straightforward. Install the packages, allow it through the firewall, and we’re done.

First install Samba by:

➜  ~ sudo dnf install samba

Enable Samba through the firewall.

➜  ~ sudo firewall-cmd --get-active-zone
FedoraWorkstation (default)
  interfaces: eno1

➜  ~ sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-service=samba
➜  ~ sudo firewall-cmd --reload
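
The smb and nmb services also need to be enabled, so that they start immediately and on every boot:

➜  ~ sudo systemctl enable --now smb nmb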

Public Share

I created the public share under /mnt/storage/public along with a new sharepub group, so that members of this group also have access to the share locally on Linux.
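
The commands below assume the sharepub group already exists; if it does not, it can be created and your user added to it as follows (the new group membership takes effect on the next login):

➜  ~ sudo groupadd sharepub
➜  ~ sudo usermod -aG sharepub cgurleyuk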

➜  ~ sudo mkdir -p /mnt/storage/public
➜  ~ sudo chown -R nobody:sharepub /mnt/storage/public
➜  ~ sudo chmod -R 2775 /mnt/storage/public

The chmod sets the setgid bit on the directory, meaning that all files and directories created under it will inherit its group ownership.

Edit the Samba configuration file at /etc/samba/smb.conf.

...
[global]
        workgroup = SAMBA
        security = user
...
        map to guest = bad user
...
[public]
        comment = public
        path = /mnt/storage/public
        public = yes
        writable = yes
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0664
        directory mask = 0775
        force user = nobody
        force group = sharepub

Here we define the [public] share (which Windows clients will access at \\ip-address\public) at path = /mnt/storage/public in the Linux file system. The guest ok = yes statement tells Samba that a user does not need to authenticate to access the share. The rest are fairly self-explanatory, allowing users to browse, read and write to the folder.

The create mask = 0664 and directory mask = 0775 settings ensure all users can read these files, but only members of the sharepub group are allowed to edit them. The force user = nobody and force group = sharepub statements set the user and the group of newly created files.
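
Samba ships with testparm, which is a quick way to catch syntax errors in smb.conf before restarting the service:

➜  ~ testparm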

The final step on the Linux side is to ensure that SELinux does not block the Samba daemon from accessing the shared folder. Run the following to tag the hierarchy with the samba_share_t context:

➜  ~ sudo semanage fcontext -a -t samba_share_t "/mnt/storage/public(/.*)?"
➜  ~ sudo restorecon -Rv /mnt/storage/public

Restart the samba service:

➜  ~ sudo systemctl restart smb nmb

Finally, on the Windows side, there might be a Group Policy setting that blocks access to the public share without authentication. In gpedit.msc, go to Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation and enable Enable insecure guest logons.

After following these steps, you should be able to navigate to \\ip-address\public and access the drive without being prompted for a username or password.

Private Share

I created the private share under /mnt/storage/private and created a new shareprv group.

➜  ~ sudo mkdir -p /mnt/storage/private
➜  ~ sudo chown -R cgurleyuk:shareprv /mnt/storage/private
➜  ~ sudo chmod -R 2770 /mnt/storage/private

The /etc/samba/smb.conf is modified to define the new private storage area:

[private]
        comment = private
        path = /mnt/storage/private
        read only = no
        browsable = yes
        writable = yes
        guest ok = no
        valid users = cgurleyuk @shareprv
        create mask = 0660
        directory mask = 0770

Here we define valid users = cgurleyuk @shareprv, meaning that the user cgurleyuk and members of the group shareprv will be able to access the share. Compared to the public share, the create and directory masks are 0660 and 0770, removing access for other users on the Linux side.
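
One step that is easy to miss: with security = user, Samba keeps its own password database, separate from the Linux login passwords. The user must be added to it with smbpasswd before authentication will succeed:

➜  ~ sudo smbpasswd -a cgurleyuk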

Finally, to remove SELinux restrictions the new hierarchy is tagged with samba_share_t:

➜  ~ sudo semanage fcontext -a -t samba_share_t "/mnt/storage/private(/.*)?"
➜  ~ sudo restorecon -Rv /mnt/storage/private

Restart the samba service:

➜  ~ sudo systemctl restart smb nmb

After following these steps, you should be able to navigate to \\ip-address\private and access the drive after being prompted for a username and password.

Troubleshooting

There are a couple of issues I ran into while trying to access these shares from Windows. First, make sure that the Windows Credential Manager does not already have a login stored for the server; if it does, Windows may prompt for a password every time you try to access the public share as well.

Also, if you are already logged in as an anonymous guest and then try to access the private share, you may run into authentication issues. Windows establishes the connection to the public share using the anonymous guest user, and since the private share is on the same server and you are trying to log in with a different user account, Windows will refuse to open the second connection. This is especially problematic if you map a network drive as the guest user, as there is no way to drop the connection other than removing the network drive.

To resolve this, remove all network drive mappings, clear cached credentials, and finally delete all active network connections by running:

net use * /delete
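
The cached credentials can be listed and removed with cmdkey; the target name below is illustrative, use whatever entry /list shows for the NAS:

cmdkey /list
cmdkey /delete:ip-address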

After this, log in to the private share first and give your username and password. With the above Samba setup, you should be able to access both the private and public shares with the authenticated user.