In a VM of OMV 3.0.73
((In the VM, I used "fixed size" disks, versus dynamically sized disks.))
I started with 3 data drives.
They were 5GB each and formed a RAID5 array of just under 10GB.
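The ~10GB figure follows from RAID5's parity overhead: usable capacity is (number of drives - 1) x drive size, since one drive's worth of space holds parity. A quick sanity check (the variable names are just for illustration):

```shell
# RAID5 usable capacity: one drive's worth of space goes to parity
drives=3
drive_gb=5
echo "usable now: $(( (drives - 1) * drive_gb ))GB"        # 3 drives -> 10GB
echo "after grow: $(( (4 - 1) * drive_gb ))GB"             # 4 drives -> 15GB
```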
(I created a 4th 5GB drive, /dev/sde, to add later.)
I created a folder share, built a Samba share on top of that, and populated the array with a bit over 1GB of data to try to simulate something real world. All worked fine.
Then I ran the following commands to add a 4th drive, /dev/sde, to the array and "grow" it.
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
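While the GUI reports the "clean/reshaping" state, the same progress can be watched from the shell. These are standard mdadm and procfs interfaces, though the exact output format varies by kernel version:

```shell
# Shows reshape progress as a percentage, plus speed and ETA
cat /proc/mdstat

# Fuller view of the array, including reshape status and member roles
mdadm --detail /dev/md0
```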
In the GUI, under RAID management, /dev/md0's state was "clean". After the above commands it changed to "clean/reshaping".
(*So it seems these mdadm commands, run from the command line, don't break OMV's GUI.*)
– During the "reshape", the share and its files remained available. (I played an MP3 from the share while the array was reshaping.)
– While I didn’t time it precisely, integrating a 5GB drive took roughly 10 to 15 minutes in a VM that can’t reach 100% CPU utilization. (I’d assume that full access to the VM’s host CPU, or a faster physical CPU, would do the job faster.) Regardless, if this scales, at roughly 12 minutes for restriping a 5GB drive into a 15GB array, adding a 1TB drive to an array of 1TB drives would take a heck of a lot longer.
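To put a rough number on "a heck of a lot longer": if reshape time scaled linearly with the data to restripe (a big "if" – on real hardware, disk throughput and mdadm's speed limits usually dominate, not CPU), 12 minutes per 5GB suggests:

```shell
# Very rough linear extrapolation; real reshape speed depends on disk
# throughput, md's speed_limit_min/max settings, and concurrent I/O.
minutes_per_5gb=12
gb=1000   # treating a 1TB drive as 1000GB for the estimate
echo "$(( minutes_per_5gb * gb / 5 )) minutes"        # 2400 minutes
echo "$(( minutes_per_5gb * gb / 5 / 60 )) hours"     # 40 hours
```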
At the end of the array reshape, the array was just a bit below 15GB.
I went into <File systems>, clicked on "resize", and the operation ended with an ext4 filesystem of 14.69GB.
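For reference, on ext4 the GUI "resize" step amounts to growing the filesystem to fill the enlarged device. I haven't verified the exact invocation OMV uses, but the standard command-line equivalent would be along these lines:

```shell
# Grow the ext4 filesystem to fill the now-larger /dev/md0.
# ext4 supports online growing, so the share can stay mounted.
resize2fs /dev/md0
```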
While the above probably wouldn’t cover all situations or possibilities, adding a drive to a RAID5 array in this manner was straightforward and, seemingly, painless.
As a last detail, I created one more 5GB disk, to see the result of the "add" command.
mdadm --add /dev/md0 /dev/sdf
A 5GB hot spare was "added" to the array without affecting its size.
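The spare status can be confirmed from the shell; a hot spare sits idle until an active drive fails, at which point mdadm rebuilds onto it automatically:

```shell
# "Spare Devices : 1" should appear in the summary, and the device
# table at the bottom should list /dev/sdf with a "spare" role
mdadm --detail /dev/md0

# /proc/mdstat marks spares with "(S)" after the device name
cat /proc/mdstat
```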