I just encountered some annoyance in OMV. This is probably my fault in the first place. I replaced the 8TB HDD that I had on my backup OMV server without removing the references to it. What happened now is what you can see on the featured image above. I could not delete this disk entry in the File system section.
The proper way of removing a disk from an OMV server is, roughly: delete or move any shared folders that live on the disk, remove the file system from Storage > File Systems in the web UI, and only then power down and pull the drive.
I did not do any of those steps. I went ahead, pulled the physical disk from the server, and replaced it with a 10TB HDD. The problem I created was that I could not remove the damn disk from the File System section.
I had to modify /etc/openmediavault/config.xml and manually delete the configuration blocks related to the disk in question. In my case, I was using a SnapRAID and MergerFS combination, so I had to remove the <mntent></mntent> and <drive></drive> blocks that were related to the disk.
If you take this route, make sure you back up config.xml before modifying it.
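For example, a timestamped copy keeps the original safe and lets you roll back if the edit goes wrong (the path is the one from this post; adjust if yours differs):

```shell
# Back up config.xml with a timestamp before editing it
CONF=/etc/openmediavault/config.xml
cp -a "$CONF" "$CONF.bak.$(date +%Y%m%d-%H%M%S)"
```

If something breaks, copying the backup over config.xml restores the previous state.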
Just to give you an idea, these are the blocks I removed from config.xml. If you have the same problem but your setup is different, you will probably have to search the entire config.xml file to find the entries for the disk you want to remove.
...
... omitted for brevity ...
<mntent>
  <uuid>91ae480e-99e5-431e-8c37-b71f145447d0</uuid>
  <fsname>/dev/disk/by-label/DISK1</fsname>
  <dir>/srv/dev-disk-by-label-DISK1</dir>
  <type>ext4</type>
  <opts>defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
  <freq>0</freq>
  <passno>2</passno>
  <hidden>0</hidden>
</mntent>
... omitted for brevity ...
<drive>
  <uuid>6fb2e4b6-b311-45d7-9599-00c70a2f7718</uuid>
  <mntentref>91ae480e-99e5-431e-8c37-b71f145447d0</mntentref>
  <name>SR-DISK1</name>
  <label>DISK1</label>
  <path>/srv/dev-disk-by-label-DISK1</path>
  <content>1</content>
  <data>1</data>
  <parity>0</parity>
</drive>
... omitted for brevity ...
Cheers!
This worked for me!
Thanks.
I commented out the lines instead of deleting them, but that worked as well.
Haha, the captcha timed out. That needs to be looked at.
This is just a stupid and annoying thing about OMV. Why can’t they just do a refresh and allow the user to remove those missing disks after confirmation?
This is one of the reasons why I stopped using OMV and built my NAS using a plain Debian and installed the packages that I need.
OMV is a great distro, but it is too much work to maintain.
Would you recommend UnRaid over OMV 5? I am currently running OMV5 with a simple RAID1 of two HDDs, and I just received four additional HDDs. I was also wondering if UnRaid offers more performance than OMV. (mergerfs+snapraid seems great, until one of your HDDs dies and you hit the kind of problems you are describing in this post.)
I have tried both, but I ended up deploying a plain Debian with the packages that I need for a NAS. I like it this way. It is cleaner, and it is easier to find a solution to a problem because there is no vendor config mess. Both OSes are great. For speed, I think OMV is a bit faster because Unraid’s parity updates are real-time, while OMV with SnapRAID runs its syncs on a schedule. Unraid has a cache drive to deal with its slowness, but when I was using both, it felt like OMV was still faster. For maintenance, Unraid is easier to…
Thanks for this, solved the annoying missing drive situation after I had a drive fail on me.
This is beyond annoying. In my case, I am unable to “apply” config changes. The five-minute-long attempt to apply changes, followed by a long, difficult-to-parse error page, needs to be addressed. A few simple checks by OMV could produce a note to the user, such as “remove this volume from A, B, C, etc., before removing the volume” or “oh, I see that you already removed this volume; would you like to delete all references to it as well?”
I am not using OMV anymore. Actually, I didn’t deploy it as my Unraid replacement. I ended up with just a plain Debian with the packages I needed. During my time testing OMV, I got tired of fixing OMV-specific problems.
This worked for me, thank you very much!