There are a few threads here going back and forth without ever really getting a clear answer on media server storage. When a drive fails, unRAID begins emulating it in real time. How long does SnapRAID tend to take to compute parity?
Modern high-end storage is getting so fast that accessing it remotely, even over, say, a 100-gigabit InfiniBand fabric, adds very significant latency; remote access would eliminate the entire point of doing things like putting 3D XPoint storage onto the memory bus.

I am a little confused about SnapRAID: it seems like a backup-only solution, not something I could configure and then start copying files and folders to. Am I wrong?

It is advisable to immediately put a new disk into the system so that unRAID can start rebuilding a … And if you already have Areca, don't even bother with unRAID. You might also want to read this article about btrfs and ZFS:

I think you are confusing what a SAN is for. Even our Compellent storage array at work is built that way, but it has 16-channel SAS HBAs (4x 8087 connectors per card) in the controller nodes, and I think around 80 7K spindles all sitting on a single quad-channel (single 8087 cable) connection for the low tier right now.

I was also wondering if SnapRAID could work in the future with my theoretical InfiniBand/Fibre Channel networking scenarios.

Yes, a SAN presents LUNs that you mount as disks, but the whole point of the SAN is to provide the redundancy you are wanting from SnapRAID. It's not that it wouldn't work; it's that you would be losing space due to two layers of parity.

I see something about folder replication and storage pooling on there... One thing I had considered as possible for that "10 users, 10 separate drives" multitasking scenario was to have the files that need high availability (like drive boot images, which needn't be huge) on ALL drives, so it scales out to serve maximum speed to any of them. What do you guys recommend?

If you need real-time RAID, which always updates parity data when data changes, then you are better off using your existing Areca or other hardware RAID controllers.
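On the "backup only" confusion: SnapRAID doesn't copy your data anywhere. Your data disks stay ordinary filesystems that you copy files and folders to as usual, and SnapRAID just computes parity over them when you tell it to. Here's a minimal sketch of a config, assuming hypothetical mount points like /mnt/disk1-4 for data and /mnt/parity1 for a dedicated parity disk (adjust everything to your own layout):

```
# /etc/snapraid.conf -- minimal example; every path below is a placeholder
# one (or more) dedicated parity disk(s)
parity /mnt/parity1/snapraid.parity

# content files track what has been protected; keep copies on several disks
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content

# the data disks -- normal filesystems you keep copying media onto
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
data d4 /mnt/disk4

exclude /lost+found/
```

After copying files on as usual, `snapraid sync` updates the parity and `snapraid status` reports the state of the array. Keep in mind nothing is protected until the first sync has completed.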
I'm sorry if I've missed this, and I know you've talked about wanting to store (static?) boot images, but what general type of files are you looking to store?

Practically speaking, the difference between SAN and NAS is the export: SAN is block, NAS is file; that's it.

Then for later expansion you stick a second SAS card in with external ports and plug a JBOD into that - repeat as needed.

Asking on the FreeNAS boards basically showed me that people had 32TB arrays, but barely any larger. I think it's not so much that a SAN is required for the highest levels of performance - it's more that people needing the highest levels of shared-storage performance all use SANs to accomplish that, or at least used to.
- More large systems in existence than with systems like FreeNAS (lots of people seem to have 70TB and up, and 24 drives is not an uncommon configuration; when I looked into FreeNAS, over 32TB was uncommon and RAM limits under ZFS were an issue)

SnapRAID is available for Windows as well as Linux. I am probably going with the Linux version, so I posted it here, although that's not absolutely set in stone.

I've been very happy with my SnapRAID setup since I started using it - you've got the pros and cons pretty much all listed there.

Stablebit DrivePool + SnapRAID work wonderfully together.
Hi everybody, I'm setting up my new HP MicroServer N52L with 4x 2TB WD EARS drives and a dedicated disk for OMV. I want to configure a RAID 5 with all my 2TB disks, but I'm a little bit confused. Why use SnapRAID instead of OMV's RAID management?
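For what it's worth, the practical difference shows up when a disk dies. With md/OMV RAID 5 the array rebuilds in the background; with SnapRAID you restore the lost files yourself. A rough sketch of that, where the device name, mount point and filesystem choice are all placeholders (check the SnapRAID manual before leaning on this):

```
# hypothetical recovery of a failed data disk (all names are examples)
# 1. install the replacement drive, format it, and mount it where the old
#    disk lived (the same path the matching "data" line in snapraid.conf uses)
mkfs.ext4 /dev/sdX1
mount /dev/sdX1 /mnt/disk2

# 2. rebuild the lost files from parity plus the surviving data disks;
#    this reads the other disks, so expect it to take hours on a large array
snapraid fix

# 3. verify what was restored
snapraid check
```

The trade-off versus real-time RAID is that only files covered by the last `snapraid sync` can be restored; anything added after that sync is simply gone with the disk.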
unRAID costs about $100 for the number of drives I have and seems pretty simple to set up, booting from a flash drive. I currently have a 1200-series Areca 16-port RAID controller with 16 drives connected in two RAID 5 volumes; I am using this for media backups for two NASes.

It would be more beneficial to build a NAS with either ZFS or SnapRAID, and use normal shares from there.

As far as the block layer, my idea was to do a SAN with block-level storage under SnapRAID. My understanding was that the SAN should present itself to the server as local storage, and I hoped SnapRAID could simply install over the top of that.

You can easily use a single 8-channel SAS controller to run a chassis with 24 hot-swap drives in the front connected through a SAS expander. We have a two-node SAN; each node has two controllers, and you could literally destroy an entire node and not a single thing would know.

But wow… the elitist attitude that oozes out of your response here is the kind of thing that turns people off.

I have been looking at unRAID and SnapRAID and think I would like to try one of these, just not sure which. My main concern is with your #2, as I like the flexibility to be able to add/upgrade disks on the fly (currently using SnapRAID), which seems to be something unRAID is better at than FreeNAS. Even 64TB was unknown territory despite all the people using it.

To get to the performance and space that you are wanting is $$, and you add complexity and management overhead that is unneeded. unRAID implements real-time parity.

SnapRAID is streamlined and easy to set up, and easy to automate however I like. How long it takes depends on how much IO bandwidth your system has and how much data has changed since the last time you ran a sync to update the parity files. SnapRAID (and snapshot RAID in general) is not suited for files that change often. With spinning disks, there's plenty of bandwidth available to stick quite a few drives onto a single controller card - a PCIe x8 slot (common for most SAS controllers) has 4GB/s of throughput for v2, or almost 8GB/s on PCIe v3.

Could you tell me a bit more about it?
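On the "easy to automate" point, one common pattern (sketched here with hypothetical times, paths and log file names, so treat it as an example rather than a recommendation) is to run `snapraid sync` nightly and a `snapraid scrub` pass weekly from cron, so the nightly sync only ever has to cover what changed that day:

```
# crontab -e -- hypothetical schedule; adjust times, paths and logging to taste
# nightly: update parity for whatever was added or changed since the last sync
30 3 * * *   /usr/bin/snapraid sync  >> /var/log/snapraid-sync.log  2>&1
# weekly: re-read a portion of the array and verify it against parity/checksums
0  5 * * 0   /usr/bin/snapraid scrub >> /var/log/snapraid-scrub.log 2>&1
```

Running `snapraid diff` by hand first shows how much has changed since the last sync, which is a reasonable proxy for how long the next sync will take.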