58 Comments

  • milkod2001 - Monday, December 29, 2014 - link

    I could understand this unit being used by business users who need a ready-to-go solution, don't mind the cost, don't have enough IT skills, or aren't willing to pay an IT pro to get a similar or better system built for less than half the price.

    But for home users this is definitely overkill and price-wise it's madness. Would love to see more down-to-earth 2-4 bay NAS units reviewed, maybe a roundup (4-5 options).

    Keep up the good work guys, thanks.
  • PaulJeff - Monday, December 29, 2014 - link

    Agreed. The cost for something like this for even a home pro-user is insane. I would like to see Ganesh do a ZFS shootout based on price with FreeNAS (or similar). For the cost of these NASs, I can build a crazy fast ZFS NAS with SSD L2ARC and 4x 1Gbit NICs.

    I propose Anandtech/Ganesh do a DIY NAS comparison based on price and feature tiers.
  • nathanddrews - Monday, December 29, 2014 - link

    All these COTS solutions seem so expensive for what you get - especially for the DIY portion of AT's readership. The obvious tradeoff is buying something functional from the start for a high price or spending a lot of time building something less expensive.

    A review/comparison of soft-RAID, home-built NAS/server devices would be nice, but understandably time-consuming. Two or three tiers of hardware (Atom-class vs i3-class vs Xeon-class), two or three tiers of OS (UnRAID, Windows Server/Windows 8.1, Stablebit DrivePool, FreeNAS), and then just a preset number of drives (four? eight?) for all of them.
  • PaulJeff - Monday, December 29, 2014 - link

    I considered buying a FreeNAS Mini from iXsystems (the FreeNAS OEM). I'm too lazy to request a quote, but I suspect that for what you get hardware-wise it would be price-competitive with what AT normally reviews.

    I do agree that, though time-consuming, it would be well worth the effort to do a comprehensive DIY vs COTS NAS comparison, with benchmarks to follow.

    My FreeNAS box (ASUS P9A-I, 16GB ECC RAM, 128GB SSD L2ARC and 6x 4TB HGST in RAIDZ2) runs circles around most COTS NAS units at half the price (minus HDDs).
  • DanNeely - Tuesday, December 30, 2014 - link

    Dunno why they have the request-a-quote option (maybe for business customers whose bureaucracy needs a PO instead of a number off the webpage?) because the pricing tab has actual prices. The FreeNAS Mini is $995 without disks and has prices for 4-24TB (pre-redundancy) configurations along with options for more RAM and SSD caches.
  • ap90033 - Friday, January 2, 2015 - link

    That would be a useful review!
  • Adrian3 - Monday, December 29, 2014 - link

    I'll be buying one of these very soon for home use. Why do you think it's insane? I'll be populating it with eight 6TB Red drives - the cost of the unit is a fraction of the cost of the drives. I want something that is fast, can do transcoding, is metal instead of plastic, has a small footprint and low power usage, and is easy to set up. So, what's the alternative that fits those requirements?
  • Morawka - Tuesday, December 30, 2014 - link

    Can these little Bay Trail units even do real 1080p on-the-fly transcoding? I was under the impression that they were too slow for hi-def content, like for a Plex server.
  • Adrian3 - Tuesday, December 30, 2014 - link

    Yes - definitely. Read the section "Real-time & offline HD video transcoding" on this page: http://www.qnap.com/i/uk/product/model.php?II=151
  • nathanddrews - Tuesday, December 30, 2014 - link

    I can tell you right now with 100% certainty that Bay Trail can't do that - it can barely transcode one stream, let alone five. They must have extra embedded silicon dedicated to the process - which explains some of the cost of the unit. Reading through the details, it seems obvious that they have a means of transcoding through the built-in hardware:

    "Transcode Full HD videos on-the-fly or offline with QNAP’s unique transcoding technology. It allows for up to 5 devices to simultaneously view different videos stored on the TS-853 Pro with on-the-fly hardware accelerated transcoding.

    1. Transcode video files to 240p, 360p, 480p, 720p and 1080p resolution
    2. Automatic video transcoding for watched folders
    3. Hardware accelerated transcoding support"
  • Adrian3 - Tuesday, December 30, 2014 - link

    This guy mentions a special transcoding chip: https://forums.plex.tv/index.php/topic/126237-qnap...

    ...and that it's only usable by QNAP's own software - and I wouldn't be using that anyway.

    Still not a deal breaker for me - but maybe I should consider some other options.
  • nathanddrews - Tuesday, December 30, 2014 - link

    I browsed the manual and they have special drivers and software to enable the hardware transcoding. The transcoding software has two functions: "real-time for up to five devices" (though it doesn't specify at what resolution), and a pre-transcoded option where it will make up to five .MP4 versions of the source video at different resolutions (240/360/480/720/1080) that are simply streamed "as is". The manual is clear enough, though, that this specific model has dedicated transcoding hardware that uses its own software/drivers to operate. You should probably contact them first if you plan on using Plex or something.
  • ganeshts - Tuesday, December 30, 2014 - link

    It can definitely do five stream simultaneous hardware accelerated transcoding using the built-in Quick Sync engine.

    I have tested it and it works beautifully, but only with QNAP's own mobile apps for real-time processing. You can also set up offline accelerated transcoding if you want to use other playback apps / have plenty of disk space to spare.

    I do have a small piece coming up talking about hardware accelerated transcoding in NAS units.
  • nathanddrews - Tuesday, December 30, 2014 - link

    I'd love to see that piece; I can't get QS to do more than one Blu-ray transcode.
  • Gigaplex - Wednesday, December 31, 2014 - link

    Are you sure it's using Quick Sync? If it is, then 3rd party software shouldn't have much trouble making use of it.
  • ganeshts - Wednesday, December 31, 2014 - link

    I am quite confident that it uses Quick Sync - in particular, it just uses a customized build of ffmpeg with Quick Sync support [ https://trac.ffmpeg.org/ticket/2591 ]
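
    For anyone curious what an ffmpeg + Quick Sync invocation looks like, here is a minimal sketch, not QNAP's actual pipeline: it assumes an ffmpeg build with QSV support (per the ticket above) is on the PATH, and the file names are hypothetical.

    ```python
    # Minimal sketch of a Quick Sync (QSV) transcode driven from Python.
    # Assumptions: "ffmpeg" on PATH was built with QSV support (--enable-libmfx),
    # and "movie.mkv" is a hypothetical 1080p-or-larger source file.
    # This illustrates the general approach, not QNAP's actual invocation.
    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "movie.mkv",            # hypothetical source file
        "-vf", "scale=-2:1080",       # scale to 1080p, keeping aspect ratio
        "-c:v", "h264_qsv",           # hardware H.264 encode via Quick Sync
        "-preset", "fast",
        "-b:v", "4M",                 # target video bitrate
        "-c:a", "aac",
        "-b:a", "128k",
        "movie_1080p.mp4",
    ]
    subprocess.run(cmd, check=True)
    ```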
  • mhaubr2 - Monday, December 29, 2014 - link

    I like the idea of a shoot-out or comparison with DIY solutions.
  • ap90033 - Friday, January 2, 2015 - link

    Yes please!
  • ap90033 - Friday, January 2, 2015 - link

    I have been trying to figure out a good, fast ZFS NAS but can't come close to the price tag of this. Am I missing something? I am trying to build something that would hold at least 8 drives (like this unit). ECC RAM and controllers, etc. jack the price up a bit...
  • fmaxwell - Sunday, April 30, 2017 - link

    IT professionals buy NASs just like they buy any other server. Any corporate IT director earning his pay knows that it's idiotic to have his staff building an 8-bay NAS when he can buy one this cheaply. If he's got any experience, he's seen ballooning costs and missed deadlines when some in-house, build-it-yourself project runs into trouble. By the time you add burdened labor on top of the parts cost, most companies recognize that having the IT staff build a NAS makes about as much sense as having the cafeteria staff start a dairy farm.

    As for home users, get some perspective. When I started out in computers, a single hard drive or a dumb terminal cost more than this. I swear to God that computer prices could drop to an average of $10 for a home-built desktop PC and someone would be posting that it's "madness" for a home user to spend $14 for a pre-built system with a warranty.

    I bought this NAS and loaded it up with eight 3TB WD Red drives for my home network. The whole thing cost me under $1,800. How can anyone get all riled up about that when it gives them 18TB of RAID 6 network storage and can act as a server for WordPress, email, media, etc.? Doesn't your time have any value at all?
  • hrrmph - Monday, December 29, 2014 - link

    How about a shrunken-down unit with 8x 2.5-inch bays and some 1TB SSDs?

    When will the bandwidth to get the data in and out quickly be available?
  • Jeff7181 - Monday, December 29, 2014 - link

    It's available today if you can spend a few thousand dollars on 10GbE or Fibre Channel.
  • fackamato - Monday, December 29, 2014 - link

    "8 x 2.5 inch bays and some 1TB SSDs"

    So say RAID5: 7TB of data and roughly 3.5GB/s (~28Gbps) of sequential throughput. Yeah, you'd need to bond at least 3x 10Gb interfaces for that.
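
    For illustration, the back-of-the-envelope math behind those numbers, assuming ~500MB/s of sequential read per SATA SSD (an assumed ballpark, not a measured figure):

    ```python
    # Rough sequential throughput for 8x 1TB SATA SSDs in RAID5.
    # The 500 MB/s per-drive read speed is an assumed ballpark for a SATA SSD.
    drives = 8
    per_drive_mb_s = 500
    data_drives = drives - 1                 # RAID5 gives up one drive to parity

    usable_tb = data_drives * 1              # 1TB drives -> 7 TB usable
    throughput_gb_s = data_drives * per_drive_mb_s / 1000
    throughput_gbps = throughput_gb_s * 8

    print(f"{usable_tb} TB usable, ~{throughput_gb_s:.1f} GB/s (~{throughput_gbps:.0f} Gbps)")
    # -> 7 TB usable, ~3.5 GB/s (~28 Gbps), i.e. roughly three bonded 10GbE links
    ```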
  • SirGCal - Monday, December 29, 2014 - link

    Why are all of these 8-drive setups configured as RAID-5? Personally, the entire point of having so many disks is more redundancy. At least RAID 6 (or even RAID-Z3).

    Personally, I have a 24TB array and a 12TB array, effectively. Each is an 8-drive server (not one of these pre-built boxes, but an actual server), one with 4TB and one with 2TB drives, running RAID 6 and RAID-Z2. Both easily outperform the networks they are attached to. But they were designed to be as reasonably secure as possible, and they are plenty fast for even small business use. And I have to lose 3 drives to lose data.

    When you do lose a drive, the rebuild takes time and stresses the remaining drives, and that's when you are most likely to lose another one. Assuming you don't do look-ahead drive replacement, etc. and just let it run into the ground... once one drive fails, the others are all tired and aging, and the stress involved in rebuilding onto one drive can cause another one to go. Should that happen in RAID 5, you're done. With RAID 6, you at least have one more safety net.

    Knock on wood, I've only ever had a RAID 6 rebuild fail once, whereas I've had multiple RAID 5s fail, and that's over many dozens of servers and many, many years (decades). Hence why moving to RAID 6 was important. IMHO, RAID 5 is peachy for systems with <= 5 drives. But after that, especially with larger drives taking longer rebuild times, moving up to more redundancy is the whole point of having more drives in a unit (assuming one single volume, etc. - there are always other configurations with multiple RAID 5 or other volumes...).

    Just my opinion, but that's what I see when I see all of the RAID 5 tests on these could-be very large arrays. And I'm not even going into the cost of these units, but I don't see RAID 6 times tested at all on the final page. If I were ever to get something like this, RAID 6 performance would be the most important area.
  • Icehawk - Monday, December 29, 2014 - link

    Agreed - I run a RAID 1 (just 2 HDs) at home and its sole purpose is live backup/redundancy of my critical files; I don't really care about speed, just data security. I don't work in IT anymore, but when I did that was also the driving force behind our RAID setups. Is this no longer the case?
  • kpb321 - Monday, December 29, 2014 - link

    I am not an expert, but my understanding is that it is more than just that. The size of drives has increased so much that with a large array like that, a rebuild to replace a failed disk reads so much data that the drives' unrecoverable read error (URE) rate ends up being a factor, and a fully functional drive may still end up throwing a read error. At that point the best-case scenario is that the system retries the read and gets the right data, or ignores it and continues on, just corrupting that piece of data; the worst case is that the RAID now marks that drive as failed and thinks you've just lost all your data due to a two-drive failure.

    The first random article about this topic =)
    http://subnetmask255x4.wordpress.com/2008/10/28/sa...
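
    For a sense of the math behind that concern, here is a rough sketch that takes the consumer URE spec (1 error per 10^14 bits) at face value, with an assumed 8-drive array of 4TB disks; as a later comment notes, the spec itself shouldn't be read too literally.

    ```python
    # Rough odds of hitting at least one unrecoverable read error (URE) while
    # reading every surviving drive during a RAID5 rebuild. Takes the spec-sheet
    # consumer URE rate (1e-14 per bit) at face value, which is a big assumption.
    ure_per_bit = 1e-14
    surviving_drives = 7                      # 8-drive RAID5 with one failed disk
    drive_size_tb = 4
    bits_read = surviving_drives * drive_size_tb * 1e12 * 8

    p_no_ure = (1 - ure_per_bit) ** bits_read
    print(f"Chance of at least one URE during the rebuild: {1 - p_no_ure:.0%}")
    # -> roughly 89%, which is why large RAID5 arrays look so scary on paper
    ```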
  • shodanshok - Wednesday, December 31, 2014 - link

    Please take all the articles about URE rates with a (big) grain of salt.

    Articles like the one linked above suggest that a multi-TB disk is doomed to regularly throw UREs. I even read one article stating that with the consumer URE rate (10^-14) it would be almost impossible to read a 12+ TB array without error.

    These statements are (generally) wrong, as we don't know _how_ manufacturers arrive at the published URE numbers. For example, we don't know if they refer to a very large disk population, or to a smaller set of aging (end-of-warranty) disks.

    Here you can find an interesting discussion with other (very knowledgeable) people on the linux-raid mailing list: http://marc.info/?l=linux-raid&m=1406709331293...

    For the record: I read over 50 TB from an aging 500 GB Seagate 7200.12 disk without a single URE. At the same time, much younger disks (500 GB WD Green and Seagate 7200.11) gave me UREs much sooner than I expected.

    Bottom line: while UREs are a _real_ problem (and the main reason to ditch single-parity RAID schemes, mainly on hardware RAID cards where a single unrecoverable read error occurring in an already degraded scenario can kill the entire array), many articles on the subject are absolutely wrong in their statements.

    Regards.
  • PaulJeff - Monday, December 29, 2014 - link

    Having been in the storage arena for a long time, I'd say you have to look at performance and storage requirements. If you need high IOPS with low overhead from RAID-based read and write operations, RAID5 has less of a penalty than RAID6. In terms of data protection, mathematically, RAID6 is more "secure" when it comes to unrecoverable read errors (UREs) during RAID rebuilds with high-capacity (>2TB) drives and 4 or more drives in the array.

    I never rebuild RAID arrays, whether hardware- or software-based (ZFS), due to the issue of UREs and critically long rebuild times. I make sure I have perfect backups because of this. Blow out the array, recreate the array or zpool, and restore the data. MUCH faster and less likely to have a problem. Risk management at work here.

    To get over the IOPS issue with a large number of disks in an array, I use ZFS, max out the onboard RAM, and use a large L2ARC when running VMs. For database and file storage, lots of RAM and a decent-sized L2ARC and ZIL are key.
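
    For context on the RAID5-vs-RAID6 write penalty mentioned above, a sketch using the usual rule-of-thumb penalties (4 back-end IOs per random write for RAID5, 6 for RAID6); the ~100 IOPS per 7200rpm SATA disk is an assumed ballpark figure.

    ```python
    # Rule-of-thumb random-write IOPS for an 8-disk array, showing why RAID5
    # carries less of a write penalty than RAID6. 100 IOPS per 7200rpm SATA
    # disk is an assumed ballpark, not a measured number.
    disks = 8
    iops_per_disk = 100
    raw_iops = disks * iops_per_disk

    write_penalty = {"RAID5": 4, "RAID6": 6}     # back-end IOs per front-end write
    for level, penalty in write_penalty.items():
        print(f"{level}: ~{raw_iops // penalty} effective random-write IOPS")
    # -> RAID5: ~200, RAID6: ~133
    ```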
  • SirGCal - Tuesday, December 30, 2014 - link

    My smaller array mirrors the bigger one for the critical folders - a simple rsync every night. And I have built similar arrays in pairs that mirror each other all the time for just that reason. However, I haven't had an issue with rebuild times... even on my larger 24TB array, the rebuild takes ~14 hours. Meanwhile, a full copy of just the 12TB array's contents would take longer over a standard 1G network. The 'cannot live without' bits are stored off-site, sure, but still, pulling them back down over our wonderfully fast (sarcastic) USA internet would be painful too. I think it comes down to how big your arrays are and how long it actually takes to rebuild vs repopulate. My very large arrays can rebuild much faster than they can repopulate, for example.
  • ganeshts - Tuesday, December 30, 2014 - link

    The reason we do RAID-5 is just to have a standard comparison metric across the different NAS units that we evaluate. RAID-5 stresses whatever parity acceleration is available, while also exercising the storage controllers (direct SATA, SATA-to-PCIe bridges, etc.).

    I do mention in the review that we expect users to set up multiple volumes (possibly with different RAID levels) on units with 6 or more bays in real-life use.

    We could do a RAID-6 comparison if we had more time for evaluation at our disposal :) Already, testing our RAID-5 rebuild / migration / expansion takes 5-6 days, as the table on the last page shows.
  • ap90033 - Wednesday, December 31, 2014 - link

    RAID is not a REPLACEMENT for BACKUP and BACKUP is not a REPLACEMENT for RAID.... RAID 5 can be perfectly fine... Especially if you have it backed up. ;)
  • shodanshok - Wednesday, December 31, 2014 - link

    I think you should consider RAID 10: recovery is much faster (the system "only" needs to copy the contents of one disk to another) and the URE-imposed threat is way lower.

    Moreover, remember that large RAIDZ arrays have the IOPS of a single disk. While you can use a large ZIL device to transform random writes into sequential ones, the moment you hit the platters the low IOPS performance can bite you.

    For reference: https://blogs.oracle.com/roch/entry/when_to_and_no...
  • shodanshok - Wednesday, December 31, 2014 - link

    I agree.

    The only thing to remember when using a large RAIDZ system is that, by design, RAIDZ arrays have the IOPS of a single disk, no matter how many disks you throw at them (throughput will increase linearly, though). For increased IOPS capability, you should construct your zpool from multiple striped RAIDZ vdevs (similar to how RAID50/RAID60 work).

    For more information: https://blogs.oracle.com/roch/entry/when_to_and_no...
  • ap90033 - Friday, January 2, 2015 - link

    That is why RAID is not Backup and Backup is not RAID. ;)
  • cjs150 - Wednesday, January 7, 2015 - link

    Totally agree. As a home user, RAID 5 on a 4-bay NAS unit is fine, but I have had it fall over twice in 4 years: once when a disk failed and a second time when a disk worked loose (probably my fault). The failure was picked up, the disk replaced and the RAID rebuilt. Once you have 5+ disks, RAID 5 is too risky for me.
  • jwcalla - Monday, December 29, 2014 - link

    Just doing some research and it's impossible to find out if this has ECC RAM or not, which is usually a good indication that it doesn't. (Which is kind of surprising for the price.)

    I don't know why they even bother making storage systems w/o ECC RAM. It's like saying, "Hey, let's set up this empty fire extinguisher here in the kitchen... you know... just in case."
  • Brett Howse - Monday, December 29, 2014 - link

    The J1900 doesn't support ECC:
    http://ark.intel.com/products/78867/Intel-Celeron-...
  • icrf - Monday, December 29, 2014 - link

    I thought the whole "ECC required for a reliable file system" was really only a thing for ZFS, and even then, only barely, with dangers generally over-stated.
  • shodanshok - Wednesday, December 31, 2014 - link

    It's not overstated: any filesystem that proactively scrubs the disk/array subsystem (BTRFS and ZFS, at the moment) _needs_ ECC memory.

    While you can ignore this fact on a client system (where the value of the corrupted data is probably low), on a NAS or multi-user storage system ECC is almost mandatory.

    This is the very same reason why hardware RAID cards have ECC memory: when they scrub the disks, any memory-related corruption can wreak havoc on array (and data) integrity.

    Regards.
  • creed3020 - Monday, December 29, 2014 - link

    I hope that Synology is working on something similar to the QvM solution here. The day I started my Synology NAS was the day I shut down my Windows Server. I would, however, still love to have an always-on Windows machine for the use cases that my NAS cannot handle or that would be onerous to set up and get running.
  • lorribot - Monday, December 29, 2014 - link

    Two things strike me. $210 for 8GB of RAM - how can anyone justify that? Even Apple isn't that expensive.
    RAID 5, really? With 4TB SATA disks, if you are going to bother with redundancy then RAID 6, please. From painful experience, RAID 5 no longer cuts the mustard for protection given SATA's poor data verification and the huge rebuild time on a 4TB-based array - I really wouldn't bother. If your data is that important then you need to be backing up the changes or using a proper storage system.
    Pro NAS boxes like these are overpriced for what they offer, which in reality is not a lot; as for running VMs off of it, I personally wouldn't bother.
    Halve the price and offer some form of asynchronous replication and you may just be on to something.
    As it is, one of HP's MicroServers with a bunch of disks in it would offer better value.
  • mhaubr2 - Monday, December 29, 2014 - link

    Seriously not trolling here - trying to better understand. Coming from the original Windows Home Server and its Drive Pool concept has me spoiled. I'm now using WHS2011 and Drive Bender, and it seems like the way to go. With pooled drives I can expand capacity easily using mix-and-match drives of different brands, sizes and vintages. This seems far less risky than using 3 or more identical drives in a RAID-5 or 6 array. I don't have to worry about getting a bad batch of drives or having a second (or third) drive fail on rebuild. This is how I see it, but I know there are plenty of folks out there that are proponents of RAID-x. I'm looking to build a new media server, so why should I consider a RAID setup over drive pooling?
  • PEJUman - Monday, December 29, 2014 - link

    I actually have the same thought process as you, but my mindset was set up around a single family file server's demands, where a single drive with duplication would be sufficient in terms of performance/reliability. RAID arrays allow much higher theoretical performance compared to Drive Bender's approach, not to mention better than N/2 efficiency for single-disk failure tolerance.

    I personally like Drive Bender's solution for my needs, but would not use it for business-oriented needs: 100% uptime, high performance and a multi-disk-failure-tolerant setup.
  • DanNeely - Tuesday, December 30, 2014 - link

    Between long rebuild times and the risk of a URE bringing down the array, RAID10 (or its equivalents) has largely replaced RAID5/6 in larger arrays and SANs.
  • DanNeely - Tuesday, December 30, 2014 - link

    FWIW I'm running WHS2011 but with DrivePool instead. Quite happy with it so far, but it's only 16 months until end of life; and with the WHS series seemingly dead as well, I've been paying closer attention to the rest of the NAS world hoping to find a suitable replacement. So far without much luck.

    ZFS seems like the closest option; but unless I've missed something (or newer features have been added since the blogs etc. that I've read), options for expanding are limited to swapping out all the drives one at a time for larger ones, rebuilding each time, and only getting more usable space after all the drives have been replaced; or adding a minimum of two drives (in a RAID 1 analog) as a separate sub-array.

    Aside from Drobo, which has recovery issues due to its proprietary FS (no option to pull the drives and stick them into a normal PC to get data off if it goes down) and is reported to slow down severely as it fills to near capacity, I'm not aware of anything else on the market that would allow for creating and expanding a mirrored storage pool out of mismatched disks the way WHS v1 did or WHS2011 does with 3rd-party disk management add-ons.
  • Brett Howse - Tuesday, December 30, 2014 - link

    If you are happy with WHS 2011 (that's what I run too) you may want to check out Storage Spaces in Windows 8/8.1 and Server 2012/2012 R2.
    http://technet.microsoft.com/en-us/library/hh83173...

    It's like WHS v1's drive extender but done right. You can do mirror or parity to one or more drives, as well as mix and match the drives including SSDs for different speed tiers. Might be worth your time to check out.

    Because this is all available on Windows 8.1, you can do it for a low cost compared to buying Windows Server. What you'd lose though (and this is why I haven't moved off WHS yet) is the amazing full device backup that WHS offers. This is only available in Windows Server Essentials as far as I know, which is a big licensing fee compared to what WHS used to retail for.
  • Gigaplex - Wednesday, December 31, 2014 - link

    It's not done right. If you have a parity pool and want to add just one more drive later, well, you can't. If you started with 3 drives, the only way to expand is to add 3 drives at a time.
  • jabber - Tuesday, December 30, 2014 - link

    Why do folks keep bleating on about RAID5? It's been classed as obsolete for nearly 5 years.

    Move on folks.
  • fackamato - Friday, January 2, 2015 - link

    Because it's still applicable for small drives, e.g. SSDs or sub-2TB disks.
  • chocosmith - Tuesday, December 30, 2014 - link

    I have the TS-453 Pro. As a NAS it's great, but I also got it for the HDMI so I could kill two birds with one stone and use it as a media box.
    Unfortunately there is a huge amount of video tearing, and the power supply fan is too loud for it to hang near the TV. Overall, if I were doing it again I'd simply get a Celeron chip and a small case and build it myself; I'd also probably use Windows.

    Also, as others noted about the RAID setup: after failing a RAID 1 during a rebuild, I now simply use no RAID. One disk can flood a 1Gb LAN, so speed isn't an issue.
    Instead I just have the two disks; one is shared, the other isn't. At 2am every morning I copy the changed files to the other. This also gives me some "oops, I deleted something" breathing space. I don't need critical RAID.
    My primary is an SSD; it's also used for torrents and other chatty stuff.
  • chocosmith - Tuesday, December 30, 2014 - link

    Just to add, as others stated, it's expensive for the RAM upgrade. I took out the 2GB that came with it and installed 8GB (2x4GB) - much cheaper. A guy on the forums managed to get 16GB working (even though the Intel spec says the chip can't handle it).
  • Adrian3 - Tuesday, December 30, 2014 - link

    I'm using an Intel NUC as my media box - with MediaBrowser. It's fantastic (and tiny). The fan can get a bit loud if it's transcoding, but I have it behind a cabinet door, so I can't hear it. And anyway, 99% of the stuff I watch is direct-played, not transcoded, and it's very quiet when doing that.
  • ganeshts - Tuesday, December 30, 2014 - link

    I am also not bullish on using a NAS as an HTPC, which is why I don't give too much importance to the HTPC / XBMC aspect. A NAS should fulfil its primary duties - serving files well, and doing real-time transcoding if necessary. Anything else is just gravy on top. VM capabilities are appreciated - particularly if the VM works on data that is on the NAS itself. Other HTPC aspects - not so much. This is why I think Synology is not missing much by avoiding HDMI output on their NAS units.
  • shaunpugh - Tuesday, December 30, 2014 - link

    The thing all of these types of reviews seem to miss is support. Try logging a support call and see what kind of response you get. Synology might have a perceived 'win' in this review, but their support, at least in the UK, is non-existent.
  • Adrian3 - Tuesday, December 30, 2014 - link

    I had a problem with my current (older-generation) 8-bay QNAP which was causing a streaming pause when I started to copy new data to it. The support guys were great. They spent quite a bit of time troubleshooting with me logged in via TeamViewer. They supplied a firmware patch, which was eventually incorporated into an official firmware release.
  • intiims - Tuesday, December 30, 2014 - link

    All of these devices are very expensive... and all of them are almost the same.
    Read about Hard Drives on http://www.hddmag.com/
  • CiccioB - Sunday, January 4, 2015 - link

    I would like to add my vote for an article on NAS units targeted at home users.
    It is nice to read articles like these, but a review of a $1000 NAS, disks excluded, is quite useless for almost everyone.
    The market offers a lot of solutions for home users, and it is not really easy to understand which one is right in terms of price/performance/features and, most of all, ease of use.
    For example, you have never reviewed a single WD cloud solution (1, 2 or 4 disks); they are cheap and may be enough for most users, if only they knew what these devices can and cannot do.
    Comparing them directly with the more expensive QNAP/Synology/Buffalo solutions would give an indication of whether all those added features and the setup+maintenance time are really worth what they cost.

    Thanks in advance
  • Evadman - Monday, January 5, 2015 - link

    I have a TS-853 Pro 8G, currently populated with eight 3TB HGST drives: 7 in RAID6, one hot spare. I got it to replace a power-hungry server that was using an Adaptec 52445 controller with 20 drives. I also used that server as a VM host for testing, so finding a NAS that supported VMs really helped me choose the QNAP.

    My old server could transfer around 200MB/s while the QNAP is at around 115MB/s, which is acceptable for my use case. The VM setup is decent, but transferring a VM from a Windows Server to the QNAP box is a PITA. Not QNAP's fault really; that's a Windows proprietary issue with the VHD format. So far, no issues running a VM. As far as I can tell, the board will support another 8GB of memory, but QNAP doesn't support it. I haven't yet purchased more memory to test that, though. It would help with the VM hosting. Virtualization Station only supports 2 administrator accounts, which can be trouble for SMBs.

    As a note, this review says the 853 can only support 2 VMs at a time. That is incorrect. As of July, Virtualization Station 1.1v2088 and later no longer have a hard limit on the number of concurrent active VMs. As long as there is memory and CPU available, have at it creating more VMs. That's why I want more than 8GB of memory.

    The only worry I really have with QNAP is that their support seems not so good. If you read through the forum, it is filled with issue reports without responses from QNAP. As a power user, I suppose that is alright, but for an SMB, especially one without an IT professional, that would worry me greatly. What happens if the box breaks and you need to swap the drives into a replacement box? What if there is a config issue you can't fix? Definitely worrying. The forum has a few non-QNAP people that really know what they are doing, though.

    Also, something I researched and tried out was QNAP's hook into Amazon Glacier. QNAP really messed up the beta of Glacier support, which is probably why it isn't available in their app system yet. It is not optimized at all for Glacier. My NAS has something like 1.7 million files on it. Uploading them to Glacier the way the app does it would cost $340 just to upload. The app doesn't warn the user that it isn't optimized or that it will cost so much. Download is even worse, at something like a grand, because of the coding. They really need to understand how to work with third-party services, because other apps have similar issues.

    The NAS itself has been great for the core features, though. A bit pricey, but so far I am happy with it.
