Danielgingerich (Serrano, OP), Feb 7, 2018 at 1:40 PM

Scott Alan Miller wrote:

This is completely wrong. Virtualization is ALWAYS necessary. That you have only one or two servers has nothing to do with it. This is a dead horse; we've covered it so much, for so long. This was a discussion around 2007, and the last remnants of "it's okay not to virtualize," along with the confusion of virtualization with consolidation, should have died out more than a decade ago.

http://www.smbitjournal.com/2015/04/virtualizing-even-a-single-server/

This is just bad. Virtualization itself is free; this is just bad estimation. If you have to do this, you have significant problems somewhere, either in politics or in the estimation process. Any time you intentionally do this, it means you have something that should be fixed rather than papered over with false estimates.

Again, totally backwards. "You get what you pay for" is nonsense in the real world; we also say "the best things in life are free." There aren't restrictions on those free hypervisors, except ESXi. Basically, you see paying money as value, so anyone who overcharges you is better than anyone who donates time or gives you good value. This is terrible logic and should never be used in business.

This is "often true" but is not a hard rule. A hard rule would imply best practice; this is at best a rule of thumb. There are TONS of times that you should combine roles, like AD, DNS, and DHCP, or an application and its database on the same VM. In the SMB, it's likely most of the time rather than once in a while.

Again, there are cases where this might be okay, but very rarely. Nothing in IT, nothing ever, is an "always do" like this. For something this extreme, it's not even good 50% of the time; it's maybe a 1% of the time kind of thing. And if you stocked all of those spares, why not build HA with them instead of leaving them sitting around as parts? Not the best use of what you have on hand.

While nothing is "always," this is pretty close to being the opposite. In all but super rare cases, you keep storage WITH hosts. It is cheaper, safer, and faster. This is one of the most important mistakes people make that we spend time correcting; other than RAID decisions, no single thing has been addressed more often here on SW in the last five years.

There are loads of cases where this makes sense, but just as many where it doesn't. Like everything in IT, this should be "evaluate your needs and do the right thing." One- and two-socket systems are the most logical choice most of the time, with two being the biggest overall winner, but there are loads of cases for four-, eight-, sixteen-, even 256-CPU systems. There is no basis for a claim as wild as single-socket systems being the only plausible socket count in the industry.

This is kinda true, but it is anything but a hard and fast rule. Every major vendor agrees that this is outdated and wrong (as a rule). It has its place, but below the 50% mark.

Actually, the majority should be just one. You want fewer cores than people think, not more. Never go above one without a reason. Often you need more than one, but not most of the time. This could never be a hard rule; it's not even a valid rule of thumb.

Just no. This is purely for reporting licensing to the VM, and there are no IT needs, concerns, or rules about it whatsoever. This is entirely a legal issue having to do with licensing, nothing else. This rule is pointless and simply wrong.

This is good to consider, but it only applies to HA clustering, not clustering in general. Your rule is too broad and doesn't take the type of cluster into consideration.

1 & 2. I was having a little humorous start with it. Some non-technical people think they have only one use for a server. I was pointing out that there is always more than one use for a server, and that virtualization makes that happen more easily and more securely.

3. I've gone through far too many projects where the person in charge underestimated cost and time. I have never once been on a project that came in on time and under budget; most projects I've been on have been both over budget and more than double the estimated time. The only exceptions are projects I ran entirely by myself, which always finished ahead of schedule and under budget, but that was just me.

4. I have never, ever seen good free software. I do use both Hyper-V Server (NOT the Hyper-V role in Windows Server) and VMware's free hypervisor at home for learning, but I would never use either for ANY business use, mostly because each requires independent management on every server, but also because they are missing so many other features required to run in a business. Hyper-V Server is useful if you can get the management working, but again, it is one-server-at-a-time management unless you have System Center Virtual Machine Manager, which costs a TON. (That rather takes the "free" out of it.) Sure, many Linux distributions are free to download and install, but the good distros have a support staff the customer has to pay for, and the free ones simply do not have the support behind them that the paid ones do. So yes, nothing comes free, not really, and you ALWAYS get what you pay for. If you think it is free in any way, you are paying elsewhere, and usually far more than you realize. And if you think the Hyper-V role belongs in this category, then you aren't counting the cost of the Windows Server license itself. That's not free.
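
To make the "one server at a time" pain concrete, here is a minimal sketch of what scripting around it looks like, assuming WinRM is enabled on each host and using the third-party pywinrm package; the host names and credentials are hypothetical placeholders, not anything from a real environment:

    # Poll each standalone Hyper-V Server host one by one; without
    # SCVMM there is no single management point to ask instead.
    import winrm  # pip install pywinrm

    HOSTS = ["hyperv01", "hyperv02", "hyperv03"]  # placeholder names

    for host in HOSTS:
        # A separate WinRM session per host: this is the pain point.
        session = winrm.Session(host, auth=("LAB\\admin", "password"),
                                transport="ntlm")
        result = session.run_ps("Get-VM | Select-Object Name, State")
        print(f"--- {host} ---")
        print(result.std_out.decode(errors="replace"))

Even then, each host is still its own island; the script just walks the islands.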

6. Yes, it can be good to keep DNS and DHCP on a DC. You have me there. Beyond that, no. Every additional role on a VM adds complexity, which brings in the law of unintended consequences: increase complexity and you get more unintended effects than intended ones, many of which you might not even see. This is a hard and fast rule in my experience. If it is not security holes, it is crashes. Keep each VM to one role and you keep both to a minimum.

7. It is a matter of balance. More spare parts around means parts can be replaced immediately when they fail. Talk to me after you have had a drive fail and then a RAID controller fail half a day later. Importing a degraded array onto a new controller is a very tricky thing, and if the RAID controller isn't of the highest quality, it can wipe the entire set. I lost a repository of test data that way and had to have USB hard drives shipped in by FedEx from the Irvine office to Colorado to repopulate it once it was fixed. It stalled testing for a day and a half.

8. An example of why it is super, duper BAD to keep storage on the same machine as the VM host: an all-in-one machine has a motherboard failure on a Friday evening and is under next-business-day coverage. There is another VM host that could take on the VMs and keep things up, but the storage is inaccessible, so all the VMs stay down until Tuesday morning when the replacement part comes in. In a pinch, with separate storage, an admin could reformat a desktop PC or two to bring up the most valuable VMs on a free hypervisor, letting things limp along while the part is on the way. There is no way that keeping the storage on the VM host is a good thing; one failed part creates an untenable situation for any business.

9. With the latest server CPUs offering as many as 28 or 32 cores, why go with multiple sockets? It increases complexity, and that means more possible points of failure. I've been in a situation where the project lead ordered huge quad-socket Xeon E5 systems with 1 TB of memory each, and then one failed and there weren't enough resources to run all the VMs. He had to leave several off while the replacement part came in. With the same budget, he could have bought twice as many CPU cores and twice as much memory in 1U Xeon E3 systems in the same rack space, and used less power in the process. Of course, in that case the cluster was populated with a couple hundred web and low-traffic DB servers with a maximum of 4 vCPUs each. I suppose if someone had a monster of a VM that actually used 256 virtual cores, more sockets might be needed, but I have never encountered such a thing. It's a myth as far as I'm concerned.
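
A quick back-of-the-envelope sketch of the failure-domain side of that argument (the host counts below are made-up illustrations, not the numbers from that project):

    # Illustrative arithmetic only: what fraction of a cluster's
    # capacity disappears when a single host fails?

    def capacity_lost(hosts: int) -> float:
        """Fraction of total capacity lost when one of `hosts` equal hosts dies."""
        return 1.0 / hosts

    # A few big quad-socket boxes vs. many small 1U boxes carrying
    # the same total core count (assumed counts, for illustration).
    print(f"4 big hosts:    one failure removes {capacity_lost(4):.0%}")
    print(f"24 small hosts: one failure removes {capacity_lost(24):.0%}")

With four big hosts, losing one takes out 25% of the cluster; with twenty-four small ones, about 4%. That is the difference between leaving VMs off and barely noticing.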

10. I did some small business support back in 1999, and I have seen how a small business owner gets when his one hard drive fails and he loses everything. Those owners lose EVERYTHING when they lose that server data: the business, their lifestyle, their home, EVERYTHING. Once a business comes to depend on a server, it will not be able to keep operating if it loses it. Don't ever, EVER sell a small business owner a server without drive redundancy, no matter how small they think their budget is. That budget will get bigger when they realize they could lose the entire business along with that server.

Oh, and RAID 5 is the DEVIL of IT. It will claim the data is safe on the remaining drives, and may even allow access to most of it, but once a drive fails and a replacement is put in to rebuild, two out of three times another failure during the rebuild causes all the data to be lost.
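
For what it's worth, the arithmetic behind that risk is easy to sketch. The odds depend on drive size and the drive's unrecoverable-read-error (URE) rate, so the figures below are common datasheet-style assumptions, not measurements from any particular array:

    # Rough model: a RAID 5 rebuild must read every surviving drive in
    # full, and one unrecoverable read error (URE) anywhere kills it.
    URE_PER_BIT = 1e-14   # typical consumer-drive datasheet spec (assumed)
    DRIVE_TB = 4          # capacity of each surviving drive (assumed)
    SURVIVORS = 3         # drives that must be read end to end

    bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8
    p_rebuild_fails = 1 - (1 - URE_PER_BIT) ** bits_read
    print(f"Chance the rebuild hits a URE: {p_rebuild_fails:.0%}")  # ~62%

With those assumptions the math lands right around the two-out-of-three figure, and bigger drives only make it worse.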

11. No, no, no, no, NO. I have had to deal with single-core VMs far too much at my current job, with Windows updates that take freaking HOURS to finish. (I have to update QA this afternoon, with six single-core VMs created by my idiot predecessor, and it takes about three and a half hours to do a month of updates on those, while the dual-core VMs on the same host cluster get them done, no problem, in less than an hour.) Windows Update is built around multithreaded systems. Single cores make them DRAAAAAAAAAAAAAAAAAAAAAG. They're HORRIBLE. Curse them ALL to Hell. I never want to deal with single-core VMs again. (But I have to this afternoon anyway.)
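
If anyone wants to hunt those down ahead of patch day, here is a minimal sketch using the pyVmomi library; the vCenter address and credentials are hypothetical placeholders:

    # List every VM configured with a single vCPU.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
    si = SmartConnect(host="vcenter.example.local", user="admin",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.config and vm.config.hardware.numCPU == 1:
                print(f"single-core VM: {vm.name}")
    finally:
        Disconnect(si)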

12. Really, is it too much to ask for people NOT to change the socket count from 1 in a VMware VM's CPU settings? I have just seen far too many problems with that, and the fix is SO simple a monkey could understand it. Besides, there is NO reason whatsoever to create multiple virtual sockets when multiple cores are needed. It is far more trouble to do it the other way; just stick to the simpler way. It is an evil setting that should just be left out, but VMware keeps it around.
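
For the record, putting such a VM back to a single socket is a tiny reconfigure. A minimal sketch, again with pyVmomi, assuming an existing connection and a `vm` object like in the previous example:

    # Present all of a VM's vCPUs as cores of ONE virtual socket.
    from pyVmomi import vim

    def one_socket_spec(total_vcpus: int) -> vim.vm.ConfigSpec:
        # Cores-per-socket equal to the vCPU count means exactly one socket.
        return vim.vm.ConfigSpec(numCPUs=total_vcpus,
                                 numCoresPerSocket=total_vcpus)

    # The VM should be powered off for a CPU topology change:
    # task = vm.ReconfigVM_Task(spec=one_socket_spec(4))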

Soft rule response:
3. What other type of cluster is there? I have not seen a reason to cluster VM hosts other than redundancy, keeping VMs up and running. Even VMware's cluster configuration is all about redundancy; there is no mention of any other purpose. (My idiot predecessor also somehow managed to cluster two VMware hosts that both have internal storage and no external storage. I don't know what he expected to do with them, but they sure couldn't do redundancy. Leaving all but the boot drives out of those two hosts would have left enough money to afford a basic iSCSI storage appliance, which could have been clustered easily, and we'd have been able to keep our systems up if one ever went down, but no...)