- 2011-06-28 17:00:01 <LowValueTarget> Anyone familiar with the VCDB database?
- 2011-06-28 17:00:11 <LowValueTarget> Trying to figure out how VPX_IP_ADDRESSES works
- 2011-06-28 17:00:14 <LowValueTarget> doesnt seem to be accurate
- 2011-06-28 17:02:08 <Veers> hrmm
- 2011-06-28 17:02:22 * tmclaugh (~anonymous@dhcp-132-183-243-190.mgh.harvard.edu) has left #vmware
- 2011-06-28 17:02:23 * Guybrush____ (~Guybrush@de-mail.selectaworld.com) has joined #vmware
- 2011-06-28 17:02:45 * Axeman has quit (Ping timeout: 255 seconds)
- 2011-06-28 17:03:05 <LowValueTarget> Veers, for the most part it is accurate, but for one or two VMs (who do have tools running) there is not a single entry
- 2011-06-28 17:04:37 <LowValueTarget> Veers, here is my sql query (in python string format) where 'vms' is a list of vms in single quotes comma separated
- 2011-06-28 17:04:38 <LowValueTarget> -
- 2011-06-28 17:04:39 <LowValueTarget> "SELECT a.IP_ADDRESS,v.NAME FROM VPX_IP_ADDRESS AS a LEFT OUTER JOIN VPXV_VMS AS v ON v.VMID = a.ENTITY_ID WHERE v.NAME IN (%s) ORDER BY v.NAME" % vms
- 2011-06-28 17:05:42 * o0oScoRcHo0o is now known as FiresOut
- 2011-06-28 17:08:59 <genec> LOJ? why?
- 2011-06-28 17:09:34 * pace_t_zulu_ has quit ()
- 2011-06-28 17:10:10 <LowValueTarget> genec: need an easy way to enumerate ip addresses for each VM
- 2011-06-28 17:10:25 <theacolyte> Why not use powercli/vcli?
- 2011-06-28 17:10:49 <synegy34> <sarcasm> cuz hackin the SQL's is safe ............ </sarcasm>
- 2011-06-28 17:11:05 * pace_t_zulu (~pace_t_zu@unaffiliated/pacetzulu/x-585030) has joined #vmware
- 2011-06-28 17:11:38 <genec> LowValueTarget: Inner Join should still work
- 2011-06-28 17:12:01 <LowValueTarget> genec: query is fine.
- 2011-06-28 17:12:06 * jrickman has quit (Quit: Leaving...)
- 2011-06-28 17:12:10 <LowValueTarget> just curious as to why some VMs are in there, some arent
- 2011-06-28 17:12:15 <LowValueTarget> all have tools running
- 2011-06-28 17:12:32 * daunce has quit (Ping timeout: 258 seconds)
- 2011-06-28 17:12:54 * pace_t_zulu has quit (Remote host closed the connection)
- 2011-06-28 17:13:33 * pace_t_zulu (~pace_t_zu@unaffiliated/pacetzulu/x-585030) has joined #vmware
- 2011-06-28 17:14:10 <genec> %s as a parameter?
- 2011-06-28 17:14:19 * elgar (~elgar@38.106.144.30) has joined #vmware
- 2011-06-28 17:14:56 <LowValueTarget> yeah ... left my python junk in there
- 2011-06-28 17:15:15 <LowValueTarget> %s would be replaced by 'VM1', 'VM2', 'VM3' etc
- 2011-06-28 17:15:33 <genec> still, suboptimal
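What genec is calling suboptimal is the %-interpolation of the VM name list straight into the SQL string. A minimal sketch of the same lookup with driver-side parameter binding instead (assuming pyodbc against the vCenter SQL Server back end; the DSN and credentials below are placeholders, not from the chat):

    import pyodbc

    vm_names = ["VM1", "VM2", "VM3"]                 # the list the %s stood in for
    placeholders = ", ".join("?" for _ in vm_names)  # only "?" markers get interpolated
    sql = (
        "SELECT a.IP_ADDRESS, v.NAME "
        "FROM VPX_IP_ADDRESS AS a "
        "LEFT OUTER JOIN VPXV_VMS AS v ON v.VMID = a.ENTITY_ID "
        "WHERE v.NAME IN (%s) ORDER BY v.NAME" % placeholders
    )

    conn = pyodbc.connect("DSN=VCDB;UID=vpxuser;PWD=changeme")  # placeholder connection
    cursor = conn.cursor()
    cursor.execute(sql, vm_names)                    # names bound by the driver
    for ip_address, name in cursor.fetchall():
        print(name, ip_address)

Incidentally, the WHERE filter on v.NAME throws away any unmatched rows anyway, which is why genec asks "LOJ? why?" earlier; a plain INNER JOIN returns the same result here.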
- 2011-06-28 17:15:53 <LowValueTarget> regardless of what it is and how you view it..... why is the VPX_IP_ADDRESSES table not accurate
- 2011-06-28 17:16:00 <LowValueTarget> that's what I'm curious about
- 2011-06-28 17:16:35 <Veers> LowValueTarget: not sure; I regularly get angry at VCD
- 2011-06-28 17:16:42 <Veers> its possible its out of sync with vcenter
- 2011-06-28 17:16:45 <Veers> which happens
- 2011-06-28 17:17:00 <Veers> I'm guessing VCD thinks it assigned a VM X when someone else actually gave it Y
- 2011-06-28 17:17:09 <genec> it's also possible that that's an old table/field that was used in an old version
- 2011-06-28 17:17:20 <Veers> there's only 1 version
- 2011-06-28 17:17:23 <Veers> so thats unlikely
- 2011-06-28 17:17:47 <genec> VPX_IP_ADDRESSES was only in v4.1?
- 2011-06-28 17:17:55 <LowValueTarget> in 4 too
- 2011-06-28 17:17:58 <Veers> oh wait
- 2011-06-28 17:18:03 <Veers> I read it as vCloud Director
- 2011-06-28 17:18:04 <Veers> ignore me
- 2011-06-28 17:18:07 <Veers> and everything I said
- 2011-06-28 17:18:16 <genec> :-P
- 2011-06-28 17:18:37 * fr0nk_ has quit (Ping timeout: 240 seconds)
- 2011-06-28 17:19:11 <genec> LowValueTarget: I've occasionally used the DB for collecting data but I've also noticed subtle changes over time that break queries with old lingering data
- 2011-06-28 17:19:48 * fiz- has quit (Ping timeout: 276 seconds)
- 2011-06-28 17:20:57 * basstscho has quit (Ping timeout: 258 seconds)
- 2011-06-28 17:21:12 <genec> LowValueTarget: there's probably a view that doles out the right data but the APIs would be better as previously mentioned.
- 2011-06-28 17:21:39 * TigerRage has quit ()
- 2011-06-28 17:21:58 * Notify: v-fox is online (1FN-GNX).
- 2011-06-28 17:27:13 <Veers> I'd do a powerCLI query myself for it
- 2011-06-28 17:27:52 <LowValueTarget> running this remotely in a linux shell
- 2011-06-28 17:28:15 <theacolyte> Still not sure why you're querying the DB directly either
- 2011-06-28 17:28:42 <LowValueTarget> theacolyte: easier, faster
- 2011-06-28 17:28:55 <LowValueTarget> theacolyte: i assume thats what powercli and vcli do anyway
- 2011-06-28 17:28:58 <theacolyte> Probably faster, but it's not accurate apparently
- 2011-06-28 17:29:00 <LowValueTarget> with more overhead
- 2011-06-28 17:29:25 <LowValueTarget> ok then, in the name of science i will run vcli to see if there is a difference
- 2011-06-28 17:29:38 <kkress> science!
- 2011-06-28 17:29:55 <theacolyte> I haven't had any accuracy issues with powercli
- 2011-06-28 17:31:25 * VSecurity (~VirtSecur@70.252.227.68) has joined #vmware
- 2011-06-28 17:31:25 * VirtSecurity has quit (Read error: Connection reset by peer)
- 2011-06-28 17:31:55 <LowValueTarget> theacolyte: i would rather do it manually than dick with windows
- 2011-06-28 17:32:02 <LowValueTarget> :)
- 2011-06-28 17:32:06 <theacolyte> SMILES
- 2011-06-28 17:32:27 <LowValueTarget> necessary evil, but evil nonetheless #no-flame
- 2011-06-28 17:32:35 <LowValueTarget> just preference
- 2011-06-28 17:32:50 <genec> LowValueTarget: vCLI is Perl
- 2011-06-28 17:32:57 <LowValueTarget> yeah
- 2011-06-28 17:32:59 <LowValueTarget> still yuck
- 2011-06-28 17:33:01 <LowValueTarget> but much better
- 2011-06-28 17:33:10 * steubens_web has quit (Ping timeout: 252 seconds)
- 2011-06-28 17:36:03 * VSecurity has quit (Ping timeout: 255 seconds)
- 2011-06-28 17:39:17 <LowValueTarget> i assume i want to use vmware-cmd with getguestinfo and reference a specific var
- 2011-06-28 17:40:01 <theacolyte> Couldn't tell you, I use Windows.
- 2011-06-28 17:41:43 * pickett has quit (Read error: Connection reset by peer)
- 2011-06-28 17:41:58 * elgar has quit (Ping timeout: 250 seconds)
- 2011-06-28 17:42:22 * Spec has quit (Quit: drop it)
- 2011-06-28 17:44:16 * Spec (~nwheeler@unaffiliated/spec) has joined #vmware
- 2011-06-28 17:44:39 * pickett (~pickett@203-206-47-214.dyn.iinet.net.au) has joined #vmware
- 2011-06-28 17:46:54 <genec> VirtualMachineGuestSummary.ipAddress perhaps?
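For the API route suggested above (PowerCLI/vCLI rather than raw VCDB queries), here is a minimal Python sketch using pyVmomi, a later official Python SDK that was not part of this conversation; the host and credentials are placeholders and certificate handling is omitted. It reads vm.summary.guest, the VirtualMachineGuestSummary genec mentions, whose ipAddress field is the primary address reported by VMware Tools:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter and credentials
                      user="administrator", pwd="changeme")
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # ipAddress is only populated when VMware Tools reports an address
        print(vm.name, vm.summary.guest.ipAddress)
    view.Destroy()
    Disconnect(si)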
- 2011-06-28 17:48:11 * minimoose has quit (Quit: minimoose)
- 2011-06-28 17:55:24 * suprsonic (~suprsonic@66.170.2.190) has joined #vmware
- 2011-06-28 17:55:46 <suprsonic> how many datastores can be registered with vcenter 4.1?
- 2011-06-28 17:56:41 <theacolyte> I'd like to say 255
- 2011-06-28 17:56:48 <theacolyte> you can check on the config maximums sheet
- 2011-06-28 17:56:58 <Alowishus> well there's a 256 LUN per ESX server limit, plus 64 NFS datastores
- 2011-06-28 17:57:02 <Veers> on a vCenter or an individual ESX server?
- 2011-06-28 17:57:11 <Alowishus> I don't think vCenter itself imposes anything, but multiply the number of your hosts * those numbers above and you have a pretty good idea
- 2011-06-28 17:58:13 <Veers> no specific limit; one could assume no more than 102000 though
- 2011-06-28 17:58:17 <suprsonic> Number of datastores registered in vCenter Server: 100
- 2011-06-28 17:58:27 <suprsonic> but thats under vSphere Storage Management Initiative - Specification (SMI-S)
- 2011-06-28 17:58:28 <Veers> since you have 400 hosts max in a VC; and 255 usable datastores (#256 being the boot device)
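For what it's worth, the 102000 ceiling Veers mentions is just those two vSphere 4.1 maximums multiplied out: 400 hosts per vCenter * 255 usable datastores per host = 102,000.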
- 2011-06-28 17:58:45 <suprsonic> whatever SMI is
- 2011-06-28 17:58:53 <Veers> suprsonic: its unlikely you need to exceed even 100 though
- 2011-06-28 17:59:15 * Spec has quit (Quit: Leaving)
- 2011-06-28 17:59:26 <suprsonic> well, there's a job I'm going to go for
- 2011-06-28 17:59:33 <suprsonic> and they said they create a LUN for each vm
- 2011-06-28 17:59:38 <suprsonic> and I didn't think that was the best idea
- 2011-06-28 17:59:38 <theacolyte> hahahadhha
- 2011-06-28 17:59:49 <theacolyte> Did they mention why?
- 2011-06-28 18:00:06 <Veers> thats kooky
- 2011-06-28 18:00:06 <suprsonic> they said thats what Compellent recommended
- 2011-06-28 18:00:13 <Veers> they misheard
- 2011-06-28 18:00:14 <Kevin`> it's a great idea in general, but it doesn't work particularly well with some vmware stuff
- 2011-06-28 18:00:23 <Veers> its a terrible idea in general
- 2011-06-28 18:00:23 <theacolyte> They definitely got that wrong
- 2011-06-28 18:00:31 <suprsonic> thats what I said
- 2011-06-28 18:00:37 <Veers> I should have 1500 LUNs coming off my storage array; 1 per VM?
- 2011-06-28 18:00:40 <theacolyte> I have a Compellent and can tell you with absolute certainty that's not what they recommend
- 2011-06-28 18:00:58 <Kevin`> Veers: why not? provided you have proper management of it
- 2011-06-28 18:01:32 <theacolyte> You aren't being serious are you?
- 2011-06-28 18:01:35 <Kevin`> Veers: at least you don't have 1500 files spread across 100 proprietary filesystem images stuck on luns
- 2011-06-28 18:01:47 <theacolyte> So instead you'd rather manage thousands of LUNs?
- 2011-06-28 18:02:14 <Kevin`> I already have hundreds of them on my current system. at home.
- 2011-06-28 18:02:17 <Kevin`> it's not really that bad.
- 2011-06-28 18:02:17 <suprsonic> yeah, Im glad I didn't take the gig
- 2011-06-28 18:02:28 <suprsonic> the overhead to manage storage must be a bitch
- 2011-06-28 18:02:42 <theacolyte> So what are you gaining exactly by one VM per LUN?
- 2011-06-28 18:03:00 <kkress> duh. More LUNs
- 2011-06-28 18:03:01 <suprsonic> crap I have to run, I'll get more information tomorrow
- 2011-06-28 18:03:27 <theacolyte> The only thing I can think of is snapshots may be a little more useful, but.....
- 2011-06-28 18:03:30 <suprsonic> theacolyte: tiered storage and SAN-level snapshots of the vms?
- 2011-06-28 18:03:40 <theacolyte> suprsonic: you can get that without doing 1 LUN per VM
- 2011-06-28 18:03:45 <suprsonic> agreed
- 2011-06-28 18:03:47 <Veers> Kevin`: 1500 files I can deal with
- 2011-06-28 18:03:58 <suprsonic> he said they dont trust VMware's snapshots
- 2011-06-28 18:04:02 <Kevin`> theacolyte: easier storage management (provided your storage system is easy to manage and automated together with the vm stuff), easier migration and recovery. doesn't vmware kind of mess up parallel IO if you add a bunch of luns to it, making this pointless in context?
- 2011-06-28 18:04:03 <Veers> yeah its easy for you yourself to manage it because you control everything
- 2011-06-28 18:04:56 <Veers> one of our environments here has 8000 virtual machines; that'd be 8000 LUNs with 8000 separate LUN queues to manage and we'd be hitting configuration support maximums on a LOT of vendors' arrays
- 2011-06-28 18:05:50 <theacolyte> Well
- 2011-06-28 18:06:01 <theacolyte> You could just get a separate storage array for each VM
- 2011-06-28 18:06:04 <theacolyte> Sorry, sorry
- 2011-06-28 18:06:12 <Veers> now when I want to do storage replication for DR for example; I need to set up say 1000 relationships
- 2011-06-28 18:06:16 <Veers> instead of say 10
- 2011-06-28 18:06:26 <Veers> which means I buy more storage controllers to handle the extra overhead
- 2011-06-28 18:06:43 <Veers> and dealing with array snapshots with over 1000 devices would probably be nothing short of painful
- 2011-06-28 18:06:48 <Alowishus> but you could be sure you're never DR replicating any more than absolutely necessary!
- 2011-06-28 18:07:20 <Veers> and now to provision a VM I need to touch the storage which means I increase the odds of something going wrong
- 2011-06-28 18:07:30 <Veers> as opposed to the VMware admin just creating a file
- 2011-06-28 18:07:40 <Veers> or Xen, or Hyper-V
- 2011-06-28 18:07:58 <Veers> a lot of enterprises avoided hyper-V because of that whole "1 VM per LUN if you want quick migrate functionality" issue
- 2011-06-28 18:08:33 <Veers> also a lot of HBAs max out at 128 or 256 devices; so a cluster would effectively be limited to a max of that many virtual machines
- 2011-06-28 18:09:04 * soultekkie has quit (Quit: Ex-Chat)
- 2011-06-28 18:09:36 <Kevin`> doesn't hyper-v have some strange limited clustered ntfs support specifically for doing that?
- 2011-06-28 18:10:08 * of-logic-carl1 has quit (Quit: Leaving.)
- 2011-06-28 18:10:51 <Veers> they do now
- 2011-06-28 18:10:56 <Veers> when it came out it didn't
- 2011-06-28 18:11:05 <Veers> they're inching closer and closer to "good enough"
- 2011-06-28 18:16:31 * VirtSecurity (~VirtSecur@cpe-72-177-136-197.satx.res.rr.com) has joined #vmware
- 2011-06-28 18:16:42 * VirtSecurity has quit (Client Quit)
- 2011-06-28 18:19:03 <Veers> basically without VMFS my next choice is NFS
- 2011-06-28 18:19:57 <Veers> with FAST I expect the idea of say 10TB LUNs to be less painful to a lot of people
- 2011-06-28 18:20:45 * fester has quit (Ping timeout: 258 seconds)
- 2011-06-28 18:20:55 <Veers> hahha
- 2011-06-28 18:21:02 * Eitan (~Eitan@adsl-99-22-192-148.dsl.lsan03.sbcglobal.net) has joined #vmware
- 2011-06-28 18:21:02 <vader--> do you think a lot of people are leaning towards Xen over View for VDI?
- 2011-06-28 18:21:10 * benkevan has quit (Quit: leaving)
- 2011-06-28 18:21:39 * dspDrew has quit (Quit: Miranda IM on Steroids)
- 2011-06-28 18:22:26 * RaycisCharles has quit (Quit: Later.)
- 2011-06-28 18:22:34 <Veers> I see a lot of people with existing Citrix stuff that stick with it
- 2011-06-28 18:22:43 <Veers> I'd recommend it over View if you have a lot of remote sites
- 2011-06-28 18:23:03 <Veers> but there is a little more complexity with Citrix vs View
- 2011-06-28 18:24:23 * Agiofws (~Agiofws@athedsl-117769.home.otenet.gr) has joined #vmware
- 2011-06-28 18:29:19 * Stenbryggen (~Stenbrygg@78.156.98.103) has joined #vmware
- 2011-06-28 18:29:38 <Eitan> so ive been put in charge of managing some vms i did not set up... a few of the guest computers get terribly slow internet connection, like 15-20k per sec. I set up a brand new one, a centos box, to see if i still got the issue. and i do, anybody ever heard of this on ESXi 4.1?