  1. 2011-06-28 17:00:01 <LowValueTarget> Anyone familiar with the VCDB database?
  2. 2011-06-28 17:00:11 <LowValueTarget> Trying to figure out how VPX_IP_ADDRESSES works
  3. 2011-06-28 17:00:14 <LowValueTarget> doesnt seem to be accurate
  4. 2011-06-28 17:02:08 <Veers> hrmm
  5. 2011-06-28 17:02:22 * tmclaugh (~anonymous@dhcp-132-183-243-190.mgh.harvard.edu) has left #vmware
  6. 2011-06-28 17:02:23 * Guybrush____ (~Guybrush@de-mail.selectaworld.com) has joined #vmware
  7. 2011-06-28 17:02:45 * Axeman has quit (Ping timeout: 255 seconds)
  8. 2011-06-28 17:03:05 <LowValueTarget> Veers, for the most part it is accurate, but for one or two VMs (who do have tools running) there is not a single entry
  9. 2011-06-28 17:04:37 <LowValueTarget> Veers, here is my sql query (in python string format) where 'vms' is a list of vms in single quotes comma separated
  10. 2011-06-28 17:04:38 <LowValueTarget> -
  11. 2011-06-28 17:04:39 <LowValueTarget> "SELECT a.IP_ADDRESS,v.NAME FROM VPX_IP_ADDRESS AS a LEFT OUTER JOIN VPXV_VMS AS v ON v.VMID = a.ENTITY_ID WHERE v.NAME IN (%s) ORDER BY v.NAME" % vms
  12. 2011-06-28 17:05:42 * o0oScoRcHo0o is now known as FiresOut
  13. 2011-06-28 17:08:59 <genec> LOJ? why?
  14. 2011-06-28 17:09:34 * pace_t_zulu_ has quit ()
  15. 2011-06-28 17:10:10 <LowValueTarget> genec: need an easy way to enumerate ip addresses for each VM
  16. 2011-06-28 17:10:25 <theacolyte> Why not use powercli/vcli?
  17. 2011-06-28 17:10:49 <synegy34> <sarcasm> cuz hackin the SQL's is safe ............ </sarcasm>
  18. 2011-06-28 17:11:05 * pace_t_zulu (~pace_t_zu@unaffiliated/pacetzulu/x-585030) has joined #vmware
  19. 2011-06-28 17:11:38 <genec> LowValueTarget: Inner Join should still work
  20. 2011-06-28 17:12:01 <LowValueTarget> genec: query is fine.
  21. 2011-06-28 17:12:06 * jrickman has quit (Quit: Leaving...)
  22. 2011-06-28 17:12:10 <LowValueTarget> just curious as to why some VMs are in there, some arent
  23. 2011-06-28 17:12:15 <LowValueTarget> all have tools running
  24. 2011-06-28 17:12:32 * daunce has quit (Ping timeout: 258 seconds)
  25. 2011-06-28 17:12:54 * pace_t_zulu has quit (Remote host closed the connection)
  26. 2011-06-28 17:13:33 * pace_t_zulu (~pace_t_zu@unaffiliated/pacetzulu/x-585030) has joined #vmware
  27. 2011-06-28 17:14:10 <genec> %s as a parameter?
  28. 2011-06-28 17:14:19 * elgar (~elgar@38.106.144.30) has joined #vmware
  29. 2011-06-28 17:14:56 <LowValueTarget> yeah ... left my python junk in there
  30. 2011-06-28 17:15:15 <LowValueTarget> %s would be replaced by 'VM1', 'VM2', 'VM3' etc
  31. 2011-06-28 17:15:33 <genec> still, suboptimal
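
(For reference: the query above in parameterized form. This is only a sketch, assuming pyodbc against a SQL Server-backed VCDB; the DSN, credentials, and vm_names list are hypothetical placeholders, and as the channel points out the supported route is the API rather than the database.)

    # Sketch: enumerate guest IPs from the VCDB, binding the VM names as
    # parameters instead of interpolating them with Python's % operator.
    import pyodbc

    vm_names = ['VM1', 'VM2', 'VM3']          # hypothetical VM names

    conn = pyodbc.connect('DSN=VCDB;UID=vpxuser;PWD=secret')   # placeholder DSN
    cursor = conn.cursor()

    # One "?" placeholder per VM name, so the driver handles quoting/escaping.
    placeholders = ', '.join('?' for _ in vm_names)
    sql = ("SELECT a.IP_ADDRESS, v.NAME "
           "FROM VPX_IP_ADDRESS AS a "
           "LEFT OUTER JOIN VPXV_VMS AS v ON v.VMID = a.ENTITY_ID "
           "WHERE v.NAME IN (%s) ORDER BY v.NAME" % placeholders)

    for ip_address, name in cursor.execute(sql, vm_names):
        print(name, ip_address)
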
  32. 2011-06-28 17:15:53 <LowValueTarget> regardless of what it is and how you view it..... why is the VPX_IP_ADDRESSES table not accurate
  33. 2011-06-28 17:16:00 <LowValueTarget> that's what I'm curious about
  34. 2011-06-28 17:16:35 <Veers> LowValueTarget: not sure; I regularly get angry at VCD
  35. 2011-06-28 17:16:42 <Veers> its possible its out of sync with vcenter
  36. 2011-06-28 17:16:45 <Veers> which happens
  37. 2011-06-28 17:17:00 <Veers> I'm guessing VCD thinks it assigned a VM X when someone else actually gave it Y
  38. 2011-06-28 17:17:09 <genec> it's also possible that that's an old table/field that was used in an old version
  39. 2011-06-28 17:17:20 <Veers> there's only 1 version
  40. 2011-06-28 17:17:23 <Veers> so thats unlikely
  41. 2011-06-28 17:17:47 <genec> VPX_IP_ADDRESSES was only in v4.1?
  42. 2011-06-28 17:17:55 <LowValueTarget> in 4 too
  43. 2011-06-28 17:17:58 <Veers> oh wait
  44. 2011-06-28 17:18:03 <Veers> I read as vCLoud Director
  45. 2011-06-28 17:18:04 <Veers> ignore me
  46. 2011-06-28 17:18:07 <Veers> and everything I said
  47. 2011-06-28 17:18:16 <genec> :-P
  48. 2011-06-28 17:18:37 * fr0nk_ has quit (Ping timeout: 240 seconds)
  49. 2011-06-28 17:19:11 <genec> LowValueTarget: I've occasionally used the DB for collecting data but I've also noticed subtle changes over time that break queries with old lingering data
  50. 2011-06-28 17:19:48 * fiz- has quit (Ping timeout: 276 seconds)
  51. 2011-06-28 17:20:57 * basstscho has quit (Ping timeout: 258 seconds)
  52. 2011-06-28 17:21:12 <genec> LowValueTarget: there's probably a view that doles out the right data but the APIs would be better as previously mentioned.
  53. 2011-06-28 17:21:39 * TigerRage has quit ()
  54. 2011-06-28 17:21:58 * Notify: v-fox is online (1FN-GNX).
  55. 2011-06-28 17:27:13 <Veers> I'd do a powerCLI query myself for it
  56. 2011-06-28 17:27:52 <LowValueTarget> running this remotely in a linux shell
  57. 2011-06-28 17:28:15 <theacolyte> Still not sure why you're querying the DB directly either
  58. 2011-06-28 17:28:42 <LowValueTarget> theacolyte: easier, faster
  59. 2011-06-28 17:28:55 <LowValueTarget> theacolyte: i assume thats what powercli and vcli do anyway
  60. 2011-06-28 17:28:58 <theacolyte> Probably faster, but it's not accurate apparently
  61. 2011-06-28 17:29:00 <LowValueTarget> with more overhead
  62. 2011-06-28 17:29:25 <LowValueTarget> ok then, in the name of science i will run vcli to see if there is a difference
  63. 2011-06-28 17:29:38 <kkress> science!
  64. 2011-06-28 17:29:55 <theacolyte> I haven't had any accuracy issues with powercli
  65. 2011-06-28 17:31:25 * VSecurity (~VirtSecur@70.252.227.68) has joined #vmware
  66. 2011-06-28 17:31:25 * VirtSecurity has quit (Read error: Connection reset by peer)
  67. 2011-06-28 17:31:55 <LowValueTarget> theacolyte: i would rather do it manually than dick with windows
  68. 2011-06-28 17:32:02 <LowValueTarget> :)
  69. 2011-06-28 17:32:06 <theacolyte> SMILES
  70. 2011-06-28 17:32:27 <LowValueTarget> necessary evil, but evil nonetheless #no-flame
  71. 2011-06-28 17:32:35 <LowValueTarget> just preference
  72. 2011-06-28 17:32:50 <genec> LowValueTarget: vCLI is Perl
  73. 2011-06-28 17:32:57 <LowValueTarget> yeah
  74. 2011-06-28 17:32:59 <LowValueTarget> still yuck
  75. 2011-06-28 17:33:01 <LowValueTarget> but much better
  76. 2011-06-28 17:33:10 * steubens_web has quit (Ping timeout: 252 seconds)
  77. 2011-06-28 17:36:03 * VSecurity has quit (Ping timeout: 255 seconds)
  78. 2011-06-28 17:39:17 <LowValueTarget> i assume i want to use vmware-cmd with getguestinfo and reference a specific var
  79. 2011-06-28 17:40:01 <theacolyte> Couldn't tell you, I use Windows.
  80. 2011-06-28 17:41:43 * pickett has quit (Read error: Connection reset by peer)
  81. 2011-06-28 17:41:58 * elgar has quit (Ping timeout: 250 seconds)
  82. 2011-06-28 17:42:22 * Spec has quit (Quit: drop it)
  83. 2011-06-28 17:44:16 * Spec (~nwheeler@unaffiliated/spec) has joined #vmware
  84. 2011-06-28 17:44:39 * pickett (~pickett@203-206-47-214.dyn.iinet.net.au) has joined #vmware
  85. 2011-06-28 17:46:54 <genec> VirtualMachineGuestSummary.ipaddress perhaps?
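
(genec's VirtualMachineGuestSummary.ipAddress is the field the supported APIs expose; here is a minimal sketch of reading it from a Linux box with pyVmomi, VMware's Python SDK, rather than PowerCLI. The host name, credentials, and unverified SSL context are placeholder assumptions.)

    # Sketch: read guest IPs through the vSphere API instead of the VCDB.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Walk every VM in the inventory and print what the tools report.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # summary.guest.ipAddress is the VirtualMachineGuestSummary field
        # mentioned above; guest.net lists every NIC the tools see.
        primary = vm.summary.guest.ipAddress
        all_ips = [ip for nic in vm.guest.net for ip in (nic.ipAddress or [])]
        print(vm.name, primary, all_ips)

    view.Destroy()
    Disconnect(si)
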
  86. 2011-06-28 17:48:11 * minimoose has quit (Quit: minimoose)
  87. 2011-06-28 17:55:24 * suprsonic (~suprsonic@66.170.2.190) has joined #vmware
  88. 2011-06-28 17:55:46 <suprsonic> how many datastores can be registered with vcenter 4.1?
  89. 2011-06-28 17:56:41 <theacolyte> I'd like to say 255
  90. 2011-06-28 17:56:48 <theacolyte> you can check on the config maximums sheet
  91. 2011-06-28 17:56:58 <Alowishus> well there's a 256 LUN per ESX server limit, plus 64 NFS datastores
  92. 2011-06-28 17:57:02 <Veers> on a vCenter or an individual ESX server?
  93. 2011-06-28 17:57:11 <Alowishus> I don't think vCenter itself imposes anything, but multiply the number of your hosts by those numbers above and you have a pretty good idea
  94. 2011-06-28 17:58:13 <Veers> no specific limit; one could assume no more than 102000 though
  95. 2011-06-28 17:58:17 <suprsonic> Number of datastores registered in vCenter Server 100
  96. 2011-06-28 17:58:27 <suprsonic> but thats under vSphere Storage Management Initiative - Specification (SMI-S)
  97. 2011-06-28 17:58:28 <Veers> since you have 400 hosts max in a VC; and 255 usable datastores (#256 being the boot device)
  98. 2011-06-28 17:58:45 <suprsonic> whatever SMI is
  99. 2011-06-28 17:58:53 <Veers> suprsonic: its unlikely you need to exceed even 100 though
  100. 2011-06-28 17:59:15 * Spec has quit (Quit: Leaving)
  101. 2011-06-28 17:59:26 <suprsonic> well, there's a job I'm going to go for
  102. 2011-06-28 17:59:33 <suprsonic> and they said they create a LUN for each vm
  103. 2011-06-28 17:59:38 <suprsonic> and I didn't think that was the best idea
  104. 2011-06-28 17:59:38 <theacolyte> hahahadhha
  105. 2011-06-28 17:59:49 <theacolyte> Did they mention why?
  106. 2011-06-28 18:00:06 <Veers> thats kooky
  107. 2011-06-28 18:00:06 <suprsonic> they said thats what Compellent recommended
  108. 2011-06-28 18:00:13 <Veers> they misheard
  109. 2011-06-28 18:00:14 <Kevin`> it's a great idea in general, but it doesn't work particularly well with some vmware stuff
  110. 2011-06-28 18:00:23 <Veers> its a terrible idea in general
  111. 2011-06-28 18:00:23 <theacolyte> They definitely got that wrong
  112. 2011-06-28 18:00:31 <suprsonic> thats what I said
  113. 2011-06-28 18:00:37 <Veers> I should have 1500 LUNs coming off my storage array; 1 per VM?
  114. 2011-06-28 18:00:40 <theacolyte> I have a Compellent and can tell you with absolute certainty that's not what they recommend
  115. 2011-06-28 18:00:58 <Kevin`> Veers: why not? provided you have proper management of it
  116. 2011-06-28 18:01:32 <theacolyte> You aren't being serious are you?
  117. 2011-06-28 18:01:35 <Kevin`> Veers: at least you don't have 1500 files spread across 100 proprietary filesystem images stuck on luns
  118. 2011-06-28 18:01:47 <theacolyte> So instead you'd rather manage thousands of LUNs?
  119. 2011-06-28 18:02:14 <Kevin`> I already have hundreds of them on my current system. at home.
  120. 2011-06-28 18:02:17 <Kevin`> it's not really that bad.
  121. 2011-06-28 18:02:17 <suprsonic> yeah, Im glad I didn't take the gig
  122. 2011-06-28 18:02:28 <suprsonic> the overhead to manage storage must be a bitch
  123. 2011-06-28 18:02:42 <theacolyte> So what are you gaining exactly by one VM per LUN?
  124. 2011-06-28 18:03:00 <kkress> duh. More LUNs
  125. 2011-06-28 18:03:01 <suprsonic> crap I have to run, I'll get more information tomorrow
  126. 2011-06-28 18:03:27 <theacolyte> The only thing I can think of is snapshots may be a little more useful, but.....
  127. 2011-06-28 18:03:30 <suprsonic> theacolyte tiered storage and SAN level snapshots of the vms?
  128. 2011-06-28 18:03:40 <theacolyte> suprsonic: you can get that without doing 1 LUN per VM
  129. 2011-06-28 18:03:45 <suprsonic> agreed
  130. 2011-06-28 18:03:47 <Veers> Kevin`: 1500 files I can deal with
  131. 2011-06-28 18:03:58 <suprsonic> he said they dont trust VMware's snapshots
  132. 2011-06-28 18:04:02 <Kevin`> theacolyte: easier storage management (provided your storage system is easy to manage and automated together with the vm stuff), easier migration and recovery. doesn't vmware kind of mess up parallel IO if you add a bunch of luns to it, making this pointless in context?
  133. 2011-06-28 18:04:03 <Veers> yeah its easy for you yourself to manage it because you control everything
  134. 2011-06-28 18:04:56 <Veers> one of our environments here has 8000 virtual machines; that'd be 8000 LUNs with 8000 separate LUN queues to manage and we'd be hitting configuration support maximums on a LOT of vendors' arrays
  135. 2011-06-28 18:05:50 <theacolyte> Well
  136. 2011-06-28 18:06:01 <theacolyte> You could just get a separate storage array for each VM
  137. 2011-06-28 18:06:04 <theacolyte> Sorry, sorry
  138. 2011-06-28 18:06:12 <Veers> now when I want to do storage replication for DR for example; I need to setup say 1000 relationships
  139. 2011-06-28 18:06:16 <Veers> instead of say 10
  140. 2011-06-28 18:06:26 <Veers> which means I buy more storage controllers to handle the extra overhead
  141. 2011-06-28 18:06:43 <Veers> and dealing with array snapshots with over 1000 devices would probably be nothing short of painful
  142. 2011-06-28 18:06:48 <Alowishus> but you could be sure you're never DR replicating any more than absolutely necessary!
  143. 2011-06-28 18:07:20 <Veers> and now to provision a VM I need to touch the storage which means I increase the odds of something going wrong
  144. 2011-06-28 18:07:30 <Veers> as opposed to the VMware admin just creating a file
  145. 2011-06-28 18:07:40 <Veers> or Xen, or Hyper-V
  146. 2011-06-28 18:07:58 <Veers> a lot of enterprises avoided hyper-V because of that whole 1 VM per LUN if you want quick migrate functionality issue
  147. 2011-06-28 18:08:33 <Veers> also a lot of HBAs max out at 128 or 256 devices; so a cluster would effectively be limited to a max of that many virtual machines
  148. 2011-06-28 18:09:04 * soultekkie has quit (Quit: Ex-Chat)
  149. 2011-06-28 18:09:36 <Kevin`> doesn't hyper-v have some strange limited clustered ntfs support specifically for doing that?
  150. 2011-06-28 18:10:08 * of-logic-carl1 has quit (Quit: Leaving.)
  151. 2011-06-28 18:10:51 <Veers> they do now
  152. 2011-06-28 18:10:56 <Veers> when it came out it didn't
  153. 2011-06-28 18:11:05 <Veers> they're inching closer and closer to "good enough"
  154. 2011-06-28 18:16:31 * VirtSecurity (~VirtSecur@cpe-72-177-136-197.satx.res.rr.com) has joined #vmware
  155. 2011-06-28 18:16:42 * VirtSecurity has quit (Client Quit)
  156. 2011-06-28 18:19:03 <Veers> basically without VMFS my next choice is NFS
  157. 2011-06-28 18:19:57 <Veers> with FAST I expect the idea of say 10TB LUNs to be less painful to a lot of people
  158. 2011-06-28 18:20:45 * fester has quit (Ping timeout: 258 seconds)
  159. 2011-06-28 18:20:55 <Veers> hahha
  160. 2011-06-28 18:21:02 * Eitan (~Eitan@adsl-99-22-192-148.dsl.lsan03.sbcglobal.net) has joined #vmware
  161. 2011-06-28 18:21:02 <vader--> do you think alot of people are leaning towards XEN over View for VDI?
  162. 2011-06-28 18:21:10 * benkevan has quit (Quit: leaving)
  163. 2011-06-28 18:21:39 * dspDrew has quit (Quit: Miranda IM on Steroids)
  164. 2011-06-28 18:22:26 * RaycisCharles has quit (Quit: Later.)
  165. 2011-06-28 18:22:34 <Veers> I see a lot of people with existing citrix stuff that stick with it
  166. 2011-06-28 18:22:43 <Veers> I'd recommend it over View if you have a lot of remote sites
  167. 2011-06-28 18:23:03 <Veers> but there is a little more complexity with Citrix vs View
  168. 2011-06-28 18:24:23 * Agiofws (~Agiofws@athedsl-117769.home.otenet.gr) has joined #vmware
  169. 2011-06-28 18:29:19 * Stenbryggen (~Stenbrygg@78.156.98.103) has joined #vmware
  170. 2011-06-28 18:29:38 <Eitan> so ive been put in charge of managing some vms i did not set up... a few of the guest computers get terribly slow internet connection, like 15-20k per sec. I set up a brand new one, a centos box to see if i still got the issue. and i do, anybody ever heard of this on esxi4.1 ?