[quote="bgstaal, post:1, topic:15271"]
I have read several posts describing issues with connecting multiple kinects to one computer and it seems that the number of kinects that can be connected is directly linked to the number of internal usb buses on the system. I have seen claims that one bus can manage two kinect streams, while others claim you will need a dedicated bus pr. unit. Can anyone confirm?
[/quote]

Yes, each Kinect sensor requires its own USB bus. The OS reserves 10% of the USB 2.0 bandwidth, and each Kinect reserves roughly 45% of the available bus bandwidth, regardless of which streams are in use. Two Kinects on one bus would need 2 x 45% + 10% = 100%, leaving no headroom, which is why the second one fails. You can share the bus with a few very low-overhead devices such as a keyboard or mouse, but sharing with a webcam or USB drive is not recommended.

For multiple Kinect sensors on one PC, you do need multiple USB buses. Most laptops expose a single external bus. Most desktop PCs expose two: one in front and one in back. If you want more, you need a PC with expansion slots. (A quick way to check what you have is sketched below.)
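If you're not sure how many USB host controllers your machine exposes, one quick check on Windows is a WMI query. A minimal C# sketch, assuming a reference to System.Management.dll; note that one host controller doesn't always mean one independent bandwidth domain, so treat the count as a starting point and verify with actual sensors:

```csharp
using System;
using System.Management;

class UsbControllerList
{
    static void Main()
    {
        // Win32_USBController returns one entry per USB host controller.
        var searcher = new ManagementObjectSearcher(
            "SELECT Name FROM Win32_USBController");
        foreach (ManagementObject controller in searcher.Get())
        {
            Console.WriteLine(controller["Name"]);
        }
    }
}
```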
For the Liberty University Video Wall project (featured here: http://blogs.msdn.com/b/kinectforwindows/archive/2014/01/27/kinect-for-windows-sets-tone-at-high-tech-library.aspx; video is on its way), we built the system to use a single very powerful computer running four K4W 1.0 sensors. In the end, we only used three sensors, but that was just to optimize the overlap of the fields of view. The application and K4W SDK worked with four Kinects simultaneously at full frame rate just fine, with one caveat.

We added three "Rocketfish USB 3.0 PCI Express Card RF-P2USB3" cards, which use the Renesas USB chipset. We plugged one K4W sensor into each of these three cards, plus one into one of the built-in ports. For our particular machine (an HP Z820), we had to experiment with which of the built-in ports would work properly. When running all four Kinects at once, one particular built-in port caused all the Kinects to be slow, another caused just that one Kinect to be slow, and a third worked. I suspect something weird with the northbridge and interrupt handling. Either way, in the end we could have used four.

The caveat I mentioned was that in testing, the machine would sometimes hang on shutdown. I wasn't able to isolate it to a USB driver or to a particular USB card or port before it resolved itself, but it could have been related.
[quote="bgstaal, post:1, topic:15271"]
To solve this issue we are looking at getting a quad-bus usb card like this: http://www.unibrain.com/products/dual-bus-usb-3-0-pci-express-adapter/

It is a USB 3.0 card but it claims to have "Legacy connectivity support for USB 2.0". This might or might not be an issue. Does anyone have an experience with this? If some of you have a working set up with four kinects on a windows machine, do you mind sharing some details on the hardware-setup?
[/quote]

Go ahead and try this (Renesas chipset) card, but I'm 95% sure that it will still only allow you to use one K4W sensor per card. "But why?" you object. "USB 3.0 has much more bandwidth, and this fancy card has four USB 3.0 channels!" Well, yes, but K4W 1.0 is a USB 2.0 device, and these cards implement USB 2.0 support by adding a single USB 2.0 chip somewhere, with the same USB 2.0 bandwidth limitations. A manufacturer could make a card with multiple USB 2.0 channels/buses, with a full Transaction Translator (TT) per port, but I haven't seen one.
[quote="bgstaal, post:1, topic:15271"]
I'm also missing a method for converting from depth-map coordinates to 3d coordinates like the getWorldCoordinateAt() method from ofxKinect. Is there an equivalent function in the official kinect SDK or do we need to implement this ourselves?
[/quote]

If you're using the Kinect SDK, you'll want to use CoordinateMapper:
C#:
http://msdn.microsoft.com/en-us/library/microsoft.kinect.coordinatemapper_members.aspx
C++:
http://msdn.microsoft.com/en-us/library/nuisensor.inuicoordinatemapper.aspx
It provides methods to transform single points as well as entire frames (much more efficient). You can transform color space to depth space and back, and depth space to skeleton (real-world meters) space and back. It works out of the box using the factory calibration.
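For example, a minimal C# sketch (SDK 1.6+) mapping one depth pixel into skeleton space; it assumes `sensor` is an already-started KinectSensor and `depthMm` is the depth value at pixel (x, y) in millimeters:

```csharp
using Microsoft.Kinect;

static class DepthToWorld
{
    // Maps a single depth pixel to skeleton (real-world, meters) space
    // using the factory calibration baked into the sensor.
    public static SkeletonPoint DepthPixelToWorld(
        KinectSensor sensor, int x, int y, int depthMm)
    {
        var depthPoint = new DepthImagePoint { X = x, Y = y, Depth = depthMm };
        return sensor.CoordinateMapper.MapDepthPointToSkeletonPoint(
            DepthImageFormat.Resolution640x480Fps30, depthPoint);
    }
}
```

For whole frames, MapDepthFrameToSkeletonFrame does the same thing for every pixel in one call, which is the efficient path I mentioned.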

[quote="bgstaal, post:1, topic:15271"]
Do we physically measure and input the position and angles of the kinects in relation to each other? Do we do some sort of reference object calibration routine (checkerboard or similar?). Has anyone tried a ICP approach?
[/quote]

Don't bother physically measuring the sensors if you need things to actually line up. Since the Kinects will be opposed, you could use a calibration object, such as a large cube with a different AR marker on each face. Write something (using ARToolKit or whatever is more popular now) to recover the 6-DOF transformation matrix when it sees a marker; you'll know the scale from physically measuring the cube. When Kinect 1 sees Face 1 and Kinect 2 sees Face 2, you can then get the Kinect 1 -> Kinect 2 transformation matrix by combining the individual measurements with the physical offset of Face 1 to Face 2. Of course, you'll want to do this measurement multiple times in different locations and average the results. Chessboards could also work, but I suggested AR markers because they can be identified automatically, which makes things easier on site.
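To make the matrix bookkeeping concrete, here's a rough C# sketch under stated assumptions: column-vector convention, 4x4 rigid transforms, `m1` = Face 1's pose as seen by Kinect 1, `m2` = Face 2's pose as seen by Kinect 2, and `f` = Face 2's pose expressed in Face 1's frame (known from measuring the cube). All names and helpers here are hypothetical, not from any library:

```csharp
static class CubeCalibration
{
    // 4x4 matrix product (column-vector convention: p' = M * p).
    public static double[,] Multiply(double[,] a, double[,] b)
    {
        var r = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i, j] += a[i, k] * b[k, j];
        return r;
    }

    // Inverse of a rigid transform [R|t] is [R^T | -R^T t].
    public static double[,] InvertRigid(double[,] m)
    {
        var r = new double[4, 4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i, j] = m[j, i];               // transpose the rotation
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i, 3] -= r[i, j] * m[j, 3];    // -R^T * t
        r[3, 3] = 1.0;
        return r;
    }

    // Kinect 1 -> Kinect 2: p_k2 = m2 * inv(f) * inv(m1) * p_k1.
    public static double[,] Kinect1ToKinect2(double[,] m1, double[,] m2, double[,] f)
    {
        return Multiply(m2, Multiply(InvertRigid(f), InvertRigid(m1)));
    }
}
```

Average several such estimates from different cube placements (average the translations, renormalize the averaged rotation) to knock down the per-shot marker noise.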

[quote="bgstaal, post:1, topic:15271"]
We are hoping to avoid this by trying to avoid pointing the kinects directly at each other.
[/quote]

Yes, you'll be fine here. You won't need the vibration trick. If the Kinects are permanently mounted and there does happen to be interference, just bump one once so it no longer lines up with the others' IR patterns. In your case, the only interference you'll likely see is a small spot where one sensor can directly see the IR projector of the other, but that area is probably not of interest to your reconstruction.

For reference, on the video wall project mentioned above, the interference with two Kinects overlapping by 50% was not bad. Skeleton tracking still worked fine on both if you were visible to both. The depth image had a few holes, but the holes were typically only in one depth image or the other. If you have three Kinects overlapping in one area, there can be more trouble, but that probably won't happen for you, except maybe on the floor, depending on how you line things up.
[quote="bgstaal, post:1, topic:15271"]
I'm already looking at PLC
[/quote]

Heads up: PCL has lots of dependencies, and they also tend to hard-fork other projects. For example, if you want to use Kinect with PCL directly, you need to use their fork of an old version of OpenNI. Ugh. Not sure what the ofxPCL story is, though. They also don't tend to care much about running on Windows or with K4W at all.

[quote="bgstaal, post:1, topic:15271"]
The information we are mostly interested in is the position, height and roughly the volume of each person in the room.
[/quote]

Not sure what all the other junk in the room will be, but if Kinect SDK skeleton tracking ends up working, even just for a few seconds per person, you can get the height by smoothing the head joint position and adding an offset (the distance from the center of the head to the top of the head, possibly scaled by the shoulder-to-hip distance on the theory that shorter people have proportionally smaller heads). This also gives you the position.
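A hypothetical C# sketch of that idea; the head offset and smoothing constant are assumptions you'd tune on site, and it assumes the sensor's Y axis is roughly aligned with gravity:

```csharp
using Microsoft.Kinect;

class HeightEstimator
{
    const float HeadOffsetM = 0.11f; // head center to crown, a guess to tune
    const float Alpha = 0.1f;        // exponential smoothing factor

    float smoothedHeadY = float.NaN;

    // floorY: the floor's Y in skeleton space, e.g. recovered from the
    // SDK's floor clip plane or measured for your particular mounting.
    public float Update(Skeleton skeleton, float floorY)
    {
        SkeletonPoint head = skeleton.Joints[JointType.Head].Position;
        smoothedHeadY = float.IsNaN(smoothedHeadY)
            ? head.Y
            : (1 - Alpha) * smoothedHeadY + Alpha * head.Y;
        return (smoothedHeadY + HeadOffsetM) - floorY;
    }
}
```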
If you run a blob-based tracker on top of this, you can associate the skeleton metrics with the blob and keep tracking the person even when skeleton tracking fails.

Depending on what the other stuff in the space is, you might also get false skeletons, or "mini-me's". Not sure.
[quote="bgstaal, post:1, topic:15271"]
Another option I'm thinking about is generating some kind of height map based on all the data and perform normal opencv based blob tracking.
[/quote]

If you have a merged point-cloud space, then yes, you could project all the points onto the floor to form an occupancy map, then blob track that. The blobs might be outlines only if you don't get a lot of the head points.
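A rough C# sketch of that projection-plus-labeling step with no OpenCV dependency; the grid size, cell resolution, and hit threshold are all assumptions to tune:

```csharp
using System.Collections.Generic;

static class OccupancyMap
{
    // Projects merged world-space points onto the floor plane (x, z),
    // thresholds the resulting occupancy grid, and labels connected
    // blobs with a 4-connected flood fill. Returns the label grid.
    public static int[,] Blobs(IEnumerable<(float x, float z)> floorPoints,
                               int gridW = 200, int gridH = 200,
                               float metersPerCell = 0.05f, int minHits = 5)
    {
        var hits = new int[gridW, gridH];
        foreach (var (x, z) in floorPoints)
        {
            int gx = (int)(x / metersPerCell) + gridW / 2; // x origin centered
            int gz = (int)(z / metersPerCell);             // z from the sensor
            if (gx >= 0 && gx < gridW && gz >= 0 && gz < gridH) hits[gx, gz]++;
        }

        var labels = new int[gridW, gridH];
        int nextLabel = 0;
        for (int x = 0; x < gridW; x++)
            for (int z = 0; z < gridH; z++)
            {
                if (hits[x, z] < minHits || labels[x, z] != 0) continue;
                nextLabel++;
                var stack = new Stack<(int, int)>();
                stack.Push((x, z));
                while (stack.Count > 0)
                {
                    var (cx, cz) = stack.Pop();
                    if (cx < 0 || cx >= gridW || cz < 0 || cz >= gridH) continue;
                    if (hits[cx, cz] < minHits || labels[cx, cz] != 0) continue;
                    labels[cx, cz] = nextLabel;
                    stack.Push((cx + 1, cz)); stack.Push((cx - 1, cz));
                    stack.Push((cx, cz + 1)); stack.Push((cx, cz - 1));
                }
            }
        return labels;
    }
}
```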

[quote="bgstaal, post:1, topic:15271"]
We will also have to try to predict where people will be in about 500ms. I'm thinking an approximation based on the velocity/trajectory of each person will do?
[/quote]

Sounds reasonable. You'll have to play with the prediction vs. smoothing parameters to get a good effect for your visualization.
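For instance, a minimal constant-velocity sketch in C#; the smoothing factor and the 500 ms horizon are the knobs to play with:

```csharp
class PositionPredictor
{
    const float Alpha = 0.3f; // velocity smoothing factor (tune)

    float px, pz, vx, vz;
    bool hasPrev;

    // Feed one tracked floor position per frame; returns the position
    // extrapolated horizonSeconds into the future.
    public (float x, float z) Update(float x, float z, float dtSeconds,
                                     float horizonSeconds = 0.5f)
    {
        if (hasPrev && dtSeconds > 0)
        {
            // Blend the instantaneous velocity into the smoothed estimate.
            vx = (1 - Alpha) * vx + Alpha * (x - px) / dtSeconds;
            vz = (1 - Alpha) * vz + Alpha * (z - pz) / dtSeconds;
        }
        px = x; pz = z; hasPrev = true;
        return (x + vx * horizonSeconds, z + vz * horizonSeconds);
    }
}
```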

Hope my input was helpful! Let me know if you have any additional questions.