Originally Posted by gillianms:

“AIIDE once again showed me that there are two sides to academic game AI research. One that takes concerns about games and tries to solve problems with new AI techniques, and one that takes new AI research and tries to apply it to problems in games, which are sometimes artificially constructed.”

It seemed to me at times that the typical academic approach was a little self-serving, in the sense that it existed only to generate more academic research. In essence, this creates a huge cyclical chain that doesn't really serve anyone's purpose but that of the academics themselves. Peer-reviewed work plays right into this in that all you need is for someone else to say "yep... that's cool." No one steps in and asks, "but what is it good for?"

“1. I love seeing screenshots and lectures on AI authoring tools. I find them extremely informative, and go out of my way to find information on them. However, I think there is some frustration in academia that many of these tools are not written about in great detail, or made available for us to play with. I suspect that for as many as are shown to us, there are many more that stay private. As proprietary tools, this is entirely understandable. I think academics are used to a more open policy, where we try to make demos of our work available for download, and frequently share our research code with collaborators. I am not sure what the best solution to this problem is.”

The problem with your premise is the assumption that using someone else's tools will be helpful. This is the same issue that most of the industry has with middleware - that is, most of the tools have to be game-specific because the data you are manipulating is game-specific. As an obvious example, even if Richard Evans gave you the source for his custom tool for associating the needs of the Sims with the actions they can take, it would have nothing to do with creating a tool for another game that doesn't have those KR models and outputs. The same goes for the Halo 3 tools.
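
To make the point concrete, here is a minimal sketch of what a Sims-style needs-to-actions model boils down to. All the names here are hypothetical illustrations, not Evans's actual data model: actions advertise how much they restore each need, and the agent picks whichever action scores best against its current deficits.

```python
# Minimal sketch of Sims-style needs-to-action scoring (hypothetical
# names and values, not the actual tool's data model). Each action
# "advertises" how much it restores each need; the agent picks the
# action with the best payoff given how depleted its needs are.

NEEDS = ["hunger", "energy", "fun", "social"]

class Action:
    def __init__(self, name, advertisements):
        self.name = name
        self.advertisements = advertisements  # need -> amount restored (0..1)

def score(action, needs):
    # Weight each advertisement by how depleted the need is, so a
    # starving agent prefers food over entertainment.
    return sum(action.advertisements.get(n, 0.0) * (1.0 - needs[n])
               for n in NEEDS)

def choose_action(actions, needs):
    return max(actions, key=lambda a: score(a, needs))

if __name__ == "__main__":
    actions = [
        Action("eat_snack", {"hunger": 0.4}),
        Action("nap", {"energy": 0.6}),
        Action("watch_tv", {"fun": 0.5, "energy": 0.1}),
    ]
    # 1.0 means fully satisfied; hunger is the most depleted need here.
    needs = {"hunger": 0.2, "energy": 0.9, "fun": 0.6, "social": 0.8}
    print(choose_action(actions, needs).name)  # -> eat_snack
```

Everything interesting in that sketch - which needs exist, what each action advertises - is the game's own data. A tool built to edit that table is useless to a game whose agents aren't driven by needs at all.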

“2. There seemed to be some miscommunication over the positioning of the particular work that fueled this debate in the first place. Many of us are particularly interested in making tools for *non-technical* audiences, which seems to be a different focus than internal tools used by industry developers.”

You saw in the last session how Havok Behavior has a graphical tool for HFSMs. So do the products by Xaitment. There are many tools that people can use for designing and constructing rudimentary FSMs, etc. Can they be used by non-technical people? "Non-technical" is a nebulous word. But is it reasonable to ask subject matter experts (SMEs) to understand some rudimentary rule-construction inputs in a tool? I believe it is.
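
For reference, this is roughly what such a graphical FSM tool compiles down to: a transition table plus a fixed runtime. The format below is a hypothetical illustration, not Havok Behavior's or Xaitment's actual output; the table is the only part the SME would ever touch.

```python
# Minimal sketch of the data-driven FSM behind a graphical authoring
# tool (hypothetical format). The SME-facing "rule construction" is
# just the transition table: in state X, on event Y, go to state Z.

TRANSITIONS = {
    # (current state, event) -> next state
    ("idle",        "saw_player"):  "alert",
    ("alert",       "lost_player"): "idle",
    ("alert",       "heard_noise"): "investigate",
    ("investigate", "timeout"):     "idle",
}

def step(state, event):
    # Events with no matching rule leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = "idle"
    for event in ["heard_noise", "saw_player", "heard_noise", "timeout"]:
        state = step(state, event)
        print(event, "->", state)
```

The runtime never changes; only the table does. That is exactly the level of rule construction it seems reasonable to ask an SME to learn.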

As idealistic as the "my grandma can use it" premise that was offered is, it isn't terribly feasible in practice. The goal shouldn't be moving the tool to the simplest implementation possible, but rather hitting the sweet spot: the most robust implementation in the simplest package. The product we were looking at during that lecture was too simple to be of any use. Proving that "my grandma can use it" is therefore irrelevant, since the tool does so very little that it wouldn't be useful in practice. The question that needs to be explored is, "at what level of behavioral complexity is there a divergence?"

In the security guard case, there is no divergence from hand-coded rules that could be handled in about 30 minutes of coding. Therefore, putting together a tool that an SME could use on his own has no benefit in that example. In theory, he could sit with a programmer, scribble down a handful of rules, and expect the programmer to have a working product in a short period of time.
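
As a sketch of what that scribbled-down handful of rules might look like once coded (the specific rules are hypothetical; the lecture's example isn't enumerated here):

```python
# Hand-coded rules for the security guard case (hypothetical rules,
# roughly the "30 minutes of coding" the example describes). The first
# matching rule wins; the last line is the default behavior.

def guard_action(obs):
    # obs is a dict of the guard's current observations.
    if obs["sees_intruder"]:
        return "chase"
    if obs["alarm_raised"]:
        return "run_to_alarm"
    if obs["heard_noise"]:
        return "investigate_noise"
    if obs["off_patrol_route"]:
        return "return_to_route"
    return "patrol"

if __name__ == "__main__":
    obs = {"sees_intruder": False, "alarm_raised": False,
           "heard_noise": True, "off_patrol_route": False}
    print(guard_action(obs))  # -> investigate_noise
```

At this scale there is simply nothing for a dedicated authoring tool to add.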

This is the problem that I had with many of the papers. In order to "prove the concept" of the paper, the test had to be so simple that there was no benefit to it from a production standpoint. The overriding question then became, "and how does this scale?" In many of the cases, it was obvious that the touted benefits would evaporate once that scaling happened. To achieve the complexity of behavior necessary to do anything worthwhile, the SME is going to have to encounter and manage the complexity of the ruleset, and nothing I saw offered a way to do that beyond what is already in use and proven by the industry's existing tools. This was the point made by the commenter in the session. He was dead on.

“3. The role of academia is often to look 5 and 10 years out, and provide useful demos as a proof of concept of our work. We typically do not have the resources available to make products that are ready to drop into your game or development environment. My understanding is that other disciplines in computer science address this by having "research industry" - companies like Microsoft, HP, or IBM that fund research at universities and in their own departments and have teams of developers ready to take good ideas and turn them into usable products (a recent example of this is Songsmith, from MSR).”

I can't speak to this point as much, but it seems that looking 5-10 years out in an industry that changes every 18 months is a little counter-productive.

Note that the preamble of the allegedly "disturbed" commenter was "industry is often disparaged by academia, and yet..." The ferociously defensive reaction of the hard-line academics in the room only served to prove that point. What's more, they also proved that they really don't know what is necessary to work outside the sterile, controlled environment of an academic test.

The result of this is that industry churns out robust tools... because we have to. The fact that these results are looked down upon by academia because they aren't... well... academic is at the root of the problem. The air of snobbery is further heightened by the apparent gaps in the academics' awareness, expressed in comments such as "I haven't seen tools such as these" and "what is a behavior tree?" If you aren't aware of these things, it means you haven't peeked out of the ivory tower in a while... for whatever reason.

More on this later, if necessary.