How is motion capture used in video game development

GD110
Thomas Campbell
GDMar10
30 April 2010

How is motion capture used in video game development

1. Abstract.
2. Introduction.
3. Search Methods.
4. Research Management Process Followed.
5. Literature Review.
6. Data Collection.
7. Findings.
8. Improvements to Research Method.
9. Conclusions.
10. Further Work.

Abstract

This research paper explains the technical details of how motion capture is used in video game development. It discusses methods of motion capture analysis and the impact motion capture has on the development process for video games. Examples are given and conclusions drawn about the level of acceptance of the technology, its effectiveness, and the impact its use has on video game development.

Introduction

Motion capture is the process of taking movement data from specialized motion capture analysis systems and applying that data to digital objects. Its most common use is to speed up the process of animating digital characters and to raise the quality of their animation.

Motion capture works by taking footage and running it through specialized software packages that analyze and measure the position and orientation of objects in the recorded footage. The software then outputs that information as data that is usable in other software packages. Most often these are 3D suites, where the data is applied as animation to the models in a scene.

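To make this hand-off concrete, below is a minimal sketch of how exported motion data could be applied to a digital skeleton, assuming the analysis software has written out per-frame joint rotations. The joint names and data layout are hypothetical; real pipelines usually exchange this data through formats such as BVH or FBX and apply it inside a 3D suite.

    # Minimal sketch (not the output format of any specific mocap package) of
    # applying one frame of captured joint rotations to a digital skeleton.
    from dataclasses import dataclass, field

    @dataclass
    class Joint:
        name: str
        rotation: tuple = (0.0, 0.0, 0.0)   # Euler angles in degrees
        children: list = field(default_factory=list)

    def apply_mocap_frame(skeleton, frame):
        """Copy one frame of captured rotations onto matching skeleton joints."""
        for joint_name, rotation in frame.items():
            if joint_name in skeleton:       # ignore data with no matching joint
                skeleton[joint_name].rotation = rotation

    # Hypothetical skeleton and one frame of captured data.
    skeleton = {name: Joint(name) for name in ("shoulder_L", "elbow_L", "wrist_L")}
    captured_frame = {"shoulder_L": (10.0, 0.0, 45.0), "elbow_L": (0.0, 0.0, 90.0)}

    apply_mocap_frame(skeleton, captured_frame)
    print(skeleton["elbow_L"].rotation)      # (0.0, 0.0, 90.0)
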
To help the motion capture software, the objects being analyzed carry reference points that the software can detect more easily. The most common practice is to place reflective balls at specific points on the object's surface. For example, if a studio wished to capture movement data from a motion capture actor and apply it to a digital model, the reference points would be placed at locations on the actor's body that are at, or close to, the same relative locations as the control points of the digital model's skeleton. These are often, but not always, located at major joints on the human body - for example, the shoulder, elbow, and wrist. Placement at these points allows the analysis software not only to focus specifically on these reference points, but also to ignore everything else in the footage. The relevant movement data is then recorded for production animators to apply to their digital skeletons.

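As a simple illustration of why the markers sit at the joints, the sketch below computes an elbow angle from three made-up marker positions. Production analysis software does far more (marker labelling, filtering, solving a full skeleton), but the underlying geometry is similar.

    # Sketch of how marker positions at the shoulder, elbow and wrist could be
    # turned into a joint angle for a digital skeleton. Coordinates are made up.
    import numpy as np

    def joint_angle(parent, joint, child):
        """Angle in degrees at `joint`, formed by the parent and child markers."""
        v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
        v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    shoulder = (0.0, 1.5, 0.0)   # marker positions in metres
    elbow = (0.3, 1.2, 0.0)
    wrist = (0.3, 0.9, 0.0)

    print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
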
For facial motion capture the same process is followed, but the reference points are normally much greater in number and are often replaced with dots on the actor's face. An example of this in practice can be seen in [11] where, during production of the video game 'Heavy Rain', the studio used motion capture to increase the realism of their characters and make them more life-like in their roles. In the video released by the studio of an actress reciting a scene, the actress is shown with many reference points over the entirety of her face. The studio did this because the greater the number of points, the greater the accuracy of the animation that will be applied to the character's face in the game. The human face can display a great deal of emotion through a very high number of slight movements that are often overlooked when a digital face is animated manually. Placing these reference points on the face allows the analysis software to detect these slight movements and record them accurately for the animators to use. It can also be noted that the video of the Heavy Rain actress shows the larger ball reference points on her head. There are only five of these because the head has no other moving parts that need to be captured; only the orientation and location of the head is important (although this can be changed to suit any situation).

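The sketch below is a hypothetical simplification (not Quantic Dream's pipeline) of how the displacement of a single facial dot from its neutral position could be turned into a blendshape weight that drives a character's face in the engine.

    # Hypothetical illustration: turning the displacement of one facial dot from
    # its neutral position into a blendshape weight for the game-side face rig.
    import numpy as np

    def blendshape_weight(neutral, current, max_offset):
        """Normalized 0..1 weight from how far a marker has moved from neutral."""
        offset = np.linalg.norm(np.asarray(current, float) - np.asarray(neutral, float))
        return float(np.clip(offset / max_offset, 0.0, 1.0))

    neutral_pos = (2.0, -1.0, 0.0)   # mouth-corner dot at rest, in centimetres
    smiling_pos = (2.6, -0.4, 0.1)   # the same dot during a smile

    print(f"smile blendshape weight: {blendshape_weight(neutral_pos, smiling_pos, 1.0):.2f}")
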
An alternative to motion capture that uses no reference points is known as 'marker-less motion capture'. This method does not require the actors to wear special bodysuits or reflective balls for the analysis software to pick up; instead it relies on specialized algorithms designed to recognize specific shapes. As [5] shows, in the tech demos of MeMoMan the researchers were able to use this method to map a human mesh over an actor's body and then sync the actor's movement to the skeleton of the model. This technology has been used in various AAA titles, which are covered below in the Findings.

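As a rough illustration of one building block of marker-less capture, the OpenCV sketch below separates a moving actor from a static background and draws a box around the resulting silhouette. Systems such as MeMoMan go much further and fit a full body model to that silhouette; the input filename here is hypothetical and the code assumes OpenCV 4.

    # One small building block of marker-less capture: separating a moving actor
    # from a static background and boxing the silhouette.
    import cv2

    capture = cv2.VideoCapture("studio_footage.mp4")   # hypothetical input file
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                   # white pixels = movement
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            actor = max(contours, key=cv2.contourArea)   # assume largest blob is the actor
            x, y, w, h = cv2.boundingRect(actor)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("silhouette", frame)
        if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
            break

    capture.release()
    cv2.destroyAllWindows()
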
Another method of motion capture is offered by the company Xsens, who provide MVN. This system uses a technology known as 'inertial motion capture'. It is similar to the methods listed above in that markers are placed on an object and movement data is captured from those markers through specialized analysis software. What is different is that the actor wears a special body suit containing 'inertial sensors'. These sensors capture changes in acceleration, which in turn provide velocity and positional change as the actor moves. The disadvantage of this process is that the system cannot provide absolute 'world data' - the data that tells the animators where exactly in the recording studio the actor was. This data is commonly needed to create realistic animations where the actor moves around a scene (such as cut-scenes). Not recording this data can also be advantageous, because studios are not required to have large open areas to record in, or specialized rigs that can take a great deal of time and space to install. Because the inertial system cannot capture world space data, the recorded movement can drift from its correct position relative to the markers on the actor's body. This can lead to errors that are problematic if the animation needs to be very precise - such as a character picking up an object in the game: if the data is off, the character may reach into empty space or stick a hand through a box. This method is relatively cheap when compared to marker-less motion capture.

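The toy calculation below shows why this drift happens: position is recovered by integrating acceleration twice, so even a small constant sensor bias grows into a noticeable position error over a capture session. The numbers are invented for illustration.

    # Toy illustration of inertial drift: position comes from integrating
    # acceleration twice, so a small constant sensor bias grows over time.
    dt = 0.01        # 100 Hz sample rate
    bias = 0.02      # constant accelerometer error in m/s^2 (made up)
    duration = 10.0  # seconds of capture

    velocity = 0.0
    position = 0.0
    for _ in range(int(duration / dt)):
        velocity += bias * dt      # the actor is actually standing still
        position += velocity * dt

    print(f"after {duration:.0f} s the estimate has drifted {position:.2f} m")
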
Search Methods

All of the information for this research paper was found on the Internet, predominantly in opinion-based articles and academic journals. The Internet-based journals were more reliable, showed more scientific and academic rigor, and contained evidence comprised of pragmatic facts and information supporting their claims. In comparison, the articles were opinion based and most did not contain supporting evidence to back up their reports. Overall the information found was of a pragmatic and quantitative nature, although several articles that at first appeared to be quantitative turned out to be insubstantial and did not contain enough applicable information for this paper's purposes. The search method could be improved by using more qualitative surveys and other research pieces based around a similar subject to the chosen question.

Research Management Process Followed

For this research piece, the research management process prescribed by Media Design School was used. The first stage was to define a research subject and question. The next stage was to find articles related to the research area and the question, and then to complete an annotated bibliography for each piece of supporting evidence (the template for the annotated bibliography was supplied). A critical analysis was conducted to measure the worth of each article. Once completed, the research paper was composed using the analysis conducted at the earlier stages.

Literature Review

Marker-less facial motion capture created by Mova has been developed for use in various high-end production 3D packages and has been effectively incorporated into the Unreal 3 engine [1]. The Contour Reality System package the company provides is a high-end option aimed at studios with a large budget for facial motion capture and animation. The camera rig the software requires is large and bulky, but produces very precise results.

A report from GDC (Game Developers Conference) in 2009 [2] covered several motion capture middle-ware options, detailing how each option captures movement data, how it analyzes this data, and how it makes the data available for digital packages to read. Each section of the article concludes with a thorough evaluation of the middle-ware option in question, including its advantages, its disadvantages and the personal opinion of the reporter.

A research paper written by David J. Sturman [3] details the history of motion capture from the early nineteen-eighties through to nineteen-ninety-nine. The paper discusses in depth how the motion capture process and technology have changed over that period, and Sturman refers to various video games and companies that have used the technology as examples. This data was not directly applicable to the findings, but it served as useful background information in the writing of this paper.

A research paper written by the company Xsens Technologies [4] scientifically describes the math and logic behind motion capture analysis. Their product MVN is mentioned in [2] as a middle-ware candidate for studio motion capture. The MVN package is an inertial motion capture middle-ware option, but it requires the motion capture actor to wear a body suit in order to capture the movement data. The advantages and disadvantages of this package are covered in the Introduction.

A publication from the team that worked on the motion capture package MeMoMan [5] details their research into marker-less motion capture. The research shows that their method for full-body marker-less capture is a viable option for developers that cannot afford expensive rigs. The method uses several regular cameras placed at specific positions in a scene. The cameras record any movement within the scene, and this footage is run through the MeMoMan software, which produces movement data that can be applied to a 3D character model.

A public article on Wikipedia lists information about Microsoft's Project Natal [6] - a marker-less motion capture device designed for Microsoft's Xbox 360 console. It is designed to be cheap and accessible for public use as an add-on device for the console, and the article lists examples of video games in development to be played specifically with this device. Another article on the company PrimeSense's website details the technical aspects of the Natal camera [7], describing how the device analyzes the scene it is presented with and how the camera software takes relevant movement data from the scene and interprets it into data that the console or PC can use. These technical details and camera specifications can be compared against the VICON motion capture system in [18] to see whether the Natal is a cost-effective, quality alternative to the higher-end camera rigs.

In an interview of motion capture actor Reuben Langdon by Nathan Meunier [8], Langdon talks about his experience using motion capture systems for various video game developers. He details what is involved in a typical day of motion capture and gives examples of the rigs he has worked with while in the industry.

An article from a lead animator who worked on Uncharted 2 [10] by Naughty Dog, and a developer video released by the same studio [11], contain details about the in-house development process and about the middle-ware tools the studio used to interpret the data taken from their motion capture rigs and import it into Maya to apply to their characters.

A video example taken from the development of Heavy Rain by Quantic Dream [11] shows the motion capture of an actor acting out a scene from the game. The video shows the motion capture method and process the studio used for the scene.

NAMCO BANDAI Games, located in San Francisco, CA, released a developer video of their currently-in-development game Dead to Rights: Retribution. The video shows the studio's motion capture process, and how the studio's animators stylized the captured movement data before applying it to their characters.

The Internet news blog 'The Gadget Show' released a video interview with a motion capture actor who acted in Assassin's Creed 2 [14] by Ubisoft. It shows the motion capture method the studio used, as well as the actor's opinions and impressions of the work he did while on the project.

A press release from August of 2005 details that Electronic Arts made a large purchase of the high-quality VICON motion capture system for use in their forthcoming video game development.

An article named 'Prince of Persia and the birth of motion capture', hosted on dans.blog [15], details a reference video filmed by Jordan Mechner, the original creator of the video game Prince of Persia, which was released in 1989. The video was used as reference for the game's animation, and the game was later hailed for the ground-breaking quality of its character animation. This is an early form of motion capture, and it shows that the term itself is not limited to data collection and analysis from motion capture rigs.

Two articles written by Heath Firestone (one on motion capture as a subject [16] and the other on the future of motion capture [17]) both talk about how motion capture works and what it brings to film and video games. The articles detail different methods of motion capture and refer to Mova's [1] method of 'Retrospective Vertex Tracking' as well as the traditional method of creating reference points on an actor's body with colored balls or infrared diodes. They also talk in detail about the production process and workflow that motion capture studios use when taking the data from their analysis machines and preparing it for their animators.

An article by The Inquirer [20] in 2009 details how a research group at the University of Sussex has developed a low-cost but effective motion capture rig - one that is very different from current movement capture systems. They have created an exoskeleton that moves with the user's limbs and gathers data from the movement of the rig itself. The low cost of the system means the team can provide an alternative for development teams that do not have the budget to use one of the higher-end rigs (such as those detailed in [2]).

In 2008, Leah D'Emilio, a reporter for `MahaloDaily.com`, interviewed the motion capture software developer Image Metrics [22], speaking with Pete Busch, the Director of Production at Image Metrics. The main focus of the interview was Image Metrics' 'Emily Project', a project by the studio to develop a highly realistic human model and to use their software, in conjunction with the USC Institute for Creative Technologies' method of capturing facial data and animation, to record and apply animation to this model. The video and an article about the project are available at [23]; the article refers to the ICT and the part they played in the production pipeline. Image Metrics, the 'Emily Project' and the USC ICT are covered further below in the Findings of this paper.

In another interview, Leah D'Emilio spoke with Paul Debevec [24], the associate director for graphics research at the USC Institute for Creative Technologies. This lab is best known for its work on Image Metrics' 'Emily Project', and it has worked in conjunction with other studios to provide high-resolution image scanning of actors' faces for video games.

An article regarding the company Bungie, and how they will be using Image Metrics' motion capture systems in their upcoming big-budget game Halo: Reach [25], quotes both the CEO of Image Metrics and Bungie on their opinions of the choice to bring the technology in-house and use it in conjunction with their pipeline to create high-quality facial animation. These quotes are covered below in the Findings.

Data Collection

All of the information was gathered from secondary sources. Each source was specifically chosen for this paper and was checked for relevance to the chosen subject and question; any irrelevant information within a source was disregarded.

Findings

Through literature review and research, this paper has found that motion capture is an effective alternative to manual animation when a team needs to quickly and realistically animate characters in its production pipeline.

Motion capture is widely used in current game development because it allows for a high level of realism at a low time cost, and it allows production teams to quickly take a concept from being just an idea to being able to see how it works in their video game.

Motion capture is an accepted and quickly adopted method of animating characters in current video game development pipelines. Developers such as Capcom, Naughty Dog and Ubisoft have used motion capture in their development processes. Each studio has used motion capture in similar ways, but has tailored the process differently for each game. For example, on Uncharted 2 by Naughty Dog the studio used the voice actors to do the motion capture work for their respective characters. In contrast, for Resident Evil 5 [8] by Capcom and Assassin's Creed 2 [21] by Ubisoft, classically trained actors were brought into the studio to do this job.

In April 2009, an interview was run by 'CheatCodeCentral' of motion capture actor Reuben Langdon [8], who has worked on the motion capture for various video games in the Resident Evil franchise and has ten years of experience in the field. He told Nathan Meunier (the interviewer) that his experience working with the studios involved had been very different through each iteration of the series, with the motion capture process going from being physically wired up to the analysis computers to using the VICON camera system. Langdon said that "..with the new technology, even the subtlest moves we picked up. There was no need to really over-act or make things really big, because the data was able to pick up all the small subtle nuances..". Langdon was also quoted as saying "I'm talkin' full 3D, like Star Trek holo-deck kinda stuff. The principles behind the technology are there, they exist! It's just a matter of refining it and making it cheap enough for the masses. Maybe, in the future, we'll all have them in our homes.."

Two months later, in June of 2009, Microsoft announced Project Natal - a motion capture system that could be sold to the average consumer, who could take the unit home and use it to interact with games in a different way. This shows that motion capture is being taken seriously and is considered a profitable, worthwhile technology in which to invest time and money as a different method of interacting with video games.

The Natal unit utilizes a camera that works by measuring the depth of the scene it is viewing. This is done by projecting infrared light onto the scene, which is in turn reflected back at a camera within the unit. The software analyzes this reflected information and turns it into data the game can interpret and respond to. This method of capturing a scene is similar to what regular motion capture uses, although in regular motion capture it is the marked points on the body that are reflected back at the cameras. This technology will be released to the public in late 2010 and will be offered at a much lower cost, enabling consumers who may otherwise never have experienced motion capture to buy it. While the system will be cheaper and readily available, it sacrifices the accuracy of production-quality motion capture that is needed in video game development (such as what the VICON system provides).

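As a rough illustration of what such a depth measurement gives a developer, the sketch below back-projects one depth pixel into a 3D point using a standard pinhole camera model. The focal length and image centre are assumed values for illustration, not the actual specifications of the Natal hardware.

    # Rough illustration of what a depth camera measurement provides: back-
    # projecting one depth pixel into a 3D point with a pinhole camera model.
    def depth_pixel_to_point(u, v, depth_m, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
        """Return the camera-space (x, y, z) point seen at pixel (u, v)."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m)

    # A pixel to the lower right of the image centre, measured 2.5 m away.
    print(depth_pixel_to_point(400, 300, 2.5))   # roughly (0.34, 0.26, 2.5)
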
In a news video article by Leah D'Emilio [22], she interviews a developer who worked on the 'Emily Project' [23] - an experiment in mixing motion capture data and live footage to attempt to create a photo-realistic human face. The article in [23] refers to the ICT (University of Southern California Institute for Creative Technologies), as that lab helped to scan and process high-resolution images in order to build the face for 'Emily'. The actress was photographed in thirty-five different extreme facial poses in order to capture the full range of emotions that her face could display. A three-dimensional model was built using this information and was sent to Image Metrics, who then ran the model through their own proprietary software along with the facial performance of Emily O'Brien (the person after whom 'Emily' was modeled). The end result is the video shown in [23]: a live-action video scene overlaid with a fully animated, highly realistic three-dimensional model of the actress's face.

What makes the ICT method different from other motion capture methods is the way the rigs are built and the way software is used to analyze the images that are taken. In an interview at the ICT lab [24], Leah D'Emilio was given a tour of the studio, and a video showed the tools and rigs that were used. Dr. Paul Debevec, the associate director of graphics research at USC (who won an Academy Award for the `Emily Project` in February of 2010), also talked about the technical details of the work he and his team were doing at the ICT. When talking about how their system works, Debevec was quoted as saying that "..that (the reference shots of the face taken) gives you, like this massive amount of data set that gives what a person's face looks like under any lighting direction." As mentioned above, this data is required to build the three-dimensional mesh that studios use to create the animation or characters. When talking about the image capture rig the studio used to capture her while on the tour, Leah D'Emilio was quoted as saying "It was comprised of off-the-shelf LED lights, polarizing filters and Nikon cameras plugged into a computer." The ICT method is completely marker-less, making it relatively easy for video game development studios to bring in actors for recording lines and facial animation.

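To give a sense of what data covering 'any lighting direction' enables, the toy sketch below blends images captured under individual lights into an image under a new lighting environment; because light adds linearly, a weighted sum of one-light-at-a-time photographs approximates the new lighting. The arrays here are random placeholders, not ICT's actual data or pipeline.

    # Toy sketch of the idea behind the lighting data: light adds linearly, so
    # photographs of the face lit one light at a time can be blended to
    # approximate the face under a new lighting environment.
    import numpy as np

    num_lights, height, width = 4, 8, 8
    one_light_images = np.random.rand(num_lights, height, width, 3)

    # How bright each captured lighting direction is in the new environment.
    environment_weights = np.array([0.1, 0.6, 0.2, 0.4])

    relit_face = np.tensordot(environment_weights, one_light_images, axes=1)
    print(relit_face.shape)   # (8, 8, 3) - one relit image
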
Both Image Metrics and the ICT have contributed to various major video game titles because of their experience with facial motion capture - titles such as Assassin's Creed II, Gears of War 2, GTA IV, Operation Flashpoint: Dragon Rising and God of War III.

The upcoming big-budget game Halo: Reach will also be taking advantage of Image Metrics' Faceware software for its facial animation. The CEO of Image Metrics, Michael Starkenburg, was quoted as saying: "Image Metrics has used Faceware for years to create believable facial animation at all levels of quality. The technology has been proven repeatedly, whether on AAA game titles, Oscar-winning films, and Grammy-winning music videos. We are excited to put this powerful tool in the hands of world-class developers like Bungie, so that they can benefit from the reduced production schedules, increased believability, and cost savings Image Metrics has always provided through our technology."

Marcus Lehto, the creative director at Bungie, said when talking about the choice: "Creating realistic facial animation is a critical component to bringing believable, character-driven performances to life for our next studio project. Faceware lets us raise our extremely high quality bar and maintain the ultra fast turnaround times and animator efficiency we need without impacting our existing pipeline. No other tool gives us this level of flexibility and control." Bungie is the developer of Halo, arguably the largest and most popular video game franchise on Microsoft's Xbox 360 console. The studio's adoption of Image Metrics' motion capture techniques further reinforces the overall theme found during the research for this paper - that motion capture is being taken seriously as a process that can enhance current video games when brought into the development pipeline.

Mova and their Contour Reality System have brought a high-end option for marker-less facial motion capture to video game studios. The system uses a rig of high-quality cameras to create 100,000-polygon facial models that are accurate to within one tenth of a millimeter, and the capture of movement data occurs in real time. The system can also automatically create materials from the captured footage at the same time. This process allows for a very high level of accuracy while recording the facial movement of an actor. The Contour Reality System has been shown to allow animators to import the results into the Unreal 3 engine, as demonstrated in [1]: an actor was brought in to recite a line, the Contour system automatically created a highly accurate likeness of the actor's head and recorded the performance, and the result was successfully imported into the Unreal 3 engine with the line played back while the head animated along. Tech demos on the Mova website, which display the facial mesh that was created, show that importing into the Unreal engine is not limited to using the original actor to record lines; replacement performers can drive the animation, as the technology is able to distinguish human faces and automatically adjust to suit the situation. In one example, the Mova team brought in female actors to give short performances, and the male head was synced to each actor's facial movements alongside. It is common in video games for a studio to need a voice actor who is physically very different from the character they have created, and this technology allows studios to do this with ease.

Improvements to Research Method

The research method could be greatly improved by collecting and analyzing more research papers on the chosen subject and focusing less on news articles. This would allow more qualitative information to be used in this paper, which would provide stronger supporting evidence for the findings of the research.

Another way to improve the research method would be to speak directly to production studios that are known to use motion capture and ask for interviews to discuss their current practices and the motion capture techniques they use in their pipelines.

Conclusions

The main finding of the research is that motion capture is a popular method in current video game development production pipelines as a way of speeding up, and raising the quality of, the animation created for digital characters in video games. Motion capture studios are investing in research and development of high-quality motion capture that can be applied to the development of video games. In return, video game studios are expanding their interest in funding to bring motion capture into their production pipelines - this is shown by the number of video games that have used motion capture to enhance their animation, both those released within the last few years and those coming out in the coming months.

During the research for this paper it has become apparent that the most popular method of full-body motion capture is to apply markers to a human performer and allow the analysis rigs to capture the movement of only the specified points. This option is currently the cheapest to install and maintain, and it often requires less time to process the recorded movement when compared with other available options. Compared with inertial motion capture, the quality of regular marker-based capture is significantly higher and provides advantages such as capturing the world space locations of the marked points within a set area.

Currently the most popular method of facial motion capture is marker-less motion capture, because of the very high quality of emotional movement that it can capture, as well as its other advantages such as automatic scanning and texturing of an actor's face. Specifically, the analysis techniques offered by Image Metrics, in conjunction with the recording rigs developed by VICON, are currently the most popular in game development.

The University of Southern California's Institute for Creative Technologies (or ICT) has been mentioned several times in the research for this paper because of its connections to both Image Metrics and VICON, and because it has been providing high-quality motion capture methods to both film and video games for several years.

Marker-less motion capture has proven popular in the video game industry, as many big-budget video game studios have been using this method in their development pipelines more often over the last few years. This is evidenced by blockbuster titles such as Grand Theft Auto 4, the recently released Assassin's Creed 2, and the soon-to-be-released Halo: Reach. Such examples show that game development studios are eager to make use of the advantages that motion capture offers.

The main way to improve the research method would be to collect a greater number of qualitative research papers on the subject, which would offer scientific data that could be applied to the paper. Another way would be to collect first-hand evidence to help support the evidence found in the literature review - such as first-hand interviews with professionals in the field.

Future research projects could benefit from developing experiments that test motion capture rigs directly for the advantages and disadvantages of each option. This would allow primary-source scientific data to be used as backing evidence in the reporting of any future research papers. They would also benefit from more first-hand interviews with game developers and studios about their opinions of and experiences with motion capture. This would give the researcher up-to-date opinions from professionals on the subject, which could be used (with applicable quotes inserted) to support any evidence that the researcher discovers in the literature review.

Further Work

The research papers that were collected were enlightening on the subject of motion capture. They show that there is a strong level of acceptance of adopting motion capture technology into current video game development pipelines. However, as shown in the analysis of [1] and [2], the cost of motion capture rigs can put them outside the budgets of many small development studios, although, as shown in [20], there are motion capture systems that can offer an option for this demographic. That being said, further research could be done into evaluating such systems and comparing their efficiency, quality and accuracy of data output against the higher-budget motion capture rigs.

References:

[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]