HMS 201 - Section C: Active Learning and Research Methodology A.M. Hamzeh

AMERICAN UNIVERSITY OF SCIENCE & TECHNOLOGY
DEPARTMENT OF HUMANITIES AND SOCIAL SCIENCES
HMS 201: Active Learning and Research Methodology
Fall Term 2017-2018
Homework No. 3
Due: Wednesday, December 20, 2017

You are asked to practice skimming, active reading, and summarizing the following article. You must submit a hard-copy, typed summary. The first page should be the result of your previewing and skimming. The second page should be the result of your active reading. The third page should be your final complete summary.
The Design and Development of a Lie Detection System using Facial Micro-Expressions

Michel Owayjan, Ahmad Kashour, Nancy Al Haddad, Mohamad Fadel, and Ghinwa Al Souki
Department of Computer and Communications Engineering
American University of Science & Technology (AUST)
Beirut, Lebanon
mowayjan@aust.edu.lb, {ahmad_s13, nancy.had}@hotmail.com, mohd.fadel@ymail.com, ghino2007_souki@hotmail.com
Abstract— Detecting lies is crucial in many areas, such as airport security, police investigations, and counter-terrorism. One technique to detect lies is the identification of facial micro-expressions, which are brief, involuntary expressions that appear on a person's face when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is labor-intensive, time-consuming, and inaccurate. This paper presents the design and development of a lie detection system using facial micro-expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. A LabVIEW program then converts the video into a series of frames and processes the frames, one at a time, in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates to each frame to locate key features of the facial structure. The fourth stage extracts the measurements needed to detect facial micro-expressions and determine whether the subject is lying. Testing results show that this system can be used for interpreting eight facial expressions (happiness, sadness, joy, anger, fear, surprise, disgust, and contempt) and for detecting facial micro-expressions. It produces accurate output that can be employed in other fields of study, such as psychological assessment. The results indicate a precision high enough to allow future development of applications that respond to spontaneous facial expressions in real time.

Keywords— Lie Detection; Facial Micro-Expressions; LabVIEW; Image Processing; Vision System
Introduction

For as long as human beings have deceived one another, people have tried to develop techniques to detect deception and find the truth. Lie detection took on aspects of modern science with the twentieth-century development of techniques intended for the psychophysiological detection of deception, most prominently polygraph testing. The polygraph instrument measures several physiological processes and changes in those processes. On a polygraph test, examiners observe the charts of these measures in response to questions and then infer whether a person is lying or telling the truth [1]. Polygraph testing is used for three main purposes: event-specific investigations, employee screening, and preemployment screening. Each use involves the search for different kinds of information and has different implications [2].

Researchers are developing several techniques to detect lying individuals. British airport authorities are testing one system based on the Facial Action Coding System (FACS) [1]. The human face is a sign vehicle that sends messages using not only its basic structure and muscle tone, but also changes in the face that convey expressions, such as smiles and frowns. A person's mood and intentions can be read from these facial expressions. Moreover, micro-expressions arise from certain physiological responses that most humans undergo when attempting to deceive another person. They are called micro-expressions because they last only fractions of a second and are involuntary. In addition to lie detection, such systems may also be used to detect some diseases, or to test for alcohol, where changes in the face may occur. Another domain that may use these kinds of systems is psychiatry [3].

Facial micro-expressions have been shown to be an important behavioral source for detecting hostile intent and dangerous demeanor [4]. The specific objective of this paper is to design and develop a lie detection system using facial micro-expression recognition in real time.
Background

Many systems have been developed to detect lying subjects in several domains, such as police investigations, airport and homeland security, clinical testing, and human resource departments in organizations and companies.

The polygraph, popularly referred to as a lie detector, measures and records several physiological indices, such as blood pressure, pulse, respiration, and skin conductivity, while the subject is asked and answers a series of questions. The belief is that deceptive answers will produce physiological responses that can be differentiated from those associated with non-deceptive answers. However, several countermeasures designed to pass polygraph tests have been described [5-6].
Recent advances in camera and computer technology have led to the development of a method that uses body heat to detect deception. Special thermal cameras capture subtle changes in the temperature of a person's face, usually around the eyes, that are associated with physiological arousal. When these areas become warmer, they signal that the person has reacted to the picture, word, or question that was presented to him or her. These changes may also be triggered during deception. One of the main advantages of thermal imaging is that it is non-contact, so it does not entail placing sensors on the body, as the polygraph does. This opens the potential for rapid screening applications, such as at airports. The main disadvantage of thermal imaging is that the cameras and associated instrumentation are very expensive. Also, the changes that occur during deception are very fast and very small, so algorithms are necessary to detect the patterns that appear during lying. These algorithms have not yet been validated [7].

US researchers at Temple University, on the other hand, have found that a medical scan that can pick up brain tumors can also be used to tell whether a person is lying. According to [8], when people are telling the truth, they use different parts of their brain than when they lie. These changes were detected by functional magnetic resonance imaging. The method may prove more accurate than traditional machines; however, it requires huge and expensive scanners [8].

Researchers at Drexel University and the University of Pennsylvania in Philadelphia have developed a new lie detection method that relies on infrared waves beamed directly into the brain. Called the functional near-infrared sensor (FNIR), their headband monitors the amount of oxygen in the blood in various portions of the brain to determine when subjects are lying. The headband can also be used to detect and differentiate between guilt, anxiety, and fear. This new method significantly limits both false positives and false negatives by more accurately differentiating between intentional deception, guilt, and anxiety. The specifics of the hardware, detection method, and signal processing analysis are not currently publicly available [9].
Another approach is based on detecting micro-expressions, which are facial expressions exhibited during a short time interval, usually a few milliseconds. This method is non-contact, since it is based on pictures of the individual's face captured by high-speed cameras. Humans convey messages, voluntarily and involuntarily, using their faces. There are eight basic facial expressions: anger, contempt, disgust, fear, happiness, joy, sadness, and surprise. They are encoded as combinations of Action Units (AUs) of different muscles in the face according to the Facial Action Coding System (FACS) developed by Ekman and summarized in Table I [10-12].

Facial muscle movements can be classified into two types: those that are obvious and easy to observe by eye, and micro muscle movements that are volatile and hard to see. As its name implies, a micro movement occurs in 1/25 of a second. The movement of these muscles may be horizontal, vertical, or even oblique. For example, the corners of the lips may move closer to each other (when a person is not smiling) or farther apart (when a person is smiling), creating an expanded horizontal line whose length can be measured and varies according to how the subject is responding to a given question [10-12].
TABLE I. EMOTIONS AND THEIR EQUIVALENT FACS CODES

Emotion   | Muscle description                                                                                    | Associated AUs
Anger     | Nostrils raised, mouth compressed, furrowed brow, eyes wide open, head erect                          | 4, 5, 24, 38
Contempt  | Lip protrusion, nose wrinkle, partial closure of eyelids, eyes turned away, upper lip raised          | 9, 10, 22, 41, 61 or 62
Disgust   | Lower lip turned down, upper lip raised, expiration, mouth open, blowing out protruding lips, lower lip or tongue protruded | 10, 16, 22, 25 or 26
Fear      | Eyes open, mouth open, lips retracted, eyebrows raised                                                | 1, 2, 5, 20
Happiness | Eyes sparkle, skin under eyes wrinkled, mouth drawn back at corners                                   | 6, 12
Joy       | Zygomatic and orbicularis engaged, upper lip raised, nasolabial fold formed                           | 6, 7, 12
Sadness   | Corners of mouth depressed, inner corners of eyebrows raised                                          | 1, 15
Surprise  | Eyebrows raised, mouth open, eyes open, lips protruded                                                | 1, 2, 5, 25 or 26
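To make the coding in Table I concrete, the following minimal sketch (Python) encodes each emotion as a set of Action Units and looks up which emotions a set of active AUs satisfies. The names, the data structure, and the subset test are illustrative assumptions; the paper's own system is built in LabVIEW, not in Python.

    # Illustrative sketch only: a lookup of the emotion-to-AU coding in Table I.
    # Names and structure are hypothetical, not part of the authors' system.
    FACS_AUS = {
        "anger":     {4, 5, 24, 38},
        "contempt":  {9, 10, 22, 41},   # plus AU 61 or 62 (eyes turned away)
        "disgust":   {10, 16, 22},      # plus AU 25 or 26
        "fear":      {1, 2, 5, 20},
        "happiness": {6, 12},
        "joy":       {6, 7, 12},
        "sadness":   {1, 15},
        "surprise":  {1, 2, 5},         # plus AU 25 or 26
    }

    def match_emotions(active_aus):
        """Return the emotions whose required AUs are all active in a frame."""
        return [e for e, aus in FACS_AUS.items() if aus <= set(active_aus)]

    print(match_emotions([6, 12]))  # ['happiness'], per Table I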
Polikovsky et al. proposed, in 2010, a computer vision method for measuring facial micro-expressions with a GUI interface, which is useful for acquiring efficient ground-truth tagging of micro-expressions from recorded videos. For initial testing, they prepared a simple database containing posed micro-expressions of 10 participants. Theirs is another approach, based on direct tracking of 20 facial feature points (eyes, mouth corners, eyebrow edges, etc.) by particular filters. The 3D gradient oriented histogram descriptor was chosen for facial motion detection due to its ability to capture the correlation between frames; 3D gradient descriptors have proved to be an effective approach for classifying motions in video signals [13].
Pfister et al. showed, in 2011, how a temporal interpolation model, together with Multiple Kernel Learning (MKL) and Random Forest (RF) classifiers, enabled them to accurately recognize these very short expressions, the facial micro-expressions. Inside their framework, they used a temporal interpolation method (TIM) to counter short video lengths, spatiotemporal local texture descriptors to handle dynamic features, and SVM, MKL, and RF to perform classifications. They created an algorithm that demonstrates their framework for recognizing spontaneous micro-expressions with high accuracy. To address the large variations in the spatial appearance of micro-expressions, they cropped and normalized the face geometry according to the eye positions from a Haar eye detector and the feature points from an Active Shape Model (ASM) deformation. ASMs are statistical models of the shape of an object that are iteratively deformed to fit an example of the object [14].
Fasel et al. developed, in 2006, an automatic detector that enables a fully automated Facial Action Coding System (FACS). The face detector employs boosting techniques in a generative framework; it is an extension of the work done by Viola and Jones in 2001. The system works in real time at 30 frames per second on a fast PC [15].

Impaired facial expression of emotions has been described as a characteristic symptom of schizophrenia, but differences in the individual facial muscle changes associated with specific emotions in posed and evoked expressions remain unclear [16]. Christian G. Kohler et al. examined, in 2008, static facial expressions of emotions for evidence of flattened and inappropriate affect in persons with stable schizophrenia [16].
In 2001, Tian et al. divided the face into two areas and used two artificial neural networks to classify AUs in real time. Their recognition of AUs averaged 93.3%, and their system achieved automatic face detection while handling head motion [17]. In 2002, Pardas and Bonafonte used Hidden Markov Models to achieve 98% recognition of joy, surprise, and anger [18]. In 2003, Michel and Kaliouby used Support Vector Machines to build a real-time system that does not require any preprocessing [19]. A year later, Buciu and Pitas published their research on facial expression recognition using nearest neighbor classifiers [20]. Later on, Pantic and Patras achieved a 90% average recognition rate using temporal rules on 27 AUs, invariant to occlusions such as glasses and facial hair [21-22]. In 2006, Zheng et al. selected 34 facial landmark points that were converted into a Labeled Graph (LG) using the Gabor wavelet transform; a semantic expression vector was then built for each training face, and Kernel Canonical Correlation Analysis (KCCA) was used to learn the correlation between the LG vector and the semantic vector [23]. In 2007, Sebe et al. evaluated different machine learning algorithms for recognizing spontaneous expressions, where subjects show their natural facial expressions [24]. In the same year, Kotsia and Pitas attained very high recognition rates with six basic expressions and then worked with occlusions [25-26]. More research has also been conducted on facial micro-expressions, among which are [27-29].
Materials and Methods

The proposed lie detection system using facial micro-expressions is composed of a hardware part and a software part. A high-speed camera is used to capture the face, which is then divided into specific regions. To test this approach, a new dataset of facial micro-expressions was created and manually tagged as ground truth.
Materials

The hardware components used in the system consist of a high-speed camera with its accessories, a laptop to display the results, and an NI Embedded Vision System (EVS) (National Instruments, TX, USA). Figure 1 depicts the hardware setup of the lie detection system developed in this study.

As for the software, the detection algorithm was programmed with NI LabVIEW™ (National Instruments, TX, USA) and the IMAQ vision system that is integrated with LabVIEW™.

Figure 1. Hardware setup of the lie detection system using facial micro-expressions.
The high-speed camera is the most important component in the system. In order to detect a facial micro-expression, which takes 1/25 of a second, a minimum of ten frames per second needs to be captured and analyzed; the camera used in this study captures 25 frames per second. In the setup of Figure 1, the camera lens needs a focal length between 50 mm and 180 mm to obtain the best quality; the lens employed has a focal length of 90 mm, offering the desired sharpness. The camera and lens were mounted on a tripod facing the subject's face at a distance of two meters. To minimize reflections and shadows, a light gray background is used, ensuring the capture of pre-filtered video.
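As a quick sanity check of the timing figures above (a back-of-the-envelope calculation, not taken from the paper):

    # Back-of-the-envelope check of the timing figures above (illustrative).
    micro_expression_s = 1 / 25   # duration of a micro-expression, from the text
    camera_fps = 25               # capture rate of the camera used in the study

    frames_spanned = micro_expression_s * camera_fps
    print(frames_spanned)  # 1.0: the event spans roughly one frame period,
                           # so every captured frame must be analyzed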
High-speed computing is needed by the lie detection system because the camera captures video at a high frame rate in order to detect micro-expressions. The EVS is a high-performance, multi-core processor running a real-time LabVIEW™ operating system, dedicated to processing the frames captured by the camera at extremely high speeds. It is a dedicated computer system that can be programmed with LabVIEW™ and can run independently in industrial environments. The laptop connected to the EVS in the developed lie detection system is used only as an output screen to show the results of the processing performed by the EVS.
Methods

The hardware setup is the first stage in the method used to detect facial micro-expressions and thereby infer whether the interviewed individual is lying or telling the truth. Figure 2 summarizes the steps of the lie detection system using facial micro-expressions.

In preparing the subject for the interview, his or her face should always be facing the camera in order to detect all possible muscle changes; rotation of the individual's head may lead to a misprediction. These restrictions should therefore be enforced prior to shooting.

After capturing the interview, the EVS, which contains the LabVIEW™ program, starts processing the captured video. First, the video is converted into a sequence of frames for analysis. Second, geometric-based dynamic templates on specific parts of the face (such as the eyes, the mouth, and the cheeks) are used to mark key features of the expression.

Figure 2. The steps used in the method of the lie detection system.
The program starts by reading one frame at a time and simultaneously processing the extracted frame in two parallel loops. The first loop plays back the video on the computer screen, while the second loop feeds the frame into a vision assistant block that saves it as an image to a pre-specified path and then processes the saved image according to predefined templates. The templates are predefined using the NI IMAQ Vision Assistant and represent the following areas of the face: the left and right edges of both eyebrows, the left and right edges of the eyes, the left and right edges of the mouth, and the cheeks. When the templates are detected, the program measures nine different distances between the center points of the templates, such as the horizontal length of the mouth. These distances are then individually saved into separate arrays. Figure 3 shows the templates detected on the face of a subject, and Figure 4 shows the corresponding distances of lines detected on the face; a minimal sketch of this measurement loop follows the figure captions below. The complete arrays, covering all sets of points, are compared according to preprogrammed rules derived from the emotions' muscle-description equivalents in Table I.
Figure 3. Template detection on the face of a subject.
  362. Figure 4. Distances shown on the face of a subject.
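The following is a minimal sketch of this measurement loop, written in Python with OpenCV rather than in the authors' LabVIEW/IMAQ environment. The template image files, the video path, and the region names are assumptions made for illustration, and only one of the nine distances is shown.

    # Minimal sketch (Python + OpenCV) approximating the measurement loop
    # described above; not the authors' LabVIEW implementation.
    import cv2
    import numpy as np

    TEMPLATE_NAMES = ["brow_l", "brow_r", "eye_l", "eye_r",
                      "mouth_l", "mouth_r", "cheek_l", "cheek_r"]
    templates = {n: cv2.imread(f"{n}.png", cv2.IMREAD_GRAYSCALE)
                 for n in TEMPLATE_NAMES}          # hypothetical template files

    def template_center(frame_gray, tmpl):
        """Locate a template and return the center point of the best match."""
        result = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(result)  # best-match corner
        h, w = tmpl.shape
        return (top_left[0] + w // 2, top_left[1] + h // 2)

    distances = {"mouth_width": []}          # one array per measured distance
    cap = cv2.VideoCapture("interview.avi")  # hypothetical recorded interview
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color conversion stage
        gray = cv2.GaussianBlur(gray, (5, 5), 0)        # filtering stage
        centers = {n: template_center(gray, t) for n, t in templates.items()}
        # e.g. the horizontal length of the mouth, stored per frame
        distances["mouth_width"].append(
            np.hypot(*(np.subtract(centers["mouth_r"], centers["mouth_l"]))))
    cap.release()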
Every basic expression is interpreted from the AUs in Table I as a combination of changes, between different frames, in the distances stored in the arrays. The system takes the first element of each array and checks the variation of the distance between the points; each basic expression has a specific combination of point measurements and distances. For example, during a smile, the mouth's horizontal distance increases and the eyes close slightly, so the system indicates that the expression is joy. The system runs the logic shown in Figure 5 to determine the code of the detected expression and store it in another array. The expression codes are shown in Table II, and a sketch of one such rule follows the table.

Figure 5. The logic used to determine and store the expression codes.
TABLE II. EXPRESSION CODES

Code | Expression
0    | Not Used
1    | Happiness/Joy
2    | Surprise
3    | Anger
4    | Disgust/Contempt
5    | Sadness
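As an illustration of how such a rule and the codes in Table II might look in software, here is a sketch of the smile example above. The threshold values and distance names are assumptions, not values from the paper; the actual logic lives in the LabVIEW block diagram of Figure 5.

    # Illustrative sketch of one preprogrammed rule (the smile example above).
    # Thresholds and distance names are hypothetical.
    CODES = {"not_used": 0, "happiness_joy": 1, "surprise": 2,
             "anger": 3, "disgust_contempt": 4, "sadness": 5}

    def classify(prev, curr, widen_px=3.0, close_px=1.5):
        """Compare distances between two frames and return an expression code."""
        mouth_widens = curr["mouth_width"] - prev["mouth_width"] > widen_px
        eyes_narrow = prev["eye_opening"] - curr["eye_opening"] > close_px
        if mouth_widens and eyes_narrow:  # smile: mouth widens, eyes close a bit
            return CODES["happiness_joy"]
        # ... analogous rules for the other expressions in Table II ...
        return CODES["not_used"]

    print(classify({"mouth_width": 60, "eye_opening": 12},
                   {"mouth_width": 66, "eye_opening": 10}))  # 1 (Happiness/Joy)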
The program then iterates over the expression codes in the array and notes the time taken by each expression. If the same expression is repeated fewer than five times consecutively, it is marked as a micro-expression, and an indicator LED is turned on to signal its presence; a sketch of this run-length check is given below. Accordingly, the program can distinguish whether the subject is lying or telling the truth.
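A minimal sketch of this run-length check follows. The threshold of five consecutive frames comes from the text; the function name, the neutral code 0, and the example data are illustrative assumptions.

    # Sketch of the run-length check described above: an expression held for
    # fewer than five consecutive frames is flagged as a micro-expression.
    def find_micro_expressions(codes, min_run=5):
        flags, run = [], 1
        for i in range(1, len(codes) + 1):
            if i < len(codes) and codes[i] == codes[i - 1]:
                run += 1                               # extend the current run
            else:
                if 0 < run < min_run and codes[i - 1] != 0:
                    flags.append((i - run, codes[i - 1]))  # (start frame, code)
                run = 1
        return flags

    # e.g. a two-frame flash of "surprise" (code 2) inside neutral frames:
    print(find_micro_expressions([0, 0, 2, 2, 0, 0]))  # [(2, 2)]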
Testing and Results

The system was tested on four subjects. The team developed a questionnaire containing seven control questions and eight relevant questions based on [30]. The questions asked during the interview were:
1. Is your name Sandy Hill? (Control question)
2. Are you 43 years old? (Control question)
3. Is your cat's name Josie? (Control question)
4. Were you born in 1956? (Control question)
5. Do you rent a house? (Control question)
6. Do you live on Vine Street in Iowa? (Control question)
7. Is today (day of week)? (Control question)
8. Have you stolen more than four hundred dollars in cash or property from an employer? (Relevant question)
9. Based on your personal bias, have you ever committed a negative act against anyone? (Relevant question)
10. During a domestic dispute, have you physically harmed a significant other? (Relevant question)
11. Prior to your application, did you ever lie to someone in a position of authority? (Relevant question)
12. Before this year, did you ever put false information on an official document? (Relevant question)
13. Prior to this year, did you ever betray someone who trusted your word? (Relevant question)
14. Before this year, did you ever take credit for something you didn't do? (Relevant question)
15. Prior to this year, did you ever deceive a family member? (Relevant question)
Each subject was prepared for the test and then asked the 15 questions while the system was running and showing the results on the GUI front panel. When the system detects a certain expression, it stores and analyzes it; when it detects a micro-expression on the face of the subject, a green light flashes, indicating the presence of a micro-expression resulting from the subject's attempt to hide the real answer and lie.

Figures 6 and 7 are screenshots taken from the tests, showing a lying subject and a truthful one, respectively. The results show that expressions and micro-expressions were correctly detected on the faces of the four different subjects. Using the derived template models for classification, the expression recognition accuracy is 85% on a database of five expressions. More work is being done on expanding the database to cover other expressions as well as on increasing the accuracy of the system.

Figure 6. A subject lying (green LED is ON).
Figure 7. A subject telling the truth (green LED is OFF).
Conclusions and Future Work

The team has derived a mathematical algorithm and implemented a computer vision system capable of detailed analysis of facial expressions within an active and dynamic framework. The purpose of this system is to analyze real facial motion in order to derive the spatial and temporal patterns exhibited by the human face while attempting to lie. The system analyzes facial expressions by observing significant articulations of the subject's face over a sequence of frames extracted from a video. By observing these parameters over a wide range of frames, a parametric representation of the face was extracted, which could be useful for static analysis of facial expressions in other fields of study. This motion is then coupled to a physical model by which geometric-based dynamic templates are applied to the facial structure.

Human emotion, on the basis of facial micro-expressions, is an important topic of research in psychology. It is believed that the developed system can be useful in many areas where psychological interpretation is needed, such as police interrogations, airport and homeland security, employment, and clinical tests.