The 2012 Programming Languages Day will be held at the IBM Thomas J.
Watson Research Center on Thursday, June 28, 2012. The day will be held in
cooperation with the New Jersey and New England Programming Languages and
Systems Seminars. The main goal of the event is to increase awareness of
each other's work, and to encourage interaction and collaboration.

The Programming Languages Day features a keynote presentation and 8
regular presentations. Dr. Shriram Krishnamurthi of Brown University
will deliver the keynote presentation this year, "Securing JavaScript
on the Web". Details of the program are below.

You are welcome from 9 AM onwards, and the keynote presentation will start
at 10 AM sharp. We expect the program to run until 4 PM. Programming
Languages Day will be held in the Auditorium (room GN-F15) in the
Hawthorne-1 building of the Thomas J. Watson Research Center at 19
Skyline Drive, Hawthorne, New York 10538.

If you plan to attend the Programming Languages Day, please register by
sending an e-mail with your name, affiliation, and contact information to
dolby@us.ibm.com. It is important that everyone register in advance,
since security regulations at Watson require us to provide a list of
attendees ahead of time; registering also enables us to plan for lunch
and refreshments.

If you plan to attend, please let me know ideally by JUNE 8, but no
later than JUNE 14 (two weeks before the event).

Please note that we are currently having trouble with the Web site that
hosts the PL Day 2012 page. I shall send further mail when the
situation with the Web site has been resolved.

-------------------------------------------------------
Program Overview

9:00-10:00 Sign-in, Welcome

10:00-11:00 Keynote
Shriram Krishnamurthi
Securing JavaScript on the Web

11:00-11:15 Break

11:15-12:30 Concurrency
(3 25-minute slots)

Stephen Freund
Cooperative Concurrency for a Multicore World

Y. Annie Liu
From Clarity to Efficiency for Distributed Algorithms

Sebastien Mondet, Ashish Agarwal, Paul Scheid, Aviv Madar, Richard Bonneau, Jane Carlton, Kristin C. Gunsalus
Managing and Analyzing Big-Data in Genomics

12:30-1:40 Lunch

1:40-2:55 Program Analysis and Verification
(3 25-minute slots)

J. Ian Johnson
Designing Precise Higher-Order Pushdown Flow Analyses

Eric Koskinen
Temporal property verification as a program analysis task

Gregory Malecha, Adam Chlipala, Patrick Hulin, and Edward Yang
A Framework for Verifying Low-level Programs

2:55-3:10 Break

3:10-4:00 Runtime Issues
(2 25-minute slots)

Kangkook Jee
A General Approach for Efficiently Accelerating Software-based Dynamic
Data Flow Tracking on Commodity Hardware

Jose Castanos, David Edelsohn, Kazuaki Ishizaki, Toshio Nakatani,
Takeshi Ogasawara, Priya Nagpurkar, Peng Wu
On the Benefits and Pitfalls of Extending a Statically Typed Language JIT Compiler for Dynamic Scripting Languages

-------------------------------------------------------
Program Details

10:00-11:00 Keynote
Shriram Krishnamurthi
Securing JavaScript on the Web

JavaScript is the principal language for Web clients. It powers both
the content of pages and, in many cases, the browser itself.
JavaScript is also a large language with a complex semantics. In this
talk I will discuss our efforts to faithfully capture the essence of
JavaScript through an operational semantics and type system, and the
application of these to verifying multiple real-world systems.

Joint work with Arjun Guha, Joe Gibbs Politz, Ben Lerner, Claudiu
Saftoiu, Matt Carroll, and Hannah Quay-de la Vallee.

11:15-12:30 Concurrency
(3 25-minute slots)

Stephen Freund
Cooperative Concurrency for a Multicore World

Multithreaded programs are notoriously prone to unintended
interference between concurrent threads. To address this problem, we
argue that yield annotations in the source code should document all
thread interference, and we present a type system for verifying the
absence of undocumented interference. Well-typed programs behave as
if context switches occur only at yield annotations. Thus, they can
be understood using intuitive sequential reasoning, except where
yield annotations remind the programmer to account for thread
interference.

Experimental results show that our type system for yield annotations
is more precise at capturing thread interference than prior
techniques based on method-level atomicity specifications and reduces
the number of interference points a programmer must reason about by
an order of magnitude. The type system is also more precise than
prior methods targeting race freedom. In addition, yield annotations
highlight all known concurrency defects in the Java programs studied.

This is joint work with Cormac Flanagan, Tim Disney, Caitlin
Sadowski, and Jaeheon Yi at UC Santa Cruz.
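
To give a feel for the cooperative model, here is a small OCaml sketch of a
toy cooperative scheduler, written for this announcement (it is not the
authors' system or type system, and all names are invented): a thread runs
without interference until it explicitly yields, so the code between yields
can be read sequentially.

  (* Illustrative sketch only, not the authors' system: a toy cooperative
     scheduler in which a thread runs without interference until it calls
     [yield], which enqueues the rest of the computation. *)
  let ready : (unit -> unit) Queue.t = Queue.create ()

  let spawn thread = Queue.add thread ready

  (* The rest of the thread is passed explicitly, so the only points where
     other threads can run are the explicit yields. *)
  let yield rest = Queue.add rest ready

  let rec run () =
    match Queue.take_opt ready with
    | None -> ()
    | Some step -> step (); run ()

  let counter = ref 0

  let worker name =
    spawn (fun () ->
      (* No yield between the read and the write, so this read-modify-write
         cannot be interleaved with the other worker. *)
      let v = !counter in
      counter := v + 1;
      Printf.printf "%s incremented the counter to %d\n" name !counter;
      yield (fun () ->
        (* Interference is possible across the yield: the counter may have
           changed since the increment above. *)
        Printf.printf "%s resumes; the counter is now %d\n" name !counter))

  let () =
    worker "A";
    worker "B";
    run ()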

Y. Annie Liu
From Clarity to Efficiency for Distributed Algorithms

This talk presents a very high-level language for clear description of
distributed algorithms, compilation to executable code, and
optimizations necessary for generating efficient implementations. The
language supports high-level control flows where complex
synchronization conditions can be expressed using high-level queries,
especially logic quantifications, over message history sequences.
Unfortunately, these programs would be extremely inefficient,
including consuming unbounded memory, if executed straightforwardly.

We present new optimizations that automatically transform complex
synchronization conditions into incremental updates of necessary
auxiliary values as messages are sent and received. The core of the
optimizations is the first general method for transforming logic
quantifications into efficient implementations. We have developed an
operational semantics of the language, implemented a prototype of the
compiler and the optimizations, and successfully used the language and
implementation on a variety of important distributed algorithms,
including multi-Paxos for distributed consensus.

Our high-level specification and optimization method also helped us
discover improvements to some of the algorithms, both for correctness
and for optimization. Our language has also been used by
undergraduate and graduate students to easily implement a variety of
distributed algorithms used in distributed file sharing, routing,
etc., including Chord, Kademlia, Pastry, Tapestry, AODV, and parts
of HDFS and Upright.

(Joint work with Scott Stoller, Bo Lin, and Michael Gorbovitski)
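
As a rough illustration of the kind of transformation involved, the OCaml
sketch below (written for this announcement, not in the authors' language
and not their compiler's output) contrasts a synchronization condition
phrased as a query over the message history with the same condition
maintained incrementally as messages arrive.

  (* Illustrative sketch, not the authors' language: a synchronization
     condition as a query over the message history versus the same
     condition maintained incrementally. *)
  type msg = { sender : int; round : int }

  (* Declarative form: "responses for [round] have arrived from more than
     half of the [n] processes", re-scanning the whole history each time. *)
  let ready_naive history round n =
    let senders =
      List.sort_uniq compare
        (List.filter_map
           (fun m -> if m.round = round then Some m.sender else None)
           history)
    in
    2 * List.length senders > n

  (* Incremental form: as each message is received, update the auxiliary
     values (the set of senders seen and its size); the check is then O(1). *)
  module IntSet = Set.Make (Int)

  type state = { mutable seen : IntSet.t; mutable count : int }

  let receive st round m =
    if m.round = round && not (IntSet.mem m.sender st.seen) then begin
      st.seen <- IntSet.add m.sender st.seen;
      st.count <- st.count + 1
    end

  let ready_incremental st n = 2 * st.count > n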

Sebastien Mondet, Ashish Agarwal, Paul Scheid, Aviv Madar, Richard Bonneau, Jane Carlton, Kristin C. Gunsalus
Managing and Analyzing Big-Data in Genomics

Biology is an increasingly computational discipline. Rapid advances in
experimental techniques, especially DNA sequencing, are generating data at
exponentially increasing rates. Aside from the algorithmic challenges this
poses, researchers must manage large volumes and innumerable varieties of
data, run computational jobs on an HPC cluster, and track the
inputs/outputs of the numerous computational tools they employ. Here we
describe a software stack fully implemented in OCaml that operates the
Genomics Core Facility at NYU's Center for Genomics and Systems Biology.

We define a domain-specific language (DSL) that allows us to easily
describe the data we need to track. More importantly, the DSL approach
provides us with code generation capabilities. From a single description,
we generate PostgreSQL schema definitions, OCaml bindings to the database,
and web pages and forms for end users to interact with the database.
Strong type safety is provided at each stage. Database bindings check
properties not expressible in SQL, and web pages, forms, and links are
validated at compile time by the Ocsigen framework. Since the entire stack
depends on this single data description, rapid updates are easy; the
compiler informs us of all necessary changes.

The application launches compute-intensive jobs on a high-performance
computing (HPC) cluster, requiring consideration of concurrency and
fault tolerance. We have implemented what we call a "flow" monad that
combines error and thread monads. Errors are modeled with polymorphic
variants, which get arranged automatically into a hierarchical structure
from lower-level system calls to high-level functions. The net result is
extremely precise information in the case of any failures and reasonably
straightforward concurrency management.
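
To make the "flow" monad concrete, here is a much-simplified OCaml sketch
written for this announcement (it is not the facility's actual code): an
error monad whose error type is an open polymorphic variant, so the error
cases of lower-level steps combine automatically with those of higher-level
ones. The real monad additionally layers in a thread monad for concurrency,
which is omitted here.

  (* Much-simplified sketch, not the actual NYU code: an error monad over
     polymorphic variants.  The real "flow" monad also combines this with
     a thread monad. *)
  type ('a, 'err) flow = ('a, 'err) result

  let return x : ('a, 'err) flow = Ok x

  let ( >>= ) (m : ('a, 'err) flow) (f : 'a -> ('b, 'err) flow) : ('b, 'err) flow =
    match m with Ok x -> f x | Error e -> Error e

  (* A low-level step with its own error case. *)
  let getenv name : (string, [> `Missing_var of string ]) flow =
    match Sys.getenv_opt name with
    | Some v -> Ok v
    | None -> Error (`Missing_var name)

  (* A higher-level step adds another case; the variants combine into
     [ `Bad_port of string | `Missing_var of string ] with no declared
     exception hierarchy. *)
  let parse_port text : (int, [> `Bad_port of string ]) flow =
    match int_of_string_opt (String.trim text) with
    | Some n when 0 < n && n < 65536 -> Ok n
    | _ -> Error (`Bad_port text)

  let server_port () =
    getenv "PORT" >>= fun text ->
    parse_port text >>= fun port ->
    return port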

1:40-2:55 Program Analysis and Verification
(3 25-minute slots)

J. Ian Johnson
Designing Precise Higher-Order Pushdown Flow Analyses

Formalisms for context-free approaches to higher-order control-flow
analysis are complex and require significant effort to prove correct.
However, these approaches are enticing because they provide improved
precision over finite-state approaches. We present a new method for
deriving context-free analyses that consists of making small changes
to the original concrete semantics and results in "obviously correct"
formalisms. We validate this method by using it to derive existing
context-free analyses from abstract machines. We further exercise the
technique by applying it to abstract machines that more closely
represent real language implementations and consequently derive
analyses more precise than existing ones. Specifically, we use an
escape analysis to derive better stack allocation, and use garbage
collection to derive better heap allocation. We also present a novel
semantics for call/cc that, when turned into an analysis, handles
non-escaping continuations more precisely than prior treatments of
first-class control.
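
For readers unfamiliar with the starting point, the abstract machines in
question are small-step machines along the lines of the textbook CEK
machine sketched below in OCaml. This is only the concrete semantics,
included for orientation; the talk's derived pushdown analyses are not
shown, and the code is not from the talk.

  (* A standard CEK machine for the untyped lambda calculus, shown as an
     example of the kind of concrete semantics from which analyses are
     derived.  Assumes closed terms (unbound variables raise Not_found). *)
  type expr =
    | Var of string
    | Lam of string * expr
    | App of expr * expr

  type value = Clo of string * expr * env          (* closures *)
  and env = (string * value) list

  type kont =
    | Done
    | Arg of expr * env * kont                     (* evaluate the argument next *)
    | Fun of value * kont                          (* then apply this closure *)

  let rec eval (c, env, k) =
    match c with
    | Var x -> continue k (List.assoc x env)
    | Lam (x, body) -> continue k (Clo (x, body, env))
    | App (f, a) -> eval (f, env, Arg (a, env, k))

  and continue k v =
    match k with
    | Done -> v
    | Arg (a, env, k') -> eval (a, env, Fun (v, k'))
    | Fun (Clo (x, body, cenv), k') -> eval (body, (x, v) :: cenv, k')

  (* (fun x -> x) (fun y -> y) evaluates to the identity closure. *)
  let _ = eval (App (Lam ("x", Var "x"), Lam ("y", Var "y")), [], Done)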

Eric Koskinen
Temporal property verification as a program analysis task

We describe a reduction from temporal property verification to a program
analysis problem. We produce an encoding which, with the use of recursion
and nondeterminism, enables off-the-shelf program analysis tools to
naturally perform the reasoning necessary for proving temporal properties
(e.g. backtracking, eventuality checking, tree counterexamples for
branching-time properties, abstraction refinement, etc.). This reduction
allows us to prove branching-time properties of programs, as well as
trace-based properties such as those expressed in Linear Temporal Logic
(LTL), when the reduction is combined with our recently described
iterative symbolic determinization procedure. We demonstrate, using
examples taken from the PostgreSQL database server, the Apache web
server, and the Windows OS kernel, that our method can yield enormous
performance improvements in comparison to known tools, allowing us to
automatically prove properties of programs that we could not prove
before.
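
As a toy illustration of the flavor of such an encoding, written for this
announcement and far simpler than the reduction the talk describes, the
OCaml sketch below phrases the branching-time eventuality "q eventually
holds on every path" over a small finite transition system as a recursive
program: the recursion carries the eventuality obligation, and quantifying
over all successors plays the role of branching.

  (* Toy illustration only, not the talk's encoding: checking AF q on a
     small finite transition system by a recursive search.  Revisiting a
     state on a q-free path means a q-free cycle exists, so AF q fails. *)
  module IntSet = Set.Make (Int)

  (* Successor states and the predicate q, both invented for the example. *)
  let next = function 0 -> [1; 2] | 1 -> [3] | 2 -> [2; 3] | _ -> []
  let q s = s = 3

  let rec af visited s =
    q s
    || (not (IntSet.mem s visited)
        && (match next s with
            | [] -> false                  (* stuck before reaching q *)
            | succs ->
                let visited = IntSet.add s visited in
                List.for_all (af visited) succs))

  (* False here: the path 0 -> 2 -> 2 -> ... never reaches q. *)
  let () = Printf.printf "AF q from state 0: %b\n" (af IntSet.empty 0)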

Gregory Malecha, Adam Chlipala, Patrick Hulin, and Edward Yang
A Framework for Verifying Low-level Programs

High-level languages provide abstractions that assist programmers;
however, these abstractions are not always sufficient and, in some cases,
they get in the way of writing efficient or correct code. In this work we
develop Bedrock2, a Coq framework for foundational reasoning about
low-level programs using a program logic based on Ni and Shao's XCAP [2].
Bedrock2 is an extension and rewrite of Chlipala's Bedrock language [1].
Bedrock2 allows programmers to define both control and data abstractions
and integrate them into the system in a first-class way. Control
abstractions, e.g. conditionals and function calls, are defined by
providing a denotation into the core language, and derived inference
rules are verified in a foundational way with respect to the core
language semantics. These inference rules are used by the verification
condition generator to simplify the proof obligations provided to the
programmer. Verification conditions are expressed as pre- and
post-conditions on execution traces, allowing the bulk of the work to be
done by symbolic evaluation. Unlike Bedrock, the Bedrock2 symbolic
evaluator incorporates user-defined abstract predicates to enable
efficient manipulation of arrays, structures, stack frames, and other
data abstractions. Final verification conditions are discharged using a
cancellation-based separation logic prover. Proofs are generated using
computational reflection to enable good performance that, experience
suggests, will scale to larger programs than previous work. Bedrock2
embraces a more realistic machine model that exposes machine word sizes,
byte ordering, and finite memory. We are working to extend the language
to more interesting abstractions, real assembly languages, and
concurrency.

3:10-4:00 Runtime Issues
(2 25-minute slots)

Kangkook Jee
A General Approach for Efficiently Accelerating Software-based Dynamic
Data Flow Tracking on Commodity Hardware

Despite the demonstrated usefulness of dynamic data flow tracking
(DDFT) techniques in a variety of security applications, the poor
performance achieved by available prototypes prevents their widespread
adoption and use in production systems. We present and evaluate a
novel methodology for reducing the performance overhead of DDFT
frameworks by combining static and dynamic analysis. Our intuition is
to separate the program logic from the corresponding tracking logic,
extracting the semantics of the latter and abstracting them using a
Taint Flow Algebra. We then apply optimization techniques to eliminate
redundant tracking logic and minimize interference with the target
program. Our optimizations are directly applicable to binary-only
software and do not require any high-level semantics. Furthermore,
they do not require additional resources to improve performance, nor
do they restrict or remove functionality. Most importantly, our
approach is orthogonal to optimizations devised in the past, and can
deliver additive performance benefits. We extensively evaluate the
correctness and impact of our optimizations by augmenting a freely
available high-performance DDFT framework and applying it to multiple
applications, including command-line utilities, server applications,
language runtimes, and web browsers. Our results show a speedup of
DDFT by as much as 2.23x, with an average of 1.72x across all tested
applications.
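
For readers new to DDFT, the per-operation tracking logic being optimized
looks roughly like the naive, source-level OCaml sketch below, written for
this announcement (the talk's framework operates on binaries and is far
more sophisticated): every value carries a taint tag, every operation
propagates tags alongside the real computation, and that bookkeeping is
the overhead the optimizations target.

  (* Naive illustrative sketch of dynamic data flow tracking, not the
     framework from the talk: each value carries a taint tag, and every
     operation propagates the tag along with the ordinary result. *)
  type tainted = { v : int; taint : bool }

  let untrusted_input v = { v; taint = true }     (* e.g. data read from the network *)
  let constant v = { v; taint = false }

  (* The "tracking logic": the result is tainted if either operand is. *)
  let add a b = { v = a.v + b.v; taint = a.taint || b.taint }
  let mul a b = { v = a.v * b.v; taint = a.taint || b.taint }

  (* A sink that must never receive tainted data. *)
  let check_sink x =
    if x.taint then failwith "tainted value reached a sensitive sink"

  let () =
    let user = untrusted_input 7 in
    let scale = constant 3 in
    let result = mul (add user scale) scale in
    (* Fails, because taint propagated from [user] to [result]. *)
    try check_sink result
    with Failure msg -> print_endline msg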

Jose Castanos, David Edelsohn, Kazuaki Ishizaki, Toshio Nakatani, Takeshi Ogasawara, Priya Nagpurkar, Peng Wu
On the Benefits and Pitfalls of Extending a Statically Typed Language JIT Compiler for Dynamic Scripting Languages

Whenever the need to compile a new dynamically typed language arises, the
option of reusing an existing statically typed language Just-In-Time (JIT)
compiler (a "reusing JIT") always seems appealing. Existing reusing JITs,
however, do not deliver the kind of performance boost that proponents have
hoped for. The performance of JVM languages, for instance, often lags
behind standard interpreter implementations. Even more customized
solutions that extend the internals of a JIT compiler for the target
language cannot compete with those designed specifically for dynamically
typed languages. Our own Fiorano JIT compiler is a living example of this
phenomenon. As a state-of-the-art reusing JIT compiler for Python, the
Fiorano JIT outperforms two other reusing JITs (Unladen Swallow and
Jython), but still has a noticeable performance gap with PyPy, the best
performing Python JIT compiler today.

In this talk, we offer an in-depth look at the reusing-JIT phenomenon.
Based on our Fiorano JIT experience and an evaluation of PyPy, Jython,
the Fiorano JIT, and Unladen Swallow, we discuss techniques that have
proved effective and quantify the weaknesses of the reusing-JIT approach.
More importantly, we identify several common pitfalls of today's reusing
JITs, the most important of which is insufficient focus on
specialization, an abundant optimization opportunity unique to
dynamically typed languages. These findings, though presented in the
context of Python, are applicable to other scripting languages as well.
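
To illustrate what specialization means here, the deliberately simplified
OCaml sketch below models a dynamic-language runtime; it was written for
this announcement and is not the Fiorano JIT, PyPy, or any system
discussed in the talk. A generic operation re-dispatches on operand types
every time it runs, while a version specialized for the types observed at
a call site checks one cheap guard and runs an unboxed fast path, falling
back to the generic code when the guard fails.

  (* Deliberately simplified sketch of runtime type specialization in a
     dynamic-language implementation, not any system discussed in the talk. *)
  type value = Int of int | Float of float | Str of string

  exception Type_error

  (* Generic '+': dispatches on the operand types every time it runs. *)
  let generic_add a b =
    match a, b with
    | Int x, Int y -> Int (x + y)
    | Float x, Float y -> Float (x +. y)
    | Str x, Str y -> Str (x ^ y)
    | _ -> raise Type_error

  (* Specialized '+' for a site where both operands have been observed to
     be integers: one cheap guard, then an unboxed fast path; anything
     else falls back ("deoptimizes") to the generic code. *)
  let int_add_specialized a b =
    match a, b with
    | Int x, Int y -> Int (x + y)
    | _ -> generic_add a b

  let () =
    match int_add_specialized (Int 40) (Int 2) with
    | Int n -> Printf.printf "fast path: %d\n" n
    | _ -> ()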