Wed 15 Jun 2016 16:30 - 17:00 at Grand Ballroom Santa Ynez - New Languages Chair(s): Michael Carbin

The Latte system comprises a domain-specific language (DSL) for specifying deep neural networks (DNNs) and a high-performance implementation of that language. Users of Latte specify DNNs by constructing ensembles of neurons and applying connections between them. The Latte compiler synthesizes code from the DNN specification, performs a series of domain-specific optimizations, and generates efficient code targeting high-performance heterogeneous clusters of Intel multicore and manycore architectures. Unlike prominent library-based frameworks such as Caffe, Latte is not limited to a pre-specified list of network layers. In addition, it can perform cross-layer optimizations such as fusion, which provide a 3-6x speedup over Caffe on three recent ImageNet-challenge-winning models. Furthermore, the Latte runtime manages the communication of data across nodes in a cluster and between the host and accelerators within each node. Overall, the Latte system greatly improves the programmability, performance, and portability of DNNs.
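The ensemble-and-connection style of specification, and the cross-layer fusion it enables, can be sketched in plain Python. This is a toy illustration with hypothetical names, not actual Latte syntax: the point is that when layers are expressed as connected ensembles rather than opaque library calls, a compiler can fuse adjacent layers so the intermediate result is never materialized.

```python
# Toy sketch of cross-layer fusion in the ensemble style the abstract
# describes. All names are hypothetical; this is not the Latte DSL.

def affine(x, w, b):
    # A fully connected "ensemble": each output neuron computes a
    # dot product of its inputs plus a bias.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(y):
    # An activation "ensemble" connected one-to-one to the layer above.
    return [max(0.0, v) for v in y]

def unfused(x, w, b):
    # Library style (e.g. Caffe): each layer materializes its output
    # before the next layer runs.
    return relu(affine(x, w, b))

def fused(x, w, b):
    # Compiler style: the two layer loops are fused into one, so the
    # pre-activation intermediate vector is never written out.
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

x = [1.0, -2.0, 3.0]
w = [[0.5, 0.5, 0.5], [-1.0, 0.0, 1.0]]
b = [0.0, 1.0]
assert unfused(x, w, b) == fused(x, w, b)
```

The fused version does the same arithmetic but with one pass over the output neurons, which is the kind of transformation a layer-library API cannot perform across its call boundaries.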

Wed 15 Jun

Displayed time zone: Tijuana, Baja California

15:30 - 17:00
15:30
30m
Talk
Configuration Synthesis for Programmable Analog Devices with Arco
Research Papers
Sara Achour Massachusetts Institute of Technology, USA, Rahul Sarpeshkar MIT, Martin C. Rinard Massachusetts Institute of Technology, USA
16:00
30m
Talk
From Datalog to Flix: A Declarative Language for Fixed Points on Lattices
Research Papers
Magnus Madsen University of Waterloo, Ming-Ho Yee University of Waterloo, Ondřej Lhoták University of Waterloo
16:30
30m
Talk
Latte: A Language, Compiler, and Runtime for Elegant and Efficient Deep Neural Networks
Research Papers
Leonard Truong UC Berkeley / Intel Labs, Raj Barik Intel Labs, Ehsan Totoni Intel Labs, Hai Liu Intel Labs, Chick Markley UC Berkeley, Armando Fox UC Berkeley, Tatiana Shpeisman Intel Labs