Wed 15 Jun 2016 16:30 - 17:00 at Grand Ballroom Santa Ynez - New Languages Chair(s): Michael Carbin

The Latte system comprises a domain-specific language (DSL) for specifying Deep Neural Networks (DNNs) and a high-performance implementation of that language. Users of Latte specify DNNs by constructing ensembles of neurons and applying connections between them. The Latte compiler synthesizes code from the DNN specification, performs a series of domain-specific optimizations, and generates efficient code targeting high-performance heterogeneous clusters of Intel multicore and manycore architectures. Unlike prominent library-based frameworks such as Caffe, Latte is not limited to a pre-specified list of network layers. In addition, it can perform cross-layer optimizations such as fusion that provide 3-6x speedups over Caffe for three recent ImageNet challenge winning models. Furthermore, the Latte runtime manages the communication of data across nodes in a cluster and between the host and accelerators within each node. Overall, the Latte system greatly improves the programmability, performance, and portability of DNNs.
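To make the "ensembles of neurons with explicit connections" idea concrete, here is a minimal hypothetical sketch in Python. This is not actual Latte syntax (Latte is a Julia-embedded DSL); the `Ensemble` class, `add_connections` function, and `fully_connected` mapping are illustrative names invented for this example.

```python
# Hypothetical sketch, not the real Latte API: a DNN described as
# ensembles of neurons plus connection-mapping functions.

class Ensemble:
    """A named grid of neurons; stand-in for a Latte ensemble."""
    def __init__(self, name, size):
        self.name = name
        self.size = size
        self.connections = []  # list of (source ensemble, mapping function)

def add_connections(source, sink, mapping):
    """Connect each sink neuron to the source neurons chosen by `mapping`,
    which maps a sink-neuron index to a list of source-neuron indices."""
    sink.connections.append((source, mapping))

def fully_connected(source):
    """Mapping in which every sink neuron reads every source neuron."""
    return lambda sink_index: list(range(source.size))

# A tiny fully connected stack: data -> hidden -> output.
data   = Ensemble("data", size=8)
hidden = Ensemble("hidden", size=4)
output = Ensemble("output", size=2)

add_connections(data, hidden, fully_connected(data))
add_connections(hidden, output, fully_connected(hidden))

# A compiler in the Latte style would analyze these connection patterns,
# recognize the dense structure, and synthesize optimized (possibly
# fused) loops rather than dispatching to fixed library layers.
src, mapping = hidden.connections[0]
print(src.name, len(mapping(0)))  # each hidden neuron reads 8 data neurons
```

The point of the sketch is the separation of concerns the abstract describes: the user states *which* neurons connect to which, and the compiler, not a hand-written layer library, decides *how* that structure is executed.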

Wed 15 Jun
Times are displayed in time zone: (GMT-07:00) Tijuana, Baja California

15:30 - 17:00: Research Papers - New Languages at Grand Ballroom Santa Ynez
Chair(s): Michael Carbin (MIT)
15:30 - 16:00
Sara Achour (Massachusetts Institute of Technology, USA), Rahul Sarpeshkar (MIT), Martin Rinard (Massachusetts Institute of Technology, USA)
16:00 - 16:30
Magnus Madsen (University of Waterloo), Ming-Ho Yee (University of Waterloo), Ondřej Lhoták (University of Waterloo)
16:30 - 17:00
Leonard Truong (UC Berkeley / Intel Labs), Raj Barik (Intel Labs), Ehsan Totoni (Intel Labs), Hai Liu (Intel Labs), Chick Markley (UC Berkeley), Armando Fox (UC Berkeley), Tatiana Shpeisman (Intel Labs)