Call for papers
The term "Big Data" refers to the continuing, massive expansion in data volume and diversity, as well as in the speed and complexity of data processing. The use of big data underpins critical activities in all sectors of our society. Achieving the full transformative potential of big data in this increasingly digital world requires both new data analysis algorithms and a new class of systems to handle the dramatic data growth, the demand to integrate structured and unstructured data analytics, and the increasing computing needs of massive-scale analytics.
We are pleased to invite papers for presentation at the upcoming seventh Workshop on Architectures and Systems for Big Data (ASBD 2017), held in conjunction with ISCA-44. The workshop will provide a forum to exchange research ideas related to all critical aspects of emerging analytics systems for big data, including architectural support, benchmarks and metrics, data management software, operating systems, and emerging challenges and opportunities. We hope to attract a group of interdisciplinary researchers from academia, industry, and government research labs. To encourage discussion among participants, the workshop will include significant time for interaction between the presenters and the audience. We also plan to have a keynote speaker and/or panel session.
Topics of interest include but are not limited to:
- Processor, memory and system architectures for data analytics
- Benchmarks, metrics and workload characterization for big data
- Accelerators for analytics and data-intensive computing
- Heterogeneous computing and heterogeneous system architecture
- Implications of data analytics for mobile and embedded systems
- Energy efficiency and energy-efficient designs for analytics
- Availability, fault tolerance and data recovery in big data environments
- Scalable system and network designs for high concurrency/bandwidth streaming
- Data management and analytics for vast amounts of unstructured data
- Evaluation tools, methodologies and workload synthesis
- OS, distributed systems and system management support for large-scale analytics
- Debugging and performance analysis tools for analytics and big data
- Programming systems and language support for deep analytics
- MapReduce and other processing paradigms for analytics
We encourage researchers from all institutions to submit their work for review. Preliminary results of interesting ideas and work-in-progress are welcome. Submissions that are likely to generate vigorous discussion will be favored!
Submission format: All papers should be submitted in PDF format, using a 10-point or larger font for text (8-point or larger for figures and tables); total length is not to exceed 6 pages.
Paper submission: https://www.easychair.org/conferences/?conf=asbd2017
Saturday afternoon, 6/24
Keynote: Lingjia Tang
Keynote: Daniel Sanchez
Making Parallelism Pervasive with the Swarm Architecture
Daniel Sanchez, MIT CSAIL
Abstract: With Moore's Law coming to an end, architects must find ways to sustain performance growth without technology scaling. The most promising path is to build highly parallel systems that harness thousands of simple and efficient cores. But this approach will require new techniques to make massive parallelism practical, as current multicores fall short of this goal: they squander most of the parallelism available in applications and are hard to program.
I will present Swarm, a new architecture that successfully parallelizes algorithms that are often considered sequential and is much easier to program than conventional multicores. Swarm programs consist of tiny tasks, as small as tens of instructions each. Parallelism is implicit: all tasks follow a programmer-specified total or partial order, eliminating the correctness pitfalls of explicit synchronization (e.g., deadlock, data races, etc.). To scale, Swarm executes tasks speculatively and out of order, and efficiently speculates thousands of tasks ahead of the earliest active task to uncover enough parallelism.
Swarm builds on decades of work on speculative architectures and contributes new techniques to scale to large core counts, including a new execution model, speculation-aware hardware task management, selective aborts, and scalable ordered task commits. Swarm also incorporates new techniques to exploit locality and to harness nested parallelism, making parallel algorithms easy to compose and uncovering abundant parallelism in large applications.
Swarm accelerates challenging irregular applications from a broad set of domains, including graph analytics, machine learning, simulation, and databases. At 256 cores, Swarm is 53-561x faster than a single-core system, and outperforms state-of-the-art software-only parallel algorithms by one to two orders of magnitude. Besides achieving near-linear scalability, the resulting Swarm programs are almost as simple as their sequential counterparts, as they do not use explicit synchronization.
Biography: Daniel Sanchez is an Assistant Professor of Electrical Engineering and Computer Science at MIT. His research interests include parallel computer systems, scalable and efficient memory hierarchies, architectural support for parallelization, and architectures with quality-of-service guarantees. He earned a Ph.D. in Electrical Engineering from Stanford University in 2012 and received the NSF CAREER award in 2015.
FPGA acceleration of Spark on a Pynq-based cluster
How Much Computation Power do you need for Near-Data Processing in Cloud?
Christina Delimitrou (Cornell University)
Yuhang Liu (ICT/CAS, China)
- Tom Wenisch (University of Michigan)
- José Martinez (Cornell University)
- Daniel Sanchez (MIT)
- Eric Chung (Microsoft)
- David Meisner (Facebook)
- Heiner Litz (Google)
- Jana Giceva (ETH)
Submissions deadline: May 26, 2017 (extended from May 16, 2017)
Author notification: June 06, 2017