Tutorial, ISCA 2009

Saturday Afternoon Session

 

Scalability Challenges for Future Memory Systems

 

Organizers

 

 

Hillery Hunter, IBM Research, Yorktown Heights, NY

Viji Srinivasan, IBM Research, Yorktown Heights, NY

Kenneth Wright, IBM Server & Technology Group, Austin, TX

 

Tutorial Features

 

If you’ve ever asked yourself any of the following questions… come check out this tutorial & find out!!

 

·   Isn’t main memory a commodity? Why is anyone still working on it?

·   What is the memory wall: a latency wall or a bandwidth wall?

·   I’ve heard memory consumes as much power as processors… is that true?  How are we going to pack more bits into future systems?

·   What’s so hard about memory power management?

·   What’s all the fuss about “3D memory?”

·   What’s so neat about flash, phase-change memory and other DRAM alternatives?  What problems might they solve?

 

The tutorial will be interactive and will include a hands-on exercise in balanced processor/memory design with emerging technologies under evolving system-level constraints.

 

Abstract

 

When the “memory wall” was first defined, it was viewed as a latency challenge: increasing levels of cache had to be inserted between a processor and its main memory to ensure a steady supply of data to the compute engine. More recently, memory bandwidth challenges have also come to the fore, and there is significant concern in the computer architecture community about supplying sufficient data bandwidth to many-core systems.
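(As a quick refresher, not part of the tutorial material itself: the latency view of the memory wall is often summarized by the average-access-time relation below, where p is the cache hit rate and t_cache and t_mem stand for illustrative cache and main-memory latencies.)

t_{avg} = p \cdot t_{cache} + (1 - p) \cdot t_{mem}

As t_mem grows relative to t_cache, even modest miss rates come to dominate the average access time, which is what originally motivated deeper cache hierarchies.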

 

In this tutorial we will take a new approach and discuss the significant scalability challenges confronting all levels of the memory subsystem, particularly the outer cache levels and main memory. In the most recent processor generations, growing core counts, increasing application memory footprints, and chip- and system-level power walls have each made it very difficult to build scalable shared, coherent cache structures. In recent years, architects have debated whether the “memory wall” should be scaled by improving (a) latency or (b) bandwidth, but the game has changed: the “memory wall” is now a scalability wall, most frequently manifesting itself as a chip-level and system-level memory power wall.

 

This tutorial will cover material related to commercial systems, in both the high-end server and enterprise (mainframe) domains, and will shed light on the particular scalability challenges presented by large-scale applications. Attendees will learn about the unique power, capacity, and technology challenges of highly-scaled systems, from the top to the bottom of the memory hierarchy. We will begin with the challenges of caches in highly-scaled systems, then move on to a nuts-and-bolts view of emerging challenges in the main memory subsystem: power delivery, power consumption, and cooling. Having motivated the challenges of today’s systems, we will also cover the possibilities and limitations of emerging non-traditional memory technologies (such as 3D DRAMs, flash, and phase-change memory).

 

Topics to be covered

 

·   The cache’s view of the memory subsystem, in a multi-core, multi-processor context

·   DRAM architecture and design fundamentals

·   Memory DIMM design fundamentals

·   Fundamentals of memory power consumption

·   Memory power management

·   System-level factors in memory design (space, cooling, etc.)

·   System-level energy management

·   Future DRAM technologies (e.g., 3D DRAM, DDR4)

·   Future non-DRAM memory technologies (flash, PCM, etc.)

 

Expected duration

 

4 Hours

 

Organizer Biographies

 

Hillery Hunter is a Research Staff Member in the Exploratory Systems Architecture Department of IBM's T.J. Watson Research Center in Yorktown Heights, NY. She is interested in cross-disciplinary research, spanning circuits, microarchitecture, and compilers, to achieve new solutions to traditional problems. She has published in the area of embedded DRAM and is currently engaged with IBM server development as the DDR3-generation end-to-end memory power lead. She received her Ph.D. in Electrical Engineering from the University of Illinois at Urbana-Champaign.

 

Viji Srinivasan is a Research Staff Member at the IBM T.J. Watson Research Center in Yorktown Heights, NY. She joined IBM in 2001 after completing her Ph.D. at the University of Michigan. Her research interests include computer architecture, specifically processor microarchitecture and multi-core/multiprocessor memory systems. She is a co-author of an ISCA 2009 paper on phase-change memory.

 

Kenneth Wright is a Senior Engineer in the IBM Server & Technology Group, Austin, TX. He is presently the end-to-end memory subsystem lead for ipServer Development, and was previously the development bring-up lead for POWER4+ and POWER5+ processor systems. He received his MS in Electrical Engineering from the University of Virginia, where he worked on the Stream Memory Controller (with the group that published the original “memory wall” paper), and his BS degrees in Computer Engineering and Electrical Engineering from North Carolina State University.