Tutorial, ISCA 2009
Saturday Afternoon Session
Scalability Challenges for Future Memory Systems
Hillery Hunter, IBM
Viji Srinivasan, IBM
Kenneth Wright, IBM Server & Technology Group
If you’ve ever asked yourself any of the following questions… come check out this tutorial & find out!
· Isn’t main memory a commodity? Why is anyone still working on it?
· What is the memory wall: a latency wall or a bandwidth wall?
· I’ve heard memory consumes as much power as processors… is that true?
· How are we going to pack more bits into future systems?
· What’s so hard about memory power management?
· What’s all the fuss about “3D memory?”
· What’s so neat about flash, phase-change memory and other DRAM alternatives? What problems might they solve?
The tutorial will be interactive, and will include a hands-on exercise in balanced processor/memory design with emerging technologies, under emerging system-level constraints.
When the “memory wall” was first defined, it was viewed as a latency challenge: increasing levels of cache had to be inserted between a processor and its main memory to ensure a steady supply of data to the compute engine. Increasingly, memory bandwidth challenges have also been discussed, and there is significant concern in the computer architecture community about supplying sufficient data bandwidth to many-core systems.
In this tutorial we will take a new approach and discuss the significant scalability challenges confronting all levels of the memory subsystem, particularly the outer cache levels and main memory. In the most recent processor generations, growing core counts, increasing application memory footprints, and chip- and system-level power walls have each made it very difficult to build scalable shared, coherent cache structures. In recent years, architects have debated whether the “memory wall” should be scaled by improving (a) latency or (b) bandwidth. In fact, the game has changed: the “memory wall” is now a scalability wall, most frequently manifesting itself as a chip-level and system-level memory power wall.
This tutorial will cover material related to commercial systems, in both the high-end server and enterprise (mainframe) domains, and will shed light on the particular scalability challenges presented by large-scale applications. Attendees will learn about the unique power, capacity, and technology challenges of highly-scaled systems, from the top to the bottom of the memory hierarchy. We will begin by teaching about the unique challenges of caches in highly-scaled systems, and move on to a nuts-and-bolts view of emerging challenges in main memory subsystem power delivery, power consumption, and cooling. Having motivated the challenges of today’s systems, we will also cover the possibilities and limitations of emerging non-traditional memory technologies (such as 3D DRAMs, flash, and phase-change memory).
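The bandwidth side of this scalability wall can be made concrete with a quick back-of-envelope estimate. All figures in the sketch below (core count, bytes-per-flop demand, channel count) are illustrative assumptions chosen for the arithmetic, not measurements of any particular system; only the DDR3-1600 channel rate (8 bytes × 1600 MT/s) is a standard peak figure:

```python
# Back-of-envelope estimate of the many-core bandwidth gap.
# All workload and system parameters are illustrative assumptions.

cores = 16                 # assumed core count for a near-future chip
gflops_per_core = 4.0      # assumed sustained GFLOP/s per core
bytes_per_flop = 0.5       # assumed application bandwidth demand

demand_gbs = cores * gflops_per_core * bytes_per_flop  # GB/s required

channels = 2               # assumed DDR3 channels per socket
channel_gbs = 12.8         # DDR3-1600 peak: 8 bytes x 1600 MT/s

supply_gbs = channels * channel_gbs                    # GB/s available

print(f"demand: {demand_gbs:.1f} GB/s, supply: {supply_gbs:.1f} GB/s")
print(f"shortfall factor: {demand_gbs / supply_gbs:.2f}x")
```

Even with these modest assumptions, demand (32 GB/s) already exceeds pin bandwidth (25.6 GB/s), and doubling the core count doubles the gap while the channel count is limited by pins, power, and board space.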
Topics to be covered
· The cache’s view of the memory subsystem, in a multi-core, multi-processor context
· DRAM architecture and design fundamentals
· Memory DIMM design fundamentals
· Fundamentals of memory power consumption
· Memory power management
· System-level factors in memory design (space, cooling, etc.)
· System-level energy management
· Future DRAM technologies (e.g., 3D DRAM, DDR4)
· Future non-DRAM memory technologies (flash, PCM, etc.)
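The memory power topics above can be motivated with a rough system-level estimate. Every number in this sketch (DIMM count, per-DIMM power, socket count, processor power) is an illustrative assumption, not a vendor specification:

```python
# Rough comparison of memory power vs. processor power in a large
# server. All numbers are illustrative assumptions for the arithmetic.

dimms = 64                 # assumed DIMM count in a large server
watts_per_dimm = 8.0       # assumed average power of an active DIMM

memory_watts = dimms * watts_per_dimm      # total DRAM power

sockets = 4                # assumed processor sockets
watts_per_socket = 130.0   # assumed per-processor power budget

cpu_watts = sockets * watts_per_socket     # total processor power

print(f"memory: {memory_watts:.0f} W, processors: {cpu_watts:.0f} W")
```

Under these assumptions the memory subsystem (512 W) is on par with the processors (520 W), which is why chip- and system-level power walls increasingly constrain memory capacity and bandwidth.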
Hillery Hunter is a Research Staff Member in IBM's Exploratory Systems Architecture Department.
Viji Srinivasan is a Research Staff Member at IBM.
Kenneth Wright is a Senior Engineer in the IBM Server & Technology Group.