Samsung Answers Questions About DRAM, HBM, and Storage Class Memory – Blocks & Files

We sent a Q&A email to Young-Soo Sohn, Corporate Vice President and Head of the DRAM Planning and Activation Group at Samsung Electronics, about developments in DRAM, High Bandwidth Memory (HBM) and Storage Class Memory (SCM), and received comprehensive responses on DRAM and HBM. The answers were briefer on SCM, which is more of an emerging technology than the other two.

DRAM

Blocks & Files: Which memory nodes does Samsung support? 1z, 1alpha, 1beta, 1gamma? I ask because Micron is providing updates on its future memory node support.

Young-Soo Sohn: We recently announced the industry’s most advanced 14nm DRAM, based on Extreme Ultraviolet (EUV) technology *. By increasing the number of EUV layers to five, we were able to create the smallest DRAM node possible today, which will enable unprecedented speeds. To capitalize on this, we plan to mass-produce 14nm-based DDR5 in the second half of this year.

Additionally, we are developing next-generation process nodes to suit the industry’s most demanding applications, all of which require improvements in density, performance and power consumption.

* Samsung was the first in the industry to adopt EUV for DRAM production. EUV technology reduces the number of repetitive multi-patterning steps and improves patterning accuracy, resulting in better performance and shorter development times.

What are Samsung’s plans for DDR5?

We have already provided samples to our customers and fully intend to meet their needs when launching DDR5 mass production in the second half of this year.

Samsung is working closely with major industry players to deploy DDR5 in a range of high-performance applications. We are currently sampling different variations of our DDR5 family with customers for verification, and soon for certification with their cutting-edge products, to accelerate AI/ML, exascale computing, analytics, networking and other data-intensive workloads.

When does Samsung think DDR6 technology will emerge? What benefits will it bring to the systems?

While we are unable to detail our DDR6 plans at this point, please rest assured that we are committed to delivering high performance DDR6 modules in a timely manner once the market is ready.

Generally speaking, the computing industry is changing rapidly and DRAM will continue to play a central role in its evolution. As computing power continues to increase, DRAM performance will also need to improve to keep pace. From this perspective, the transition to DDR6 is inevitable, and we are partnering with global manufacturers and investing in cutting-edge technologies to ensure high-performance, energy-efficient memory solutions that will accelerate AI/ML and other compute-intensive applications. Without a doubt, DDR6 will be the next key player, and we intend to stay at the forefront of the industry when this transition occurs.

HBM

What capacity advantage would HBM offer over DRAM?

It varies from generation to generation. The current HBM2E standard (an extension of the second-generation HBM2) supports stacks of up to eight DRAM dies, and combining eight such stacks would allow an HBM2E SiP to provide 128GB of capacity. In practice, however, currently available HBM2E stacks cap at 16GB each. The HBM3 standard (under development) is expected to expand stacking capabilities, which, together with increasing device densities, will produce considerably higher maximum capacities.
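
To make the arithmetic behind those capacity figures explicit, here is a minimal back-of-the-envelope sketch in Python. The per-die density (16Gbit) and the eight-stack SiP configuration are assumptions for illustration, not figures confirmed in the answer above.

# Back-of-the-envelope HBM2E capacity, using the figures quoted above.
# Assumptions: 16Gbit DRAM dies, 8 dies per stack, 8 stacks in one SiP.
GBIT_PER_DIE = 16
DIES_PER_STACK = 8
STACKS_PER_SIP = 8

gb_per_stack = GBIT_PER_DIE * DIES_PER_STACK / 8      # gigabits -> gigabytes
gb_per_sip = gb_per_stack * STACKS_PER_SIP

print(f"{gb_per_stack:.0f} GB per stack")             # 16 GB
print(f"{gb_per_sip:.0f} GB per SiP")                 # 128 GB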

[Samsung table]

What bandwidth advantage would HBM offer over DRAM?

This is where the advantages of HBM are most evident, in both bus width and signaling speed. Since their first generation, HBM standards have supported a bus width of 1,024 bits, compared with only 32 bits for GDDR, and the HBM2E standard specifies a signaling rate of 3.6 gigabits per second (Gbit/sec) per pin, giving up to 460 gigabytes per second (GB/sec) of bandwidth per stack. At this point in HBM3's development, we expect to achieve a signaling rate of up to 6.4 Gbit/sec, well above many expectations.

Unlike the case of capacity, however, in practice bandwidth has exceeded the standard: Samsung's HBM2E Flashbolt devices run at 3.6 Gbit/sec per pin and provide up to 460GB/sec of bandwidth per stack. This compares with the 16 Gbit/sec per-pin data rate specified by the GDDR6 standard for that generation of DRAM. And for processors with a 4,096-bit memory interface, such as GPUs and FPGAs, a combination of eight Flashbolt stacks can deliver 128GB of memory with a maximum bandwidth of 3.68TB/sec, far beyond traditional DRAM.
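
As a rough check on those bandwidth numbers, the following Python sketch reproduces the arithmetic implied above (1,024-bit bus per stack, 3.6 Gbit/sec per pin, eight stacks combined); it is purely illustrative, not a Samsung-supplied model.

# Per-stack and multi-stack HBM2E bandwidth, using the figures quoted above.
PINS_PER_STACK = 1024        # HBM bus width per stack, in bits
GBIT_PER_PIN = 3.6           # Flashbolt per-pin signaling rate, Gbit/sec
STACKS = 8                   # stacks combined on one interposer (assumption)

gb_per_sec_per_stack = PINS_PER_STACK * GBIT_PER_PIN / 8   # gigabits -> gigabytes
tb_per_sec_total = gb_per_sec_per_stack * STACKS / 1000

print(f"{gb_per_sec_per_stack:.1f} GB/sec per stack")          # 460.8, quoted as ~460
print(f"{tb_per_sec_total:.2f} TB/sec across {STACKS} stacks") # ~3.69, quoted as 3.68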

How would HBM be constructed in terms of, say, the number of DRAM stacks (layers) and an interposer and processor?

As noted above, the current HBM2E standard supports up to eight dies per stack, while the as-yet-unpublished HBM3 standard is expected to increase the maximum die count.

The use of through-silicon vias (TSVs) and microbumps (rather than wire bonding) to interconnect the stacked memory dies allows a very small footprint and fast data transfer rates, while also distributing heat better to minimize thermal problems.

Additionally, the use of a silicon interposer to interconnect the HBM stacks and the processor means that memory and processor can sit very close together for reduced access time, taking advantage of the high efficiency of silicon-based interconnects. The end result is both improved performance and significantly reduced power consumption compared with the board-level interconnects typically used for traditional DRAM. And in the future, the interposer offers opportunities to include additional active circuitry that could take the HBM model from its current 2.5D status to true 3D integration, including the very promising processing-in-memory (HBM-PIM) route.

Which vendor attaches the CPU to the HBM?

Assembling and testing the HBM and processor requires advanced techniques, but these are similar to those increasingly used in chiplet-style designs, where multiple dies are connected within a single IC package. Each system OEM will make its own manufacturing supply chain decisions, but it now appears that members of the existing Outsourced Semiconductor Assembly and Test (OSAT) community (ASE, Amkor, etc.) will have the necessary capabilities, as will some integrated device manufacturers (e.g. Intel) and foundries (e.g. TSMC).

Which processors would be supported? Which Xeon generation? Which AMD generation? Arm? RISC-V?

We work with a variety of processor developers. On the Arm side, Fujitsu is already shipping, while Xeon and RISC-V support is currently in development.

Would / could HBM be connected to servers via a CXL bus? Is this a good or a bad idea?

We are deeply involved in the Compute Express Link (CXL) interconnect standard and continue to monitor its development closely. As with most technologies, the role of a specific memory technology will be determined in many ways by the requirements of the application.

Will the systems (servers, workstations, laptops) use both DRAM and HBM? Why would they do this?

New products and projects have always presented system designers with a unique set of considerations around performance, power consumption, cost, form factor, time to market and so on. Today, with computing technology clearly in an era of unprecedented advancement, those calculations and trade-offs are more complex than ever, as AI, machine learning, HPC and other emerging applications drive change at every level, from supercomputing centers to edge devices.

In response to these dynamics, and with traditional computing models and architectures under increasing pressure, technical expectations for memory are rapidly evolving and diverging. As a result, a growing array of memory types (GDDR6, HBM, DDR5, HBM-PIM, etc.) is now in use, especially in the HPC world, and this growing heterogeneity is likely to spread to almost all market sectors over time as application demands become ever more specialized. Again, these options are catalysts for creative and innovative engineering, and we expect designers to take advantage of every available option to achieve their specific goals.

Storage Class Memory (SCM)

What’s going on with Z-NAND?

We are carefully considering the development direction of Z-NAND as we gather customers' requirements.

If HBM is adopted by customers, will SCM still be needed in the memory-storage hierarchy?

With the explosive growth in data generation, the need to deliver data-centric solutions for diverse workloads has never been greater. We prepare solutions in close collaboration with a variety of partners to meet these radical demands for innovation in memory.

Which SCM technologies have caught Samsung’s attention and why?

There are different workloads in the industry with different requirements, which need to be addressed accordingly.

How does Samsung view Intel’s 3D XPoint technology?

As mentioned above, there are various needs in the industry and a variety of solutions to meet those needs.

Comment

Samsung is energetically active with DDR5 and DDR6 memory, and is strongly developing its HBM and HBM-PIM stacked memory products. We note that it is considering HBM with x86, Arm and RISC-V processors. HBM-Arm and HBM-RISC-V systems would be attractive competition for Xeon-HBM systems, especially in edge environments with low-latency, data-intensive processing needs.

The storage class memory picture is less clear. Samsung appears to see a general need for SCM technology, but it is possible that application-specific technologies will be needed rather than a single SCM technology.

