Development of video memory in games: from the first video cards to future DDR7
First generation: early video cards and EDO RAM
Video memory has been an essential part of computer systems since the earliest video cards. In the 1980s and early 1990s, when computers were increasingly used for gaming, video memory was relatively primitive. Early video cards, such as the IBM Monochrome Display Adapter (MDA) and Color Graphics Adapter (CGA), used conventional dynamic random access memory (DRAM). This memory provided only minimal graphics capability and offered nothing like modern image-processing features.
A key stage in the evolution of video memory was the arrival of Extended Data Out RAM (EDO RAM). Introduced in the mid-1990s, EDO RAM delivered significant performance gains by managing memory accesses more efficiently. Unlike conventional DRAM, which required one access cycle to complete before the next could begin, EDO RAM allowed a new cycle to start while the previous one was still finishing, which noticeably accelerated reads and writes.
Graphics cards with EDO RAM, such as the Matrox Millennium, became popular among gamers thanks to their improved graphics performance. They made it possible to display more complex scenes and improve image quality, which mattered greatly for the games of that time. These improvements let gamers enjoy more detailed and colorful virtual worlds, which contributed to the rising popularity of computer games and, in turn, to the development of more complex, interactive titles that demanded more from video memory.
Second generation: SDRAM and GDDR1
As the industry moved toward the 2000s, video memory continued to evolve. The advent of Synchronous Dynamic RAM (SDRAM) was another important step forward: SDRAM was synchronized with the system clock, which significantly improved its performance over EDO RAM. One of the first video cards with SDRAM was the NVIDIA RIVA TNT, released in 1998; it delivered noticeably better performance and supported higher resolutions and more complex graphics effects.
Soon afterwards, the first version of Graphics Double Data Rate (GDDR) memory appeared: GDDR1. GDDR1 was a specialized type of SDRAM optimized for graphics workloads, offering higher bandwidth and lower latency than conventional SDRAM. Cards with this memory, such as the DDR variant of the NVIDIA GeForce 256, delivered a significant jump in graphics performance and opened up new possibilities for game developers.
GDDR1 significantly improved graphics quality in games. Developers were able to introduce more complex textures, increase detail, and add realistic lighting effects. This generation of video memory became the starting point for the further development of game graphics: players enjoyed a smoother, better-looking experience, which fed growing interest in computer games and spurred progress in gaming technology, including better graphics engines and rendering techniques.
Third generation: GDDR2 and GDDR3
The next stage in the development of video memory was the appearance of GDDR2 in the early 2000s. GDDR2 offered higher clock speeds and bandwidth than GDDR1, and graphics cards that used it, such as the NVIDIA GeForce FX 5800 Ultra, could take on more complex graphics workloads and higher resolutions.
However, despite the improvements, GDDR2 had some limitations, such as increased power consumption and heat dissipation. This stimulated further research and development of new types of video memory.
GDDR3, which appeared in the mid-2000s, was the next major milestone. It offered even higher bandwidth and markedly better power efficiency than GDDR2. GDDR3 graphics cards such as the NVIDIA GeForce 6800 became the standard for gamers and graphics professionals, enabling more complex graphical effects, such as improved lighting and shadows, and more realistic textures.
The development of GDDR3 also brought improved thermal behavior, allowing graphics cards to run at higher frequencies without overheating. This mattered especially to gamers, for whom high frame rates had become critical to a smooth, high-quality experience. With GDDR3, video memory became more stable and performant, making it possible to build games with a high level of graphics and interactivity. Developers began using new technologies to create more realistic, immersive virtual worlds, giving rise to popular game franchises and significant growth in the industry.
Fourth generation: GDDR4, GDDR5 and HBM
GDDR4, which appeared in the second half of the 2000s, offered further improvements: significantly higher throughput and lower power consumption than GDDR3. GDDR4 graphics cards like the ATI Radeon HD 2900 XT gave gamers and professionals even more graphics power.
However, GDDR4 was never widely adopted and was quickly superseded by GDDR5, a real breakthrough in video memory. GDDR5 roughly doubled the per-pin data rate, allowing graphics cards to handle far more demanding graphics workloads, and cards such as the NVIDIA GeForce GTX 480 made it the new industry standard.
GDDR5 let game developers implement more complex graphical effects such as realistic lighting, shadows and reflections, and high-resolution textures, and it became the foundation for many modern graphics technologies. With GDDR5, video memory reached a new level of performance, enabling games with remarkable graphics and detail, while also improving the energy efficiency of graphics cards enough for them to fit into more compact yet powerful devices.
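The bandwidth figures quoted for memory generations all come from one simple relationship: peak bandwidth is the effective per-pin data rate multiplied by the bus width. A minimal sketch, using the GeForce GTX 480's commonly published specs (the exact per-pin rate varies by board, so treat the numbers as illustrative):

```python
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s.

    data_rate_gbps: effective per-pin data rate, as quoted on spec sheets
    (the double/quad data rate multiplier is already included in it).
    bus_width_bits: width of the memory interface.
    """
    return data_rate_gbps * bus_width_bits / 8  # 8 bits per byte

# GeForce GTX 480: GDDR5 at ~3.7 Gbps effective on a 384-bit bus
print(round(memory_bandwidth_gbs(3.7, 384), 1))  # -> 177.6 (GB/s)
```

The same formula explains why every later generation chases either faster pins (GDDR) or wider buses (HBM).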
During the same period, the first version of High Bandwidth Memory (HBM) was developed. HBM was a completely new video memory architecture designed for high bandwidth and low power consumption; it was first used in the AMD Radeon R9 Fury X in 2015. HBM achieved significantly higher bandwidth than traditional GDDR memory by stacking memory dies vertically and connecting them over a very wide interface, which allowed higher density and lower latency.
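The design trade-off behind HBM can be seen in the numbers: instead of pushing a few hundred pins to very high speeds, it runs a 4096-bit interface at a modest per-pin rate. A rough comparison, using the commonly published Fury X specs against a contemporary 384-bit GDDR5 flagship (figures are illustrative):

```python
# Peak bandwidth (GB/s) = per-pin rate (Gbps) x bus width (bits) / 8.
# HBM trades per-pin speed for a very wide interface.
hbm_gbs = 1.0 * (4 * 1024) / 8   # Radeon R9 Fury X: 4 stacks x 1024-bit at 1 Gbps/pin
gddr5_gbs = 7.0 * 384 / 8        # contemporary GDDR5 flagship: 384-bit at 7 Gbps/pin
print(hbm_gbs, gddr5_gbs)        # -> 512.0 336.0
```

Despite running each pin seven times slower, the HBM card comes out well ahead on total bandwidth.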
HBM was a major milestone in the development of video memory, opening up new possibilities for processing large amounts of data in real time. This allowed game and application developers to create even more complex and detailed virtual worlds. HBM has also played an important role in the development of professional graphics solutions used in areas such as scientific computing and machine learning.
Fifth generation: GDDR6, GDDR6X and HBM2
As technology advanced, video memory continued to improve. GDDR6, introduced in 2018, offered even higher bandwidth and better power efficiency, and GDDR6 graphics cards like the NVIDIA GeForce RTX 2080 became the standard for high-resolution, graphics-intensive games and applications.
GDDR6 enabled even more demanding techniques, such as real-time ray tracing and deep-learning-based image upscaling, which depend on its high bandwidth and low latency.
GDDR6X, introduced in 2020, went further. It uses PAM4 (four-level Pulse Amplitude Modulation) signaling, which carries two bits per symbol instead of one and thereby doubles the per-pin data rate at a given signaling speed compared with GDDR6. GDDR6X graphics cards like the NVIDIA GeForce RTX 3080 became the new benchmark for graphics performance. These gains let game developers create more realistic and immersive game worlds, markedly improving the experience for players, while the higher data density and efficiency of GDDR6X marked an important step forward for the industry.
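The PAM4 advantage is purely a matter of bits per symbol: NRZ signaling (used by GDDR6) distinguishes two voltage levels and carries one bit per symbol, while PAM4 distinguishes four levels and carries two. A small sketch of the arithmetic (the 9.5 GBaud symbol rate is chosen to match the RTX 3080's 19 Gbps GDDR6X and is illustrative):

```python
# Bits carried per transmitted symbol under each signaling scheme.
NRZ_BITS_PER_SYMBOL = 1   # two voltage levels -> 1 bit/symbol
PAM4_BITS_PER_SYMBOL = 2  # four voltage levels -> 2 bits/symbol

def data_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    """Per-pin data rate in Gbps for a given symbol rate."""
    return symbol_rate_gbaud * bits_per_symbol

# At the same symbol rate, PAM4 doubles per-pin throughput:
print(data_rate_gbps(9.5, NRZ_BITS_PER_SYMBOL))   # -> 9.5  (Gbps)
print(data_rate_gbps(9.5, PAM4_BITS_PER_SYMBOL))  # -> 19.0 (Gbps)
```

The cost of PAM4 is a smaller voltage margin between levels, which is why it demands more careful signal integrity engineering.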
Another important achievement of this period was HBM2, which offered even greater throughput and better energy efficiency than the first version of HBM. It found application in high-performance cards such as the NVIDIA Tesla V100 and AMD Radeon VII, where real-time work on large volumes of data is essential for machine learning, artificial intelligence, and scientific computing.
HBM2 enabled more powerful and efficient graphics solutions capable of handling the most demanding tasks, and it played a key role in the growth of cloud computing and data centers, where high throughput and low memory latency are essential.
Sixth generation: HBM2E
The development of video memory did not stop with HBM2. The next step was HBM2E, an improved version of HBM2 offering even greater bandwidth and data density, which let graphics cards take on still more demanding workloads.
HBM2E found application in high-performance computing systems and accelerators such as the NVIDIA A100 and AMD Instinct MI100, enabling work with large data volumes and raising performance for machine learning and artificial intelligence tasks.
Future: GDDR7 and HBM3
As video memory continues to develop, GDDR7 is expected in the near future. Micron says its new GDDR7 memory will deliver up to 30% higher gaming performance, particularly in ray tracing and rasterization. Offering per-pin speeds of 28 to 32 Gbps, Micron's GDDR7 promises significant improvements in memory bandwidth and power efficiency.
At 32 Gbps, GDDR7 delivers that claimed 30% uplift over Micron's own GDDR6 running at 20 Gbps, and the gain is attributed to the memory alone, which is impressive. Although the company did not disclose its testing platform, the results appear plausible.
Micron also quotes up to a 60% increase in memory bandwidth, a 50% improvement in power efficiency, and up to 20% better response times for GDDR7. In gaming consoles and PCs, GDDR7 promises a major leap in gaming performance, especially at 4K Ultra settings.
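The quoted 60% bandwidth figure follows directly from the per-pin speeds: moving from 20 Gbps GDDR6 to 32 Gbps GDDR7 on the same bus width is a 1.6x jump. A quick check, assuming a hypothetical 256-bit interface held constant for the comparison:

```python
# Peak bandwidth (GB/s) = per-pin rate (Gbps) x bus width (bits) / 8.
BUS_WIDTH_BITS = 256  # hypothetical mid-range interface, same for both

gddr6_gbs = 20 * BUS_WIDTH_BITS / 8   # 640.0 GB/s at 20 Gbps/pin
gddr7_gbs = 32 * BUS_WIDTH_BITS / 8   # 1024.0 GB/s at 32 Gbps/pin
print(f"bandwidth uplift: {gddr7_gbs / gddr6_gbs - 1:.0%}")  # -> bandwidth uplift: 60%
```

Note that the bandwidth uplift (60%) exceeds the claimed gaming uplift (30%), since game performance depends on the GPU and the rest of the pipeline, not on memory bandwidth alone.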
NVIDIA plans to integrate GDDR7 into its "RTX 50" Blackwell products, and AMD intends to use it in RDNA 4. Intel may remain on GDDR6 with Battlemage "Xe2" for now, leaving GDDR7 for future graphics generations.
Besides GDDR7, HBM3 is also on the horizon. HBM3 promises even higher throughput and energy efficiency compared to HBM2E. It will be used in the most high-performance computing systems and graphics solutions, providing the ability to work with huge volumes of data in real time.
These improvements will open up new possibilities for game and application developers, allowing them to create even more realistic and immersive virtual worlds. The development of video memory remains a key aspect of the evolution of computer graphics, and GDDR7, together with HBM3, will be an important milestone along this path.