Samsung has launched a brand-new HBM chip that is twice as fast, 70% more efficient, and boosts AI processing speed.
In 2015, AMD launched graphics cards with HBM memory, a theoretical revolution in performance and features. But GDDR6 memory ended up dominating the market thanks to its better price/performance ratio.
Samsung HBM2 memory will increase AI processing speed
However, Samsung has just presented its new HBM-PIM (Processing-In-Memory) chips, which combine HBM2 technology with an integrated artificial intelligence engine that makes processing faster and more efficient than ever.
These memory chips include an artificial intelligence engine that handles many of the operations performed on them, so moving data between memory and the processor consumes less energy and transfer performance improves noticeably.
According to Samsung, applying this system to its HBM2 Aquabolt memory doubles performance while reducing power consumption by more than 70%. These claims are certainly striking and could once again drive large-scale adoption of this type of technology.
There’s no need for significant software or hardware changes
These new memories do not require significant software or hardware changes and are already in the testing phase; they will probably reach the market in the second half of the year.
Each memory bank has a small Programmable Computing Unit (PCU) running at 300MHz. There is a drawback, though: the PCUs leave less room for memory cells, so each PIM die has half the capacity (4GB) of a conventional 8GB HBM2 die. To compensate, Samsung combines dies with PCUs and dies without them to achieve 6GB chips.
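The 6GB figure is consistent with an equal mix of 4GB PIM dies and 8GB conventional dies. A minimal sketch of that arithmetic, assuming equal die counts per stack (the exact stack composition is not specified here):

```python
# Rough capacity sketch for a mixed HBM-PIM stack (illustrative figures only).
# PIM dies sacrifice half their cells to the Programmable Computing Units (PCUs).
PIM_DIE_GB = 4    # die with PCUs: half capacity
PLAIN_DIE_GB = 8  # conventional HBM2 die

def average_die_capacity(pim_dies: int, plain_dies: int) -> float:
    """Average capacity per die in a stack mixing PIM and plain dies."""
    total = pim_dies * PIM_DIE_GB + plain_dies * PLAIN_DIE_GB
    return total / (pim_dies + plain_dies)

# An equal mix averages out to 6 GB per die, matching the quoted 6GB chips.
print(average_die_capacity(4, 4))  # → 6.0
```

Any equal split of PIM and plain dies gives the same 6GB average; only the ratio between the two die types matters.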
For now, these memories will not be available for graphics cards like the ones AMD launched years ago; instead, the idea is to offer these modules for data centers and high-performance computing (HPC) systems.