Jon Peddie Research’s Market Watch reports GPU shipments up
Revised data reveals the PC GPU market increased 0.2% quarter-to-quarter
A graphics processing unit (GPU) with the power to present gamers with gorgeously rendered graphics and physics effects at high frame rates is a valued asset. Increasingly, AI is also becoming part of the equation. As games evolve, the demand for more powerful graphics processors will continue to go up, and there … Read more
In 2011, a scant seven years ago, Florida-based Magic Leap wowed the world and investors with fake videos of its planned augmented reality (AR) headset. The videos were so good that investors fell over themselves trying to stuff money into Magic Leap’s bank account, and they succeeded to the tune of something north of $2.3 billion US dollars, making it the richest startup … Read more
If you were lucky enough to attend the Computer Vision and Pattern Recognition conference in Salt Lake City, and if you were one of the twenty attendees lucky enough to be tapped by Nvidia’s gregarious CEO Jensen Huang, then you went home with a 32 GB Titan V supercomputer AIB. Jensen Huang demonstrating his largesse in … Read more
In 2014, a disruptive approach to HPC enabled IBM to win two contracts to build the next generation of supercomputers as part of the US Department of Energy’s Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (known as the CORAL program). In partnership with Nvidia and Mellanox, IBM demonstrated to CORAL that a data-centric … Read more
AMD and Nvidia, despite how they posture and position themselves, have limited resources; and since there are only 24 hours in a day, they have to pick their priorities carefully. And even though there was a lot of fanfare when Koduri rejoined AMD in 2013, the Vega product line was less than stellar, although it did have a surprising advantage … Read more
Nvidia just won’t quit and has built a double-decker, rack-mount server console of Volta AIBs that can be used for AI training, among other things. The hardcore specifications are:
AIBs: 16× Tesla V100s, providing 10,240 Tensor cores plus 81,920 CUDA cores, with 512 GB of GPU memory
Performance: 2 petaFLOPS AI | 250 teraFLOPS FP32 | 125 teraFLOPS FP64
NVSwitch Communication Channel powered … Read more
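Those aggregate figures are simply the per-board Tesla V100 numbers multiplied by sixteen. As a back-of-envelope check, here is a minimal sketch assuming Nvidia's published per-V100 specs (5,120 CUDA cores, 640 Tensor cores, 32 GB HBM2, and roughly 15.7, 7.8, and 125 TFLOPS for FP32, FP64, and tensor work); the per-GPU figures below are drawn from public spec sheets, not from the article:

```cuda
#include <cstdio>

// Back-of-envelope check of the DGX-2 aggregate specs, assuming
// Nvidia's published per-V100 (SXM3, 32 GB) figures.
int main() {
    const int    gpus        = 16;
    const int    cudaCores   = 5120;   // CUDA cores per V100
    const int    tensorCores = 640;    // Tensor cores per V100
    const int    memGB       = 32;     // HBM2 per V100, in GB
    const double fp32TF      = 15.7;   // FP32 TFLOPS per V100
    const double fp64TF      = 7.8;    // FP64 TFLOPS per V100
    const double tensorTF    = 125.0;  // deep-learning TFLOPS per V100

    printf("Tensor cores: %d\n",     gpus * tensorCores);         // 10,240
    printf("CUDA cores:   %d\n",     gpus * cudaCores);           // 81,920
    printf("GPU memory:   %d GB\n",  gpus * memGB);               // 512
    printf("FP32:   ~%.0f TFLOPS\n", gpus * fp32TF);              // ~251
    printf("FP64:   ~%.0f TFLOPS\n", gpus * fp64TF);              // ~125
    printf("Tensor: ~%.0f PFLOPS\n", gpus * tensorTF / 1000.0);   // ~2
    return 0;
}
```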
Breathless reporting makes Nvidia’s and Arm’s collaboration on Xavier sound like earth-shattering news. Nvidia and Arm recently announced that they would work together to make Nvidia’s Deep Learning Accelerator (NVDLA) accessible to programmers using Arm’s Trillium platform. Project Trillium was announced by Arm in February this year as a way for developers to get direct access from … Read more
HP has just released 16 new products. One of them is, in my unhumble opinion, a real standout: the HP 34-inch 4K curved all-in-one. This is a system with built-in induction phone charging, an Nvidia GTX 1050 for graphics, four speakers, a mic, a camera, and other nifty features. Available in mid-June at an estimated starting price of $1,300, it is … Read more
$3.2 billion in sales, $1.24 billion profit, up 11% from last quarter
Nvidia reported another quarter of record revenue for the first quarter ended May 1, 2018: $3.21 billion, up 66% from $1.9 billion a year earlier and up 10% from $2.9 billion in the previous quarter, with growth across most of its platforms. Revenues were up in all segments. … Read more
What Nvidia is doing to Intel, Wave is trying to do to Nvidia, and it might just work
We got a look at Nvidia’s DGX-2 datacenter accelerator at GTC a few weeks ago, and at about the same time Wave Computing started dropping hints that their DPU (Dataflow Processing Unit) is about ready to ship in datacenter-ready systems and that we … Read more
The evolution of future processors
Is it a revolution, or just evolution? The GPU, with its hyper-dense compute capacity and relatively low cost, is an amazingly powerful workload accelerator for certain classes of problems: those that lend themselves to massively parallel processing and multi-threaded workloads. When programmable vertex shaders were first introduced in 2002 (by 3Dlabs), and the … Read more
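To make "massively parallel, multi-threaded workloads" concrete, here is a minimal CUDA sketch of SAXPY (y = a·x + y), the textbook example of the data-parallel arithmetic a GPU accelerates; the kernel and launch configuration are illustrative assumptions, not anything taken from the article:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: the classic data-parallel pattern
// that maps naturally onto a GPU's thousands of cores.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;           // ~1 million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch ~1 million threads; the hardware schedules them
    // across the GPU's streaming multiprocessors.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);   // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```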