Hacker News

I'm kind of questioning whether it even makes sense for Intel to try to compete with Nvidia and AMD on GPUs or deep learning accelerators. In a sense they just need to really win in one area, or at least to have a product with a really compelling price/performance ratio. Though you could argue there's a certain synergy between laptops, desktops, and servers (people will generally favor running the same architecture they develop on).


I think Intel SIMD (AVX-512) is still best-in-class for vector math. Run-length-encoded algorithms in column stores like SAP HANA are currently the key use case (revenue/market-wise). What I've learned since the release of the Apple M1 is that ISA extensions for matrix math, as incorporated in Apple's AMX, complement the vector-centric SIMD ISA and are probably a key battle front for ML/DL. A good vector/matrix SIMD-like ISA should cover many ML/DL use cases currently addressed with a discrete accelerator like Apple's Neural Engine or Nvidia GPUs.

Amazon's Graviton ARM Neoverse CPUs implement only ARM NEON SIMD rather than the newer SVE2. I don't know whether ARM SVE2 has matrix math instructions like AMX does. Intel AVX++ might be an important alternative to discrete ML/DL accelerators. DL training accelerators appear to be focused on RDMA over Converged Ethernet (RoCE) [1], and I'm assuming this technology is new enough for Intel to gain a foothold.
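For anyone unfamiliar with what a matrix ISA extension actually accelerates: the unit of work is a small tile multiply-accumulate. Here's a hypothetical scalar sketch of that computation (illustrative only; real AMX operates on tile registers, not C arrays). On a vector-only ISA this is nests of FMA loops; a matrix extension issues the whole tile update as roughly one instruction:

```c
/* Tile dimensions are an assumption for illustration; real hardware
   tile shapes differ (and depend on element type). */
#define TILE 4
#define K    4

/* C[TILE][TILE] += A[TILE][K] * B[K][TILE] — the tile-level
   multiply-accumulate that AMX-style matrix units perform. */
void tile_matmul(float C[TILE][TILE],
                 const float A[TILE][K],
                 const float B[K][TILE]) {
    for (int i = 0; i < TILE; i++)
        for (int j = 0; j < TILE; j++)
            for (int k = 0; k < K; k++)
                C[i][j] += A[i][k] * B[k][j];
}
```

A large matrix multiply (the core of DL training and inference) is then just this kernel tiled over the full matrices, which is why collapsing it into dedicated instructions moves the needle so much.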

I don't know if I understand this space well enough, but I think there are enough truths in this explanation to keep me from ruling out Intel.

[1] https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet


I wouldn't rule them out either. AMD survived some very bad years. It's not impossible for a company to shrink, trim the fat, and grow again. However, having known people who worked at Intel, it's not just a question of a lack of innovation. They had some very smart engineers, but it seemed like the culture there was very toxic: lots of unhealthy internal competition, bureaucracy, infighting, etc. I think they need to improve their internal culture first in order for the company to make a real comeback. If anything, I think their recent losses are a serious wake-up call. I hope they make the right decisions, because the hardware industry needs competition.


What you're talking about here is the client/inference side, not the server/training side, of the ML/DL compute battle. The former is meaningful in a sense, but practically insignificant relative to the latter. Almost all ML/DL compute, GPU, CPU, or otherwise, is on the server side of the equation, and if you're relying on vector CPU math for it, well, good luck.

Accelerators have a very clear space for themselves on the training side, and no amount of fancy ISA magic is really going to make up for the fact that TPUs and the like beckon. It's where most of the real money is.


Wayne Gretzky: "I skate to where the puck is going, not where it has been."

Intel Acquires Artificial Intelligence Chipmaker Habana Labs [1]:

> Intel estimates the total addressable market (TAM) for AI silicon by 2024 will be greater than $25 billion, and within that, AI silicon in the data center is expected to be greater than $10 billion in the same timeframe.

Amazon EC2 instances powered by Habana Gaudi [2]:

> Up to 40% better price performance for deep learning models

Matrix CPU math is another plausible future; Intel, AMD, and/or ARM are positioned to drive such an ISA extension. Accelerators, coprocessors, FPGAs, and CPU ISA extensions all seem to be in play; it is too early to pick winners in this nascent market.

[1] https://newsroom.intel.com/news-releases/intel-ai-acquisitio...

[2] https://aws.amazon.com/ec2/instance-types/habana-gaudi/




