Hacker News | oelang's comments

VHDL mostly lost the ASIC consumer market and for some that's the only market that matters, but the hardware design ecosystem is much bigger than that.

I wonder what AI will do to RTL/verification, the rigid nature of VHDL may be a better target for AI than Verilog.


If AI does anything to the EDA space, I hope it helps break the chokehold the "big 3" have on tooling. Any startup that threatens their dominance gets acquired and disappeared.

I also hope this is coming anyway (see e.g.: KiCad nipping at Altium's heels, and Verilator's recent progress). There is just so much more to do, though...


(System)Verilog has delta cycles too, you know; they call it an event queue, but it's basically the same mechanism. It's the direct variable updates that happen outside of this mechanism that cause all the issues. IMHO it was a poor attempt at simulation optimization, and now it can't be taken out of the language anymore.
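To illustrate the idea for readers who haven't met it: a delta cycle splits simulation into an "evaluate" phase, where processes read only the old signal values, and an "update" phase, where scheduled values are committed. A toy Python sketch (invented names, not any real simulator; Python chosen since the thread has no runnable HDL):

```python
# Toy delta-cycle scheduler: processes read current signal values and
# schedule new ones; updates are committed only between deltas, so the
# order in which processes run within a delta cannot change the result.

class Signal:
    def __init__(self, value):
        self.value = value    # value visible to processes
        self._next = None     # value scheduled for the next delta

    def schedule(self, value):
        self._next = value

    def commit(self):
        """Apply the scheduled value; return True if it changed."""
        if self._next is not None and self._next != self.value:
            self.value, self._next = self._next, None
            return True
        self._next = None
        return False

def run_delta_cycles(processes, signals, max_deltas=100):
    """Run until no signal changes; return the number of deltas taken."""
    for delta in range(max_deltas):
        for proc in processes:                 # evaluate: old values only
            proc()
        changed = [sig.commit() for sig in signals]  # update phase
        if not any(changed):
            return delta + 1
    raise RuntimeError("did not settle (combinational loop?)")

# a drives b drives c; because updates are deferred, the change ripples
# one delta at a time, and reordering the processes gives the same result.
a, b, c = Signal(1), Signal(0), Signal(0)
procs = [lambda: b.schedule(a.value), lambda: c.schedule(b.value)]
deltas = run_delta_cycles(procs, [a, b, c])
print(b.value, c.value, deltas)   # -> 1 1 3
```

Verilog's blocking assignments are what bypass the "update" phase here, which is where the order-dependent behavior creeps in.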

I did not know!

It's important to have deterministic simulations and semantics that you can reliably reason about. Both VHDL and SystemVerilog offer this to some extent, but in (System)Verilog the order of value updates is not as strictly enforced. In practice, this means that if you switch to a different or newer simulator, your testbenches may suddenly fail. The simulator vendors love this, of course. This hidden cost is underestimated.

No sane hardware engineer would want randomness in their simulation unless they get to control it.
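"Randomness you get to control" in practice means seeded constrained-random stimulus: a failure can be replayed exactly by reusing the seed. A minimal Python sketch (the `Testbench` class and its methods are invented for illustration):

```python
import random

class Testbench:
    """Constrained-random stimulus with a controlled seed: the same seed
    always reproduces the same vector sequence, so a failing run can be
    replayed exactly instead of chasing a heisenbug."""

    def __init__(self, seed):
        self.seed = seed
        self.rng = random.Random(seed)   # private RNG, not the global one

    def stimulus(self, n):
        # e.g. n random byte values driven onto a bus
        return [self.rng.randint(0, 255) for _ in range(n)]

run1 = Testbench(seed=42).stimulus(5)
run2 = Testbench(seed=42).stimulus(5)
assert run1 == run2   # deterministic: same seed, same vectors
```

This is the same discipline UVM enforces with simulator seeds; the point is that the randomness is an input you log, not an artifact of scheduler ordering.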


VHDL still dominates in medical, military, avionics, space, etc., and it's generally considered the safer RTL language; any industry that requires functional safety seems to prefer it.

It's also the most-used language for FPGAs in Europe, but that's probably mostly cultural.


And I wish you had read the article; your comments are completely off topic.


Jim was involved in the early versions of Zen & M1, I believe he knows.

Apple's M series looks very impressive because, typically, at launch they are a node ahead of the competition; early access deals with TSMC are the secret weapon, and this buys them about 6 months. They are also primarily laptop chips; AMD has competitive technology but always launches its low-power chips after the desktop & server parts.


> so early access deals with TSMC is the secret weapon this buys them about 6 months

Aren't Apple typically 2 years ahead? The M1 came out in 2020; other CPUs on the same node (TSMC 5 nm) came out in 2022. If you mean Apple launches theirs 6 months before the rest of the industry gets onto the previous node, sure, but not the current node.

What you may be thinking of is that AMD's 7 nm is comparable to Apple's 5 nm, but really what you should compare is today's AMD CPUs with the Apple CPUs from 2022, since they are on the same architecture.

But yeah, all the impressive bits about Apple's performance disappear once you take architecture into account.


> but really what you should compare is todays AMD cpus with the Apple cpu from 2022, since they are on the same architecture

There only seem to be comparisons between laptop CPUs, which are quite limited.


Same node. Not same architecture.


Microsoft recently announced that they run ChatGPT 3.5 & 4 on the MI300 on Azure, and the price/performance is better.

https://www.amd.com/en/newsroom/press-releases/2024-5-21-amd...


I've used ChatGPT on Azure. It sucks on so many levels; everything about it was clearly dictated by bean counters who see X dollars for Y FLOPS with zero regard for developers. So choosing AMD here would be about par for the course. There's a reason why everyone at the top is racing to buy Nvidia cards and pay the premium.


"Everyone" at the top is also developing their own chips for inference and providing APIs so customers don't have to worry about using CUDA.

It looks like the price to performance of inference tasks gives providers a big incentive to move away from Nvidia.


There are only about 3 AI-building companies with the technical capability and resources to afford that, and 2 of them don't even offer their chips to others or have gone back to Nvidia. The rest are manufacturers desperately trying to get a piece of the pie.


AMD is already competitive on inference


Their problem is that the ecosystem is still very CUDA-centric as a whole.


Any sensor that captures a ton of data that needs real-time processing to 'compress' it before it can be forwarded to a data accumulator. Think MRI or CT scanners, but industrially there are thousands of applications.

If you need a lot of real-time processing to drive motors (think industrial robots of all kinds), FPGAs are preferred over microcontrollers.

All kinds of industrial sorting systems are driven by FPGAs because the moment of measurement (typically with a camera) and the sorting decision are less than a millisecond apart.

There are many more; it's a very 'industrial' product nowadays, but sometimes an FPGA will pop up in a high-end smartphone or TV because they allow adding certain features late in the design cycle.


If you're looking for fair comparisons, don't ask Nvidia's marketing department; those guys are worse than Intel's.

What AMD did was a true comparison, while Nvidia applies their transformer engine, which modifies & optimizes some of the computation to FP8, and they claim no measurable change in output. So yes, Nvidia has some software tricks up their sleeve, and that makes comparisons hard, but the fact remains that their best hardware can't match the MI300X in raw power. Given some time, AMD can apply the same software optimizations, or one of their partners will.
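The trade-off behind FP8-style tricks is easy to see with a toy quantization sketch. This is not Nvidia's actual transformer engine, just a generic symmetric-quantization illustration in Python of how round-trip error grows as precision shrinks:

```python
def quantize(xs, bits=8):
    """Symmetric per-tensor quantization: scale values onto a signed
    integer grid, round, then dequantize. The round-trip error is the
    kind of precision loss lower-bit formats introduce."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(x) for x in xs) / levels or 1.0
    return [round(x / scale) * scale for x in xs]

xs = [0.1, -0.25, 0.9, -1.0]
for bits in (16, 8, 4):
    q = quantize(xs, bits)
    err = max(abs(a - b) for a, b in zip(xs, q))
    print(f"{bits}-bit max error: {err:.5f}")    # error grows as bits shrink
```

Whether that error is "no measurable change in output" for a given model is exactly the claim that makes cross-vendor benchmarks hard to compare.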

I think AMD will likely hold the hardware advantage for a while; Nvidia doesn't have any product that uses chiplets, while AMD has been developing this technology for years. If the trend toward these huge AI chips continues, AMD has a better hand to scale their AI chips economically.


Not my area, but isn't a lot of NVIDIA's edge over AMD precisely software? NVIDIA seem to employ a lot of software dev (for a hardware company) & made CUDA into the de facto standard for much ML work. Do you know if AMD are closing that gap?


They have improved their software significantly in the last year, but there is a movement that's broader than AMD that wants to get rid of CUDA.

The entire industry is motivated to break the Nvidia monopoly. The cloud providers, various startups, & established players like Intel are building their own AI solutions. Simultaneously, CUDA is rarely used directly; typically a higher-level (Python) API is used that can target any low-level backend like CUDA, PTX, or ROCm.
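The backend-agnostic pattern can be sketched in a few lines of Python. All names here are invented for illustration (real frameworks like PyTorch do this with far more machinery): the numeric code calls one interface, and a registry picks whichever low-level backend is available at runtime.

```python
class Backend:
    """CPU fallback backend; CUDA/ROCm backends would subclass this and
    register themselves under their own names."""
    name = "cpu"

    def matmul(self, a, b):
        # naive pure-Python matrix multiply as the reference implementation
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

_registry = {"cpu": Backend()}

def register(name, backend):
    _registry[name] = backend

def get_backend(preferred=("cuda", "rocm", "cpu")):
    """Return the first available backend from the preference list."""
    for name in preferred:
        if name in _registry:
            return _registry[name]
    raise RuntimeError("no backend available")

# User code never mentions CUDA or ROCm directly:
be = get_backend()
print(be.name, be.matmul([[1, 2]], [[3], [4]]))   # cpu [[11]]
```

This is why the commenter's point matters: if applications only ever touch the high-level API, the vendor of the low-level backend becomes swappable.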

What AMD is lacking right now is decent support for ROCm on their consumer cards on all platforms. Right now, if you don't have one of these MI cards or an RX 7900 & you're not running Linux, you're not going to have a nice time. I believe the reason for this is that they have 2 different architectures: CDNA (the MI cards) and RDNA (the consumer hardware).


> Right now if you don't have one of these MI cards or a rx7900 & you're not running linux you're not going to have a nice time.

Are you saying that an RX 7900 + Linux = the happy path for ML? This is news to me; can you tell me more?

I would love to escape CUDA & the high prices of Nvidia GPUs.


That's what I have (an RX 7900 XT on Arch), and ROCm with PyTorch has been reasonably stable so far. Certainly more than good enough for my experimentation. PyTorch itself has official support, and things are pretty much plug & play.


> Given some time, AMD can apply the same software optimizations, or one of their partners will.

Except they have been given time, lots of it, and yet AMD is not anywhere close to parity with CUDA. It's almost like you can't just snap your fingers and willy-nilly replicate the billions of dollars and decades of investment that went into CUDA.


That was a year ago. AMD is changing their software ecosystem at a rapid pace, with AI software as the #1 priority. Experienced engineers have been reassigned from legacy projects to focus on AI software. They've bought a number of software startups that were already developing in this space. It also looks like they've replaced the previous top-level AMD management with directors from Xilinx to reenergize the team.

To get a picture of the current state, which has changed a lot, this MS Ignite presentation from three weeks ago may be of interest. The slides show the drop-in compatibility they have for higher levels of the stack and the translation tools at the lower levels. Finally, there's a live demo at the end.

https://youtu.be/7jqZBTduhAQ?t=61


The transformer engine is a fairly recent development (April of this year, I think), so I don't think they're very far behind.

