It "should" be possible, probably you need to install the onnx runtime environment first:
https://duckduckgo.com/?q=Intel+onnx+ru ... =h_&ia=web
I myself have no experience with onnx.
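For illustration, a minimal sketch of loading and running an ONNX model with the onnxruntime Python package (pip install onnxruntime); the model file name and the dummy input are just placeholders, not tied to any particular engine or net:

Code: Select all

import numpy as np
import onnxruntime as ort

# create an inference session; the CPU execution provider is the safe default
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# inspect what the model expects as input
inp = session.get_inputs()[0]
print(inp.name, inp.shape, inp.type)

# run inference with a dummy tensor (unknown/dynamic dims replaced by 1)
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print(outputs[0].shape)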
--
Srdja
smatovic wrote: ↑Thu Jan 16, 2025 8:17 pm
AMD Zen 5 Ryzen and Zen 5 Epyc "Turin" are fabricated on the TSMC N4X node, Zen 5c "Turin Dense" on TSMC N3E.
https://en.wikipedia.org/wiki/Zen_5
https://en.wikipedia.org/wiki/Epyc#Fift ... rin_Dense)
So with Zen 6 and a likely switch to 3nm or 2nm, we can expect a significant core-count increase for the former ones (those currently on N4X).
https://en.wikipedia.org/wiki/5_nm_process#Nodes
https://en.wikipedia.org/wiki/3_nm_proc ... cess_nodes
Zen 6 initially on track for 2025, now maybe 2026-2027?
TSMC 2nm process to enter volume production in H2 2025.
https://en.wikipedia.org/wiki/2_nm_proc ... cess_nodes
Intel skipped the 2nm node and bets on 18A; Samsung is in with SF2.
--
Srdja
smatovic wrote: ↑Wed Mar 19, 2025 7:46 pm Nvidia announces Blackwell Ultra and Rubin AI chips
https://www.cnbc.com/2025/03/18/nvidia- ... hips-.html
Next-gen Nvidia Rubin (2026) will probably be a bigger step for end users in regard to core count because of the smaller fab process (cf. Intel's formerly so-called tick-tock cycle).
And Nvidia DGX Spark and Station announced, Nvidia supercomputers in a desktop:
NVIDIA Announces DGX Spark and DGX Station Personal AI Computers
https://nvidianews.nvidia.com/news/nvid ... -computers
--
Srdja
smatovic wrote: ↑Wed Mar 19, 2025 7:46 pm
And Nvidia DGX Spark and Station announced, Nvidia supercomputers in a desktop:
NVIDIA Announces DGX Spark and DGX Station Personal AI Computers
https://nvidianews.nvidia.com/news/nvid ... -computers
Nvidia Spark is the former DIGITS:
towforce wrote: ↑Thu Mar 20, 2025 11:31 pm [...]
My reading of that article is that the Spark will be a petascale computer, which was achieved in the supercomputer world 16 years ago (link). That was too long ago to fairly make the comparison. After all, the Exynos W1000, the SoC in the Galaxy Watch 7, will handily beat any supercomputer from the 1970s.
And the petaFLOPS quoted for modern GPUs/TPUs are matrix compute, not vector (SIMD) and not scalar (ALU/FPU) compute (see the sketch below).
...contrary to Intel's former tick-tock cycle, Nvidia releases a new microarchitecture on a new (external) fab process (die shrink) roughly every two years, and in between an optimization step of microarchitecture and fab process.
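As a rough illustration of why the kind of compute matters when quoting FLOP/s, here is a small CPU-side NumPy sketch (matrix size and timing method are arbitrary choices, not vendor numbers): a matrix multiply (GEMM, ~2*N^3 operations) reaches far higher FLOP/s than an elementwise, vector-style multiply (N^2 operations), and the advertised GPU/TPU petaFLOPS are of the GEMM/tensor-core kind.

Code: Select all

import time
import numpy as np

N = 2048
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

# matrix compute: one GEMM performs roughly 2*N^3 floating-point operations
t0 = time.perf_counter()
c = a @ b
t1 = time.perf_counter()
gemm_flops = 2 * N**3 / (t1 - t0)

# elementwise (vector-style) compute: one multiply performs N^2 operations
t0 = time.perf_counter()
d = a * b
t1 = time.perf_counter()
elem_flops = N * N / (t1 - t0)

print(f"GEMM:        {gemm_flops / 1e9:8.1f} GFLOP/s")
print(f"elementwise: {elem_flops / 1e9:8.1f} GFLOP/s")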