Intel has introduced its first update for the software side of Project Battlematrix. The new inference-optimized software stack orchestrates AI workloads across the company’s Arc Pro B-series GPUs in multi-GPU workstations. The suite includes a Linux-based LLM Scaler for AI inference workflows.
Project Battlematrix is Intel’s AI-focused initiative to bring capable Intel-powered AI workstations to market. It pairs Intel hardware with a unified software stack to create a cohesive workstation platform built around multiple Arc Pro B-series GPUs in a single system. Battlematrix workstations will ship with Xeon CPUs, up to eight GPUs, and up to 192GB of total VRAM, with pricing ranging from $5,000 to $10,000.
Powering these systems is the Arc Pro B60, the workstation counterpart to Intel’s Arc B580 with more memory and PCIe 5.0 support. The Pro B60 packs 20 Xe cores, 160 XMX engines, 24GB of GDDR6 memory, multi-GPU support, and a configurable TDP ranging from 120W to 200W.
Supporting Project Battlematrix workstations is a validated, full-stack containerized Linux solution, which will include everything needed to get these systems up and running quickly and easily. The LLM Scaler is just one of several containers Intel is developing for that stack.
The LLM Scaler release 1.0, available on GitHub, focuses on “early customer enablement” and incorporates ten optimizations and features in total, including optimizations for several AI model types and new feature support such as speculative decoding and torch.compile.
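Intel’s release notes don’t detail its implementation, but speculative decoding itself is a well-known inference optimization: a small, fast draft model proposes several tokens at a time, and the large target model verifies them in a single forward pass instead of generating them one by one. Below is a minimal greedy sketch in PyTorch; the `target` and `draft` callables are placeholders for real language models, and the whole function is illustrative rather than Intel’s actual code.

```python
import torch

def speculative_decode_greedy(target, draft, tokens, k=4, max_len=64):
    # Greedy speculative decoding sketch. `target` and `draft` map a
    # (1, seq_len) token tensor to (1, seq_len, vocab) logits.
    while tokens.shape[1] < max_len:
        # 1. The cheap draft model proposes k tokens, one at a time.
        proposal = tokens
        for _ in range(k):
            nxt = draft(proposal)[:, -1].argmax(-1, keepdim=True)
            proposal = torch.cat([proposal, nxt], dim=1)
        # 2. The expensive target model scores the whole proposal in
        #    ONE forward pass instead of k sequential ones.
        n = tokens.shape[1]
        preds = target(proposal)[:, n - 1:-1].argmax(-1)  # target's pick for each drafted slot
        drafted = proposal[:, n:]
        # 3. Accept the longest prefix where draft and target agree,
        #    then append the target's own token at the first mismatch,
        #    so at least one token is committed per iteration.
        m = int((preds == drafted).long().cumprod(-1).sum())
        tokens = torch.cat([tokens, drafted[:, :m]], dim=1)
        if m < k:
            tokens = torch.cat([tokens, preds[:, m:m + 1]], dim=1)
    return tokens

if __name__ == "__main__":
    # Toy stand-in for a real LM: (1, L) token ids -> (1, L, vocab) logits.
    vocab = 100
    emb, head = torch.nn.Embedding(vocab, 16), torch.nn.Linear(16, vocab)
    toy_model = lambda ids: head(emb(ids))
    out = speculative_decode_greedy(toy_model, toy_model,
                                    torch.zeros(1, 1, dtype=torch.long))
    print(out.shape)  # sequence grown to at least max_len tokens
```

torch.compile, by contrast, needs only a one-line wrap, `model = torch.compile(model)`, to have PyTorch JIT-compile a model’s forward pass into optimized kernels on first call.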