DeepSeek V4‑Pro: A Frontier-Scale Chinese LLM Built on Non‑Nvidia Chips
DeepSeek V4‑Pro is the latest large language model from Chinese AI start-up DeepSeek, designed to deliver frontier-level performance at a fraction of Western prices. The model is built on a 1.6-trillion-parameter architecture but activates only 49 billion parameters per token, giving it the output quality of a cutting-edge system at what one expert called the compute cost of a 37-billion-parameter model. Crucially, V4‑Pro was trained on Huawei Ascend chips rather than Nvidia GPUs, a symbolic break from reliance on US hardware. DeepSeek previously used Nvidia chips for its V3 and R1 models, but Huawei has now become a strategic backer, reportedly providing Ascend 950 silicon. DeepSeek pitches V4‑Pro as a value play: the company says it trails top US models such as GPT‑5.4 and Gemini 3.1 Pro by only a few months, yet offers API access at radically lower prices than Anthropic and OpenAI.
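The sparse-activation claim above is easy to sanity-check. A minimal sketch, using only the parameter counts quoted in this article (the figures are the article's, not independently verified):

```python
# Back-of-envelope check of V4-Pro's sparse-activation economics,
# based solely on the numbers quoted in the article.
total_params = 1.6e12   # 1.6 trillion total parameters
active_params = 49e9    # ~49 billion parameters activated per token

# Fraction of the model doing work on any given token.
active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.1%}")
```

On these figures, only about 3% of the model's weights are engaged per token, which is why a 1.6-trillion-parameter system can run with the per-token compute bill of a far smaller dense model.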

Why Nvidia’s US$5 Trillion Valuation Depends on AI Compute Demand
Nvidia sits at the centre of the global AI boom. Its GPUs power most leading generative AI models, and its CUDA software stack has become the de facto standard for accelerating AI workloads beyond graphics. This dominance has helped Nvidia reach a market capitalisation of about US$5.06 trillion (approx. RM24 trillion), ahead of tech giants like Alphabet, with Wall Street analysts overwhelmingly rating the stock a buy. Investor optimism is anchored in sustained demand for AI training and inference, as hyperscalers, enterprises and start-ups race to deploy ever larger models. Nvidia’s roadmap reinforces that story: it continues to roll out more powerful GPU architectures, such as the upcoming Vera Rubin generation, to lock in its lead. Any technology that lowers the compute needed for frontier-level AI, or shifts workloads to rival chips, therefore strikes at the narrative that underpins much of Nvidia’s current valuation and growth expectations.

Performance‑per‑Dollar Shock: How DeepSeek Undercuts Western Models
DeepSeek V4‑Pro is engineered to slash AI infrastructure costs by maximising performance per dollar. According to public pricing, V4‑Pro’s output costs about US$3.48 (approx. RM17) per million tokens, versus roughly US$30 (approx. RM143) for Anthropic’s Claude Opus and US$25 (approx. RM119) for OpenAI’s comparable tier. That implies a near order‑of‑magnitude cost advantage, even before considering DeepSeek’s ultra‑budget V4‑Flash, which is priced at around US$0.28 (approx. RM1.34) per million tokens. These economics matter because inference, not just training, dominates the long‑term cost of running generative AI services. If enterprises can obtain near‑frontier quality from a Chinese LLM at a fraction of the cost, they may be less inclined to pay premium prices for Western closed-source APIs. Christian Schmidt of Samsung Mena noted that “the cost of frontier AI just dropped again,” underscoring the deflationary shock DeepSeek introduces into the AI model price war.
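The price gap described above can be put in ratio terms with a quick calculation. A sketch using the per-million-token output prices quoted in this article (prices change frequently; these are the article's snapshot figures, not live rates):

```python
# Per-million-token output prices (US$) as quoted in the article.
prices = {
    "DeepSeek V4-Pro": 3.48,
    "DeepSeek V4-Flash": 0.28,
    "Claude Opus": 30.00,
    "OpenAI comparable tier": 25.00,
}

# Express each price as a multiple of V4-Pro's rate.
baseline = prices["DeepSeek V4-Pro"]
for model, usd in prices.items():
    print(f"{model}: ${usd:.2f}/M tokens ({usd / baseline:.1f}x V4-Pro)")
```

On these numbers, Claude Opus works out to roughly 8.6 times V4‑Pro's rate and OpenAI's comparable tier to about 7.2 times, which is the "near order-of-magnitude" gap the article describes.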

Chinese LLM Competition, Export Controls and the Huawei Factor
V4‑Pro’s reliance on Huawei Ascend chips gives it significance beyond pricing. US export controls were designed to slow China’s access to advanced Nvidia GPUs, implicitly reinforcing a US chip monopoly at the high end of AI. DeepSeek’s success on Huawei hardware shows this assumption is now “empirically wrong,” as one AI founder put it. The 2026 Stanford AI Index already describes Chinese labs as having “effectively closed” the performance gap with Western peers, and DeepSeek claims it is only a few months behind leading US models. Huawei’s reported decision to deny Nvidia and OpenAI access to its latest Ascend chips further sharpens the geopolitical edge. Nvidia’s CEO Jensen Huang has warned that the DeepSeek–Huawei partnership would be “horrible for the US,” reflecting fears that a viable, lower‑cost Chinese hardware–software stack could anchor an alternative AI ecosystem across China and much of Asia, diluting Nvidia’s influence.

What It Means for Malaysia: Cheaper AI, New Clouds and Hybrid Strategies
For Malaysian enterprises and cloud providers, DeepSeek V4‑Pro signals the arrival of serious Chinese LLM competition that could reshape AI infrastructure costs. If V4‑Pro’s economics hold at scale, regional platforms could offer AI APIs at far lower prices than those tied to Nvidia-heavy stacks, easing barriers for SMEs, local SaaS players and public-sector projects. At the same time, US export controls and data‑sovereignty concerns mean many organisations will prefer or be required to keep at least part of their workloads on US‑aligned or local infrastructure. This points to a likely hybrid future: Malaysian companies selectively mixing US models from Nvidia‑backed ecosystems with Chinese options like DeepSeek for cost-sensitive or domestic use cases. Investors should watch for local telcos and data‑centre operators partnering with Chinese cloud vendors or Huawei‑based AI stacks, as these moves could accelerate an AI model price war in Southeast Asia and compress margins across the value chain.
