The new generation of AI supercomputing chip architecture has been officially released, with significant breakthroughs across its headline performance metrics. Compared with the previous generation, inference-phase costs have been reduced to one-tenth, marking a turning point for the economics of large-scale model deployment. At the same time, the number of GPUs required for training has been cut by 75%, meaning enterprises can complete the same computational tasks with far less hardware. Energy efficiency has increased fivefold, substantially reducing power consumption and heat dissipation at the same level of computing power.
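To make the claimed multipliers concrete, here is a minimal sketch that applies them to a purely hypothetical baseline deployment. The baseline figures (GPU count, per-inference cost, power draw) are illustrative assumptions, not numbers from the announcement; only the ratios come from the text above.

```python
# Hypothetical baseline deployment; all absolute figures are illustrative assumptions.
baseline = {
    "training_gpus": 1000,        # GPUs needed on the previous generation (assumed)
    "inference_cost_usd": 1.00,   # cost per unit of inference work (assumed)
    "power_kw": 700.0,            # power draw for a fixed workload (assumed)
}

# Generational multipliers as claimed in the announcement.
INFERENCE_COST_FACTOR = 1 / 10   # inference cost reduced to one-tenth
TRAINING_GPU_FACTOR = 1 - 0.75   # 75% fewer GPUs for the same training job
EFFICIENCY_GAIN = 5              # 5x energy efficiency at the same compute level

new_gen = {
    "training_gpus": baseline["training_gpus"] * TRAINING_GPU_FACTOR,
    "inference_cost_usd": baseline["inference_cost_usd"] * INFERENCE_COST_FACTOR,
    "power_kw": baseline["power_kw"] / EFFICIENCY_GAIN,
}

for key in baseline:
    print(f"{key}: {baseline[key]} -> {new_gen[key]:.2f}")
```

Under these assumptions, the same workload would need 250 GPUs instead of 1,000 and draw roughly 140 kW instead of 700 kW, which is the scale of saving the announcement implies.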
Innovations at the architectural level are equally notable: this is the first time confidential computing has been implemented at the rack level. GPU-to-GPU interconnect bandwidth reaches 260 TB/s, a data-flow rate sufficient to support ultra-large-scale parallel computing. The entire platform has been thoroughly redesigned, abandoning traditional cables, hoses, and fans in favor of a more compact and efficient hardware organization. The core engine consists of six modular components, offering greater flexibility for customization and expansion. The release of this generation will undoubtedly reshape the cost structure and deployment patterns of the AI computing market.
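For a sense of what 260 TB/s of aggregate rack-level bandwidth means per device, the sketch below divides it across a few assumed rack sizes. The GPU counts are hypothetical; the announcement does not state how many GPUs share the interconnect.

```python
# Rough per-GPU share of the quoted 260 TB/s aggregate interconnect bandwidth.
# The rack sizes below are illustrative assumptions, not figures from the announcement.
AGGREGATE_TBPS = 260  # quoted total GPU-to-GPU bandwidth at rack level (TB/s)

for gpus_per_rack in (72, 144):  # hypothetical rack configurations
    per_gpu_tbps = AGGREGATE_TBPS / gpus_per_rack
    print(f"{gpus_per_rack} GPUs/rack -> ~{per_gpu_tbps:.2f} TB/s per GPU")
```

Even under the larger assumed configuration, each GPU would see well over a terabyte per second of fabric bandwidth, which is what makes the ultra-large-scale parallel workloads described above feasible.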