Microsoft Open-Sources New Phi-4 Version: 10x Inference Efficiency, Runs on Laptops
Jin10 reported on July 10 that Microsoft open-sourced the latest member of the Phi-4 family, Phi-4-mini-flash-reasoning, on its official website this morning. The mini-flash version continues the Phi-4 family's hallmark of small parameter counts with strong performance. It is designed specifically for scenarios constrained by compute, memory, and latency, can run on a single GPU, and is suited to edge devices such as laptops and tablets. Compared with the previous version, mini-flash adopts SambaY, Microsoft's self-developed architecture, boosting inference efficiency by up to 10x and reducing average latency by 2–3x, a significant overall improvement in inference performance.