British media: OpenAI has shortened the safety testing time for AI models.
Jin10 Data reported on April 11, citing the Financial Times, that OpenAI has significantly reduced the time and resources devoted to safety testing of its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards. Staff and third-party teams have recently been given only a few days to "evaluate" OpenAI's latest large language model, compared with the months allowed previously. According to eight people familiar with OpenAI's testing process, the startup's testing has become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300 billion company faces pressure to release new models quickly and maintain its competitive edge. Insider sources said OpenAI has been pushing to release its new model, o3, as early as next week, giving some testers less than a week for safety checks; previously, OpenAI allowed months for safety testing.