【SenseTime Results】SenseTime's Net Loss Narrowed 59% to RMB 1.77 Billion Last Year, First Recording Positive EBITDA in Second Half


SenseTime (00020) reported that revenue last year rose 33% year-on-year to RMB 5.01 billion, a record high, while the annual net loss narrowed 59% to RMB 1.78 billion. Loss attributable to shareholders also narrowed 59%, to RMB 1.77 billion, with no dividend declared. Loss before interest, tax, depreciation, and amortization (LBITDA) was RMB 470 million, an 85% decrease year-on-year.

Adjusted LBITDA was RMB 650 million, down 79% year-on-year; adjusted net loss was RMB 1.96 billion, a 54% reduction. SenseTime said a sustained trend of significant loss reduction has been established, with the group’s net loss and adjusted net loss both narrowing year-on-year at an accelerating pace for four consecutive half-year periods. In the second half of last year, earnings before interest, tax, depreciation, and amortization (EBITDA) were approximately RMB 380 million, turning positive for the first time since listing. In the second half of 2025, the group’s operating cash flow also recorded its first positive net inflow since going public.

The results show that SenseTime’s gross profit last year was RMB 2.06 billion, up 27% year-on-year. Annual R&D expenditure was RMB 3.78 billion, down 9% year-on-year, benefiting from reduced employee benefit expenses, partly offset by increased server operation and cloud service costs.

GenAI revenue increased by 51% year-on-year

SenseTime indicated that last year’s 33% revenue increase was mainly driven by the continued growth of generative AI. Generative AI revenue grew 51% year-on-year to RMB 3.63 billion, primarily on surging demand for training, fine-tuning, and inference of generative AI models. Integrated industry solutions also supported the growth, promoting the joint commercialization of computing platforms, models, and applications and cultivating replicable best practices across industries.

Revenue from visual AI increased 3% year-on-year to RMB 1.08 billion, with revenue in the second half of last year up 21% year-on-year. Benefiting from a recovery in domestic demand and sustained overseas market growth, visual AI is entering a second growth phase through multimodal visual intelligence systems.

Revenue from the X innovative businesses declined 6% to RMB 300 million. SenseTime explained that this was mainly because the autonomous driving business was deconsolidated from the financial statements in August last year. The company said that in the second half of last year, four X innovative businesses contributed revenue: autonomous driving, smart healthcare, Yuan Luo Bo (home robots), and smart retail. “Over time, we expect the composition of the X innovative businesses to evolve as we incubate more X innovations or as businesses attract external investors and exit the consolidated financial statements. Year-on-year comparisons of this revenue line will therefore carry less significance in the future.”

Next quarter, a new foundational model based on the second-generation NEO architecture will be launched

SenseTime said that current mainstream multimodal model architectures have clear limitations in pushing intelligence further. In Q4 last year, the company therefore launched and open-sourced NEO, a new generation of native multimodal architecture. NEO completely abandons the mainstream “encoder-connector-LLM core” concatenation structure in favor of a unified “left brain (logic) + right brain (spatial)” foundation, enabling analysis of and decision-making about the complex physical world. SenseTime claimed that the NEO architecture has very high learning efficiency, reaching top-tier performance with only one tenth of the data and compute of comparable models, redefining the performance boundary and marking a new era of “native architecture” in multimodal systems.
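For readers unfamiliar with the two designs, the contrast can be sketched schematically. The toy code below illustrates the conventional “encoder-connector-LLM core” concatenation structure that the article says NEO abandons, versus a single unified model consuming all modalities at once. Every class and function here is a hypothetical stand-in for illustration only, not SenseTime’s implementation or any real library.

```python
# Toy sketch (all names hypothetical) contrasting the conventional
# concatenated multimodal pipeline with a unified "native" design.

class VisionEncoder:
    """Stage 1 of the conventional pipeline: image -> feature vectors."""
    def encode(self, image):
        return [float(px) for px in image]  # stand-in for real features

class Connector:
    """Stage 2: projects vision features into the LLM's token space."""
    def project(self, features):
        return [("img_token", f) for f in features]

class LLMCore:
    """Stage 3: a text-native model fed the projected tokens."""
    def generate(self, tokens):
        return f"answer based on {len(tokens)} tokens"

def conventional_pipeline(image, text):
    # "encoder -> connector -> LLM core": three separately built stages
    # bolted together, with vision grafted onto a text-first model.
    img_tokens = Connector().project(VisionEncoder().encode(image))
    txt_tokens = [("txt_token", w) for w in text.split()]
    return LLMCore().generate(img_tokens + txt_tokens)

class NativeMultimodalModel:
    """A 'native' architecture trains one model on all modalities
    jointly, with no separate encoder/connector stages to bridge."""
    def generate(self, image, text):
        tokens = [("px", float(p)) for p in image]
        tokens += [("word", w) for w in text.split()]
        return f"answer based on {len(tokens)} unified tokens"

print(conventional_pipeline([1, 2, 3], "what is shown"))
print(NativeMultimodalModel().generate([1, 2, 3], "what is shown"))
```

The practical difference claimed for native designs is that the modality bridge (the connector) no longer bottlenecks what the language core can see, since all inputs share one representation from the start.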

SenseTime plans to release a new foundation model based on the second-generation NEO architecture in Q2 this year, which it says will be the industry’s first to verify a new “scaling law” for understanding and generation under a native multimodal architecture. The model is expected to excel in full-modal reasoning, interactive perception and generation, spatial intelligence, and other areas. Its strong visual reasoning capabilities are intended to significantly enhance AI’s ability to process multimodal information such as images, videos, documents, and web pages, enabling more efficient handling of complex scene tasks and deeply empowering AI and embodied intelligence applications.
