[AI Key Points Summary]
Financial Performance
- Full-year 2025 revenue was 79 million USD, up 159% year-on-year
- Revenue from AI-native products was 53 million USD, up 143% year-on-year
- Open platform revenue approached 26 million USD, up 198% year-on-year
- Gross margin improved to 25.4%, up 13.2 percentage points from 12.2% in 2024
Business Progress
- Released three language models: M2, M2.1, and M2 Her, with M2 becoming the first Chinese model on OpenRouter to exceed 50 billion tokens in a single day
- Released the M2.5 model, setting a new industry record in the SWE-bench Verified test, with inference efficiency improving by 37% compared to M2.1
- Served over 236 million users cumulatively, covering more than 200 countries and regions
- Provided services to 214,000 enterprise clients and developers, with international market revenue accounting for over 70%
Next Quarter's Performance Guidance
- ARR as of February 2026 has exceeded 150 million US dollars
- The average daily token consumption of the M2 series models in February 2026 increased to over six times that of December 2025
- Token consumption for the programming package grew by more than 10 times over the past two months
- The number of new registered users in February 2026 reached more than four times that of December 2025
Opportunity
- Plans to launch the M3 and Conch 3 series models, designed to address L4- to L5-level intelligence challenges
- Expanding application scenarios in fields such as programming, office work, and dynamic creation
- Collaborating with top global cloud providers like Google AI, Azure AI Foundry, and AWS
- Inference costs for the M2 series service model have decreased by more than 50% per million tokens
Risk
- Facing fierce market competition from giants and other ventures
- Needing to continuously optimize computing power efficiency and infrastructure development
[AI Conference Record]
Operator
Ladies and gentlemen, hello, and welcome to the Minimax full-year 2025 earnings conference call. Please note that this call provides simultaneous English interpretation of management's remarks and the Q&A session; you will need to switch to the English channel to hear the interpretation provided by a third-party interpreter. Please be advised that today's conference is being recorded. Now, let us invite Meredith Yu, the company's Director of Investor Relations, to speak.
Meredith Yu
Thank you, operator. Hello, everyone. I'm Meredith Yu, and welcome to the Minimax full-year 2025 earnings conference call. Before we begin, we would like to remind you that today's call may contain forward-looking statements involving various risks and uncertainties. Actual results may differ materially from those discussed today. Unless required by law, the company assumes no obligation to update these forward-looking statements.
For important information related to this call, including the risks associated with forward-looking statements, please refer to the company's public filings and the full-year 2025 earnings announcement for the year ended December 31, 2025, released earlier today on the company's website. During today's call, management will also discuss certain non-IFRS financial metrics, which are provided as supplementary measures only and should not replace the financial performance indicators prepared in accordance with IFRS.
For the definitions of non-IFRS financial metrics and their reconciliation with IFRS financial results, as well as relevant risk factors, please refer to the company's full-year 2025 earnings announcement. In today's conference call, management will primarily communicate in Chinese, including their presentations and the Q&A session. A third-party interpreter will provide real-time English interpretation to enhance the efficiency of the meeting. If there are discrepancies between the interpretation and the original Chinese content, the original Chinese statements by management shall prevail.
Finally, unless otherwise specified, all monetary units mentioned during this conference call are in US dollars. Now, let’s welcome Dr. Yan Junjie, Founder, Chairman, and CEO of Minimax, to speak.
Yan Junjie
Dear investors and analysts, good day. This is Yan Junjie. Thank you for joining the company’s first earnings release conference call after our IPO. I would like to take this opportunity to share with you our progress over the past year as well as the next phase of our corporate strategy.
First, looking back at 2025, the keyword for the year was 'building a solid foundation.' In 2025, we developed comprehensive R&D capabilities across all modalities, including language, video, voice, and music, each with globally competitive models. At the same time, we continuously upgraded our products through technological innovation, including development platforms for enterprises and developers, as well as consumer-facing products such as Minimax Agent, Conch AI, Talk, Xingye, and others.
Our globalization efforts also became deeper and more substantive in 2025. On language models, in the fourth quarter of last year we released three models: M2, M2.1, and M2 Her. Among them, M2 redefined the balance between performance, price, and speed, with its three key capabilities (programming, tool use, and deep search) approaching a globally top-tier level.
After the release of M2, it quickly gained recognition from developers worldwide, becoming the first Chinese model on OpenRouter to exceed 50 billion tokens in a single day and topping Hugging Face’s global weekly trending list. Building upon M2, we soon launched M2.1, which focused on enhancing performance for complex tasks in real-world scenarios, particularly improving the ability to understand and execute complex instructions in programming and office environments.
Meanwhile, we introduced M2 Her as the foundational model supporting two AI interactive products, Xingye and Talk, focusing on delivering more natural and personalized conversational experiences. In long-form dialogue tests spanning 100 rounds, its overall performance ranked first globally. In February 2026, we released M2.5, which achieved top-tier levels across programming, tool usage, and office scenarios.
In programming, M2.5 set a new industry record on the SWE-bench Verified test, with a 37% improvement in reasoning efficiency over our previous generation, M2.1. More importantly, M2.5 made the cost of running complex agents economically feasible. For instance, at 100 tokens per second, the fastest output speed among mainstream global models, M2.5 can run continuously for an hour at a cost of only about one US dollar.
This means that ten thousand US dollars could basically allow four agents to run continuously for a year. The breakthrough in model capability also drove rapid growth in usage. After its release, it dominated OpenRouter’s rankings for two consecutive weeks. From M2 to M2.1 and then to M2.5, each generation of our models has seen significant improvements in both capability and usage. By February 2026, the average daily token consumption of the M2 series models had grown to more than six times that of December 2025.
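As a quick sanity check, the speed and price figures above (100 output tokens per second and roughly one US dollar per hour of continuous generation, both from the call) imply a per-million-token output price. The short sketch below derives it; the derivation is ours, not a figure quoted by management:

```python
# Back-of-the-envelope check of the M2.5 cost claim from the call:
# at 100 output tokens/second, one hour of continuous generation
# produces 360,000 tokens for roughly 1 USD.

TOKENS_PER_SECOND = 100   # fastest output speed cited on the call
COST_PER_HOUR_USD = 1.0   # cost of one hour of continuous output, per the call

tokens_per_hour = TOKENS_PER_SECOND * 3600  # 360,000 tokens per hour
implied_price_per_million = COST_PER_HOUR_USD / tokens_per_hour * 1_000_000

print(f"Tokens per hour: {tokens_per_hour:,}")
print(f"Implied output price: ~${implied_price_per_million:.2f} per million tokens")
```

Under these assumptions the implied output cost is on the order of a few US dollars per million tokens, which is the scale at which budgeting long-running agents becomes straightforward.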
Within that, token consumption from programming packages has increased more than tenfold over the past two months. On multimodality, we have developed generation capabilities across three major modalities: video, voice, and music. Last October, we released our video model Conch 2.3, which achieved significant improvements in character movement, image quality, and stylization.
At the same time, we launched a faster 'Fast' variant, which can reduce batch production costs by up to 50%. We upgraded the Media Agent in Conch AI to support all-modal comprehensive creation with one-click video generation. By the end of 2025, our video models had helped creators worldwide cumulatively generate over 600 million videos.
Last October, we also released our voice model Speech 2.6, optimized for voice scenarios, improving the voice-interaction experience with globally leading ultra-low latency while supporting more than forty languages. By the end of 2025, our voice models had helped global users cumulatively generate over 200 million hours of voice content, becoming part of the core technological infrastructure in the field of voice intelligence.
Our newly released music models, Music 2.0 and 2.5, have also achieved significant improvements, enabling stable handling of various singing styles and emotional expressions. While developing these models and products, we have continued to evolve towards an AI-native organization. Internally, AI agent 'interns' now support more than 90% of the company's workforce, spanning programming and development, data analysis, operations management, HR recruitment, marketing, and sales.
We also view ourselves as an experimental ground for evolving into an AI-native organization. We believe that this experience is crucial for enhancing the company’s ability to seize AI opportunities. In January this year, we further productized the capabilities accumulated above and launched Minimax Agent 2.0, allowing agents to enter local workspaces. We also introduced expert agent functionality, enabling users to create agents specialized in professional fields.
By the end of February, shortly after launch, our professional users had cumulatively created over 50,000 expert agents, injecting deep knowledge and capabilities to solve more specialized domain problems. We are aware that the OpenCoder project has recently gained significant popularity. In fact, long before OpenCoder took off, its founder Peter praised our model, then at version M2.1, very highly, considering it his top choice among open-source models.
After OpenCoder's official release, the M2 series demonstrated comprehensive advantages in both performance and cost, allowing many developers to use OpenCoder at low cost and with broad accessibility. Not long ago, we also launched Mars Cloud within our own AI products to further lower the barrier to entry for users.
Next, let us discuss the progress in commercialization. In 2025, we achieved revenue of 79 million US dollars for the entire year, representing a year-on-year increase of 159%. Among this, revenue from AI-native products was 53 million US dollars, growing 143% year-on-year. Revenue from the open platform approached 26 million US dollars, increasing by 198% year-on-year.
Entering 2026, growth has accelerated significantly. For example, our open platform products targeting enterprise customers and developers saw new user registrations in February 2026 reach more than four times the level of December 2025. As of December 31, 2025, we had cumulatively served over 236 million users across more than 200 countries and regions, including 214,000 enterprise clients and developers from over 100 countries and regions.
In 2025, our international market revenue accounted for over 70% of total revenue. The proportion of international market income in our B2B open platform products also exceeded 50%. After the release of our M2.5 model, it garnered significant attention in the international market, attracting many new international clients to actively seek cooperation. The word-of-mouth effect continues to spread, with leading global cloud providers and AI-native platforms such as Google AI, Azure AI Foundry, AWS, and Fireworks AI having deployed our models.
At the same time, it is also the preferred default model for leading coding platforms like OpenCoder. Earlier today, we also noticed that Notion launched M2.5, which is Notion's first and only open-source model option to date. While providing these services, our computing efficiency has also improved significantly.
Thanks to iterative improvements in algorithm optimization, operator implementation, and encoding and decoding engineering, as of February 2026 the inference cost per million tokens for the M2-series serving models had decreased by more than 50% compared with December 2025. The inference latency of our video generation models, which is equivalent to their inference cost, dropped by over 30% in the same period.
As our model technology continues to iterate, our economies of scale are beginning to show. Gross profit for full-year 2025 was 20 million US dollars, a year-on-year increase of 437%. Gross margin rose to 25.4%, up 13.2 percentage points from 12.2% in 2024. On expenses, marketing costs in 2025 decreased 40% year-on-year, while R&D costs increased 33.8% year-on-year, significantly below the revenue growth rate.
The adjusted net loss for the full year of 2025 was 250 million US dollars. With the continuous advancement of commercialization and cost optimization brought by model fine-tuning, the adjusted net loss ratio has also narrowed significantly. In the first two months of 2026, we have already seen strong growth momentum. Our ARR in February 2026 has exceeded 150 million US dollars.
Next, I would like to share with you our outlook for 2026. We believe that in 2026, intelligence will still see significant development. Our goals for improvement focus on three main areas. First, we believe that in the field of programming, L4 to L5 level intelligence will emerge, moving from being a tool to achieving colleague-level collaboration.
Second, we believe that in the office domain, the next year will replicate the progress speed seen in the programming field last year. The delivery capability and penetration rate of AI agents in the office sector will significantly improve. Third, we believe that dynamic content creation this year will evolve into deliverable long-form content, even approaching real-time streaming output formats.
These three developments combined imply new technical challenges, an explosion in the large-scale supply of intelligence, and a vast window of innovation at the application layer. This also means the demand we handle could be further amplified, with token volumes likely to grow by one to two orders of magnitude. The M3 and Conch 3 series models currently under development are designed specifically to address these challenges.
In this process, we are rapidly enhancing our infrastructure and continuing to attract talent. Last year’s focus was solely on training efficiency, but now we are shifting towards higher R&D efficiency and faster model iteration cycles. At the company strategy level, we will evolve from being a large-model company to becoming a platform company in the AI era. We know that platform companies in the internet era are traffic gateways; in the AI era, we believe platform companies will define and drive new intelligent paradigms, enjoying paradigm dividends in both products and business.
This depends on the ability to define intelligent paradigms, technological and product innovation capabilities, and scalable infrastructure with high token throughput efficiency. The value of a platform company in the AI era, we believe, can simply be estimated as the density of provided intelligence multiplied by token throughput. When both are sufficiently strong, the value of the platform will naturally become apparent.
We can clearly see the accelerating expansion trend in the current AI industry. Breakthroughs in model capabilities, the implementation of agent applications, and the maturation of commercial pathways are continuously pushing the ceiling higher. Based on our proven potential for sustained growth through our own technology R&D and product capabilities, we are highly confident in striving to be a core builder of AI platforming. Thank you all for listening, and now we can move on to the Q&A session.
Operator
Thank you, management. Just a reminder: if you need to ask a question, please press the star key followed by the number one key. If you wish to withdraw your question, press the hash key. Please ask your question in Chinese. We will now move into the Q&A session, and the first question comes from Gary Yu of Morgan Stanley.
Gary Yu
Great, thank you, management, and thank you, Mr. Yan, for your insights. My question is about our goal to become an AI platform company. However, we can see that Google and OpenAI are also pursuing this path. How do we interpret what it means to be a platform company in the AI era? And why does Minimax, as a venture, have the opportunity to become an AI platform company? Thank you.
Yan Junjie
Thank you for your question. This is something we have been actively discussing internally. As mentioned earlier, when intelligence boundaries are pushed, a vast amount of new scenarios and users generally emerge, leading to the formation of new ecosystems and commercialization opportunities. For example, companies already exist in areas such as coding and image generation.
So why do we believe Minimax has the potential to become a platform company in the AI era? We have several reasons. First, AI is not yet a market of zero-sum competition over existing demand; rather, it is a market whose annual growth far exceeds existing capacity. Nor is it a winner-takes-all market: anyone with innovation and uniqueness will have their own opportunity.
Moreover, in the next two to three years, we expect significant improvements in both our model development capabilities and infrastructure. We also see potential opportunities to open up new scenarios, whether in programming, office work, or interactive entertainment, all of which present enormous room for innovation and market expansion.
In the fast-growing market environment we are currently in, we see our opportunities at several levels. First, on the model level, we believe the core element is heavily reliant on long-term accumulation and rapid iteration. This itself represents a high barrier to entry.
For instance, over the past 108 days we consecutively launched M2, M2.1, and M2.5, each of which brought exponential growth in user engagement. In addition, we began building multi-modal models from day one of the company, which I believe makes us the only venture to have done so. This positions us to further amplify our advantages as the trend towards multi-modal integration unfolds.
On the product side, we are actually the first domestic company that develops both models and products. We believe that combining model and product capabilities creates a stronger competitive advantage because the capability of the model directly defines the capability of the product.
This integrated capability of model-as-a-product is something we believe most companies will find hard to replicate because it is a highly unique ability. Additionally, there is another layer, which I think lies in the ecosystem layer. In fact, I believe that in the past, by leveraging the characteristics of our own models, we have already started to foster some small-scale ecosystems.
For example, we have played a very active role within the OpenCoder ecosystem. From the earliest days, many of our models were used for development within OpenCoder; and because our models offer strong cost-performance, they are highly suitable for high-throughput scenarios, which has helped lower the barrier for many developers. Furthermore, as our products have integrated more deeply with OpenCoder, specifically through our agent Mars Cloud, the user's barrier to entry has dropped further.
At the same time, we are also starting to contribute more of our own code to OpenCoder. We have demonstrated and proven the ability to help an ecosystem grow rapidly. Forming our own ecosystem, however, is something that is only just beginning.
Going forward, on one hand, we plan to further push the boundaries of intelligence through the next-generation M3 and Conch 3 models mentioned earlier, establishing the uniqueness of our models. On the other hand, we hope to create unique products and ecosystems around our own models. I believe that apart from a few large companies, we may be the only venture in Asia currently capable of achieving both of these objectives. Thank you.
Operator
Next question. Your next question comes from Alex Yao with JP Morgan, please.
Alex Yao
Thank you, management, for taking the time this evening, and congratulations on the strong performance. I would like to ask about multimodality: the company has always emphasized that multimodality is the ultimate goal of AI. If competitors focus on excelling in one modality first and then integrate the others later to achieve mastery across areas, could that be a faster path? Could our insistence on multimodality actually be a slower and more burdensome strategy? Thank you.
Yan Junjie
Thank you for your question. In fact, this is a question we have been challenged with since day one of our venture. I would like to take this opportunity to explain why we are committed to multimodality. We believe that the integration of multiple modalities is a fundamental prerequisite for continuously enhancing intelligence. In fact, over the past six months, several models have pushed new intelligent boundaries and unlocked new scenarios through the fusion of modalities.
For instance, Google's Imagen Pro, widely used in image generation, greatly expands the boundaries of image generation by integrating visual understanding and generation. Regarding multimodality, we have divided our approach into two phases; we have completed the first and have now entered the second.
The first phase refers to the past four years, during which we continuously accumulated and developed influential models in each modality, building a strong industry reputation, for example, the language, visual, voice, and music models mentioned earlier. In other words, we have successfully developed each modality independently and made significant progress.
What we are currently going through is the second step: now that we have succeeded in each modality individually, the next step is to integrate them as much as possible, hoping for new breakthroughs. In fact, M3 and Conch 3, which we plan to release in the first half of this year, will be the results of this effort.
I want to emphasize two points. First, the accumulation in each modality is actually a long process, from initial data, then methods, and finally corresponding talent. Each part of the chain requires a significant amount of time. I believe this is also our core strength and uniqueness. As we mentioned earlier, we are one of only three companies domestically that can achieve relatively leading levels in every modality, and we are also the only venture.
The second point I want to make is that video generation is currently one of the largest markets in the AGI field, aside from coding and intelligent assistants. We believe that this year, the field can advance into mid-to-long form videos and near real-time generation. We think such technological changes could significantly expand this market.
As we smoothly progress with this dynamic integration, we will also gain unique opportunities in this market. Regarding the question you just asked, whether this will pose challenges to our R&D: First, I think it does bring challenges, but I believe they are necessary. In fact, since the first day of our company's founding, we believed that the understanding of AGI must include dynamic inputs and dynamic outputs.
Therefore, we organized and built the underlying capabilities that allow different modalities to be reusable. Under our highly AI-native organizational structure, you can see from our financial data that the cost of doing full modality work is not very high compared to other ventures and is far lower than the investments made by industry giants.
However, we have already developed competitive models in each modality, even outperforming companies focused on a single modality. We believe this has been a testament to our technical judgment and foresight over the past few years, and we believe this will become even clearer moving forward. Thank you.
Operator
Next question. Your next question comes from Weis Young with UBS. Please go ahead.
Weis Young
Good evening, management team, congratulations on the company's first strong results since the IPO, and thank you for taking my question. I would like to ask about what was mentioned earlier regarding the advent of L4- to L5-level programming intelligence. Recently, there has been a lot of talk in the market about software companies being replaced by agents. How do you view this industry trend and transformation, and where does the company stand in it? Thank you.
Yan Junjie
Yes, this is a very important question. Let me first explain what L4 to L5 level intelligence means, and then discuss the potential directions for future improvements and our own position. We believe that L3 level intelligence refers to what we currently consider normal agents, but L4 and L5 levels refer to colleague and organization-level intelligence.
Let me give you an example. One of our company's important missions is to create more advanced models. Creating a more advanced model requires many people to collaborate: it involves a great deal of algorithmic innovation and experimentation, optimization of training efficiency, extensive data processing, and even substantial machine maintenance, making it a highly comprehensive task.
In such a research and development process, we consider L4 level tasks to be more innovative, such as tasks that an individual researcher can complete independently, for instance, conducting experiments based on a research paper, or proposing efficient solutions to challenging engineering problems. This represents the L4 level, meaning not only can deterministic tasks be accomplished, but there is also room for innovation within those tasks.
L5-level intelligence refers not only to what one person can do, but to effectively coordinating the work of many people. Regarding coding, we believe it is part of an agent's capabilities and one of the earliest validated abilities within agents' productivity scenarios. Beyond coding, another important area is office work, which we mentioned earlier. We believe its development will progress very rapidly, and its potential market may be larger than that of programming.
Next, how do we view our position amidst these changes? First, we believe we are facing a massive market. For instance, take programming; it's not just about helping professionals write better code, but enabling more people to write code. Even so, the number of people who need to write code at work is still relatively small, while most people spend their time on white-collar office tasks.
Many tasks illustrate this. Take data analysis: preparing an earnings report requires handling a large amount of financial work, culminating in a published results document, and sometimes we also need to prepare presentations. These tasks involve far more professionals than pure programming does.
In the fields of programming and office work, we have already made some initial progress and achieved certain unique advantages with very limited resources. The bigger picture, however, is just beginning. In this process, I think we have two main characteristics. The first is that we move fast.
As mentioned earlier, from M2 to M2.5 across three generations of models, we spent only 108 days, which means we maintained the fastest iteration speed in the industry. Each generation of models showed significant improvements in capability and usage. This demonstrates our R&D capacity and the ability of our models to handle traffic.
In fact, the M2 series models were developed using very limited resources. Currently, we have significantly more resources, and we believe that as our resources grow, the pace of model improvement will accelerate. Stronger models will further unlock higher ceilings.
So the historical performance everyone currently sees is actually based on the M2 series of models. Our goal, however, is that we hope our next series of models, the M3 series, can further expand possibilities and create a positive flywheel effect. In addition to running fast enough, I believe we also have the capability to develop some unique models.
This is something that has been repeatedly proven over a period of time. As mentioned earlier, the overall market is large enough, and in such a vast market, it's not necessary to aim for a 'winner takes all' scenario. Instead, what we need is to have our own distinct characteristics. These mainly refer to our ability to define unique technical roadmaps and R&D capabilities, rather than just following trends or echoing others.
In our second series of models, such as M2, Conch 2, and Speech 2, we did not pursue superiority across all dimensions. However, we defined our own unique advantages. For instance, with M2, our core focus was not only cost-effectiveness but also speed. For Conch 2, its complexity stood out, while for Speech 2, multilingual support and low latency were key.
These unique definitions have helped us differentiate ourselves and open up the market. As our resources increase significantly in the future, we believe this uniqueness will generate stronger momentum and deliver higher value. In summary, we are confident that, as our models grow stronger—especially in programming-driven agents and broader office scenarios—we can further increase our market share, achieve more breakthroughs, capture a larger market, iterate faster, and enhance our distinctiveness, positioning us more advantageously amid industry transformations. Thank you.
Operator
Thank you. Your next question comes from Ronald Young with Goldman Sachs. Please go ahead.
Ronald Young
Thank you, thank you Mr. Yan for sharing personally. In this space, there are giants as well as ventures, and there are open-source models. I would like to hear Mr. Yan's thoughts on which layer of competition we should focus on and which battles must be fought. Thank you.
Yan Junjie
Regarding this, as previously introduced, we are working hard to position ourselves as a platform company in the AI era. The core driving forces here are the continuous improvement of intelligence density and token throughput capacity. We believe that compared to other companies in our industry, we have made the following significantly differentiated strategic choices.
Firstly, in terms of strategic positioning, from day one, we have focused on full modality capabilities, continuously enhancing the intelligence density and boundaries of our models to create unique value. Around this unique value, we build products and businesses, choosing where to focus and where to step back in order to allocate resources effectively. For example, in 2023, we made a clear decision that we would firmly avoid developing general-purpose personal AI assistants for mobile devices, similar to DouBao and ChatGPT dialogue products.
We have been clear from the very beginning that if we firmly choose not to do something, it's because we don't create unique value in that area. On the contrary, we concentrate our resources on model R&D and product innovation where we can generate unique value, such as our agent, products like Touxin Yinliang, and our Conch Video. We believe this strategic choice will help us build more long-term differentiation and increase our odds of making successful decisions.
The second example is that we have adhered to multimodality from the start. As introduced earlier, the accumulation in different modalities is crucial, and now we have reached a critical point where different modalities must be integrated. This allows us to secure a more advantageous position under the trend of full modality integration.
Next, I want to emphasize R&D efficiency. In the AI era, what ultimately determines success isn’t simply burning cash or resources, but how fast a model progresses, thereby generating larger-scale commercial revenue and market size.
In every aspect of R&D, we are improving our iteration efficiency. From algorithm optimization to experimental design, from the number of experimental iterations to analytical decision-making mechanisms, we fully leverage the more agile organizational structure of a venture, combining top-down and bottom-up approaches and reusing experience and infrastructure across different modalities to keep our R&D efficiency continuously ahead.
In the long term, globally, we believe only a few AI platform companies will remain in the core echelon of the entire industry. We believe we already possess certain advantages and uniqueness, and we consider ourselves one of the few potential independent companies.
Operator
Next question please. Thank you. Your next question comes from Zhang Hai Yu with CICC. Please go ahead.
Zhang Hai Yu
Hello everyone. Congratulations on the company's performance, which exceeded our expectations by quite a margin. You just mentioned that in the first two months of 2026, token usage for the M2 series models has already reached six times the level of December last year. Though we anticipated growth, this pace is still astonishing. I estimate it may be related to factors like OpenCoder going viral, as well as the significant upgrade in the usability of the M2 series' coding capabilities.
So my question is: do you think this trend represents the early burst of a one-time dividend, with short-term peaks and troughs, or the beginning of a longer, sustainable trend? Thank you.
Yan Junjie
Thank you for this question. The signals we are seeing represent the beginning of what we believe to be a long-term trend, not a one-time dividend. Of course, we all need to anticipate that growth in this industry will be staircase-like rather than a simple linear extrapolation.
We believe that our ability to continuously roll out new models and capture a larger proportion of industry opportunities hinges on one key thing: leveraging our understanding of intelligent iteration to prepare our R&D resources in advance and define each generation of models. Beyond what we’ve already seen with M2, I’d like to talk about where we believe the next phase of growth will come from.
First of all, we have actually been actively preparing since the second half of 2025 to welcome 2026, during which we expect several super PMFs (Product-Market Fits) driven by the emergence of intelligence. We believe that the penetration rate and acceleration over the next year will be faster than expected, and the sources of growth will be more diversified.
The first point is that we believe the programming field still has significant room for growth. Although programming tools are already quite advanced as assistants, we firmly believe there will be substantial improvements this year, moving toward colleague-level collaboration and possibly, at a higher level of intelligence, toward innovative discovery and complex organizational coordination.
In programming, whether from the perspective of technological reserves, market demand, or our R&D progress, we see this happening with high probability this year. The second point relates to office scenarios across various professions, which is an area broader and potentially larger in market size than programming.
In reality, office tasks are more complex than programming because they involve many different professions and the use of more intricate tools. Moreover, many of these tasks include elements that are hard to verify, which poses challenges to foundation model iteration.
However, we remain highly confident and have made extensive preparations. We believe that the pace of development in the office domain this year may match the rapid progress we saw last year in programming. The third super PMF, we believe, lies in advancements in dynamic content generation—specifically, the ability to deliver mid-to-long-form content directly, which could further lower the threshold for adoption.
Looking back over the past two to three years, the competition among models has been a back-and-forth process with wins and losses. All companies face challenges, and no company can guarantee perpetual leadership. However, I am confident in our ability to continue winning key battles.
At the core of our strategy is breaking through technological boundaries and leveraging these breakthroughs to enhance the ecosystem attributes of our products and services, allowing us to benefit from greater dividends. We are confident in growing alongside the industry, improving our uniqueness, R&D efficiency, innovation capabilities, and global commercialization capacity, thereby building a more scalable and sustainable competitive edge.
Operator
Next question, please. Thank you. Your next question comes from Thomas Chong with Jefferies. Please go ahead.
Thomas Chong
Good evening, and thank you to the management for taking my question. You just mentioned that agent interns cover 90% of employees, which puts you well ahead of the curve. What unique insights does using the company itself as a testing ground bring that others might not see? And how do these insights feed back into your products and technology? Thank you.
Yan Junjie
Thank you for this question. We are not only a company that develops AI; we also hope to become a platform-oriented AI company in the future. In this process, we hope that while developing AI, we ourselves can become an AI-native organization. This is a core pursuit among our organizational goals.
Here, I want to talk about two things. One is speed, which mainly refers to the speed of progress. The core motivation for wanting ourselves to become an AI-native organization comes from the limited resources of a venture. We must maximize our organizational efficiency to have a greater chance of success.
Since we began promoting this within the company, more and more colleagues have integrated AI into their work. We have observed a clear trend: initially, humans taught agents how to perform tasks; increasingly, humans now observe how agents perform tasks, and sometimes the agents even bring surprises.
This has significantly shortened our organizational chain, allowing each link to benefit from increased intelligent dividends. From model iteration to product innovation, to serving users, our iterative loop is actually constantly accelerating. Our team members also have time to focus on higher-value tasks, further enhancing the thinking and innovation capabilities of our entire organization, which I believe is quite critical.
Moreover, this effort has brought significant benefits to our model development. It allows us to better understand how to define the goals of model intelligence. For example, as our agents operate more widely within the company, we can clearly observe that even the best models currently available still fall short in many areas.
It is precisely the areas where they fall short that hold the highest economic and practical value. These gaps will directly shape the R&D direction of our next-generation models and agents, helping us define our goals more quickly and clearly. I believe that as our models approach the world's top tier, the value of this effort will be increasingly amplified.
In the past few months, the iteration speed of our models, revenue growth, ability to serve users, and token throughput capacity have all been continuously improving. This also enables us to define model objectives more quickly and allow AI to fully deliver value internally. We believe that the concept of being an agent-native or AI-native organization is already showing positive flywheel effects within our company, and we consider this a core competitive advantage that will persist.
Operator
Thank you. Today’s Q&A session ends here. Now, let’s invite Mr. Yan to give his concluding remarks.
Yan Junjie
Thank you all for your participation today. If you have more questions, feel free to contact our investor relations team at any time. Thank you everyone.
Note: The above content is generated by an AI language model based on publicly available data and third-party automatic subtitles. The content does not represent any position of Futu, nor does it constitute any investment advice. Futu Group makes no express or implied guarantees or statements regarding the accuracy, timeliness, or completeness of the above content.