    New model design could fix high enterprise AI costs

    November 6, 2025


    Enterprise leaders grappling with the steep costs of deploying AI models could find a reprieve thanks to a new architecture design.

    While the capabilities of generative AI are attractive, the immense computational demands of these models for both training and inference result in prohibitive expenses and mounting environmental concerns. At the centre of this inefficiency is the models’ “fundamental bottleneck”: an autoregressive process that generates text sequentially, token by token.

    For enterprises processing vast data streams, from IoT networks to financial markets, this limitation makes generating long-form analysis both slow and economically challenging. However, a new research paper from Tencent AI and Tsinghua University proposes an alternative.

    A new approach to AI efficiency

    The research introduces Continuous Autoregressive Language Models (CALM). This method re-engineers the generation process to predict a continuous vector rather than a discrete token.

    A high-fidelity autoencoder “compress[es] a chunk of K tokens into a single continuous vector,” which carries far more semantic bandwidth than any individual token.

    Instead of processing something like “the”, “cat”, “sat” in three steps, the model compresses them into one. This design directly “reduces the number of generative steps,” attacking the computational load.
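    The mechanics can be sketched in a few lines of code. The snippet below is a minimal illustration of that idea, not the paper's actual architecture: a small autoencoder that packs a chunk of K token embeddings into one continuous vector and reconstructs them, with K = 4 and all layer sizes chosen arbitrarily for the example.

    ```python
    import torch
    import torch.nn as nn

    # Illustrative sketch only: dimensions and layers are hypothetical choices,
    # not taken from the paper.
    K, d_token, d_latent = 4, 256, 512

    class ChunkAutoencoder(nn.Module):
        """Packs K token embeddings into one continuous vector, and back."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(K * d_token, d_latent), nn.GELU(), nn.Linear(d_latent, d_latent)
            )
            self.decoder = nn.Sequential(
                nn.Linear(d_latent, d_latent), nn.GELU(), nn.Linear(d_latent, K * d_token)
            )

        def forward(self, chunk):                 # chunk: (batch, K, d_token)
            z = self.encoder(chunk.flatten(1))    # one vector per K-token chunk
            recon = self.decoder(z).view(-1, K, d_token)
            return z, recon

    model = ChunkAutoencoder()
    tokens = torch.randn(8, K, d_token)           # stand-in for embedded token chunks
    z, recon = model(tokens)
    loss = nn.functional.mse_loss(recon, tokens)  # simple reconstruction objective
    ```

    The downstream language model then autoregresses over these vectors, producing one vector, and therefore K tokens, per generative step.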

    The experimental results demonstrate a better performance-compute trade-off. A CALM model that groups four tokens per step delivered performance “comparable to strong discrete baselines, but at a significantly lower computational cost”.

    One CALM model, for instance, required 44 percent fewer training FLOPs and 34 percent fewer inference FLOPs than a baseline Transformer of similar capability. This points to a saving on both the initial capital expense of training and the recurring operational expense of inference.
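    In concrete terms, the savings compound with scale. The back-of-envelope calculation below uses invented placeholder budgets; only the 44 percent and 34 percent reductions come from the reported figures.

    ```python
    # Back-of-envelope illustration: the baseline budgets are invented placeholders,
    # only the 44% / 34% reductions come from the reported results.
    baseline_train_flops = 1.0e23            # assumed total training budget
    baseline_infer_flops_per_token = 2.0e9   # assumed per-token inference cost

    calm_train_flops = baseline_train_flops * (1 - 0.44)            # 44% fewer training FLOPs
    calm_infer_flops = baseline_infer_flops_per_token * (1 - 0.34)  # 34% fewer inference FLOPs

    print(f"Training:  {calm_train_flops:.2e} vs {baseline_train_flops:.2e} FLOPs")
    print(f"Inference: {calm_infer_flops:.2e} vs {baseline_infer_flops_per_token:.2e} FLOPs per token")
    ```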

    Rebuilding the toolkit for the continuous domain

    Moving from a finite, discrete vocabulary to an infinite, continuous vector space breaks the standard LLM toolkit. The researchers had to develop a “comprehensive likelihood-free framework” to make the new model viable.

    For training, the model cannot use a standard softmax layer or maximum likelihood estimation. To solve this, the team used a “likelihood-free” objective with an Energy Transformer, which rewards the model for accurate predictions without computing explicit probabilities.
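    One common likelihood-free objective of this kind is the energy score, a proper scoring rule that needs nothing but samples from the model. The sketch below illustrates the general principle of rewarding accurate predictions without explicit probabilities; it is not presented as the paper's exact loss.

    ```python
    import torch

    def energy_score_loss(sample_a, sample_b, target):
        """Generic energy-score objective (illustrative, not the paper's exact loss).

        sample_a, sample_b: two independent model samples, shape (batch, d)
        target:             ground-truth continuous vector, shape (batch, d)
        """
        # Pull the samples toward the target vector...
        fidelity = 0.5 * ((sample_a - target).norm(dim=-1) + (sample_b - target).norm(dim=-1))
        # ...while discouraging the two samples from collapsing onto each other.
        spread = 0.5 * (sample_a - sample_b).norm(dim=-1)
        return (fidelity - spread).mean()

    target = torch.randn(8, 512)
    sa, sb = torch.randn(8, 512), torch.randn(8, 512)  # stand-ins for model samples
    loss = energy_score_loss(sa, sb, target)
    ```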

    This new training method also required a new evaluation metric. Standard metrics such as perplexity are inapplicable because they rely on the same likelihoods the model no longer computes.

    The team proposed BrierLM, a novel metric based on the Brier score that can be estimated purely from model samples. Validation confirmed BrierLM as a reliable alternative, showing a “Spearman’s rank correlation of -0.991” with traditional loss metrics.
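    The key property is that the score can be estimated from samples alone. The toy categorical example below shows one generic sample-based Brier estimate; it illustrates the principle rather than BrierLM's exact estimator.

    ```python
    import random

    def brier_from_samples(sample_fn, truth, n_pairs=10_000):
        """Estimate a Brier-style score using nothing but model samples.

        For a categorical prediction p and outcome y,
        Brier(p, y) = sum_i p_i^2 - 2*p_y + 1, and both unknown terms have
        unbiased sample estimators:
          sum_i p_i^2  ~  P(two independent samples agree)
          p_y          ~  P(a sample equals the truth)
        """
        agree = hits = 0
        for _ in range(n_pairs):
            a, b = sample_fn(), sample_fn()  # two independent draws from the model
            agree += (a == b)
            hits += (a == truth)
        return agree / n_pairs - 2 * hits / n_pairs + 1

    # Toy "model" that samples tokens with fixed probabilities (hypothetical numbers).
    vocab, probs = ["the", "cat", "sat"], [0.6, 0.3, 0.1]
    sampler = lambda: random.choices(vocab, probs)[0]
    print(brier_from_samples(sampler, truth="the"))
    ```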

    Finally, the framework restores controlled generation, a key feature for enterprise use. Standard temperature sampling is impossible without a probability distribution. The paper introduces a new “likelihood-free sampling algorithm,” including a practical batch approximation method, to manage the trade-off between output accuracy and diversity.
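    The paper's sampling procedure is not spelled out in this article, but the general flavour of a likelihood-free, batch-based control knob can be sketched as follows. Everything here, including the candidate scoring function and the sharpness parameter, is a hypothetical illustration of trading accuracy against diversity, not the authors' algorithm.

    ```python
    import torch

    def batch_sample(draw_candidate, score_fn, n_candidates=8, sharpness=1.0):
        """Illustrative likelihood-free sampler (hypothetical, not the paper's method).

        Draw a batch of candidate vectors, score them, and pick one. 'sharpness'
        plays a temperature-like role: higher values favour the best-scoring
        candidate (accuracy), lower values preserve more diversity.
        """
        candidates = [draw_candidate() for _ in range(n_candidates)]
        scores = torch.tensor([score_fn(c) for c in candidates])
        weights = torch.softmax(sharpness * scores, dim=0)  # re-weight within the batch only
        return candidates[torch.multinomial(weights, 1).item()]
    ```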

    Reducing enterprise AI costs

    This research offers a glimpse into a future where generative AI is not defined purely by ever-larger parameter counts, but by architectural efficiency.

    The current path of scaling models is hitting a wall of diminishing returns and escalating costs. The CALM framework establishes a “new design axis for LLM scaling: increasing the semantic bandwidth of each generative step”.

    While this is a research framework and not an off-the-shelf product, it points to a powerful and scalable pathway towards ultra-efficient language models. When evaluating vendor roadmaps, tech leaders should look beyond model size and begin asking about architectural efficiency.

    The ability to reduce FLOPs per generated token will become a defining competitive advantage, enabling AI to be deployed more economically and sustainably across the enterprise, from the data centre to data-heavy edge applications.

    See also: Flawed AI benchmarks put enterprise budgets at risk



