Liquid AI has released LFM2-24B-A2B, a model optimized for local, low-latency on-device deployment. An open-source desktop agent application is available in the Liquid4All GitHub Cookbook. This release provides a deployable architecture …
-
Generative AI
Liquid AI’s new LFM2-24B-A2B hybrid architecture combines focus with resolution to solve the scaling constraints of modern LLMs.
The generative AI race has long been a game of ‘bigger is better’. But as the industry approaches the limits of power consumption and memory constraints, the conversation is shifting …
-
Machine Learning
Axi Bio and Databricks: Accelerating AI-powered liquid biopsies for early cancer detection
Liquid biopsy unlocks non-invasive cancer screening and monitoring by analyzing cancer biomarkers in the blood, but signals can be sparse and noisy. Axi Bio has taken the lead with AI-powered liquid …
-
Every day, Arctic Wolf processes over a trillion events, converting billions of rich records into security-relevant insights. This translates to over 60 TB of compressed telemetry, providing AI-powered threat detection and …
-
Generative AI
Liquid AI releases LFM2.5-1.2b-Thinking: a 1.2B-parameter reasoning model that fits under 1 GB on-device
Liquid AI has released LFM2.5-1.2b-Thinking, a 1.2 billion parameter reasoning model that runs entirely on device and fits in about 900 MB on a modern phone. What was required in …
-
Liquid AI has introduced LFM2.5, a new generation of small foundation models built on the LFM2 architecture and focused on device and edge deployments. The model family includes LFM2.5-1.2b-Base and …
-
Generative AI
Liquid AI’s LFM2-2.6B-Exp uses pure reinforcement learning (RL) and dynamic hybrid reasoning to optimize small model behavior
Liquid AI has introduced LFM2-2.6B-Exp, an experimental checkpoint of its LFM2-2.6b language model trained with pure reinforcement learning on top of the existing LFM2 stack. The goal is simple: to …