Ford’s AI voice assistant coming later this year, L3 driving in 2028

Ford’s new AI-powered voice assistant will be made available to customers later this year, the company’s top software executive said at CES today. And in 2028, the automaker will introduce a hands-free, eyes-away Level 3 autonomous driving feature as part of its more affordable (and hopefully more profitable) universal electric vehicle (UEV) platform, which is set to launch in 2027.

Most importantly, Ford said it will develop several of the core technologies behind these products in-house to reduce costs and maintain greater control. Keep in mind, the company isn't building its own large language models or designing its own silicon like Tesla and Rivian. Instead, it will build its own electronic and computing modules that will be smaller and more efficient than the systems currently in place.

“By designing our own software and hardware in-house, we’ve found a way to make this technology more affordable,” Doug Field, Ford’s chief officer for EVs and software, wrote in a blog post. “This means we can put advanced hands-free driving into vehicles that people actually buy, not just into vehicles with unaffordable prices.”

The news comes as Ford faces pressure to introduce more affordable EVs after big bets on electric versions of the Mustang and the F-150 pickup failed to excite customers or turn a profit. The company recently canceled the F-150 Lightning amid declining EV sales and said it would build more hybrid vehicles, as well as battery storage systems to meet growing demand from AI data centers. Ford is also realigning its AI strategy after winding down its autonomous vehicle venture Argo AI in 2022, shifting from fully driverless Level 4 vehicles to Level 2 and Level 3 conditional driver-assistance features.

Amid all this, the company is trying to find a middle ground on AI: not going full-on robot army like Tesla and Hyundai, while still committing to AI-powered products like voice assistants and automated driving features.

Ford said its AI assistant will launch in the Ford and Lincoln mobile apps in 2026 before expanding to the in-car experience in 2027. Picture a Ford owner standing at the hardware store, unsure how many bags of mulch will fit in the bed of their truck. The owner can snap a photo and ask the assistant, which can give a more accurate answer than ChatGPT or Google's Gemini because it has all the information about the owner's vehicle, including truck bed size and trim level.

At a recent technology conference, Ford CFO Sherry House said Ford will integrate Google's Gemini in its vehicles. That said, the automaker is designing its assistant to be chatbot-agnostic, meaning it will work with a variety of LLMs.

“The key part is that we take this LLM, and then we give it access to all the relevant Ford systems so that the LLM knows what specific vehicle you’re using,” Sammy Omari, Ford’s head of ADAS and infotainment, told me.
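The pattern Omari describes, wrapping an interchangeable LLM and feeding it the specifics of your vehicle, can be sketched roughly like this. Everything here is an illustrative assumption: the names (`VehicleContext`, `make_assistant`), the stand-in backend, and the example specs are hypothetical, not Ford's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VehicleContext:
    """Hypothetical container for the vehicle data the assistant is given."""
    model: str
    trim: str
    bed_length_in: float  # truck bed length in inches (illustrative value below)

def make_assistant(llm: Callable[[str], str],
                   vehicle: VehicleContext) -> Callable[[str], str]:
    """Bind a specific vehicle's specs into the prompt for any LLM backend.

    Because `llm` is just a callable, the assistant stays chatbot-agnostic:
    any model that maps a prompt string to an answer string can be swapped in.
    """
    def ask(question: str) -> str:
        prompt = (
            f"Vehicle: {vehicle.model} {vehicle.trim}, "
            f"bed length {vehicle.bed_length_in} in.\n"
            f"Question: {question}"
        )
        return llm(prompt)
    return ask

# Stand-in backend for demonstration; in practice this would call
# Gemini or another hosted LLM.
def fake_llm(prompt: str) -> str:
    vehicle_line = prompt.splitlines()[0]
    return f"[answer grounded in: {vehicle_line}]"

assistant = make_assistant(fake_llm, VehicleContext("F-150", "Lariat", 67.1))
print(assistant("How many bags of mulch fit in my bed?"))
```

The point of the sketch is the separation of concerns: the vehicle context lives with the assistant wrapper, not the model, so swapping Gemini for another LLM changes one argument rather than the product.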

Autonomous driving features will come later, with the launch of Ford's universal EV platform. Ford's flagship product right now is BlueCruise, its hands-free Level 2 driver-assistance feature that's only available on certain highways. Ford is planning to introduce a point-to-point hands-free system that can recognize traffic lights and navigate through intersections. And then eventually it will launch a Level 3 system in which the driver must still be able to take over the vehicle on request, but can also take their eyes off the road in certain situations. (Some experts have argued that L3 systems can be dangerous given the need for drivers to remain attentive despite the vehicle doing most of the driving work.)

Omari explained that by rigorously testing every sensor, software component, and compute unit, the team arrived at a system that costs about 30 percent less than today's hands-free systems while offering significantly more capability.

It will all depend on a "radical rethinking" of Ford's computing architecture, Field said in the blog post. That means a more integrated "brain" that can process infotainment, ADAS, voice commands, and more.

For nearly a decade, Ford has been building a team with the relevant expertise to lead these projects. The former Argo AI team, which originally focused on Level 4 robotaxi development, was brought into the mothership for its expertise in machine learning, robotics, and software, alongside a team of BlackBerry engineers. Paul Costa, Ford's executive director of electronics platforms, who was hired in 2017, told me the company is now working on the next generation of electronic modules to enable some of these innovations.

But Ford doesn't want to get involved in a "TOPS arms race," Costa said, referring to the metric that measures the speed of AI processors in trillions of operations per second. Other companies like Tesla and Rivian have emphasized the processing speed of their AI chips to prove how powerful their automated driving systems will be. Ford has no interest in playing that game.

Instead of optimizing for performance alone, the team balanced performance, cost, and size. The result is a compute module that is significantly more powerful, cheaper, and 44 percent smaller than the system it replaces.

“We’re not just selecting one area here for optimization at the expense of everything else,” Costa said. “We’ve really been able to adapt across the board, and that’s why we’re so excited about it.”
