Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on how and why AI will grow from something that chats into something that works in 2026. I also look at a new privacy-focused AI platform from the creator of Signal, and at Google’s work on e-commerce agents.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Our relationship with AI is changing rapidly
Anthropic kicked off 2026 with a bang. It announced Cowork, a new version of its powerful Claude Code coding assistant built for non-developers. As I wrote on January 14, Cowork lets users put AI agents, or teams of agents, to work on complex tasks. It offers all the agentic power of Claude Code while being far more approachable for regular workers (it runs within the Claude chatbot desktop app, not in the terminal as Claude Code does). It also runs at the file system level on the user’s computer and can access email and third-party work apps such as Teams.
Cowork is very likely just the first product of its kind that we’ll see this year. Some have expressed surprise that OpenAI hasn’t already offered such an agentic tool to consumers and enterprises; it probably will, as may Google and Microsoft, in some form. I think we’ll look back at Cowork a year from now and recognize it as a real shift in the way we think about and use AI for our work tasks. AI companies have long talked about viewing AI as a coworker or “copilot,” but Cowork may make that concept a reality for many nontechnical workers.
OpenAI’s ChatGPT, which debuted in late 2022, gave us a mental picture of how consumer AI would look and act. It was just a little dialog box, mainly nonvisual and text-based. This shouldn’t have been too surprising. After all, the chatbot interface was built by a bunch of researchers who spent their careers teaching machines how to understand words and text.
Functionally, early chatbots could act like a search engine. They could write or summarize text, or listen to problems and give supportive feedback. But their outputs were driven almost entirely by their pretraining, in which they ingested and processed a compressed version of the entire internet. Using ChatGPT was something like text messaging with a smart and informed friend.
Large language models do way, way more than that today. They understand imagery, reason, search the web, and call external tools. But the AI labs continue to push much of this new functionality through that same chatbot-style interface. It’s time to graduate from that mindset and put more time and effort into meeting human users where they live: delivering intelligence through lots of different interfaces that match the growing number of tasks where AI can be profitably applied.
That will begin to happen in 2026. AI will expand into a full workspace, or into a full web browser (à la OpenAI’s Atlas), and will eventually disappear into the operating system. As we saw at this year’s Consumer Electronics Show, it may go further: An AI tool may come wrapped in a cute animal form factor.
Interacting with AI will become more flexible, too. You’ll see more AI systems that accept real-time voice input this year. Anthropic added a feature to (desktop) Claude in October that lets users talk to the chatbot in natural language after hitting a keyboard shortcut. And Wispr Flow lets users dictate into any input window by holding down a function key.
Signal creator Moxie Marlinspike launches encrypted AI chatbot
People talk to AI chatbots about all kinds of things, including some very personal matters. Personally, I hesitate to discuss just anything with a chatbot, because I can’t be sure that my questions and prompts, and the answers the AI gives, won’t somehow be shared with someone who shouldn’t see them.
My worry is well-founded, it turns out. Last year a federal court ordered OpenAI to retain all user inputs and AI outputs, because they may be relevant to discovery in a copyright case. And there’s always a possibility that unencrypted conversations stored by an AI company could be stolen as part of a hack. Meanwhile, the conversational nature of chatbots invites users to share more and more personal information, including the sensitive kind.
In short, there’s a growing need for provably secure and private AI tools. Now the creator of the popular encrypted messaging platform Signal, who goes by the pseudonym Moxie Marlinspike, has created an end-to-end encrypted AI chatbot called Confer. The new platform protects user prompts and AI responses, and makes it impossible to connect users’ online identities with their real-world ones. Marlinspike told Ars Technica that Confer users have better conversations with the AI because they’re empowered to speak more freely.
When I signed up for a Confer account, the first thing the site asked me to do was set up a six-digit encryption passkey, which is stored within the secure element of my computer (or phone), where hackers can’t reach it. Another key is created for the Confer server, and both keys must match before the user can interact with the chatbot. Confer is powered by open-source AI models that it hosts itself, not by models accessed from a third party.
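To make that setup a little more concrete, here’s a minimal Python sketch of the general pattern described above: a key derived from a short PIN plus a secret that never leaves the device, with a server-side verifier that must match before a session opens. The function names and flow are my own illustrative assumptions, not Confer’s actual code or protocol.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only -- NOT Confer's actual protocol or code.
# Models the general idea: a short PIN is combined with a secret that
# stays in the device's secure element, and the server holds only a
# verifier that must match before a chat session is allowed.

def enroll(pin: str) -> tuple[bytes, bytes]:
    """Device-side setup: create a device secret and a server verifier."""
    device_secret = secrets.token_bytes(32)          # stays on the device
    key = hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, 600_000)
    verifier = hashlib.sha256(key).digest()          # only this goes to the server
    return device_secret, verifier

def open_session(pin: str, device_secret: bytes, server_verifier: bytes) -> bool:
    """Re-derive the key from the PIN and check it against the server's verifier."""
    key = hashlib.pbkdf2_hmac("sha256", pin.encode(), device_secret, 600_000)
    return hmac.compare_digest(hashlib.sha256(key).digest(), server_verifier)

secret, verifier = enroll("123456")
assert open_session("123456", secret, verifier)      # correct PIN: session opens
assert not open_session("654321", secret, verifier)  # wrong PIN: refused
```

In the real system the derived key would also be used to encrypt the prompts and responses end to end; this sketch shows only the match-before-chat gate.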
Confer’s developers are serious about supporting sensitive conversations. After I logged in, I saw that Confer displays a few suggested conversations near the input window, such as “practice a difficult conversation,” “negotiate my salary,” and “talk through my mental health.”
Google is building the foundations of agentic e-commerce
Agents, of course, will do more than work tasks. They’ll be involved in more personal things, too, like online shopping. Right now, human shoppers move through a long process of searching, clicking, entering data, and making payments in order to buy something. Merchants and brands hope that AI agents will one day do much of that work on the human’s behalf.
But for this to work, a whole ecosystem of agents, consumer-shopping sites, and brand back-end systems must be able to exchange information in standardized ways. For example, a consumer might want to use a shopping agent to buy a product that comes up in a Google AI Mode search, so the shopping agent would need to shake hands with the Google platform and the product merchant, and they’d both have to connect through a payment agent in the middle.
Google is off to a strong start on building the agentic infrastructure that will make this all work. On January 11, the company announced a new Universal Commerce Protocol (UCP) that creates a common language for consumers, agents, and businesses to ensure that all types of commerce actions are standardized and secure. The protocol relieves all parties involved of having to create an individual agent handshake for every consumer platform and tech partner.
UCP now standardizes three key aspects of a transaction: a standard for guaranteeing the identity of the buyer and seller, a standard for the buying workflow, and a standard for the payment, which uses Google’s Agent Payments Protocol (AP2) extension.
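To illustrate what that kind of standardization buys, here’s a rough Python sketch of a checkout message carrying those three parts. The field names and structure are a hypothetical illustration of the idea, not the actual UCP or AP2 schema.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a standardized agentic checkout message.
# Field names are invented for clarity; they are NOT the real UCP/AP2 schema.

@dataclass
class Identity:
    party: str          # e.g. "buyer_agent", "merchant", "payment_agent"
    credential: str     # signed proof of who the party is

@dataclass
class CheckoutRequest:
    identities: list[Identity]            # standardized identity of buyer and seller
    workflow_step: str                    # standardized buying workflow: "cart", "offer", "confirm"
    line_items: list[dict] = field(default_factory=list)
    payment_mandate: str = ""             # standardized payment authorization (the AP2-style piece)

def ready_to_pay(req: CheckoutRequest) -> bool:
    """Any merchant or payment agent can run the same check against any partner,
    because every party exchanges the same message shape."""
    has_buyer = any(i.party == "buyer_agent" for i in req.identities)
    has_merchant = any(i.party == "merchant" for i in req.identities)
    return has_buyer and has_merchant and req.workflow_step == "confirm" and bool(req.payment_mandate)

req = CheckoutRequest(
    identities=[Identity("buyer_agent", "sig:abc"), Identity("merchant", "sig:xyz")],
    workflow_step="confirm",
    line_items=[{"sku": "SHOE-42", "qty": 1}],
    payment_mandate="mandate:signed-by-buyer",
)
print(ready_to_pay(req))  # True
```

The point of the example: without a shared shape like this, every agent-merchant pair would need its own bespoke handshake, which is exactly the work Google says UCP eliminates.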
Vidhya Srinivasan, Google’s VP/GM of Advertising & Commerce, tells Fast Company that this is just the beginning: the company intends to build out UCP to support more parts of the sales process, including related-product suggestions and post-purchase support. Google developed UCP with merchant platforms including Shopify, Etsy, Target, and Walmart. UCP is endorsed by American Express, Mastercard, Stripe, Visa, and others.
More AI coverage from Fast Company:
- Why Anthropic’s new ‘Cowork’ could be the first really useful general-purpose AI agent
- Governments are considering bans on Grok’s app over AI sexual image scandal
- Docusign’s AI will now help you understand what you’re signing
- CES 2026: The year AI got serious
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.