On "Dia" | Is an "AI browser" the future?


Hi! I’m Martino Wong. I talk about tech, AI, and internet culture on, well… the internet. I usually do it in Italian under the handle @oradecima, but I wanna start in English too. This is a temporary home for my thoughts. If you want me to make videos or write for you, hit me up at martino@oradecima.com!


The Browser Company (the Arc guys) showed off their next project today: Dia, a browser built from the ground up for AI. I... have some thoughts, and mixed feelings.

Arc is my browser of choice and I've come to rely on its features; I love how opinionated it is and I loved seeing each update. The Browser Company is shifting to Dia as an "AI browser"; they're saying they're not abandoning Arc, but they're also saying it's feature-complete (I kinda disagree and I would have loved to see new stuff).

On one hand, even though I hate how overhyped AI is, I see some interesting ideas that could be genuinely useful: the demo of the AI pasting links from other tabs and the idea of asking the AI to add things to an Amazon cart are both appealing.

And I totally get the point that "AI buttons" on browsers are a suboptimal solution. On the other hand, I'm afraid this is once again an example of the unattainable promises of AI and LLMs. If AI systems were as magical as the marketing tells us, Dia would be super cool.

But two years after ChatGPT, we're still dealing with the fundamental issue of hallucinations, with no solution in sight.

What happens when the personalized emails for the call sheet shown in the video hallucinate something wrong? What if the AI that writes out the specs of the original iPhone in the video gets them wrong sometimes, like Joanna Stern’s JoannaBot experiment showed us? (Check out this episode of The Vergecast about it.)

Linked episode: The Vergecast, “You’re cute no matter what phone you have” (meeting the Joannabot).

AI companies really like showing off LLM-based systems as if hallucinations weren't a thing, but they super are. LLMs are an unreliable tool, and as such, they should be used intentionally and carefully. IMO the LLM should be front and center, not hidden away in abstractions, because the more abstractions we use, the deeper into the rabbit hole we need to go to check for mistakes and fix them. The deeper you bury an LLM whose output doesn't get checked, the more unreliable and inconsistent your resulting system is. I want reliability from my browser.

Let's not get swallowed by senseless AI hype.
