Google Stitch: Turning Prompts and Sketches into UI Designs and Frontend Code
Google unveils Stitch, a new AI-powered tool from Google Labs that generates UI designs and frontend code from text or image prompts, leveraging Gemini 2.5 Pro to streamline the workflow between designers and developers.
The journey from a brilliant app idea to a functional, user-friendly interface is often a complex dance between designers and developers. Traditionally, this process involves extensive manual effort, back-and-forth communication, and meticulous translation of visual concepts into tangible code. Google's latest experiment from its Labs division, unveiled at the Google I/O 2025 conference, seeks to significantly simplify this intricate process. Enter Stitch, a new tool poised to change how user interfaces (UIs) are conceptualized and built.
Stitch is an AI-powered tool designed to help create web and mobile app frontends by generating UI elements and the corresponding code. As described by Google, "Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes." The project was born from a collaboration between a designer and an engineer, both aiming to optimize their respective workflows, highlighting its practical origins.
The core idea is to bridge the gap that often exists between the initial design vision and the final coded product. "Building great applications always comes down to a powerful partnership between design and development," the Google Developers Blog notes. "Traditionally, connecting design ideas to working code took a lot of manual effort and back-and-forth. That's precisely the problem Stitch aims to solve."
Stitch leverages the advanced multimodal capabilities of Google's Gemini 2.5 Pro AI model, with Gemini 2.5 Flash available as an alternative. These multimodal capabilities are what allow the tool to understand and process diverse types of input to generate UIs.
Users can describe the application they envision in plain English. This includes specifying details like color palettes, desired user experience, or specific components. Stitch then generates a visual interface tailored to this description.
Alternatively, if a visual starting point exists – be it a sketch on a whiteboard, a screenshot of an inspiring UI, or a rough wireframe – users can upload it. Stitch processes the image to produce a corresponding digital UI, effectively translating initial visual concepts into a more concrete design.
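To make the text-and-image-to-UI idea more concrete, here is a minimal sketch of how a similar flow could be prototyped against the public Gemini API using the google-genai Python SDK. This is not Stitch's actual implementation: the prompt wording, the wireframe filename, and the choice to ask for a single HTML file are illustrative assumptions.

```python
# Illustrative sketch only -- not Stitch's internals. It shows the general
# pattern Stitch is built around: a plain-English UI brief, optionally paired
# with an image (a wireframe or screenshot), sent to a Gemini 2.5 model that
# returns frontend code.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

ui_brief = (
    "Design a mobile sign-up screen for a hiking app: earthy green palette, "
    "large rounded buttons, an email field, and a 'Continue with Google' "
    "option. Return a single self-contained HTML file with embedded CSS."
)

# Optional visual starting point, e.g. a photo of a whiteboard sketch
# (hypothetical filename).
with open("whiteboard_sketch.png", "rb") as f:
    sketch = types.Part.from_bytes(data=f.read(), mime_type="image/png")

response = client.models.generate_content(
    model="gemini-2.5-pro",       # or "gemini-2.5-flash" for faster drafts
    contents=[sketch, ui_brief],  # multimodal input: image plus text brief
)

print(response.text)  # the generated frontend code
```

Stitch layers a purpose-built interface, iterative chat, theming, and Figma export on top of this kind of capability; the sketch is only meant to show why a multimodal model makes the text-or-image input described above possible.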
Beyond generating a first design and its code, Stitch offers several features to support the rest of the design and development process, including an interactive chat for refining a design and theme selectors, with the stated aim of letting users "truly hone in on your creative designs and development needs."
Stitch enters a rapidly evolving field often referred to as "vibe coding," where AI models assist in or automate programming tasks. Several startups and established tech companies are exploring this area, with tools like Cursor, Cognition's Devin, Windsurf, OpenAI's Codex, and Microsoft's GitHub Copilot gaining traction.
While powerful, Stitch is positioned as an accelerator rather than a complete replacement for existing design tools or developer expertise. Google product manager Kathy Korevec clarified, "[Stitch is] where you can come and get your initial iteration done, and then you can keep going from there. What we want to do is make it super, super easy and approachable for people to do that next level of design thinking or that next level of software building for them." Korevec also noted that Stitch isn't meant to be a full-fledged design platform like Figma or Adobe XD.
Compared to a tool like Vercel's v0, which also generates UI components from prompts, Stitch takes a different approach to Figma: while v0 focuses on importing Figma designs and converting them into code, Stitch works in the other direction, letting its AI-generated designs be exported to Figma for further refinement.
Google encourages developers and designers to explore Stitch's capabilities. The tool is accessible as an experiment from Google Labs.
To try out Stitch, visit its official website: stitch.withgoogle.com.
Google is keen to gather user feedback to shape the tool's development. "We're thrilled to bring this experiment to you and can't wait to see what you'll build with it," stated the Google Developers Blog. A planned future feature, according to Korevec, will let users tweak their UI designs by taking a screenshot of the element they want to change and annotating it with the desired modifications.
Alongside Stitch, Google also announced expanded access to Jules, another AI tool aimed at developers. Now in public beta (accessible at jules.google.com), Jules is an AI agent designed to help developers fix bugs in their code, understand complex codebases, create pull requests on GitHub, and handle certain backlog items and programming tasks. Jules also currently utilizes the Gemini 2.5 Pro model, with plans to allow users to switch between different models in the future.
The introduction of tools like Stitch signifies a continuing trend towards integrating AI more deeply into the creative and technical aspects of software development. By automating initial design drafts and code generation, such tools have the potential to free up designers and developers to focus on more complex problem-solving, user experience nuances, and sophisticated functionalities. While the human element of creativity and critical design thinking remains paramount, AI assistants like Stitch are set to become increasingly valuable partners in the app creation process. The emphasis is on making sophisticated design and development more accessible and efficient for a wider range of creators.
Alright, so another bastion of human creativity – design – is getting the AI treatment. First, it was art, then writing, now the very interfaces we tap and swipe. One might wonder if designers will soon be prompting their way to a paycheck, curating AI-generated options rather than crafting from scratch. "Just make it pop, but with a minimalist Scandinavian vibe and a hint of brutalism, for a Gen Z audience... and make three versions by lunch." It’s a bit like ordering a bespoke suit from a robot tailor; efficient, perhaps, but where’s the soul?
But hold on, let's not just paint a dystopian picture of unemployed designers. Consider the upsides: what if Stitch isn't just about churning out another generic e-commerce UI?
So, while the initial reaction might be a cautious eyebrow-raise about job displacement, tools like Stitch could also empower more people to build, allow for unprecedented levels of UI personalization, and speed up the testing of truly novel interaction paradigms. The trick will be to use it as a collaborator, not a crutch, pushing the boundaries of what's considered good design rather than settling for "good enough" AI output.