Vercel releases first AI model for v0, now in beta

David Uzondu · Neowin · May 22, 2025 02:18 EDT

Google recently showered us with AI goodies, including Gemma 3n, an AI model that's designed to run on low-end devices, like smartphones. Now, Vercel has stepped further into the ring with its own generative UI system, v0, by releasing its very first dedicated model. If you do not know what v0 is, it is a sort of competitor to tools like the recently announced Google Stitch, which also aims to let you describe a user interface and have AI generate the design. The tool first saw the light of day back in 2023 as an invite-only beta, promising to turn natural language into front-end code.
The newly available model is dubbed v0-1.0-md, and Vercel states it is specifically designed for building modern web applications. This multimodal model supports both text and image inputs, offers a 128,000-token context window with a 32,000-token output limit, and is priced at $3 per million input tokens and $15 per million output tokens.
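For a rough sense of what those rates mean in practice, here is an illustrative back-of-the-envelope calculation; only the per-million rates come from Vercel's pricing, and the token counts are made up for the example:

```typescript
// Illustrative cost estimate using the published v0-1.0-md rates.
const INPUT_RATE_USD = 3 / 1_000_000;   // $3 per million input tokens
const OUTPUT_RATE_USD = 15 / 1_000_000; // $15 per million output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return inputTokens * INPUT_RATE_USD + outputTokens * OUTPUT_RATE_USD;
}

// A request that fills the 128K context window and gets a 32K-token reply:
console.log(estimateCostUSD(128_000, 32_000).toFixed(2)); // ≈ $0.86
```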
It offers features like 'auto-fix' for common coding blunders and 'quick edit' for streaming inline changes as they are generated. Crucially, v0-1.0-md exposes an OpenAI-compatible API, meaning you can plug it into existing tools that already speak OpenAI's language, such as Cursor, Codex, or Vercel's own AI SDK, as well as your own custom applications. It also supports function and tool calls, and promises low-latency streaming responses. Developers can poke around with the new model in the Vercel AI Playground to see how it handles different prompts.
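Because the API follows the OpenAI wire format, wiring it into Vercel's AI SDK is largely a matter of pointing an OpenAI-compatible provider at the v0 endpoint. The sketch below assumes the api.v0.dev/v1 base URL and the v0-1.0-md model id described in this article; consult the official v0 docs for Vercel's exact recommended setup.

```typescript
// Minimal sketch: an OpenAI-compatible provider from the Vercel AI SDK,
// pointed at the v0 API. Base URL and model id are taken from the article;
// the prompt is just an example.
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const v0 = createOpenAI({
  baseURL: 'https://api.v0.dev/v1',  // assumed OpenAI-compatible base URL
  apiKey: process.env.V0_API_KEY!,   // API key obtained from v0.dev
});

const { text } = await generateText({
  model: v0('v0-1.0-md'),
  prompt: 'Build a responsive pricing page with three tiers using React and Tailwind.',
});

console.log(text);
```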

Currently, access to the v0 API, and thus the v0-1.0-md model, is in beta, and you will need a Premium or Team plan on Vercel with usage-based billing enabled. To get started, you grab an API key from v0.dev and then send POST requests to the api.v0.dev/v1/chat/completions endpoint, authenticating with a bearer token. There are daily limits of around 200 messages, along with context-size constraints that mirror the advertised capabilities, but Vercel notes you can request higher limits if you hit those ceilings.
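In practice, a first request looks much like any other OpenAI-style chat completion call. The sketch below assumes the standard request and response shape; the endpoint and bearer-token authentication are as described above, while the prompt and response handling are illustrative.

```typescript
// Minimal sketch of a direct request to the beta endpoint, assuming an
// OpenAI-style chat completions request/response body.
const response = await fetch('https://api.v0.dev/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.V0_API_KEY}`, // key from v0.dev
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'v0-1.0-md',
    messages: [
      { role: 'user', content: 'Generate a login form component in React.' },
    ],
    stream: false,
  }),
});

const data = await response.json();
console.log(data.choices?.[0]?.message?.content);
```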
If you want to dig into the details or see how to set it up, the official v0 docs on Vercel's site have everything you need, including examples.
