VENTUREBEAT.COM

Unintended consequences: U.S. election results herald reckless AI development

While the 2024 U.S. election focused on traditional issues like the economy and immigration, its quiet impact on AI policy could prove even more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists: those who advocate for rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signals a decisive shift in the debate between AI's potential risks and rewards.

The pro-business stance of President-elect Donald Trump leads many to assume that his administration will favor those developing and marketing AI and other advanced technologies. His party platform has little to say about AI. However, it does emphasize a policy approach focused on repealing AI regulations, particularly targeting what it described as "radical left-wing ideas" within existing executive orders of the outgoing administration. In contrast, the platform supported AI development aimed at fostering free speech and "human flourishing," calling for policies that enable innovation in AI while opposing measures perceived to hinder technological progress. Early indications based on appointments to leading government positions underscore this direction.
However, there is a larger story unfolding: the resolution of the intense debate over AI's future.

An intense debate

Ever since ChatGPT appeared in November 2022, there has been a raging debate between those in the AI field who want to accelerate AI development and those who want to decelerate it. Famously, in March 2023 the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present "profound risks to society and humanity." The letter, spearheaded by the Future of Life Institute, was prompted by OpenAI's release of the GPT-4 large language model (LLM) several months after ChatGPT launched.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually swelled to more than 33,000. Collectively, they became known as "doomers," a term that captures their concerns about potential existential risks from AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign. Nor did Bill Gates and many others. Their reasons varied, although many voiced concerns about potential harm from AI. This led to many conversations about the potential for AI to run amok, leading to disaster. It became fashionable for many in the AI field to talk about their assessment of the probability of doom, often written as p(doom). Nevertheless, work on AI development did not pause.

For the record, my p(doom) in June 2023 was 5%. That might seem low, but it was not zero. I felt that the major AI labs were sincere in their efforts to stringently test new models prior to release and to provide significant guardrails for their use. Many observers concerned about AI dangers have rated existential risks higher than 5%, and some have rated them much higher.
AI safety researcher Roman Yampolskiy rated the probability of AI ending humanity at over 99%. That said, a study released early this year, well before the election and representing the views of more than 2,700 AI researchers, showed that the median prediction for extremely bad outcomes, such as human extinction, was 5%. Would you board a plane if there were a 5% chance it might crash? This is the dilemma AI researchers and policymakers face.

Must go faster

Others have been openly dismissive of worries about AI, pointing instead to what they perceive as the huge upside of the technology. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of The Master Algorithm). They argued instead that AI is part of the solution. As put forward by Ng, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated. Ng argued that AI development should not be paused but should instead go faster.

This utopian view of technology has been echoed by others collectively known as effective accelerationists, or "e/acc" for short. They argue that technology, and especially AI, is not the problem but the solution to most, if not all, of the world's issues. Y Combinator CEO Garry Tan, along with other prominent Silicon Valley leaders, included the term "e/acc" in their usernames on X to show alignment with the vision. Reporter Kevin Roose of the New York Times captured the essence of these accelerationists by saying they have an "all-gas, no-brakes approach." A Substack newsletter from a couple of years ago described the principles underlying effective accelerationism.
The 2024 election outcome may be seen as a turning point, putting the accelerationist vision in a position to shape U.S. AI policy for the next several years. For example, the President-elect recently appointed technology entrepreneur and venture capitalist David Sacks as AI czar. Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist viewpoints expressed in the incoming party platform.

In response to the Biden administration's 2023 AI executive order, Sacks tweeted: "The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: Cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended." While the amount of influence Sacks will have on AI policy remains to be seen, his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt most of the voting public gave much thought to AI policy implications when casting their votes. Nevertheless, in a very tangible way, the accelerationists have won as a consequence of the election, potentially sidelining those advocating for a more cautious federal approach to mitigating AI's long-term risks. As accelerationists chart the path forward, the stakes could not be higher. Whether this era ushers in unparalleled progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and vigilant oversight becomes ever more paramount.
How we navigate this era will define not only technological progress but also our collective future.

As a counterbalance to a lack of action at the federal level, it is possible that one or more states will adopt regulations of their own, which has already happened to some extent in California and Colorado. For instance, California's AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level governance. Now, all eyes will be on the voluntary testing and self-imposed guardrails at Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer restrictions on AI innovation. This increased speed may indeed lead to faster innovation, but it also raises the risk of unintended consequences. I'm now revising my p(doom) to 10%. What is yours?

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
Building giant and ambitious games | Brendan Greene interview

Elvis Presley once said, "Ambition is a dream with a V8 engine." Brendan Greene, the creator of PlayerUnknown's Battlegrounds (PUBG), has a lot of ambition. His battle royale game, inspired by the Japanese film Battle Royale (2000), has sold more than 80 million copies. One of Greene's ambitions is to do something that important in video games again. He just announced that his PlayerUnknown Productions is resurfacing after years of development with a three-game plan to bring on the next generation of survival games. And it's ambitious.

I talked to Greene, who is known as PlayerUnknown, about it in an exclusive interview. It's at the bottom of this introduction, and I hope you like it. At the end, I asked him about ambition.

Greene got the idea from the movie that he could stage a battle where 100 people would compete with each other. With each player eliminated, the battle space would get smaller until the last two were battling it out in a very small circle. The last one standing was the winner. Greene first created a battle royale mod for DayZ in the Arma universe. Then he teamed up with South Korea's Krafton to make PUBG. The game debuted in 2017 and disrupted shooter games like Call of Duty. On the strength of PUBG's 80 million in sales, Krafton went public and Greene became wealthy. That gave him the money to work on something even more ambitious.

Brendan Greene is the creator of PUBG and he is on to his next survival project.

I had a front-row seat to this plan. Greene went off on his own to create a new startup, PlayerUnknown Productions, in 2021 to make a gaming survival world that was a lot like a metaverse. Then he gave me a scoop on his ambitions.
Without anything to show me except a screenshot at the time, Greene said he was creating a world called Prologue with a huge amount of terrain, about 100 square kilometers. That world, bigger than just about any existing game world, would be a test where players would drop in and try to survive until they exited at a given spot. It would be different every time they dropped into it.

Now Greene has released a video that describes his intentions more concretely. Prologue has a real preview in the video, and the world looks very realistic, with trees and grasses swaying in the wind. And it's still a huge world, fashioned with machine learning and AI tools. The aim is to release it sometime in the middle of next year as a single-player game in which people try to survive. AI will generate the terrain of Prologue. The challenge is that the open world of Prologue will be an emergent place, where anything can happen and the weather will get progressively worse. It may seem simple to get to the exit point on the map, but it's likely going to be hell getting there.

Then there will be something else. The company will do a shadow drop of its free tech demo, called Preface: Undiscovered World, showcasing its in-house game engine, called Melba. Preface will be able to generate terrain for an Earth-size virtual world using very little in the way of computing resources. The demo aims to give users an early look at the technology that will power the subsequent titles in the series, and eventually a third game called Project Artemis.

Project Artemis is the large-scale end goal of the series. As described in the past, Greene sees it as an Earth-size world where players can drop in and create their own gaming experiences in different sections of the world. We don't use the word metaverse so much anymore, but that's what it seems like to me.
The journey to get there could take another five or ten years.

In the video, Greene said he embarked on Prologue three years ago, and then life happened; it has taken three years to get it into a solid and breakthrough shape. Now the company can start sharing it and getting feedback to make it into something really different. In our interview, Greene said the team started pulling together when Laurent Gorga joined as CTO. About a year ago, Gorga set in motion a process that enabled the team to make a lot more progress. While building the tech, the team would now create frequent builds to test it on a granular level. They made enough progress to start scheduling the timelines for Prologue and Preface, and they talked about it in a video stream on December 6, during the PC Gaming Show. It made a lot of jaws drop. Prologue is expected to enter early access in the second quarter of 2025.

Here's a view of Preface, another test of technology from PlayerUnknown Productions.

"When I started this I was trying to make a larger open-world experience than most people made, and we tried for a couple of years and we found a way to do that," Greene said. "We essentially reinvented how you create these worlds using machine learning technology, using natural Earth data to generate the terrain."

Now the company is ready to test this terrain, which will form the basis for the larger worlds. He said the team broke the journey into three stages. The first job was to fill out the terrain of the world. The second was to fill that terrain with lots of interaction while scaling up. And the third goal was to pull a bunch of players onto the world, Greene said.
The company will keep enhancing Prologue with its current game engine and then move it over to the next version of its own engine. Prologue started as an experiment in Unity, then moved to Unreal a couple of years ago, and those tools have proven to be a solid foundation. The proprietary tech will eventually be able to generate a world with millions, if not billions, of objects in it, with the help of machine learning.

"It's more about the large scale, and again machine learning is very good at it because it will capture the patterns that we teach it," Greene said.

The physics will be realistic. If the ground gets wet, the terrain becomes slippery mud and rivers can form, and these will have repercussions for players as they try to survive in a wilderness. This will make the game challenging, but it can't be unbeatable, Greene said. "We're discovering what is fun, what is not fun, but at its core it is about survival. I think the more we can test, the more we can get the feedback from the users or the players, and that's one of the reasons why we are going to early access," Greene said. "The more we can actually engage with the community and get their feedback, the more it can reshape the models in the right way."

Meanwhile, the company is working on Melba, the in-house game engine. Using machine learning, it should be able to generate worlds and then regenerate them for the next game.

Preface technology will populate a world very quickly.

"The way that we build the engine is allowing us to scale up to large agent interaction," Greene said. "We have an Earth-scale planet with various biomes and some simple systems to allow you to explore it."

The company is working on two projects at once, one with Unreal and another with Melba, so that it doesn't develop tech in a vacuum, said CTO Laurent Gorga in the video. Unreal and Prologue will generate a piece of the world.
Preface will help achieve the scale, and then Artemis will be the full expression.

"I want to get our tech into the hands of the people out there to help inform what this tech will become," Greene said. "This terrain tech is interesting, but I want to leave it open. I want to leave it moddable."

Greene said this may be a five- or 10-year journey, but Prologue could be available on Steam in the second quarter of next year. There were a lot of details about what he's doing that we talked about. Here's an edited transcript of our interview.

Prologue: Go Wayback! is the first new game coming from PlayerUnknown.

GamesBeat: I was very impressed by your demo. I saw the Discord event, as well as the announcement.

Brendan Greene: It's been a busy six months. We finally got it out the door.

GamesBeat: I remember the original vision and how you went about doing it. It sounded like there was a big technology pivot, or approach pivot, you made. What did that involve, from the time you were first talking about it? How has it turned out?

Greene: We found Laurent Gorga, who we appointed as our CTO. He's in the video we released. He wanted to make more of a product, rather than a research experiment, and to focus our efforts on releasing something. He said he doesn't believe in developing tech in a vacuum. Laurent, Kim, Scott and Petter sat down and figured out how we could leverage the great team and tech we had, and the ideas we had, and make it into something we could release. He posted only last week on our Slack. He said, "A year ago I joined the company and said that in a year's time we would release something." Not to the day, but in a year's time we released something.
It's a credit to him and the team for making it work.

GamesBeat: Is there an easy way to explain what the approach is, and how it differs from what you had tried before?

Greene: It was the approach that Petter brought to the production of Prologue, but also what Laurent brought: we brought both projects into production rather than keeping them as research experiments. The previous tech lead's view was that we should prove it all out before moving into a production stage. Laurent really believed in it. I remember Petter joining and asking the game team, "Let's play the build." They said, "Play what, then?" And within a week we had a playable build together.

Since then we've shifted mentality, from experimenting and playing with ideas to having really strong leadership in tech and production. That's put us on the right path. It brought in more traditional techniques. We have a seven-week sprint. We work fully remote, more or less. We're experimenting with how to make the teams work together well. We have a good synergy between all the different departments now. We have a core engine team. We have our art team. They all work together on all the projects. It's a credit to Kim, Laurent, Scott and Petter. I have the vision. I have the dreams. But they're the guys that really make it work.

GamesBeat: How many people has the team grown to now?

Greene: We're 60 people now. That's fully staffed for Prologue.

Preface is part of a very ambitious project by Brendan Greene.

GamesBeat: That's higher than the original plan called for.

Greene: Yes, I think we were around 50 or so. But now we have publishing. We have finance. We have a game team of about 30 people. The core engine team is about 10 or 15 people at the moment. It's a really tight team now. We have a presentation and Christmas party in a few days. We're doing five-year anniversary presentations. That's quite something. A lot of the team have been with us for years.
I'm very happy now that we have leadership in place that can do what I want to do, rather than telling me we can do what I want to do and then not really having a plan.

GamesBeat: The vision sounded the same. You're going to build this world, and then the players will figure out what the game is.

Greene: The vision really hasn't changed. Even when I looked at some old pitches I did from four years ago, when I was first pitching it internally to Krafton, it was a three-game plan. They came back with slightly longer time frames and slightly more realistic goals, but it was still this idea that we'd prove each stage of the tech with each game we're building. The vision is still the same.

I don't think anyone is serious about building a metaverse. I think everyone's building IP bubbles that will sometimes have to talk to each other, I guess. I don't really see the metaverse as described by the people building it. What we're doing is open. We have it in Discord. People are already modding and hacking it. I see Artemis, or Melba, that engine, hopefully being an open-source world-creation engine that will power some form of 3D internet. It's not just one world. It's hundreds of worlds, thousands of worlds. I see every world as like a web page.

Since we did the release, they have those things, deep links. You probably saw them in Discord, where you can hop around the planet. I had this flash in my mind. Maybe that's what a hyperlink will be. There's this idea that you don't have to travel there on the planet. Someone will just send you a link to something cool on their planet or your planet or Tom's planet. Then you can click and it will open up the app and bring you there, much like a browser does in today's internet. It's just a 3D location that has something interesting, or not. It might just be beautiful. The vision is still going for that.

It's not meant to be like a game world.
It's a world with game-like experiences, I'm sure, but ultimately it's just a huge world for players to come and build or view or share. I'm not really sure what they'll do yet. I know I'll give them lots of tools to do stuff. I always thought that the world we'll provide, or the example we'll provide, will be like Minecraft survival. That will be our slice in all the worlds. That's more just a big Earth-shaped thing that looks like Earth and has basic survival mechanics. Let's say civilization mechanics. You can do lots of stuff to eventually build communities. But again, that's 10 years away, I think.

GamesBeat: I didn't quite grasp what the three games meant. Prologue is a geographically limited game. Preface is more like a demo. But I didn't know whether you counted that as one of the games. And then you have Artemis.

Greene: Preface will be the final game, probably. Prologue was just us testing the small-scale systems, player interaction and the terrain tech. The reason we have three games is that each is solving one step in the process, or one problem. The first is terrain. For Prologue, we have our ML tech that powers and generates the terrain. We can leverage Unreal to test that in this box called Prologue. We can test out lots of player interaction systems. How do we store that? How do we have persistence? All this using this ML agent.

A screenshot of Prologue's wilderness.

Game two will be testing the ML agent on a bigger scale, making bigger terrain. Hopefully the terrain tech will be relatively mature at that stage. And then thinking about multiplayer. Not on a crazy scale. Just what's usual at the time. But then lots of agent interaction.
It's going bigger and testing the terrain, the systems, stuff like marketplaces, on a slightly bigger world, before we finally go to massive multiplayer, where I hope hundreds of thousands, if not millions, of people will play in 10 years, on this massive terrain, which should be generated locally. That should be well mature, with all these other systems that we've tested through Prologue and game two. It's all just iterating on the vision.

GamesBeat: Will each game then be a separate product that gets to market? Or do you see them more as demos?

Greene: Prologue will be a product, for sure. There's a story that we have, that I would like to leverage during early access, or after we launch, into a full product. But it serves a purpose. I don't want to put every bell and whistle on it, but it will still be a product. Then, once its life cycle is over, we'll evolve it into the next stage. Prologue will move into the next game. Maybe you can play Prologue in the next game. I don't know. But it's kind of like Rust. As we go bigger, the products will be separate products, but they'll bleed into each other and iterate on top of each other. They'll stand on each other's shoulders, so to speak.

GamesBeat: If you have a story, it sounds like you're going to make your game within that game world. But you'll also make it moddable so that other people can play with it and figure out what kind of game they want to make. Prologue can be that directed game. It seems like it's important for you to design a game, as opposed to leaving it all up to consumers.

Greene: When I thought about this many years ago, we were thinking about whether we could generate a terrain every time you press play. That's an interesting idea. What's the easiest thing to do here? I thought about a simple survival game where you get from A to B across a map. It's you every time. The weather gets worse, wave-based weather. It just keeps hitting you. Prologue is essentially that. It's not that I'm making a game.
I said in the Discord chat that I want to build games with the community, not for the community. This is an interesting way of generating game worlds. We have some simple systems in it, but already, during the playtest, people are suggesting, "How about this? How about that? I want to stay in a cabin for four hours and play guitar and watch the weather outside and not do anything else." I'm not trying to make people play a game. There are things you can do within Prologue to get to the other side of the map, get to the finish, and learn a bit of what the game may be about. But otherwise you can just sit in the cabin for five or six hours if you want.

I'm not trying to force people down a particular path. That's why I want to get the community involved early. This way of creating game worlds is interesting and exciting to me. People who love survival games more than I do will give some really good ideas when they get a chance to play it. That's why we have playtests already. People are already finding weird and wonderful things about the game. That excites me. Sharing this tech early with the community and getting their input now is how we make this a great game. It's not just me directing everything. It's pulling feedback from people who really care about these games in ways that I haven't thought about.

Trees blow in the wind in Prologue.

GamesBeat: One thing I wonder is what kind of variations you can have if the game is, I don't know if you call it, procedural. You regenerate the world every time you log in; is that what you're actually doing?

Greene: It's procedural, but it's machine learning. The ML agent generates a low-res map at the start of the game. Technically, mathematically, we can do 4.2 billion-odd maps, or generations. If a million of those are interesting, I'll be happy. But you can see in the background, this is the ML map, but with us generating mountains. These are going to be impossible, though. You won't be able to traverse them.
But the idea was, we want to get the weather station up here. How can we make it more interesting and get it up in the clouds? They got very excited when we generated this, but no, it's not going to be traversable. The idea is that it gives us a base to work on in Unreal. In the maps we have, I've seen a good deal of variation. Even now, it's very early days with this tech. The guys are discovering new ways to manipulate the PCG system, the procedural content generation system in Unreal, to create more interesting biomes, to leverage our tech to create different rivers, masks for rivers and mountains. It gives a pretty good variation of worlds. We've seen some interesting worlds from the generations already, and that can only get better over the next six months.

Before we did our very first playtest with the Dutch Game Association, we had gotten cabins spawning the week before. This is all very new for us. But it's still exciting. This looks cool. It's not going to make it into the game because it's far too high, but still, this kind of landscape, to me: yes, I want to go explore that. I want to get up to the top of that. That's why we're doing it.

GamesBeat: There's the thrill of exploration that you can have in a world that generates over and over. What about the feeling of familiarity that some people may want? I can see myself thinking that I just want Earth, so I know where everything is. Or something that remains persistent that I can go back to and explore different parts of. Is that going to be possible? Or will it be different every time you log in?

Greene: Melba and Preface are meant to be persistent and deterministic. If you go back to the same place, you'll see the same things, always. That's the aim. With Prologue, it's seed-generated. We can hopefully eventually share the seed of the map you just played with friends, and you can play that same map. There will hopefully be a meta-game. Maybe you can even race people.
But that's probably DLC content down the road, because for the first launch it's too much to expect from the dev team. This is not a fully featured product. I don't want to split dev resources. I want to focus Prologue on what it's there to do, which is test the terrain tech and make an interesting systemic survival mechanic, or game loop, that we can carry over.

It'll never duplicate the Earth. Nvidia's Earth-2, that kind of thing: our terrain tech isn't designed like that. It's not designed for replication. It's designed for Earth 5, Earth 10. It looks like the Earth. It might have the same feeling, the same biomes. But if you go to Barcelona it'll look a lot different. It's not Barcelona. It's just that part of the world generated in a new way. Also, I just think Earth's been done. So many other people are generating duplications of these things. Go on Google Maps and you can see the world. I want to create unique spaces. This is going to be Earth-like, of course, but it'll be not-Earth-like as well, depending on who's putting in the design input. This will all be open.

PlayerUnknown Productions team members: Alexander Helliwell and Hakan Kumar.

GamesBeat: Some of the variety is going to come from how many biomes you can create, then? If you come up with 1,000 biomes, you can have wide variation in the terrain.

Greene: Exactly. But again, you look at NASA data, and there are 20 defined biomes on the Earth. That fills the whole Earth. They're very high-level definitions of what a biome is, though. Tundra, this kind of stuff. Within these you'll have sub-biomes and so on. Earth data already provides us with a huge amount of data to try to train these agents to give us the right combination and depth. We still style and theme the worlds. We decide how many biomes there are and how frequently they should mix. That kind of thing is still decided by us rather than by agents.
We're still guiding their hands, so to speak.

GamesBeat: If somebody wanted to re-create your battle royale inside Prologue, do you think that would work?

Greene: Prologue, you won't be able to do that. It's Unreal. It's a single-player game. This is a survival game. We'd like to open it up for modding, but I don't know if that's on the table right now. Whereas Preface, the tech demo we released, that's being released with an open mind. We're leaving the files unencrypted. The models are there for you to play with if you can. We're not trying to hide that. I like to say it's HTTP version 0.01.

It's funny. If you think about biomes, there are already people in our Discord who say, "I've been going for hours and it's still just the same rocky desert." Yes, because the Earth is big. The true scale of the Earth is massive. It's going to take time. The internet was pretty empty at the very start. I see the same thing with Preface. Right now it's empty. There's not much happening. But people in the Discord really see the possibility. You can see them getting what it is, or what it could be.

GamesBeat: By Artemis, then, you have that world where anybody could create anything. You could do your battle royale there. But maybe you want to rope off territory and say, "You can only play in this area."

Greene: No, not necessarily. One of my earlier ideas: say I discover this forested area here, and I want to do a motocross race. I should be able to just pull up something on my wrist, paint where I want the track, and the game provides the rest. The game enacts a motocross race for me, adds everything there. That's what I would like. We're probably 10 years away from getting there, if not longer. But ultimately I would like that ease of creation. You can just wander around this big planet, fly around doing whatever, see something cool, and say, "Yes, I want a battle royale there." Or a motocross race or whatever.
The game should make that easy for you.

A cabin in the woods.

That requires whole layers of thinking, different networking layers specific to those types of game modes. They'll probably lift and shard off that part of the world from the main world. As I said, five or 10 years. Probably longer.

GamesBeat: If you look at what everyone else is trying in these different ways, there's the Nvidia Earth-2. There's Hello Games trying something with a planet-sized world. There's Flight Simulator doing it by adapting photos of the Earth that planes or satellites can take, getting their hands on all that available data to generate an Earth. Are there any approaches you've seen that you've thought about or found interesting? It seems like everyone is doing something different.

Greene: As I said, I like our approach. I think we have a pretty good one. We use three agents to generate the world locally. Most of the stuff I've seen, even from Epic's big-world stuff, is server-client. I don't think that's how you create massive worlds. You're always dependent on a performant internet connection and all kinds of things that a kid in Africa doesn't have. How do you generate a world for everyone that half the world can't access?

Our view on it is, you do the simulation as much as possible locally on the device, rather than worrying about server farms handling that for you. I just think the future is local anyway. Ultimately I would like to have all my data stored locally and give it out to the network when I need to. Otherwise it's here, rather than worrying about what server it's on. Again, five or 10 years. For what we're trying to create with Melba and the platform, these kinds of things are important to think about. They will come into play in a very big way. Trying to solve them with Band-Aids is not the way to do this.

GamesBeat: The good thing is we'll have much more storage by the time this is ready.
The interesting thing I talked to the Flight Simulator people about: if you added up everything they created for Flight Simulator 2020, it was about 500 gigabytes. Then they decided to shift almost completely to the Azure cloud. Now they have just 50 gigs on the local machine, and everything else streams in. That led to some hiccups at the beginning, trying to deal with so many players coming in, but that seems to be under control. But I wonder, why would that way of building a world be harder to do than the approach you're taking, where it sounds like most of it will be on the local machine?

Biomes will provide the foundation for each section of a world.

Greene: I'm not familiar with how they do things. I guess the core difference between their tech and our tech is that theirs is still generating game worlds in an old way, where you need to understand what they look like. Our tech understands that inherently. It understands what terrain is, what mountainous regions are, what biome placement is, what trees to place in various areas. That's all done generatively and in real time around the player, rather than having everything baked. That's why you have so much data, whether 50 gigabytes or 500. Our world, which is 500 million square kilometers, is 3.6 gigs. That's all generated locally on the player's side. It's just the way they're thinking about doing it.

We have three patents on what we're doing because we're making these breakthroughs. How we're doing this is a new way. We've seen other attempts at using inpainting and all kinds of stuff, using ML in other ways to create these worlds. But I've been happy with what we've been able to do. We're generating millions of worlds in Unreal now, eight by eight, and they look pretty good, pretty high detail, not super fake. They look natural. It really excites me. I think this can open up games to a lot more varied experiences, rather than replaying the same map over again.

I saw that The Long Dark is coming out. But also Don't Starve.
That was a great game, super procedural, a different map every time. It was exciting to play. But we've never really had that in a single-player game. Maybe we have and the internet will shoot me down. But I really want to create this kind of replayable single-player game that focuses on exploration. We were even putting maybe a tent into the game, because people had said, "Maybe I want to sit on a hill until the weather changes and see the vista." So let's put a tent in so people can survive there instead of being cold. There's this kind of lovely back-and-forth with the community already.

The dev team is excited. The community Discord is excited. I can't wait to see what we can do in the next six months as we ramp up to Q2.

GamesBeat: I remember when we were talking about the metaverse before and what happens when you try to go between worlds, different worlds. There's one question there. Did you consider breaking up something like Artemis into a bunch of worlds? You have so much territory here, something planet-sized.

Greene: But I think it will be eventually. It will be millions of worlds. It's like the internet. It won't be one single page.

GamesBeat: You mentioned that when you cross a border, AI is going to translate your stuff from one world into the next world.

Greene: I would hope so.

Laurent Gorga is CTO of PlayerUnknown Productions.

GamesBeat: I thought that was crazy at the time. But the last year or two of generative AI, it seems like it's made that possible. Has that become important for your plans?

Greene: I wouldn't say very important, but there have definitely been some advances that we can leverage. For example, texture generation. For a whole planet, to ensure we have a variety of textures, ML generation is great. It gives you infinite variety, basically. It also speeds things up and lowers the cost. You don't need to store hundreds of texture files. It's all generated on the fly as you go through the world.
Stuff like this, we can find specific ways for it to make the world run better, with a smaller footprint.

Photo-to-3D-object conversion, that kind of stuff is exciting to watch, but I'm not all in on AI yet, even though I'm working on it quite a bit. There are some great possibilities. It's an exciting future. But we want to be careful about committing too hard in one way or another. We're pretty happy with what we have right now. But some advances in the last few years have filled me with a bit of excitement as well.

GamesBeat: I was trying to think of game spaces within these different projects you have. With Artemis, it seems like you'd have those millions of different kinds of spaces. People can choose to have very small game spaces, like a town where you could have a gunfight, or very large ones too. How many people do you envision in one game space? Is there a maximum you're thinking about?

Greene: I don't know. In the shared experience I want millions of people. Having a massive Earth-scale world, you need millions if not billions of people. But I don't think that's... again, solving the network problem. We've solved the terrain issue, generating massive planets. That's not that hard. It's not that costly anymore. We can do it locally. It doesn't ask for a lot of disk space. It generates pretty nicely. It's the same for multiplayer. We want to make sure the protocol, the layer we have, works well, allowing multiple people to get on the same space together.

I would love to see a 1,000-player team deathmatch, with teams of 50 or 100 players going against each other. Why not? As long as the play space is big enough. With game two it's something we'll try to explore, upping the player count to something that's still reasonably possible and then seeing how that large-scale interaction works. Again, if it's a systemic world, if it's emergent, like a lot of the spaces I like creating, it's easier to build.
But these kinds of large-scale interactions excite me because no one's really pursuing them. Everyone's still happy with 20 or 30 or 100 players. Come on! It's been 20 years already. Give me millions of players, please.

GamesBeat: A lot of game designers have said that that's all they can see as being fun. Would that many players in a game be fun for the individual? The Call of Duty designers are perfectly happy with six-on-six.

Preface is the second project of PlayerUnknown Productions.

Greene: Again, a 100-player battle royale probably wasn't seen as fun before it happened, and it turned out to be a lot of fun. I don't think we can say something isn't fun if we've never experienced it. I struggle with that kind of thinking: it can never be fun if it's over whatever number? Let's try it. Maybe it's fun and maybe it's not.

I'm not trying to make games with millions of players. I'm just trying to create these shared social spaces for millions of players to have experiences together. Maybe they're games. Maybe they're concerts. Maybe they're all kinds of things. But it's more that you have large-scale interaction. But hell, bring on a 1,000-player battle royale and see what happens. Bring on 1,000-player search and destroy. Look at the real world. You see it now: paintball games used to be six-on-six, but now you have whole teams of hundreds of players going at each other in some of these massive paintball tournaments.

I don't know. Any new technology scares the stalwarts, right? You saw it with that lovely ILM documentary, Light & Magic, about moving from puppetry to computer graphics. We can't do it? Oh, shit, we can do it. Of course puppetry has now evolved into something even more special. It's been forced to evolve because of other tech taking away the low-hanging fruit. It's always an evolution. You should want to see it move forward, rather than just trying to trap it in a box.

GamesBeat: I remember games like World War II Online.
They were trying to get 100,000 people or more into an MMO, so that they could replay historical battles. Would something like that be doable inside this kind of world?

Greene: Wouldn't it be great? We could get 100,000 people all playing together. That would be great. The tech should hold up. But again, this is what game two and game three are intended to test and prove: to make sure that we have multiplayer, that we have interaction systems, that we have all these AI systems that work well together. By AI I mean bots in games, so you can control stuff. Having all this level of interaction and scale all working. As I said, Melba, Preface, it's all open. Not open source technically right now, because that comes with certain responsibilities we're not ready to commit to yet. We need time to work. But we're still doing it with this open mentality, where nothing's encrypted. It has to be built with the community. The internet was, and I think the metaverse has to be the same.

GamesBeat: In this kind of game world, does the concept of shards still exist?

Greene: No, because I don't see servers. That's the thing. I think it will be peer-to-peer. We'll have a hybrid peer system, where you'll have peers that handle... you could be one of these peers if you have a decent enough system, handling the high-level simulation for physics, weather, ballistics, these other heavy, needed simulations. That sends data to lower-end devices. That's how I see this working. We'll have some kind of peer-to-peer system that will self-validate or self-auth rather than being reliant on servers.

I still think we'll have a hybrid peer-server type of model that will hopefully be able to distribute across both users and more commercial-grade hardware. But again, I don't think it can be based on servers, or else we'll never get to hundreds of thousands of players. It just doesn't work like that.

GamesBeat: Is it starting to look more like a decentralized blockchain infrastructure?

PlayerUnknown Productions team.

Greene: No.
It's decentralized in the sense of that word. I still think federated is better than decentralized. It achieves the same general goals. There was that interview I did a year ago with Nathan where he asked me about blockchain, and then the next day it was "PUBG guy making blockchain game!" That filled me with joy.

Blockchain or hashgraph or whatever, decentralized ledgers are useful in certain regards, especially when you're trying to build a decentralized network. Whether we'll use them, we don't know. We're years away from actively investigating that. It's an interesting space, but I don't see us using it in a similar way to how it's been used so far. As a tech stack or a tech layer it's interesting, but it's not something I'm going to build games on. I don't get that part. I'm building our own engine. It may incorporate some level of the tech as a layer to facilitate digital bookkeeping, but for me, that's about the extent of its usefulness.

GamesBeat: Are you confident in the ability of a peer-to-peer system to handle something so large?

Greene: Just brash confidence, right? With reckless abandon I say yes. I think we've seen, with BitTorrent and blockchain, that decentralized peer-to-peer can be secure. There are some new blockchains that do this kind of self-auth stuff quite well. I'm relatively confident, as confident as I can be with the knowledge I have, that something will be there that can work.

Because we're not building a game, so to speak, we're building a world, there's certain... we don't have to make it as performant, for example, as an FPS game. There are certain things we don't need to ensure at that level. But then if you want to have an FPS game within our world, we'll probably have to use a more known network protocol to enable a good experience there.

GamesBeat: What if the player is requesting a certain world? You have a great wilderness world, but I want a city. Can you generate that for me?
Instead of getting a random world, can they wish for a certain kind of world?

Greene: With Preface, everyone gets the same world. With Artemis, everyone will get the same world. If you want to create your own world, the tech stack will be there for you to do that. Maybe we'll provide a way where you can give us some money and we can create a world for you. I don't know. This is 10 years away. But for me it's always been like Minecraft. We'll give you Minecraft survival. You can go there, explore, create, do things in the world using the tools we provide, but if you want to create your own world, you have to put it together yourself, host it from your own machine, rather than relying on us.

We'll provide one layer, and experiences for lots of parts of the world, but you won't be creating a new world when you press play locally. You'll just be entering our world. Also, it may not be just our browser that you use to enter this world. Maybe someone has already created a new browser, better than the one we have, that allows you to do more in the world.

GamesBeat: Do you think that your world is going to be a contiguous world, an actual 3D planet, as opposed to something like Second Life, which is this collection of places you can go, but not the map of a world?

Greene: I would like our world to be contiguous. I would like it to seem to be the one world. But again, I don't know. Ultimately I want to create a contiguous world. That's what I would like to do. I would like something like what you see in the background, a massive world that's there to explore. There's lots of stuff to do. People can do whatever they want with it. Great. That's the aim. Let's talk again in a few years and see where it's going. But that's the aim: to provide a contiguous, unique 3D planet that allows you to spawn at various locations and create some stuff. It might have some urbanization. Early on it'll probably have very little.
But as we add more systems it should get more interesting.

PlayerUnknown Productions is generating terrain on a massive scale.

GamesBeat: Would you get something like the actual physics of the Earth?

Greene: Why not? Exactly. Then maybe we have a more extreme world, or a more playful world. It should be easy sliders for me. That's ultimately what we want to create with Melba. It should be that easy. We can just change a slider and the gravity changes. The world is created in real time, so if the data slightly changes, we should be able to do that.

GamesBeat: I think I know the answer to this, but others might be wondering. How do you build something this big without 10,000 game developers?

Greene: That was always the aim. When we sat down to do a 100-kilometer-by-100-kilometer map initially, when I was still at Krafton, we discovered, okay, you need that many game devs to build that world, because it takes so much time. That's why we tried to solve: how do you create a world in real time and generate it? That's how we're doing it. We already have the terrain part of that solved. We still have to figure out how you store persistent data in an efficient way, but at least we've solved the terrain generation part.

Now comes the gameplay and other systems. But since they're always systemic, they're pretty simple, especially in the real world. I hesitate to say I don't see this as much of a problem, but I think we're solving the bigger problems. The terrain was a big challenge. We've solved it in a pretty unique way, in a breakthrough way. There's still a lot to do, a lot I don't know, but I think the vision is clear. I'm confident about getting there.

GamesBeat: Financially, is your situation still pretty similar to what it was a year ago? You had your own money. You had money from a couple of companies.

Greene: We have funding to get us through launch and after.
Of course we would like more money, but we prefer to make that from selling the game and using it to reinvest in the studio, rather than looking for another round. My aim with all of this, always, is to make sure the team can pursue the vision without having to worry about just pumping out products for sale. Whatever we choose to do moving forward, it's always with that priority in mind. I have to give the team that safe space to dream, to be able to be psychologically safe. This is a good place to work. We're doing some good stuff. We've achieved that pretty well over the last year. People feel good coming to work and excited about the project. I want to continue that. We need to sell games, but we're pretty good right now.

GamesBeat: When you look down at the micro level of things like the cabin you had, it was pretty detailed in there. On that side, do you envision... do you have to have an army of creators making these small things that could be useful for players in this kind of world? How much work is that?

Greene: I'd love for our art director to give you a proper answer on this, but it's more that the tools these days, for example Houdini, are allowing us to do a lot more variation on stuff like cabins. Ultimately there will be some kind of blueprint that can generate multiple different variations. We have something like 300 variations of the cabin spawned across the world, because it's relatively easy to do. It doesn't take a lot of dev time. The cabins still look pretty good. With the variation they're relatively believable.

It does take time. I'm not going to say it doesn't take time. But I'm impressed by how far they've come in the last six months. When Petter, our producer, joined about nine months ago, he asked, "Where's the build? Where can I play the game?" There weren't many responses. Within a week he got a playable build up and running. Since then, the progress has been remarkable.
We have a game that I get excited to start up, excited to run and try to find my way through. I can't wait to get it in the hands of more people.

GamesBeat: It sounded like one thing you were asking players to give feedback on was the level of detail in the world, whether it was enough. Do you think you'll have a difference in the quality of what you can generate compared to the quality they'd expect in single-player Unreal Engine 5 games?

Greene: I think it looks pretty good already. The forest landscapes... we still need some more detail, for sure. Especially at the terrain level, to make it a bit smoother. But it's keeping me happy. I'm pretty pleased with how it looks. The forests look natural enough. It's still early days. We still have six months of work to focus down on the look and feel. But I'm pretty happy with what we have already. I think players should be excited to explore the world. There's enough detail already that it doesn't look bad. Let's put it that way.

GamesBeat: The Flight Simulator people said that compared to 2020, the 2024 game has 4,000 times more detail in the landscape. That suggests a rate of progress they can continue to ride on. Is that something you can do? If players do demand it, is that a curve you can ride in some way?

Greene: We're trying to build the engine in a very generic way, so that as new tech comes on stream, we should be able to update that part or add it in. It shouldn't be much of a problem. The world we're building in Prologue behind me, we've already gone through various iterations on the terrain uprezzing tech. We've already gotten it down to finer detail. As our agents improve, as the training improves, it will get better and better. As you've seen with a lot of AI image generation and video generation, it will always improve. We're building the engine with that in mind, that it will constantly be iterated on. If a new thing comes online, we should be able to adopt it as quickly as possible.

If people want more detail, sure.
I don't know if you've played the playtest yet, the build, but I'm pretty happy with how the world looks. It's a bit rough still, but the forests look pretty good. I'm excited.

GamesBeat: Well, I'm still very impressed with the scope of the ambition here.

Greene: I try to be consistent with my madness, right?

GamesBeat: Would you have advice for people around sticking with their ambitions?

Greene: Just be stubborn. Or, well, no. Someone told me I'm not stubborn; I'm single-minded. I'm in a privileged position to be able to do this. I know the games space right now is not the most wonderful place to work. There's been a shit-ton of layoffs. There's this conglomeration of IP where studios are just being thrown out the door. We're in a privileged place right now, that we can pursue this and have me in a position where I don't have to worry about anything else other than pursuing it. But being single-minded about what you do: if someone tells me no, I look for a way around it. If you really believe and think it's reasonable and possible, then you should pursue it.

There are always going to be people that tell you no. Like you said about game designers who've decided that games of 1,000 people are probably not going to be interesting. They said that about games of 100 people, and now those are some of the most popular games out there. If you're sure about something, if you're confident and optimistic, just pursue it. Be single-minded about it.

That's not very wise stuff. That's what everyone says. It's hard, though. You're going to get knocked down a lot. But it's having that anger inside you, the spite inside you, to say, "I'm going to prove you wrong." Just going and doing it. It takes a lot of work. We were lucky with battle royale. It took about three years to form a genre. Counter-Strike took a lot longer. DOTA took some time as well. Things take years to cement and become something. That's the other thing to remember. It doesn't happen overnight.
It might seem like it does, but it took me a year and a half or two years to make sure battle royale was in a place where it was picked up by someone bigger and went somewhere crazy. It does take time. Don't give up. Keep going.

GamesBeat: The metaverse seemed to inspire a lot of people, including you, some years ago. It's gone out of fashion now. Do you still believe in the metaverse, or has your view of that changed?

Greene: I just don't see the metaverse that everyone else is building. This idea that it's an IP bubble... even in the interviews that have been going around, the claim is that the biggest challenge is the business-to-business side. The metaverse isn't controlled by companies. It's not my metaverse and your metaverse and this metaverse and that metaverse. It's the metaverse, I believe. That's only achievable if someone builds an open-source platform or protocol that everyone can use. There are no partnerships needed. It's just there, like HTTP. We tried to monetize that with AOL and other things, but really the metaverse just has to be an open-source platform.

That's what I'm trying to provide with Melba, which is just this open-source tool that creates digital places, much like HTTP generates web pages. That's where I think the metaverse is. I haven't gone off it. I'm still plugging forward toward it. I think that's what it should be, rather than what everyone else is trying to build, which seems to be just a funnel to sell you skins.

I don't think we should be thinking about what fits in the world. There's always going to be a joker in a crazy costume running the ultramarathon. This world might have billboards put up because someone can afford to do it. This is a beautiful world. What people make of it? Well, we don't know. But let's see.

Daily insights on business use cases with VB Daily: If you want to impress your boss, VB Daily has you covered.
We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy. Thanks for subscribing. Check out more VB newsletters here.
-
VENTUREBEAT.COM: Perplexity's Carbon integration will make it easier for enterprises to connect their data to AI search. With the acquisition of Carbon, Perplexity will expand the knowledge pool powering its AI search engine, making its responses more relevant. Read More
-
VENTUREBEAT.COM: Hugging Face shows how test-time scaling helps small language models punch above their weight. Given enough time to "think," small language models can beat LLMs at math and coding tasks by generating and verifying multiple answers. Read More
-
VENTUREBEAT.COM: My favorite games of 2024 | The DeanBeat. So many games. So little time. Seems like I say that every year, as my pile of shame gets bigger. But I had enough time to pick my favorite games. Read More
-
VENTUREBEAT.COM: OpenAI confirms new frontier models o3 and o3-mini. OpenAI has just confirmed that it is releasing new reasoning models named o3 and o3-mini, successors to the o1 and o1-mini models. Read More
-
VENTUREBEAT.COM: Building giant and ambitious games | Brendan Greene interview. Elvis Presley once said, "Ambition is a dream with a V8 engine." Brendan Greene, the creator of PUBG, still has a lot of ambition. Read More
-
VENTUREBEAT.COM: Stable Diffusion 3.5 hits Amazon Bedrock: What it means for enterprise AI workflows. Stability AI's CEO drives the enterprise AI focus home as the flagship Stable Diffusion models land on Amazon Bedrock. Read More
-
VENTUREBEAT.COM: Google unveils new reasoning model Gemini 2.0 Flash Thinking to rival OpenAI o1. Unlike competitor reasoning model o1 from OpenAI, Gemini 2.0 enables users to access its step-by-step reasoning through a dropdown menu. Read More
-
VENTUREBEAT.COM: Gamefam closes year of growth with 5 of top 15 branded games on Roblox. Gamefam said that its branded games on Roblox and Fortnite have exceeded 2.7 billion brand engagements in 2024. Read More
-
VENTUREBEAT.COM: SpongeBob SquarePants is the latest icon to join the UEFN platform. Zoned and Paramount have brought SpongeBob to Fortnite's UEFN platform in four new UGC-based experiences. Read More
-
VENTUREBEAT.COM: ChatGPT adds more PC and Mac app integrations, getting closer to piloting your computer

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

OpenAI has expanded the number of applications its desktop apps can work with, including allowing Advanced Voice Mode to work with other apps, and is moving closer to ChatGPT using computers.

The desktop app introduced integrations in November with an initial four applications. During Day 11 of its 12 Days of OpenAI event, OpenAI announced several new integrated development environments (IDEs), terminals and text apps it will support.

ChatGPT now supports BBEdit, MATLAB, Nova, Script Editor and TextMate as IDEs; VS Code forks such as VSCode Insiders, VSCodium, Cursor and Windsurf; and the JetBrains family of IDEs, including Android Studio, AppCode, CLion, DataGrip, GoLand, IntelliJ IDEA, PhpStorm, PyCharm, RubyMine, RustRover and WebStorm. It also added the Warp and Prompt terminal apps as integrations. These applications join VS Code, Xcode, Terminal, iTerm 2 and TextEdit as integrated apps.

But coding applications won't be the only applications that ChatGPT desktop apps can access. OpenAI also added Apple Notes, Notion and Quip to its integrations. Advanced Voice Mode can work with these applications, taking into account the context of projects in the integrations.

OpenAI emphasized that users must give ChatGPT permission to access these applications.

Letting ChatGPT use your computer for you

App integrations with AI chatbots are, of course, nothing new. In October, GitHub Copilot added coding platform integrations. Connecting applications to ChatGPT or Copilot brings context from those platforms into the chat experience.
Developers can prompt ChatGPT for coding help on a project they have in VS Code, and the chatbot understands what they've been working on.

Kevin Weil, chief product officer at OpenAI, said during a live stream that improving the desktop app will help the company get closer to a more agentic user experience for ChatGPT.

"We've been putting a lot of effort into our desktop apps," said Weil. "As our models get increasingly powerful, ChatGPT will more and more become agentic. That means we'll go beyond just questions and answers; ChatGPT will begin doing things for you."

He added that the desktop apps are a big part of that transformation.

"Being a desktop app, you can do much more than you can in just a browser tab," he said. "That includes things like, with your permission, being able to see what's on your screen and being able to automate a lot of the work you're doing on your desktop. We'll have a lot more to say on that as we go into 2025."

If OpenAI lets ChatGPT see more of your computer, ChatGPT will get closer to Anthropic's Claude Computer Use feature, which allows Claude to click around a person's computer, navigate screens and even type text.

OpenAI already announced a fairly similar feature for the mobile version of ChatGPT, although the chatbot cannot yet access computers or phones the same way. Users can share their screens with the chatbot so it can see what they're reading or looking at. Microsoft and Google have also developed comparable features with Copilot Vision and Project Astra.

How to access

On macOS, users who want to open ChatGPT while using other applications can press Option + Space to pull up ChatGPT and choose the application they need through a button on the chat screen. Another shortcut, Option + Shift + 1, brings up the topmost application in use.

From that window, users can also access Advanced Voice Mode in the same way. Voice Mode automatically detects context from the application.

Integrations are available to ChatGPT Plus, Pro, Team, Enterprise and Edu users.
However, Enterprise and Edu subscribers must ask their IT administrators to turn on the feature.
-
VENTUREBEAT.COMOpenAI's new hotline: Chat with ChatGPT anytime, anywhereUsers can now give ChatGPT a ring and ask questions. (800)-ChatGPT works from any device that can make calls, even those without a data plan.
-
VENTUREBEAT.COMEA Sports College Football 25 climbs sales rankings in quiet November | CircanaEA Sports College Football 25 posts banner sales, according to industry-tracking firm Circana, in an otherwise fairly mundane month.
-
VENTUREBEAT.COMBeyond LLMs: How SandboxAQ's large quantitative models could optimize enterprise AIAlphabet spinout SandboxAQ is advancing large quantitative models to help enterprise AI optimize value creation.
-
VENTUREBEAT.COMOpenAI opens up its most powerful model, o1, to third-party developersOn the ninth day of its holiday-themed stretch of product announcements, known as "12 Days of OpenAI," the company is rolling out its most advanced model, o1, to third-party developers through its application programming interface (API). This marks a major step forward for developers looking to build new advanced AI applications, or to integrate the most advanced OpenAI tech into their existing apps and workflows, whether enterprise- or consumer-facing. If you aren't yet acquainted with OpenAI's o1 series, here's the rundown: announced back in September 2024, it is the first in a new family of models from the ChatGPT company, moving beyond the large language models (LLMs) of the GPT family and offering reasoning capabilities. Basically, the o1 family of models (o1 and o1-mini) takes longer to respond to a user's prompts, but the models check themselves while formulating an answer to see if they're correct, helping them avoid hallucinations.
At the time, OpenAI said o1 could handle more complex, PhD-level problems, something since borne out by real-world users as well. While developers previously had access to a preview version of o1 on which they could build their own apps (say, a PhD advisor or lab assistant), the production-ready release of the full o1 model through the API brings improved performance, lower latency and new features that make it easier to integrate into real-world applications. OpenAI had already made o1 available to consumers through its ChatGPT Plus and Pro plans roughly two and a half weeks ago, adding the capability for the models to analyze and respond to imagery and files uploaded by users. Alongside today's launch, OpenAI announced significant updates to its Realtime API, along with price reductions and a new fine-tuning method that gives developers greater control over their models.

The full o1 model is now available to developers through OpenAI's API

The new o1 model, available as o1-2024-12-17, is designed to excel at complex, multi-step reasoning tasks. Compared to the earlier o1-preview version, this release improves accuracy, efficiency and flexibility. OpenAI reports significant gains across a range of benchmarks, including coding, mathematics and visual reasoning tasks. For example, coding results on SWE-bench Verified increased from 41.3 to 48.9, while performance on the math-focused AIME test jumped from 42 to 79.2. These improvements make o1 well suited for building tools that streamline customer support, optimize logistics or solve challenging analytical problems. Several new features enhance o1's functionality for developers. Structured Outputs allow responses to reliably match custom formats such as JSON schemas, ensuring consistency when interacting with external systems. Function calling simplifies the process of connecting o1 to APIs and databases.
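Taken together, a request to the full model might look like the following sketch. The "o1-2024-12-17" model id and the feature names come from this announcement; the `response_format` and `reasoning_effort` field shapes follow OpenAI's published API, while the "route_plan" schema is a hypothetical example of ours, so verify the exact fields against the current API reference before relying on them.

```python
import json

# Sketch of a Chat Completions request body for the full o1 model.
# The response_format block is Structured Outputs: it forces the reply to
# match a JSON schema so downstream systems can parse it deterministically.
def build_o1_request(question: str) -> dict:
    return {
        "model": "o1-2024-12-17",
        "messages": [{"role": "user", "content": question}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "route_plan",  # hypothetical schema for illustration
                "schema": {
                    "type": "object",
                    "properties": {
                        "steps": {"type": "array", "items": {"type": "string"}},
                        "total_cost_usd": {"type": "number"},
                    },
                    "required": ["steps", "total_cost_usd"],
                    "additionalProperties": False,
                },
            },
        },
        # Trades response latency against reasoning depth ("low"/"medium"/"high").
        "reasoning_effort": "medium",
    }

payload = build_o1_request("Plan the cheapest two-leg shipping route.")
body = json.dumps(payload)  # what you would POST to the chat completions endpoint
```

In practice this dict would be sent, with an Authorization header, to the chat completions endpoint, or assembled through one of the official SDKs.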
The model can also reason over visual inputs, opening up use cases in manufacturing, science and coding. Developers can additionally fine-tune o1's behavior using the new reasoning_effort parameter, which controls how long the model spends on a task to balance performance and response time.

OpenAI's Realtime API gets a boost to power intelligent, conversational voice and audio AI assistants

OpenAI also announced updates to its Realtime API, designed to power low-latency, natural conversational experiences like voice assistants, live translation tools or virtual tutors. A new WebRTC integration simplifies building voice-based apps by providing direct support for audio streaming, noise suppression and congestion control. Developers can now integrate real-time capabilities with minimal setup, even in variable network conditions. OpenAI is also introducing new pricing for its Realtime API, reducing costs by 60% for GPT-4o audio, to $40 per one million input tokens and $80 per one million output tokens. Cached audio input costs are reduced by 87.5%, now priced at $2.50 per one million input tokens. To further improve affordability, OpenAI is adding GPT-4o mini, a smaller, cost-efficient model priced at $10 per one million input tokens and $20 per one million output tokens. Text token rates for GPT-4o mini are also significantly lower, starting at $0.60 per million input tokens and $2.40 per million output tokens. Beyond pricing, OpenAI is giving developers more control over responses in the Realtime API. Features like concurrent out-of-band responses allow background tasks, such as content moderation, to run without interrupting the user experience.
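As a quick sanity check on those rates, the following back-of-the-envelope calculator converts per-million-token prices into a session cost. The rates are the ones quoted above; actual billing may differ, so treat this as arithmetic, not a pricing reference.

```python
# Audio rates quoted in the announcement, in USD per one million tokens.
RATES = {
    "gpt-4o-audio":      {"input": 40.00, "cached_input": 2.50, "output": 80.00},
    "gpt-4o-mini-audio": {"input": 10.00, "output": 20.00},
}

def session_cost(model: str, input_tok: int, output_tok: int, cached_tok: int = 0) -> float:
    """Cost in USD for one session's token usage at the quoted rates."""
    r = RATES[model]
    cost = (input_tok * r["input"]
            + output_tok * r["output"]
            + cached_tok * r.get("cached_input", r["input"])) / 1_000_000
    return round(cost, 4)

# A session with 50k audio input tokens and 20k output tokens on GPT-4o:
print(session_cost("gpt-4o-audio", 50_000, 20_000))  # 3.6
```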
Developers can also customize input contexts to focus on specific parts of a conversation and control when voice responses are triggered, for more accurate and seamless interactions.

Preference fine-tuning offers new customization options

Another major addition is preference fine-tuning, a method for customizing models based on user and developer preferences. Unlike supervised fine-tuning, which relies on exact input-output pairs, preference fine-tuning uses pairwise comparisons to teach the model which responses are preferred. This approach is particularly effective for subjective tasks such as summarization, creative writing, or scenarios where tone and style matter. Early testing with partners like Rogo AI, which builds assistants for financial analysts, shows promising results: Rogo reported that preference fine-tuning helped its model handle complex, out-of-distribution queries better than traditional fine-tuning, improving task accuracy by over 5%. The feature is now available for gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18, with plans to expand support to newer models early next year.

New SDKs for Go and Java developers

To streamline integration, OpenAI is expanding its official SDK offerings with beta releases for Go and Java. These SDKs join the existing Python, Node.js and .NET libraries, making it easier for developers to interact with OpenAI's models across more programming environments. The Go SDK is particularly useful for building scalable backend systems, while the Java SDK is tailored for enterprise-grade applications that rely on strong typing and robust ecosystems. With these updates, OpenAI is offering developers an expanded toolkit to build advanced, customizable AI-powered applications.
Whether through o1's improved reasoning capabilities, Realtime API enhancements or fine-tuning options, OpenAI's latest offerings aim to deliver both improved performance and cost-efficiency for businesses pushing the boundaries of AI integration.
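The pairwise-comparison idea behind preference fine-tuning is easiest to see in the training data itself: the same prompt paired with a preferred and a non-preferred completion. The sketch below shows one plausible JSONL record; the field names mirror OpenAI's announced preference fine-tuning format, but confirm them against the current fine-tuning guide before uploading real data, and note the example texts are invented.

```python
import json

# One pairwise-preference training record, serialized as a single line of a
# .jsonl file. Field names follow OpenAI's announced format (verify before use).
def preference_record(prompt: str, preferred: str, rejected: str) -> str:
    record = {
        "input": {"messages": [{"role": "user", "content": prompt}]},
        "preferred_output": [{"role": "assistant", "content": preferred}],
        "non_preferred_output": [{"role": "assistant", "content": rejected}],
    }
    return json.dumps(record)

line = preference_record(
    "Summarize the quarterly report in two sentences.",
    "Revenue rose 12% on strong cloud demand; margins held steady.",  # invented
    "The report contains many numbers about the quarter.",            # invented
)
```

A training file is just many such lines, one comparison per line; the model learns from which side of each pair was preferred rather than from a single "correct" answer.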
-
VENTUREBEAT.COMUAE's Falcon 3 challenges open-source leaders amid surging demand for small AI modelsThe UAE-backed institute has released Falcon 3 in four different sizes with the goal of democratizing access to advanced AI capabilities.
-
VENTUREBEAT.COMAgave Games raises $18M in funding to expand Find the Cat mobile titleAgave Games announced today that it has secured $18 million in funding as part of its Series A round, following a successful seed round earlier this year. Felix Capital and Balderton Capital co-led the round, with E2VC participating. According to Agave, it plans to use the funding to expand its popular mobile puzzle title, Find the Cat, as well as explore development of new games and expand its team at its Istanbul location. Find the Cat is a casual puzzle game where players must, as the name implies, find the cat. According to Agave, the game had over 10 million downloads in its first quarter on the market. Alper Oner, Agave Games CEO, said in a statement: "This investment allows us to double down on Find the Cat's success while exploring new ideas that will redefine casual puzzle gaming. We're excited to take our studio to the next level by expanding our team and portfolio with games that set new standards within established genres." As stated, the studio is planning to develop new casual puzzle titles to follow up on Find the Cat, with plans to launch at least two of them in the coming year. According to Agave, it plans to focus on innovative gameplay mechanics and social features, while also using a dual monetization model of in-app advertising and in-app purchases. Baran Terzioglu, Agave's CPO and co-founder, said in a statement that the new games will "push the boundaries of what players expect from casual games, introducing new concepts that we believe will resonate globally." Rob Moffat, partner at Balderton Capital, said in a separate statement: "Find the Cat is an overnight success two years in the making. Agave have put together an exceptional team and worked super hard to identify and optimize new genres and produce polished games. Find the Cat is just the start; I am excited to see more of their pipeline go live in 2025."
-
VENTUREBEAT.COMCasual and cute exploration in Revenge of the Savage Planet | hands-on previewGamesBeat had a chance to play Revenge of the Savage Planet in a hands-on preview.
-
VENTUREBEAT.COMHow to get paid way more in 2025To secure a pay raise, finely tuned negotiation tactics are generally required, but advice on negotiation is always changing.
-
VENTUREBEAT.COMSlack's AI agents promise to reshape productivity with contextual powerSlack CPO Rob Seaman reveals how Agentforce 2.0 will transform workplace AI by leveraging contextual intelligence and deep platform integration, changing how enterprises use AI agents for automation and collaboration.
-
VENTUREBEAT.COMMidjourney adds Pinterest-like moodboards and support for multiple custom AI image modelsMidjourney, the popular AI image generator with more than 19 million users (including several of us at VentureBeat), has introduced new features to enhance user customization. Today, the small company launched Pinterest-inspired moodboards and support for multiple personalization profiles, meaning users can now create and switch between multiple custom versions of Midjourney's latest image generation AI model, version 6.1, tailored to their unique aesthetics. The updates aim to streamline the creative process for individuals and teams, making it easier to integrate personalized styles across various projects.

What are Midjourney's new moodboards?

The standout feature, moodboards, enables users to upload curated collections of images that act as inspiration for generating new art. The AI model adapts to the diversity and complexity of the uploaded images, creating a unique style profile that remixes their visual elements. This addition is complemented by the ability to create multiple personalization profiles, allowing users to organize and deploy their different styles seamlessly. Setting up a custom model has also become significantly faster, with the company claiming a fivefold improvement in image ranking speed. The ranking system is how Midjourney trains a custom model on your behalf: you navigate to Midjourney's image ranker, pick which of a pair of random images you like best, then continue rating pairs until you've reached a threshold where the model understands what kinds of images and aesthetics you like. Users now need just 40 ratings to begin creating a profile, with optimal stability achieved at 200.
Previously, you needed 200 ratings to personalize the model. While heavy users may still prefer contributing thousands of ratings for maximum precision, the streamlined process lowers the barrier to entry for new users. Users can begin rating pairs of images at midjourney.com/personalize.

Better organization features

The updates also introduce organizational improvements. Users can now name their profiles, designate one or multiple profiles as defaults, and track all images associated with specific profiles. Midjourney emphasizes that these features are particularly beneficial for those juggling multiple projects or collaborating with others. David Holz, founder of Midjourney, shared the announcement on the company's Discord server earlier today. He explained the motivation behind the updates, expressing a desire to make personalization accessible for a broader range of creative workflows. Holz highlighted that the new tools allow users to take control of their projects while maintaining the flexibility to work with diverse creative teams. As Midjourney continues to refine its personalization infrastructure, the company is soliciting user feedback through its ideas-and-features channel. These developments highlight the platform's commitment to empowering creators with tools that are both intuitive and powerful, marking another step forward in the evolution of AI-assisted creativity. The additions come as expected following Midjourney's announcement last week of Patchwork, an experimental new collaborative image-making whiteboard feature.
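Midjourney has not published how it converts these pairwise picks into a style profile, but scoring items from "which of these two do you prefer?" votes is a well-known technique. As a purely illustrative sketch (not Midjourney's actual method), an Elo-style update nudges the chosen image's score up and the other's down, so that after enough ratings the scores reflect your aesthetic:

```python
# Elo-style update from one pairwise preference: the winner gains rating,
# the loser gives it up, scaled by how surprising the outcome was.
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)  # smaller gain when the win was expected
    return winner + delta, loser - delta

scores = {"img_a": 1000.0, "img_b": 1000.0}
# User picks img_a over img_b:
scores["img_a"], scores["img_b"] = elo_update(scores["img_a"], scores["img_b"])
```

Repeated over 40 to 200 comparisons, this kind of scheme separates images you consistently favor from those you reject, which is the general shape of the signal a personalization profile needs.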
-
VENTUREBEAT.COMWe've come a long way from RPA: How AI agents are revolutionizing automationFrom chatbots to retrieval-augmented generation applications to autonomous multi-agent AI: what every enterprise leader needs to know.
-
VENTUREBEAT.COMSynthetic data has its limits: why human-sourced data can help prevent AI model collapseWith model degradation, AI development could stall, leaving AI systems unable to ingest new data and essentially becoming stuck in time.
-
VENTUREBEAT.COMThe best of The Game Awards and the redemption of Geoff Keighley | The DeanBeatGeoff Keighley redeemed himself with his tenth-anniversary show for The Game Awards.
-
VENTUREBEAT.COMPika 2.0 launches in wake of Sora, integrating your own characters, objects, scenes in new AI videosAdvanced image recognition tech ensures these components are seamlessly integrated into AI-generated videos, giving creators more control.
-
VENTUREBEAT.COMShutterstock pioneers research license model with Lightricks, lowering barriers to AI training dataShutterstock launches an innovative "research license" model with Lightricks, making ethical AI training data more accessible for startups while ensuring fair compensation for creators.
-
VENTUREBEAT.COMOpenAI launches ChatGPT Projects, letting you organize files, chats in groupsProjects in ChatGPT lets users create folders and add conversations and documents, bringing these capabilities together in one place.
-
VENTUREBEAT.COMEsports World Cup Foundation offers $20M in partner club expansionThe Esports World Cup Foundation is expanding its Club Partner Program to include 40 clubs and is offering $20 million to invest.
-
VENTUREBEAT.COMCohere's smallest, fastest R-series model excels at RAG, reasoning in 23 languagesCohere's Command R7B uses RAG, features a context length of 128K, supports 23 languages and outperforms Gemma, Llama and Ministral.
-
VENTUREBEAT.COMChatGPT gets screensharing and real-time video analysis, rivaling Gemini 2OpenAI now lets users video chat with ChatGPT in advanced voice mode, and the chatbot will respond to real-time images.
-
VENTUREBEAT.COMImmutable claims more wins than other Web3 game companies with 250 supported in 2024Immutable says it has signed 250 Web3 games to its platform this year, more than any other Web3 company.
-
VENTUREBEAT.COMMidjourney is launching a multiplayer collaborative worldbuilding tool called PatchworkThe news comes as other AI researchers and big tech companies seek to develop AI that can create immersive, navigable 3D worlds.
-
VENTUREBEAT.COMOpenAI rolls out ChatGPT for iPhone in landmark AI integration with AppleApple and OpenAI are partnering to bring ChatGPT to the iPhone, transforming Siri, camera and productivity features in iOS 18.2.
-
VENTUREBEAT.COMHere's how OpenAI o1 might lose ground to open source modelso1 does not reveal its reasoning chain, which makes it difficult to get consistent results and to correct the model's responses and logic.
-
VENTUREBEAT.COMSingapore startup Sapient enters global enterprise AI race with new model architecturesSapient Intelligence, Singapore's first foundation model AI startup, has announced the successful closure of its seed funding round, raising $22 million at a valuation of $200 million. Backed by prominent investors including Vertex Ventures, Sumitomo Group and JAFCO, the company is hoping to carve a distinctive path in AI development, addressing what it sees as fundamental shortcomings in GPT-style models. "The goal of the startup, really, is to make a new generation of foundational model architectures to solve really complicated and long-horizon reasoning tasks that are really challenging for large language models (LLMs), especially for GPT architectures, to solve," said co-founder and CEO Austin Zheng in a recent interview with VentureBeat conducted over video chat.

New architectures beyond traditional transformers

Traditional GPT-style models rely on autoregressive methods, which generate predictions by building sequentially on prior outputs. While effective for general tasks, this approach struggles with multi-step reasoning and complex problem-solving. "With current models, they're all trained with an autoregressive method, and with that, the benefit is it's easier for the model to converge on a general task," Zheng explained. "So it sounds really smart, it can solve a lot of different tasks, and it has a really good generalization capability, but it's really, really difficult for them to solve complicated, long-horizon, multi-step tasks. And that's kind of where hallucination comes in," Zheng said.
Sapient's answer is a novel model architecture inspired by neuroscience and mathematics, blending transformer components with recurrent neural network structures and mimicking how the human brain works. "The model will always evaluate the solution, evaluate options and give itself a reward based on that," Zheng said. "Also, the model can continuously calculate something recurrently until it gets to a correct solution. With that, our agent will be able to deploy to an enterprise or production environment, and continuously learn and improve itself by trial and error, and learn to be an expert on the existing code base." This design underpins the flexibility and power of Sapient's models, enabling them to tackle a broad range of tasks with precision and reliability. It also puts them up against the new generation of reasoning models from OpenAI and its o1 series, as well as other competitors from China.

Excelling in benchmarks and beyond

The company's innovations are reflected in benchmark performance. "The first benchmark we use is actually Sudoku," Zheng told VentureBeat. "Right now, our model is the best performing neural network in terms of solving Sudoku on the market: 95% accuracy without using intermediate tools and data." According to Zheng, while other leading models needed to train on intermediate steps to solve the popular number puzzle, Sapient provided its model only with unfinished Sudoku boards, the rules and the final solutions, leaving it to infer on its own how to solve them through trial and error. Similarly, Sapient's models have excelled in tasks like two-dimensional navigation and complex mathematical problem-solving, consistently outperforming competing approaches. Training these models is another area where Sapient distinguishes itself. "Unlike traditional models that require vast amounts of high-quality, step-by-step data, our approach needs only question-and-answer pairs."
"This significantly lowers the barrier for training complex models," Zheng said. By leveraging synthetic data, Sapient reduces the dependency on curated datasets, creating scalable and efficient training pipelines.

Practical applications: from code to robots

Sapient's initial focus is on real-world applications, starting with enterprise coding and robotics. Its autonomous coding agents aim to revolutionize how businesses manage their software development and maintenance needs. The company is already deploying an autonomous AI coding agent in Sumitomo's enterprise environments to learn the company's codebase and, ultimately, begin maintaining and contributing to it. Sapient aims to offer a similar service to other enterprise clients: what Zheng describes as smart, tailored "AI employees" and "AI software engineers" that can help them maintain, update and grow their existing tech stacks. Unlike Cognition's Devin, powered by GPT-4o, Sapient believes its coding AI agents will be able to work autonomously, without any human guiding the process or troubleshooting issues, save for supervisors checking over the work before it is pushed live. The company is also advancing embodied AI, designing models that enable robots to interact, learn and adapt in real time. "There are only a handful of startups working on understanding of the environment, planning of options and tasks, understanding what kinds of tasks are possible, and continuously improving itself on understanding the environment, the problem and the use cases," Zheng pointed out. "This will be our main focus for the next 1-2 years."

A global vision

Sapient is setting itself apart not just through technology but also through its global and inclusive approach. "There are very few AI startups at a foundational model level outside of China actually led by Asian founders," Zheng noted. "We really want to position ourselves as an international and research-oriented organization."
"But also, we want to be one of the first few Asian-led international research organizations that are solving really, really challenging problems, and we're seeing that coming to fruition as well." With offices in Singapore and plans for the Bay Area, the company is building an AI research lab to bring together diverse perspectives and talent. Its team reflects this ethos, comprising scientists and engineers from leading institutions like DeepMind, Anthropic and Microsoft AI. This diversity, combined with strong partnerships with Japanese investors like Sumitomo Group, positions Sapient as a unique player in the global AI ecosystem.

Targeting individuals and enterprises

Sapient's long-term vision is ambitious, targeting technology that can be applied with equally useful results to individuals and enterprises. "The goal at the very end will be to build a truly generalized agent that can actually solve day-to-day tasks for our users, an all-agent solution for a personal assistant and for solving all your tasks... that's where we are in terms of our technological goal and also our direction," Zheng said. This includes future public-facing products like autonomous coding agents and general-purpose personal assistants. For now, Sapient is focused on refining its technology and delivering enterprise-grade solutions. Pricing models are still being explored but may include licensing, subscription fees, or task-based charges tied to successful completions. As Sapient scales its operations and capabilities, it remains a company to watch in the rapidly evolving AI landscape.
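Sapient has not published its training loop, but learning Sudoku from unfinished boards, the rules and final solutions alone implies some automatic correctness signal during trial and error. A minimal version of such a check (our illustration, not Sapient's code) simply validates a completed board against the rules: every row, every column and every 3x3 box must contain the digits 1 through 9 exactly once.

```python
# Rule check for a completed 9x9 Sudoku board (list of 9 rows of 9 ints).
# A signal like this is enough to score candidate solutions without any
# step-by-step supervision, which is the setup the article describes.
def is_valid_solution(board: list[list[int]]) -> bool:
    full = set(range(1, 10))
    rows_ok = all(set(row) == full for row in board)
    cols_ok = all({board[r][c] for r in range(9)} == full for c in range(9))
    boxes_ok = all(
        {board[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == full
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows_ok and cols_ok and boxes_ok
```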
More Stories