• Upcoming (serious) Web performance boost

    UpcomingWeb performance boostBy:
    Adam Scott5 June 2025Progress ReportSometimes, just adding a compiler flag can yield significant performance boosts. And that just happened.For about two years now, all major browsers have supported WASMSIMD. SIMD stands for “Single instruction, multiple data” and is a technology that permits CPUs to do some parallel computation, often speeding up the whole program. And that’s exactly why we tried it out recently.We got positive results.The need for performance on the WebThe Web platform is often overlooked as a viable target, because of its less-than-ideal environment and its perceived poor performance. And the perception is somewhat right: the Web environment has a lot of security-related quirks to take into account—the user needs to interact with a game frame before the browser allows it to play any sound1. Also, due to bandwidth and compatibility reasons, you rarely see high-fidelity games being played on a browser. Performance is better achieved when running software natively on the operating system.But don’t underestimate the potential of the Web platform. As I explained in broad terms at the talk I gave at the last GodotCon Boston 2025, the Web has caught up a lot since the days of Flash games. Not only are there more people playing Web games every year, but standards and browsers improve every year in functionality and in performance.And that’s why we are interested in using WASM SIMD.WASM SIMD BenchmarksOur resident benchmark expert Hugo Locurcioran the numbers for us on a stress test I made. We wanted to compare standard builds to builds with WASM SIMD enabled.Note: You may try to replicate his results, but be aware that he has a beast of a machine. Here are his PC’s specifications:CPU: Intel Core i9-13900KGPU: NVIDIA GeForce RTX 4090RAM: 64 GBSSD: Solidigm P44 Pro 2 TBOS: LinuxI built a Jolt physics stress test from a scene initially made by passivestar. By spawning more and more barrels into the contraption, we can easily test the performance difference between the WASM SIMD build and the other.Without WASM SIMDWith WASM SIMDImprovementTest linksLinkLink-Firefox 1382×Firefox 13810.17×*Chromium 1341.37×Chromium 13414.17×**Please note that once the physics engine enters a “spiral of death”, it is common for the framerate to drop to single digits, SIMD or not. These tests don’t prove 10× to 15× CPU computing speed improvements, but rather that games will be more resilient to framerate drops on the same machine in the same circumstances. The 1.5× to 2× numbers are more representative here of the performance gains by WASM SIMD.What it means for your gamesStarting with 4.5 dev 5, you can expect your Web games to run a little bit more smoothly, without having to do anything. Especially when things get chaotic. It isn’t a silver bullet for poorly optimized games, but it will help nonetheless. Also, note that it cannot do anything for GPU rendering bottlenecks.Be aware that the stress tests are meant by nature to only test the worst case scenarios, so you may not see such large improvements in normal circumstances. But it’s nice to see such stark improvements when the worst happens.AvailabilityFrom here on out, the 4.5 release official templates will only support WebAssembly SIMD-compatible browsers in order to keep the template sizes small. We generally aim to maintain compatibility with the oldest devices we can. But in this case, the performance gains are too large to ignore and the chances of users having browsers that are that far out of date is too small relative to the potential benefits.If you need to use non-SIMD templates, don’t fret. You can always build the Godot Editor and the engine templates without WebAssembly SIMD support by using the wasm_simd=no build option.What’s next?As I wrote in my last blog post, we’re currently working very hard to make C#/.NET exports a reality. We do have a promising prototype, we just need to make sure that it’s production-ready.I also mentioned in that article that I wanted to concentrate on improving our asset loading game. Preloading an entire game before even starting it hinders the ability to use Godot for commercial Web games. Once something is implemented to improve that issue, count on me to share the news with you.It’s either that, or we return to the old days of spam-webpages using the “Congratulations, you won!” sound effect when you least expect it. ↩
    #upcoming #serious #web #performance #boost
    Upcoming (serious) Web performance boost
    UpcomingWeb performance boostBy: Adam Scott5 June 2025Progress ReportSometimes, just adding a compiler flag can yield significant performance boosts. And that just happened.For about two years now, all major browsers have supported WASMSIMD. SIMD stands for “Single instruction, multiple data” and is a technology that permits CPUs to do some parallel computation, often speeding up the whole program. And that’s exactly why we tried it out recently.We got positive results.The need for performance on the WebThe Web platform is often overlooked as a viable target, because of its less-than-ideal environment and its perceived poor performance. And the perception is somewhat right: the Web environment has a lot of security-related quirks to take into account—the user needs to interact with a game frame before the browser allows it to play any sound1. Also, due to bandwidth and compatibility reasons, you rarely see high-fidelity games being played on a browser. Performance is better achieved when running software natively on the operating system.But don’t underestimate the potential of the Web platform. As I explained in broad terms at the talk I gave at the last GodotCon Boston 2025, the Web has caught up a lot since the days of Flash games. Not only are there more people playing Web games every year, but standards and browsers improve every year in functionality and in performance.And that’s why we are interested in using WASM SIMD.WASM SIMD BenchmarksOur resident benchmark expert Hugo Locurcioran the numbers for us on a stress test I made. We wanted to compare standard builds to builds with WASM SIMD enabled.Note: You may try to replicate his results, but be aware that he has a beast of a machine. Here are his PC’s specifications:CPU: Intel Core i9-13900KGPU: NVIDIA GeForce RTX 4090RAM: 64 GBSSD: Solidigm P44 Pro 2 TBOS: LinuxI built a Jolt physics stress test from a scene initially made by passivestar. By spawning more and more barrels into the contraption, we can easily test the performance difference between the WASM SIMD build and the other.Without WASM SIMDWith WASM SIMDImprovementTest linksLinkLink-Firefox 1382×Firefox 13810.17×*Chromium 1341.37×Chromium 13414.17×**Please note that once the physics engine enters a “spiral of death”, it is common for the framerate to drop to single digits, SIMD or not. These tests don’t prove 10× to 15× CPU computing speed improvements, but rather that games will be more resilient to framerate drops on the same machine in the same circumstances. The 1.5× to 2× numbers are more representative here of the performance gains by WASM SIMD.What it means for your gamesStarting with 4.5 dev 5, you can expect your Web games to run a little bit more smoothly, without having to do anything. Especially when things get chaotic. It isn’t a silver bullet for poorly optimized games, but it will help nonetheless. Also, note that it cannot do anything for GPU rendering bottlenecks.Be aware that the stress tests are meant by nature to only test the worst case scenarios, so you may not see such large improvements in normal circumstances. But it’s nice to see such stark improvements when the worst happens.AvailabilityFrom here on out, the 4.5 release official templates will only support WebAssembly SIMD-compatible browsers in order to keep the template sizes small. We generally aim to maintain compatibility with the oldest devices we can. But in this case, the performance gains are too large to ignore and the chances of users having browsers that are that far out of date is too small relative to the potential benefits.If you need to use non-SIMD templates, don’t fret. You can always build the Godot Editor and the engine templates without WebAssembly SIMD support by using the wasm_simd=no build option.What’s next?As I wrote in my last blog post, we’re currently working very hard to make C#/.NET exports a reality. We do have a promising prototype, we just need to make sure that it’s production-ready.I also mentioned in that article that I wanted to concentrate on improving our asset loading game. Preloading an entire game before even starting it hinders the ability to use Godot for commercial Web games. Once something is implemented to improve that issue, count on me to share the news with you.It’s either that, or we return to the old days of spam-webpages using the “Congratulations, you won!” sound effect when you least expect it. ↩ #upcoming #serious #web #performance #boost
    Upcoming (serious) Web performance boost
    godotengine.org
    Upcoming (serious) Web performance boostBy: Adam Scott5 June 2025Progress ReportSometimes, just adding a compiler flag can yield significant performance boosts. And that just happened.For about two years now, all major browsers have supported WASM (WebAssembly) SIMD. SIMD stands for “Single instruction, multiple data” and is a technology that permits CPUs to do some parallel computation, often speeding up the whole program. And that’s exactly why we tried it out recently.We got positive results.The need for performance on the WebThe Web platform is often overlooked as a viable target, because of its less-than-ideal environment and its perceived poor performance. And the perception is somewhat right: the Web environment has a lot of security-related quirks to take into account—the user needs to interact with a game frame before the browser allows it to play any sound1. Also, due to bandwidth and compatibility reasons, you rarely see high-fidelity games being played on a browser. Performance is better achieved when running software natively on the operating system.But don’t underestimate the potential of the Web platform. As I explained in broad terms at the talk I gave at the last GodotCon Boston 2025, the Web has caught up a lot since the days of Flash games. Not only are there more people playing Web games every year, but standards and browsers improve every year in functionality and in performance.And that’s why we are interested in using WASM SIMD.WASM SIMD BenchmarksOur resident benchmark expert Hugo Locurcio (better known as Calinou) ran the numbers for us on a stress test I made. We wanted to compare standard builds to builds with WASM SIMD enabled.Note: You may try to replicate his results, but be aware that he has a beast of a machine. Here are his PC’s specifications:CPU: Intel Core i9-13900KGPU: NVIDIA GeForce RTX 4090RAM: 64 GB (2×32 GB DDR5-5800 CL30)SSD: Solidigm P44 Pro 2 TBOS: Linux (Fedora 42)I built a Jolt physics stress test from a scene initially made by passivestar. By spawning more and more barrels into the contraption, we can easily test the performance difference between the WASM SIMD build and the other.Without WASM SIMDWith WASM SIMDImprovement (approx.)Test linksLinkLink-Firefox 138(“+100 barrels” 3 times)2×Firefox 138(“+100 barrels” 6 times)10.17×*Chromium 134(“+100 barrels” 3 times)1.37×Chromium 134(“+100 barrels” 6 times)14.17×**Please note that once the physics engine enters a “spiral of death”, it is common for the framerate to drop to single digits, SIMD or not. These tests don’t prove 10× to 15× CPU computing speed improvements, but rather that games will be more resilient to framerate drops on the same machine in the same circumstances. The 1.5× to 2× numbers are more representative here of the performance gains by WASM SIMD.What it means for your gamesStarting with 4.5 dev 5, you can expect your Web games to run a little bit more smoothly, without having to do anything. Especially when things get chaotic (for your CPU). It isn’t a silver bullet for poorly optimized games, but it will help nonetheless. Also, note that it cannot do anything for GPU rendering bottlenecks.Be aware that the stress tests are meant by nature to only test the worst case scenarios, so you may not see such large improvements in normal circumstances. But it’s nice to see such stark improvements when the worst happens.AvailabilityFrom here on out, the 4.5 release official templates will only support WebAssembly SIMD-compatible browsers in order to keep the template sizes small. We generally aim to maintain compatibility with the oldest devices we can. But in this case, the performance gains are too large to ignore and the chances of users having browsers that are that far out of date is too small relative to the potential benefits.If you need to use non-SIMD templates, don’t fret. You can always build the Godot Editor and the engine templates without WebAssembly SIMD support by using the wasm_simd=no build option.What’s next?As I wrote in my last blog post, we’re currently working very hard to make C#/.NET exports a reality. We do have a promising prototype, we just need to make sure that it’s production-ready.I also mentioned in that article that I wanted to concentrate on improving our asset loading game. Preloading an entire game before even starting it hinders the ability to use Godot for commercial Web games. Once something is implemented to improve that issue, count on me to share the news with you.It’s either that, or we return to the old days of spam-webpages using the “Congratulations, you won!” sound effect when you least expect it. ↩
    Like
    Love
    Wow
    Sad
    Angry
    312
    · 0 Comentários ·0 Compartilhamentos ·0 Anterior
  • You Can Sign up Now to Try Opera’s Mysterious AI Browser

    The company behind the Opera browser is launching yet another AI tool with Opera Neon, an agentic AI browser. This basically means that it's a browser with an AI agent built in, which can go beyond answering questions and will purportedly be able to browse the internet for you to help you get various things done. This includes helping you plan trips, booking vacations, and even creating web apps with simple natural language prompts. Oddly enough, this isn't Opera's first go at agentic AI, as it follows the announcement for the standard Opera browser's Browser Operator tool. Technically, Browser Operator isn't released yet, but it seems the difference is that Neon's use cases will be a bit broader, as the AI will supposedly even able to generate content in the cloud while you're offline.The catch is that Neon isn't free, and is currently invite-only. Opera says it'll require a paid subscription when it launches, and while the company hasn't revealed the pricing or the launch date yet, you can join a waitlist to get notified about details closer to release, plus get in line for an invite. Opera says you'll be able to use the integrated AI as a chatbot and it will be able to search the web to find answers for you. It'll also be able to handle repetitive tasks such as filling forms and shopping. The biggest draw seems to be its ability to create content, though. On the Opera Neon website, a sample screenshot shows a someone requesting the AI to make a "retro snake game" for them.One plus going for this product is that it claims to be able to analyze webpages without recording your screen all the time. Opera also claims that your browsing history, website data, and login information will be stored locally on your computer, which is good for anyone with privacy concerns.It goes without saying that all of these features will only be as useful as the AI model is accurate. The last thing I'd want is to have a faceless AI model book an overpriced hotel in a shady location, so I'll be taking all these trip planning claims with a pinch of salt until I see Neon in action. While launching new products always gets more attention, the sheer number of Opera's recent releases means that its browser lineup is getting a bit confusing. Opera currently has the following browsers listed on its website: Opera Browser, Opera GX, Opera Air, and Opera Mini. This makes Opera Neon the fifth product in the lineup. Each has its own specialty, but I'm starting to feel a little choice paralysis here.
    #you #can #sign #now #try
    You Can Sign up Now to Try Opera’s Mysterious AI Browser
    The company behind the Opera browser is launching yet another AI tool with Opera Neon, an agentic AI browser. This basically means that it's a browser with an AI agent built in, which can go beyond answering questions and will purportedly be able to browse the internet for you to help you get various things done. This includes helping you plan trips, booking vacations, and even creating web apps with simple natural language prompts. Oddly enough, this isn't Opera's first go at agentic AI, as it follows the announcement for the standard Opera browser's Browser Operator tool. Technically, Browser Operator isn't released yet, but it seems the difference is that Neon's use cases will be a bit broader, as the AI will supposedly even able to generate content in the cloud while you're offline.The catch is that Neon isn't free, and is currently invite-only. Opera says it'll require a paid subscription when it launches, and while the company hasn't revealed the pricing or the launch date yet, you can join a waitlist to get notified about details closer to release, plus get in line for an invite. Opera says you'll be able to use the integrated AI as a chatbot and it will be able to search the web to find answers for you. It'll also be able to handle repetitive tasks such as filling forms and shopping. The biggest draw seems to be its ability to create content, though. On the Opera Neon website, a sample screenshot shows a someone requesting the AI to make a "retro snake game" for them.One plus going for this product is that it claims to be able to analyze webpages without recording your screen all the time. Opera also claims that your browsing history, website data, and login information will be stored locally on your computer, which is good for anyone with privacy concerns.It goes without saying that all of these features will only be as useful as the AI model is accurate. The last thing I'd want is to have a faceless AI model book an overpriced hotel in a shady location, so I'll be taking all these trip planning claims with a pinch of salt until I see Neon in action. While launching new products always gets more attention, the sheer number of Opera's recent releases means that its browser lineup is getting a bit confusing. Opera currently has the following browsers listed on its website: Opera Browser, Opera GX, Opera Air, and Opera Mini. This makes Opera Neon the fifth product in the lineup. Each has its own specialty, but I'm starting to feel a little choice paralysis here. #you #can #sign #now #try
    You Can Sign up Now to Try Opera’s Mysterious AI Browser
    lifehacker.com
    The company behind the Opera browser is launching yet another AI tool with Opera Neon, an agentic AI browser. This basically means that it's a browser with an AI agent built in, which can go beyond answering questions and will purportedly be able to browse the internet for you to help you get various things done. This includes helping you plan trips, booking vacations, and even creating web apps with simple natural language prompts. Oddly enough, this isn't Opera's first go at agentic AI, as it follows the announcement for the standard Opera browser's Browser Operator tool. Technically, Browser Operator isn't released yet, but it seems the difference is that Neon's use cases will be a bit broader, as the AI will supposedly even able to generate content in the cloud while you're offline.The catch is that Neon isn't free, and is currently invite-only. Opera says it'll require a paid subscription when it launches, and while the company hasn't revealed the pricing or the launch date yet, you can join a waitlist to get notified about details closer to release, plus get in line for an invite. Opera says you'll be able to use the integrated AI as a chatbot and it will be able to search the web to find answers for you. It'll also be able to handle repetitive tasks such as filling forms and shopping. The biggest draw seems to be its ability to create content, though. On the Opera Neon website, a sample screenshot shows a someone requesting the AI to make a "retro snake game" for them.One plus going for this product is that it claims to be able to analyze webpages without recording your screen all the time (looking at you, Recall). Opera also claims that your browsing history, website data, and login information will be stored locally on your computer, which is good for anyone with privacy concerns.It goes without saying that all of these features will only be as useful as the AI model is accurate. The last thing I'd want is to have a faceless AI model book an overpriced hotel in a shady location, so I'll be taking all these trip planning claims with a pinch of salt until I see Neon in action. While launching new products always gets more attention, the sheer number of Opera's recent releases means that its browser lineup is getting a bit confusing. Opera currently has the following browsers listed on its website: Opera Browser, Opera GX, Opera Air, and Opera Mini. This makes Opera Neon the fifth product in the lineup. Each has its own specialty, but I'm starting to feel a little choice paralysis here.
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • AI is rotting your brain and making you stupid

    For nearly 10 years I have written about science and technology and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem; connecting to bulletin board communities all over the country.When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can?Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Seeing deeply disturbing philosophical stances like longtermism, effective altruism, and singulartarianism envelop the minds of those rich, powerful men controlling the world has only further entrenched inequality.A recent Black Mirror episode really rammed home the perils we face by having technology so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive - and well, it’s Black Mirror, so you can imagine where things end up.

    Titled 'Common People', the episode is from series 7 of Black MirrorNetflix

    The enshittification of our digital world has been impossible to ignore. You’re not imagining things, Google Search is getting worse.But until the emergence of AII’ve never been truly concerned about a technological innovation, in and of itself.A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University and it’s filled with striking insights into how AI is shaking the foundations of educational institutions.Not unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to completely writing essays from scratch. But one of the reflections quoted in the article immediately jumped out at me.When a student was asked why they relied on generative AI so much when putting work together they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”My first response was, of course, why wouldn’t you? It made complete sense.For a second.And then I thought, hang on, what is being lost by speeding from point A to point B in a car?

    What if the quickest way from point A to point B wasn't the best way to get there?Depositphotos

    Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive?Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store.Congratulations. You saved yourself about 20 minutes. In a world where efficiency trumps everything this is the best choice. Use that extra 20 minutes in your day wisely.But what are the benefits of not driving, taking the extra time, and walking?First, you have environmental benefits. Not using a car unnecessarily; spewing emissions into the air, either directly from combustion or indirectly for those with electric cars.Secondly, you have health benefits from the little bit of exercise you get by walking. Our stationary lives are quite literally killing us so a 20-minute walk a day is likely to be incredibly positive for your health.But there are also more abstract benefits to be gained by walking this short trip from A to B.Walking connects us to our neighborhood. It slows things down. Helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.So what are we losing when we use a car to get from point A to point B? Potentially a great deal.But let’s move out of abstraction and into the real world.An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflow. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others use it broadly, to edit text, answer research questions, summarize large bodies of science text, or search massive troves of data for salient bits of information.In general, the line almost all those media professionals shared was they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance.I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things. Linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist I see this process as the whole point.A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview I started diving into different studies looking to understand how certain psychedelics affect the body, and whether there could be any associations with long Covid treatments.Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked.Could AI have assisted me in the process of developing this story?No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience.And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.

    LLMs are a sophisticated language imitator, delivering responses that resemble what they think a response would look likeDepositphotos

    ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs. At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work.It’s important to know that when you ask a system like ChatGPT a question it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what it computes a response would look like based on a massive dataset.So if I were to ask the system a random question like, “What color are cats?”, the system would scrape the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating.What these generative AI systems are spitting out are word salad amalgams of what it thinks the response to your prompt should look like, based on training from millions of books and webpages that have been previously published.Setting aside for a moment the accuracy of the responses these systems deliver, I am more interestedwith the cognitive stages that this technology allows us to skip past.For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. He believed knowledge emerged through a dialectical process so writing itself was reductive. He even went so far as to suggestthat writing makes us dumber.

    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

    Wrote Plato, quoting Socrates

    Almost every technological advancement in human history can be seen to be accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to being typed on a keyboard. And then a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel.

    Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990sPublic Domain

    I don’t want to get all wishy-washy here, but these are the brass tacks we are ultimately falling on. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts he has transcripts uploaded to an AI assistant who he then chats to about the information while commuting.Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at X2 speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the wikipedia entry.I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.Sure, you can give ChatGPT a set of increasingly detailed prompts; adding complexity to its summary of a scientific journal or a podcast, but at what point do the prompts get so granular that you may as well read the journal entry itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written then surely it is worth being read?If there is a more succinct way to say something then maybe we should say it more succinctly.In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systemsare erasing our ability to think, consider, and write. Where does it all end? For Chiang it's pretty dystopian feedback loop of dialectical slop.

    “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

    Ted Chiang
    #rotting #your #brain #making #you
    AI is rotting your brain and making you stupid
    For nearly 10 years I have written about science and technology and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem; connecting to bulletin board communities all over the country.When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can?Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Seeing deeply disturbing philosophical stances like longtermism, effective altruism, and singulartarianism envelop the minds of those rich, powerful men controlling the world has only further entrenched inequality.A recent Black Mirror episode really rammed home the perils we face by having technology so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive - and well, it’s Black Mirror, so you can imagine where things end up. Titled 'Common People', the episode is from series 7 of Black MirrorNetflix The enshittification of our digital world has been impossible to ignore. You’re not imagining things, Google Search is getting worse.But until the emergence of AII’ve never been truly concerned about a technological innovation, in and of itself.A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University and it’s filled with striking insights into how AI is shaking the foundations of educational institutions.Not unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to completely writing essays from scratch. But one of the reflections quoted in the article immediately jumped out at me.When a student was asked why they relied on generative AI so much when putting work together they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”My first response was, of course, why wouldn’t you? It made complete sense.For a second.And then I thought, hang on, what is being lost by speeding from point A to point B in a car? What if the quickest way from point A to point B wasn't the best way to get there?Depositphotos Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive?Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store.Congratulations. You saved yourself about 20 minutes. In a world where efficiency trumps everything this is the best choice. Use that extra 20 minutes in your day wisely.But what are the benefits of not driving, taking the extra time, and walking?First, you have environmental benefits. Not using a car unnecessarily; spewing emissions into the air, either directly from combustion or indirectly for those with electric cars.Secondly, you have health benefits from the little bit of exercise you get by walking. Our stationary lives are quite literally killing us so a 20-minute walk a day is likely to be incredibly positive for your health.But there are also more abstract benefits to be gained by walking this short trip from A to B.Walking connects us to our neighborhood. It slows things down. Helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.So what are we losing when we use a car to get from point A to point B? Potentially a great deal.But let’s move out of abstraction and into the real world.An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflow. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others use it broadly, to edit text, answer research questions, summarize large bodies of science text, or search massive troves of data for salient bits of information.In general, the line almost all those media professionals shared was they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance.I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things. Linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist I see this process as the whole point.A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview I started diving into different studies looking to understand how certain psychedelics affect the body, and whether there could be any associations with long Covid treatments.Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked.Could AI have assisted me in the process of developing this story?No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience.And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence. LLMs are a sophisticated language imitator, delivering responses that resemble what they think a response would look likeDepositphotos ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs. At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work.It’s important to know that when you ask a system like ChatGPT a question it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what it computes a response would look like based on a massive dataset.So if I were to ask the system a random question like, “What color are cats?”, the system would scrape the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating.What these generative AI systems are spitting out are word salad amalgams of what it thinks the response to your prompt should look like, based on training from millions of books and webpages that have been previously published.Setting aside for a moment the accuracy of the responses these systems deliver, I am more interestedwith the cognitive stages that this technology allows us to skip past.For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. He believed knowledge emerged through a dialectical process so writing itself was reductive. He even went so far as to suggestthat writing makes us dumber. “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” Wrote Plato, quoting Socrates Almost every technological advancement in human history can be seen to be accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to being typed on a keyboard. And then a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel. Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990sPublic Domain I don’t want to get all wishy-washy here, but these are the brass tacks we are ultimately falling on. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts he has transcripts uploaded to an AI assistant who he then chats to about the information while commuting.Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at X2 speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the wikipedia entry.I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.Sure, you can give ChatGPT a set of increasingly detailed prompts; adding complexity to its summary of a scientific journal or a podcast, but at what point do the prompts get so granular that you may as well read the journal entry itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written then surely it is worth being read?If there is a more succinct way to say something then maybe we should say it more succinctly.In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systemsare erasing our ability to think, consider, and write. Where does it all end? For Chiang it's pretty dystopian feedback loop of dialectical slop. “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?” Ted Chiang #rotting #your #brain #making #you
    AI is rotting your brain and making you stupid
    newatlas.com
    For nearly 10 years I have written about science and technology and I’ve been an early adopter of new tech for much longer. As a teenager in the mid-1990s I annoyed the hell out of my family by jamming up the phone line for hours with a dial-up modem; connecting to bulletin board communities all over the country.When I started writing professionally about technology in 2016 I was all for our seemingly inevitable transhumanist future. When the chip is ready I want it immediately stuck in my head, I remember saying proudly in our busy office. Why not improve ourselves where we can?Since then, my general view on technology has dramatically shifted. Watching a growing class of super-billionaires erode the democratizing nature of technology by maintaining corporate controls over what we use and how we use it has fundamentally changed my personal relationship with technology. Seeing deeply disturbing philosophical stances like longtermism, effective altruism, and singulartarianism envelop the minds of those rich, powerful men controlling the world has only further entrenched inequality.A recent Black Mirror episode really rammed home the perils we face by having technology so controlled by capitalist interests. A sick woman is given a brain implant connected to a cloud server to keep her alive. The system is managed through a subscription service where the user pays for monthly access to the cognitive abilities managed by the implant. As time passes, that subscription cost gets more and more expensive - and well, it’s Black Mirror, so you can imagine where things end up. Titled 'Common People', the episode is from series 7 of Black MirrorNetflix The enshittification of our digital world has been impossible to ignore. You’re not imagining things, Google Search is getting worse.But until the emergence of AI (or, as we’ll discuss later, language learning models that pretend to look and sound like an artificial intelligence) I’ve never been truly concerned about a technological innovation, in and of itself.A recent article looked at how generative AI tech such as ChatGPT is being used by university students. The piece was authored by a tech admin at New York University and it’s filled with striking insights into how AI is shaking the foundations of educational institutions.Not unsurprisingly, students are using ChatGPT for everything from summarizing complex texts to completely writing essays from scratch. But one of the reflections quoted in the article immediately jumped out at me.When a student was asked why they relied on generative AI so much when putting work together they responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”My first response was, of course, why wouldn’t you? It made complete sense.For a second.And then I thought, hang on, what is being lost by speeding from point A to point B in a car? What if the quickest way from point A to point B wasn't the best way to get there?Depositphotos Let’s further the analogy. You need to go to the grocery store. It’s a 10-minute walk away but a three-minute drive. Why wouldn’t you drive?Well, the only benefit of driving is saving time. That’s inarguable. You’ll be back home and cooking up your dinner before the person on foot even gets to the grocery store.Congratulations. You saved yourself about 20 minutes. In a world where efficiency trumps everything this is the best choice. Use that extra 20 minutes in your day wisely.But what are the benefits of not driving, taking the extra time, and walking?First, you have environmental benefits. Not using a car unnecessarily; spewing emissions into the air, either directly from combustion or indirectly for those with electric cars.Secondly, you have health benefits from the little bit of exercise you get by walking. Our stationary lives are quite literally killing us so a 20-minute walk a day is likely to be incredibly positive for your health.But there are also more abstract benefits to be gained by walking this short trip from A to B.Walking connects us to our neighborhood. It slows things down. Helps us better understand the community and environment we are living in. A recent study summarized the benefits of walking around your neighborhood, suggesting the practice leads to greater social connectedness and reduced feelings of isolation.So what are we losing when we use a car to get from point A to point B? Potentially a great deal.But let’s move out of abstraction and into the real world.An article in the Columbia Journalism Review asked nearly 20 news media professionals how they were integrating AI into their personal workflow. The responses were wildly varied. Some journalists refused to use AI for anything more than superficial interview transcription, while others use it broadly, to edit text, answer research questions, summarize large bodies of science text, or search massive troves of data for salient bits of information.In general, the line almost all those media professionals shared was they would never explicitly use AI to write their articles. But for some, almost every other stage of the creative process in developing a story was fair game for AI assistance.I found this a little horrifying. Farming out certain creative development processes to AI felt not only ethically wrong but also like key cognitive stages were being lost, skipped over, considered unimportant.I’ve never considered myself to be an extraordinarily creative person. I don’t feel like I come up with new or original ideas when I work. Instead, I see myself more as a compiler. I enjoy finding connections between seemingly disparate things. Linking ideas and using those pieces as building blocks to create my own work. As a writer and journalist I see this process as the whole point.A good example of this is a story I published in late 2023 investigating the relationship between long Covid and psychedelics. The story began earlier in the year when I read an intriguing study linking long Covid with serotonin abnormalities in the gut. Being interested in the science of psychedelics, and knowing that psychedelics very much influence serotonin receptors, I wondered if there could be some kind of link between these two seemingly disparate topics.The idea sat in the back of my mind for several months, until I came across a person who told me they had been actively treating their own long Covid symptoms with a variety of psychedelic remedies. After an expansive and fascinating interview I started diving into different studies looking to understand how certain psychedelics affect the body, and whether there could be any associations with long Covid treatments.Eventually I stumbled across a few compelling associations. It took weeks of reading different scientific studies, speaking to various researchers, and thinking about how several discordant threads could be somehow linked.Could AI have assisted me in the process of developing this story?No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience.And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence. LLMs are a sophisticated language imitator, delivering responses that resemble what they think a response would look likeDepositphotos ChatGPT, and all the assorted clones that have emerged over the last couple of years, are a form of technology called LLMs (large language models). At the risk of enraging those who actually work in this mind-bendingly complex field, I’m going to dangerously over-simplify how these things work.It’s important to know that when you ask a system like ChatGPT a question it doesn’t understand what you are asking it. The response these systems generate to any prompt is simply a simulation of what it computes a response would look like based on a massive dataset.So if I were to ask the system a random question like, “What color are cats?”, the system would scrape the world’s trove of information on cats and colors to create a response that mirrors the way most pre-existing text talks about cats and colors. The system builds its response word by word, creating something that reads coherently to us, by establishing a probability for what word should follow each prior word. It’s not thinking, it’s imitating.What these generative AI systems are spitting out are word salad amalgams of what it thinks the response to your prompt should look like, based on training from millions of books and webpages that have been previously published.Setting aside for a moment the accuracy of the responses these systems deliver, I am more interested (or concerned) with the cognitive stages that this technology allows us to skip past.For thousands of years we have used technology to improve our ability to manage highly complex tasks. The idea is called cognitive offloading, and it’s as simple as writing something down on a notepad or saving a contact number on your smartphone. There are pros and cons to cognitive offloading, and scientists have been digging into the phenomenon for years.As long as we have been doing it, there have been people criticizing the practice. The legendary Greek philosopher Socrates was notorious for his skepticism around the written word. He believed knowledge emerged through a dialectical process so writing itself was reductive. He even went so far as to suggest (according to his student Plato, who did write things down) that writing makes us dumber. “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” Wrote Plato, quoting Socrates Almost every technological advancement in human history can be seen to be accompanied by someone suggesting it will be damaging. Calculators have destroyed our ability to properly do math. GPS has corrupted our spatial memory. Typewriters killed handwriting. Computer word processors killed typewriters. Video killed the radio star.And what have we lost? Well, zooming in on writing, for example, a 2020 study claimed brain activity is greater when a note is handwritten as opposed to being typed on a keyboard. And then a 2021 study suggested memory retention is better when using a pen and paper versus a stylus and tablet. So there are certainly trade-offs whenever we choose to use a technological tool to offload a cognitive task.There’s an oft-told story about gonzo journalist Hunter S. Thompson. It may be apocryphal but it certainly is meaningful. He once said he sat down and typed out the entirety of The Great Gatsby, word for word. According to Thompson, he wanted to know what it felt like to write a great novel. Thompson was infamous for writing everything on typewriters, even when computers emerged in the 1990sPublic Domain I don’t want to get all wishy-washy here, but these are the brass tacks we are ultimately falling on. What does it feel like to think? What does it feel like to be creative? What does it feel like to understand something?A recent interview with Satya Nadella, CEO of Microsoft, reveals how deeply AI has infiltrated his life and work. Not only does Nadella utilize nearly a dozen different custom-designed AI agents to manage every part of his workflow – from summarizing emails to managing his schedule – but he also uses AI to get through podcasts quickly on his way to work. Instead of actually listening to the podcasts he has transcripts uploaded to an AI assistant who he then chats to about the information while commuting.Why listen to the podcast when you can get the gist through a summary? Why read a book when you can listen to the audio version at X2 speed? Or better yet, watch the movie? Or just read a Wikipedia entry. Or get AI to summarize the wikipedia entry.I’m not here to judge anyone on the way they choose to use technology. Do what you want with ChatGPT. But for a moment consider what you may be skipping over by racing from point A to point B.Sure, you can give ChatGPT a set of increasingly detailed prompts; adding complexity to its summary of a scientific journal or a podcast, but at what point do the prompts get so granular that you may as well read the journal entry itself? If you get generative AI to skim and summarize something, what is it missing? If something was worth being written then surely it is worth being read?If there is a more succinct way to say something then maybe we should say it more succinctly.In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang it's pretty dystopian feedback loop of dialectical slop. “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?” Ted Chiang
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • OpenAI upgrades the AI model powering its Operator agent

    OpenAI is updating the AI model powering Operator, its AI agent that can autonomously browse the web and use certain software within a cloud-hosted virtual machine to fulfill users’ requests.
    Soon, Operator will use a model based on o3, one of the latest in OpenAI’s o series of “reasoning” models. Previously, Operator relied on a custom version of GPT-4o.
    By many benchmarks, o3 is a far more advanced model, particularly on tasks involving math and reasoning.
    “We are replacing the existing GPT‑4o-based model for Operator with a version based on OpenAI o3,” OpenAI wrote in a blog post. “The API versionwill remain based on 4o.”
    Operator is one among many agentic tools released by AI companies in recent months. Companies are racing to make highly sophisticated agents that can reliably carry out chores more or less without supervision.
    Google offers a “computer use” agent through its Gemini API that can similarly browse the web and take actions on behalf of users, as well as a more consumer-focused offering called Mariner. Anthropic’s models are also able to perform computer tasks, including opening files and navigating webpages.
    According to OpenAI, the new Operator model, called o3 Operator, was “fine-tuned with additional safety data for computer use,” including data sets designed to “teach the modeldecision boundaries on confirmations and refusals.”

    Techcrunch event

    Join us at TechCrunch Sessions: AI
    Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking.

    Exhibit at TechCrunch Sessions: AI
    Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last.

    Berkeley, CA
    |
    June 5

    REGISTER NOW

    OpenAI has released a technical report showing o3 Operator’s performance on specific safety evaluations. Compared to the GPT-4o Operator model, o3 Operator is less likely to refuse to perform “illicit” activities and search for sensitive personal data, and less susceptible to a form of AI attack known as prompt injection, per the technical report.
    “o3 Operator uses the same multi-layered approach to safety that we used for the 4o version of Operator,” OpenAI wrote in its blog post. “Although o3 Operator inherits o3’s coding capabilities, it does not have native access to a coding environment or terminal.”
    #openai #upgrades #model #powering #its
    OpenAI upgrades the AI model powering its Operator agent
    OpenAI is updating the AI model powering Operator, its AI agent that can autonomously browse the web and use certain software within a cloud-hosted virtual machine to fulfill users’ requests. Soon, Operator will use a model based on o3, one of the latest in OpenAI’s o series of “reasoning” models. Previously, Operator relied on a custom version of GPT-4o. By many benchmarks, o3 is a far more advanced model, particularly on tasks involving math and reasoning. “We are replacing the existing GPT‑4o-based model for Operator with a version based on OpenAI o3,” OpenAI wrote in a blog post. “The API versionwill remain based on 4o.” Operator is one among many agentic tools released by AI companies in recent months. Companies are racing to make highly sophisticated agents that can reliably carry out chores more or less without supervision. Google offers a “computer use” agent through its Gemini API that can similarly browse the web and take actions on behalf of users, as well as a more consumer-focused offering called Mariner. Anthropic’s models are also able to perform computer tasks, including opening files and navigating webpages. According to OpenAI, the new Operator model, called o3 Operator, was “fine-tuned with additional safety data for computer use,” including data sets designed to “teach the modeldecision boundaries on confirmations and refusals.” Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW OpenAI has released a technical report showing o3 Operator’s performance on specific safety evaluations. Compared to the GPT-4o Operator model, o3 Operator is less likely to refuse to perform “illicit” activities and search for sensitive personal data, and less susceptible to a form of AI attack known as prompt injection, per the technical report. “o3 Operator uses the same multi-layered approach to safety that we used for the 4o version of Operator,” OpenAI wrote in its blog post. “Although o3 Operator inherits o3’s coding capabilities, it does not have native access to a coding environment or terminal.” #openai #upgrades #model #powering #its
    OpenAI upgrades the AI model powering its Operator agent
    techcrunch.com
    OpenAI is updating the AI model powering Operator, its AI agent that can autonomously browse the web and use certain software within a cloud-hosted virtual machine to fulfill users’ requests. Soon, Operator will use a model based on o3, one of the latest in OpenAI’s o series of “reasoning” models. Previously, Operator relied on a custom version of GPT-4o. By many benchmarks, o3 is a far more advanced model, particularly on tasks involving math and reasoning. “We are replacing the existing GPT‑4o-based model for Operator with a version based on OpenAI o3,” OpenAI wrote in a blog post. “The API version [of Operator] will remain based on 4o.” Operator is one among many agentic tools released by AI companies in recent months. Companies are racing to make highly sophisticated agents that can reliably carry out chores more or less without supervision. Google offers a “computer use” agent through its Gemini API that can similarly browse the web and take actions on behalf of users, as well as a more consumer-focused offering called Mariner. Anthropic’s models are also able to perform computer tasks, including opening files and navigating webpages. According to OpenAI, the new Operator model, called o3 Operator, was “fine-tuned with additional safety data for computer use,” including data sets designed to “teach the model [OpenAI’s] decision boundaries on confirmations and refusals.” Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just $292 for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW OpenAI has released a technical report showing o3 Operator’s performance on specific safety evaluations. Compared to the GPT-4o Operator model, o3 Operator is less likely to refuse to perform “illicit” activities and search for sensitive personal data, and less susceptible to a form of AI attack known as prompt injection, per the technical report. “o3 Operator uses the same multi-layered approach to safety that we used for the 4o version of Operator,” OpenAI wrote in its blog post. “Although o3 Operator inherits o3’s coding capabilities, it does not have native access to a coding environment or terminal.”
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • The 15 biggest announcements at Google I/O 2025

    Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.Google’s AI Mode for Search is coming to everyoneGoogle has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”Project Starline is now Google BeamImage: GoogleProject Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.Imagen and Veo are getting some big upgradesGoogle has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.Google launches an AI filmmaking appGIF: GoogleIn addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.Gemini 2.5 Pro adds an “enhanced” reasoning modeThe experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will only be available to trusted testers first.Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.Xreal shows off its Project Aura prototypeXreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field-of-view, along with what appears to be built-in cameras and microphones.Google is also partnering with Samsung, Gentle Monster, and Warby Parker to create other Android XR smart glasses, as well.Google’s experimental AI assistant is getting more proactive Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.Gemini is coming to ChromeGoogle is building its AI assistant into Chrome. Starting Wednesday, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on your behalf. The feature can work with up to two tabs for now, but Google plans on adding support for more later this year.Google’s new AI Ultra plan costs /monthGoogle is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.Search Live will let you discuss what’s on your camera in real-timeSpeaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that users on iOS will be able to access it for free as well.Image: GoogleGoogle has revealed Stitch, a new AI-powered tool that can generate user interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.Google Meet adds AI speech translationImage: GoogleGoogle Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.Gmail’s smart replies will soon pull info from your inboxImage: GoogleGmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to pre-write responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.Image: GoogleGoogle is testing a new “try-it-on” feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. The feature uses an AI model that “understands the human body and nuances of clothing.”Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.Google Chrome will soon help you update compromised passwordsIf Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.See More:
    #biggest #announcements #google
    The 15 biggest announcements at Google I/O 2025
    Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.Google’s AI Mode for Search is coming to everyoneGoogle has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”Project Starline is now Google BeamImage: GoogleProject Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.Imagen and Veo are getting some big upgradesGoogle has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.Google launches an AI filmmaking appGIF: GoogleIn addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.Gemini 2.5 Pro adds an “enhanced” reasoning modeThe experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will only be available to trusted testers first.Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.Xreal shows off its Project Aura prototypeXreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field-of-view, along with what appears to be built-in cameras and microphones.Google is also partnering with Samsung, Gentle Monster, and Warby Parker to create other Android XR smart glasses, as well.Google’s experimental AI assistant is getting more proactive Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.Gemini is coming to ChromeGoogle is building its AI assistant into Chrome. Starting Wednesday, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on your behalf. The feature can work with up to two tabs for now, but Google plans on adding support for more later this year.Google’s new AI Ultra plan costs /monthGoogle is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.Search Live will let you discuss what’s on your camera in real-timeSpeaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that users on iOS will be able to access it for free as well.Image: GoogleGoogle has revealed Stitch, a new AI-powered tool that can generate user interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.Google Meet adds AI speech translationImage: GoogleGoogle Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.Gmail’s smart replies will soon pull info from your inboxImage: GoogleGmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to pre-write responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.Image: GoogleGoogle is testing a new “try-it-on” feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. The feature uses an AI model that “understands the human body and nuances of clothing.”Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.Google Chrome will soon help you update compromised passwordsIf Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.See More: #biggest #announcements #google
    The 15 biggest announcements at Google I/O 2025
    www.theverge.com
    Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.Google’s AI Mode for Search is coming to everyoneGoogle has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”Project Starline is now Google BeamImage: GoogleProject Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.Imagen and Veo are getting some big upgradesGoogle has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.Google launches an AI filmmaking appGIF: GoogleIn addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.Gemini 2.5 Pro adds an “enhanced” reasoning modeThe experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will only be available to trusted testers first.Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.Xreal shows off its Project Aura prototypeXreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field-of-view, along with what appears to be built-in cameras and microphones.Google is also partnering with Samsung, Gentle Monster, and Warby Parker to create other Android XR smart glasses, as well.Google’s experimental AI assistant is getting more proactive Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.Gemini is coming to ChromeGoogle is building its AI assistant into Chrome. Starting Wednesday, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on your behalf. The feature can work with up to two tabs for now, but Google plans on adding support for more later this year.Google’s new AI Ultra plan costs $250/monthGoogle is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.Search Live will let you discuss what’s on your camera in real-timeSpeaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that users on iOS will be able to access it for free as well.Image: GoogleGoogle has revealed Stitch, a new AI-powered tool that can generate user interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.Google Meet adds AI speech translationImage: GoogleGoogle Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.Gmail’s smart replies will soon pull info from your inboxImage: GoogleGmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to pre-write responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.Image: GoogleGoogle is testing a new “try-it-on” feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. The feature uses an AI model that “understands the human body and nuances of clothing.”Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.Google Chrome will soon help you update compromised passwordsIf Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.See More:
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • Feeling nostalgic? Mac Themes Garden has you covered

    If the word Kaleidoscope means anything to you, you’re in for a treat. As spotted by Rob Beschizza, Mac Themes Garden features a collection of nearly 4,000 Classic Mac OS GUI customizations.
    The site is a collection of pixelated bliss, and whether you’re already feeling fuzzy inside or have no idea what I’m talking about, you won’t want to miss out on this.

    All your favorite classic Mac OS themes, and then some
    Many years before Apple launched Appearance Manager, which let users natively customize the GUI on Mac OS 8, there was Kaleidoscope: “the ultimate in user interface customization, letting you completely overhaul your Mac’s user interface using plug-in Color Scheme files,” as the project’s description used to state.
    Kaleidoscope offered an easy way to apply themes to the entire system, and even after Apple released Appearance Manager, it remained the tool of choice to most Mac customization enthusiasts. So much so that Apple announced a tool that would easily import Kaleidoscope themes into Appearance Manager schemes. The tool was never released.
    After the launch of Mac OS X, Kaleidoscope stopped working. Like Winamp skins and Geocities webpages, these themes were lost in time—until now.
    Building on the work of the defunct Twitter account @kaleidoscopemac and the Kaleidoscope Scheme Archive on The Wayback Machine, French software engineer Damien Erambert launched Mac Themes Garden.

    It is a comprehensive index of Kaleidoscope themes, searchable by scheme and authors. From the classic BeOS theme to an adaptation of Apple’s failed Copland OS project, you can check out screen grabs and even download the actual themes, when available.
    Are you among the distinguished 9to5Mac readers who happened to catch the Kaleidoscope era? Did you have a favorite scheme? Let us know in the comments!

    Add 9to5Mac to your Google News feed. 

    FTC: We use income earning auto affiliate links. More.You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel
    #feeling #nostalgic #mac #themes #garden
    Feeling nostalgic? Mac Themes Garden has you covered
    If the word Kaleidoscope means anything to you, you’re in for a treat. As spotted by Rob Beschizza, Mac Themes Garden features a collection of nearly 4,000 Classic Mac OS GUI customizations. The site is a collection of pixelated bliss, and whether you’re already feeling fuzzy inside or have no idea what I’m talking about, you won’t want to miss out on this. All your favorite classic Mac OS themes, and then some Many years before Apple launched Appearance Manager, which let users natively customize the GUI on Mac OS 8, there was Kaleidoscope: “the ultimate in user interface customization, letting you completely overhaul your Mac’s user interface using plug-in Color Scheme files,” as the project’s description used to state. Kaleidoscope offered an easy way to apply themes to the entire system, and even after Apple released Appearance Manager, it remained the tool of choice to most Mac customization enthusiasts. So much so that Apple announced a tool that would easily import Kaleidoscope themes into Appearance Manager schemes. The tool was never released. After the launch of Mac OS X, Kaleidoscope stopped working. Like Winamp skins and Geocities webpages, these themes were lost in time—until now. Building on the work of the defunct Twitter account @kaleidoscopemac and the Kaleidoscope Scheme Archive on The Wayback Machine, French software engineer Damien Erambert launched Mac Themes Garden. It is a comprehensive index of Kaleidoscope themes, searchable by scheme and authors. From the classic BeOS theme to an adaptation of Apple’s failed Copland OS project, you can check out screen grabs and even download the actual themes, when available. Are you among the distinguished 9to5Mac readers who happened to catch the Kaleidoscope era? Did you have a favorite scheme? Let us know in the comments! Add 9to5Mac to your Google News feed.  FTC: We use income earning auto affiliate links. More.You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel #feeling #nostalgic #mac #themes #garden
    Feeling nostalgic? Mac Themes Garden has you covered
    9to5mac.com
    If the word Kaleidoscope means anything to you, you’re in for a treat. As spotted by Rob Beschizza (via BoingBoing), Mac Themes Garden features a collection of nearly 4,000 Classic Mac OS GUI customizations. The site is a collection of pixelated bliss, and whether you’re already feeling fuzzy inside or have no idea what I’m talking about, you won’t want to miss out on this. All your favorite classic Mac OS themes, and then some Many years before Apple launched Appearance Manager, which let users natively customize the GUI on Mac OS 8, there was Kaleidoscope: “the ultimate in user interface customization, letting you completely overhaul your Mac’s user interface using plug-in Color Scheme files,” as the project’s description used to state. Kaleidoscope offered an easy way to apply themes to the entire system, and even after Apple released Appearance Manager (for a hot minute, before Steve Jobs killed it upon his return to Apple), it remained the tool of choice to most Mac customization enthusiasts. So much so that Apple announced a tool that would easily import Kaleidoscope themes into Appearance Manager schemes. The tool was never released. After the launch of Mac OS X, Kaleidoscope stopped working. Like Winamp skins and Geocities webpages, these themes were lost in time—until now. Building on the work of the defunct Twitter account @kaleidoscopemac and the Kaleidoscope Scheme Archive on The Wayback Machine, French software engineer Damien Erambert launched Mac Themes Garden. It is a comprehensive index of Kaleidoscope themes, searchable by scheme and authors. From the classic BeOS theme to an adaptation of Apple’s failed Copland OS project, you can check out screen grabs and even download the actual themes, when available. Are you among the distinguished 9to5Mac readers who happened to catch the Kaleidoscope era? Did you have a favorite scheme? Let us know in the comments! Add 9to5Mac to your Google News feed.  FTC: We use income earning auto affiliate links. More.You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • Trump USDA Restores Climate Information for Farmers after Lawsuit

    May 14, 20252 min readUSDA Reverses Course and Restores Climate Information for FarmersFacing a lawsuit, the Department of Agriculture says it will restore climate-related websites that the agency erased after President Donald Trump took officeBy Lesley Clark & E&E News Harvesting soybeans in Jarrettsville, Maryland. Edwin Remsberg/Getty ImagesCLIMATEWIRE | The Trump administration has reversed course and will restore U.S. Department of Agriculture websites related to climate change in response to a lawsuit brought by environmental organizations and farmers.Groups represented by Earthjustice and the Knight First Amendment Institute at Columbia University had sued in February, alleging that the removal of climate-related policies, datasets and other resources violated federal laws requiring advanced notice, reasoned decision-making and public access to certain information.In a letter late Monday, the administration told Judge Margaret Garnett of the U.S. District Court for the Southern District of New York that the USDA will restore the climate-related web content that was removed after President Donald Trump’s inauguration, including all USDA webpages and interactive tools listed in the lawsuit.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.It noted that the process of restoring the removed content was already underway and that the USDA expects to mostly complete the process in two weeks. The USDA also pledged that it “commits to complying with” federal law governing future “posting decisions.”The purged material had included information on climate-smart farming, forest conservation and adaptation. The USDA also took down climate sections from the websites for the Forest Service and the Natural Resources Conservation Service, including information to help farmers access money for conservation practices.The move comes days before Garnett was to hear the challengers’ request for a preliminary injunction that sought to require the USDA to restore the webpages and stop taking down additional climate information.“We’re glad that USDA recognized that its blatantly unlawful purge of climate-change-related information is harming farmers and communities across the country,” said Jeffrey Stein, an associate attorney with Earthjustice. Stein added that farmers depend on the websites to protect their farms from drought, wildfire and extreme weather.The Northeast Organic Farming Association of New York, which was one of the parties to the case, hailed the USDA for restoring the webpages.“Access to timely, accurate, science-based climate information is essential for organic, regenerative agriculture communities facing increasingly unpredictable weather patterns,” said Marcie Craig, the group’s executive director.The Environmental Working Group called the restoration of the webpages a “significant victory for the climate, the environment and farmers.”“The Trump administration’s reversal in response to this legal challenge highlights the critical importance of public interest advocates standing up in the name of transparency and government accountability,” said Anne Schechinger, the group’s Midwest director.Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
    #trump #usda #restores #climate #information
    Trump USDA Restores Climate Information for Farmers after Lawsuit
    May 14, 20252 min readUSDA Reverses Course and Restores Climate Information for FarmersFacing a lawsuit, the Department of Agriculture says it will restore climate-related websites that the agency erased after President Donald Trump took officeBy Lesley Clark & E&E News Harvesting soybeans in Jarrettsville, Maryland. Edwin Remsberg/Getty ImagesCLIMATEWIRE | The Trump administration has reversed course and will restore U.S. Department of Agriculture websites related to climate change in response to a lawsuit brought by environmental organizations and farmers.Groups represented by Earthjustice and the Knight First Amendment Institute at Columbia University had sued in February, alleging that the removal of climate-related policies, datasets and other resources violated federal laws requiring advanced notice, reasoned decision-making and public access to certain information.In a letter late Monday, the administration told Judge Margaret Garnett of the U.S. District Court for the Southern District of New York that the USDA will restore the climate-related web content that was removed after President Donald Trump’s inauguration, including all USDA webpages and interactive tools listed in the lawsuit.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.It noted that the process of restoring the removed content was already underway and that the USDA expects to mostly complete the process in two weeks. The USDA also pledged that it “commits to complying with” federal law governing future “posting decisions.”The purged material had included information on climate-smart farming, forest conservation and adaptation. The USDA also took down climate sections from the websites for the Forest Service and the Natural Resources Conservation Service, including information to help farmers access money for conservation practices.The move comes days before Garnett was to hear the challengers’ request for a preliminary injunction that sought to require the USDA to restore the webpages and stop taking down additional climate information.“We’re glad that USDA recognized that its blatantly unlawful purge of climate-change-related information is harming farmers and communities across the country,” said Jeffrey Stein, an associate attorney with Earthjustice. Stein added that farmers depend on the websites to protect their farms from drought, wildfire and extreme weather.The Northeast Organic Farming Association of New York, which was one of the parties to the case, hailed the USDA for restoring the webpages.“Access to timely, accurate, science-based climate information is essential for organic, regenerative agriculture communities facing increasingly unpredictable weather patterns,” said Marcie Craig, the group’s executive director.The Environmental Working Group called the restoration of the webpages a “significant victory for the climate, the environment and farmers.”“The Trump administration’s reversal in response to this legal challenge highlights the critical importance of public interest advocates standing up in the name of transparency and government accountability,” said Anne Schechinger, the group’s Midwest director.Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals. #trump #usda #restores #climate #information
    Trump USDA Restores Climate Information for Farmers after Lawsuit
    www.scientificamerican.com
    May 14, 20252 min readUSDA Reverses Course and Restores Climate Information for FarmersFacing a lawsuit, the Department of Agriculture says it will restore climate-related websites that the agency erased after President Donald Trump took officeBy Lesley Clark & E&E News Harvesting soybeans in Jarrettsville, Maryland. Edwin Remsberg/Getty ImagesCLIMATEWIRE | The Trump administration has reversed course and will restore U.S. Department of Agriculture websites related to climate change in response to a lawsuit brought by environmental organizations and farmers.Groups represented by Earthjustice and the Knight First Amendment Institute at Columbia University had sued in February, alleging that the removal of climate-related policies, datasets and other resources violated federal laws requiring advanced notice, reasoned decision-making and public access to certain information.In a letter late Monday, the administration told Judge Margaret Garnett of the U.S. District Court for the Southern District of New York that the USDA will restore the climate-related web content that was removed after President Donald Trump’s inauguration, including all USDA webpages and interactive tools listed in the lawsuit.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.It noted that the process of restoring the removed content was already underway and that the USDA expects to mostly complete the process in two weeks. The USDA also pledged that it “commits to complying with” federal law governing future “posting decisions.”The purged material had included information on climate-smart farming, forest conservation and adaptation. The USDA also took down climate sections from the websites for the Forest Service and the Natural Resources Conservation Service, including information to help farmers access money for conservation practices.The move comes days before Garnett was to hear the challengers’ request for a preliminary injunction that sought to require the USDA to restore the webpages and stop taking down additional climate information.“We’re glad that USDA recognized that its blatantly unlawful purge of climate-change-related information is harming farmers and communities across the country,” said Jeffrey Stein, an associate attorney with Earthjustice. Stein added that farmers depend on the websites to protect their farms from drought, wildfire and extreme weather.The Northeast Organic Farming Association of New York, which was one of the parties to the case, hailed the USDA for restoring the webpages.“Access to timely, accurate, science-based climate information is essential for organic, regenerative agriculture communities facing increasingly unpredictable weather patterns,” said Marcie Craig, the group’s executive director.The Environmental Working Group called the restoration of the webpages a “significant victory for the climate, the environment and farmers.”“The Trump administration’s reversal in response to this legal challenge highlights the critical importance of public interest advocates standing up in the name of transparency and government accountability,” said Anne Schechinger, the group’s Midwest director.Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • The USDA will republish climate change information online following farmer lawsuit
    In the early days of President Donald Trump's second administration, federal agencies including the US Department of Agriculture were ordered to remove information about climate change from their websites.
    Now, the USDA has committed to reinstating the deleted content following a lawsuit on behalf of the Northeast Organic Farming Association of New York, the National Resources Defense Council and the Environmental Working Group.
    According to a letter sent yesterday to a district court judge, the agency has already begun the restoration process and expects to "substantially complete" the effort in about two weeks.
    The material removed from USDA sites in February included content about climate-smart agriculture, forest conservation, climate change adaptation and clean energy project investments in rural areas.
    The trio of plaintiffs sued on the basis that removing that information violated the Freedom of Information Act that allows public access to important federal records, as well as failing to provide advanced notice required by the Paperwork Reduction Act and without the reasoned decision-making of the Administrative Procedure act.
    The USDA said that it "will restore the climate-change-related web content that was removed post-Inauguration, including all USDA webpages and interactive tools enumerated in plaintiffs' complaint." 
    "This is a major victory and an important first step.
    Members of the public, including our clients, rely on information from USDA to understand how climate change is affecting our nation’s forests, food supply, and energy systems," said Stephanie Krent, staff attorney with Knight First Amendment Institute, which helped file the lawsuit.
    "USDA was wrong to remove these webpages in the first place, and it must comply with federal law going forward."This article originally appeared on Engadget at https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss" style="color: #0066cc;">https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss
    Source: https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss" style="color: #0066cc;">https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss
    #the #usda #will #republish #climate #change #information #online #following #farmer #lawsuit
    The USDA will republish climate change information online following farmer lawsuit
    In the early days of President Donald Trump's second administration, federal agencies including the US Department of Agriculture were ordered to remove information about climate change from their websites. Now, the USDA has committed to reinstating the deleted content following a lawsuit on behalf of the Northeast Organic Farming Association of New York, the National Resources Defense Council and the Environmental Working Group. According to a letter sent yesterday to a district court judge, the agency has already begun the restoration process and expects to "substantially complete" the effort in about two weeks. The material removed from USDA sites in February included content about climate-smart agriculture, forest conservation, climate change adaptation and clean energy project investments in rural areas. The trio of plaintiffs sued on the basis that removing that information violated the Freedom of Information Act that allows public access to important federal records, as well as failing to provide advanced notice required by the Paperwork Reduction Act and without the reasoned decision-making of the Administrative Procedure act. The USDA said that it "will restore the climate-change-related web content that was removed post-Inauguration, including all USDA webpages and interactive tools enumerated in plaintiffs' complaint."  "This is a major victory and an important first step. Members of the public, including our clients, rely on information from USDA to understand how climate change is affecting our nation’s forests, food supply, and energy systems," said Stephanie Krent, staff attorney with Knight First Amendment Institute, which helped file the lawsuit. "USDA was wrong to remove these webpages in the first place, and it must comply with federal law going forward."This article originally appeared on Engadget at https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss Source: https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss #the #usda #will #republish #climate #change #information #online #following #farmer #lawsuit
    The USDA will republish climate change information online following farmer lawsuit
    www.engadget.com
    In the early days of President Donald Trump's second administration, federal agencies including the US Department of Agriculture were ordered to remove information about climate change from their websites. Now, the USDA has committed to reinstating the deleted content following a lawsuit on behalf of the Northeast Organic Farming Association of New York, the National Resources Defense Council and the Environmental Working Group. According to a letter sent yesterday to a district court judge, the agency has already begun the restoration process and expects to "substantially complete" the effort in about two weeks. The material removed from USDA sites in February included content about climate-smart agriculture, forest conservation, climate change adaptation and clean energy project investments in rural areas. The trio of plaintiffs sued on the basis that removing that information violated the Freedom of Information Act that allows public access to important federal records, as well as failing to provide advanced notice required by the Paperwork Reduction Act and without the reasoned decision-making of the Administrative Procedure act. The USDA said that it "will restore the climate-change-related web content that was removed post-Inauguration, including all USDA webpages and interactive tools enumerated in plaintiffs' complaint."  "This is a major victory and an important first step. Members of the public, including our clients, rely on information from USDA to understand how climate change is affecting our nation’s forests, food supply, and energy systems," said Stephanie Krent, staff attorney with Knight First Amendment Institute, which helped file the lawsuit. "USDA was wrong to remove these webpages in the first place, and it must comply with federal law going forward."This article originally appeared on Engadget at https://www.engadget.com/science/the-usda-will-republish-climate-change-information-online-following-farmer-lawsuit-211907357.html?src=rss
    0 Comentários ·0 Compartilhamentos ·0 Anterior
  • Farmers win legal fight to bring climate resources back to federal websites
    After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint.
    It says the work is already “underway” and should be mostly done in about two weeks.
    If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January.“We’re ecstatic.”“I’ll be real frank, it feels good to win one, right? Farmers have been so put upon by the actions of this administration that, you know, it feels good to be able to say, we have something for you.
    This is back.
    You can rely on these resources,” says Marcie Craig, executive director of the Northeast Organic Farming Association of New York.
    “We’re ecstatic.”The group filed suit in February alongside two environmental organizations, alleging that the USDA threatened organic farmers’ livelihoods by removing information they relied on to help them make decisions about planting crops and managing their land — key resources as climate change leads to more unpredictable and extreme weather.
    One of the resources removed by the USDA is an online tool called the “Climate Risk Viewer” that showed the impacts of climate change on rivers and water sheds, and how that might affect water supplies in the future.“We’re really glad that USDA recognized that this blatantly unlawful purge is harming farmers and researchers and advocates all across the country, and we’re ready to ensure that USDA follows through on this promise,” Jeffrey Stein, an associate attorney with the nonprofit legal organization Earthjustice that represented the plaintiffs, tells The Verge.
    The initial complaint accused the USDA of violating the Freedom of Information Act (FOIA) that gives the public the right to access key records from any federal agency, the Paperwork Reduction Act that stipulates adequate notice before changing access to information, and the Administrative Procedure Act that governs the way federal agencies develop regulations.
    President Trump’s backing of the fossil fuel industry has also stripped farmers of federal funding through climate-related programs.
    The Northeast Organic Farming Association of New York has lost nearly half of its budget this year due to funding freezes, which it has been trying to make up for through donations, according to Craig.
    “This has been one of so many cuts.
    You know, pain by a thousand cuts,” Craig says.
    “This [legal victory] was good … then, of course, after the initial feeling, you sit back, you take a breath, and you say, ‘and we still have a whole lot of work to do.’”See More:
    Source: https://www.theverge.com/news/666150/farmers-organic-lawsuit-trump-usda-website-climate-change-data" style="color: #0066cc;">https://www.theverge.com/news/666150/farmers-organic-lawsuit-trump-usda-website-climate-change-data
    #farmers #win #legal #fight #bring #climate #resources #back #federal #websites
    Farmers win legal fight to bring climate resources back to federal websites
    After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks. If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January.“We’re ecstatic.”“I’ll be real frank, it feels good to win one, right? Farmers have been so put upon by the actions of this administration that, you know, it feels good to be able to say, we have something for you. This is back. You can rely on these resources,” says Marcie Craig, executive director of the Northeast Organic Farming Association of New York. “We’re ecstatic.”The group filed suit in February alongside two environmental organizations, alleging that the USDA threatened organic farmers’ livelihoods by removing information they relied on to help them make decisions about planting crops and managing their land — key resources as climate change leads to more unpredictable and extreme weather. One of the resources removed by the USDA is an online tool called the “Climate Risk Viewer” that showed the impacts of climate change on rivers and water sheds, and how that might affect water supplies in the future.“We’re really glad that USDA recognized that this blatantly unlawful purge is harming farmers and researchers and advocates all across the country, and we’re ready to ensure that USDA follows through on this promise,” Jeffrey Stein, an associate attorney with the nonprofit legal organization Earthjustice that represented the plaintiffs, tells The Verge. The initial complaint accused the USDA of violating the Freedom of Information Act (FOIA) that gives the public the right to access key records from any federal agency, the Paperwork Reduction Act that stipulates adequate notice before changing access to information, and the Administrative Procedure Act that governs the way federal agencies develop regulations. President Trump’s backing of the fossil fuel industry has also stripped farmers of federal funding through climate-related programs. The Northeast Organic Farming Association of New York has lost nearly half of its budget this year due to funding freezes, which it has been trying to make up for through donations, according to Craig. “This has been one of so many cuts. You know, pain by a thousand cuts,” Craig says. “This [legal victory] was good … then, of course, after the initial feeling, you sit back, you take a breath, and you say, ‘and we still have a whole lot of work to do.’”See More: Source: https://www.theverge.com/news/666150/farmers-organic-lawsuit-trump-usda-website-climate-change-data #farmers #win #legal #fight #bring #climate #resources #back #federal #websites
    Farmers win legal fight to bring climate resources back to federal websites
    www.theverge.com
    After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks. If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January.“We’re ecstatic.”“I’ll be real frank, it feels good to win one, right? Farmers have been so put upon by the actions of this administration that, you know, it feels good to be able to say, we have something for you. This is back. You can rely on these resources,” says Marcie Craig, executive director of the Northeast Organic Farming Association of New York. “We’re ecstatic.”The group filed suit in February alongside two environmental organizations, alleging that the USDA threatened organic farmers’ livelihoods by removing information they relied on to help them make decisions about planting crops and managing their land — key resources as climate change leads to more unpredictable and extreme weather. One of the resources removed by the USDA is an online tool called the “Climate Risk Viewer” that showed the impacts of climate change on rivers and water sheds, and how that might affect water supplies in the future.“We’re really glad that USDA recognized that this blatantly unlawful purge is harming farmers and researchers and advocates all across the country, and we’re ready to ensure that USDA follows through on this promise,” Jeffrey Stein, an associate attorney with the nonprofit legal organization Earthjustice that represented the plaintiffs, tells The Verge. The initial complaint accused the USDA of violating the Freedom of Information Act (FOIA) that gives the public the right to access key records from any federal agency, the Paperwork Reduction Act that stipulates adequate notice before changing access to information, and the Administrative Procedure Act that governs the way federal agencies develop regulations. President Trump’s backing of the fossil fuel industry has also stripped farmers of federal funding through climate-related programs. The Northeast Organic Farming Association of New York has lost nearly half of its budget this year due to funding freezes, which it has been trying to make up for through donations, according to Craig. “This has been one of so many cuts. You know, pain by a thousand cuts,” Craig says. “This [legal victory] was good … then, of course, after the initial feeling, you sit back, you take a breath, and you say, ‘and we still have a whole lot of work to do.’”See More:
    0 Comentários ·0 Compartilhamentos ·0 Anterior
CGShares https://cgshares.com