• As 23andMe Crumbles, It Ceases All Efforts to Develop Drugs Using DNA Data
    futurism.com
    Bad to Worse / Nov 17, 8:30 AM EST / by Noor Al-Sibai

    "We continue to believe in the promise shown by our clinical and preclinical stage pipeline."

    In the year since its devastating hack, 23andMe has lost virtually all of its stock value and most of its governing board, and somehow things just keep getting worse for the one-time DNA-kit wunderkind.

    As CEO Anne Wojcicki announced in a press release, 23andMe is not only laying off 40 percent of its remaining workforce, about 200 employees total, but also completely kiboshing the therapeutics program that was seeking to use AI and the company's hoard of genetic information to develop new drugs.

    "We are taking these difficult but necessary actions as we restructure 23andMe," the CEO said, "and focus on the long-term success of our core consumer business and research partnerships."

    As is her nature, Wojcicki put an optimistic spin on the news that her company is crumbling even worse than before.

    "We continue to believe in the promise shown by our clinical and preclinical stage pipeline and will continue to pursue strategic opportunities to continue their development," she said. These "strategic opportunities" may include, per the announcement, licensing or selling the drugs its therapeutics arm has discovered or is developing.

    Just a day after the restructuring news dropped, 23andMe also announced in its latest quarterly earnings report that its revenue was down to $44 million in the second quarter of 2024, from $50 million over the same period last year.
    In its latest filing with the Securities and Exchange Commission, the company noted that without significant capital, there is "substantial doubt" it can stay afloat.

    Despite her tendency toward spin, Wojcicki seemed sober when recounting the company's financial difficulties in an investor call this week, referencing the board's abrupt resignation earlier this year, which led to the company's near-delisting by the NASDAQ, and the subsequent stock-split scheme last month that saved it from going under entirely.

    "We have fulfilled our obligations as a public company and regained compliance with the NASDAQ listing standards by reconstituting our board and executing a reverse stock split," the CEO said, per CNBC.

    It's a sorry state of affairs for the company that was valued at $3.5 billion when it went public in 2021, and unless someone with a lot of money wants to throw some of it into the fire, it's hard to imagine a happy ending now.

    More on DNA kits: Fun New Mouth Swab Will Tell You When You'll Die
  • US Military Tests AI-Powered Machine Gun
    futurism.com
    What could go wrong?

    Gun Bots

    US Defense Department contractor Allen Control Systems (ACS) has developed an artificial intelligence-powered autonomous robotic gun system called the "Bullfrog," which can target small drones using proprietary computer vision software.

    As Wired reports, the Defense Department tested the system during the Technology Readiness Experimentation event earlier this year, which allows contractors like ACS to showcase their prototype technologies to the Pentagon. Recent footage shows the vehicle-mounted gun shooting small drones out of the sky with ease. And that kind of capability is more relevant than ever as small, uncrewed aircraft become increasingly common on the battlefield.

    "During the Russian invasion of Ukraine, we saw the proliferation of drones on both sides of the conflict, and we read in various news outlets the Ukrainians were firing AK-47s in the air at them," ACS cofounder and CEO Steve Simoni told Wired. "We thought, 'That's a good robotics problem.' It's hard to hit something flying so fast, but a robot can do that with modern-day computer vision and AI control algorithms."

    Computer Kill

    According to Simoni, the goal was to remove humans from the equation entirely, particularly considering how fast these uncrewed drones can fly.

    "We are electrical engineers, and we decided that in order to solve this problem of hitting a fast drone that's accelerating at five Gs at a couple hundred yards, you would need an incredibly high-end current that goes through a motor and encoders that know the position of your gun at all times," he told Wired. "To put that form factor in the hands of someone with an M4 seemed like a very tough problem."

    ACS's Bullfrog system is part of a much larger trend: the US military is dabbling in a whole range of remotely controlled and semi-autonomous weapons systems to shoot adversary drones out of the sky.
    Earlier this year, for instance, the US Army started experimenting with rifle-equipped robot dogs at a testing facility in the Middle East.

    The contractor claims the Bullfrog is incredibly cheap to use, especially compared to far more complicated and expensive laser or microwave weapons systems.

    So far, humans are still required to give the Bullfrog the green light before it can open fire, because there are strict policies governing the use of lethal autonomous weapons. However, ACS is keen to reassure the military that the system is technically capable of fully autonomous operation.

    "Our system is fully autonomous-capable; we're just waiting for the government to determine its needs," ACS's chief strategy officer Brice Cooper told Wired.

    But when or if those needs will change remains to be seen, and plenty of thorny ethical questions surround the use of such autonomous weapons.

    "Anything with robotics requires software to make the determination of friend or foe, and that's a concern with anything that's automated," former congressional defense appropriator Mike Clementi told Wired. "The use of fully automated systems would be uncharted territory. There's always been a person in the loop before."

    More on AI and guns: The US Army Is Testing Killer Robot Dogs With AI
  • Google DeepMind has a new way to look inside an AI's mind
    www.technologyreview.com
    AI has led to breakthroughs in drug discovery and robotics, and is in the process of entirely revolutionizing how we interact with machines and the web. The only problem is we don't know exactly how it works, or why it works so well. We have a fair idea, but the details are too complex to unpick. That's a problem: it could lead us to deploy an AI system in a highly sensitive field like medicine without understanding that it could have critical flaws embedded in its workings.

    A team at Google DeepMind that studies something called mechanistic interpretability has been working on new ways to let us peer under the hood. At the end of July, it released Gemma Scope, a tool to help researchers understand what is happening when AI is generating an output. The hope is that if we have a better understanding of what is happening inside an AI model, we'll be able to control its outputs more effectively, leading to better AI systems in the future.

    "I want to be able to look inside a model and see if it's being deceptive," says Neel Nanda, who runs the mechanistic interpretability team at Google DeepMind. "It seems like being able to read a model's mind should help."

    Mechanistic interpretability, also known as "mech interp," is a new research field that aims to understand how neural networks actually work. At the moment, very basically, we put inputs into a model in the form of a lot of data, and we get a bunch of model weights at the end of training. These are the parameters that determine how a model makes decisions. We have some idea of what's happening between the inputs and the model weights: essentially, the AI is finding patterns in the data and drawing conclusions from those patterns, but these patterns can be incredibly complex and often very hard for humans to interpret.

    It's like a teacher reviewing the answers to a complex math problem on a test. The student (the AI, in this case) wrote down the correct answer, but the work looks like a bunch of squiggly lines.
    This example assumes the AI is always getting the correct answer, but that's not always true; the AI student may have found an irrelevant pattern that it's assuming is valid. For example, some current AI systems will tell you that 9.11 is bigger than 9.8. Different methods developed in the field of mechanistic interpretability are beginning to shed a little light on what may be happening, essentially making sense of the squiggly lines.

    "A key goal of mechanistic interpretability is trying to reverse-engineer the algorithms inside these systems," says Nanda. "We give the model a prompt, like 'Write a poem,' and then it writes some rhyming lines. What is the algorithm by which it did this? We'd love to understand it."

    To find features (categories of data that represent a larger concept) in its AI model, Gemma, DeepMind ran a tool known as a sparse autoencoder on each of the model's layers. You can think of a sparse autoencoder as a microscope that zooms in on those layers and lets you look at their details. For example, if you prompt Gemma about a chihuahua, it will trigger the "dogs" feature, lighting up what the model knows about dogs. It is considered "sparse" because it limits the number of neurons used, pushing for a more efficient and generalized representation of the data.

    The tricky part of sparse autoencoders is deciding how granular you want to get. Think again about the microscope. You can magnify something to an extreme degree, but that may make what you're looking at impossible for a human to interpret. If you zoom too far out, you may be limiting what interesting things you can see and discover. DeepMind's solution was to run sparse autoencoders of different sizes, varying the number of features they want the autoencoder to find.

    The goal was not for DeepMind's researchers to thoroughly analyze the results on their own.
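    The encode-sparsify-decode round trip a sparse autoencoder performs can be sketched in a few lines. This toy version is purely illustrative: the two-feature dictionary and three-dimensional "activations" are hand-picked for the example, whereas real SAEs like those in Gemma Scope are trained and operate on vectors with thousands of dimensions.

```python
# Toy sparse autoencoder sketch. The weights below are invented for
# illustration; real SAEs learn them during training.

ENC = [[1.0, 0.0, 0.0],   # feature 0 reads the first activation channel
       [0.0, 1.0, 1.0]]   # feature 1 reads the last two channels
DEC = [[1.0, 0.0],        # decoder maps feature activations back to 3 dims
       [0.0, 1.0],
       [0.0, 1.0]]

def encode(x, k=1):
    """ReLU-encode, then keep only the top-k features (the 'sparse' part)."""
    acts = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in ENC]
    threshold = sorted(acts, reverse=True)[k - 1]
    return [a if a >= threshold and a > 0 else 0.0 for a in acts]

def decode(feats):
    """Reconstruct the activation vector from the sparse feature activations."""
    return [sum(DEC[i][j] * feats[j] for j in range(len(feats)))
            for i in range(len(DEC))]

x = [0.2, 1.0, 0.8]        # a fake residual-stream activation
feats = encode(x, k=1)     # only the strongest feature survives
print(feats)               # -> [0.0, 1.8]
print(decode(feats))       # -> [0.0, 1.8, 1.8]
```

    The "microscope zoom" trade-off discussed above corresponds to the number of rows in ENC: more features resolve finer concepts but become harder for a human to survey.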
    Gemma and the autoencoders are open source, so this project was aimed more at spurring interested researchers to look at what the sparse autoencoders found and, hopefully, gain new insights into the model's internal logic. Since DeepMind ran autoencoders on each layer of its model, a researcher could map the progression from input to output to a degree we haven't seen before.

    "This is really exciting for interpretability researchers," says Josh Batson, a researcher at Anthropic. "If you have this model that you've open-sourced for people to study, it means that a bunch of interpretability research can now be done on the back of those sparse autoencoders. It lowers the barrier to entry to people learning from these methods."

    Neuronpedia, a platform for mechanistic interpretability, partnered with DeepMind in July to build a demo of Gemma Scope that you can play around with right now. In the demo, you can test out different prompts and see how the model breaks up your prompt and what activations your prompt lights up. You can also mess around with the model. For example, if you turn the feature about dogs way up and then ask the model a question about US presidents, Gemma will find some way to weave in random babble about dogs, or the model may just start barking at you.

    One interesting thing about sparse autoencoders is that they are unsupervised, meaning they find features on their own. That leads to surprising discoveries about how the models break down human concepts. "My personal favorite feature is the cringe feature," says Joseph Bloom, science lead at Neuronpedia. "It seems to appear in negative criticism of text and movies. It's just a great example of tracking things that are so human on some level."

    You can search for concepts on Neuronpedia, and it will highlight what features are being activated on specific tokens, or words, and how strongly each one is activated.
    If you read the text and see what's highlighted in green, that's where the model thinks the cringe concept is most relevant. "The most active example for cringe is somebody preaching at someone else," says Bloom.

    Some features are proving easier to track than others. "One of the most important features that you would want to find for a model is deception," says Johnny Lin, founder of Neuronpedia. "It's not super easy to find: 'Oh, there's the feature that fires when it's lying to us.' From what I've seen, it hasn't been the case that we can find deception and ban it."

    DeepMind's research is similar to what another AI company, Anthropic, did back in May with Golden Gate Claude. It used sparse autoencoders to find the parts of Claude, its model, that lit up when discussing the Golden Gate Bridge in San Francisco. It then amplified the activations related to the bridge to the point where Claude identified not as Claude, an AI model, but as the physical Golden Gate Bridge, and would respond to prompts as the bridge.

    Although it may just seem quirky, mechanistic interpretability research may prove incredibly useful. "As a tool for understanding how the model generalizes and what level of abstraction it's working at, these features are really helpful," says Batson.

    For example, a team led by Samuel Marks, now at Anthropic, used sparse autoencoders to find features showing that a particular model was associating certain professions with a specific gender. They then turned off these gender features to reduce bias in the model. This experiment was done on a very small model, so it's unclear whether the work will apply to a much larger one.

    Mechanistic interpretability research can also give us insights into why AI makes errors. In the case of the assertion that 9.11 is larger than 9.8, researchers from Transluce saw that the question was triggering the parts of an AI model related to Bible verses and September 11.
    The researchers concluded the AI could be interpreting the numbers as dates, asserting the later date, 9/11, as greater than 9/8. In a lot of books, like religious texts, section 9.11 also comes after section 9.8, which may be why the AI thinks of it as greater. Once they knew why the AI made this error, the researchers tuned down the AI's activations on Bible verses and September 11, which led to the model giving the correct answer when prompted again on whether 9.11 is larger than 9.8.

    There are also other potential applications. Currently, a system-level prompt is built into LLMs to deal with situations like users who ask how to build a bomb. When you ask ChatGPT a question, the model is first secretly prompted by OpenAI to refrain from telling you how to make bombs or do other nefarious things. But it's easy for users to jailbreak AI models with clever prompts, bypassing any restrictions.

    If the creators of the models are able to see where in an AI the bomb-building knowledge sits, they can theoretically turn off those nodes permanently. Then even the most cleverly written prompt wouldn't elicit an answer about how to build a bomb, because the AI would literally have no information about how to build a bomb in its system.

    This type of granularity and precise control is easy to imagine but extremely hard to achieve with the current state of mechanistic interpretability. "A limitation is the steering [influencing a model by adjusting its parameters] is just not working that well, and so when you steer to reduce violence in a model, it ends up completely lobotomizing its knowledge in martial arts. There's a lot of refinement to be done in steering," says Lin.

    The knowledge of bomb making, for example, isn't just a simple on-and-off switch in an AI model. It most likely is woven into multiple parts of the model, and turning it off would probably involve hampering the AI's knowledge of chemistry. Any tinkering may have benefits but also significant trade-offs.
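    The steering Lin describes, dialing one feature's activation up or down before the model's state is reconstructed, can be illustrated with a toy decoder. Everything here is invented for the sketch (the feature names, the tiny three-dimensional space); it only shows the mechanics of clamping a single feature while leaving the others untouched, which is what Golden Gate Claude did with its bridge feature.

```python
# Toy feature-steering sketch. The feature dictionary is made up for
# illustration; a real model's features are learned and far more numerous.

DECODER = {                      # feature name -> direction in a 3-dim space
    "dogs":   [1.0, 0.0, 0.0],
    "cringe": [0.0, 1.0, 0.0],
}

def decode(activations):
    """Sum each active feature's direction, scaled by its activation."""
    out = [0.0, 0.0, 0.0]
    for name, a in activations.items():
        for i, w in enumerate(DECODER[name]):
            out[i] += a * w
    return out

def steer(activations, feature, value):
    """Override one feature's activation (turn it way up, or ablate it)."""
    steered = dict(activations)
    steered[feature] = value
    return steered

acts = {"dogs": 0.1, "cringe": 0.7}
print(decode(acts))                        # baseline: [0.1, 0.7, 0.0]
print(decode(steer(acts, "dogs", 5.0)))    # amplified: [5.0, 0.7, 0.0]
print(decode(steer(acts, "dogs", 0.0)))    # ablated:   [0.0, 0.7, 0.0]
```

    The trade-off Lin points to shows up even here: in a real model a concept like "violence" is not one clean key in a dictionary but a direction entangled with many others, so clamping it disturbs everything that shares those directions.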
    That said, if we are able to dig deeper and peer more clearly into the "mind" of AI, DeepMind and others are hopeful that mechanistic interpretability could represent a plausible path to alignment: the process of making sure AI is actually doing what we want it to do.
  • Best Internet Providers in Oklahoma City, Oklahoma
    www.cnet.com
    With cable, fiber, high speeds and more, Oklahoma City has a lot of internet options available.
  • Best Vitamins for Healthy Hair, Skin and Nails in 2024
    www.cnet.com
    Our Picks

    - Best overall vitamin for hair, skin and nails: Nature's Bounty Extra Strength Hair, Skin and Nails ($12 at Amazon)
    - Best gummy vitamin for hair, skin and nails: Olly Undeniable Beauty Hair, Skin and Nails ($13 at Amazon)
    - Best vitamin for hair growth: Nutrafol Women ($88 at Amazon)
    - Best subscription hair, skin and nail vitamins: Persona Nutrition: Hair, Skin and Nails ($11 at Persona)
    - Best budget hair, skin and nail vitamin: Revly Hair, Skin and Nail Complex (currently unavailable at Amazon)

    It's natural to want your hair, skin and nails to stay in the best shape possible. Although a balanced diet is key to keeping them healthy, you may also want to consider taking dietary supplements, after consulting your doctor about the essential vitamins you might need for better hair growth, glowing skin and healthy nails.

    You may think your diet covers everything your body needs, but doctors often prescribe vitamins and supplements to help people who are deficient in ways they can't help. These supplements are a practical way to incorporate essential nutrients into your body, whether it's vitamin A, C or D or other vital minerals and proteins, like iron and collagen. If you struggle with vitamin deficiency and find it tricky to incorporate key vitamins into your diet, a few tablets can be a highly effective workaround.

    Read more: 22 of the Best Gifts Under $50 for 2024

    But the market is flooded with a ton of supplements, and there are no standard measurements, so finding the right vitamins can be daunting.
    Vitamins might be weighed in milligrams or micrograms, and there's a considerable difference: a milligram is equivalent to 1,000 micrograms, in the same way that a gram is equivalent to 1,000 milligrams.

    Our CNET experts have gone through the scientific research to find the best vitamin supplements for hair, skin and nails, to help you match your goals and save you trouble when you aren't sure where to look.

    What are the best overall vitamins for hair, skin and nails?

    A lot of multivitamins claim they can improve hair, skin and nails, so you need to pay attention to the ingredients. The best overall vitamins for hair, skin and nails include vitamin B7 (biotin), collagen, vitamin C and omega-3s.

    Vitamin B7 is essential for the health of your hair, skin and nails, but it isn't the type of thing you can stock up on: taking a ton of biotin doesn't amplify the benefits. Collagen is a protein that makes up connective tissue. As we age and our production of collagen decreases, the once-tight fibers become more like a maze, which translates to wrinkles on the face. Supplements include collagen to help your skin's elasticity and reduce wrinkles. Vitamin C has various benefits for the body, and it also increases collagen production. Lastly, omega-3s help maintain the cholesterol-derived layer of our skin cells. They contribute to the shine of your hair and keep your scalp healthy.
    Studies have found that omega-3s can also help treat the symptoms of inflammatory skin conditions.

    Best vitamins for hair, skin and nails in 2024

    Nature's Bounty Extra Strength Hair, Skin and Nails ($12 at Amazon)

    Pros:
    - Affordable multivitamin at only $12 a bottle
    - USP certified, meaning the factories meet FDA good manufacturing practices
    - Lasts for 50 days

    Cons:
    - The vitamins aren't third-party certified for purity
    - Not an option for vegans because they contain gelatin

    Price: $ | Form: Gel capsule | Serving size: Three gel capsules | Supply: 50 days

    Nature's Bounty Extra Strength multivitamin is the best overall hair, skin and nail vitamin because of its robust nutrient composition. It's also one of the most affordable multivitamins you can find, at around $12 for a 150-capsule bottle. The dosage is three soft gel capsules each day: not the worst I've seen, but not the best either.

    Vitamin A and zinc are included in this multivitamin to promote collagen production. If you have a vitamin D deficiency, a huge benefit is that you also get 100% of your daily recommended amount, which may help clear up acne. Unlike other options available, Nature's Bounty includes horsetail for thinning hair and skin health, though a 2019 review of research found insufficient evidence to establish horsetail as an effective treatment for hair loss.

    Nature's Bounty also includes a significant amount of biotin (vitamin B7) at 5,000 mcg per serving. That sounds like a lot, but no side effects have been reported at doses of up to 10,000 mcg (10 mg) of biotin. Experts note that supplementing biotin at high levels can cause false or low test results for immunoassays, which use biotin as part of the testing method, so that's something to consider for upcoming doctor's appointments.

    Olly Undeniable Beauty Hair, Skin and Nails ($13 at Amazon, $14 at Olly)

    Olly Undeniable Beauty gummy vitamins promote hair, skin and nail health with key ingredients biotin, vitamin C, vitamin E and keratin.
    Vitamin E has been linked to treating eczema by suppressing inflammation, and vitamin C aids in collagen production and UV skin protection.

    Olly Undeniable Beauty Hair, Skin and Nails vitamins contain a large dose of biotin at 2,500 mcg. I was also glad to see 50 mg of keratin included. Keratin is the basic component of our hair, skin and nails; research into the effectiveness of taking additional keratin supplements is lacking, but it's a good option for people with a keratin deficiency.

    Olly gummies are naturally flavored and colored with sweet potato, apple, cherry, radish, carrot and blueberry juices. Reviews of Olly suggest these grapefruit-flavored gummies taste good, though some reviewers say the smell is off-putting in some batches.

    I like Olly because of the depth of the product line: if you're looking for a vitamin that only targets hair or nail health, you have that option.
  • Half-Life 2 marks 20th anniversary by breaking its own concurrent record on Steam
    www.eurogamer.net
    The good life.

    Image credit: Valve | News by Vikki Blake, Contributor | Published on Nov. 17, 2024

    Twenty years after its 2004 debut, Half-Life 2 has broken its own concurrent player record on Steam. According to SteamDB, the seminal shooter secured a peak of just over 61,700 simultaneous players over the weekend, the highest number the game has achieved since records began in 2008. In fact, the number I've just written is probably already out of date: it has been rising all day, increasing several times in the time I've been writing this.

    Racing Down Highway 17 With The Half-Life 2 VR Mod Is AWESOME! - Ian's VR Corner. Watch on YouTube

    Given this weekend's surprise release of a special anniversary update, it's possible players are jumping back into the game to revel in nostalgia, or possibly to check out the new director's commentary. It's also possible, given Valve is currently giving away the game for free, that new players are trying it for the first time.

    If you haven't seen it yet, in the accompanying documentary now available on YouTube, the Half-Life 2 development team revealed that Episode 3 didn't make it after the team switched focus to getting Left 4 Dead "out the door."

    "Left 4 Dead needed an all-hands-on effort to ship, and so we put down [Episode 3] to go help Left 4 Dead," explained developer David Speyrer. "It took long enough that by the time we considered going back to Episode 3, the argument was made like, well, we missed it. It's too late now. We really need to make a new engine to continue the Half-Life series.

    "Now, in hindsight, that seems so wrong. We could have definitely gone back and spent two years to make Episode 3."
  • Black Ops 6 is going to let you use legacy XP tokens after all - but not quite yet
    www.eurogamer.net
    Treyarch says it is "currently testing a way to implement this change correctly in a future update."

    Image credit: Activision / Eurogamer | News by Vikki Blake, Contributor | Published on Nov. 17, 2024

    Treyarch has enabled, disabled, and is now reintroducing legacy XP tokens in Call of Duty: Black Ops 6.

    Along with five multiplayer maps (three of which are brand new), free and premium content, a new Zombies "experience in-season," and full integration with Call of Duty's spin-off battle royale, Warzone, Black Ops 6's Season 1 also introduced something else: legacy XP tokens. It turns out the latter was an accident, though; no sooner did players spot legacy XP tokens in the new shooter than developer Treyarch "fixed an issue that incorrectly allowed legacy XP tokens to be activated in Black Ops 6 UI."

    Call of Duty: Black Ops 6 Opening Scene and Gameplay (4K). Watch on YouTube

    If you were frustrated by the fix, you're not alone, which is why Treyarch is exploring a way to do this "correctly" in the future.

    "With the start of Season 01, a UI bug allowed players to activate legacy XP tokens in Black Ops 6. Unfortunately, it also introduced some potential risk to game stability, which is why it was patched yesterday," Treyarch explained. "We realise how much players appreciate being able to redeem legacy XP tokens in both BO6 and Warzone, so we are currently testing a way to implement this change correctly in a future update. This allows us time to ensure stability is maintained before we reintroduce this feature.

    "In the interim, players can activate any legacy XP tokens in Warzone.
    Any tokens applied in Warzone will also apply to Black Ops 6 should you switch titles or modes," Treyarch concluded, before revealing we should hear more about this change "next week."

    The latest Call of Duty: Black Ops 6 update also nerfed assault rifles. At the same time, the Ghost perk, which hides you from UAV and scout pulse, bugged out and seemingly couldn't be equipped anymore.

    "For a series built on high-octane thrills and explosive gratification, its withdrawal to the well-trodden formula echoes the wider industry's continued allergy to risk," Chris Tapsell wrote in Eurogamer's Call of Duty: Black Ops 6 review.
  • Disgruntled X users make the switch to Bluesky
    techcrunch.com
    Welcome back to Week in Review. This week, we're breaking down Bluesky's big surge in users, Elon Musk co-leading Trump's Department of Government Efficiency, and Mark Zuckerberg's latest foray into extreme wife-guy behavior. Let's go.

    Bluesky is seeing a major surge as X users unhappy with the platform's latest policy decisions move to the competitor social network. The decentralized social media platform has grown to more than 16 million users, including Swifties. If you're making the switch, or at least want to see if the grass is greener (or bluer) on the other side, we've put together a guide on how to get started.

    Tesla's Cybertruck is facing its sixth recall in a year, affecting 2,431 units. A report from Tesla found that those trucks are or were equipped with a faulty drive inverter. Unlike October's Cybertruck recall, which could be solved with an over-the-air update, Tesla will need to physically replace the recalled drive inverters for this batch. The EV maker said it would do so free of charge.

    Elon Musk will co-lead President-elect Donald Trump's Department of Government Efficiency, the acronym of which references Musk's favorite cryptocurrency. Musk will lead the department with biotech entrepreneur and former presidential hopeful Vivek Ramaswamy to help Trump's administration dismantle government bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure federal agencies.

    This is TechCrunch's Week in Review, where we recap the week's biggest news. Want this delivered as a newsletter to your inbox every Saturday?
    Sign up here.

    News

    - Mark Zuckerberg x T-Pain: Mark Zuckerberg enlisted T-Pain to write and record an acoustic cover of Lil Jon & The East Side Boyz's "Get Low." Read more
    - Standing desks aren't as healthy as you think: Sorry to standing desk users, but a new study found that standing for more than two hours a day doesn't protect against cardiovascular risks, and it actually heightens an individual's risk of circulatory problems. Read more
    - "Talk Tuah" dating coach: Social media star Haliey Welch has launched Pookie Tools, an AI-powered dating advice app for Gen Z singles. The app's chatbot helps write conversation starters, while another tool predicts whether a potential match is lying about their height. Read more
    - Writer nabs $200M: The generative AI startup has raised $200 million at a $1.9 billion valuation to expand its platform. CEO May Habib says the new cash will be used for product development and cementing the company's leadership in the enterprise generative AI category. Read more
    - Amazon takes on Temu: Amazon has rolled out the Amazon Haul store, a storefront that offers discounted and mass-produced items, most of which ship from China, to better compete with highly popular competitors Temu and Shein. Read more
    - Just Eat sells off Grubhub: Read more
    - SBF is headed to the big screen: Lena Dunham is working with Apple and A24 to adapt Michael Lewis's book Going Infinite, which chronicles the life of Sam Bankman-Fried and the implosion of FTX. Now to wonder who will be cast as SBF... Read more
    - Prepare to see more AI video slop: InVideo is launching a generative AI-powered video creation feature that lets people use prompts to make videos in a variety of styles, including live-action, animated, or anime. Read more
    - Apple's wall-mounted tablet: Apple is reportedly planning to release a tablet that mounts to your wall, controls smart home appliances, and does video calls, as early as March 2025.
    Of course, the device will feature Apple Intelligence. Read more
    - Ads are coming to Perplexity: The AI-powered search engine is experimenting with ads. The site will show ads in the US to start, formatted as sponsored follow-up questions from partners like Indeed, Whole Foods, Universal McCann, and PMG. Read more
    - You can now play "Hot Cross Buns" on your phone: Artinoise's latest product reimagines the classic plastic recorder. Read more