• How much does ChatGPT cost? Everything you need to know about OpenAI's pricing plans
    techcrunch.com
OpenAI's AI-powered chatbot platform ChatGPT keeps expanding with new features. The chatbot's memory feature lets you save preferences so that chats are more tailored to you. ChatGPT also has an upgraded voice mode, letting you interact with the platform more or less in real time. It even offers a store, the GPT Store, for AI-powered applications and services. So, you might be wondering: How much does ChatGPT cost? It's a tougher question to answer than you might think. OpenAI offers an array of plans for ChatGPT, both paid and free, aimed at customers ranging from individuals to nonprofits, small- and medium-sized businesses, educational institutions, and enterprises. To keep track of the various ChatGPT subscription options available, we've put together a guide on ChatGPT pricing. We'll keep it updated as new plans are introduced.

ChatGPT free
Once upon a time, the free version of ChatGPT was quite limited in what it could do. But that's changed as OpenAI has rolled out new capabilities and underlying generative AI models. ChatGPT free users get access to OpenAI's GPT-4o mini model, responses augmented with content from the web, access to the GPT Store, and the ability to upload files and photos and ask questions about those uploads. Free users also have limited access to more advanced features, including Advanced Voice mode, GPT-4o, and o3-mini. Users can also store chat preferences as memories and leverage advanced data analysis, a ChatGPT feature that can reason over (i.e., analyze data from) files such as spreadsheets and PDFs. There are downsides that come with the free ChatGPT plan, however, including daily capacity limits on the GPT-4o model and file uploads, depending on demand. ChatGPT free users also miss out on more advanced features, which we discuss in greater detail below.

ChatGPT Plus
For individual users who want a more capable ChatGPT, there's ChatGPT Plus, which costs $20 per month. ChatGPT Plus offers higher capacity than ChatGPT free (users can send 80 messages to GPT-4o every three hours and unlimited messages to GPT-4o mini), plus access to OpenAI's reasoning models, including o3-mini, o1-preview, and o1-mini. Subscribers to ChatGPT Plus also get access to multimodal features, such as Advanced Voice mode with video and screen sharing, although they may run into daily limits. ChatGPT Plus subscribers also get limited access to newer tools, including OpenAI's deep research agent and Sora's video generation. In addition, ChatGPT Plus subscribers get an upgraded data analysis feature, underpinned by GPT-4o, that can create interactive charts and tables from datasets. Users can upload the files to be analyzed directly from Google Drive and Microsoft OneDrive or from their devices.

ChatGPT Pro
For people who want near-unlimited access to OpenAI's products, and the chance to try new features out first, there's ChatGPT Pro. The plan costs $200 a month. Subscribers to ChatGPT Pro get unlimited access to reasoning models, GPT-4o, and Advanced Voice mode.
The $200 tier also comes with 120 deep research queries a month, as well as access to o1 pro mode, which uses more compute than the version of o1 available in ChatGPT Plus. ChatGPT Pro users also get access to OpenAI's web-browsing agent, Operator, and more video generations with Sora. OpenAI tends to release most of its new features to ChatGPT Pro users first, and these users get priority access to existing features, such as GPT-4o, during times of high demand.

ChatGPT Team
Say you own a small business or manage an org and want more than one ChatGPT license, plus collaborative features. ChatGPT Team might fit the bill: It costs $30 per user per month, or $25 per user per month billed annually, for up to 149 users. ChatGPT Team provides a dedicated workspace and admin tools for team management. All users in a ChatGPT Team plan gain access to OpenAI's latest models and the aforementioned tools that let ChatGPT analyze, edit, and extract info from files. Beyond this, ChatGPT Team lets people within a team build and share custom apps, similar to the apps in the GPT Store, based on OpenAI models. These apps can be tailored for specific use cases or departments, or tuned on a team's data.

ChatGPT Enterprise
Large organizations (any organization in need of more than 149 ChatGPT licenses, to be specific) can opt for ChatGPT Enterprise, OpenAI's corporate-focused ChatGPT plan. OpenAI doesn't publish the price of ChatGPT Enterprise, but the reported cost is around $60 per user per month with a minimum of 150 users and a 12-month contract. ChatGPT Enterprise adds enterprise-grade privacy and data analysis capabilities on top of the vanilla ChatGPT, as well as enhanced performance and customization options. There's a dedicated workspace and admin console with tools to manage how employees within an organization use ChatGPT, including integrations for single sign-on, domain verification, and a dashboard showing usage and engagement statistics. Shareable conversation templates provided as a part of ChatGPT Enterprise allow users to build internal workflows and bots leveraging ChatGPT, while credits to OpenAI's API platform let companies create fully custom ChatGPT-powered solutions if they choose. ChatGPT Enterprise customers also get priority access to models and lines to OpenAI expertise, including a dedicated account team, training, and consolidated invoicing. And they're eligible for Business Associate Agreements with OpenAI, which are required by U.S. law for companies that wish to use tools like ChatGPT with private health information such as medical records.

ChatGPT Edu
ChatGPT Edu, a newer offering from OpenAI, delivers a version of ChatGPT built for universities and the students attending them, as well as faculty, staff researchers, and campus operations teams. Pricing hasn't been made public or reported secondhand yet, but we'll update this section if it is. ChatGPT Edu is comparable to ChatGPT Enterprise, with the exception that it supports SCIM, an open protocol used to simplify cloud identity and access management. (OpenAI plans to bring SCIM to ChatGPT Enterprise in the future.) As with ChatGPT Enterprise, ChatGPT Edu customers get data analysis tools, admin controls, single sign-on, enhanced security, and the ability to build and share custom chatbots. ChatGPT Edu also comes with the latest OpenAI models and, importantly, increased message limits.

OpenAI for Nonprofits
OpenAI for Nonprofits is OpenAI's early foray into nonprofit tech solutions.
It's not a stand-alone ChatGPT plan so much as a range of discounts for eligible organizations. Nonprofits can access ChatGPT Team at a discounted rate of $20 monthly per user. Larger nonprofits can get a 50% discount on ChatGPT Enterprise, which works out to about $30 per user. The eligibility requirements are quite strict, however. While nonprofits based anywhere in the world can apply for discounts, OpenAI isn't currently accepting applications from academic, medical, religious, or governmental institutions. This article was originally published on June 15, 2024. It was updated on February 25, 2025, to include new features from OpenAI, including o1 and deep research, as well as the new ChatGPT Pro plan.
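To make the plan-to-plan comparison easier, here is a minimal sketch in Python that tallies approximate annual per-seat costs from the figures cited above; the Enterprise rate is the reported estimate (not an official OpenAI price), and the nonprofit rates are the discounted figures described in this guide.

# Approximate annual cost per seat for the ChatGPT plans cited in this guide.
# The Enterprise figure is a reported estimate (~$60/user/month, 150-seat
# minimum), not an official price.

MONTHLY_PER_SEAT = {
    "Free": 0,
    "Plus": 20,
    "Pro": 200,
    "Team (monthly billing)": 30,
    "Team (annual billing)": 25,
    "Enterprise (reported)": 60,
    "Nonprofit Team discount": 20,
    "Nonprofit Enterprise discount (~50% off)": 30,
}

def annual_cost(monthly_rate: float, seats: int = 1) -> float:
    """Annual cost for a given per-seat monthly rate and seat count."""
    return monthly_rate * 12 * seats

if __name__ == "__main__":
    for plan, rate in MONTHLY_PER_SEAT.items():
        print(f"{plan}: ${annual_cost(rate):,.0f} per seat per year")
    # The reported 150-seat Enterprise minimum works out to roughly
    # $108,000 per year (150 seats x $60 x 12 months).
    print(f"Enterprise minimum (150 seats, reported): ${annual_cost(60, 150):,.0f} per year")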
• Y Combinator deletes posts after a startup's demo goes viral
    techcrunch.com
A demo from Optifye.ai, a member of Y Combinator's current cohort, sparked a social media backlash that ended up with YC deleting it off its socials. Optifye says it's building software to help factory owners know who's working and who isn't in real time, thanks to AI-powered security cameras it places on assembly lines, according to its YC profile. On Monday, YC posted an Optifye demo video on X (and on LinkedIn), according to a snapshot saved by TechCrunch. The video shows Optifye co-founder Kushal Mohta acting as the boss of a garment factory, calling a supervisor (in reality his co-founder Vivaan Baid) about a low-performing worker known only as Number 17. "Hey Number 17, what's going on, man? You're in the red," Baid asks the worker, who responds that he's been working all day. "Working all day? You haven't hit your hourly output even once and you had 11.4% efficiency. This is really bad," Baid retorts. After checking Optifye's dashboard, the supervisor looks at the output of Number 17 for 15 days, decides that the worker has been underperforming, and calls the worker out on it. "Rough day? More like a rough month," he says. The clip was heavily criticized on X, where @VCBrags called it "sweatshops-as-a-service" and another deemed it "computer vision sweatshop software." It also sparked criticism on Y Combinator's own link-sharing site, Hacker News. Not everyone was critical, though. Eoghan McCabe, the CEO of customer support startup Intercom, posted that anyone complaining better stop buying products made in China and India. Indeed, it's not too difficult to find tech companies in China touting a sleep detection camera that uses computer vision to spot sleeping workers, for example. Either way, YC ended up deleting the demo video from its socials, but not before it was saved by several accounts. Neither YC nor Optifye.ai responded to a request for comment. The video's likely unintended virality showcases growing anxieties over the rise of AI, especially in the workplace. Most Americans oppose using AI to track workers' desk time, movements, and computer use, a Pew poll found in 2023. This is a segment of surveillance products sometimes called "bossware." That hasn't stopped VCs from funding the space, though. Invisible AI, for example, raised $15 million in 2022 to stick worker-monitoring cameras in factories, too.
  • What are GFCI outlets? Plus 6 things you should never plug into one
    www.zdnet.com
While GFCI outlets offer reliable protection compared with standard outlets, they aren't meant for every type of electrical device. Some items should never be plugged into a GFCI, as they could malfunction or trip the circuit unnecessarily, leading to power disruptions or unsafe situations. Below is a list of devices not suited for GFCIs.

1. Devices with a "high inrush" current
Appliances like refrigerators, freezers, air conditioner units, and power tools require a serious initial burst of electricity when powered on. Yes, even though refrigerators are almost always located in kitchens, the counterintuitive truth is that they can cause the GFCI to trip because of the large initial surge of current, even if there is no actual fault. This is known as "nuisance tripping," and it is aptly named. Don't risk spoiling a fridge full of food by plugging it into a GFCI.

2. Outdoor equipment
Similarly, outdoor equipment such as electric lawnmowers or pressure washers should ideally be plugged into a dedicated outdoor outlet. While GFCIs are designed for outdoor use to reduce shock hazards, high-powered equipment can cause the GFCI to trip if it draws too much current at one time.
Also: The best home EV chargers of 2025: Expert tested

3. High-powered appliances
Appliances that use significant power, such as space heaters, microwave ovens, or vacuum cleaners, can cause a GFCI outlet to trip, especially if they are used on a circuit with high loads. These appliances could create a situation where the GFCI trips frequently (more nuisance tripping).

4. Surge protectors or power strips
Some power strips and surge protectors, particularly those with multiple plugs for high-powered devices, could cause the GFCI outlet to trip due to the combined electrical load. A sudden power surge or imbalance could cause the GFCI to trip unexpectedly, defeating the GFCI outlet's purpose and causing repeated circuit tripping.

5. Sump pumps
While the National Electrical Code (NEC) specifically mandates that new construction includes GFCIs in basements, there is another ironic exclusion among the list: sump pumps. Sump pumps are designed to prevent flooding, but if the GFCI trips and cuts off its power, your basement might end up under several inches of water.
Also: How I used this portable power station to bring electricity to a caveman

6. Medical equipment
Medical equipment like CPAP machines and oxygen concentrators require continuous, uninterrupted power, so we advise plugging these vital devices into conventional outlets, not GFCIs. Some medical devices have sensitive circuitry, and the GFCI may trip unnecessarily, causing a loss of power to life-sustaining equipment.
• The Apple AirTag just hit its lowest price ever at $17 each
    www.zdnet.com
    Right now, grab a four-pack of Apple AirTags for only $68 to help the iPhone user in your life monitor their keys, wallet, luggage, and more.
  • Major Deadlock Patch Overhauls The Map To Have Three Lanes
    www.forbes.com
Deadlock looks very different now. Credit: Valve / Mike Stubbs
A major new patch for Deadlock has just been revealed, with the map getting some big changes that will impact the entire game. The somewhat iconic four-lane map has now been changed to a more traditional three-lane map, in a move that will shake up more than just the playing field. The headline change in the latest Deadlock patch is obviously the removal of one of the lanes on the map. Since Deadlock started to break cover last year, it has been played on a four-lane MOBA map that, despite the extra lane, was always very easy to move around. However, this patch has completely reworked the entire map, removing a lane and changing the spaces between the remaining ones. A first look at the new map will instantly feel familiar to any MOBA player, with three lanes, including a mid lane that is noticeably shorter than the other two. This is obviously going to shake up how the game is played, with teams now more likely to split their players two to each lane, rather than the solo and duo lane combinations that have been happening so far. But this will likely also have a big impact on the Deadlock meta and which heroes are deemed the strongest. With solo lanes now likely a thing of the past, some heroes that thrived in them will likely need some balance changes to tweak their power spike timings. Given the patch has only just launched, a lot of this is yet to be figured out, but I am expecting it to have far-reaching implications for a long time. The patch also includes changes to how Soul Orbs work: last hits are no longer required to release souls, and provided you are in the area, they will still appear. This should make it easier for players to gain souls and increase their power a little quicker. There are also changes to almost every hero, including some that have been given new models that look great. Fans seem divided on this new patch, which is to be expected with a change this big, but it seems that Valve's plan to work on fewer but more impactful Deadlock patches is certainly working. There's no word on whether this is an experiment or a permanent change, but given the hype around the patch already, you have to assume they will be pleased with the results. I haven't played Deadlock for months, but with this new map change I'm going to go boot it up as soon as I hit publish.
  • AI Thinking Time And Prompting To Be Better Handled By AI Makers This Way
    www.forbes.com
Advanced generative AI involves stipulating "thinking time" for the AI, and here's the way this is going to go.
In today's column, I identify three eras underlying the advent of so-called thinking time when it comes to using generative AI and large language models (LLMs) and discuss the changes in prompting that will arise soon accordingly. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For my extensive discussion of fifty key prompting strategies and the value of solid prompt engineering, see the link here.

AI Thinking Time Is A Hot Topic
When you use modern-day generative AI, the latest versions tend to have a chain-of-thought (CoT) capability that is now built into the AI; see my in-depth explanation at the link here. The essence of the chain-of-thought approach is that the AI derives a series of steps to follow when trying to process the prompt that a user has entered. You can somewhat liken this to human decision-making and problem-solving in the broad sense of coming up with logical steps to figure things out (though do not anthropomorphize AI; it isn't on par with how humans think). In the AI community, this extended use of processing time has been given all kinds of fanciful titles, such as providing additional "test-time compute" or so-called thinking, reasoning, or inference time. I disfavor the phrasing of thinking/reasoning/inference time since those words smack of human thought. I find the phrase "test-time compute" quite questionable, since the idea of testing is something usually done before a system or app is rolled out to users, while in the case of AI, the semblance seems to be that test-time also encompasses when users are actively utilizing a supposedly fielded system. I prefer the admittedly bland but apt indication that you are simply providing more processing time for the AI to dig deeper into deriving an answer or solution.

Imagine things this way. You are using an AI app that plays chess. If you timebox the app to a split second of processing time, there isn't sufficient time to examine a plethora of possible chess moves. The AI must cut off the look-ahead since there isn't sufficient processing time to do a lot of calculations. On the other hand, you might tell the AI that it can take up to five minutes to make a chess move. This would allow for a bunch more processing time to explore a wide variety of chess moves and potentially derive a better choice for the move at hand. Some would say this is adding thinking time, reasoning time, test-time compute, and so on. I'd suggest we could also be more plainspoken and note this as extending the run-time, real-time processing, or execution time allotment. You decide which vernacular is the most parsimonious and, frankly, honest.

Time Estimation And Consumption
Since the AI community seems to have decided that they like the impressive-sounding parlance of "thinking time," I am going to proceed to use that catchphrase in this discussion, since I will be showing generative AI examples entailing this topic. Please know that I use that phrase with great reluctance. In your mind, construe the thinking time as simply the processing time, thanks. I have identified three eras associated with the amount of thinking time for generative AI:

(1) First Era: AI vendor choice. AI makers decide on the amount of thinking time.
(2) Second Era: User choice. Users get to decide on the amount of thinking time.
(3) Third Era: Human-AI choice. Human-AI collaboration on the amount of thinking time.

Right now, we are primarily in the first era. It goes like this. You ask generative AI a question, and the AI will run for some amount of thinking time to figure out your answer. The AI maker has decided how much thinking time you will be allowed. To some extent, you can pay a fee to get extended thinking times; otherwise, it is set by the AI maker. This has led us to the second era. After realizing that users might want to set how much thinking time is to be consumed, AI makers are variously now implementing the option of users being able to choose the amount of thinking time. For example, you might choose between Low, Medium, and High. That's the ChatGPT approach for the moment. Another angle is a sliding scale, which is what Anthropic is moving toward. You will see in a moment that this ability to choose the desired thinking time isn't all that it's cracked up to be. Hang in there. I predict we are heading at a rapid pace toward a third era. In the third era, the amount of thinking time will be collaboratively ascertained on a human-AI basis. In short, the generative AI will discuss the thinking-time aspects with the user, and the user then gets the final say on the amount of thinking time that will be utilized for a prompt or used on a default basis. My logic for saying that AI makers are inevitably and soon going to shift into my defined third era is that users of generative AI are going to express their dissatisfaction with the second-era approach. Maybe few of the AI makers will admit that there is such discontent and will merely proclaim they have enhanced how thinking time is set. Sure, whatever gets us to improvements for those using generative AI. Spin away.

Example Of Low, Medium, High Approach
The first era is what you have already experienced when using most of the conventional generative AI. No need to dwell on that era. Let's jump into the slowly emerging second era and see some examples. The ability to pick from low, medium, or high is straightforward and provides a useful illustration of the second-era approach. There is either a picklist presented to you for each prompt, or the AI directly asks you which you prefer. Here is an example of generative AI asking for your preference.

Generative AI: "Indicate the amount of thinking time: Low, Medium, or High."
My entered prompt: "The thinking time for this question is to be Low. What is the capital of France?"

Observe that I told generative AI that I wanted to go with low for the prompt I am entering. My basis for choosing low is that the question I am going to ask is pretty easy. By simply asking about the capital of France, I expect that the thinking time should be quite minimal. No sense in telling the AI to go with high when I can guess that the thinking time isn't going to be notable. This brings up an important point. Some generative AI apps will do whatever you say, such that if you say high, the AI will potentially whirl and calculate far beyond the true amount of time needed. You have essentially told the AI that this is okay with you. The problem is that you might end up paying for extra cycles of processing that you really didn't need to have expended. It's kind of a rip-off. The retort by the AI maker is that if the user has said they want a high amount of thinking time, by gosh, that's on their shoulders.
It is up to the user to realize what they are doing. Plus, the added thinking time can be construed as a handy double-check. You are getting the AI to do a lengthier exploration, and as a result, you can be happier with the answer given. Mull that over.

Example Of A Sliding Scale
Though there is a bit of a splash made about using a sliding scale instead of the low, medium, and high, the rub is still about the same. Take a look at this example.

Generative AI: "Indicate on a sliding scale from 1 to 10 the amount of thinking time (1 is low, 10 is high)."
My entered prompt: "The thinking time should be 7. Tell me the best chess move based on the board positions that I will give you in my next prompt."

You can certainly tout the extravagance of being able to use a sliding scale, which might be an actual bar on the screen with a slider or could be, as I've shown above, the entry of a number. In reality, with a scale of 1 to 10, you could reasonably assume that 1 is low, 5 is medium, and 10 is high. Your ability to choose those numbers or something in between might be nice, but it doesn't get us to the moon, if you know what I mean. The essence is that the selection of the thinking time is still on the shoulders of the user.

Third Era Approaching Fast
I will now shift into the approaching third era. Let's continue my example using the sliding scale. Suppose that the AI consumed the 7 as the amount of thinking time, but the answer wasn't quite as good as it could have been if I had given a higher number. Please know that in the first and second eras, you would not be informed that your choice of time was a tad low. Tough luck to you. In the third era, something like this would happen.

Generative AI: "You told me to use 7 as the amount of thinking time. I did so. The best chess move based on that amount of thinking time is shown next. I suggest you consider rerunning at a higher amount of thinking time if you want a deeper answer."
My entered prompt: "Go ahead and rerun with a thinking time of 9."

You can see that the AI not only proceeded as I originally instructed, but it also detected that there was more that could have been done to give a stronger answer. The AI kindly informed me accordingly. I then opted to do a rerun with a 9 as the amount of thinking time. This showcases the third era as consisting of human-AI collaboration in establishing the thinking time. Rather than the first era, where the AI makes the choice, and the second era, where the user makes the choice (though somewhat blindly), the third era entails the AI and the user working hand-in-hand to figure out the thinking time. Nice.

Midstream Adjustments To Occur
You might have had some heartburn that the AI informed me after the fact that my 7 was less than what might have been a better choice. I reran the prompt with a 9, but I had already incurred the cost and delay associated with my prompt that said to use 7. You might say that I am doubling my cost and that this seems unfair. I agree. The third era will introduce the midstream capability of adjusting thinking time. So, for this next example, envision that my initial prompt of 7 was accepted and the AI got underway. Here's what might have happened.

Generative AI: "You told me to proceed with a thinking time of 7. I started on this. I now believe that if you are willing to go to a thinking time of 9, the answer will be notably improved. Is that okay with you, or should I stop at the 7?"
My entered prompt: "Please further proceed with a thinking time of 9."

The beauty is that I don't incur a complete rerun.
Midstream of processing, the AI came back to me and asked if I was willing to up the ante to 9. I said yes. Cynical readers might right away be bellowing that this is going to incentivize the AI makers to convince users to increase their thinking times, perhaps just to make a buck. I get that. There is little doubt that the AI could be tilted to ask the user for more thinking time even when it is misleading or an outright lie. Ka-ching goes the cash register for the AI maker. It will be hard for an average user to discern whether they are being honestly told to increase their time or are being tricked into doing so. The saving grace, perhaps, would be that AI makers doing this tomfoolery are taking a huge reputational risk if it is discovered they are purposely gaming users. Possibly lawsuits and maybe criminal action could be in their future for such deceptions of users (for more on the evolving realm of AI and the law, see my analysis at the link here). We'll have to wait and see how this pans out.

Collaboration Gets More Robust
Part of the issue with my having stipulated the 7 as my desired thinking time for my prompt was that I had to take a wild guess about the matter. Consider things this way. You go to a car mechanic to fix your car. Suppose the car mechanic asks you how much you are willing to spend to fix the car. That seems zany. The car mechanic ought to give you an estimate. Few people would magically know how much they think the car fix is going to cost. It doesn't make much sense to do things that way. The same will hold true in the third era of generative AI thinking time. Here is an example.

Generative AI: "You can tell me how much thinking time you want to use, doing so with a sliding scale from 1 (low) to 10 (high). If you are unsure of how much thinking time is needed, I can give you an estimate before I get started on solving or answering your question or problem. Would you like an estimate?"
My entered prompt: "Yes, I would like an estimate. My question is going to be that I want to know how many manhole covers there are in New York City. What is the estimated amount of thinking time needed?"
Generative AI: "I estimate that would be a 2 on a scale of 1 to 10. You can approve that or change to some other amount of thinking time that is between 1 and 10."
My entered prompt: "Proceed with the thinking time of 2, thanks."

This makes a lot more sense. I was able to provide my prompt and get a preliminary estimate. I approved the estimate. Once the AI gets underway, if it determines that the estimate was not sufficient, it will come back to me midstream and let me know. I could then adjust if desired.

Not Always Dealing With Estimates
A user who is frequently utilizing generative AI might get tired of having to continually deal with estimates and approvals for thinking time. It could be exasperating and irksome. In the third era, the AI will keep track of how things are going and make recommendations to the user. Consider this example.

Generative AI: "I periodically analyze the thinking time that is being used to answer your questions. I've noticed that you often ask for Medium. Most of your questions so far have been answered within the Low timeframe. You might want to consider using Low unless your questions start to become more complex."
My entered prompt: "Thanks for the analysis. I'd like you to automatically set my default preference for thinking time henceforth to Low. I will tell you if I want to switch the thinking time to a different level."
Generative AI: "Got it. All your questions will now have a default of Low. You will tell me in your prompt whether that is to be changed. I will also notify you if a question you ask is estimated by me as being well above a Low, and then let you decide what you want to do."

The AI has handily determined that my best bet is to generally be at Low. This isn't rigid. The AI will adjust, and I can adjust.

Thinking About Thinking Time
If you haven't been dealing with thinking time when using generative AI, you now know what's coming up. I trust that you are prepared for the changes afoot. My expectation is that we will advance quickly to the third era. No sense in making life harder for users by getting mired in the first era or the second era. It's time to move on. A final comment for now. Henry Ford famously said this: "Coming together is a beginning; keeping together is progress; working together is success." The same applies to working with generative AI. Human-AI collaboration is the best path toward success. Humans will be happy, and I suppose the AI will be happy, though let's not hand out that emotion to non-sentient AI. We must keep our heads and minds clear on what contemporary AI can and cannot do. That's a good use of our thinking time.
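To make the third-era flow concrete, here is a minimal sketch in Python of the estimate-confirm-upgrade loop described above. The function names (estimate_thinking_time, run_with_thinking_time, collaborative_run) and the length-based heuristic are hypothetical stand-ins for illustration, not any vendor's actual API; the low/medium/high scale mirrors the article's examples.

# Hypothetical sketch of the "third era" flow: the model proposes an estimated
# thinking-time level, the user confirms or overrides it, and an upgrade can be
# offered before settling on the answer. All functions are stand-ins.

from dataclasses import dataclass

LEVELS = ["low", "medium", "high"]

@dataclass
class Answer:
    text: str
    level_used: str
    suggest_higher: bool  # set when the stand-in model thinks more time would help

def estimate_thinking_time(prompt: str) -> str:
    """Stand-in estimator: a trivial heuristic based on prompt length."""
    return "low" if len(prompt) < 120 else "medium"

def run_with_thinking_time(prompt: str, level: str) -> Answer:
    """Stand-in for the model call; a real system would do the reasoning here."""
    return Answer(text=f"[answer produced at {level} effort]",
                  level_used=level,
                  suggest_higher=(level != "high" and len(prompt) >= 300))

def collaborative_run(prompt: str, user_override=None) -> Answer:
    # 1) Model proposes an estimate; the user may override it (the "final say").
    level = user_override or estimate_thinking_time(prompt)
    answer = run_with_thinking_time(prompt, level)
    # 2) If the stand-in model flags that more time would likely help, offer the
    #    user one upgrade before accepting the lower-effort answer. In a real
    #    system this check would ideally happen midstream, not after the fact.
    if answer.suggest_higher:
        next_level = LEVELS[min(LEVELS.index(level) + 1, len(LEVELS) - 1)]
        if input(f"Upgrade thinking time to {next_level}? [y/N] ").strip().lower() == "y":
            answer = run_with_thinking_time(prompt, next_level)
    return answer

if __name__ == "__main__":
    print(collaborative_run("How many manhole covers are there in New York City?").text)

In a production system the estimate and the upgrade offer would come from the model itself rather than a length heuristic, which is exactly the collaborative behavior the third era envisions.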
  • Tottenham vs. Manchester City: How to watch, results, and highlights
    www.digitaltrends.com
It's a dogfight for the Premier League's qualifying spots in the Champions League. Even though the Premier League will likely get the extra fifth spot this year, it's no guarantee that Manchester City (13-5-8, 44 points) will secure one of those five spots. Wednesday's match against Tottenham (10-3-13, 33 points) now becomes a must-win for the Sky Blues. It's been an epic run for City, winning six of the last seven Premier League titles. Pep Guardiola's team will likely be dethroned as champions by the end of the season. City has lost eight of their past 17 in the Premier League. City is barely hanging onto that fourth spot in the Premier League, and a loss to Tottenham could push them out of the top five. Tottenham would love to play spoiler and win their fourth consecutive Premier League match. Find out how to watch Wednesday's match between Tottenham and Manchester City, including the start time, channel, and streaming information. Read more of our soccer coverage in Digital Trends' Premier League guide.

How to watch Tottenham vs. Manchester City
Tune in for the game between Tottenham and Manchester City at 2:30 p.m. ET on Wednesday, February 26, 2025. Fans can stream the game on Peacock. A replay will be available after the game if you're busy and can't watch the match. Peacock has a package of Premier League games that stream exclusively on the service. Choose between Premium at $8 per month and Premium Plus at $14 per month. Both plans will air Tottenham versus Manchester City. After choosing a plan, read how to set up your TV to watch the Premier League. This guide ensures a better viewing experience.

Can you watch Tottenham vs. Manchester City on Fubo?
While Fubo is an ideal live streaming TV service for Premier League games, it will not carry Wednesday's match between Tottenham and Manchester City. That game is exclusive to Peacock. Soccer fans should still think about subscribing to Fubo to watch the Premier League on channels like NBC and USA Network.

How to watch Tottenham vs. Manchester City from abroad with a VPN
If you're in the market for a VPN, NordVPN is your best bet. As one of the fastest VPNs in the world, NordVPN protects online activity from nefarious figures trying to spy on your connection. NordVPN provides secure encryption, protection from tracking, an anti-malware tool, and a dark web monitor. Plus, NordVPN offers a 30-day money-back guarantee if unsatisfied with the product.
  • Vanderbilt vs. Texas A&M: How to watch, results, and highlights
    www.digitaltrends.com
The competition in the SEC remains the best in the NCAA. There is always a good game against potential tournament teams every night. One of Wednesday's top games features the Vanderbilt Commodores (18-9) hitting the road to play the No. 12 Texas A&M Aggies (20-7). Barring collapses from both teams, Vanderbilt and Texas A&M should be playing in March Madness. The Commodores began the season with a dream start of 16-4. Since February 1, Vanderbilt is 2-5, with losses against Oklahoma, Florida, Auburn, Tennessee, and Kentucky. Besides Oklahoma, those four teams are the best in the SEC. A&M also finds itself in a rut, losing two straight, including Saturday's loss to Tennessee. The Aggies still have plenty of chances to improve their resume with games against Vanderbilt, Florida, and Auburn. Can the Commodores pick up their first win in College Station since 2017? Find out how to watch the game between Vanderbilt and Texas A&M. Read our NCAA men's basketball March to the Madness guide for more coverage.

How to watch Vanderbilt vs. Texas A&M
The game between Vanderbilt and Texas A&M tips at 7 p.m. ET on Wednesday, February 26, 2025. The Aggies will host the game inside Reed Arena in College Station. Watch the game on SEC Network or stream on WatchESPN.

Watch Vanderbilt vs. Texas A&M on Sling TV
One of the best live TV streaming services is Sling TV. With Sling TV, customers can enjoy the benefits of cable without the high prices or a set-top box. Plus, Sling does not have long-term contracts, meaning you're not tied down to the service. Sling TV offers two paid plans: Orange for $46 per month and Blue for $51 per month. Both are 50% off the first month. Combine both plans for $66 per month. To access SEC Network, you will need the Sports Extra add-on for $11 per month.

How to watch Vanderbilt vs. Texas A&M from abroad with a VPN
If you watch the game while traveling abroad, downloading a VPN, or virtual private network, is in your best interest. Think of VPNs as a security blanket for your connection. VPNs help protect your connection from malicious activity and cybercriminals. Plus, a VPN helps ensure a smoother streaming experience by working around broadcast restrictions. NordVPN is our recommendation for VPNs because of its accessibility, speed, and 30-day money-back guarantee.
• Grok's new unhinged voice mode can curse and scream, simulate phone sex
    arstechnica.com
I have no mouth, and I must scream
New cursing chatbot follows Elon Musk's plan to provide an "uncensored" answer to ChatGPT.
Benj Edwards, Feb 25, 2025 6:18 pm
Credit: dvoriankin via Getty Images
On Sunday, xAI released a new voice interaction mode for its Grok 3 AI model that is currently available to its premium subscribers. The feature is somewhat similar to OpenAI's Advanced Voice Mode for ChatGPT. But unlike ChatGPT, Grok offers several uncensored personalities users can choose from (currently expressed through the same default female voice), including an "unhinged" mode and one that will roleplay verbal sexual scenarios.
On Monday, AI researcher Riley Goodside brought wider attention to the over-the-top "unhinged" mode in particular when he tweeted a video (warning: NSFW audio) that showed him repeatedly interrupting the vocal chatbot, which began to simulate yelling when asked. "Grok 3 Voice Mode, following repeated, interrupting requests to yell louder, lets out an inhuman 30-second scream, insults me, and hangs up," he wrote.
By default, "unhinged" mode curses, insults, and belittles the user non-stop using vulgar language. Other modes include "Storyteller" (which does what it sounds like), "Romantic" (which stammers and speaks in a slow, uncertain, and insecure way), "Meditation" (which can guide you through a meditation-like experience), "Conspiracy" (which likes to talk about conspiracy theories, UFOs, and bigfoot), "Unlicensed Therapist" (which plays the part of a talk psychologist), "Grok Doc" (a doctor), "Sexy" (marked as "18+" and acts almost like a 1-800 phone sex operator), and "Professor" (which talks about science). A composite screenshot of various Grok 3 voice mode personalities, as seen in the Grok app for iOS.
Basically, xAI is taking the exact opposite approach of other AI companies, such as OpenAI, which censor discussions about not-safe-for-work topics or scenarios they consider too risky for discussion. For example, the "Sexy" mode (warning: NSFW audio) will discuss graphically sexual situations, which ChatGPT's voice mode will not touch, although OpenAI recently loosened up the moderation on the text-based version of ChatGPT to allow some discussion of erotic content.
Users can also customize Grok's voice mode to act in a certain way. For example, musician Sean Lennon customized the voice mode to play the part of "Roko's Basilisk," a fictional AI character based on a thought experiment about a hypothetical superintelligent AI that might retroactively punish those who did not help bring it into existence.
What we see with Grok's voice mode seems in line with Elon Musk's original plans for xAI when he founded the company in 2023. Musk announced that he wanted xAI's chatbots to serve uncensored and "based" answers compared to ChatGPT, which he perceives as producing outputs that are too restrictive and politically left-leaning. Already, we've seen xAI allow Grok to generate mostly uncensored images as a service available through the X social networking platform.
Technologically, it's currently novel to interact with a voice AI chatbot that does not censor itself, as offered through a tech company and not as a jailbreak or open source hack. It's probably a "first" with a chatbot of this capability.
And yet, in our experiments with Grok voice mode's different personalities, the voice frequently tended to repeat itself and get stuck in loops, almost as if hitting pre-programmed talking points. So, it's not nearly as smooth as ChatGPT's Advanced Voice Mode. But it is provocative, which is probably what Musk wants it to be.
  • Tiny tubes wrap around brain cells
    www.technologyreview.com
Wearable devices like smart watches and fitness trackers help us measure and learn from physical functions such as heart rates and sleep stages. Now MIT researchers have developed a tiny equivalent for individual brain cells. These soft, battery-free wireless devices, actuated with light, are designed to wrap around different parts of neurons, such as axons and dendrites, without damaging them. They could be used to measure or modulate a neuron's electrical and metabolic activity. They could also serve as synthetic myelin for axons that have lost this insulation, helping to address neuronal degradation in diseases like multiple sclerosis. The devices are made from thin sheets of a soft polymer called azobenzene, which roll when exposed to light. Researchers can precisely control the direction of the rolling and the size and shape of the tubes by varying the intensity and polarization of the light. This enables the devices to snugly, but gently, wrap around curved axons and dendrites. "To have intimate interfaces with these cells, the devices must be soft and able to conform to these complex structures. That is the challenge we solved in this work," says Deblina Sarkar, an assistant professor in the Media Lab and the senior author of a paper on the research. "We were the first to show that azobenzene could even wrap around living cells." The researchers, who developed a scalable fabrication technique that doesn't require the use of a cleanroom, have demonstrated that the devices can be combined with optoelectrical materials that can stimulate cells. Moreover, atomically thin materials can be patterned on top of the tubes, offering opportunities to integrate sensors and circuits. In addition, because they make such a tight connection with cells, they could make it possible to stimulate subcellular regions with very little energy. This could enable a researcher or clinician to treat brain diseases by modulating neurons' electrical activity.