• Mozilla patches Firefox bug exploited in the wild, similar to bug attacking Chrome
    techcrunch.com
    In Brief · Posted 5:31 AM PDT, March 28, 2025 · Image Credits: Mozilla (file photo)
    Mozilla has fixed a security bug in its Firefox for Windows browser that was being exploited in the wild. In a brief update, Mozilla said it updated the browser to Firefox version 136.0.4 after identifying and fixing the new bug, tracked as CVE-2025-2857, which presents a similar pattern to a bug that Google patched in its Chrome browser earlier this week. Anyone exploiting the bug could escape Firefox's sandbox, which limits the browser's access to other apps and data on the user's computer. The bug also affects other browsers built on the same codebase as Firefox for Windows, such as the Tor Browser, which received a patch updating it to version 14.0.7. Kaspersky researcher Boris Larin, who first discovered the Chrome zero-day, confirmed in a post that the root cause of the Chrome bug also affects Firefox. Kaspersky previously linked the exploits to attacks on journalists, employees of educational institutions, and government organizations in Russia.
  • Google rolls out user choice billing on Google Play in the UK
    techcrunch.com
    Google said today that it will start offering user choice billing in the U.K., giving Google Play developers the ability to use other billing options instead of Google's own system. The change kicks in on March 29, initially only for non-game developers. Developers who opt in cannot replace Google Play billing altogether; instead, the third-party route will be offered as an option. Developers who choose an alternative billing option get a 4% discount on the fees they pay to Google (to account for the fees that third parties may also charge). Google typically takes a cut of up to 30% on in-app transactions and paid downloads. In a blog post announcing the change, Google claimed that more than 90% of developers on its platform are satisfied or very satisfied with Google Play's native billing. However, it added: "We recognise that some developers may want more choice in how they process payments. This launch enables developers to offer an additional billing system alongside Google Play's billing system and users can choose which option to use at checkout." The backstory is a little less rosy than Google simply being generous. Google's move is actually a long-awaited response to a Competition and Markets Authority (CMA) investigation dating back to 2022. At that time, the competition watchdog published a report based on a year-long study of the mobile ecosystem and noted that both Google's and Apple's power in the market could be subject to regulatory scrutiny. The companies' app stores, where they were the sole in-app billing providers for their respective platforms, were a particular point of focus when investigating Google and Apple's anticompetitive duopoly status. That was only the start. In 2023, Google proposed that it could offer user choice billing to developers to settle the antitrust probe.
In response, the CMA opened a consultation and invited developers to provide feedback on Google's proposal. The CMA ultimately closed the probe against Google and Apple last year, noting that it planned to use regulatory reforms, such as the digital markets competition bill, to regulate these companies in the mobile market. In the meantime, Google has been permitting billing from third parties elsewhere in response to regulatory pressure to open its app store to more competition. Countries where Google already offers user choice billing include the U.S., as well as India, Australia, Indonesia, Japan, and the European Economic Area (EEA), which all follow the same commissions and charges as in the U.K.
  • Understanding RAG architecture and its fundamentals
    www.computerweekly.com
    All the large language model (LLM) publishers and suppliers are focusing on the advent of artificial intelligence (AI) agents and agentic AI. These terms are confusing, all the more so as the players do not yet agree on how to develop and deploy them. This is much less true for retrieval augmented generation (RAG) architectures, where there has been widespread consensus in the IT industry since 2023. Retrieval augmented generation enables the results of a generative AI model to be anchored in truth. While it does not prevent hallucinations, the method aims to obtain relevant answers, based on a company's internal data or on information from a verified knowledge base. It could be summed up as the intersection of generative AI and an enterprise search engine. Broadly speaking, the process of a RAG system is simple to understand. It starts with the user sending a prompt - a question or request. This natural language prompt and the associated query are compared with the content of the knowledge base. The results closest to the request are ranked in order of relevance, and the whole set is then sent to an LLM to produce the response returned to the user. The companies that have tried to deploy RAG have learned the specifics of such an approach, starting with support for the various components that make up the RAG mechanism. These components are associated with the steps required to transform the data, from ingesting it into a source system to generating a response using an LLM. The first step is to gather the documents you want to search. While it is tempting to ingest all the documents available, this is the wrong strategy, especially as you have to decide whether to update the system in batch or continuously. "Failures come from the quality of the input. Some customers say to me: 'I've got two million documents, you've got three weeks, give me a RAG'.
Obviously, it doesn't work," says Bruno Maillot, director of the AI for Business practice at Sopra Steria Next. "This notion of refinement is often forgotten, even though it was well understood in the context of machine learning. Generative AI doesn't make Chocapic." An LLM is not de facto a data preparation tool. It is advisable to remove duplicates and intermediate versions of documents and to apply strategies for selecting up-to-date items. This pre-selection avoids overloading the system with potentially useless information and avoids performance problems. Once the documents have been selected, the raw data - HTML pages, PDF documents, images, doc files, etc - needs to be converted into a usable format, such as text and associated metadata, expressed in a JSON file, for example. This metadata can not only document the structure of the data, but also its authors, origin, date of creation, and so on. This formatted data is then transformed into tokens and vectors. Publishers quickly realised that with large volumes of documents and long texts, it was inefficient to vectorise the whole document. Hence the importance of implementing a "chunking" strategy, which involves breaking down a document into short extracts. It is a crucial step, according to Mistral AI, which says, "It makes it easier to identify and retrieve the most relevant information during the search process". There are two considerations here - the size of the fragments and the way in which they are obtained. The size of a chunk is often expressed as a number of characters or tokens. A larger number of chunks improves the accuracy of the results, but the multiplication of vectors increases the amount of resources and time required to process them. There are several ways of dividing a text into chunks. The first is to slice into fragments of fixed size - characters, words or tokens.
"This method is simple, which makes it a popular choice for the initial phases of data processing where you need to browse the data quickly," says Ziliz, a vector database editor.A second approach consists of a semantic breakdown that is, based on a "natural" breakdown: by sentence, by section - defined by an HTML header for example - subject or paragraph. Although more complex to implement, this method is more precise. It often depends on a recursive approach, since it involves using logical separators, such as a space, comma, full stop, heading, and so on.The third approach is a combination of the previous two. Hybrid chunking combines an initial fixed breakdown with a semantic method when a very precise response is required.In addition to these techniques, it is possible to chain the fragments together, taking into account that some of the content of the chunks may overlap."Overlap ensures that there is always some margin between segments, which increases the chances of capturing important information even if it is split according to the initial chunking strategy," according to documentation from LLM platform Cohere. "The disadvantage of this method is that it generates redundancy.The most popular solution seems to be to keep fixed fragments of 100 to 200 words with an overlap of 20% to 25% of the content between chunks.This splitting is often done using Python libraries, such as SpaCy or NTLK, or with the text splitters tools in the LangChain framework.The right approach generally depends on the precision required by users. For example, a semantic breakdown seems more appropriate when the aim is to find specific information, such as the article of a legal text.The size of the chunks must match the capacities of the embedding model. This is precisely why chunking is necessary in the first place. This "allows you to stay below the input token limit of the embedding model", explains Microsoft in its documentation. 
"For example, the maximum length of input text for the Azure OpenAI text-embedding-ada-002 model is 8,191 tokens. Given that one token corresponds on average to around four characters with current OpenAI models, this maximum limit is equivalent to around 6,000 words".An embedding model is responsible for converting chunks or documents into vectors. These vectors are stored in a database.Here again, there are several types of embedding model, mainly dense and sparse models. Dense models generally produce vectors of fixed size, expressed in x number of dimensions. The latter generate vectors whose size depends on the length of the input text. A third approach combines the two to vectorise short extracts or comments (Splade, ColBERT, IBM sparse-embedding-30M).The choice of the number of dimensions will determine the accuracy and speed of the results. A vector with many dimensions captures more context and nuance, but may require more resources to create and retrieve. A vector with fewer dimensions will be less rich, but faster to search.The choice of embedding model also depends on the database in which the vectors will be stored, the large language model with which it will be associated and the task to be performed. Benchmarks such as the MTEB ranking are invaluable. It is sometimes possible to use an embedding model that does not come from the same LLM collection, but it is necessary to use the same embedding model to vectorise the document base and user questions.Note that it is sometimes useful to fine-tune the embeddings model when it does not contain sufficient knowledge of the language related to a specific domain, for example, oncology or systems engineering.Vector databases do more than simply store vectors - they generally incorporate a semantic search algorithm based on the nearest-neighbour technique to index and retrieve information that corresponds to the question. Most publishers have implemented the Hierarchical Navigable Small Worlds (HNSW) algorithm. 
Microsoft is also influential with DiskANN, an open source algorithm designed to obtain an ideal performance-cost ratio with large volumes of vectors, at the expense of accuracy. Google has chosen to develop its own algorithm, ScaNN, also designed for large volumes of data. The search process involves traversing the dimensions of the vector graph in search of the approximate nearest neighbour, and is based on a cosine or Euclidean distance calculation. The cosine distance is more effective at identifying semantic similarity, while the Euclidean method is simpler, but less demanding in terms of computing resources. Since most databases are based on an approximate search for nearest neighbours, the system will return several vectors potentially corresponding to the answer. It is possible to limit the number of results (a top-k cutoff). This is even necessary, since we want the user's query and the information used to create the answer to fit within the LLM's context window. However, if the database contains a large number of vectors, precision may suffer, or the result we are looking for may lie beyond the limit imposed. Combining a traditional search model such as BM25 with an HNSW-type retriever can be useful for obtaining a good cost-performance ratio, but it will also be limited to a restricted number of results. All the more so as not all vector databases support combining HNSW with BM25 (a combination also known as hybrid search). A reranking model can help to find more content deemed useful for the response. This involves increasing the limit of results returned by the "retriever" model; then, as its name suggests, the reranker reorders the chunks according to their relevance to the question. Examples of rerankers include Cohere Rerank, BGE, Janus AI and Elastic Rerank. On the other hand, such a system can increase the latency of the results returned to the user. It may also be necessary to re-train this model if the vocabulary used in the document base is specific.
However, some consider a reranker useful - relevance scores are valuable data for supervising the performance of a RAG system. Reranker or not, the selected results must then be sent to the LLM. Here again, not all LLMs are created equal - the size of their context window, their response speed and their ability to respond factually (even without having access to documents) are all criteria that need to be evaluated. In this respect, Google DeepMind, OpenAI, Mistral AI, Meta and Anthropic have trained their LLMs to support this use case. In addition to the reranker, an LLM can be used as a judge to evaluate the results and identify potential problems with the LLM that is supposed to generate the response. Some APIs rely instead on rules to block harmful content or requests for access to confidential documents by certain users. Opinion-gathering frameworks can also be used to refine the RAG architecture. In this case, users are invited to rate the results in order to identify the positive and negative points of the RAG system. Finally, observability of each of the building blocks is necessary to avoid problems of cost, security and performance.
    Read more about AI, LLMs and RAG
    Why run AI on-premise? Much of artificial intelligence's rapid growth has come from cloud-based tools. But there are very good reasons to host AI workloads on-premise.
    Advancing LLM precision and reliability - This is a guest post written by Ryan Mangan, datacentre and cloud evangelist and founder of Efficient Ether.
    Why responsible AI is a business imperative - Tools are emerging for real-world AI systems that focus more on responsible adoption, deployment and governance, rather than academic and philosophical questions about speculative risks.
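    The retrieve-then-rerank pattern discussed in this article can be illustrated with a short, self-contained sketch. A toy word-overlap score stands in for a real reranking model such as those named above; every function name here is illustrative, not any vendor's API:

```python
def rerank(question: str, candidates: list[str], score, k: int = 3) -> list[str]:
    """Reorder retriever output by a finer relevance score, keeping the top k.

    `score` is any callable(question, chunk) -> float; in a real system it
    would be a reranking model scoring each (question, chunk) pair.
    """
    ranked = sorted(candidates, key=lambda chunk: score(question, chunk), reverse=True)
    return ranked[:k]

def overlap_score(question: str, chunk: str) -> float:
    """Toy relevance score: fraction of question words that appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / (len(q_words) or 1)

# Pretend these came back from the retriever with a deliberately large limit.
candidates = [
    "the sky is blue",
    "chunking splits documents into fragments",
    "vectors store meaning",
]
top = rerank("how does chunking split documents", candidates, overlap_score, k=2)
```

    As the article notes, the point of the pattern is to raise the retriever's result limit and let the (slower, finer) reranker decide which chunks actually reach the LLM.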
  • How to use ChatGPT to quickly analyze your credit card spending - and why you should
    www.zdnet.com
    Andriy Onufriyenko/Getty Images
    Spending money is easy. Keeping track of how much money you spend, not so much. Even if you're good at recording all your expenses in a financial program like Quicken, QuickBooks, or Xero, understanding your expenses and turning all of that transaction data into actionable insights can be more difficult. But, as it turns out, AI can help make that process much easier. And you don't even have to be some sort of accountant or spreadsheet wizard. You just have to know how to ask the right questions (i.e., give the right prompts). To make that process as easy as it can be, I've included 10 powerful AI prompts right here in this article, along with five additional prompts perfect for use at tax time. I'll also show you the fast way to get at your transaction data so you can use the AI to analyze it. But first, let's look at five good reasons to better understand your expenses.
    Why using AI to analyze your transactions is a good idea
    Here are five good reasons for using an AI to analyze your financial transactions.
    Turn raw data into clear insights: The transaction report you get from Quicken or QuickBooks is a list of individual vendors and expenses. It provides a way to record ledger data, not a way to gain insights from that data. Using these prompts can give you understanding in addition to record keeping.
    Save time on manual analysis: You can use a spreadsheet to do all I'm showing you, but if you're not a spreadsheet guru (and even if you are), the AI can give you results in minutes for something that would likely take you or me hours. I applied these prompts to my credit card data and got results in about 10 minutes.
    Uncover hidden spending patterns: How much are you really spending on British streaming services? Do you really back that many Patreon accounts? Didn't you cancel that service last year? Using the AI can find things that might not normally stand out.
    Get tax-ready with minimal effort: This is certainly not a fail-safe process (it's AI, after all), but you can use the AI to quickly group expenses by whether or not they're deductible, identify contractor payments, or identify other potential areas of savings you might have missed.
    Make smarter financial decisions: Having clarity and direction about how you're spending your money can only help you make smarter decisions.
    How to quickly get at your data
    Before you can analyze your data, you'll need to export it. Most financial tools like QuickBooks have a reports section. In that reports section, almost all will have a transaction detail report, which is literally a report of every transaction and all the details recorded. Run that report for the period you want to analyze, then find the Export button. Export as either Excel or CSV and that will give you a data set you can import into the AI. The easiest way to start is to just export one credit card account. But you can, for example, export each credit card account and name each file for the appropriate card. Then, when you ask the AI to import, you can either import one after the other and instruct the AI to keep track of which card is in use, or do the AI analysis on each report one at a time. Do be aware you're giving the AI your personal data. But transaction reports don't export the card numbers, just that you have a pattern of watching British cozy mysteries instead of those blockbuster action flicks most people would expect.
    10 useful prompts for financial insights
    Let's get started with some general prompts.
    1. Understand your data structure
    Prompt: "Here is a transaction detail report from my accounting software. Please list all the columns so I can verify the structure." This helps make sure the AI is properly reading your exported file. I find that the AI tends to get lost if the names of the spreadsheet columns aren't in the first row.
So if this prompt returns gibberish, tell the AI that the row containing column names contains fields like "transaction date". It should be able to work from that point on.
    2. Identify top spending vendors
    Prompt: "Group all expenses by vendor and show me which vendors I spend the most money with." This is a great way to find out where you're spending your money. Keep in mind that the AI may not know which column is a vendor name, so you might have to tell the AI, for example, that the column "name" is the name of the vendor you want to use for grouping.
    3. See totals by category
    Prompt: "Group expenses by accounting category or account name and show the total for each." This gives you a big-picture view of how your spending is distributed across different expense types. But keep in mind that the AI may not quite understand what each category is, so you may need to provide definitions or tell it that certain vendors fall into certain categories. A good follow-on prompt is "For each category, list the vendors you think are associated with each category." Then, if vendors are associated with the wrong category, correct the AI.
    4. Track trends over time
    Prompt: "Summarize my monthly spending trends by category. Show how much I spend per category per month." This helps you see and understand any seasonal spending patterns, identify recurring costs, and identify any cash flow behaviors you need to fix.
    5. Catch vendor price increases
    Prompt: "Which vendors have increased their prices over time? Show changes in average monthly charges per vendor." This works, but not always. For example, I pay Google for three or four completely different services, so the AI might get confused lumping everything into Google. If it goes down the wrong path, you can tell it to ignore all Google expenses in this prompt, for example. Or you can instruct it on any special handling you want for such vendors.
Then, you can go back to those vendors and investigate the charges, renegotiate the fees, drop their services, or suck it up and live with the increases.
    6. Find irregular large expenses
    Prompt: "Which categories contain large one-time or annual expenses? Flag anything over $100 that only shows up once or twice." This is a great way to find out if anyone made an automatic charge that you weren't expecting. I keep a pretty good eye on my online services and software licenses, but once in a while one will sneak through. This is a good way to get ahead of those expenses, or at least prevent them from being charged again.
    7. Break down recurring services
    Prompt: "Break down streaming, subscriptions, and internet service expenses. Group by vendor and show monthly totals." This is a great way to see how you're spending on those recurring services. Just keep in mind you might have to tell it which vendors should be categorized into which categories. To do this, use a prompt like "All expenses from Netflix and Britbox should be considered streaming, all expenses from Patreon and the Washington Post should be considered subscriptions, and all expenses from Pagely and GoDaddy and Comcast should be considered internet services."
    8. Spot errors or duplicates
    Prompt: "Are there any duplicate transactions or likely errors? Highlight any identical amounts to the same vendor on the same day or any transactions that seem anomalous." This helps catch duplicate charges, which can happen. The second part of the prompt is more interesting, though. Here, you're asking the AI to see what it thinks and whether it finds anything that seems weird or unusual and is worthy of your attention. You can fine-tune that approach by prompting, "Are there any transactions or series of transactions that seem different or unusual or could indicate errors, fraudulent charges, or anything else I should pay special attention to? Please list those transactions item by item, along with your reasoning for including them in this list."
    9. Compare year-over-year spending
    Prompt: "Compare this year's expenses to last year's, grouped by category. Show the percentage change." Obviously, this depends on how many transactions you exported. If you want to do year-over-year spending analysis, be sure to set your starting date far enough back on your export to make that possible. However, if you do this, you may want to prepend "For the last 12 months" or "For the year XXXX" (where XXXX is the year you want) to all the other prompts in this article to make sure the AI focuses on just the period you're investigating.
    10. Identify areas to cut
    Prompt: "Which vendors or categories could be cut or reduced? Flag anything underused, duplicated, or with growing costs." Obviously, only you know what might be considered an overspend, but this is a good way to elevate the expenses that might fit into that classification so that you can see them and make good decisions. And with that, let's look at some tax-related prompts.
    Tax-specific prompts
    Whether tax time is upon us or you're planning for your next year, these tax-related prompts can help you prepare, save money, and manage your taxes. Just keep in mind that AIs have a tendency to make stuff up, and that includes numbers. So don't directly use the AI's results in your tax returns. Instead, use them to guide your work, but you do need to do the work.
    1. Group tax-deductible expenses
    Prompt: "Group all expenses by tax-deductible categories like insurance, education, office supplies, or charitable contributions." This can help you identify expenses that might qualify as deductions. Be sure to tell the AI whether you're doing taxes as a business or as an individual because deduction categories are quite different.
    2. Flag personal expenses in business calculations
    Prompt: "Flag any personal expenses that may have been included accidentally in my business transactions." This is a prompt that helps if you're preparing taxes or doing record keeping for a small business. It will help you easily identify any expenses that shouldn't be considered deductible, or that you should discuss in more detail with your accountant.
    3. Summarize home office deductions
    Prompt: "Summarize all expenses that are potentially deductible as home office, utilities, internet, phone, and office supplies." If you're a sole proprietor who files a Schedule C, or a small business owner, this can help you identify home office or work-from-home expenses that should be considered deductions. It prepares key figures for Schedule C or small business deductions with minimal effort.
    4. List charitable contributions
    Prompt: "List all charitable donations or contributions. Include dates and amounts." Charitable contributions are often deductible. This prompt can help you quickly pull together donation records needed for itemized deduction filing.
    5. Find 1099-eligible vendors
    Prompt: "Which vendors or transactions might require a 1099 to be issued? Group all payments to contractors or freelancers over $600." This prompt helps you prepare for an IRS rule under which you need to provide 1099s when you issue smaller payments. Called the $600 rule, it's a way for the IRS to track payments through PayPal, Venmo, and similar TPSOs (third-party settlement organizations). Now, the good news is the IRS has postponed the actual $600 rule until 2026. The bad news is the threshold changes in 2024 and 2025. Using $600 in this prompt now, however, means you can identify all vendors or transactions where this reporting might be necessary.
    It's your money
    I know. That last tip was a bit obscure. But it applies to most small businesses.
Using an AI to help identify items associated with a reporting requirement that more and more of us will be required to comply with is a big part of how the AI can help save time. Have you tried using ChatGPT or another AI tool to analyze your credit card or accounting transactions? What kinds of insights did you uncover? Were there any prompts that worked especially well for you, or ones that didn't? Are there other types of financial reports you'd like to see AI help with? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
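    For readers who would rather double-check the AI's arithmetic locally, the vendor grouping from prompt 2 can be reproduced with a few lines of pandas. This is a sketch, not part of the original article; the "Name" and "Amount" column names are assumptions, so match them to whatever your accounting software actually exports:

```python
import pandas as pd

# In practice you would load the export: df = pd.read_csv("transactions.csv").
# A tiny inline frame stands in here; "Name" and "Amount" are assumed columns.
df = pd.DataFrame({
    "Name": ["Netflix", "Patreon", "Netflix", "Comcast"],
    "Amount": [15.49, 5.00, 15.49, 89.99],
})

# Prompt 2, done locally: total spend per vendor, largest first.
by_vendor = df.groupby("Name")["Amount"].sum().sort_values(ascending=False)
print(by_vendor)
```

    Running the same grouping yourself is a quick way to verify that the AI's vendor totals match your actual data before acting on them.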
  • Miss the old Facebook? The 'friends-only' tab is here to help you reclaim your feed
    www.zdnet.com
    Meta said it's returning to its roots and bringing 'the magic of friends' back to Facebook.
  • New Google AI Features May Be Coming To Pixel Watch Imminently
    www.forbes.com
    Google Pixel Watch. Image: Google
    It looks like Google's Gemini AI may be on the cusp of arriving on Wear OS smartwatches, with signs already appearing on actual smartwatches. Beebom reports its tester's Pixel Watch displayed a Gemini icon when receiving a call. The suggestion is not that you'll be able to get Gemini to answer, robot-voice style, like a PA. The icon appeared by the Quick Replies feature, implying Google may be giving them an LLM makeover. That icon is the only change seen so far. The quick replies seen by Beebom were the usual canned texts of a smartwatch. There's certainly scope for AI to swoop in and make Quick Replies feel more useful and vital. Perhaps the Pixel Watch could (with the help of a connected phone) realize you're on a train. Or cross-reference with your calendar to ascertain you're about to head into a meeting. Perhaps it could go deeper with the help of some Android APIs and know the estimated arrival time of your Uber, bus or train, and feed them into the Quick Replies suggestions. Conversely, Gemini could potentially just be used to transcribe something you say, to be relayed as a text message. Or perhaps it's just a blip and we won't hear too much about this form of smartwatch AI any time soon. The watch in question was not even the latest Pixel Watch, though, but the Pixel Watch 2 from 2023. Regardless, we do know Gemini is coming to Wear OS watches, including the Pixel series. For starters, the Google Assistant brand is being retired for all but ancient phones. It is to be replaced by Google Gemini by the end of the year. "We're upgrading Google Assistant users on mobile to Gemini, offering a new kind of help only possible with the power of AI," Brian Marquardt, Google senior director of Gemini product management, wrote on the Google blog earlier this month. What form Gemini AI will take on Wear OS, though, is yet to be officially confirmed.
  • Why Couples Spend So Long In The Talking Stage By A Psychologist
    www.forbes.com
    Not dating. Not seeing each other. Just talking. Here's how new research defines this confusing, emerging stage of romantic relationships. (Image: Getty)
    What happens in the time between meeting someone new and officially calling them your partner? Decades ago, you might've called this the wooing stage: commonly, where a man offered a woman gifts, flowers and attention in the hopes of gaining her affection. Practices like these were considered normal in terms of courtship. Today, however, this stage is much more ambiguous, for various reasons. There's no longer the ever-looming pressure for adults to marry and start a family as young as they possibly can; they now have the freedom to take their time, truly get to know the person and practice cohabitation (living in) before making the big commitment. Beyond this, however, is the fact that casual romantic and sexual relationships have become increasingly normalized, since exclusivity isn't always mandatory when it comes to searching for a partner. Given the change in these norms in recent years, a new stage of romantic relationships has emerged, according to September 2024 research from the Journal of Couple & Relationship Therapy. Time once spent wooing is now supposedly spent "just talking" - except "just talking" looks different to every person. To both researchers and the people in the talking stage themselves, this romantic juncture can be incredibly confusing. However, through four focus groups with 21 participants and a survey administered to 657 young adults, lead author Scott Sibley and his colleagues were able to explain what "just talking" really means. Here's what they found in their study.
    Defining The Talking Stage
    For Sibley and his colleagues, it was difficult to find a single definition for the talking stage. That said, the majority of young adults within the study conceptualized it in the same way, with three core themes emerging:
    The pre-dating phase.
Most participants agreed that the talking stage generally occurs just before the decision to become an official couple. Notably, there was also agreement that "just talking" isn't the same as just getting to know someone, nor is it the same as casually hooking up or being friends with benefits. Confusingly, the participants also noted that the talking stage can indeed involve physical intimacy. Overall, however, it seems that the talking stage is characterized by there being potential for long-term commitment. The two partners test this hypothesis on their own terms, be it by just talking, going on dates, having a sexual relationship or a mixture of these different methods.
Ambiguity about commitment. The second most common theme in the talking stage is the navigation of commitment; that is, two potential partners discerning whether or not they'd like to become exclusive in the future. However, participants noted that it's also possible for individuals to have multiple talking stages with different people at once. Overall, it seems this stage is characterized by fluctuating levels of commitment, which depend largely on how well the process is going with the candidate(s) in question.
Unofficial romantic label. Oxymoronically, the quasi-committed nature of the talking stage serves as an intentional non-label for the relationship, which, in its own way, still serves as a label. They're not dating, but they're not really single either; there's no explicit pressure, yet there's still a slight expectation. The stage itself is confusing, but referring to it as "just talking" alleviates some of the stress that it entails.
The participants also note that continued intentional communication, or talking, further alleviates this stress. While conceptualizing the talking stage in definitive terms was tricky, the participants nevertheless had a firm understanding of what it generally entails, even considering how widely it can vary from person to person. One participant captured it quite aptly: "It's the stage between friends and a relationship. There's a mutual liking between the two people, and they're testing the waters before becoming officially dating."
Reasons For The Talking Stage
After gauging how participants conceptualized the talking stage, Sibley and his colleagues were interested in the reasons why individuals engaged in "just talking" prior to entering a relationship. A further four themes emerged:
Feeling pressure to keep options open. Considering that most people in the talking stage are relatively young, there's a near-unanimous desire among them to keep their options open. As such, just talking requires a shared understanding between two people that they're having fun without any explicit declarations of commitment. This mutuality makes it much easier to cope if the talking stage doesn't pan out or turn into a relationship, as they know they still have time and options.
Protecting themselves from rejection. The nonchalant nature of the talking stage can also shield individuals from rejection. As the authors explain, the participants perceived lower chances of being rejected if they ask to "just talk" with someone, versus the outright rejection that can happen if they jump right to asking someone out on a date. As such, keeping things casual can increase confidence and prevent rushed commitments, which, in turn, protects them from the sting of heartbreak.
Testing the waters. Most participants agreed that the talking stage serves as an effective way to vet potential partners. A shared interest in frequent communication allows two people to gauge their compatibility.
In this sense, the talking stage helps to prevent entering a romantic relationship blindly, which, in turn, prevents individuals from wasting their time with a romantic partner that likely won't stick around.
Avoiding having to define the relationship. One rather sneaky benefit of the talking stage is, as the authors put it, "having your cake and eating it too." Without feeling forced to move the relationship forward in any way, individuals can reap some of the benefits of a committed relationship, like having a sense of companionship, intimacy or just a person to do things with, without putting in the effort that it requires. The consequences of this generally outweigh the benefits. While it may conserve energy for the person avoiding a relationship, it can also lead to the other feeling hurt, or as though they aren't good enough to be in a labeled relationship. Moreover, it can also lead to both individuals getting stuck in a romantic limbo of sorts, where the relationship never really progresses and both individuals get comfortable with the lack of demands that the stage typifies.
Participants in the study had mixed feelings regarding the utilization of the talking stage, particularly in terms of attachment, or, rather, the lack thereof. For instance, one woman explains, "It is exhausting. You are constantly worrying whether the other person is into you and whether they are talking to other girls (which you know they are), which drives you insane." Some participants, on the other hand, enjoy the freedom the stage allows. One participant explains, "Most people just talk because they still want to associate themselves with other people and not feel guilty for it, since they don't have a title." Continuing, they note, "This lets the two people get closer, but also allows freedom for both partners to do as they please and avoid early attachment."
The talking stage seems to be a paradoxical, almost liminal space: both liberating and frustrating, full of potential yet undefined.
Although it offers two people ample means for exploration and self-protection, it also threatens them with uncertainty and misalignment more than any other romantic juncture.Overall, however, it perfectly encapsulates the modern struggle between craving intimacy and fearing commitment: the human yearning for connection without risking too much, too soon.Could your prolonged talking stage be a symptom of something deeper? Take this science-backed test to learn more: Fear of Intimacy Scale
  • China's AI craze has led to empty data centers and falling GPU rentals
    www.techspot.com
TL;DR: In the wake of ChatGPT's explosive debut in late 2022, China's AI industry experienced a surge of excitement and investment. However, this initial fervor has given way to a sobering reality as the country grapples with an oversupply of underutilized data centers and shifting market dynamics.
Xiao Li, a former real estate contractor who pivoted to AI infrastructure in 2023, has witnessed this transformation firsthand through the fluctuating demand for Nvidia GPUs. A year ago, traders in his network boasted about acquiring high-performance Nvidia GPUs despite U.S. export restrictions. Many of these chips were illegally funneled into Shenzhen through international channels. At the market's peak, an Nvidia H100, a chip crucial for training AI models, could fetch as much as 200,000 yuan ($28,000) on the black market.
Today, Li has noticed that traders have become more discreet and GPU prices have stabilized. Additionally, two data center projects he is acquainted with are struggling to attract further investment as backers anticipate weak returns. This financial strain has forced project leaders to offload excess GPUs. "Everyone seems to be selling, but there aren't many buyers," he told MIT Technology Review.
In short, leasing GPUs to businesses for AI model training, a core strategy for the latest generation of data centers, was once considered a guaranteed success. However, the emergence of DeepSeek and shifting economic factors in the AI sector have put the country's data center industry on unstable ground.
The rapid construction of data centers across China, from Inner Mongolia to Guangdong, was fueled by a combination of government directives and private investment. Over 500 new projects were announced in 2023 and 2024, with at least 150 completed by the end of 2024.
However, this building boom has led to a paradoxical situation: an abundance of computational power, particularly in central and western China, coupled with a shortage of chips that meet the current needs for inference and regulatory realities.
The rise of DeepSeek, a company that developed an open-source reasoning model matching the performance of ChatGPT at a fraction of the cost, has further disrupted the market. Hancheng Cao, an assistant professor at Emory University, noted that this breakthrough has shifted the focus from model development to practical applications. "The burning question shifted from 'Who can make the best large language model?' to 'Who can use them better?'"
This shift has exposed the limitations of many hastily constructed data centers. Many facilities optimized for large-scale AI training are ill-suited for the low-latency requirements of inference tasks needed for real-time reasoning models. As a result, data centers in remote areas with cheaper electricity and land are losing their appeal to AI companies.
The oversupply of computational power has led to a dramatic drop in GPU rental prices. An Nvidia H100 server with eight GPUs now rents for 75,000 yuan per month (around $10,345), down from previous highs of around 180,000 yuan ($25,141). Some data center operators chose to leave their facilities idle rather than operate at a loss.
Jimmy Goodrich, senior technology advisor to the RAND Corporation, attributes this predicament to inexperienced players jumping on the AI bandwagon. "The growing pain China's AI industry is going through is largely a result of inexperienced players, corporations and local governments, jumping on the hype train, building facilities that aren't optimal for today's needs," he explains.
China's political system, with its emphasis on short-term economic projects for career advancement, has played a significant role in the data center boom.
Local officials, seeking to boost their political careers and stimulate the economy in the face of a post-pandemic downturn, turned to AI infrastructure as a new growth driver. This top-down approach often disregarded actual demand or technical feasibility. Many projects were led by executives and investors with limited expertise in AI infrastructure, resulting in hastily constructed facilities that fell short of industry standards.
The rise of reasoning models like DeepSeek's R1 and OpenAI's o1 has shifted computing needs from large-scale training to real-time inference. This change requires hardware with low latency, often located near major tech hubs, to minimize transmission delays and ensure access to skilled staff. As a result, many data centers built in central, western, and rural China are struggling to attract clients. Some, like a newly built facility in Zhengzhou, even distribute free computing vouchers to local tech firms but still struggle to find users.
Despite the challenges, China's central government continues to prioritize AI infrastructure development. In early 2025, it convened an AI industry symposium emphasizing the importance of self-reliance in this technology. Major tech companies like Alibaba and ByteDance have announced significant investments in cloud computing and AI hardware infrastructure.
Goodrich suggests that the Chinese government views the current situation as a necessary growing pain. "The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability... They see the end, not the means," he says.
As the industry evolves, demand remains strong for Nvidia chips, particularly the H20 model designed for the Chinese market. However, for many in the field, like data center project manager Fang Cunbao, the current state of the market has prompted a reevaluation. At the beginning of the year, Fang left the data center industry entirely. "The market is too chaotic.
The early adopters profited, but now it's just people chasing policy loopholes," he explains. He's now shifting his focus to AI education.
  • The Weird World of AI Hallucinations: When AI Makes Things Up
    www.techspot.com
When someone sees something that isn't there, people often refer to the experience as a hallucination. Hallucinations occur when your sensory perception does not correspond to external stimuli. Technologies that rely on artificial intelligence can have hallucinations, too. When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination.
Editor's note: Guest authors Anna Choi and Katelyn Xiaoying Mei are Information Science PhD students. Anna's work relates to the intersection between AI ethics and speech recognition. Katelyn's research relates to psychology and human-AI interaction. This article is republished from The Conversation under a Creative Commons license.
Researchers and users alike have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.
Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor: when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. At this early stage of AI development, the issue isn't just with the machine's responses; it's also with how people tend to accept them as factual simply because they sound believable and plausible, even when they're not.
We've already seen high-stakes cases: from courtrooms, where AI software is used to make sentencing decisions, to health insurance companies that use algorithms to determine a patient's eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.
Making it up
Hallucinations and their effects depend on the type of AI system.
With large language models, hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. A chatbot might create a reference to a scientific article that doesn't exist or provide a historical fact that is simply wrong, yet make it sound believable.
In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. This could lead to different outcomes in courtrooms if humans were not able to detect the hallucinated piece of information.
With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list objects in an image that includes only a woman from the chest up talking on a phone, and receiving a response that says a woman talking on a phone while sitting on a bench. This inaccurate information could lead to different consequences in contexts where accuracy is critical.
What causes hallucinations
Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns. Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua. Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins, and between sheepdogs and mops.
When a system doesn't understand the question or the information that it is presented with, it may hallucinate.
Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.
It's important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative, like when writing a story or generating artistic images, its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate. The key difference lies in the context and purpose: creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required. To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.
What's at risk
The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: an autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger.
For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken.
This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant. As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.
Check AI's work: don't trust, verify
Regardless of AI companies' efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.
  • NYT Strands today: hints, spangram and answers for Friday, March 28
    www.digitaltrends.com
Strands is a brand new daily puzzle from the New York Times. A trickier take on the classic word search, you'll need a keen eye to solve this puzzle. Like Wordle, Connections, and the Mini Crossword, Strands can be a bit difficult to solve some days. There's no shame in needing a little help from time to time. If you're stuck and need to know the answers to today's Strands puzzle, check out the solved puzzle below.
How to play Strands
You start every Strands puzzle with the goal of finding the theme words hidden in the grid of letters. Manipulate letters by dragging or tapping to craft words; double-tap the final letter to confirm. If you find the correct word, the letters will be highlighted blue and will no longer be selectable.
If you find a word that isn't a theme word, it still helps! For every three non-theme words you find that are at least four letters long, you'll get a hint: the letters of one of the theme words will be revealed, and you'll just have to unscramble it.
Every single letter on the grid is used to spell out the theme words, and there is no overlap. Every letter will be used once, and only once.
Each puzzle contains one spangram, a special theme word (or words) that describes the puzzle's theme and touches two opposite sides of the board. When you find the spangram, it will be highlighted yellow. The goal should be to complete the puzzle quickly without using too many hints.
Hint for today's Strands puzzle
Today's theme is "Wise ones." Here's a hint that might help you: great thinkers.
Today's Strands answers
We'll start by giving you the spangram, which might help you figure out the theme and solve the rest of the puzzle on your own: GUIDING LIGHT
Today's Strands answers:
PHILOSOPHER
ELDER
VISIONARY
THINKER
SAGE