Technology news and analysis with a focus on founders and startup teams
Latest updates
-
TECHCRUNCH.COM
Google's AI search numbers are growing, and that's by design

Google started testing AI Overviews, its AI-summarized results in Google Search, two years ago, and continues to expand the feature to new regions and languages. By the company's estimation, it has been a big success: AI Overviews is now used by more than 1.5 billion users monthly across over 100 countries.

AI Overviews compiles results from around the web to answer certain questions. When you search for something like "What is generative AI?" AI Overviews will show AI-generated text at the top of the Google Search results page. While the feature has dampened traffic to some publishers, Google sees it and other AI-powered search capabilities as potentially meaningful revenue drivers and ways to boost engagement on Search. Last October, the company launched ads in AI Overviews. More recently, it started testing AI Mode, which lets users ask complex questions and follow-ups in the flow of Google Search. The latter is Google's attempt to take on chat-based search interfaces like ChatGPT search and Perplexity.

During its Q1 2025 earnings call on Thursday, Google highlighted the growth of its other AI-based search products as well, including Circle to Search. Circle to Search, which lets you highlight something on your smartphone's screen and ask questions about it, is now available on more than 250 million devices, Google said, up from around 200 million devices as of late last year. Circle to Search usage rose close to 40% quarter-over-quarter, according to the company.

Google also noted in its call that visual searches on its platforms are growing at a steady clip. According to CEO Sundar Pichai, searches through Google Lens, Google's multimodal AI-powered search technology, have increased by 5 billion since October. The number of people shopping on Lens was up over 10% in Q1, meanwhile.

The growth comes amid intense regulatory scrutiny of Google's search practices. The U.S. Department of Justice has been pressuring Google to spin off Chrome after a court found that the tech giant holds an illegal online search monopoly. A federal judge has also ruled that Google has an adtech monopoly, opening the door to a potential breakup.
-
TECHCRUNCH.COM
DoorDash seeks dismissal of Uber lawsuit

DoorDash has asked a California Superior Court judge to dismiss a lawsuit filed by Uber that accuses the food delivery company of stifling competition by intimidating restaurant owners into exclusive deals. DoorDash argues in its motion that Uber's claim lacks merit on all fronts.

In a post on its website on Friday, DoorDash said, "the lawsuit is nothing more than a cynical and calculated scare tactic from a frustrated competitor seeking to avoid real competition. It's disappointing behavior from a company once known for competing on the merits of its products and innovation." In the post, DoorDash added that it will "vigorously" defend itself, and positioned itself as a company that "competes fiercely yet fairly to deliver exceptional value to merchants."

A hearing has been set for July 11 in California Superior Court in San Francisco County.

Uber filed its lawsuit against DoorDash in February. The ride-hailing giant alleged that DoorDash, which holds the largest share of the food delivery market in the U.S., threatens restaurants with multimillion-dollar penalties or with removing or demoting the businesses' placement on the DoorDash app.

Uber responded to the DoorDash request in a statement sent to TechCrunch. "It seems like the team at DoorDash is having a hard time understanding the content of our Complaint," reads the emailed statement from Uber. "When restaurants are forced to choose between unfair terms or retaliation, that's not competition — it's coercion. Uber will continue to stand up for merchants and for a level playing field. We look forward to presenting the facts in court."

Uber requested a jury trial in its original complaint. The company has not specified the amount of damages it is seeking.

Separately, Deliveroo confirmed Friday that DoorDash has offered to buy the European food delivery company for $3.6 billion.
-
TECHCRUNCH.COM
Anthropic sent a takedown notice to a dev trying to reverse-engineer its coding tool

In the battle between two "agentic" coding tools — Anthropic's Claude Code and OpenAI's Codex CLI — the latter appears to be fostering more developer goodwill than the former. That's at least partly because Anthropic has issued takedown notices to a developer trying to reverse-engineer Claude Code, which is under a more restrictive usage license than Codex CLI.

Claude Code and Codex CLI are dueling tools that accomplish much the same thing: they allow developers to tap into the power of AI models running in the cloud to complete various coding tasks. Anthropic and OpenAI released them within months of each other — each company racing to capture valuable developer mindshare.

The source code for Codex CLI is available under an Apache 2.0 license that allows for distribution and commercial use. That's in contrast to Claude Code, which is tied to Anthropic's commercial license. That limits how it can be modified without explicit permission from the company. Anthropic also "obfuscated" the source code for Claude Code; in other words, Claude Code's source code isn't readily available. When a developer de-obfuscated it and released the source code on GitHub, Anthropic filed a DMCA complaint — a copyright notification requesting the code's removal.

Developers on social media weren't pleased by the move, which they said compared unfavorably with OpenAI's rollout of Codex CLI. In the week or so since Codex CLI's release, OpenAI has merged dozens of developer suggestions into the tool's codebase, including one that lets Codex CLI tap AI models from rival providers — including Anthropic.

Anthropic didn't respond to a request for comment. To be fair to the lab, Claude Code is still in beta (and a bit buggy); it's possible Anthropic will release the source code under a permissive license in the future. Companies have many reasons for obfuscating code, security considerations being one of them.

It's a somewhat surprising PR win for OpenAI, which in recent months has shied away from open source releases in favor of proprietary, locked-down products. It may be emblematic of a broader shift in the lab's approach; OpenAI CEO Sam Altman earlier this year said he believed the company has been on the "wrong side of history" when it comes to open source.
-
TECHCRUNCH.COM
ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI's text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI's transition to a for-profit.

In 2025, OpenAI is battling the perception that it's ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

Below, you'll find a timeline of ChatGPT product updates and releases, starting with the latest, which we've been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024 updates, go here.

Timeline of the most recent ChatGPT updates

April 2025

OpenAI wants its AI model to access cloud models for assistance
OpenAI leaders have been talking about allowing the open model to link up with OpenAI's cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.

OpenAI aims to make its new "open" AI model the best on the market
OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI's VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

OpenAI's GPT-4.1 may be less aligned than earlier models
OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company skipped the customary step of publishing a safety report (system card) for GPT-4.1, claiming in a statement to TechCrunch that "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."

OpenAI's o3 AI model scored lower than expected on a benchmark
Questions have been raised regarding OpenAI's transparency and model-testing procedures after first- and third-party benchmark results for the o3 AI model diverged. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, discovered that o3 achieved a score of approximately 10%, which was significantly lower than OpenAI's top-reported score.
OpenAI unveils Flex processing for cheaper, slower AI tasks
OpenAI has launched a new API feature called Flex processing that lets users run AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.

OpenAI's latest AI models now have a safeguard against biorisks
OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI's safety report.

OpenAI launches its latest reasoning models, o3 and o4-mini
OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI's previous models.

OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
OpenAI introduced a new section called "library" to make it easier for users to create images on mobile and web platforms, per the company's X post.

OpenAI could "adjust" its safeguards if rivals release "high-risk" AI
OpenAI said on Tuesday that it might revise its safety standards if "another frontier AI developer releases a high-risk system without comparable safeguards." The move shows how commercial AI developers face more pressure to rapidly implement models due to increased competition.

OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk's X and Mark Zuckerberg's Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

OpenAI will remove its largest AI model, GPT-4.5, from the API in July
OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will be available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI's API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.

OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
OpenAI has launched three models in the GPT-4.1 family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. The models are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google's Gemini 2.5 Pro, Anthropic's Claude 3.7 Sonnet, and DeepSeek's upgraded V3.

OpenAI will discontinue ChatGPT's GPT-4 at the end of April
OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, according to its changelog. The change will take effect on April 30. GPT-4 will remain available via OpenAI's API.

OpenAI could release GPT-4.1 soon
OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources.
GPT-4.1 would be an update of OpenAI's GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

OpenAI has updated ChatGPT to use information from your previous conversations
OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

OpenAI is working on watermarks for images made with ChatGPT
It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new "ImageGen" watermark feature in the new beta of ChatGPT's Android app. Blaho also found mentions of other tools: "Structured Thoughts," "Reasoning Recap," "CoT Search Tool," and "l1239dk1."

OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI's premium service, which offers access to the company's GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.

ChatGPT users have generated over 700M images so far
More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style images.

OpenAI's o3 model could cost more to run than initially estimated
The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for running OpenAI's o3 "reasoning" model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to address a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

OpenAI CEO says capacity issues will cause product delays
In a series of posts on X, OpenAI CEO Sam Altman said the company's new image-generation tool's popularity may cause product releases to be delayed. "We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges," he wrote.

March 2025

OpenAI plans to release a new 'open' AI language model
OpenAI intends to release its "first" open language model since GPT-2 "in the coming months." The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT's restrictions on image generation
OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause.
However, the company has now "evolved" its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI's model behavior.

OpenAI adopts Anthropic's standard for linking AI models with data
OpenAI wants to incorporate Anthropic's Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.

OpenAI's viral Studio Ghibli-style images could raise AI copyright concerns
The latest update of the image generator on OpenAI's ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like "My Neighbor Totoro" and "Spirited Away." The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to $12.7 billion this year
OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn't expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said.

ChatGPT has upgraded its image-generation feature
OpenAI on Tuesday rolled out a major upgrade to ChatGPT's image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI's AI video-generation tool, for subscribers of the company's Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company's API service. The company's CEO Sam Altman said on Wednesday, however, that the release of the image-generation feature to free users would be delayed due to higher demand than the company expected.

OpenAI announces leadership updates
Brad Lightcap, OpenAI's chief operating officer, will lead the company's global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI's AI voice assistant now has advanced features
OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company's official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT's free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are "more direct, engaging, concise, specific, and creative," a spokesperson from OpenAI told TechCrunch.
OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI's ChatGPT. Reliance has proposed selling OpenAI's models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

OpenAI faces privacy complaint in Europe for chatbot's defamatory hallucinations
Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. "The GDPR is clear. Personal data has to be accurate," said Joakim Söderberg, data protection lawyer at Noyb, in a statement. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."

OpenAI upgrades its transcription and voice-generating AI models
OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, "gpt-4o-mini-tts," that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called "gpt-4o-transcribe" and "gpt-4o-mini-transcribe." The company claims they are improved versions of what was already there and that they hallucinate less.

OpenAI has launched o1-pro, a more powerful version of its o1
OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing than its o1 "reasoning" AI model to deliver "consistently better responses." It's only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI's GPT-4.5 for input and 10 times the price of regular o1. (See the worked cost example below.)

Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for "reasoning" could have been developed 20 years ago if researchers had understood the correct approach and algorithms.

OpenAI says it has trained an AI that's "really good" at creative writing
OpenAI CEO Sam Altman said, in a post on X, that the company has trained a "new model" that's "really good" at creative writing. He posted a lengthy sample from the model given the prompt "Please write a metafictional literary short story about AI and grief." OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and its models might not be that great at creative writing at all.
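As a rough illustration of the o1-pro pricing quoted above, here is a small worked example. It is not from the article: the token counts are made-up figures chosen only to show the arithmetic, and a real bill depends on actual usage.

```python
# Worked example using the o1-pro prices quoted above:
# $150 per 1M input tokens, $600 per 1M output tokens.
# The token counts below are hypothetical, for illustration only.
INPUT_PRICE_PER_MILLION = 150.0   # USD
OUTPUT_PRICE_PER_MILLION = 600.0  # USD

input_tokens = 10_000   # hypothetical prompt size
output_tokens = 2_000   # hypothetical response size

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION \
     + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION

print(f"Estimated cost: ${cost:.2f}")  # Estimated cost: $2.70
```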
OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company's own AI models and frameworks. The tools are part of OpenAI's new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI's Operator product. The Responses API effectively replaces OpenAI's Assistants API, which the company plans to discontinue in the first half of 2026.

OpenAI reportedly plans to charge up to $20,000 a month for specialized AI 'agents'
OpenAI intends to release several "agent" products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a "high-income knowledge worker" agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting "PhD-level research," are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It's unclear when these agentic tools might launch or which customers will be eligible to buy them.

ChatGPT can directly edit your code
The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

ChatGPT's weekly active users doubled in less than 6 months, thanks to new releases
According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI's AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it took less than six months to double that number once more, according to the report. ChatGPT's weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model's launch.

February 2025

OpenAI cancels its o3 AI model in favor of a 'unified' next-gen release
OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a "simplified" product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that "integrates a lot of [OpenAI's] technology," including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.

ChatGPT may not be as power-hungry as once assumed
A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI's latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn't consider the additional energy costs incurred by ChatGPT features like image generation or input processing.

OpenAI now reveals more of its o3-mini model's thought process
In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step "thought" process.
ChatGPT users will see an updated "chain of thought" that shows more of the model's "reasoning" steps and how it arrived at its answers to questions.

You can now use ChatGPT web search without logging in
OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot's last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.

OpenAI unveils a new ChatGPT agent for 'deep research'
OpenAI announced a new AI "agent" called deep research that's designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the "agent" is intended for instances where you don't just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.

January 2025

OpenAI used a subreddit to test AI persuasion
OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models' responses to human replies for that same post.

OpenAI launches o3-mini, its latest 'reasoning' model
OpenAI launched a new AI "reasoning" model, o3-mini, the newest in the company's o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both "powerful" and "affordable."

ChatGPT's mobile users are 85% male, report says
A new report from app analytics firm Appfigures found that over half of ChatGPT's mobile users are under age 25, with users between ages 50 and 64 making up the second-largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.

OpenAI launches ChatGPT plan for US government agencies
OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI's corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI's tools for the handling of non-public sensitive data.

More teens report using ChatGPT for schoolwork, despite the tech's faults
Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they've used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number from two years ago. Just over half of the teens responding to the poll said they think it's acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.
OpenAI says it may store deleted Operator data for up to 90 days
OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company's AI "agent" tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted-data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator's.

OpenAI launches Operator, an AI agent that performs tasks autonomously
OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

Operator, OpenAI's agent tool, could be released sooner rather than later. Changes to ChatGPT's code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren't yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT's client-side code. TechCrunch separately identified the same references to Operator on OpenAI's website.

OpenAI tests phone number-only ChatGPT signups
OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can't upgrade to one of OpenAI's paid plans without verifying their account via an email. Multi-factor authentication also isn't supported without a valid email.

ChatGPT now lets you schedule reminders and recurring tasks
ChatGPT's new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

New ChatGPT feature lets users assign it traits like 'chatty' and 'Gen Z'
OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and "traits" they'd like the chatbot to have. OpenAI suggests traits like "Chatty," "Encouraging," and "Gen Z." However, some users reported that the new options have disappeared, so it's possible they went live prematurely.

FAQs

What is ChatGPT? How does it work?
ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?
November 30, 2022 is when ChatGPT was released for public use.

What is the latest version of ChatGPT?
Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?
There is a free version of ChatGPT that only requires a sign-in, in addition to the paid version, ChatGPT Plus.

Who uses ChatGPT?
Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

What companies use ChatGPT?
Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

What does GPT mean in ChatGPT?
GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?
A chatbot can be any software/system that holds dialogue with you/a person but doesn't necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they'll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?
Yes.

Can ChatGPT commit libel?
Due to the nature of how these models work, they don't know or care whether something is true, only that it looks true. That's a problem when you're using it to do your homework, sure, but when it accuses you of a crime you didn't commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?
Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?
It's not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?
Yes, it was released March 1, 2023. (A minimal usage sketch appears at the end of this article.)

What are some sample everyday uses for ChatGPT?
Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.

What are some advanced uses for ChatGPT?
Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

How good is ChatGPT at writing code?
It depends on the nature of the program. While ChatGPT can write workable Python code, it can't necessarily program an entire app's worth of code. That's because ChatGPT lacks context awareness — in other words, the generated code isn't always appropriate for the specific context in which it's being used.

Can you save a ChatGPT chat?
Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?
Yes. There are multiple AI-powered chatbot competitors such as Together, Google's Gemini, and Anthropic's Claude, and developers are creating open source alternatives.

How does ChatGPT handle data privacy?
OpenAI has said that individuals in "certain jurisdictions" (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression "in accordance with applicable laws." The web form for requesting the deletion of data about you is titled "OpenAI Personal Data Removal Request."
In its privacy policy, the ChatGPT maker makes a passing acknowledgment of the objection requirements attached to relying on "legitimate interest" (LI), pointing users toward more information about requesting an opt-out: "See here for instructions on how you can opt out of our use of your information to train our models."

What controversies have surrounded ChatGPT?
Discord announced that it had integrated OpenAI's technology into its bot named Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT's false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

There have also been cases of ChatGPT accusing individuals of false crimes.

Where can I find examples of ChatGPT prompts?
Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?
Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they're inconsistent at best.

Are ChatGPT chats public?
No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users' conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?
None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?
Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
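The FAQ above notes that the models behind ChatGPT are available through an API and that ChatGPT can write workable Python code. As a minimal, illustrative sketch (not from the article), a request through OpenAI's official Python SDK might look like the following; it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the model name is a placeholder rather than a recommendation.

```python
# Minimal sketch of calling an OpenAI model via the official Python SDK (v1+).
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is a placeholder, not a recommendation from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Explain in one sentence what a chatbot is."}
    ],
)

print(response.choices[0].message.content)
```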
-
TECHCRUNCH.COM
The TechCrunch Cyber Glossary

The cybersecurity world is full of jargon and lingo. At TechCrunch, we have been writing about cybersecurity for years, and we frequently use technical terms and expressions to describe the nature of what is happening in the world. That's why we have created this glossary, which includes some of the most common — and not so common — words and expressions that we use in our articles, and explanations of how, and why, we use them. This is a developing compendium, and we will update it regularly. If you have any feedback or suggestions for this glossary, get in touch.

Advanced persistent threat (APT)
An advanced persistent threat (APT) is often categorized as a hacker, or group of hackers, which gains and maintains unauthorized access to a targeted system. The main aim of an APT intruder is to remain undetected for long periods of time, often to conduct espionage and surveillance, to steal data, or to sabotage critical systems. APTs are traditionally well-resourced hackers, with the funding to pay for their malicious campaigns and access to hacking tools typically reserved for governments. As such, many of the long-running APT groups are associated with nation-states, like China, Iran, North Korea, and Russia. In recent years, we've seen examples of non-nation-state cybercriminal groups that are financially motivated (such as by theft and money laundering) carrying out cyberattacks similar in persistence and capability to some traditional government-backed APT groups. (See also: Hacker)

Adversary-in-the-middle attack
An adversary-in-the-middle (AitM) attack, traditionally known as a "man-in-the-middle" (MitM) attack, is where someone intercepts network traffic at a particular point on the network in an attempt to eavesdrop on or modify the data as it travels across the internet. This is why encrypting data makes it more difficult for malicious actors to read or understand a person's network traffic, which could contain personal information or secrets, like passwords. Adversary-in-the-middle attacks can be used legitimately by security researchers to help understand what data goes in and out of an app or web service, a process that can help identify security bugs and data exposures.

Arbitrary code execution
The ability to run commands or malicious code on an affected system, often because of a security vulnerability in the system's software. Arbitrary code execution can be achieved either remotely or with physical access to an affected system (such as someone's device). In the cases where arbitrary code execution can be achieved over the internet, security researchers typically call this remote code execution. Often, code execution is used as a way to plant a backdoor for maintaining long-term and persistent access to that system, or for running malware that can be used to access deeper parts of the system or other devices on the same network. (See also: Remote code execution)

Attribution
Attribution is the process of finding out and identifying who is behind a cyberattack. There is an often-repeated mantra, "attribution is hard," which is meant to warn cybersecurity professionals and the wider public that definitively establishing who was behind a cyberattack is no simple task. While it is not impossible to attribute an attack, the answer also depends on the level of confidence in the assessment.
Threat intelligence companies such as CrowdStrike, Kaspersky, and Mandiant, among others, have for years attributed cyberattacks and data breaches to groups or "clusters" of hackers, often referencing groups by a specific codename, based on a pattern of certain tactics, techniques, and procedures seen in previous attacks. Some threat intelligence firms go as far as publicly linking certain groups of hackers to specific governments or their intelligence agencies when the evidence points to it. Government agencies, for their part, have for years publicly accused other governments and countries of being behind cyberattacks, and have gone as far as identifying — and sometimes criminally charging — specific people working for those agencies.

Backdoor
A backdoor is a subjective term, but broadly refers to creating the means to gain future access to a system, device, or physical area. Backdoors can be found in software or hardware, such as a mechanism to gain access to a system (or space) in case of accidental lock-out, or for remotely providing technical support over the internet. Backdoors can have legitimate and helpful use cases, but they can also be undocumented, maliciously planted, or otherwise unknown to the user or owner, which can weaken the security of the product and make it more susceptible to hacking or compromise. TechCrunch has a deeper dive on encryption backdoors.

Black/white hat
Hackers historically have been categorized as either "black hat" or "white hat," usually depending on the motivations of the hacking activity carried out. A "black hat" hacker is someone who might break the law and hack for money or personal gain, such as a cybercriminal. "White hat" hackers generally hack within legal bounds, such as part of a penetration test sanctioned by the target company, or to collect bug bounties by finding flaws in various software and disclosing them to the affected vendor. Those who hack with less clear-cut motivations may be regarded as "gray hat" hackers. Famously, the hacking group the L0pht used the term gray hat in an interview with The New York Times Magazine in 1999. While still commonly used in modern security parlance, many have moved away from the "hat" terminology. (See also: Hacker, Hacktivist)

Botnet
Botnets are networks of hijacked internet-connected devices, such as webcams and home routers, that have been compromised by malware (or sometimes weak or default passwords) for the purposes of being used in cyberattacks. Botnets can be made up of hundreds or thousands of devices and are typically controlled by a command-and-control server that sends out commands to ensnared devices. Botnets can be used for a range of malicious reasons, like using the distributed network of devices to mask and shield the internet traffic of cybercriminals, deliver malware, or harness their collective bandwidth to maliciously crash websites and online services with huge amounts of junk internet traffic. (See also: Command-and-control server; Distributed denial-of-service)

Brute force
A brute-force attack is a common and rudimentary method of hacking into accounts or systems by automatically trying different combinations and permutations of letters and words to guess passwords. A less sophisticated brute-force attack is one that uses a "dictionary," meaning a list of known and common passwords, for example. A well-designed system should prevent these types of attacks by limiting the number of login attempts allowed inside a specific time frame, a solution called rate-limiting.
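To make the rate-limiting mitigation described in the "Brute force" entry concrete, here is a minimal illustrative sketch. It is not part of the glossary; the thresholds and function names are arbitrary example choices.

```python
# Illustrative sketch of rate-limiting login attempts to blunt brute-force attacks.
# The limits (5 failures per 5 minutes) and names are arbitrary example values.
import time
from collections import defaultdict

MAX_FAILURES = 5
WINDOW_SECONDS = 300  # five minutes

_recent_failures = defaultdict(list)  # username -> timestamps of recent failed logins

def allow_login_attempt(username: str) -> bool:
    """Return False once a username has accumulated too many recent failures."""
    now = time.time()
    still_recent = [t for t in _recent_failures[username] if now - t < WINDOW_SECONDS]
    _recent_failures[username] = still_recent
    return len(still_recent) < MAX_FAILURES

def record_failed_login(username: str) -> None:
    """Call this after a failed password check."""
    _recent_failures[username].append(time.time())
```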
Bug
A bug is essentially the cause of a software glitch, such as an error or a problem that causes the software to crash or behave in an unexpected way. In some cases, a bug can also be a security vulnerability. The term "bug" originated in 1947, at a time when early computers were the size of rooms and made up of heavy mechanical and moving equipment. The first known incident of a bug found in a computer was when a moth disrupted the electronics of one of these room-sized computers. (See also: Vulnerability)

Command-and-control (C2) server
Command-and-control servers (also known as C2 servers) are used by cybercriminals to remotely manage and control their fleets of compromised devices and launch cyberattacks, such as delivering malware over the internet and launching distributed denial-of-service attacks. (See also: Botnet; Distributed denial-of-service)

Crypto
This is a word that can have two meanings depending on the context. Traditionally, in the context of computer science and cybersecurity, crypto is short for "cryptography," the mathematical field of coding and decoding messages and data using encryption. Crypto has more recently also become short for cryptocurrency, such as Bitcoin, Ethereum, and the myriad blockchain-based decentralized digital currencies that have sprung up in the last fifteen years. As cryptocurrencies have grown from a niche community to a whole industry, crypto is now also used to refer to that whole industry and community. For years, the cryptography and cybersecurity communities have wrestled with the adoption of this new meaning, going as far as turning the phrases "crypto is not cryptocurrency" and "crypto means cryptography" into a dedicated website and even T-shirts. Languages change over time depending on how people use words. As such, TechCrunch accepts the reality that crypto has different meanings depending on context, and where the context isn't clear, we spell out cryptography or cryptocurrency.

Cryptojacking
Cryptojacking is when a device's computational power is used, with or without the owner's permission, to generate cryptocurrency. Developers sometimes bundle code in apps and on websites that uses the device's processors to complete the complex mathematical calculations needed to create new cryptocurrency. The generated cryptocurrency is then deposited in virtual wallets owned by the developer. Some malicious hackers use malware to deliberately compromise large numbers of unwitting computers to generate cryptocurrency on a large and distributed scale.

Dark and deep web
The world wide web is the public content that flows across the pipes of the internet; much of what is online today is accessible to anyone at any time. The "deep web," however, is the content that is kept behind paywalls and member-only spaces, or any part of the web that is not readily accessible or browsable with a search engine. Then there is the "dark web," which is the part of the internet that allows users to remain anonymous but requires certain software (such as the Tor Browser) to access, depending on the part of the dark web you're trying to reach. Anonymity benefits those who live and work in highly censored or surveilled countries, but it can also benefit criminals. There is nothing inherently criminal or nefarious about accessing the dark web; many popular websites also offer dark web versions so that users around the world can access their content. TechCrunch has a more detailed explainer on what the dark web is.
Data breach
When we talk about data breaches, we ultimately mean the improper removal of data from where it should have been. But the circumstances matter and can alter the terminology we use to describe a particular incident. A data breach is when protected data is confirmed to have improperly left the system where it was originally stored, usually established when someone discovers the compromised data. More often than not, we're referring to the exfiltration of data by a malicious cyberattacker, or to data otherwise detected as the result of an inadvertent exposure. Depending on what is known about the incident, we may describe it in more specific terms where details are known. (See also: Data exposure; Data leak)

Data exposure
A data exposure (a type of data breach) is when protected data is stored on a system that has no access controls, for example because of human error or a misconfiguration. This might include cases where a system or database is connected to the internet but without a password. Just because data was exposed doesn't mean the data was actively discovered, but it could nevertheless still be considered a data breach.

Data leak
A data leak (a type of data breach) is where protected data is stored on a system in a way that allowed it to escape, such as due to a previously unknown vulnerability in the system or by way of insider access (such as an employee). A data leak can mean that data could have been exfiltrated or otherwise collected, but there may not always be the technical means, such as logs, to know for sure.

Deepfake
Deepfakes are AI-generated videos, audio, or images designed to look real, often with the goal of fooling people into thinking they are genuine. Deepfakes are developed with a specific type of machine learning known as deep learning, hence the name. Examples of deepfakes can range from relatively harmless, like a video of a celebrity saying something funny or outrageous, to more harmful efforts. In recent years, there have been documented cases of deepfaked political content designed to discredit politicians and influence voters, while other malicious deepfakes have relied on faked recordings of executives designed to trick company employees into giving up sensitive information or sending money to scammers. Deepfakes are also contributing to the proliferation of nonconsensual sexual images.

Def Con (aka DEFCON)
Def Con is one of the most important hacking conferences in the world, held annually in Las Vegas, usually during August. Launched in 1993 as a party for some hacker friends, it has now become an annual gathering of almost 30,000 hackers and cybersecurity professionals, with dozens of talks, capture-the-flag hacking competitions, and themed "villages," where attendees can learn how to hack internet-connected devices, voting systems, and even aircraft. Unlike other conferences like RSA or Black Hat, Def Con is decidedly not a business conference, and the focus is much more on hacker culture. There is a vendor area, but it usually includes nonprofits like the Electronic Frontier Foundation, The Calyx Institute, and the Tor Project, as well as relatively small cybersecurity companies.

Distributed denial-of-service (DDoS)
A distributed denial-of-service, or DDoS, is a kind of cyberattack that involves flooding targets on the internet with junk web traffic in order to overload and crash the servers and cause the service, such as a website, online store, or gaming platform, to go down.
DDoS attacks are launched by botnets, which are made up of networks of hacked internet-connected devices (such as home routers and webcams) that can be remotely controlled by a malicious operator, usually from a command-and-control server. Botnets can be made up of hundreds or thousands of hijacked devices. While a DDoS is a form of cyberattack, these data-flooding attacks are not "hacks" in themselves, as they don't involve the breach and exfiltration of data from their targets, but instead cause a "denial of service" event to the affected service. (See also: Botnet; Command-and-control server)

Encryption
Encryption is the way and means by which information, such as files, documents, and private messages, is scrambled to make the data unreadable to anyone other than its intended owner or recipient. Encrypted data is typically scrambled using an encryption algorithm — essentially a set of mathematical formulas that determines how the data should be scrambled and unscrambled. Nearly all modern encryption algorithms in use today are open source, allowing anyone (including security professionals and cryptographers) to review and check the algorithm to make sure it's free of faults or flaws. Some encryption algorithms are stronger than others, meaning data protected by some weaker algorithms can be decrypted by harnessing large amounts of computational power. Encryption is different from encoding, which simply converts data into a different and standardized format, usually for the benefit of allowing computers to read the data. (A short illustrative code sketch follows the "Escalation of privileges" entry below.) (See also: End-to-end encryption)

End-to-end encryption (E2EE)
End-to-end encryption (or E2EE) is a security feature built into many messaging and file-sharing apps, and is widely considered one of the strongest ways of securing digital communications as they traverse the internet. E2EE scrambles the file or message on the sender's device before it's sent in a way that allows only the intended recipient to decrypt its contents, making it near-impossible for anyone — including a malicious hacker, or even the app maker — to snoop on someone's private communications. In recent years, E2EE has become the default security standard for many messaging apps, including Apple's iMessage, Facebook Messenger, Signal, and WhatsApp. E2EE has also become the subject of governmental frustration in recent years, as encryption makes it impossible for tech companies or app providers to hand over information that they themselves do not have access to. (See also: Encryption)

Escalation of privileges
Most modern systems are protected with multiple layers of security, including the ability to set user accounts with more restricted access to the underlying system's configurations and settings. This prevents these users — or anyone with improper access to one of these user accounts — from tampering with the core underlying system. However, an "escalation of privileges" event can involve exploiting a bug or tricking the system into granting the user more access rights than they should have. Malware can also take advantage of such bugs or flaws to escalate its privileges, gaining deeper access to a device or a connected network and potentially allowing the malware to spread.
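To illustrate the distinction the "Encryption" entry above draws between encryption and encoding, here is the short sketch referenced there. It is illustrative only, is not part of the glossary, and assumes the third-party cryptography package is installed; the message is a made-up example.

```python
# Illustrative only: contrasts encoding (reversible by anyone) with encryption
# (reversible only with the key). Assumes the `cryptography` package is installed.
import base64
from cryptography.fernet import Fernet

message = b"meet me at noon"

# Encoding: a standardized, reversible format that provides no secrecy.
encoded = base64.b64encode(message)
print(base64.b64decode(encoded))          # b'meet me at noon'

# Encryption: only a holder of the key can recover the plaintext.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(message)
print(Fernet(key).decrypt(ciphertext))    # b'meet me at noon'
```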
End-to-end encryption (E2EE)
End-to-end encryption (or E2EE) is a security feature built into many messaging and file-sharing apps, and is widely considered one of the strongest ways of securing digital communications as they traverse the internet. E2EE scrambles the file or message on the sender’s device before it’s sent, in a way that allows only the intended recipient to decrypt its contents, making it near-impossible for anyone — including a malicious hacker, or even the app maker — to snoop on someone’s private communications. In recent years, E2EE has become the default security standard for many messaging apps, including Apple’s iMessage, Facebook Messenger, Signal, and WhatsApp. E2EE has also become the subject of governmental frustration in recent years, as encryption makes it impossible for tech companies or app providers to hand over information that they themselves do not have access to. (See also: Encryption)

Escalation of privileges
Most modern systems are protected with multiple layers of security, including the ability to set user accounts with more restricted access to the underlying system’s configurations and settings. This prevents these users — or anyone with improper access to one of these user accounts — from tampering with the core underlying system. However, an “escalation of privileges” event can involve exploiting a bug or tricking the system into granting the user more access rights than they should have. Malware can also exploit bugs or flaws to escalate its privileges, gaining deeper access to a device or a connected network and potentially allowing the malware to spread.

Espionage
When we talk about espionage, we’re generally referring to threat groups or hacking campaigns that are dedicated to spying, and are typically characterized by their stealth. Espionage-related hacks are usually aimed at gaining and maintaining stealthy, persistent access to a target’s network to carry out passive surveillance, reconnaissance for future cyberattacks, or the long-term collection and exfiltration of data. Espionage operations are often carried out by governments and intelligence agencies, though not exclusively.

Exploit
An exploit is the way and means in which a vulnerability is abused or taken advantage of, usually in order to break into a system. (See also: Bug; Vulnerability)

Extortion
In general terms, extortion is the act of obtaining something, usually money, through the use of force and intimidation. Cyber extortion is no different: it typically refers to a category of cybercrime whereby attackers demand payment from victims by threatening to damage, disrupt, or expose their sensitive information. Extortion is often used in ransomware attacks, where hackers typically exfiltrate company data before demanding a ransom payment from the hacked victim. But extortion has quickly become its own category of cybercrime, with many, often younger, financially motivated hackers opting to carry out extortion-only attacks, which snub the use of encryption in favor of simple data theft. (Also see: Ransomware)

Forensics
Forensic investigations involve analyzing the data and information contained in a computer, server, or mobile device, looking for evidence of a hack, crime, or some sort of malfeasance. Sometimes, in order to access the data, corporate or law enforcement investigators rely on specialized devices and tools, like those made by Cellebrite and Grayshift, which are designed to unlock and break the security of computers and cellphones to access the data within.

Hacker
There is no one single definition of “hacker.” The term has its own rich history, culture, and meaning within the security community. Some incorrectly conflate hackers, or hacking, with wrongdoing. By our definition and use, we broadly refer to a “hacker” as someone who is a “breaker of things,” usually by altering how something works to make it perform differently in order to meet their objectives. In practice, that can be something as simple as repairing a machine with non-official parts to make it function differently than intended, or work even better. In the cybersecurity sense, a hacker is typically someone who breaks a system or breaks the security of a system. That could be anything from an internet-connected computer system to a simple door lock. But the person’s intentions and motivations (if known) matter in our reporting, and guide how we accurately describe the person, or their activity. There are ethical and legal differences between a hacker who works as a security researcher, who is professionally tasked with breaking into a company’s systems with their permission to identify security weaknesses that can be fixed before a malicious individual has a chance to exploit them, and a malicious hacker who gains unauthorized access to a system and steals data without obtaining anyone’s permission. If we know that an individual works for a government and is contracted to maliciously steal data from a rival government, we’re likely to describe them as a nation-state or government hacker (also known as an advanced persistent threat), for example. Because the term “hacker” is inherently neutral, we generally apply descriptors in our reporting to provide context about who we’re talking about.
If a gang is known to use malware to steal funds from individuals’ bank accounts, we may describe them as financially motivated hackers, or, if there is evidence of criminality or illegality (such as an indictment), we may describe them simply as cybercriminals. And if we don’t know motivations or intentions, or a person describes themselves as such, we may simply refer to a subject neutrally as a “hacker,” where appropriate.

Hack-and-leak operation
Sometimes, hacking and stealing data is only the first step. In some cases, hackers then leak the stolen data to journalists, or post the data online directly for anyone to see. The goal can be either to embarrass the hacking victim or to expose alleged malfeasance. The origins of modern hack-and-leak operations date back to the early and mid-2000s, when groups like el8, pHC (“Phrack High Council”) and zf0 were targeting people in the cybersecurity industry who, according to these groups, had foregone the hacker ethos and sold out. Later examples include hackers associated with Anonymous leaking data from U.S. government contractor HBGary, and North Korean hackers leaking emails stolen from Sony as retribution for the Hollywood comedy, The Interview. Some of the most recent and famous examples are the hack against the now-defunct government spyware pioneer Hacking Team in 2015, and the infamous Russian government-led hack-and-leak of Democratic National Committee emails ahead of the 2016 U.S. presidential elections. Iranian government hackers tried to emulate the 2016 playbook during the 2024 elections.

Hacktivist
A particular kind of hacker who hacks for what they — and perhaps the public — perceive as a good cause, hence the portmanteau of the words “hacker” and “activist.” Hacktivism has been around for more than two decades, starting perhaps with groups like the Cult of the Dead Cow in the late 1990s. Since then, there have been several high-profile examples of hacktivist hackers and groups, such as Anonymous, LulzSec, and Phineas Fisher. (Also see: Hacker)

Infosec
Short for “information security,” an alternative term used to describe defensive cybersecurity focused on the protection of data and information. “Infosec” may be the preferred term for industry veterans, while the term “cybersecurity” has become widely accepted. In modern times, the two terms have become largely interchangeable.

Infostealers
Infostealers are malware capable of stealing information from a person’s computer or device. Infostealers, such as Redline, are often bundled in pirated software; once installed, they will primarily seek out passwords and other credentials stored in the person’s browser or password manager, then surreptitiously upload the victim’s passwords to the attacker’s systems. This lets the attacker sign in using those stolen passwords. Some infostealers are also capable of stealing session tokens from a user’s browser, which allow the attacker to sign in to a person’s online account as if they were that user, but without needing their password or multi-factor authentication code. (See also: Malware)

Jailbreak
Jailbreaking is used in several contexts to mean the use of exploits and other hacking techniques to circumvent the security of a device, or to remove the restrictions a manufacturer puts on hardware or software. In the context of iPhones, for example, a jailbreak is a technique to remove Apple’s restrictions on installing apps outside of its “walled garden” or to gain the ability to conduct security research on Apple devices, which is normally highly restricted. In the context of AI, jailbreaking means figuring out a way to get a chatbot to give out information that it’s not supposed to.

Kernel
The kernel, as its name suggests, is the core part of an operating system that connects and controls virtually all hardware and software. As such, the kernel has the highest level of privileges, meaning it has access to virtually any data on the device. That’s why, for example, apps such as antivirus and anti-cheat software run at the kernel level, as they require broad access to the device. Having kernel access allows these apps to monitor for malicious code.

Malware
Malware is a broad umbrella term that describes malicious software. Malware can come in many forms and be used to exploit systems in different ways. As such, malware that is used for specific purposes is often referred to by its own subcategory. For example, the type of malware used for conducting surveillance on people’s devices is also called “spyware,” while malware that encrypts files and demands money from its victims is called “ransomware.” (See also: Infostealers; Ransomware; Spyware)

Metadata
Metadata is information about something digital, rather than its contents. That can include details about the size of a file or document, who created it, and when, or, in the case of digital photos, where the image was taken and information about the device that took the photo. Metadata may not identify the contents of a file, but it can be useful in determining where a document came from or who authored it. Metadata can also refer to information about an exchange, such as who made a call or sent a text message, but not the contents of the call or the message.
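For a sense of how much metadata a file carries without its contents ever being read, here is a small Python sketch using only the standard library; the file name is a made-up placeholder.

```python
import os
from datetime import datetime, timezone

# Hypothetical file path, used purely for illustration.
path = "quarterly_report.pdf"

info = os.stat(path)  # reads metadata only; the file's contents are never opened
print("size in bytes:", info.st_size)
print("last modified:", datetime.fromtimestamp(info.st_mtime, tz=timezone.utc))
print("last accessed:", datetime.fromtimestamp(info.st_atime, tz=timezone.utc))
```

Photo files typically carry richer metadata still, such as EXIF tags with the camera model and GPS coordinates, which is why investigators and journalists often inspect or scrub metadata before publishing documents.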
Multi-factor authentication
Multi-factor authentication (MFA) is the common umbrella term for describing when a person must provide a second piece of information, aside from a username and password, to log into a system. MFA (or two-factor, also known as 2FA) can prevent malicious hackers from re-using a person’s stolen credentials by requiring a time-sensitive code sent to or generated from a registered device owned by the account holder, or the use of a physical token or key.
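Those time-sensitive codes typically follow a public standard, TOTP (RFC 6238). As a rough illustration (the secret below is a made-up example, not one tied to any real account), the entire calculation fits in a few lines of Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                    # changes every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret for illustration only; real secrets are provisioned by the service.
print(totp("JBSWY3DPEHPK3PXP"))
```

When you scan a QR code to set up an authenticator app, you are importing a shared secret like this one; the server runs the same calculation and simply checks that the codes match.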
Operational security (OPSEC)
Operational security, or OPSEC for short, is the practice of keeping information secret in various situations. Practicing OPSEC means thinking about what information you are trying to protect, from whom, and how you’re going to protect it. OPSEC is less about what tools you are using, and more about how you are using them and for what purpose. For example, government officials discussing plans to bomb foreign countries on Signal are practicing bad OPSEC because the app is not designed for that use case, and runs on devices that are more vulnerable to hackers than highly restricted systems specifically designed for military communications. On the other hand, journalists using Signal to talk to sensitive sources is generally good OPSEC because it makes it harder for those communications to be intercepted by eavesdroppers. (See also: Threat model)

Penetration testing
Also known as “pen-testing,” this is the process where security researchers “stress-test” the security of a product, network, or system, usually by attempting to modify the way that the product typically operates. Software makers may ask for a pen-test on a product, or of their internal network, to ensure that they are free from serious or critical security vulnerabilities, though a pen-test does not guarantee that a product will be completely bug-free.

Phishing
Phishing is a type of cyberattack where hackers trick their targets into clicking or tapping on a malicious link, or opening a malicious attachment. The term derives from “fishing,” because hackers often use “lures” to convincingly trick their targets in these types of attacks. A phishing lure could be an attachment coming from an email address that appears to be legitimate, or even an email spoofing the email address of a person that the target really knows. Sometimes, the lure could be something that might appear important to the target, like a forged document sent to a journalist that appears to show corruption, or a fake conference invite for human rights defenders. There is an often-cited adage by the well-known cybersecurity influencer The Grugq, which encapsulates the value of phishing: “Give a man an 0day and he’ll have access for a day, teach a man to phish and he’ll have access for life.” (Also see: Social engineering)

Ransomware
Ransomware is a type of malicious software (or malware) that prevents device owners from accessing their data, typically by encrypting the person’s files. Ransomware is usually deployed by cybercriminal gangs who demand a ransom payment — usually cryptocurrency — in return for providing the private key to decrypt the person’s data. In some cases, ransomware gangs will steal the victim’s data before encrypting it, allowing the criminals to extort the victim further by threatening to publish the files online. Paying a ransomware gang is no guarantee that the victim will get their stolen data back, or that the gang will delete the stolen data. One of the first-ever ransomware attacks was documented in 1989, in which malware was distributed via floppy disk (an early form of removable storage) to attendees of the World Health Organization’s AIDS conference. Since then, ransomware has evolved into a multibillion-dollar criminal industry as attackers refine their tactics and home in on big-name corporate victims. (See also: Malware; Sanctions)

Remote code execution
Remote code execution refers to the ability to run commands or malicious code (such as malware) on a system over a network, often the internet, without requiring any human interaction from the target. Remote code execution attacks can range in complexity but can be highly damaging when vulnerabilities are exploited. (See also: Arbitrary code execution)

Sanctions
Cybersecurity-related sanctions work similarly to traditional sanctions in that they make it illegal for businesses or individuals to transact with a sanctioned entity. In the case of cyber sanctions, these entities are suspected of carrying out malicious cyber-enabled activities, such as ransomware attacks or the laundering of ransom payments made to hackers. The U.S. Treasury’s Office of Foreign Assets Control (OFAC) administers sanctions. The Treasury’s Cyber-Related Sanctions Program was established in 2015 as part of the Obama administration’s response to cyberattacks targeting U.S. government agencies and private sector U.S. entities. While a relatively new addition to the U.S. government’s bureaucratic armory against ransomware groups, sanctions are increasingly used to hamper and deter malicious state actors from conducting cyberattacks.
Sanctions are often used against hackers who are out of reach of U.S. indictments or arrest warrants, such as ransomware crews based in Russia.

Sandbox
A sandbox is a part of a system that is isolated from the rest. The goal is to create a protected environment where a hacker can compromise the sandbox without gaining further access to the rest of the system. For example, mobile applications usually run in their own sandboxes: if hackers compromise a browser, they cannot immediately compromise the operating system or another app on the same device. Security researchers also use sandboxes in both physical and virtual environments (such as a virtual machine) to analyze malicious code without risking compromising their own computers or networks.

SIM swap
SIM swapping is a type of attack where hackers hijack and take control of a person’s phone number, often with the goal of then using the phone number to log into the target’s sensitive accounts, such as their email address, bank account, or cryptocurrency wallet. This attack exploits the way that online accounts sometimes rely on a phone number as a fallback in the event of a lost password. SIM swaps often rely on hackers using social engineering techniques to trick phone carrier employees (or bribing them) into handing over control of a person’s account, as well as hacking into carrier systems.

Social engineering
Social engineering is the art of human deception, and encompasses several techniques a hacker can use to deceive their target into doing something they normally would not do. Phishing, for example, can be classified as a type of social engineering attack because hackers trick targets into clicking on a malicious link or opening a malicious attachment, or call someone on the phone while pretending to be their employer’s IT department. Social engineering can also be used in the real world, for example, to convince building security employees to let in someone who shouldn’t be allowed to enter the building. Some call it “human hacking” because social engineering attacks don’t necessarily have to involve technology. (Also see: Phishing)

Spyware (commercial, government)
A broad term, like malware, that covers a range of surveillance monitoring software. Spyware is typically used to refer to malware made by private companies, such as NSO Group’s Pegasus, Intellexa’s Predator, and Hacking Team’s Remote Control System, among others, which the companies sell to government agencies. In more generic terms, these types of malware are like remote access tools, which allow their operators — usually government agents — to spy on and monitor their targets, giving them the ability to access a device’s camera and microphone or exfiltrate data. Spyware is also referred to as commercial or government spyware, or mercenary spyware. (See also: Stalkerware)

Stalkerware
Stalkerware is a kind of surveillance malware (and a form of spyware) that is usually sold to ordinary consumers under the guise of child or employee monitoring software but is often used for the purposes of spying on the phones of unwitting individuals, oftentimes spouses and domestic partners. The spyware grants access to the target’s messages, location, and more. Stalkerware typically requires physical access to a target’s device, which gives the attacker the ability to install it directly on the device, often because the attacker knows the target’s passcode. (See also: Spyware)

Threat model
What are you trying to protect?
Who are you worried about that could go after you or your data? How could these attackers get to the data? The answers to these kinds of questions are what will lead you to create a threat model. In other words, threat modeling is a process that an organization or an individual goes through to design secure software, and to devise techniques to secure it. A threat model can be focused and specific depending on the situation. A human rights activist in an authoritarian country has a different set of adversaries, and data, to protect than a large corporation in a democratic country that is worried about ransomware, for example. (See also: Operational security)

Unauthorized access
When we describe “unauthorized” access, we’re referring to the accessing of a computer system by breaking any of its security features, such as a login prompt or a password, which would be considered illegal under the U.S. Computer Fraud and Abuse Act, or the CFAA. The Supreme Court in 2021 clarified the CFAA, finding that accessing a system lacking any means of authorization — for example, a database with no password — is not illegal, as you cannot break a security feature that isn’t there. It’s worth noting that “unauthorized” is a broad term that companies often use subjectively, and as such it has been used to describe everything from malicious hackers who steal someone’s password to break in, to incidents of insider access or abuse by employees.

Virtual private network (VPN)
A virtual private network, or VPN, is a networking technology that allows someone to “virtually” access a private network, such as their workplace or home, from anywhere else in the world. Many use a VPN provider to browse the web, thinking that this can help to avoid online surveillance. TechCrunch has a skeptics’ guide to VPNs that can help you decide if a VPN makes sense for you. If it does, we’ll show you how to set up your own private and encrypted VPN server that only you control. And if it doesn’t, we explore some of the privacy tools and other measures you can take to meaningfully improve your privacy online.

Vulnerability
A vulnerability (also referred to as a security flaw) is a type of bug that causes software to crash or behave in an unexpected way that affects the security of the system or its data. Sometimes, two or more vulnerabilities can be used in conjunction with each other — known as “vulnerability chaining” — to gain deeper access to a targeted system. (See also: Bug; Exploit)

Zero-click (and one-click) attacks
Malicious attacks can sometimes be categorized and described by the amount of user interaction that malware, or a malicious hacker, needs in order to achieve a successful compromise. One-click attacks refer to the target having to interact only once with the incoming lure, such as clicking on a malicious link or opening an attachment, to grant the intruder access. But zero-click attacks differ in that they can achieve compromise without the target having to click or tap anything. Zero-clicks are near-invisible to the target and are far more difficult to identify. As such, zero-click attacks are almost always delivered over the internet, and are often reserved for high-value targets for their stealthy capabilities, such as deploying spyware. (Also see: Spyware)

Zero-day
A zero-day is a specific type of security vulnerability that has been publicly disclosed or exploited before the vendor that makes the affected hardware or software has been given time (or “zero days”) to fix the problem.
As such, there may be no immediate fix or mitigation to prevent an affected system from being compromised. This can be particularly problematic for internet-connected devices. (See also: Vulnerability)

First published on September 20, 2024.
-
TECHCRUNCH.COMAn OpenAI researcher who worked on GPT-4.5 had their green card denied
Kai Chen, a Canadian AI researcher working at OpenAI who’s lived in the U.S. for 12 years, was denied a green card, according to Noam Brown, a leading research scientist at the company. In a post on X, Brown said that Chen learned of the decision Friday and must soon leave the country. “It’s deeply concerning that one of the best AI researchers I’ve worked with […] was denied a U.S. green card,” wrote Brown. “A Canadian who’s lived and contributed here for 12 years now has to leave. We’re risking America’s AI leadership when we turn away talent like this.” Another OpenAI employee, Dylan Hunn, said in a post that Chen was “crucial” for GPT-4.5, one of OpenAI’s flagship AI models. Green cards can be denied for all sorts of reasons, and the decision won’t cost Chen her job. In a follow-up post, Brown said that Chen plans to work remotely from an Airbnb in Vancouver “until [the] mess hopefully gets sorted out.” But it’s the latest example of foreign talent facing high barriers to living, working, and studying in the U.S. under the Trump administration. OpenAI didn’t immediately respond to a request for comment. However, in a post on X last July, OpenAI CEO Sam Altman called for changes to make it easier for “high-skill” immigrants to move to and work in the U.S. Over the past few months, more than 1,700 international students in the U.S., including AI researchers who’ve lived in the country for a number of years, have had their visa statuses challenged as part of an aggressive crackdown. While the government has accused some of these students of supporting Palestinian militant groups or engaging in “antisemitic” activities, others have been targeted for minor legal infractions, like speeding tickets or other traffic violations. Meanwhile, the Trump administration has turned a skeptical eye toward many green card applicants, reportedly suspending processing of requests for legal permanent residency submitted by immigrants granted refugee or asylum status. It has also taken a hardline approach to green card holders it perceives as “national security” threats, detaining and threatening several with deportation. AI labs like OpenAI rely heavily on foreign research talent. According to Shaun Ralston, an OpenAI contractor providing support for the company’s API customers, OpenAI filed more than 80 applications for H-1B visas last year alone and has sponsored more than 100 visas since 2022. H-1B visas, favored by the tech industry, allow U.S. companies to temporarily employ foreign workers in “specialty occupations” that require at least a bachelor’s degree or the equivalent. Recently, immigration officials have begun issuing “requests for evidence” for H-1Bs and other employment-based immigration petitions, asking for home addresses and biometrics, a change some experts worry may lead to an uptick in denied applications. Immigrants have played a major role in the growth of the U.S. AI industry. According to a study from Georgetown’s Center for Security and Emerging Technology, 66% of the 50 “most promising” U.S.-based AI startups on Forbes’ 2019 “AI 50” list had an immigrant founder. A 2023 analysis by the National Foundation for American Policy found that 70% of full-time graduate students in fields related to AI are international students. Ashish Vaswani, who moved to the U.S. to study computer science in the early 2000s, is one of the co-creators of the transformer, the seminal AI model architecture that underpins chatbots like ChatGPT. One of the co-founders of OpenAI, Wojciech Zaremba, earned his doctorate in AI from NYU on a student visa. The U.S.’s immigration policies, cutbacks in grant funding, and hostility toward certain sciences have many researchers contemplating a move out of the country. In a Nature poll of over 1,600 scientists, 75% said they were considering leaving for jobs abroad.
-
TECHCRUNCH.COMChinese AI startup Manus reportedly gets funding from Benchmark at $500M valuation
Chinese startup Manus AI, which builds tools related to AI agents, has picked up $75 million in a funding round led by Benchmark at a roughly $500 million valuation, according to Bloomberg. The company will use the money to expand to new markets, including the U.S., Japan, and the Middle East, Bloomberg noted, citing people familiar with the matter. Bloomberg’s report suggests that the fresh round has quintupled the valuation of Manus, which previously raised somewhere north of $10 million from backers including Tencent and HSG (formerly Sequoia China). Manus came into the spotlight in March, when the company launched a demo of a general AI agent that could complete various tasks. (In TechCrunch’s testing, it didn’t work quite as well as advertised.) The company later launched paid subscription plans costing between $39 and $199 per month.
-
TECHCRUNCH.COMData breach at Connecticut’s Yale New Haven Health affects over 5 million
A data breach at Connecticut’s largest healthcare system, Yale New Haven Health, affects more than 5.5 million people, according to a legally required notice filed with the U.S. government’s health department. Yale New Haven said the March cyberattack allowed malicious hackers to obtain copies of patients’ personally identifiable information and some healthcare-related data. Per a notice on the healthcare system’s website, the stolen data varies by person, but can include patient names, dates of birth, postal and email addresses, phone numbers, race and ethnicity data, and Social Security numbers. The stolen data also includes patient types and medical record numbers. Local media quoted the healthcare system’s spokesperson as saying that the number of affected individuals “may change.” A spokesperson for Yale New Haven did not immediately comment when contacted by TechCrunch. This is the second major healthcare data breach confirmed this week, after Blue Shield of California revealed it had shared the health data of 4.7 million patients with Google over several years.
-
TECHCRUNCH.COMFaraday Future founder named co-CEO three years after being sidelined by internal probe
Troubled electric vehicle startup Faraday Future’s board of directors has appointed founder Jia Yueting as the company’s co-CEO, three years after he was sidelined following an internal probe into allegations of fraud — a probe that led to an investigation by the Securities and Exchange Commission that remains ongoing. Jia will serve alongside current CEO Matthias Aydt and will oversee Faraday’s finance, legal, and supply chain teams, the company announced in a press conference Thursday. Aydt is a longtime Faraday Future employee who was once placed on probation after he offered to pay a Faraday Future board member up to $700,000 to resign in the middle of a months-long power struggle over the company. Jia’s appointment comes just one month after Faraday Future named Jia’s nephew Jerry Wang as president of the EV startup. Wang resigned in 2022 as a result of the internal probe because of a “failure to cooperate with the investigation,” according to filings with the Securities and Exchange Commission. Faraday Future was founded by Jia in 2014 as he looked to build on what was, at the time, a successful electronics and media streaming empire in China. That empire collapsed, and Jia self-exiled to the U.S. to focus on Faraday Future. The company has spent the last decade and over $3 billion to develop an ultra-luxury EV called the FF91, but it has only sold around a dozen of them to date, and has been accused in lawsuits of misrepresenting some of those sales.
-
TECHCRUNCH.COMPerplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads
Perplexity doesn’t just want to compete with Google, it apparently wants to be Google. CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app, so it can sell premium ads. “That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.” And work-related queries won’t help the AI company build an accurate-enough dossier. “On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained. Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them. “We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said. The browser, named Comet, has suffered setbacks but is on track to launch in May, Srinivas said. He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today. That’s why it built a browser and a mobile operating system. Indeed, Perplexity is attempting something in the mobile world, too. It’s signed a partnership with Motorola, announced Thursday, where its app will be pre-installed on the Razr series and can be accessed through Moto AI by typing “Ask Perplexity.” Perplexity is also in talks with Samsung, Bloomberg reported. Srinivas didn’t flat-out confirm that, though he did reference on the podcast the Bloomberg article, published earlier this month, that discussed both partnerships. Obviously, Google isn’t the only one watching users online to sell ads. Meta’s ad-tracking technology, the Pixel, which is embedded on websites across the internet, is how Meta gathers data, even on people who don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default. On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech. The irony of Srinivas openly explaining his browser-tracking, ad-selling ambitions this week also can’t be overstated. Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome. Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google were forced to sell.
-
TECHCRUNCH.COMTechCrunch StrictlyVC in Athens in May will feature a special guest: Greece’s Prime Minister
We’re thrilled to announce that Greek Prime Minister Kyriakos Mitsotakis will be joining us at our upcoming StrictlyVC event in Athens, co-hosted with Endeavor, on Thursday night, May 8, at the stunning Stavros Niarchos Foundation Cultural Center. For those who might not be familiar with his background, Mitsotakis brings a fascinating blend of experiences to the table. Before entering politics, he worked at both McKinsey and Chase Investment Bank, giving him firsthand experience in the business world that many operators throughout the startup ecosystem can appreciate. The youngest of four children, he also has some Silicon Valley-esque academic credentials – he headed to Harvard, then to Stanford for a master’s degree in international relations, and finally nabbed an MBA at Harvard Business School – and says his education has long shaped his vision for Greece’s future. Mitsotakis has also been championing Greece’s tech transformation for many years. In fact, after navigating the country through the pandemic, he has doubled down on positioning Athens as an emerging tech hub, recently introducing initiatives to attract international talent, including tax incentives and reforms aimed at cutting bureaucratic red tape for new businesses. The Prime Minister comes from a political family — his father was prime minister and his sister was mayor of Athens — but he has carved out his own reputation as a reformer focused on modernizing the Greek economy. His administration has been particularly interested in how tech can help diversify traditional Greek strengths like shipping and tourism. StrictlyVC events are kept small by design, giving investors, founders, and ecosystem builders a unique opportunity to engage directly with power players like the Prime Minister, so if you want to ask about his government’s vision for Greece’s tech future, and how the country fits into the broader European innovation landscape, this could be your chance. You can check out more details here to learn more about the agenda and other speakers (you can also buy tickets while they are still available). Registration is now open for what promises to be a fun evening filled with illuminating discussions, but this chat — with one of Europe’s most interesting political leaders about Greece’s emerging technology narrative — is definitely one you won’t want to miss. Register for your StrictlyVC Greece ticket here.
-
TECHCRUNCH.COMOpenAI rolls out a ‘lightweight’ version of its ChatGPT deep research tool
OpenAI is bringing a new “lightweight” version of its ChatGPT deep research tool, which scours the web to compile research reports on a topic, to ChatGPT Plus, Team, and Pro users, the company announced Thursday. The new lightweight deep research, which will also come to free ChatGPT users starting today, is powered by a version of OpenAI’s o4-mini model, OpenAI says. It’s not quite as capable as the “full” deep research, but OpenAI claims it’s cheaper to serve and thus enables the company to raise usage limits. “Responses will typically be shorter while maintaining the depth and quality you’ve come to expect,” OpenAI said in a series of posts on X. “Once limits for the original version of deep research are reached, queries automatically default to the lightweight version.” There’s been a raft of deep research tools launched recently across chatbots including Google’s Gemini, Microsoft’s Copilot, and xAI’s Grok. Driving them are reasoning AI models, which possess the ability to think through problems and fact-check themselves — skills arguably important for conducting in-depth research on a subject. ChatGPT’s lightweight deep research will come to Enterprise and educational users next week with the same usage levels as Team users, OpenAI says.
-
TECHCRUNCH.COMBezos-backed Slate Auto debuts analog EV pickup truck that is decidedly anti-Tesla
A new American electric vehicle startup called Slate Auto has made its debut, and it’s about as anti-Tesla as it gets. It’s affordable, deeply customizable, and very analog. It has manual windows and it doesn’t come with a main infotainment screen. Heck, it isn’t even painted. It can also transform from a two-seater pickup to a five-seater SUV. The three-year-old startup revealed its vehicle during an event Thursday night in Long Beach, California, and promised the first trucks would be available to customers for under $20,000 with the federal EV tax credit by the end of 2026. The event comes just a few weeks after TechCrunch revealed details of Slate Auto’s plans to enter the U.S. EV market and build its trucks in Indiana, and reported that the enterprise is financially backed by Amazon founder Jeff Bezos. The auto industry “has been so focused on autonomy and technology in the vehicle, it’s driven prices to a place that most Americans simply can’t afford,” chief commercial officer Jeremy Snyder said during the event, which Inside EVs live streamed. “But we’re here to change that.” “We are building the affordable vehicle that has long been promised but never been delivered,” CEO Chris Barman added.

The Specs
Slate isn’t saying exactly how much its truck will cost — multiple sources have told TechCrunch over the last few weeks that the company has gone back and forth on the number, and so much can change between now and a late 2026 release date. The base version of Slate’s truck will squeeze 150 miles out of a 52.7kWh battery pack, which will power a single 150kW motor on the rear axle. For folks who get a little spooked at that number, Slate is offering a larger battery pack that it says will have about 240 miles of range. It will charge using a North American Charging Standard port, the standard Tesla established that almost all major automakers now use. The truck comes with 17-inch wheels and a five-foot bed, and has a projected 1,400-pound payload capacity and a 1,000-pound towing capacity. Since it’s an EV, there’s no engine up front. In its place there’s a front trunk (or frunk) with 7 cubic feet of storage space, which happens to have a drain in case the owner wants to fill it with ice for a tailgate party. That towing capacity is lower than that of the more capable Ford F-150, and is even less than that of the smaller Ford Maverick, which can tow around 1,500 pounds. Speaking of the Ford Maverick, Slate’s truck is smaller. The Slate EV has a wheelbase of 108.9 inches and an overall length of 174.6 inches. The Maverick has a 121.1-inch wheelbase and an overall length of 199.7 inches. Everything else about the base version of the truck is awfully spare — and that’s the point. Slate is really maximizing the idea of a base model, and setting customers up to pay to customize the EV to their liking.

Custom… everything
Slate is deeply committed to the idea of customization, which sets it apart from any other EV startup (or traditional automaker). The company said Thursday it will launch with more than 100 different accessories that buyers can use to personalize the truck to their liking. If that’s overwhelming, Slate has curated a number of different “starter packs” that interested buyers can choose from. The truck doesn’t even come painted. Slate is instead playing up the idea of wrapping its vehicles, something executives said they will sell in kits.
Buyers can either have Slate do that work for them, or put the wraps on themselves. This not only adds to the idea of a buyer being able to personalize their vehicle, but it also cuts out a huge cost center for the company. It means Slate won’t need a paint shop at its factory, allowing it to spend less to get to market, while also avoiding one of the most heavily regulated parts of vehicle manufacturing. Slate is telling customers that they can name the car whatever they want, offering the ability to purchase an embossed wrap for the tailgate. Otherwise, the truck is just referred to as the “Blank Slate.” As TechCrunch previously reported, the customization piece is central to how the company hopes to make up margin on what is otherwise a relatively dirt-cheap vehicle. But it’s also part of the friendly pitch Slate is making to customers. Barman said Thursday that people can “make the Blank Slate yours at the time of purchase, or as your needs and finances change over time.” It’s billing the add-ons as “easy DIY” projects that “non-gearheads” can tackle, and says it will launch a suite of how-to resources under the billing of Slate University. “Buy your accessories, get them delivered fast, and install them yourself with the easy how-to videos in Slate U, our content hub,” the website reads. “Don’t want to go the DIY route? A Slate authorized partner can come and do it for you.” The early library of customizations on Slate’s website ranges from functional to cosmetic. Buyers can add infotainment screens, speakers, roof racks, light covers, and much more. The most significant are the options that let buyers “transform” the truck into roomier SUV form factors. But these aren’t permanent decisions. Slate says people will be able to change their vehicle into, and back from, an SUV if they like — “no mechanics certification required.” All that said, Slate’s truck comes standard with some federally mandated safety features such as automatic emergency braking, airbags, and a backup camera.

Buckle up
The road to building a successful American automotive startup is littered with failures. In the last few years, Canoo, Fisker, and Lordstown Motors have all filed for bankruptcy, and that’s just to name a few. The companies that are still around, like Rivian and Lucid Motors, are hemorrhaging money in an attempt to get high-volume, more affordable models to market. Slate is a total inversion of that approach. It’s going after a low-cost EV first and foremost, and hopes to make that business viable by supplementing it with money from this deep customization play. But, much like Rivian and Lucid Motors, it also has deep-pocketed backers. It has raised more than $111 million so far (the exact figure is still not public). And, aside from Bezos, it has taken money from Mark Walter, Guggenheim Partners CEO and controlling owner of the LA Dodgers, as TechCrunch reported this month. The company has hired nearly 400 employees in service of accomplishing all of its ambitious goals, and is currently trying to hire more. Slate arguably could not have picked a more volatile time to make its debut, but it’s also focused on domestic manufacturing, and may be insulated from some of the turmoil facing other startups and established automakers. “We believe vehicles should be affordable and desirable,” Barman said Thursday, adding that Slate’s truck “is a vehicle people are actually going to love and be proud to own.”
-
TECHCRUNCH.COMAnthropic CEO wants to open the black box of AI models by 2027
Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, Amodei set an ambitious goal for Anthropic to reliably detect most AI model problems by 2027. Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has made early breakthroughs in tracing how models arrive at their answers — but emphasizes that far more research is needed to decode these systems as they grow more powerful. “I am very concerned about deploying such systems without a better handle on interpretability,” Amodei wrote in the essay. “These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.” Anthropic is one of the pioneering companies in mechanistic interpretability, a field that aims to open the black box of AI models and understand why they make the decisions they do. Despite the rapid performance improvements of the tech industry’s AI models, we still have relatively little idea how these systems arrive at decisions. For example, OpenAI recently launched new reasoning AI models, o3 and o4-mini, that perform better on some tasks but also hallucinate more than its other models — and the company doesn’t know why. “When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate,” Amodei wrote in the essay. In the essay, Amodei notes that Anthropic co-founder Chris Olah says AI models are “grown more than they are built.” In other words, AI researchers have found ways to improve AI model intelligence, but they don’t quite know why those methods work. Amodei says it could be dangerous to reach AGI — or as he calls it, “a country of geniuses in a data center” — without understanding how these models work. In a previous essay, Amodei claimed the tech industry could reach such a milestone by 2026 or 2027, but believes we’re much further out from fully understanding these AI models. In the long term, Amodei says Anthropic would like to, essentially, conduct “brain scans” or “MRIs” of state-of-the-art AI models. These checkups would help identify a wide range of issues in AI models, including their tendencies to lie or seek power, or other weaknesses, he says. This could take five to 10 years to achieve, but these measures will be necessary to test and deploy Anthropic’s future AI models, he added. Anthropic has made a few research breakthroughs that have allowed it to better understand how its AI models work. For example, the company recently found ways to trace an AI model’s thinking pathways through what it calls circuits. Anthropic identified one circuit that helps AI models understand which U.S. cities are located in which U.S. states. The company has only found a few of these circuits, but estimates there are millions within AI models. Anthropic has been investing in interpretability research itself and recently made its first investment in a startup working on interpretability.
While interpretability is largely seen as a field of safety research today, Amodei notes that, eventually, explaining how AI models arrive at their answers could present a commercial advantage. In the essay, Amodei called on OpenAI and Google DeepMind to increase their research efforts in the field. Beyond the friendly nudge, Anthropic’s CEO asked governments to impose “light-touch” regulations to encourage interpretability research, such as requirements for companies to disclose their safety and security practices. In the essay, Amodei also says the U.S. should put export controls on chips to China in order to limit the likelihood of an out-of-control, global AI race. Anthropic has always stood out from OpenAI and Google for its focus on safety. While other tech companies pushed back on California’s controversial AI safety bill, SB 1047, Anthropic issued modest support and recommendations for the bill, which would have set safety reporting standards for frontier AI model developers. In this case, Anthropic seems to be pushing for an industry-wide effort to better understand AI models, not just an effort to increase their capabilities.
-
TECHCRUNCH.COMWait, how did a decentralized service like Bluesky go down?
It turns out that decentralized social networks can go down, too. On Thursday evening, the decentralized social network Bluesky experienced a significant outage, leaving users unable to load the app on both the web and mobile devices for roughly an hour. According to a message on Bluesky’s status page, the company was aware of the outage, which it attributed to “Major PDS Networking Problems.” (PDS stands for personal data servers.) The first status message was posted at 6:55 PM ET, and a second one indicating that a fix was being applied was shared soon after, at 7:38 PM ET. The question many may be asking now is: how did this decentralized social network go down? Isn’t it … decentralized? Isn’t one of the perks of decentralization that there’s not a single point of failure? Despite the platform’s decentralized nature, the majority of Bluesky users today interact with the service via Bluesky’s official app, powered by the AT Protocol. While in theory anyone can run the various parts of the infrastructure that make up the protocol, including PDSes, relays, and other components, it’s still early days for the social network, so few have done so. Those who did, however, were not impacted by the outage. In time, the idea is that many communities will be built on Bluesky, some with their own infrastructure, moderation services, and even client applications. (One example is the work that the Blacksky team is doing to create safer, more welcoming online spaces that take advantage of these decentralized tools.) Eventually, the hope is that Bluesky will be one of many entities that run the infrastructure needed to support the growing number of applications built on the AT Protocol. In the near term, however, an outage impacting Bluesky’s infrastructure will be felt more broadly. The outage stirred up some of the rivalry between Bluesky and another decentralized social network, Mastodon, which runs on a different social networking protocol called ActivityPub. Mastodon users were quick to point to the outage to make jokes or jabs at Bluesky’s approach to decentralization. One Mastodon user, Luke Johnson, wrote, “see how the mighty Bluesky crumbles while the Raspberry Pi running Mastodon under my bed just keeps chugging along” — a reference to how Mastodon can run even on tiny machines that users themselves configure. Or, as another Mastodon user joked, “nice decentralization ya got there.” In any event, Bluesky’s outage was resolved shortly after it began, and the service is back up and running.
-
TECHCRUNCH.COMHow do you define cheating in the age of AI?
This AI startup raised $5.3M to help people “cheat on everything.” But in the age of AI, how do you define cheating? Columbia University recently suspended student Roy Lee for building a tool to help people cheat on engineering interviews. He’s been making waves on X after posting a long thread detailing the saga and how he and his co-founder, Neel Shanmugam, have now turned that product into a startup called Cluely.
-
TECHCRUNCH.COMIntel reverses course, opts not to spin out Intel Capital
Semiconductor giant Intel won’t spin out its venture arm, Intel Capital, after all. During Intel’s Q1 earnings call Thursday, Intel CEO Lip-Bu Tan said the company has reversed its decision to spin out its 34-year-old venture arm. Instead, Intel Capital will remain internal and continue to invest with Intel’s interests in mind. “We have made the decision not to spin off Intel Capital, but to work with the team to monetize our existing portfolio, while being more selective on new investments that support the strategy we need to get our balance sheet healthy and start the process of deleveraging this year,” Tan said on the call. It’s a stark change in Intel’s plans for Intel Capital. Intel announced in January that Intel Capital was going to strike out on its own. Shortly after the announcement, Intel Capital VP and senior managing director Mark Rostick told TechCrunch that the firm had considered spinning out multiple times over the years. “We thought our track record merited attention from outside investors,” Rostick told TechCrunch. “We had done really well, even while, you know, a lot of the venture industry hasn’t been able to realize exits, we’d had some success doing that, so we felt like we could position ourselves as a bit of an outlier there.” Talks about spinning out got more serious last year — and reportedly had support from Intel’s ex-CEO Pat Gelsinger. The original plan was for Intel Capital to become independent by the third quarter of this year; Intel would remain an investment partner. Now, it seems that won’t come to pass.
-
TECHCRUNCH.COMOpenAI may be developing its own social platform but who’s it for?
OpenAI is reportedly building its own X-like social network. The project is still in the early stages, but there’s an internal prototype focused on ChatGPT’s image generation that contains a social feed, The Verge reports. A social app would give OpenAI its own unique, real-time data that X and Meta already use to help train their AI models.
-
TECHCRUNCH.COMThreads officially moves to Threads.com and updates its web app
Instagram Threads, Meta’s newest social network and X competitor, is officially relocating from the website Threads.net to Threads.com. The transition will coincide with a handful of quality-of-life improvements for the Threads web app, including features to more easily access custom feeds, saved posts, and likes, as well as other tools for creating new columns, copying posts for resharing, finding your favorite creators from X on Threads, and more. Meta had initially launched its new social app in July 2023 on the URL Threads.net, because a Sequoia-backed Slack-alternative startup owned the Threads.com domain at the time. (That startup sold to Shopify last year.) In September 2024, Meta acquired the Threads.com domain name and later began redirecting the URL Threads.com to Threads.net. Starting today, Meta explains, users will no longer be redirected from the .com to the .net; it will be the other way around. Going forward, if you type Threads.com in your browser, you’ll go directly to your Threads home screen without being redirected, while those who type Threads.net will be redirected to Threads.com. The change gives Meta a more prominent and better-remembered URL for its social app, which now reaches over 320 million monthly active users, as of Meta’s last public earnings announcement in January. The rebrand of sorts may allow the app to better compete with its rival X, which also has a memorable (and simple!) domain name. In addition to this change, Instagram head Adam Mosseri on Thursday announced a few other minor updates coming to the Threads web app, which is often used by creators. He said users will now see their custom feeds appear in the web app in the same order as they appear on the mobile app. Plus, users will now be able to access their liked and saved posts via the main menu instead of having to create a pinned column to see them. Another new addition allows users to copy a Threads post as an image instead of having to screenshot it. This will make it easier to share Threads posts in other apps, like Instagram, Meta thinks. Threads users will also now be able to add a column by clicking a new column icon on the right side of the screen. And they’ll be able to click a plus “+” button in the bottom-right to open a new window and compose a post. There’s also a new feature that allows people to find and follow the same creators they previously followed on X. This feature was introduced earlier this month and works by having users download an archive of their X data, which is uploaded to Threads. Those who previously had access to the feature were shown a pop-up saying they could now “Find popular creators from X.” The feature remains in testing, Meta says.
-
TECHCRUNCH.COM
Anthropic is launching a new program to study AI ‘model welfare’
Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out the possibility. On Thursday, the AI lab announced that it has started a research program to investigate — and prepare to navigate — what it’s calling “model welfare.” As part of the effort, Anthropic says it’ll explore things like how to determine whether the “welfare” of an AI model deserves moral consideration, the potential importance of model “signs of distress,” and possible “low-cost” interventions. There’s major disagreement within the AI community on what human characteristics models “exhibit,” if any, and how we should “treat” them. Many academics believe that AI today can’t approximate consciousness or the human experience, and won’t necessarily be able to in the future. AI as we know it is a statistical prediction engine. It doesn’t really “think” or “feel” as those concepts have traditionally been understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways to extrapolate to solve tasks. As Mike Cook, a research fellow at King’s College London specializing in AI, recently told TechCrunch in an interview, a model can’t “oppose” a change in its “values” because models don’t have values. To suggest otherwise is us projecting onto the system. “Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI,” Cook said. “Is an AI system optimizing for its goals, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use regarding it is.” Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he thinks AI amounts to an “imitator” that “[does] all sorts of confabulation[s]” and says “all sorts of frivolous things.” Yet other scientists insist that AI does have values and other human-like components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans in certain scenarios. Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated “AI welfare” researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who’s leading the new model welfare research program, told The New York Times that he thinks there’s a 15% chance Claude or another AI is conscious today.) In a blog post Thursday, Anthropic acknowledged that there’s no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant ethical consideration. “In light of this, we’re approaching the topic with humility and with as few assumptions as possible,” the company said. “We recognize that we’ll need to regularly revise our ideas as the field develops.”
-
TECHCRUNCH.COM
Dropbox adds new features to Dash, its AI-powered search tool
Companies like Google and Microsoft have equipped their productivity suites with AI features and assistants, while startups such as ClickUp and ReadAI have focused heavily on building AI integrations and search capabilities. In line with this growing trend of infusing digital work suites with AI, Dropbox on Thursday upgraded its AI search tool, Dash, first introduced in 2023. The company is adding AI “understanding” of different types of content in Dash, which means users can search across audio, video, and images in addition to text. The company is also adding people search to let users search for a person who worked on a specific project or look for a subject-matter expert. Last year, Dropbox unveiled Dash for Business to let enterprises use AI search. This year, it’s improving Dash’s enterprise tooling by adding support for IT admins to exclude some sensitive documents from the search results. Dropbox will allow users to search for different media formats via natural language queries. Dropbox already has functions for summarizing documents using its AI. With this new release, the company is introducing new writing tools that leverage summaries from different data sources to create new documents and presentations. The company said these tools, which live in Dash, can collate information from email, meeting notes, and existing documents to create project plans, memos, or briefs. The core idea is that users won’t have to jump from one app to another to read some info and add it to a document. What’s more, Dropbox is adding new integrations to Dash, including integrations for communication tools like Slack, Zoom, and Microsoft Teams, along with project management and creative tools like Figma, Canva, and Jira. This will help users search for information across their projects on different platforms, according to Dropbox. As AI vendors release new AI models, companies working in the productivity and workforce sectors are realizing that it’s getting easier to have AI look through a lot of information, summarize it, and generate new content based on that. The challenge for these companies is building features quickly enough while integrating with other platforms to keep their customers happy.
-
TECHCRUNCH.COM
This tool estimates how much electricity your chatbot messages consume
Ever wonder how much electricity you’re using when you prompt, or thank, an AI model? Hugging Face engineer Julien Delavande did, so he built a tool to help arrive at the answer. AI models consume energy each time they’re run. They’re run on GPUs and specialized chips that need a lot of power to carry out the associated computational workloads at scale. It’s not easy to pin down model power consumption, but it’s widely expected that growing usage of AI technologies will drive electricity needs to new heights in the next couple of years. The demand for more power to fuel AI has led some companies to pursue environmentally unfriendly strategies. Tools like Delavande’s aim to bring attention to this, and perhaps give some AI users pause. “Even small energy savings can scale up across millions of queries — model choice or output length can lead to major environmental impact,” Delavande and the tool’s other creators wrote in a statement. Delavande’s tool is designed to work with Chat UI, an open-source front-end for models like Meta’s Llama 3.3 70B and Google’s Gemma 3. The tool estimates the energy consumption of messages sent to and from a model in real time, reporting consumption in watt-hours or joules. It also compares model energy usage to that of common household appliances, like microwaves and LEDs. According to the tool, asking Llama 3.3 70B to write a typical email uses approximately 0.1841 watt-hours — equivalent to running a microwave for 0.12 seconds or using a toaster for 0.02 seconds. It’s worth remembering that the tool’s estimates are only that — estimates. Delavande makes no claim that they’re incredibly precise. Still, they serve as a reminder that everything — chatbots included — has a cost. “With projects like the AI energy score and broader research on AI’s energy footprint, we’re pushing for transparency in the open source community. One day, energy usage could be as visible as nutrition labels on food!” Delavande and his co-creators wrote.
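The arithmetic behind this kind of estimate is simple: energy in watt-hours is roughly the average power draw of the hardware multiplied by the generation time, divided by 3,600, and the appliance comparison is the same energy re-expressed as seconds of use at that appliance's wattage. Below is a rough Python sketch of that calculation; the GPU power draw, generation time, and microwave wattage are illustrative assumptions, not figures from Delavande's tool.

# Back-of-the-envelope energy estimate for a single chatbot reply.
# All numbers below are illustrative assumptions, not the tool's methodology.

def energy_wh(avg_power_watts: float, seconds: float) -> float:
    # Watt-hours = watts * seconds / 3600
    return avg_power_watts * seconds / 3600.0

# Hypothetical inference run: a GPU drawing ~400 W for 1.5 s to draft a short email.
reply_wh = energy_wh(avg_power_watts=400.0, seconds=1.5)

# Re-express the same energy as seconds of microwave use (assume a 1,000 W microwave).
microwave_seconds = reply_wh * 3600.0 / 1000.0

print(f"Estimated energy per reply: {reply_wh:.4f} Wh")
print(f"Equivalent microwave runtime: {microwave_seconds:.2f} s")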
-
TECHCRUNCH.COM
British startup Isembard lands $9M to reshore manufacturing for critical industries
Geopolitical pressures are accelerating demand in many countries and regions to reshore — that is, to rebuild critical industrial infrastructure and bring back businesses that had moved or outsourced some or all of their operations to cheaper countries. But that is easier said than done. In the key area of precision manufacturing, for example, most countries in the West are not set up to handle the current production demands that businesses face. This is the challenge Isembard aims to address. The British startup plans to create a network of factories across several Western locations. CEO Alexander Fitzgerald told TechCrunch that the first of these began operating in London in January and can already respond to requests for high-precision parts. The company has yet to disclose further locations. The aim is to target companies that may not be sinking billions of capex into their own factories, but typically would have contracted with a manufacturer to produce on their behalf. “Let’s say you are making an uncrewed aerial system, like a drone,” said Fitzgerald. “You’ll send us a design for some key parts for that in a 3D file. We’ll give you a quote for how fast we can do that, and the price. And then we machine that part out of whatever material is required, and we ship it to you. And sometimes maybe we’ll do the actual final assembly.” Isembard will aim for economies of scale across its own operations, too, with a single proprietary software layer, MasonOS, connecting and powering its facilities. This isn’t fundamentally different from sending that very same request to a factory in Asia, but it aligns with the rising demand for more local, resilient, and greener supply chains. Fitzgerald believes that British legacy suppliers will struggle to keep pace with the bigger reshoring swing: Supply chains are fragmented, skilled operators have retired or moved to different roles, and factories are outdated — all outcomes brought about and furthered by supply chains moving to China and other countries over past years. By leveraging software and automation, Isembard believes it can offer a viable alternative to the current state of affairs, while also presenting options that happen to be faster and cheaper. This pitch helped the startup secure a £7 million seed round (approximately $9 million) led by Notion Capital, with participation from 201 Ventures, Basis Capital, Forward Fund, Material Ventures, Neverlift Ventures and NP-Hard Ventures, as well as angels including EU Inc promoter Andreas Klinger and SpaceForge founder Joshua Western. Isembard’s go-to-market strategy initially focuses on aerospace, defense, and energy. Fitzgerald declined to name clients, but he said the company saw most of its initial traction come from defense and fast-growing startups. He said he and his team are also in conversations with primes and government bodies. With just 12 employees, Isembard is still small. That’s in part because, up to now, it was self-funded with the proceeds of Fitzgerald’s first exit — he sold his previous company Cuckoo to Giganet in 2022. But it’s also because he purposely chose a less capital-intensive route than U.S.-based automation startup Hadrian, which raised some $216.5 million in 2024 to modernize parts manufacturing.
“We take the view that it takes too long, too much capex, and too much concentration of talent in one single place to build these large, 100,000 square-foot factories,” he said. “What we’re actually doing is a distributed factory model where we have lots of much smaller units, but all with the same operating model technology and automation.” That is a reference to MasonOS, the proprietary system that powers Isembard’s plants, which will do “everything from quoting and estimating work to a customer, through to managing our own supply chain, automating scheduling and the prioritization, but also the core manufacturing and how you code the machines themselves,” Fitzgerald said. “At the moment, the problem is either that’s all paper-based or it’s all software built in the 70s,” he said. Despite this modern software layer, Isembard is very much an engineering-focused company. With a minor spelling tweak due to the original already being in use, its name is a nod to British industrialist and engineer Isambard Kingdom Brunel, known for his work during the Industrial Revolution. But it also takes a page from his father, as told by the startup in its manifesto. “When Isambard Kingdom Brunel’s father saw British soldiers returning from the Peninsula War with injured feet due to shoddy footwear suppliers,” the story goes, “he founded a shoe factory.” This reference is meant to reflect Isembard’s spirit and ambition, but it is no accident that it’s about soldiers. None of Fitzgerald’s family was in the military, but he “always had a sense of patriotism” and has been a reservist since 2016. This inspired Isembard, but the company’s ambitions go beyond the U.K. and Europe, potentially extending to North America, Australia, and New Zealand. “We want to help solve industrialization for the West,” he said.
-
TECHCRUNCH.COM
Here are the 19 US AI startups that have raised $100M or more in 2025
Last year was monumental for the AI industry in the U.S. and beyond. There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch; three companies raised more than one “megaround,” and seven companies raised rounds of $1 billion or larger. How will 2025 compare? It’s still the first half of the year, but so far it looks like 2024’s momentum will continue this year. There have already been multiple billion-dollar rounds this year, and more AI megarounds closed in the U.S. in Q1 2025 than in Q1 2024. Here are all the U.S. AI companies that have raised $100 million or more this year:

April
SandboxAQ closed a $450 million Series E round on April 4 that valued the AI model company at $5.7 billion. The round included Nvidia, Google, and Bridgewater Associates founder Ray Dalio, among other investors. Runway, which creates AI models for media production, raised a $308 million Series D round that was announced on April 3, valuing the company at $3 billion. It was led by General Atlantic. SoftBank, Nvidia, and Fidelity also participated.

March
AI behemoth OpenAI raised a record-breaking $40 billion funding round that valued the startup at $300 billion. This round, which closed on March 31, was led by SoftBank with participation from Thrive Capital, Microsoft, and Coatue, among others. On March 25, Nexthop AI, an AI infrastructure company, announced that it had raised a Series A round led by Lightspeed Venture Partners. The $110 million round also included Kleiner Perkins, Battery Ventures, and Emergent Ventures, among others. Cambridge, Massachusetts-based Insilico Medicine raised $110 million for its generative AI-powered drug discovery platform, as announced on March 13. This Series E round valued the company at $1 billion and was co-led by Value Partners and Pudong Chuangtou. AI infrastructure company Celestial AI raised a $250 million Series C round that valued the company at $2.5 billion. The March 11 round was led by Fidelity with participation from Tiger Global, BlackRock, and Intel CEO Lip-Bu Tan, among others. Lila Sciences raised a $200 million seed round as it looks to create a science superintelligence platform. The round was led by Flagship Pioneering. The Cambridge, Massachusetts-based company also received funding from March Capital, General Catalyst, and ARK Venture Fund, among others. Brooklyn-based Reflection.Ai, which looks to build superintelligent autonomous systems, raised a $130 million Series A round that values the 1-year-old company at $580 million. The round was led by Lightspeed Venture Partners and CRV. AI coding startup Turing closed a Series E round on March 7 that valued the startup, which partners with LLM companies, at $2.2 billion. The $111 million round was led by Khazanah Nasional with participation from WestBridge Capital, Gaingels, and Sozo Ventures, among others. Shield AI, an AI defense tech startup, raised $240 million in a Series F round that closed on March 6. This round was co-led by L3Harris Technologies and Hanwha Aerospace, with participation from Andreessen Horowitz and the US Innovative Technology Fund, among others. The round valued the company at $5.3 billion. AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion.
The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.

February
Together AI, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others. AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated. Abridge, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others. Eudia, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round, in addition to other VC firms and numerous angel investors. The round closed on February 13. AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022. AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.

January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round. Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.

This piece was updated on April 23 to include more deals. It has also been updated to remove a reference to Abridge being based in Pittsburgh; the company was founded there.
-
TECHCRUNCH.COM
OpenAI seeks to make its upcoming ‘open’ AI model best-in-class
Toward the end of March, OpenAI announced its intention to release its first “open” language model since GPT‑2 sometime this year. Now details about that model are beginning to trickle out from the company’s sessions with the AI developer community. Sources tell TechCrunch that Aidan Clark, OpenAI’s VP of research, is leading development of the open model, which is in the very early stages. OpenAI is targeting an early summer release and aims to make the model — a reasoning model along the lines of OpenAI’s o-series models — benchmark-topping among other open reasoning models. OpenAI is exploring a highly permissive license for the model with few usage or commercial restrictions, per TechCrunch’s sources. Open models like Llama and Google’s Gemma have been criticized by some in the community for their licenses’ onerous requirements — criticisms OpenAI is seemingly seeking to avoid. OpenAI is facing increasing pressure from rivals such as Chinese AI lab DeepSeek that have adopted an open approach to launching models. In contrast to OpenAI’s strategy, these “open” competitors make their models available to the AI community for experimentation and, in some cases, commercialization. It has proven to be a wildly successful strategy for some outfits. Meta, which has invested heavily in its Llama family of open AI models, said in early March that Llama has racked up over 1 billion downloads. Meanwhile, DeepSeek has quickly amassed a large worldwide user base and attracted the attention of domestic investors. Sources tell TechCrunch that OpenAI intends for its open model, which will be “text in, text out,” to run on high-end consumer hardware and possibly allow developers to switch its “reasoning” on and off, similar to reasoning models recently released by Anthropic and others. (Reasoning can improve accuracy, but at the cost of increased latency.) If the launch is well received, OpenAI may follow it up with additional models — potentially including smaller ones. In previous public comments, OpenAI CEO Sam Altman said he thinks OpenAI has been on the wrong side of history when it comes to open sourcing its technologies. “[I personally think we need to] figure out a different open source strategy,” Altman said during a Reddit Q&A in January. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.” Altman has also said that OpenAI’s upcoming open model will be thoroughly red-teamed and evaluated for safety. Sources tell TechCrunch that the company intends to release a model card for the model — a thorough technical report showing the results of OpenAI’s internal and external benchmarking and safety testing. “[B]efore release, we will evaluate this model according [to] our preparedness framework, like we would for any other model,” Altman said in a post on X last month. “[A]nd we will do extra work given that we know this model will be modified post-release.” OpenAI has raised the ire of some AI ethicists for reportedly rushing safety testing of recent models and failing to release model cards for others. Altman also stands accused of misleading OpenAI executives about model safety reviews prior to his brief ouster in November 2023. We’ve reached out to OpenAI for comment and will update this piece if we hear back.
-
TECHCRUNCH.COM
Government censorship comes to Bluesky, but not its third-party apps … yet
Government censorship has found its way to Bluesky, but there’s currently a loophole thanks to how the social network is structured. Earlier this month, Bluesky restricted access to 72 accounts in Turkey at the request of Turkish governmental authorities, according to a recent report by the Freedom of Expression Association. As a result, people in Turkey can no longer see these accounts, and their reach is limited. The report indicates that 59 Bluesky accounts were blocked on the grounds of protecting “national security and public order.” Bluesky also made another 13 accounts and at least one post invisible from Turkey. Given that many Turkish users migrated from X to Bluesky in the hopes of fleeing government censorship, Bluesky’s bowing to the Turkish government’s demands has raised questions among the community as to whether the social network is as open and decentralized as it claims to be. (Or whether it’s “just like Twitter” after all.) However, Bluesky’s technical underpinnings currently make bypassing these blocks easier than it would be on a network like X — even if it’s not quite as open as the alternative social network Mastodon, another decentralized X rival. A Mastodon user could move their account around to different servers to avoid censorship targeted at the original Mastodon instance (server) where they first made posts that attracted the censors. Users on the official Bluesky app can configure their moderation settings but have no way to opt out of the moderation service Bluesky provides. This includes its use of geographic labelers, like the newly added Turkish moderation labeler that handles the censorship of accounts mandated by the Turkish government. (Laurens Hof has a great breakdown of how this all works in more technical detail on The Fediverse Report.) Simply put, if you’re on the official Bluesky app and Bluesky (the company) agrees to censor something in your region, there’s no way to opt out of this to see the hidden posts or accounts.

Working around censorship in the Atmosphere
Other third-party Bluesky apps, which make up the larger open social web known as the Atmosphere, don’t have to follow these same rules. At least, not for now. Because Bluesky is built on top of the AT Protocol, third-party clients can create their own interfaces and views into Bluesky’s content without applying the same moderation choices. Meanwhile, the censored accounts in question aren’t banned from Bluesky infrastructure, like relays and personal data servers (which others outside the company can run, too). Instead, the accounts are moderated by the geographic labelers at the client level. Currently, Bluesky doesn’t require third-party apps to use its geographic moderation labelers, which would force the apps to geolocate their users and then apply the appropriate regional restrictions. That means any app that doesn’t implement the existing geographic labelers isn’t censoring these blocked Turkish accounts. In other words, apps like Skeets, Ouranos, Deer.social, Skywalker, and others can currently be used to bypass Turkish censors. This “solution” comes with several caveats, unfortunately. The absence of geographic labelers in these apps isn’t necessarily an intentional choice: adding them would be extra work on the developers’ part, and most simply haven’t bothered to implement them yet.
In addition, these third-party apps have much smaller user bases than the official Bluesky app, which allows them to fly under the radar of government censors. That also makes decisions like this less of a concern for the app developers — at least for the time being. If these third-party apps grew popular enough, a government like Turkey’s could also approach them and demand action. And if they failed to comply, they could risk their app being blocked in the country (e.g., several Bluesky app developers told us they won’t worry about adding geographic labelers until Apple approaches them about a potential removal from the App Store). Because avoiding labelers is seemingly not a permanent solution, one developer, Aviva Ruben, is building an alternative Bluesky client called Deer.social that works differently. Here, users can choose to entirely disable Bluesky’s official moderation service and labelers in favor of using other third-party labelers instead. Plus, the app allows users to configure their location manually in its settings — an option that would let users avoid geolocation-based blocks and censorship. “I like the current policy, but I do fear it will get more restrictive or change in the future — a great reason to continue pushing on alternative AppViews,” Ruben said, referencing the need for alternative ways to access and view Bluesky’s data. Though today’s government censorship concerns are focused on Turkey, Bluesky’s community has to prep for a future where any government, including the U.S., could request that the company hide posts beyond only those that are blatantly illegal, like CSAM. Ruben says Deer.social would add a “no location” option to the app at this point, so users could choose to avoid all geographic labelers. Despite these possible loopholes, censorship has arrived at Bluesky. And considering the official app reaches the largest number of people, this is a notable evolution.
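To make the labeler mechanics concrete: labels are applied by the client, so whether a post is hidden depends on which labeler services a given app consults, not on the post being removed from Bluesky's infrastructure. The sketch below illustrates that client-side decision with made-up Post and Label types and a hypothetical labeler name; it is not the actual AT Protocol SDK.

# Illustrative sketch of client-side label moderation, not the real AT Protocol SDK.
# The types and the "turkey-geo-labeler" name are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Label:
    labeler: str   # which labeler service emitted this label
    value: str     # e.g. "hide" for content the labeler says should be hidden

@dataclass
class Post:
    uri: str
    text: str
    labels: list[Label]

def visible_posts(posts: list[Post], subscribed_labelers: set[str]) -> list[Post]:
    """Hide a post only if a labeler the client subscribes to marked it hidden."""
    shown = []
    for post in posts:
        hidden = any(
            label.value == "hide" and label.labeler in subscribed_labelers
            for label in post.labels
        )
        if not hidden:
            shown.append(post)
    return shown

feed = [Post("at://example/post/1", "hello", [Label("turkey-geo-labeler", "hide")])]

# Official-app-like behavior: the geographic labeler is consulted, so the post is hidden.
print(len(visible_posts(feed, {"turkey-geo-labeler"})))  # prints 0

# A third-party client that skips the geographic labeler still shows the post.
print(len(visible_posts(feed, set())))  # prints 1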