• Cursor’s Anysphere nabs $9.9B valuation, soars past $500M ARR

    Anysphere, the maker of AI coding assistant Cursor, has raised $900 million at a $9.9 billion valuation, Bloomberg reported. The round was led by returning investor Thrive Capital, with participation from Andreessen Horowitz, Accel, and DST Global.
    The massive round is Anysphere’s third fundraise in less than a year. The 3-year-old startup secured its previous capital haul of $100 million at a pre-money valuation of $2.5 billion late last year, as TechCrunch was first to report.
    AI coding assistants, often referred to as “vibe coders,” have emerged as one of AI’s most popular applications, with Cursor leading the category. Anysphere’s annualized revenue (ARR) has been doubling approximately every two months, a person familiar with the company told TechCrunch. The company has surpassed $500 million in ARR, sources told Bloomberg, a 60% increase from the $300 million we reported in mid-April.
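    As a back-of-envelope check on the reported pace (illustrative arithmetic only, not company data; the tidy round numbers are assumptions), revenue that doubles every two months compounds at roughly 41% per month:

```python
import math

# Back-of-envelope growth arithmetic for the figures reported above.
# Illustrative only; the dates and round numbers are assumptions.

def growth_factor(doubling_months: float, period_months: float) -> float:
    """Compound growth multiplier over `period_months`, given that
    revenue doubles every `doubling_months` months."""
    return 2.0 ** (period_months / doubling_months)

# Doubling every ~2 months implies ~41% month-over-month growth:
monthly = growth_factor(2.0, 1.0)   # ~1.414

# The jump from ~$300M ARR (mid-April) to $500M+ is a ~1.67x step,
# which a two-month doubling pace would cover in under two months:
months_needed = 2.0 * math.log(500 / 300, 2)   # ~1.47 months
```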
    Cursor offers developers tiered pricing. After a two-week free trial, the company converts users into paying customers, who can opt for either a $20 Pro offering or a $40 monthly business subscription.
    Until recently, the majority of the company’s revenue came from individual user subscriptions, Bloomberg reported. However, Anysphere is now offering enterprise licenses, allowing companies to purchase the application for their teams at a higher price point.
    Earlier this year, the company was approached by OpenAI and other potential buyers, but Anysphere turned down those offers. The ChatGPT maker bought Windsurf, another fast-growing AI assistant, reportedly for $3 billion.
    TECHCRUNCH.COM
  • Don’t Automate the Wrong Thing: Lessons from Building Agentic Hiring Systems

    Pankaj Khurana, VP Technology & Consulting, Rocket
    May 19, 2025 · 3 Min Read
    Image: ElenaBs via Alamy Stock

    For the past four years, I’ve been building AI-powered tools that help recruiters do their job better. Before that, I was a recruiter myself -- reading resumes, making calls, living the grind. And here’s one thing I’ve learned from straddling both worlds: in hiring, automating the wrong thing can quietly erode everything that makes your process work.

    As engineering leaders, we’re constantly told to streamline and optimize. Move fast. But if you automate the wrong step -- like how candidates are filtered, scored, or messaged -- you might be replacing good human judgment with rigid shortcuts. And often, you won’t notice the damage until weeks later, when engagement plummets or teams stop trusting your system.

    The Allure of Automation

    Hiring is messy. Resumes come in all shapes. Job descriptions are vague. Recruiters are overworked. AI seems like a godsend. We start by automating outreach. Then scoring. Then matching. Eventually, someone asks: can this whole thing run without a person?

    But here’s the rub: many hiring decisions are deeply contextual. Should a product manager with a non-traditional background be fast-tracked for a high-growth SaaS role? That’s not a “yes/no” the system can decide for you.

    Early on at Rocket, we made that mistake. Our scoring engine prioritized resumes based solely on skills overlap. It was fast -- but completely off for roles that required nuance. We had to pause, rethink, and admit: “This isn’t working like we hoped.”

    What Agentic Systems Do Well

    I’m not anti-automation. Far from it. But it has to be paired with human review. We found that agentic systems -- AI tools with autonomy to assist but not decide -- were far more effective. Think copilots, not autopilots.
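    That assist-but-not-decide split can be sketched as a simple gate: the system may score and flag, but nothing candidate-facing happens without an explicit human decision. Everything below is a hypothetical illustration (the class names, the toy skills-overlap score, the 80% threshold as a flagging rule), not Rocket’s actual system:

```python
# Hypothetical sketch of an assist-but-not-decide hiring pipeline.
# Names, scoring, and threshold are illustrative, not Rocket's code.
from dataclasses import dataclass

def skills_overlap(resume_skills: set[str], role_skills: set[str]) -> float:
    """Naive skills-overlap score: fraction of required skills present."""
    if not role_skills:
        return 0.0
    return len(resume_skills & role_skills) / len(role_skills)

@dataclass
class Suggestion:
    candidate: str
    score: float
    draft_outreach: str
    approved: bool = False          # only a human may flip this

class CopilotPipeline:
    """The AI flags and drafts; a recruiter approves before anything is sent."""

    def __init__(self, role_skills: set[str], flag_threshold: float = 0.8):
        self.role_skills = role_skills
        self.flag_threshold = flag_threshold
        self.queue: list[Suggestion] = []

    def suggest(self, candidate: str, resume_skills: set[str], draft: str) -> None:
        score = skills_overlap(resume_skills, self.role_skills)
        if score >= self.flag_threshold:       # flag matches of 80% or more
            self.queue.append(Suggestion(candidate, score, draft))
        # Below-threshold candidates are NOT auto-rejected; they simply
        # aren't flagged, and stay visible for human review.

    def send_outreach(self, suggestion: Suggestion) -> str:
        if not suggestion.approved:
            raise PermissionError("Human review required before sending.")
        return f"sent to {suggestion.candidate}"
```

    A recruiter reviews the queue, edits the draft, and sets `approved` -- only then can `send_outreach` fire. The AI suggests; the recruiter decides.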
    For example, our system can:

    - Suggest better phrasing for job descriptions
    - Flag resumes that match roles 80% or more
    - Recommend outreach templates based on role and tone

    But it never auto-rejects or sends messages without review. The AI suggests; the recruiter decides. That balance makes all the difference.

    Lessons Learned: Where Automation Fails

    One of our biggest missteps? Automating outreach too heavily. We thought sending personalized AI-written emails at scale would boost response rates. It didn’t. Candidates sensed something off. The emails looked polished but felt cold. Engagement dropped. We eventually went back to having humans rewrite the AI drafts. That one shift nearly doubled our positive response rate. Why? Because candidates want to feel seen -- not sorted.

    A CIO’s Checklist: What Not to Automate

    If you’re leading an AI initiative in hiring, here’s a checklist we now swear by:

    - Don’t automate decisions that impact trust. Rejections, scores, hiring calls? Keep a human in the loop.
    - Avoid automating tasks with high context needs. A great candidate might not use trendy buzzwords. That doesn’t make them a bad fit.
    - Be careful with candidate-facing automation. Generic outreach harms brand perception.
    - Do automate the repetitive stuff. Parsing, meeting scheduling, drafts -- automate those and give time back to your team.

    Human-AI Collaboration Wins

    We saw the best outcomes when recruiters felt like they had an assistant -- not a competitor. Here’s one quick story: a recruiter used our AI to shortlist 10 profiles for a hard-to-fill GTM analyst role. She reviewed five, adjusted the messaging tone slightly, and got two responses in under a day. Same tools -- different mindset.

    Feedback loops mattered too. We built in ways for users to rate suggestions. The model kept improving -- and more importantly, people trusted it more.

    Final Thought: Think Like a System Designer

    If you’re building AI into your hiring stack, go beyond automation. Think augmentation.
    Don’t just ask, “Can this task be automated?” Instead, ask, “If I automate this, what do we lose in context, empathy, or nuance?”

    Agentic hiring systems can deliver speed and scale -- but only if we let people stay in control of what matters most.

    About the Author

    Pankaj Khurana is VP of Technology & Consulting at Rocket, an AI-driven recruiting firm. He has over 20 years of experience in hiring and tech and has led the development of agentic hiring tools used by top US startups.
    WWW.INFORMATIONWEEK.COM