• THEHACKERNEWS.COM
    The ROI of Security Investments: How Cybersecurity Leaders Prove It
    Nov 11, 2024 | The Hacker News | Cyber Resilience / Offensive Security

Cyber threats are intensifying, and cybersecurity has become critical to business operations. As security budgets grow, CEOs and boardrooms are demanding concrete evidence that cybersecurity initiatives deliver value beyond regulatory compliance. Just as you wouldn't buy a car that hadn't been through a crash test, security systems must be validated to confirm their value. There is an increasing shift toward security validation, which allows practitioners to safely use real exploits in production environments to accurately assess the effectiveness of their security systems and identify critical areas of exposure, at scale.

We met with Shawn Baird, Associate Director of Offensive Security & Red Teaming at DTCC, to discuss how he communicates the business value of his security validation practices and tools to upper management. Here is a drill-down into how Shawn made room for security validation platforms within an already tight budget, and how he translated technical security practices into tangible business outcomes that drove purchase decisions in his team's favor. Please note that all responses below are solely the opinions of Shawn Baird and do not represent the beliefs or opinions of DTCC and its subsidiaries.

Q: What value does Security Validation bring to your organization?

Security Validation is about putting your defenses to the test, not against theoretical risks but against actual real-world attack techniques. It's a shift from passive assumptions of security to active validation of what works. It tells me the degree to which our systems can withstand the same tactics cybercriminals use today. At DTCC, we've been doing security validation for a long time, but we were looking for tech that would serve as a performance amplifier. Instead of relying solely on expensive, highly skilled engineers to carry out manual validations across all systems, we could focus our elite teams on high-value, targeted red-teaming exercises. The automated platform has built-in TTP content for conducting tests, covering techniques like Kerberoasting, network scanning, and brute forcing, relieving the team of having to create it. Tests are executed even outside regular business hours, so we are not confined to standard testing windows. This approach meant we weren't stretching our security staff thin on repetitive tasks. Instead, they could focus on more complex attack scenarios and critical issues. Pentera gave us a way to maintain continuous validation across the board without burning out our most skilled engineers on tasks that could be automated. In essence, it has become a force multiplier for our team, and it goes a long way toward improving our ability to stay ahead of threats while optimizing the use of our top talent.

Q: How did you justify the ROI of an investment in an Automated Security Validation platform?

First and foremost, we see a direct increase in our team's productivity. Automating time-consuming manual assessments and testing tasks was a game changer. By shifting these repetitive, effort-intensive tasks to Pentera, our skilled engineers could focus on more complex work, and without additional headcount we could significantly expand the scope of tests. Second, we're able to reduce the cost of third-party contractors. Traditionally, we relied heavily on external expert contractors, which can be costly and often limited in scope. With human expertise built into a platform like Pentera, we reduced our dependence on expensive service engagements. Instead, we have internal staff - analysts with less expertise - running effective tests. Finally, there's a clear benefit of risk reduction. By continuously validating our security posture, we can significantly reduce the probability of a breach and the potential cost of one if it occurs. IBM's 2023 Cost of a Data Breach report confirms this, reporting an 11% reduction in breach costs for organizations using proactive risk management strategies. With Pentera, we achieved just that: less exposure, faster detection, and quicker remediation, all of which contributed to lowering our overall risk profile.

Q: What were some of the internal roadblocks or hurdles you encountered?

One of the key hurdles we faced was friction from the architectural review board. Understandably, they had concerns about running automated exploits on our network, even though the platform is 'safe-by-design'. The idea of running real-world attacks in production environments can be unnerving, especially for teams responsible for the stability of critical systems. To address this, we took a phased approach. We started by running the platform on a reduced attack surface, targeting less critical systems to demonstrate its safety and effectiveness. Next, we expanded its use during a red team engagement, running it alongside our existing testing processes. Over time, we're incrementally expanding the scope, proving the platform's reliability and safety at each stage. This gradual rollout helped build confidence without risking major disruptions, so trust in the platform is now fairly well established.

Q: How did you allocate the funds?

We allocated the funds for Pentera under the same line item as our red-teaming tools, grouped with other solutions like Rapid7 and vulnerability scanners. Positioning it alongside offensive security tools kept the budgeting process straightforward. We looked specifically at our cost for assessing the environment's susceptibility to a ransomware attack. Previously, we spent $150K annually on ransomware scans; with Pentera, we could test more frequently on the same budget. This reallocation of funds made sense because it hit the key criteria mentioned earlier: improving productivity by increasing our testing capacity without needing to hire, and reducing risk with more frequent, larger-scale testing, lowering the chances of a ransomware attack and limiting the damage if one occurs.

Q: What other considerations came into play?

A few other factors influenced our decision to invest in Automated Security Validation. Employee retention was a big one. As I said before, automating repetitive tasks kept our cybersecurity experts focused on more challenging, impactful work, which I believe has helped us retain that talent. Improvement in security operations was another point. Pentera helps us ensure our controls are properly tuned and validated, and it also helps coordination between red teams, blue teams, and the SOC. From a compliance standpoint, it made it easier to compile evidence for audits, allowing us to get through the process much faster than we would otherwise. Finally, cyber insurance is another area where Pentera has added financial value by enabling us to lower our premiums.

Q: Any advice for other security professionals trying to get budget for security validation?

The performance value of Automated Security Validation is clear. Most organizations don't have the internal resources to conduct mature red teaming. Whether you have a small security team or a mature offensive security practice like we do at DTCC, it's very likely that you do not have enough security experts to do a full assessment. And if you simply don't find anything (no proof of a malicious insider in your network), you can't demonstrate resilience, making it harder to achieve regulatory compliance. With Pentera, you have built-in TTPs, giving you a direct path to assess how well your organization responds to threats. Based on that validation, you can harden your infrastructure and address discovered vulnerabilities. The alternative, doing nothing, is far riskier. A breach can mean stolen IP, lost data, and potentially shut-down operations. The cost of the tool, on the other hand, buys the peace of mind of knowing you've reduced your exposure to real-world threats, and the ability to sleep better at night.

Watch the full on-demand webinar with Shawn Baird, Associate Director of Offensive Security & Red Teaming at DTCC, and Pentera Field CISO Jason Mar-Tang. This article is a contributed piece from one of our valued partners.
  • THEHACKERNEWS.COM
    Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation
    Nov 11, 2024 | Ravie Lakshmanan | Machine Learning / Vulnerability

Cybersecurity researchers have uncovered nearly two dozen security flaws spanning 15 different machine learning (ML) related open-source projects. These comprise vulnerabilities discovered on both the server and client side, software supply chain security firm JFrog said in an analysis published last week. The server-side weaknesses "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines," it said. The vulnerabilities, discovered in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, have been broken down into broader sub-categories: remotely hijacking model registries, hijacking ML database frameworks, and taking over ML pipelines.

A brief description of the identified flaws is below:

- CVE-2024-7340 (CVSS score: 8.8) - A directory traversal vulnerability in the Weave ML toolkit that allows for reading files across the whole filesystem, effectively allowing a low-privileged authenticated user to escalate their privileges to an admin role by reading a file named "api_keys.ibd" (addressed in version 0.50.8)
- An improper access control vulnerability in the ZenML MLOps framework that allows a user with access to a managed ZenML server to elevate their privileges from viewer to full admin, granting the attacker the ability to modify or read the Secret Store (no CVE identifier)
- CVE-2024-6507 (CVSS score: 8.1) - A command injection vulnerability in the Deep Lake AI-oriented database that allows attackers to inject system commands when uploading a remote Kaggle dataset, due to a lack of proper input sanitization (addressed in version 3.9.11)
- CVE-2024-5565 (CVSS score: 8.1) - A prompt injection vulnerability in the Vanna.AI library that could be exploited to achieve remote code execution on the underlying host
- CVE-2024-45187 (CVSS score: 7.1) - An incorrect privilege assignment vulnerability that allows guest users in the Mage AI framework to remotely execute arbitrary code through the Mage AI terminal server, because they are assigned high privileges and remain active for a default period of 30 days despite deletion

"Since MLOps pipelines may have access to the organization's ML Datasets, ML Model Training and ML Model Publishing, exploiting an ML pipeline can lead to an extremely severe breach," JFrog said. "Each of the attacks mentioned in this blog (ML Model backdooring, ML data poisoning, etc.) may be performed by the attacker, depending on the MLOps pipeline's access to these resources."

The disclosure comes over two months after the company uncovered more than 20 vulnerabilities that could be exploited to target MLOps platforms. It also follows the release of a defensive framework codenamed Mantis that leverages prompt injection to counter cyber attacks carried out by large language models (LLMs), with over 95% effectiveness.

"Upon detecting an automated cyber attack, Mantis plants carefully crafted inputs into system responses, leading the attacker's LLM to disrupt their own operations (passive defense) or even compromise the attacker's machine (active defense)," a group of academics from George Mason University said. "By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker's LLM, Mantis can autonomously hack back the attacker."
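The Weave flaw above is an instance of a well-understood bug class: path traversal, where user-supplied path segments such as ".." walk a file-serving endpoint out of its intended directory. As a minimal, hedged illustration of the standard defense, here is a generic Python containment check; the base directory and file names are illustrative only and have no connection to Weave's actual code.

    from pathlib import Path

    BASE_DIR = Path("/srv/app/user_files").resolve()  # hypothetical allowed root

    def safe_read(requested: str) -> bytes:
        # Resolve symlinks and ".." segments *before* checking containment.
        target = (BASE_DIR / requested).resolve()
        if not target.is_relative_to(BASE_DIR):  # Python 3.9+
            raise PermissionError(f"path escapes allowed root: {requested}")
        return target.read_bytes()

    # safe_read("reports/q3.txt")   -> allowed
    # safe_read("../../etc/passwd") -> PermissionError

A server that applies this kind of check before returning file contents cannot be walked out of its allowed directory, which is precisely what the CVE-2024-7340 traversal lets a low-privileged user do.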
  • THEHACKERNEWS.COM
    HPE Issues Critical Security Patches for Aruba Access Point Vulnerabilities
    Nov 11, 2024 | Ravie Lakshmanan | Vulnerability / Risk Mitigation

Hewlett Packard Enterprise (HPE) has released security updates to address multiple vulnerabilities impacting Aruba Networking Access Point products, including two critical bugs that could result in unauthenticated command execution. The flaws affect Access Points running Instant AOS-8 and AOS-10:

- AOS-10.4.x.x: 10.4.1.4 and below
- Instant AOS-8.12.x.x: 8.12.0.2 and below
- Instant AOS-8.10.x.x: 8.10.0.13 and below

The most severe among the six newly patched vulnerabilities are CVE-2024-42509 (CVSS score: 9.8) and CVE-2024-47460 (CVSS score: 9.0), two critical unauthenticated command injection flaws in the CLI service that could result in the execution of arbitrary code. "Command injection vulnerability in the underlying CLI service could lead to unauthenticated remote code execution by sending specially crafted packets destined to the PAPI (Aruba's Access Point management protocol) UDP port (8211)," HPE said in an advisory for both flaws. "Successful exploitation of this vulnerability results in the ability to execute arbitrary code as a privileged user on the underlying operating system."

To mitigate CVE-2024-42509 and CVE-2024-47460 on devices running Instant AOS-8 code, HPE advises enabling cluster security via the cluster-security command. For AOS-10 devices, the company recommends blocking access to UDP port 8211 from all untrusted networks. Also resolved by HPE are four other vulnerabilities:

- CVE-2024-47461 (CVSS score: 7.2) - An authenticated arbitrary remote command execution (RCE) flaw in Instant AOS-8 and AOS-10
- CVE-2024-47462 and CVE-2024-47463 (CVSS scores: 7.2) - Arbitrary file creation vulnerabilities in Instant AOS-8 and AOS-10 that lead to authenticated remote command execution
- CVE-2024-47464 (CVSS score: 6.8) - An authenticated path traversal vulnerability that leads to remote unauthorized access to files

As workarounds, users are urged to restrict access to the CLI and web-based management interfaces by placing them within a dedicated VLAN, and to control them via firewall policies at layer 3 and above. "Although Aruba Network access points have not previously been reported as exploited in the wild, they are an attractive target for threat actors due to the potential access these vulnerabilities could provide through privileged user RCE," Arctic Wolf said. "Additionally, threat actors may attempt to reverse-engineer the patches to exploit unpatched systems in the near future."
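For administrators triaging a fleet, the affected-version table above reduces to a simple version comparison. The following sketch encodes HPE's published thresholds; the parsing and branch mapping are my own illustrative choices, not an official HPE tool.

    # Last vulnerable build per branch, per the advisory above.
    LAST_VULNERABLE = {
        "10.4": (10, 4, 1, 4),   # AOS-10.4.x.x: 10.4.1.4 and below
        "8.12": (8, 12, 0, 2),   # Instant AOS-8.12.x.x: 8.12.0.2 and below
        "8.10": (8, 10, 0, 13),  # Instant AOS-8.10.x.x: 8.10.0.13 and below
    }

    def is_vulnerable(version: str) -> bool:
        parts = tuple(int(p) for p in version.split("."))
        branch = ".".join(version.split(".")[:2])
        ceiling = LAST_VULNERABLE.get(branch)
        # Branches not listed in the advisory are treated as out of scope here.
        return ceiling is not None and parts <= ceiling

    print(is_vulnerable("10.4.1.4"))   # True: needs patching
    print(is_vulnerable("8.10.0.14"))  # False: above the affected range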
  • WWW.INFORMATIONWEEK.COM
    Next Steps to Secure Open Banking Beyond Regulatory Compliance
    Final rules from the Consumer Financial Protection Bureau further the march towards open banking. What will it take to keep such data sharing secure?
  • WWW.INFORMATIONWEEK.COM
    Getting a Handle on AI Hallucinations
    John Edwards, Technology Journalist & Author | November 11, 2024 | 4 Min Read | Image: Carloscastilla via Alamy Stock Photo

AI hallucination occurs when a large language model (LLM) -- frequently a generative AI chatbot or computer vision tool -- perceives patterns or objects that are nonexistent or imperceptible to human observers, generating outputs that are either inaccurate or nonsensical.

AI hallucinations can pose a significant challenge, particularly in high-stakes fields where accuracy is crucial, such as the energy industry, life sciences and healthcare, technology, finance, and legal sectors, says Beena Ammanath, head of technology trust and ethics at business advisory firm Deloitte. With generative AI's emergence, the importance of validating outputs has become even more critical for risk mitigation and governance, she states in an email interview. "While AI systems are becoming more advanced, hallucinations can undermine trust and, therefore, limit the widespread adoption of AI technologies."

Primary Causes

AI hallucinations are primarily caused by the nature of generative AI and LLMs, which rely on vast amounts of data to generate predictions, Ammanath says. "When the AI model lacks sufficient context, it may attempt to fill in the gaps by creating plausible-sounding, but incorrect, information." This can occur due to incomplete training data, bias in the training data, or ambiguous prompts, she notes.

LLMs are generally trained for specific tasks, such as predicting the next word in a sequence, observes Swati Rallapalli, a senior machine learning research scientist in the AI division of the Carnegie Mellon University Software Engineering Institute. "These models are trained on terabytes of data from the Internet, which may include uncurated information," she explains in an online interview. "When generating text, the models produce outputs based on the probabilities learned during training, so outputs can be unpredictable and misrepresent facts."

Detection Approaches

Depending on the specific application, hallucination metrics tools such as AlignScore can be trained to capture the similarity between two text inputs. Yet automated metrics don't always work effectively. "Using multiple metrics together, such as AlignScore with metrics like BERTScore, may improve the detection," Rallapalli says.

Another established way to minimize hallucinations is retrieval-augmented generation (RAG), in which the model references text from established databases relevant to the output. "There's also research in the area of fine-tuning models on curated datasets for factual correctness," Rallapalli says.

Yet even using multiple existing metrics may not fully guarantee hallucination detection, so further research is needed to develop more effective metrics, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.
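Rallapalli's idea of comparing multiple AI outputs can be prototyped in a few lines. The sketch below scores agreement across several independently sampled answers using sentence embeddings; the embedding model and the 0.7 flagging threshold are illustrative assumptions, not values from the article.

    from itertools import combinations
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    def consistency_score(answers: list[str]) -> float:
        # Mean pairwise cosine similarity across sampled answers:
        # factual answers tend to agree; hallucinations tend to diverge.
        embs = embedder.encode(answers, convert_to_tensor=True)
        sims = [float(util.cos_sim(embs[i], embs[j]))
                for i, j in combinations(range(len(answers)), 2)]
        return sum(sims) / len(sims)

    answers = [  # several samples of the same question from an LLM
        "The Eiffel Tower was completed in 1889.",
        "Construction of the Eiffel Tower finished in 1889.",
        "The Eiffel Tower opened in 1912.",
    ]
    if consistency_score(answers) < 0.7:
        print("Low agreement across samples: route to human review.")

A production pipeline would combine a signal like this with the reference-based metrics mentioned above (AlignScore, BERTScore) and, for high-stakes outputs, human review.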
"For example, if using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be easy to identify and the outcomes are lower stakes for the enterprise," Ammanath explains. Yet when it comes to applications that include mission-critical business decisions, error tolerance must be low. "This makes a 'human-in the-loop', someone who validates model outputs, more important than ever before."Related:Hallucination TrainingThe best way to minimize hallucinations is by building your own pre-trained fundamental generative AI model, advises Scott Zoldi, chief AI officer at credit scoring service FICO. He notes, via email, that many organizations are now already using, or planning to use, this approach utilizing focused-domain and task-based models. "By doing so, one can have critical control of the data used in pre-training -- where most hallucinations arise -- and can constrain the use of context augmentation to ensure that such use doesn't increase hallucinations but re-enforces relationships already in the pre-training."Outside of building your own focused generative models, one needs to minimize harm created by hallucinations, Zoldi says. "[Enterprise] policy should prioritize a process for how the output of these tools will be used in a business context and then validate everything," he suggests.A Final ThoughtTo prepare the enterprise for a bold and successful future with generative AI, it's necessary to understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, Ammanath says. "AI hallucinations help to highlight both the power and limitations of current AI development and deployment."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
  • WWW.INFORMATIONWEEK.COM
    How IT Can Show Business Value From GenAI Investments
    Nishad Acharya, Head of Talent Network, Turing | November 11, 2024 | 4 Min Read | Image: NicoElNino via Alamy Stock

As IT leaders, we're facing increasing pressure to prove that our generative AI investments translate into measurable and meaningful business outcomes. It's not enough to adopt the latest cutting-edge technology; we have a responsibility to show that AI delivers tangible results that directly support our business objectives.

To truly maximize ROI from GenAI, IT leaders need to take a strategic approach -- one that seamlessly integrates AI into business operations, aligns with organizational goals, and generates quantifiable outcomes. Let's explore advanced strategies for overcoming GenAI implementation challenges, integrating AI with existing systems, and measuring ROI effectively.

Key Challenges in Implementing GenAI

Integrating GenAI into enterprise systems isn't always straightforward. There are several hurdles IT leaders face, especially surrounding data and system complexity.

Data governance and infrastructure. AI is only as good as the data it's trained on. Strong data governance enforces better accuracy and compliance, especially when AI models are trained on vast, unstructured data sets. Building AI-friendly infrastructure that can handle both the scale and complexity of AI data pipelines is another challenge, as these systems must be resilient and adaptable.

Model accuracy and hallucinations. GenAI models can produce non-deterministic results, sometimes generating content that is inaccurate or entirely fabricated. Unlike traditional software with clear input-output relationships that can be unit-tested, GenAI models require a different approach to validation. This introduces risks that must be carefully managed through model testing, fine-tuning, and human-in-the-loop feedback.

Security, privacy, and legal concerns. The widespread use of publicly and privately sourced data in training GenAI models raises critical security and legal questions. Enterprises must navigate evolving legal landscapes, and data privacy and security concerns must be addressed to avoid potential breaches or legal issues, especially in heavily regulated industries like finance or healthcare.

Strategies for Measuring and Maximizing AI ROI

Adopting a comprehensive, metrics-driven approach to AI implementation is necessary for assessing your investment's business impact. To ensure GenAI delivers meaningful business results, here are some effective strategies:

Define high-impact use cases and objectives: Start with clear, measurable objectives that align with core business priorities. Whether it's improving operational efficiency or streamlining customer support, identifying use cases with direct business relevance ensures AI projects are focused and impactful.

Quantify both tangible and intangible benefits: Beyond immediate cost savings, GenAI drives value through intangible benefits like improved decision-making or customer satisfaction. Quantifying these benefits gives a fuller picture of the overall ROI.

Focus on getting the use case right before optimizing costs: LLMs are still evolving. It is recommended that you first use the best model (likely the most expensive), prove that the LLM can achieve the end goal, and then identify ways to reduce the cost to serve that use case. This ensures the business need is not left unmet.

Run pilot programs before full rollout: Test AI in controlled environments first to validate use cases and refine your ROI model.
Pilot programs allow organizations to pinpoint areas where AI delivers the greatest value and to learn, iterate, and de-risk before full-scale deployment.

Track and optimize costs throughout the lifecycle: One of the most overlooked elements of AI ROI is the hidden cost of data preparation, integration, and maintenance, which can spiral if left unchecked. IT leaders should continuously monitor expenses related to infrastructure, data management, training, and human resources.

Continuous monitoring and feedback: AI performance should be tracked continuously against KPIs and adjusted based on real-world data. Regular feedback loops allow for continuous fine-tuning, ensuring your investment aligns with evolving business needs and delivers sustained value.

Overcoming GenAI Implementation Roadblocks

Successful GenAI implementations depend on more than adopting the right technology; they require an approach that maximizes value while minimizing risk. For most IT leaders, success depends on addressing challenges like data quality, model reliability, and organizational alignment. Here's how to overcome common implementation hurdles:

Align AI with high-impact business goals. GenAI projects should directly support business objectives and deliver sustainable value like streamlining operations, cutting costs, or generating new revenue streams. Define priorities based on their impact and feasibility.

Prioritize data integrity. Poor data quality prevents effective AI. Take time to establish data governance protocols from the start to manage privacy, compliance, and integrity while minimizing risk tied to faulty data.

Start with pilot projects. Pilot projects allow you to test and iterate on real-world impact before committing to large-scale rollouts. They offer valuable insights and mitigate risk.

Monitor and measure continuously. Ongoing performance tracking ensures AI remains aligned with evolving business goals. Continuous adjustments are key to maximizing long-term value.
  • WEWORKREMOTELY.COM
    UNICEF: Senior Full Stack Developer, UNICEF Office of Innovation, 12 Months, Remote
    Exciting job opportunity: UNICEF's Office of Innovation is looking for two Senior Full-Stack Developers to take the engineering lead on an ambitious project: The Learning Cabinet! This online platform connects education decision-makers worldwide with curated EdTech solutions tailored to their unique contexts.

What You'll Do: As a Senior Full-Stack Developer, you'll spearhead a headless Drupal and Next.js platform deployed on Cloudflare, empowering education decision-makers to access EdTech tools that will make a tangible difference in children's learning outcomes. You'll collaborate with an agile, interdisciplinary team to come up with innovative solutions and implement exciting value propositions, all geared towards impactful change.

What's in it for You? Be part of a global team at the forefront of tech innovation for social good. Use your expertise to shape an MVP into a scalable solution that can help reach millions of children and solve a global learning crisis. Work remotely with a passionate team and join us for a 3-day design sprint in beautiful Helsinki, Finland!

Are you ready to use your skills to reimagine education for every child? Apply today, and let's make education a transformative journey for all!
  • WEWORKREMOTELY.COM
    Filestage: Chief Revenue Officer (CRO)
    Time zones: SBT (UTC +11), GMT (UTC +0), CET (UTC +1), EET (UTC +2), MSK (UTC +3)

About Filestage

Filestage is the online proofing software for brands in regulated industries, where the consequences of missed feedback are highest. People are creating content in more ways than ever, and managing all this over email can be chaos. So our platform gives organizations a central quality control hub for reviewing and approving all their human- and AI-generated content. This makes sure every print and digital asset is compliant before it goes out the door, freeing teams up to focus on delivering their best and most creative work.

We're a fully remote team with people working from home offices, co-working spaces, and coffee shops worldwide. Together, we're on a mission to create a seamless approval process that helps people deliver their best work.

We have over half a million users across 800+ companies, including Sharp, LG, Publicis, GroupM, and Emirates. So if you're looking for an ambitious startup in a booming market, you've found it!

This is your opportunity as our CRO

We're an ambitious team, aiming to become a category leader in a growing market. We've built a strong foundation with a solid inbound channel, a loved product, and healthy revenue retention. And as AI starts transforming the lives of our customers, we're perfectly placed to take our growth to the next level. This is your opportunity to help us build effective acquisition channels, level up our teams and operations, and shape our company strategy to become the go-to solution in our market.

At Filestage, you will:

Play a key role in shaping the future of our category-leading SaaS product. This is an opportunity to influence how the world's biggest brands ensure content quality in the age of AI.

Develop and implement effective strategies to acquire customers. This involves enhancing our existing inbound funnel and building new channels to drive customer growth.

Elevate our upselling and cross-selling playbooks by collaborating and experimenting with our cross-functional teams.

Build strong relationships with key customers to drive growth, gather strategic insights, and keep a finger on the pulse of market trends.

Develop and coach our high-performing and happy teams. This involves fostering a culture of trust, providing guidance, and empowering a sense of ownership and accountability in our revenue-generating teams.

Contribute to our company's strategy as a member of the C-level team.

Life at Filestage

We believe people are more productive when they can choose their own schedule. So we're proud to offer fully remote roles that give you the perfect balance between work and life.

Work from where you're happiest and enjoy a flexible schedule. We've been fully remote from the start, giving you the opportunity to meet people all over the world and broaden your horizons. For this role, we're looking for someone based in western/central Europe to make sure we can regularly meet for strategic conversations.

Meet up in real life. We all travel together at least once a year for our full team retreat to have fun and get to know each other. Additionally, we meet more regularly with our C-level and leadership team for strategic sessions.

Enjoy a strong team culture. We're a group of knowledge seekers, reflective thinkers, clear communicators, goal owners, problem solvers, and team players. These are the values we strive for to help us achieve our mission.

Join a happy team. We've been rated five stars on Glassdoor by our happy and high-performing team.
You can take a look at our reviews here.

Create a workspace that suits you. You'll get a budget for hardware, as well as for working from home, to buy whatever you need to do your best work, including a computer, webcam, or standing desk.

Get 36 days of holiday. Plenty of time for city breaks, summer escapes, and everything in between. You'll also get a half day on your birthday to give you a chance to celebrate!

Continue to grow and develop your career. We care about your development and want you to be able to learn new things! After six months in the company, you'll get a budget to use for personal development.

Make your voice heard. We trust our team members to make the best decisions to achieve their goals, so you won't have to put up with micromanagers here.

Say goodbye to pointless meetings. We practice what we preach when it comes to productivity, so you can expect flat hierarchies, fast iterations, and no-bullshit meetings.

What you'll bring to the role

You have experience in a revenue-generating leadership role within B2B SaaS. Now you're looking for a new and exciting challenge that hugely impacts how people work.

We're looking for someone who:

Has a deep understanding of customer acquisition and growth. You excel in crafting sharp strategies, managing teams, and implementing reliable processes to drive sustainable growth.

Has a proven track record of contributing to significant growth in SaaS companies. You have experience in revenue-generating leadership roles where you have helped achieve and surpass $10 million ARR.

Is hands-on. You're happy, willing, and able to roll up your sleeves and directly engage with key customers, address deal blockers, and develop your team, while also working strategically as part of the C-level.

Is passionate about PLG. You understand and fully believe in the value of a product-led growth model and can effectively integrate it into your sales strategy.

Works well with lots of questions and few answers. No problem is too big or too hard. You are most productive when ambitious goals are clearly set and you can choose your own path to reach them.

Is an entrepreneur at heart, driven by a relentless pursuit of results and a thirst for knowledge. You're always seeking ways to improve, adapting your strategies, and seizing growth opportunities.

Is a strong communicator and collaborator. You can effectively communicate with and collaborate across a distributed team.
  • WWW.TECHNOLOGYREVIEW.COM
    Science and technology stories in the age of Trump
    Rather than analyzing the news this week, I thought I'd lift the hood a bit on how we make it. I've spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion's share not just in the Electoral College but also the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.

Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump's stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we've covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions on Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, these are squarely in our strike zone, and we would be remiss not to explore the policies and their impact in detail.

And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it's helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months, which will likely be, well, chaotic.

This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise. We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on in regards to climate, energy, vaccines, women's health, IVF, food safety, chips, China, and I'm sure a lot more, that people are going to have all sorts of questions about. Let's start by making a list of questions we have ourselves. Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It's important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.

Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others. This election will be a transformative moment for society and the world. Trump didn't just win, he won a mandate. And he's going to change the country and the global order as a result. The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let's leverage our credibility, not squander it. We are not the resistance. We just want to tell the truth. So let's take a breath, and then go out there and do our jobs.

I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that's especially true now. I'm also very interested to hear from our readers: What questions do you have?
What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I'm eager to hear from you. If someone forwarded you this edition of The Debrief, you can subscribe here.

Now read the rest of The Debrief

The News

Palmer Luckey, who was ousted from Facebook over his support for the last Trump administration and went into defense contracting, is poised to grow in influence under a second administration. He recently talked to MIT Technology Review about how the Pentagon is using mixed reality.

What does Donald Trump's relationship with Elon Musk mean for the global EV industry?

The Biden administration was perceived as hostile to crypto. The industry can likely expect friendlier waters under Trump.

Some counter-programming: Life-seeking robots could punch through Europa's icy surface.

And for one more big take that's not related to the election: AI vs. quantum. AI could solve some of the most interesting scientific problems before big quantum computers become a reality.

The Chat

Every week I'll talk to one of MIT Technology Review's reporters or editors to find out more about what they've been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.

Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger-picture point about it paving the way for agents. What does that mean?

Melissa: Microsoft tried to chip away at Google's search monopoly with Bing, and that didn't really work. It's unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT's usefulness is limited by the fact that it can't access the internet and doesn't have the most up-to-date information. By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.

Mat: What will agents be able to do?

Melissa: AI agents can complete complex tasks autonomously, and the vision is that they will work as a human assistant would: book your flights, reschedule your meetings, help with research, you name it. But I wouldn't get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can't interact with computer interfaces, like clicking on stuff, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we're still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.

Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?

Melissa: It's really hard to say. We are so used to online search, and it's surprisingly hard to change people's behaviors. Unless agents become super reliable and powerful, I don't think search is going to go away.

Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?
Melissa: LOL

The Recommendation

I'm just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying, given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level. I'm not really a cries-at-movies (or especially at musicals) kind of person, but I was wiping my eyes for much of the second act. So were very many people sitting around me. Anyway. If you're in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here's a great example.)
  • WWW.TECHNOLOGYREVIEW.COM
    A bold AI movement is underway in Africa, but it is being held up
    Kessel Okinga-Koumu paced around a crowded hallway. It was her first time presenting at the Deep Learning Indaba, she told the crowd gathered to hear her, filled with researchers from Africa's machine-learning community. The annual weeklong conference (Indaba is a Zulu word for gathering) was held most recently in September at Amadou Mahtar Mbow University in Dakar, Senegal. It attracted over 700 attendees to hear about, and debate, the potential of Africa-centric AI and how it's being deployed in agriculture, education, health care, and other critical sectors of the continent's economy.

A 28-year-old computer science student at the University of the Western Cape in Cape Town, South Africa, Okinga-Koumu spoke about how she's tackling a common problem: the lack of lab equipment at her university. Lecturers have long been forced to use chalkboards or printed 2D representations of equipment to simulate practical lessons that need microscopes, centrifuges, or other expensive tools. "In some cases, they even ask students to draw the equipment during practical lessons," she lamented.

Okinga-Koumu pulled a phone from the pocket of her blue jeans and opened a prototype web app she's built. Using VR and AI features, the app allows students to simulate using the necessary lab equipment, exploring 3D models of the tools in a real-world setting, like a classroom or lab. "Students could have detailed VR of lab equipment, making their hands-on experience more effective," she said.

Established in 2017, the Deep Learning Indaba now has chapters in 47 of the 55 African nations and aims to boost AI development across the continent by providing training and resources to African AI researchers like Okinga-Koumu. Africa is still early in the process of adopting AI technologies, but organizers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers.

The building and ownership of AI solutions tailored to local contexts is crucial for equitable development, says Shakir Mohamed, a senior research scientist at Google DeepMind and cofounder of the organization sponsoring the conference. Africa, more than other continents, can address specific challenges with AI and will benefit immensely from its young talent, he says: "There is amazing expertise everywhere across the continent."

However, researchers' ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. The biggest are inadequate funding and poor infrastructure. Not only is it very expensive to build AI systems, but research to provide AI training data in original African languages has been hamstrung by poor financing of linguistics departments at many African universities and the fact that citizens increasingly don't speak or write local languages themselves. Limited internet access and a scarcity of domestic data centers also mean that developers might not be able to deploy cutting-edge AI capabilities.

Complicating this further is a lack of overarching policies or strategies for harnessing AI's immense benefits, and for regulating its downsides. While there are various draft policy documents, researchers are in conflict over a continent-wide strategy. And they disagree about which policies would most benefit Africa, not the wealthy Western governments and corporations that have often funded technological innovation.
Taken together, researchers worry, these issues will hold Africa's AI sector back and hamper its efforts to pave its own pathway in the global AI race.

On the cusp of change

Africa's researchers are already making the most of generative AI's impressive capabilities. In South Africa, for instance, to help address the HIV epidemic, scientists have designed an app called Your Choice, powered by an LLM-based chatbot that interacts with people to obtain their sexual history without stigma or discrimination. In Kenya, farmers are using AI apps to diagnose diseases in crops and increase productivity. And in Nigeria, Awarri, a newly minted AI startup, is trying to build the country's first large language model, with the endorsement of the government, so that Nigerian languages can be integrated into AI tools.

The Deep Learning Indaba is another sign of how Africa's AI research scene is starting to flourish. At the Dakar meeting, researchers presented 150 posters and 62 papers. Of those, 30 will be published in top-tier journals, according to Mohamed. Meanwhile, an analysis of 1,646 publications in AI between 2013 and 2022 found a significant increase in publications from Africa. And Masakhane, a cousin organization to Deep Learning Indaba that pushes for natural-language-processing research in African languages, has released over 400 open-source models and 20 African-language data sets since it was founded in 2018.

"These metrics speak a lot to the capacity building that's happening," says Kathleen Siminyu, a computer scientist from Kenya who researches NLP tools for her native Kiswahili. "We're starting to see a critical mass of people having basic foundational skills. They then go on to specialize." She adds: "It's like a wave that cannot be stopped."

Khadija Ba, a Senegalese entrepreneur and investor at the pan-African VC fund P1 Ventures who was at this year's conference, says that she sees African AI startups as particularly attractive because their local approaches have the potential to be scaled for the global market. "African startups often build solutions in the absence of robust infrastructure, yet these innovations work efficiently, making them adaptable to other regions facing similar challenges," she says.

In recent years, funding in Africa's tech ecosystem has picked up: VC investment totaled $4.5 billion last year, more than double what it was just five years ago, according to a report by the African Private Capital Association. And this October, Google announced a $5.8 million commitment to support AI training initiatives in Kenya, Nigeria, and South Africa. But researchers say local funding remains sluggish. Take the Google-backed fund rolled out, also in October, in Nigeria, Africa's most populous country. It will pay out $6,000 each to 10 AI startups, not even enough to purchase the equipment needed to power their systems.

Lilian Wanzare, a lecturer and NLP researcher at Maseno University in Kisumu, Kenya, bridles at African governments' lackadaisical support for local AI initiatives and complains as well that the government charges exorbitant fees for access to publicly generated data, hindering data sharing and collaboration. "[We] researchers are just blocked," she says. "The government is saying they're willing to support us, but the structures have not been put in place for us."

Language barriers

Researchers who want to make Africa-centric AI don't face just insufficient local investment and inaccessible data. There are major linguistic challenges, too.
During one discussion at the Indaba, Ife Adebara, a Nigerian computational linguist, posed a question: How many people can write a bachelor's thesis in their native African language? Zero hands went up. Then the audience dissolved into laughter. Africans want AI to speak their local languages, but many Africans cannot speak and write in these languages themselves, Adebara said.

Although Africa accounts for one-third of all languages in the world, many oral languages are slowly disappearing, their populations of native speakers declining. And LLMs developed by Western-based tech companies fail to serve African languages; they don't understand locally relevant context and culture. For Adebara and others researching NLP tools, the scarcity of people who can read and write in African languages poses a major hurdle to the development of bespoke AI-enabled technologies. "Without literacy in our local languages, the future of AI in Africa is not as bright as we think," she says.

On top of all that, there's little machine-readable data for African languages. One reason is that linguistics departments in public universities are poorly funded, Adebara says, limiting linguists' participation in work that could create such data and benefit AI development. This year, she and her colleagues established EqualyzAI, a for-profit company seeking to preserve African languages through digital technology. They have built voice tools and AI models covering about 517 African languages.

Lelapa AI, a software company that's building data sets and NLP tools for African languages, is also trying to address these language-specific challenges. Its cofounders met in 2017 at the first Deep Learning Indaba and launched the company in 2022. In 2023, it released its first AI tool, Vulavula, a speech-to-text program that recognizes several languages spoken in South Africa. This year, Lelapa AI released InkubaLM, a first-of-its-kind small language model that currently supports a range of African languages: IsiXhosa, Yoruba, Swahili, IsiZulu, and Hausa. InkubaLM can answer questions and perform tasks like English translation and sentiment analysis. In tests, it performed as well as some larger models. But it's still in its early stages.

The hope is that InkubaLM will someday power Vulavula, says Jade Abbott, cofounder and chief operating officer of Lelapa AI. "It's the first iteration of us really expressing our long-term vision of what we want, and where we see African AI in the future," Abbott says. "What we're really building is a small language model that punches above its weight."

InkubaLM is trained on two open-source data sets with 1.9 billion tokens, built and curated by Masakhane and other African developers who worked with real people in local communities. They paid native speakers of the languages to attend writing workshops to create data for their model. "Fundamentally, this approach will always be better," says Wanzare, "because it's informed by people who represent the language and culture."
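For developers who want to experiment, an openly released model like InkubaLM can in principle be loaded with the Hugging Face transformers library. In this sketch, the repository ID and the need for trust_remote_code are my assumptions about how the checkpoint is packaged, and should be verified on the model page before use.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "lelapa/InkubaLM-0.4B"  # assumed repo ID; verify on Hugging Face
    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    # A Swahili translation prompt, one of the tasks the article says it supports.
    prompt = "Tafsiri kwa Kiswahili: Good morning, how are you?"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))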
A clash over strategy

Another issue that came up again and again at the Indaba was that Africa's AI scene lacks the sort of regulation and support from governments that you find elsewhere in the world: in Europe, the US, China, and, increasingly, the Middle East. Of the 55 African nations, only seven (Senegal, Egypt, Mauritius, Rwanda, Algeria, Nigeria, and Benin) have developed their own formal AI strategies. And many of those are still in the early stages.

A major point of tension at the Indaba, though, was the regulatory framework that will govern the approach to AI across the entire continent. In March, the African Union Development Agency published a white paper, developed over a three-year period, that lays out this strategy. The 200-page document includes recommendations for industry codes and practices, standards to assess and benchmark AI systems, and a blueprint of AI regulations for African nations to adopt. The hope is that it will be endorsed by the heads of African governments in February 2025 and eventually passed by the African Union.

But in July, the African Union Commission in Addis Ababa, Ethiopia, another African governing body that wields more power than the development agency, released a rival continental AI strategy: a 66-page document that diverges from the initial white paper. It's unclear what's behind the second strategy, but Seydina Ndiaye, a program director at the Cheikh Hamidou Kane Digital University in Dakar who helped draft the development agency's white paper, claims it was drafted by a tech lobbyist from Switzerland.

The commission's strategy calls for African Union member states to declare AI a national priority, promote AI startups, and develop regulatory frameworks to address safety and security challenges. But Ndiaye expressed concerns that the document does not reflect the perspectives, aspirations, knowledge, and work of grassroots African AI communities. "It's a copy-paste of what's going on outside the continent," he says.

Vukosi Marivate, a computer scientist at the University of Pretoria in South Africa who helped found the Deep Learning Indaba and is known as an advocate for the African machine-learning movement, expressed fury over this turn of events at the conference. "These are things we shouldn't accept," he declared. The room full of data wonks, linguists, and international funders brimmed with frustration. But Marivate encouraged the group to forge ahead with building AI that benefits Africans: "We don't have to wait for the rules to act right," he said.

Barbara Glover, a program manager for the African Union Development Agency, acknowledges that AI researchers are angry and frustrated. There's been a push to harmonize the two continental AI strategies, but she says the process has been fractious: "That engagement didn't go as envisioned." Her agency plans to keep its own version of the continental AI strategy, Glover says, adding that it was developed by African experts rather than outsiders. "We are capable, as Africans, of driving our own AI agenda," she says.

This all speaks to a broader tension over foreign influence in the African AI scene, one that goes beyond any single strategic document. Mirroring the skepticism toward the African Union Commission strategy, critics say the Deep Learning Indaba is tainted by its reliance on funding from big foreign tech companies; roughly 50% of its $500,000 annual budget comes from international donors and the rest from corporations like Google DeepMind, Apple, OpenAI, and Meta. They argue that this cash could pollute the Indaba's activities and influence the topics and speakers chosen for discussion. But Mohamed, the Indaba cofounder who is a researcher at Google DeepMind, says that almost all of it "goes back to our beneficiaries across the continent," and the organization helps connect them to training opportunities in tech companies. He says it benefits from some of its cofounders' ties with these companies but that they do not set the agenda.
Ndiaye says that the funding is necessary to keep the conference going. "But we need to have more African governments involved," he says.

To Timnit Gebru, founder and executive director at the nonprofit Distributed AI Research Institute (DAIR), which supports equitable AI research in Africa, the angst about foreign funding for AI development comes down to skepticism of exploitative, profit-driven international tech companies. Africans "[need] to do something different and not replicate the same issues we're fighting against," Gebru says. She warns about the pressure to adopt AI for everything in Africa, adding that there's a lot of push from international development organizations to use AI as an antidote for all of Africa's challenges.

Siminyu, who is also a researcher at DAIR, agrees with that view. She hopes that African governments will fund and work with people in Africa to build AI tools that reach underrepresented communities, tools that can be used in positive ways and in a context that works for Africans. "We should be afforded the dignity of having AI tools in a way that others do," she says.