GigaOm
Upskill your business with technology practitioners
Science & Technology · Santa Barbara, California · gigaom.com · Joined March 2007
Recent updates
  • GIGAOM.COM
    From Products to Customers: Delivering Business Transformation At Scale
    Transformation is a journey, not a destination – so how to transform at scale? GigaOm Field CTOs Darrel Kent and Whit Walters explore the nuances of business and digital transformation, sharing their thoughts on scaling businesses, value-driven growth, and leadership in a rapidly evolving world. Whit: Darrel, transformation is such a well-used word these days—digital transformation, business transformation. It’s tough enough at a project level, but for enterprises looking to grow, where should they begin? Darrel: You’re right. Transformation has become one of those overused buzzwords, but at its core, it’s about fundamental change. What is digital transformation? What is business transformation? It’s about translating those big concepts into value-based disciplines—disciplines that drive real impact. Whit: That sounds compelling. Can you give us an example of what that looks like in practice – how does transformation relate to company growth? Darrel: Sure. Think of a company aiming to grow from 1 billion, to 2 billion, to 5 billion in revenue. That’s not just a numbers game; it’s a journey of transformation. To get to 1 billion, you can get there by focusing on product excellence. But you won’t get to 2 billion based on product alone – you need more. You need to rethink your approach to scaling—whether it’s through innovation, operations, or culture. Finance needs to invest strategically, sales needs to evolve, and leadership must align every decision with long-term goals. Whit: It’s a fascinating shift. So, scaling isn’t just about selling more products? Darrel: Exactly. Scaling requires a transformation in how you deliver value. For example, moving beyond transactional sales to consultative relationships. It’s about operational efficiency, customer experience, and innovation working together to create value at scale. I call these value-based disciplines. Whit: Let’s break that down a bit more. You’ve mentioned product excellence, operational excellence, and customer excellence. How do these concepts build on each other? Darrel: Great question. Product excellence is the foundation. When building a company, your product needs to solve a real problem and do it exceptionally well. That’s how you reach your first milestone—say, that 1-billion-dollar mark. But to scale beyond that, you can’t rely on product alone. This is where operational excellence comes in. It’s about streamlining your processes, reducing inefficiencies, and ensuring that every part of the organization is working in harmony. Whit: And customer excellence? Where does that fit in? Darrel: Customer excellence takes it to the next level beyond operational excellence. Once again, what gets you to 2 billion does not take you beyond that. You have to change again. It’s not just about creating a great product or running a smooth operation. It’s about truly understanding and anticipating your customers’ needs. Companies that master customer excellence create loyalty and advocacy. They don’t just react to customer feedback; they proactively shape the customer experience. This is where long-term growth happens, and it’s a hallmark of companies that scale successfully. Whit: That makes so much sense. So, it’s a progression—starting with product, moving to operations, and finally centering everything around the customer? Darrel: Exactly. Think of it as a ladder. Each step builds on the previous one. 
You need product excellence to get off the ground, operational excellence to scale efficiently, and customer excellence to ensure longevity and market leadership. And these aren’t isolated phases—they’re interconnected. A failure in one area can disrupt the whole system. Whit: That’s a powerful perspective. What role does leadership play in this transformation? Darrel: Leadership is everything. It starts with understanding that transformation isn’t optional—it’s survival. Leaders must champion change, align the organization’s culture with its strategy, and invest in the right areas. For example, what does the CFO prioritize? What technologies or processes does the COO implement? It all needs to work together. Whit: That’s a powerful perspective. What would you say to leaders who are hesitant to embark on such a daunting journey? Darrel: I’d tell them this: Transformation isn’t just about surviving the present; it’s about thriving in the future. It’s what Simon Sinek refers to as ‘the long game’. Companies that embrace these principles—aligning value creation with their business strategy—will not only grow but will set the pace in their industries. Whit: Do you have any final thoughts for organizations navigating their own transformations? Darrel: Focus on value. Whether it’s your customers, employees, or stakeholders, every transformation effort should return to delivering value. And remember, it’s a journey. You don’t have to get it perfect overnight, but you do have to start. Whit: Thank you, Darrel. Your insights are invaluable. The post From Products to Customers: Delivering Business Transformation At Scale appeared first on Gigaom.
  • GIGAOM.COM
    Where’s Security Going in 2025?
    Few areas of technology are moving as fast as security, so what gives and how is it going to evolve in 2025? We asked our analysts Andrew Brust, Andrew Green, Chester Conforte, Chris Ray, Howard Holton, Ivan McPhee, Stan Wisseman, and Whit Walters for their thoughts. First off – is the future of cybersecurity protection agentless? Andrew G: We are seeing the growth of eBPF, which offers more stability compared to past agent-based systems like CrowdStrike. eBPF has built-in verification mechanisms, like memory limits and timeouts, which help to prevent issues like the blue screen of death. I’ve also seen eBPF-based alternatives that handle runtime security in the kernel without agents, with built-in verification. Note that you can do both kernel and external analysis. Some vendors, like Wiz, gather telemetry in the kernel and send it to the cloud for processing and display. Whit: That ties back to the business model, especially after the disruption caused by the CrowdStrike outage. Many vendors are moving towards agentless solutions, and this trend is accelerating. Howard: Analysis has to happen somewhere, even if it’s at the kernel level. If we’re analyzing kernel traffic externally, it’s not built into the kernel, which raises questions. It could be unnecessarily generating new network traffic and the trust needed for kernel access. We need to ensure companies are responsible for maintaining kernel reliability. Stewardship is key. Chris: Agentless is popular for good reason; however, security doesn’t live in a vacuum. It was previously acceptable to have multiple independent endpoint agents, for detection and response, management, and security. This is no longer the case: all-in-one solutions, or those tightly integrated through official partnerships, have been winning the hearts and minds of security teams. One example is CrowdStrike’s Falcon, which can be licensed to perform EDR, MDR, and (combined with Veeam) recovery.  What security developments are we seeing at the edge?  Ivan: We will see more edge computing and AI: combining 5G with Internet of Things (IoT) will be a major trend next year. However, the increase in rollouts means a broader attack surface, which will drive more regulations for protection. We’re also seeing more deployments of 5G worldwide, and I expect a rapid increase in private and hybrid 5G networks. Seth: Agreed – as a result, companies are moving toward machine-based identity management. Stan: We’re also seeing improvements in vulnerability management for IoT, through more frequent firmware updates and the integration of encryption to prevent data exposure. Network or micro-segmentation is becoming more prevalent, especially in sectors like automotive, where adoption was relatively slow. However, given the industry’s lengthy four to five-year rollout cycles, forward-thinking measures are essential to mitigate risks effectively over the long term. Howard: We’re finally seeing zero-trust concepts becoming feasible for average organizations. Micro-segmentation, which has been valuable but hard to implement for smaller organizations, is now more achievable due to better automation, rollout, and maintenance tools. This will improve the maturity of the zero-trust model. Chester: I’ve noticed a trend where some established players move away from segmenting everything to focusing on the critical assets—essentially, a more risk-based approach. They’re asking simple questions like, “What are the crown jewels?” and then focusing segmentation efforts there. 
It’s a more pragmatic approach. Cyber insurance is on the rise, so what are the ramifications? Stan: While cyber insurance has become increasingly popular among executives, the escalating costs associated with breaches have put pressure on insurers and underwriters to ensure firms are protecting their assets. As a result, insurers are implementing more stringent due diligence requirements, making cyber insurance more challenging and costly. Insurers are shifting from point-in-time questionnaires to more robust, periodic assessments. Some insurers employ third-party firms to conduct penetration tests to verify active security controls like multi-factor authentication (MFA). Although continuous testing isn’t yet required, insurers supplement their point-in-time evaluations with more frequent and rigorous checks. Howard: The insurance industry is complex. Insurers must balance rigorous protection standards with the need to remain attractive to customers. If they’re significantly stricter than their competitors, they’ll lose business, which makes it a constant struggle between thorough protection and marketability. I’m not sure continuous security validation is entirely a good thing. Security organizations are often not equipped to handle a constant influx of issues. Many customers are excited about continuous testing but need to adjust their operating model to accelerate how they deal with the resulting security incidents. Finally, how ready do organizations need to be for quantum security?  Stan: While quantum computing may not be a practical reality by 2025, preparing for its impact on cybersecurity is essential now. Quantum computing will fundamentally challenge current digital asset protection best practices, and vendors are already working on how best to implement quantum-resistant algorithms.  In a post-quantum computing world, understanding the potential exposure of sensitive data is crucial. Organizations must begin assessing vulnerabilities across new and legacy systems to identify where updated controls and governance are needed. While quantum-resistant solutions are being developed, implementing them to fully protect data in a PQC environment will take time, making it essential to plan strategically and act early. Organizations must recognize that quantum threats won’t only compromise PII data but could also erode competitive advantages and intellectual assets. To protect these sensitive assets, now is the time to start considering how to address the quantum computing challenges of tomorrow. Andrew B: Quantum computing was on the verge of becoming a big phenomenon, gaining attention and hype. Then ChatGPT came along and drew away both attention and funding from quantum startups. Some of those startups are doing really interesting work—they remind me of the supercomputing startups in the ’80s. Quantum has a lot of potential beyond security, but it’s in a kind of suspended animation because AI has diverted so many resources. That situation may protect us for now, but if private sector funding dries up, it leaves room for nation-state actors to advance quantum on their own.   The post Where’s Security Going in 2025? appeared first on Gigaom.
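To make Andrew G's point about eBPF a little more concrete, here is a minimal sketch using the bcc Python bindings: a tiny program is compiled, checked by the kernel's verifier (the built-in safety mechanism mentioned above), and attached to the execve syscall, giving kernel-level visibility without installing an agent inside the monitored workloads. It assumes bcc is installed and the script runs as root; it is an illustration, not any vendor's implementation.

```python
from bcc import BPF  # requires the bcc toolkit and root privileges

# Tiny eBPF program: the kernel's verifier checks it (bounded execution,
# memory safety) before it is allowed to load, which is the stability
# property discussed above.
prog = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach at the execve syscall entry point: kernel-level visibility with no
# user-space agent injected into the monitored workloads.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("Tracing execve calls... Ctrl-C to stop")
b.trace_print()
```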
  • GIGAOM.COM
    Bridging Wireless and 5G
    Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead. Q: Bruno, could you start by sharing a bit about your background and your role at the WBA? Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks. Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G? Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, we need robust solutions for starters. At the WBA, we’ve worked with partners from the vendor, chipset and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts help enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs. Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions? Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges. Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers? Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service. Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6? Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster. 
Q: What role do you see AI playing in wireless and 5G convergence? Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex. Q: Looking ahead, what excites you most about the future of wireless and 5G? Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. At the same time, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable. Thank you, Bruno!  N.B. The WBA Industry Report 2025 has now been released and is available for download. Please click here for further information. The post Bridging Wireless and 5G appeared first on Gigaom.
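As an illustration of the network-steering decision Bruno describes, here is a toy Python heuristic that picks Wi-Fi or 5G from signal quality and load. The thresholds, field names, and inputs are invented for the sketch; real steering logic (and anything AI-driven) would be far richer, but the shape of the decision is the same.

```python
from dataclasses import dataclass

@dataclass
class LinkState:
    wifi_rssi_dbm: float      # e.g. -60.0
    wifi_utilization: float   # channel load, 0.0 to 1.0
    nr5g_rsrp_dbm: float      # e.g. -95.0
    latency_sensitive: bool   # e.g. AR/VR or voice traffic

def steer(link: LinkState) -> str:
    """Return 'wifi' or '5g' using a simple, explainable heuristic (toy values)."""
    wifi_ok = link.wifi_rssi_dbm > -70 and link.wifi_utilization < 0.7
    nr_ok = link.nr5g_rsrp_dbm > -110
    if link.latency_sensitive and not wifi_ok and nr_ok:
        return "5g"
    return "wifi" if wifi_ok else ("5g" if nr_ok else "wifi")

print(steer(LinkState(-62, 0.4, -100, False)))  # -> wifi
print(steer(LinkState(-78, 0.9, -95, True)))    # -> 5g
```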
  • GIGAOM.COM
    The Evolving Revolution: AI in 2025
    AI was 2024’s hot topic, so how is it evolving? What are we seeing in AI today, and what do we expect to see in the next 12-18 months? We asked Andrew Brust, Chester Conforte, Chris Ray, Dana Hernandez, Howard Holton, Ivan McPhee, Seth Byrnes, Whit Walters, and William McKnight to weigh in.  First off, what’s still hot? Where are AI use cases seeing success? Chester: I see people leveraging AI beyond experimentation. People have had the opportunity to experiment, and now we’re getting to a point where true, vertical-specific use cases are being developed. I’ve been tracking healthcare closely and seeing more use-case-specific, fine-tuned models, such as the use of AI to help doctors be more present during patient conversations through auditory tools for listening and note-taking.  I believe ‘small is the new big’—that’s the key trend, such as hematology versus pathology versus pulmonology. AI in imaging technologies isn’t new, but it’s now coming to the forefront with new models used to accelerate cancer detection. It has to be backed by a healthcare professional: AI can’t be the sole source of diagnoses. A radiologist needs to validate, verify, and confirm the findings.  Dana: In my reports, I see AI leveraged effectively from an industry-specific perspective. For instance, vendors focused on finance and insurance are using AI for tasks like preventing financial crime and automating processes, often with specialized, smaller language models. These industry-specific AI models are a significant trend I see continuing into next year. William: We’re seeing cycles reduced in areas like pipeline development and master data management, which are becoming more autonomous. An area gaining traction is data observability—2025 might be its year.  Andrew: Generative AI is working well in code generation—generating SQL queries and creating natural language interfaces for querying data. That’s been effective, though it’s a bit commoditized now.  More interesting are advancements in the data layer and architecture. For instance, Postgres has a vector database add-in, which is useful for retrieval-augmented generation (RAG) queries. I see a shift from the “wow” factor of demos to practical use, using the right models and data to reduce hallucinations and make data more accessible. Over the next two or three years, vendors will move from basic query intelligence to creating more sophisticated tools. How are we likely to see large language models evolve?  Whit: Globally, we’ll see AI models shaped by cultural and political values. It’s less about technical developments and more about what we want our AIs to do. Consider Elon Musk’s xAI, based on Twitter/X. It’s uncensored—quite different from Google Gemini, which tends to lecture you if you ask the wrong question.  Different providers, geographies, and governments will tend to move either towards free-er speech, or will seek to control AI’s outputs. The difference is noticeable. Next year, we’ll see a rise in models without guardrails, which will provide more direct answers. Ivan: There’s also a lot of focus on structured prompts. A slight change in phrasing, like using “detailed” versus “comprehensive,” can yield vastly different responses. Users need to learn how to use these tools effectively. Whit: Indeed, prompt engineering is crucial. Depending on how words are embedded in the model, you can get drastically different answers. If you ask the AI to explain what it wrote and why, it forces it to think more deeply. 
We’ll see domain-trained prompting tools soon—agentic models that can help optimize prompts for better outcomes. How is AI building on and advancing the use of data through analytics and business intelligence (BI)? Andrew: Data is the foundation of AI. We’ve seen how generative AI over large amounts of unstructured data can lead to hallucinations, and projects are getting scrapped. We’re seeing a lot of disillusionment in the enterprise space, but progress is coming: we’re starting to see a marriage between AI and BI, beyond natural language querying.  Semantic models exist in BI to make data more understandable and can extend to structured data. When combined, we can use these models to generate useful chatbot-like experiences, pulling answers from structured and unstructured data sources. This approach creates business-useful outputs while reducing hallucinations through contextual enhancements. This is where AI will become more grounded, and data democratization will be more effective. Howard: Agreed. BI has yet to work perfectly for the last decade. Those producing BI often don’t understand the business, and the business doesn’t fully grasp the data, leading to friction. However, this can’t be solved by Gen AI alone, it requires a mutual understanding between both groups. Forcing data-driven approaches without this doesn’t get organizations very far. What other challenges are you seeing that might hinder AI’s progress?  Andrew: The euphoria over AI has diverted mindshare and budgets away from data projects, which is unfortunate. Enterprises need to see them as the same.  Whit: There’s also the AI startup bubble—too many startups, too much funding, burning through cash without generating revenue. It feels like an unsustainable situation, and we’ll see it burst a bit next year. There’s so much churn, and keeping up has become ridiculous. Chris: Related, I am seeing vendors build solutions to “secure” GenAI / LLMs. Penetration testing as a service (PTaaS) vendors are offering LLM-focused testing, and cloud-native application protection (CNAPP) has vendors offering controls for LLMs deployed in customer cloud accounts. I don’t think buyers have even begun to understand how to effectively use LLMs in the enterprise, yet vendors are pushing new products/services to “secure” them. This is ripe for popping, although some “LLM” security products/services will pervade.  Seth: On the supply chain security side, vendors are starting to offer AI model analysis to identify models used in environments. It feels a bit advanced, but it’s starting to happen.  William: Another looming factor for 2025 is the EU Data Act, which will require AI systems to be able to shut off with the click of a button. This could have a big impact on AI’s ongoing development. The million-dollar question: how close are we to artificial general intelligence (AGI)? Whit: AGI remains a pipe dream. We don’t understand consciousness well enough to recreate it, and simply throwing compute power at the problem won’t make something conscious—it’ll just be a simulation.  Andrew: We can progress toward AGI, but we must stop thinking that predicting the next word is intelligence. It’s just statistical prediction—an impressive application, but not truly intelligent. Whit: Exactly. Even when AI models “reason”, it’s not true reasoning or creativity. They’re just recombining what they’ve been trained on. It’s about how far you can push combinatorics on a given dataset. Thanks all! 
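As a concrete illustration of Andrew's point about Postgres's vector add-in and retrieval-augmented generation, here is a minimal retrieval sketch using psycopg2 and pgvector's distance operator. The docs(content, embedding) table, the database name, and the embed() helper are hypothetical placeholders, and it assumes the pgvector extension is installed.

```python
import psycopg2  # assumes the pgvector extension is installed in Postgres

def embed(text: str) -> list[float]:
    """Placeholder: call whatever embedding model you actually use."""
    raise NotImplementedError

def retrieve_context(query: str, k: int = 5) -> list[str]:
    """Return the k stored chunks nearest to the query embedding."""
    vec = embed(query)
    literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text format
    conn = psycopg2.connect("dbname=rag_demo")  # hypothetical database
    with conn, conn.cursor() as cur:
        # '<->' is pgvector's distance operator; docs(content, embedding) is
        # a hypothetical table of pre-embedded text chunks.
        cur.execute(
            "SELECT content FROM docs ORDER BY embedding <-> %s::vector LIMIT %s",
            (literal, k),
        )
        return [row[0] for row in cur.fetchall()]
```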
The post The Evolving Revolution: AI in 2025 appeared first on Gigaom.
  • GIGAOM.COM
    2025 Predictions: Cloud Architectures, Cost Management and Hybrid By Design
    In this episode of our predictions series, we consider the evolving nature of Cloud, across architecture, cost management, and, indeed, the lower levels of infrastructure. We asked our analysts Dana Hernandez, Ivan McPhee, Jon Collins, Whit Walters, and William McKnight for their thoughts.  Jon: We’re seeing a maturing of thinking around architecture, not just with cloud computing but across technology provision. Keep in mind that what we know as Cloud is still only 25% of the overall space – the other three quarters are on-premise or hosted in private data centers. It’s all got to work together as a single notional platform, or at least, the more accurate we can make this, the more efficient we can be. Whilst the keyword may be ‘hybrid’, I expect to see a shift from hybrid environments by accident, towards hybrid by design – actively making decisions based on performance, cost, and indeed governance areas such as sovereignty. Cost management will continue to catalyze this trend, as illustrated by FinOps.  Dana: FinOps is evolving, with many companies considering on-prem or moving workloads back from the Cloud. At FinOpsX, companies were looking at blended costs of on-prem and Cloud. Oracle has now joined the big three, Microsoft, Google, and AWS, and it’ll be interesting to see who else will jump in. Jon: Another illustration is repatriation, moving workloads away from the Cloud and back on-premise. William: Yes, repatriation is accelerating, but Cloud providers might respond by 2025, likely through more competitive pricing and technical advancements that offer greater flexibility and security. We’re still heavily moving to the Cloud, and repatriation might take a few years to slow down.  Whit: The vendor response to repatriation has been interesting. Oracle with Oracle Cloud Infrastructure (OCI), for example, is undercutting competitors with their pricing model, but there’s skepticism—clients worry Oracle might increase costs later through licensing issues.  Jon: We’re also seeing historically pure-play Cloud providers move to an acceptance of hybrid models, even though they probably wouldn’t say that out loud. AWS’ Outposts on-premise cloud offering, for example, can now work with local storage from NetApp, and it’s likely this type of partnership will accelerate. I maintain that “Cloud” should be seen primarily as an architectural construct around dynamic provisioning and elastic scaling, and secondarily around who the provider – recognizing that hosting companies can do a better job of resilience. Organizations need to put architecture first. Ivan: We’ll also see more cloud-native tools to manage those workloads. For instance, on the SASE/SSE side, companies like Cato Networks are seeing success because people don’t want to install physical devices across the network. We also see this trend in NDR with companies like Lumu Technologies, where security solutions are cloud-native rather than on-premises.  Cloud-native solutions like Cato Networks and Lumu Technologies have more pricing flexibility than those tied to hardware components. They will be better positioned to adjust pricing to drive adoption and growth than traditional on-premises solutions. Some vendors are exploring value-based pricing, considering factors like customer business value to get into strategic accounts. This could be an exciting shift as we move into the future. The post 2025 Predictions: Cloud Architectures, Cost Management and Hybrid By Design appeared first on Gigaom.
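To illustrate the kind of blended-cost comparison Dana mentions from FinOpsX, here is a back-of-the-envelope Python sketch that puts an amortized on-prem cost per workload-hour next to a cloud on-demand rate. Every number in it is a placeholder, not a benchmark.

```python
def on_prem_hourly(capex: float, years: float, opex_per_year: float,
                   utilization: float, workloads: int) -> float:
    """Amortized cost per workload-hour for an on-prem cluster (toy model)."""
    hours = years * 365 * 24 * utilization
    total = capex + opex_per_year * years
    return total / (hours * workloads)

cloud_rate = 0.42  # $/hour for a comparable instance (placeholder)
on_prem_rate = on_prem_hourly(capex=500_000, years=4, opex_per_year=60_000,
                              utilization=0.65, workloads=20)
print(f"on-prem ~${on_prem_rate:.2f}/hr vs cloud ${cloud_rate:.2f}/hr")
```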
  • GIGAOM.COM
    Making Sense of Cybersecurity – Part 1: Seeing Through Complexity
    At the Black Hat Europe conference in December, I sat down with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation we discuss the complexity of navigating cybersecurity tools, and defining relevant metrics to measure ROI and risk. Jon: Paul, how does an end-user organization make sense of everything going on? We’re here at Black Hat, and there’s a wealth of different technologies, options, topics, and categories. In our research, there are 30-50 different security topics: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. However, from an end-user organization perspective, they don’t want to think about 40-50 different things. They want to think about 10, 5, or maybe even 3. Your role is to deploy these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they’re looking for? Paul: I attend events like this because the challenge is so complex and rapidly evolving. I don’t think you can be a modern CIO or security leader without spending time with your vendors and the broader industry. Not necessarily at Black Hat Europe, but you need to engage with your vendors to do your job. Going back to your point about 40 or 50 vendors, you’re right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So, how do you keep up with that? When I come to events like this, I like to do two things—and I’ve added a third since I started working with GigaOm. One is to meet with vendors, because people have asked me to. Two, go to some presentations. Three is to walk around the Expo floor talking to vendors, particularly ones I’ve never met, to see what they do.  I sat in a session yesterday, and what caught my attention was the title: “How to identify the cybersecurity metrics that are going to deliver value to you.” That caught my attention from an analyst’s point of view because part of what we do at GigaOm is create metrics to measure the efficacy of a solution in a given topic. But if you’re deploying technology as part of SecOps or IT operations, you’re gathering a lot of metrics to try and make decisions. One of the things they talked about in the session was the issue of creating so many metrics because we have so many tools that there’s so much noise. How do you start to find out the value? The long answer to your question is that they suggested something I thought was a really smart approach: step back and think as an organization about what metrics matter. What do you need to know as a business? Doing that allows you to reduce the noise and also potentially reduce the number of tools you’re using to deliver those metrics. If you decide a certain metric no longer has value, why keep the tool that provides it? If it doesn’t do anything other than give you that metric, take it out. I thought that was a really interesting approach. It’s almost like, “We’ve done all this stuff. Now, let’s think about what actually still matters.” This is an evolving space, and how we deal with it must evolve, too. You can’t just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing by now. How we approach the threat has changed, and how we approach security has changed. 
We need to go back to some of these tools and ask, “Do we really need this anymore?” Jon: We measure our success with this, and, in turn, we’re going to change. Paul: Yes, and I think that’s hugely important. I was talking to someone recently about the importance of automation. If we’re going to invest in automation, are we better now than we were 12 months ago after implementing it? We’ve spent money on automation tools, and none of them come for free. We’ve been sold on the idea that these tools will solve our problems. One thing I do in my CTO role, outside of my work with GigaOm, is to take vendors’ dreams and visions and turn them into reality for what customers are asking for. Vendors have aspirations that their products will change the world for you, but the reality is what the customer needs at the other end. It’s that kind of consolidation and understanding—being able to measure what happened before we implemented something and what happened after. Can we show improvements, and has that investment had real value? Jon: Ultimately, here’s my hypothesis: Risk is the only measure that matters. You can break that down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, therefore, damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But then there’s the other side—are you spending way more money than you need, to mitigate risks?  So, you get into cost, efficiency, and so on, but is this how organizations are thinking about it? Because that’s my old-school way of viewing it. Maybe it’s moved on. Paul: I think you’re on the right track. As an industry, we live in a little echo chamber. So when I say “the industry,” I mean the little bit I see, which is just a small part of the whole industry. But within that part, I think we are seeing a shift. In customer conversations, there’s a lot more talk about risk. They’re starting to understand the balance between spending and risk, trying to figure out how much risk they’re comfortable with. You’re never going to eliminate all risk. No matter how many security tools you implement, there’s always the risk of someone doing something stupid that exposes the business to vulnerabilities. And that’s before we even get into AI agents trying to befriend other AI agents to do malicious things—that’s a whole different conversation. Jon: Like social engineering? Paul: Yeah, very much so. That’s a different show altogether. But, understanding risk is becoming more common. The people I speak to are starting to realize it’s about risk management. You can’t remove all the security risks, and you can’t deal with every incident. You need to focus on identifying where the real risks lie for your business. For example, one criticism of CVE scores is that people look at a CVE with a 9.8 score and assume it’s a massive risk, but there’s no context around it. They don’t consider whether the CVE has been seen in the wild. If it hasn’t, then what’s the risk of being the first to encounter it? And if the exploit is so complicated that it’s not been seen in the wild, how realistic is it that someone will use it? It’s such a complicated thing to exploit that nobody will ever exploit it. It has a 9.8, and it shows up on your vulnerability scanner saying, “You really need to deal with this.” The reality is that you have already seen a shift where there’s no context applied to that—if we’ve seen it in the wild. 
Jon: Risk equals probability multiplied by impact. So you’re talking about probability and then, is it going to impact your business? Is it affecting a system used for maintenance once every six months, or is it your customer-facing website? But I’m curious because back in the 90s, when we were doing this hands-on, we went through a wave of risk avoidance, then went to, “We’ve got to stop everything,” which is what you’re talking about, through to risk mitigation and prioritizing risks, and so on.  But with the advancement of the Cloud and the rise of new cultures like agile in the digital world, it feels like we’ve gone back to the direction of, “Well, you need to prevent that from happening, lock all the doors, and implement zero trust.” And now, we’re seeing the wave of, “Maybe we need to think about this a bit smarter.” Paul: It’s a really good point, and actually, it’s an interesting parallel you raise. Let’s have a little argument while we’re recording this. Do you mind if I argue with you? I’ll question your definition of zero trust for a moment. So, zero trust is often seen as something trying to stop everything. That’s probably not true of zero trust. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, that’s a personal debate with myself. But, zero trust… Now, I’ll just crop myself in here later and argue with myself. So, zero trust… If you take it as an example, it’s a good one. What we used to do was implicit trust—you’d log on, and I’d accept your username and password, and everything you did after that, inside the secure bubble, would be considered valid with no malicious activity. The problem is, when your account is compromised, logging in might be the only non-malicious thing you’re doing. Once logged in, everything your compromised account tries to do is malicious. If we’re doing implicit trust, we’re not being very smart. Jon: So, the opposite of that would be blocking access entirely? Paul: That’s not the reality. We can’t just stop people from logging in. Zero trust allows us to let you log on, but not blindly trust everything. We trust you for now, and we continuously evaluate your actions. If you do something that makes us no longer trust you, we act on that. It’s about continuously assessing whether your activities are appropriate or potentially malicious and then acting accordingly. Jon: It’s going to be a very disappointing argument because I agree with everything you say. You argued with yourself more than I’m going to be able to, but I think, as you said, the castle defense model—once you’re in, you’re in.  I’m mixing two things there, but the idea is that once you’re inside the castle, you can do whatever you like. That’s changed.  So, what to do about it? Read Part 2, for how to deliver a cost-effective response.  The post Making Sense of Cybersecurity – Part 1: Seeing Through Complexity appeared first on Gigaom.
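Jon's formula (risk equals probability multiplied by impact) and Paul's point about CVE context can be sketched in a few lines. The weights below are illustrative only, not a standard scoring model.

```python
def contextual_risk(cvss: float, exploited_in_wild: bool,
                    asset_criticality: float) -> float:
    """Return a 0-10 priority score.

    cvss: base CVSS score (0-10)
    exploited_in_wild: has the CVE been observed in real attacks?
    asset_criticality: 0.0 (six-monthly maintenance box) to 1.0 (customer-facing)
    """
    # Temper raw severity by likelihood of real exploitation (weights are toy values).
    probability = (cvss / 10) * (1.0 if exploited_in_wild else 0.3)
    impact = asset_criticality
    return round(10 * probability * impact, 2)

# A 9.8 CVE never seen in the wild on a low-value system ranks far below a
# 7.5 CVE actively exploited on the customer-facing website.
print(contextual_risk(9.8, False, 0.2))  # -> 0.59
print(contextual_risk(7.5, True, 1.0))   # -> 7.5
```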
  • GIGAOM.COM
    Making Sense of Cybersecurity – Part 2: Delivering a Cost-effective Response
    At Black Hat Europe last year, I sat down with one of our senior security analysts, Paul Stringfellow. In this section of our conversation (you can find the first part here), we discuss balancing cost and efficiency, and aligning security culture across the organization. Jon: So, Paul, in an environment with problems everywhere, and you’ve got to fix everything, we need to move beyond that. In the new architectures we now have, we need to be thinking smarter about our overall risk. This ties into cost management and service management—being able to grade our architecture in terms of actual risk and exposure from a business perspective. So, I’m kind of talking myself into needing to buy a tool for this because I think that in order to cut through the 50 tools, I first need a clear view of our security posture. Then, we can decide which of the tools we have actually respond to that posture because we’ll have a clearer picture of how exposed we are. Paul: Buying a tool goes back to vendors’ hopes and dreams—that one tool will fix everything. But I think the reality is that it’s a mix of understanding what metrics are important. Understanding the information we’ve gathered, what’s important, and balancing that with the technology risk and the business impact. You made a great point before: if something’s at risk but the impact is minimal, we have limited budgets to work with. So where do we spend? You want the most “bang for your buck.” So, it’s understanding the risk to the business. We’ve identified the risk from a technology point of view, but how significant is it to the business? And is it a priority? Once we’ve prioritized the risks, we can figure out how to address them. There’s a lot to unpack in what you’re asking. For me, it’s about doing that initial work to understand where our security controls are and where our risks lie. What really matters to us as an organization? Go back to the important metrics—eliminating the noise and identifying metrics that help us make decisions. Then, look at whether we’re measuring those metrics. From there, we assess the risks and put the right controls in place to mitigate them. We do that posture management work. Are the tools we have in place responding to that posture? This is just the internal side of things, but there’s also external risk, which is a whole other conversation, but it’s the same process. So, looking at the tools we have, how effective are they in mitigating the risks we’ve identified? There are lots of risk management frameworks, so you can probably find a good fit, like NIST or something else. Find a framework that works for you, and use that to evaluate how your tools are managing risk. If there’s a gap, look for a tool that fills that gap. Jon: And I was thinking about the framework because it essentially says there are six areas to address, and maybe a seventh could be important to your organization. But at least having the six areas as a checkbox: Am I dealing with risk response? Am I addressing the right things? It gives you that, not Pareto view, but it’s about diminishing returns—cover the easiest stuff first. Don’t try to fix everything until you’ve fixed the most common issues. That’s what people are trying to do right now. Paul: Yeah, I think—let me quote another podcast I do, where we do “tech takeaways.” Yeah, who knew? I thought I’d plug it. But if you think about the takeaways from this conversation, I think, you know, going back to your question—what should I be considering as an organization? 
I think the starting point is probably to take a step back. As a business, as an IT leader inside that business, am I taking a step back to really understand what risk looks like? What does risk look like to the business, and what needs to be prioritized? Then, we need to assess whether we’re capable of measuring our efficacy against that risk. We’re getting lots of metrics and lots of tools. Are those tools effective in helping us avoid the risks we deem important for the business? Once we’ve answered those two questions, we can then look at our posture. Are the tools in place giving us the kind of controls we need to deal with the threats we face? Context is huge. Jon: On that note, I’m reminded of how organizations like Facebook, for example, had a pretty high tolerance for business risk, especially around customer data. Growth was everything—just growth at all costs. So, they were prepared to manage the risks to achieve that. It ultimately boils down to assessing and taking those risks. At that point, it’s no longer a technical conversation. Paul: Exactly. It probably never is just a technical conversation. To deliver projects that address risk and security, it should never be purely technical-led. It impacts how the company operates and the daily workflow. If everyone doesn’t buy into why you’re doing it, no security project is going to succeed. You’ll get too much pushback from senior people saying, “You’re just getting in the way. Stop it.” You can’t be the department that just gets in the way. But you do need that culture across the company that security is important. If we don’t prioritize security, all the hard work everyone’s doing could be undone because we haven’t done the basics to ensure there aren’t vulnerabilities waiting to be exploited. Jon: I’m just thinking about the number of conversations I’ve had with vendors on how to sell security products. You’ve sold it, but then nothing gets deployed because everyone else tries to block it—they didn’t like it. The reality is that the company needs to work towards something and make sure everything aligns to deliver it. Paul: One thing I’ve noticed over my 30-plus years in this job is how vendors often struggle to explain why they might be valuable to a business. Our COO, Howard Holton, is a big advocate of this argument—that vendors are terrible at telling people what they actually do and where the benefit lies for a business. But one thing he said to me yesterday was about their approach. One representative I know works for a vendor offering an orchestration and automation tool, but when he starts a meeting, the first thing he does is ask why automation hasn’t worked for the customer. Before he pitches his solution, he takes the time to understand where their automation problems are. If more of us did that—vendors and others alike—if we first asked, “What’s not working for you?” maybe we’d get better at finding the things that will work. Jon: So we have two takeaways for end users – to focus on risk management, and to simplify and refine security metrics. And for vendors, the takeaway is to understand the customer’s challenges before pitching a solution. By listening to the customer’s problems and needs, vendors can provide relevant and effective solutions, rather than simply selling their aspirations. Thanks, Paul! The post Making Sense of Cybersecurity – Part 2: Delivering a Cost-effective Response appeared first on Gigaom.
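As a rough illustration of the framework-driven gap check Jon describes (the "six areas", assumed here to be the six NIST CSF 2.0 functions), here is a toy Python pass that maps a tool inventory to those functions and also flags tools that no longer feed any retained metric, which is Paul's "why keep the tool?" test. The inventory and mappings are invented examples.

```python
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

# Hypothetical inventory: which functions each tool covers, and which of the
# metrics the business decided to keep each tool actually feeds.
tools = {
    "EDR platform":          {"functions": ["Detect", "Respond"], "metrics": ["mean time to respond"]},
    "Vulnerability scanner": {"functions": ["Identify"],          "metrics": ["exploitable criticals open"]},
    "Legacy NAC appliance":  {"functions": ["Protect"],           "metrics": []},
}

covered = {fn for t in tools.values() for fn in t["functions"]}
print("Uncovered functions:", [fn for fn in CSF_FUNCTIONS if fn not in covered])
print("Tools feeding no retained metric:", [name for name, t in tools.items() if not t["metrics"]])
```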
  • GIGAOM.COM
    Demystifying data fabrics – bridging the gap between data sources and workloads
    The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas.  At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is to create a unified, synchronized layer between disparate sources of data and the workloads that need access to data—your applications, workloads, and, increasingly, your AI algorithms or learning engines.  There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer, plugging into different data sources or adding advanced capabilities to facilitate access for applications, workloads, and models, like enabling access to those sources while keeping them synchronized.  So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples: BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances. NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management. Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools.  MongoDB (and other structured data solution providers) consider data fabric principles in the context of data management infrastructure. How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage. If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services! I am also reminded of the origins of network acceleration, which would use caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on how to transfer unstructured content like music and films efficiently and over long distances.  That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness. 
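Before getting into where to deploy one, the core idea of a unified layer that routes workload requests to whichever source owns the data can be sketched in a few lines of Python. The backend classes and dataset names below are hypothetical stand-ins; real fabric products add synchronization, metadata, lineage, and policy on top of this routing.

```python
from typing import Protocol, Any

class Source(Protocol):
    def read(self, key: str) -> Any: ...

class WarehouseSource:
    def read(self, key: str) -> Any:
        return f"warehouse row for {key}"   # placeholder query

class ObjectStoreSource:
    def read(self, key: str) -> Any:
        return f"object blob for {key}"     # placeholder fetch

class DataFabric:
    """Routes reads by dataset; real fabrics add sync, lineage, and policy."""
    def __init__(self) -> None:
        self._routes: dict[str, Source] = {}

    def register(self, dataset: str, source: Source) -> None:
        self._routes[dataset] = source

    def read(self, dataset: str, key: str) -> Any:
        return self._routes[dataset].read(key)

fabric = DataFabric()
fabric.register("customers", WarehouseSource())
fabric.register("documents", ObjectStoreSource())
print(fabric.read("customers", "42"))
```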
If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards the kind of data fabric that might be most appropriate, it also helps you avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:
• Network level: to integrate data across multi-cloud, on-premises, and edge environments.
• Infrastructure level: if your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
• Application level: to pull together disparate datasets for specific applications or platforms.
For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and improving application rationalization.
In the end, a data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources. While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves. The post Demystifying data fabrics – bridging the gap between data sources and workloads appeared first on Gigaom.
  • GIGAOM.COM
    The EU’s AI Act
    Have you ever been in a group project where one person decided to take a shortcut, and suddenly, everyone ended up under stricter rules? That’s essentially what the EU is saying to tech companies with the AI Act: “Because some of you couldn’t resist being creepy, we now have to regulate everything.” This legislation isn’t just a slap on the wrist—it’s a line in the sand for the future of ethical AI. Here’s what went wrong, what the EU is doing about it, and how businesses can adapt without losing their edge.
    When AI Went Too Far: The Stories We’d Like to Forget
    Target and the Teen Pregnancy Reveal
    One of the most infamous examples of AI gone wrong happened back in 2012, when Target used predictive analytics to market to pregnant customers. By analyzing shopping habits—think unscented lotion and prenatal vitamins—they managed to identify a teenage girl as pregnant before she told her family. Imagine her father’s reaction when baby coupons started arriving in the mail. It wasn’t just invasive; it was a wake-up call about how much data we hand over without realizing it. (Read more)
    Clearview AI and the Privacy Problem
    On the law enforcement front, tools like Clearview AI created a massive facial recognition database by scraping billions of images from the internet. Police departments used it to identify suspects, but it didn’t take long for privacy advocates to cry foul. People discovered their faces were part of this database without consent, and lawsuits followed. This wasn’t just a misstep—it was a full-blown controversy about surveillance overreach. (Learn more)
    The EU’s AI Act: Laying Down the Law
    The EU has had enough of these oversteps. Enter the AI Act: the first major legislation of its kind, categorizing AI systems into four risk levels:
    • Minimal Risk: Chatbots that recommend books—low stakes, little oversight.
    • Limited Risk: Systems like AI-powered spam filters, requiring transparency but little more.
    • High Risk: This is where things get serious—AI used in hiring, law enforcement, or medical devices. These systems must meet stringent requirements for transparency, human oversight, and fairness.
    • Unacceptable Risk: Think dystopian sci-fi—social scoring systems or manipulative algorithms that exploit vulnerabilities. These are outright banned.
    For companies operating high-risk AI, the EU demands a new level of accountability. That means documenting how systems work, ensuring explainability, and submitting to audits. If you don’t comply, the fines are enormous—up to €35 million or 7% of global annual revenue, whichever is higher.
    Why This Matters (and Why It’s Complicated)
    The Act is about more than just fines. It’s the EU saying, “We want AI, but we want it to be trustworthy.” At its heart, this is a “don’t be evil” moment, but achieving that balance is tricky. On one hand, the rules make sense. Who wouldn’t want guardrails around AI systems making decisions about hiring or healthcare? But on the other hand, compliance is costly, especially for smaller companies. Without careful implementation, these regulations could unintentionally stifle innovation, leaving only the big players standing.
    Innovating Without Breaking the Rules
    For companies, the EU’s AI Act is both a challenge and an opportunity. Yes, it’s more work, but leaning into these regulations now could position your business as a leader in ethical AI. Here’s how:
    • Audit Your AI Systems: Start with a clear inventory. Which of your systems fall into the EU’s risk categories? If you don’t know, it’s time for a third-party assessment (a toy inventory sketch follows this post).
    • Build Transparency Into Your Processes: Treat documentation and explainability as non-negotiables. Think of it as labeling every ingredient in your product—customers and regulators will thank you.
    • Engage Early With Regulators: The rules aren’t static, and you have a voice. Collaborate with policymakers to shape guidelines that balance innovation and ethics.
    • Invest in Ethics by Design: Make ethical considerations part of your development process from day one. Partner with ethicists and diverse stakeholders to identify potential issues early.
    • Stay Dynamic: AI evolves fast, and so do regulations. Build flexibility into your systems so you can adapt without overhauling everything.
    The Bottom Line
    The EU’s AI Act isn’t about stifling progress; it’s about creating a framework for responsible innovation. It’s a reaction to the bad actors who’ve made AI feel invasive rather than empowering. By stepping up now—auditing systems, prioritizing transparency, and engaging with regulators—companies can turn this challenge into a competitive advantage. The message from the EU is clear: if you want a seat at the table, you need to bring something trustworthy. This isn’t about “nice-to-have” compliance; it’s about building a future where AI works for people, not at their expense. And if we do it right this time? Maybe we really can have nice things.
    The post The EU’s AI Act appeared first on Gigaom.
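Here is the toy inventory sketch referenced in the "Audit Your AI Systems" step above: tag each system with one of the Act's four tiers and flag the work each tier implies. The systems and their tier assignments are invented examples, and none of this is legal advice.

```python
RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

# Hypothetical inventory, mirroring the Act's four categories described above.
systems = {
    "book recommender chatbot": "minimal",
    "email spam filter": "limited",
    "CV screening model": "high",
    "social scoring pilot": "unacceptable",
}

for name, tier in systems.items():
    assert tier in RISK_TIERS, f"unknown tier for {name}"
    if tier == "unacceptable":
        print(f"{name}: banned practice, plan to decommission")
    elif tier == "high":
        print(f"{name}: needs documentation, human oversight, and audit readiness")
    else:
        print(f"{name}: transparency obligations only ({tier} risk)")
```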
  • GIGAOM.COM
    When Patching Isn’t Enough
    Executive Briefing
    What Happened: A stealthy, persistent backdoor was discovered in over 16,000 Fortinet firewalls. This wasn’t a new vulnerability – it was a case of attackers exploiting a subtle part of the system (language folders) to maintain unauthorized access even after the original vulnerabilities had been patched.
    What It Means: Devices that were considered “safe” may still be compromised. Attackers had read-only access to sensitive system files via symbolic links placed on the file system – completely bypassing traditional authentication and detection. Even if a device was patched months ago, the attacker could still be in place.
    Business Risk:
    • Exposure of sensitive configuration files (including VPN, admin, and user data)
    • Reputational risk if customer-facing infrastructure is compromised
    • Compliance concerns depending on industry (HIPAA, PCI, etc.)
    • Loss of control over device configurations and trust boundaries
    What We’re Doing About It: We’ve implemented a targeted remediation plan that includes firmware patching, credential resets, file system audits, and access control updates. We’ve also embedded long-term controls to monitor for persistence tactics like this in the future.
    Key Takeaway For Leadership: This isn’t about one vendor or one CVE. This is a reminder that patching is only one step in a secure operations model. We’re updating our process to include persistent threat detection on all network appliances – because attackers aren’t waiting around for the next CVE to strike.
    What Happened
    Attackers exploited Fortinet firewalls by planting symbolic links in language file folders. These links pointed to sensitive root-level files, which were then accessible through the SSL-VPN web interface. The result: attackers gained read-only access to system data with no credentials and no alerts. This backdoor remained even after firmware patches – unless you knew to remove it.
    FortiOS versions that remove the backdoor: 7.6.2, 7.4.7, 7.2.11, 7.0.17, and 6.4.16. If you’re running anything older, assume compromise and act accordingly.
    The Real Lesson
    We tend to think of patching as a full reset. It’s not. Attackers today are persistent. They don’t just get in and move laterally – they burrow in quietly, and stay. The real problem here wasn’t a technical flaw. It was a blind spot in operational trust: the assumption that once we patch, we’re done. That assumption is no longer safe.
    Ops Resolution Plan: One-Click Runbook
    Playbook: Fortinet Symlink Backdoor Remediation
    Purpose: Remediate the symlink backdoor vulnerability affecting FortiGate appliances. This includes patching, auditing, credential hygiene, and confirming removal of any persistent unauthorized access.
    1. Scope Your Environment
    • Identify all Fortinet devices in use (physical or virtual).
    • Inventory all firmware versions.
    • Check which devices have SSL-VPN enabled.
    2. Patch Firmware
    Patch to the following minimum versions (a version-check sketch follows this post): FortiOS 7.6.2, 7.4.7, 7.2.11, 7.0.17, or 6.4.16.
    Steps:
    • Download firmware from the Fortinet support portal.
    • Schedule downtime or a rolling upgrade window.
    • Back up the configuration before applying updates.
    • Apply the firmware update via GUI or CLI.
    3. Post-Patch Validation
    After updating:
    • Confirm the version using get system status.
    • Verify SSL-VPN is operational if in use.
    • Run diagnose sys flash list to confirm removal of unauthorized symlinks (the Fortinet script included in the new firmware should clean them up automatically).
    4. Credential & Session Hygiene
    • Force a password reset for all admin accounts.
    • Revoke and re-issue any local user credentials stored in FortiGate.
    • Invalidate all current VPN sessions.
    5. System & Config Audit
    • Review the admin account list for unknown users.
    • Validate current config files (show full-configuration) for unexpected changes.
    • Search the filesystem for remaining symbolic links (optional): find / -type l -ls | grep -v "/usr"
    6. Monitoring and Detection
    • Enable full logging on SSL-VPN and admin interfaces.
    • Export logs for analysis and retention.
    • Integrate with SIEM to alert on: unusual admin logins, access to unusual web resources, and VPN access outside expected geos.
    7. Harden SSL-VPN
    • Limit external exposure (use IP allowlists or geo-fencing).
    • Require MFA on all VPN access.
    • Disable web-mode access unless absolutely needed.
    • Turn off unused web components (e.g., themes, language packs).
    Change Control Summary
    • Change Type: Security hotfix
    • Systems Affected: FortiGate appliances running SSL-VPN
    • Impact: Short interruption during firmware upgrade
    • Risk Level: Medium
    • Change Owner: [Insert name/contact]
    • Change Window: [Insert time]
    • Backout Plan: See below
    • Test Plan: Confirm firmware version, validate VPN access, and run post-patch audits
    Rollback Plan
    If the upgrade causes a failure:
    • Reboot into the previous firmware partition using console access.
    • Run exec set-next-reboot primary or secondary, depending on which partition was upgraded.
    • Restore the backed-up config (pre-patch).
    • Disable SSL-VPN temporarily to prevent exposure while the issue is investigated.
    • Notify infosec and escalate through Fortinet support.
    Final Thought
    This wasn’t a missed patch. It was a failure born of assuming attackers would play fair. If you’re only validating whether something is “vulnerable,” you’re missing the bigger picture. You need to ask: could someone already be here? Security today means shrinking the space where attackers can operate – and assuming they’re clever enough to use the edges of your system against you.
    The post When Patching Isn’t Enough appeared first on Gigaom.
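Here is the small version-check sketch referenced in step 2 of the runbook above: it compares each device's FortiOS version against the minimum fixed release for its branch and flags anything older (or on an unlisted branch) for patching and investigation. The inventory dictionary is a placeholder; feed it from your own asset list or from get system status output.

```python
# Minimum FortiOS releases that remove the backdoor, per branch (from the briefing).
FIXED = {"7.6": (7, 6, 2), "7.4": (7, 4, 7), "7.2": (7, 2, 11),
         "7.0": (7, 0, 17), "6.4": (6, 4, 16)}

def needs_action(version: str) -> bool:
    """Return True if this FortiOS version is below its branch's fixed release."""
    parts = tuple(int(p) for p in version.split("."))
    branch = f"{parts[0]}.{parts[1]}"
    fixed = FIXED.get(branch)
    # Unlisted or older branches: assume compromise, as the briefing advises.
    return True if fixed is None else parts < fixed

# Hypothetical inventory; replace with real data from your asset system.
inventory = {"fw-edge-01": "7.2.10", "fw-dc-02": "7.4.7", "fw-lab-03": "6.2.15"}
for name, ver in inventory.items():
    print(name, ver, "PATCH/INVESTIGATE" if needs_action(ver) else "OK")
```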