InformationWeek
News and Analysis Tech Leaders Trust
Recent Updates
  • Facing the Specter of Cyber Threats During the Holidays
    Do retailers still face high levels of cyber risk in a world fraught with ransomware attacks year-round?
  • Data Quality: The Strategic Imperative Driving AI and Automation
    As enterprises race to implement AI and automation, one often overlooked factor can make or break their success: data quality. In fact, 72% of enterprises have adopted AI for at least one business function, and what separates effective automation from costly failures often boils down to the quality of the data feeding these systems. In this article, I'll outline simple strategies for gathering and sharing data that drives success.

    Data Quality: Impact Across Industries

    The implications of poor data quality vary across industries, but the underlying risks are similar. In healthcare, poor data can lead to poor patient care, putting patient safety at risk. Financial services are another sector where data accuracy is paramount -- poor data quality leads to flawed financial reports and increased operational risk, eroding trust and potentially incurring regulatory penalties. Even retail isn't immune, as inaccurate inventory data can lead to poor stock management decisions, resulting in costly stockouts or excess inventory.

    Data Quality Checklist

    High-quality data empowers automation and AI to produce outputs that are accurate, reliable, and context-rich, enabling users -- from data analysts to business leaders -- to make informed, confident decisions. This requires data to meet the following criteria:

    Timeliness: Is your data up to date? Timely data ensures relevant decision-making. For example, relying on outdated customer data in retail can lead to inaccurate personalization and missed sales opportunities.

    Accuracy: Does your data accurately represent real-world conditions? Eliminating biases and errors is critical. For instance, biased healthcare data can lead to improper diagnoses, directly impacting patient outcomes.

    Completeness: Are your datasets comprehensive? Incomplete data can distort AI outcomes or even lead to hallucinations, where algorithms generate inaccurate or misleading results. For example, missing sales data could result in flawed revenue forecasts.

    Consistency: Do your records align across datasets? Inconsistent data creates errors that can ripple across automation systems. Imagine a supply chain scenario where mismatched product IDs lead to shipping delays and increased costs.

    Building a Foundation of Quality Data

    Ensuring data quality is not just about data cleansing; it requires robust data governance and management practices. Implementing a framework that prioritizes data quality across the organization is essential to achieving reliable outcomes from AI and automation investments. A few best practices:

    Data stewardship: Designate individuals responsible for monitoring and maintaining data quality across its lifecycle, so that the integrity of the data is preserved.

    Automated data validation: Proactive detection and correction of errors in real time is essential for organizations that rely on up-to-date data for fast-paced decision-making (a minimal sketch appears at the end of this article).

    Data lineage tracking: By tracking data from its origin through its transformations, organizations can better understand its reliability and accuracy.

    Hyperautomation as a Data Quality Use Case

    Hyperautomation, as defined by Gartner, is reshaping business by automating end-to-end processes across the entire IT landscape.
    This process merges AI, machine learning, and robotic process automation (RPA) to streamline operations, cut costs, and elevate customer experiences. However, the effectiveness of hyperautomation depends on one crucial factor: data quality. The intelligence behind hyperautomation -- AI and ML models -- relies on data.

    In hyperautomation, data-driven decisions are vital for optimizing processes, and poor data quality leads to less effective choices that undermine efficiency gains. Analyzing historical data allows organizations to forecast trends and automate proactively, yet the accuracy of these predictions is only as good as the data they're based on.

    Hyperautomation also requires integrating data from multiple sources, and inconsistent formats or quality issues can impede seamless integration and scalability. High-quality data helps ensure the reliability and robustness of hyperautomation initiatives, minimizing errors and system risks.

    Customer-facing hyperautomation projects, such as AI-powered chatbots and virtual assistants, depend on accurate, current data to respond effectively to inquiries. Organizations that focus on data integrity while deploying hyperautomation projects -- both internal and customer-facing -- can fully harness their potential, enhancing operational efficiency and gaining a competitive advantage.

    Long-Term Impact of Quality Data on Business Strategy

    At the strategic level, high-quality data doesn't just make AI and automation systems work better -- it enhances business outcomes. With data that is complete, accurate, and timely, companies can leverage AI and automation to improve efficiency, reduce operational risks, and foster data-driven decisions that strengthen competitive advantage.

    Organizations that prioritize data quality today will be the ones to define industry benchmarks tomorrow. The question is: Is your data strategy ready to meet the demands of AI and automation?
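    To make the checklist concrete, here is a minimal Python sketch of the kind of automated validation described in the best practices above. The record schema, field names, and thresholds are illustrative assumptions, not anything the article prescribes:

```python
from datetime import datetime, timedelta, timezone

# Illustrative records; field names and thresholds are assumptions.
records = [
    {"id": "A-100", "product_id": "SKU-1", "revenue": 120.0,
     "updated_at": datetime(2024, 12, 1, tzinfo=timezone.utc)},
    {"id": "A-101", "product_id": "sku-1", "revenue": None,
     "updated_at": datetime(2023, 1, 15, tzinfo=timezone.utc)},
]

REQUIRED_FIELDS = {"id", "product_id", "revenue", "updated_at"}
MAX_AGE = timedelta(days=90)  # timeliness threshold (illustrative)

def check_record(rec):
    """Return a list of data-quality issues for one record."""
    issues = []
    # Completeness: every required field present and non-null.
    for field in sorted(REQUIRED_FIELDS):
        if rec.get(field) is None:
            issues.append(f"missing or null field: {field}")
    # Timeliness: record refreshed within the allowed window.
    updated = rec.get("updated_at")
    if updated and datetime.now(timezone.utc) - updated > MAX_AGE:
        issues.append("stale record: older than timeliness threshold")
    # Consistency: product IDs follow one canonical (uppercase) format.
    pid = rec.get("product_id")
    if pid and pid != pid.upper():
        issues.append(f"inconsistent product ID format: {pid}")
    return issues

for rec in records:
    for issue in check_record(rec):
        print(f"{rec['id']}: {issue}")
```

    Checks like these can run on every ingest, routing failures to a data steward before bad records reach downstream AI and automation systems.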
  • Forrester Panel: Government Cybersecurity Leaders Discuss Next Steps for Zero Trust
    The recent Forrester Security & Risk Summit in Baltimore featured government cybersecurity officials discussing a newly published guide on zero trust and evaluating the next steps for the security model.

    Forrester is known for introducing the zero-trust security model back in 2009. Its motto, "never trust, always verify," suggests a least-privilege approach. Former Forrester analyst John Kindervag, now chief evangelist at Illumio, was an initial champion of zero trust.

    In a Dec. 10 panel, cybersecurity leaders discussed "Navigating the Federal Zero Trust Data Security Guide," which the federal CISO and CDO Councils published on Oct. 31. The guide, developed by 70 people from more than 30 federal agencies and departments, offers a breakdown of how government agencies and organizations should think about data risks. The goal is to provide a practical guide on how to implement zero trust.

    A Holistic View of Data and Security

    During the session, Steven Hernandez, CISO at the US Department of Education and co-chair of the US federal CISO Council, discussed how the guide could teach federal and private cybersecurity professionals to think from both a zero-trust and a data perspective.

    "It's interesting because we talk about how to harness data, so we use a lot of behavioral analytics and logs from our systems, etc.," Hernandez told the audience. "That's one side of the coin, but the other side of the coin is how we protect data using zero trust principles, technologies, and operations, and in the data management section, we're going to have to basically straddle both of those platforms to be successful."

    Anne Klieve, management analyst in the Office of Enterprise Integration at the US Department of Veterans Affairs, agreed that a goal of the guide was to create a document that both the data and security communities could understand.

    "It was about creating a guide that would be readable to both the cybersecurity and data communities, and specifically looking at how separate even the jargon was for both communities," Klieve said during the session.

    Massachusetts CIO Jason Snyder said he appreciates how the guide can move federal agencies and organizations past merely understanding the architecture of zero trust to doing something with it. He also said Massachusetts was at ground zero as far as zero trust.

    "One of the things I really liked about the guide was its primary focus is data, and when you talk about zero trust, I think that is the right area of focus," Snyder said during the panel. "So, what we're doing within Massachusetts is really driving forward from a data perspective and better understanding our data, better understanding different types of data we have, and then working on ways to protect that data."

    Heidi Shey, principal analyst at Forrester and co-moderator of the panel, sees the guide as applicable to organizations beyond state and federal government. For example, the panelists plan to add a section on supply chain risk.

    In an interview following the session, Shey told InformationWeek that the guide can help organizations stop operating in silos when it comes to data and security.

    "We're talking about really embedding data security controls throughout that entire life cycle and thinking about how we manage data and how we protect it in a much more holistic way, so that these two functions within organizations are not operating as siloed functions anymore the way they historically have been," Shey said.
    "I think that's one of the big takeaways from this guide that people can use to help bring these two groups together on zero-trust data security."

    Klieve recommended that organizations use the guide to create a zero-trust data implementation road map based on general program management principles. This would include a maturity analysis and gap assessments. After that, organizations could implement their programs as planned, including examining finances, examining risks, and managing performance. However, she noted that C-suite leaders such as the CISO and chief data officer would need to be consulted on how budgets would be allocated.

    Chapter 4 of the guide has a placeholder for the topic "Manage the Data." Klieve would like to see this chapter filled with a discussion of aligning data management with data security, as well as how to use data management to minimize data breaches. In addition, the chapter should cover the interaction between data engines and machine learning as it relates to data security, according to Klieve. That includes preparing data for machine learning models.

    "This will become a key document I just keep on my desk all the time," Klieve said. "I really want to see it kept up to date."

    Hernandez said work on the Zero Trust Data Security Guide is in a holding pattern until late January, when his team will brief the incoming administration on the overall status of all things cybersecurity. He also said the CISO Council could add a zero-trust section to the National Institute of Standards and Technology's Special Publication 800-60, which provides guidelines on how to map data to security systems.

    The Next Level for Zero Trust

    Meanwhile, in another Dec. 10 panel, "Next-Level Your Zero Trust Initiative," panelists from the federal government as well as GE Aerospace addressed how government agencies and the private sector can move forward with zero trust.

    Eric Poulin, senior director for cybersecurity technology strategy and management at GE Aerospace, told the audience that applying the same zero-trust initiatives to all teams would not work.

    "You can design a master zero-trust plan, but at the end of the day, [if] you just try to put one blanket zero-trust plan [in place], you're going to end up alienating certain individual business lines," Poulin said.

    At the Department of the Interior, zero-trust program manager Lou Eichenbaum has built a zero-trust community of practice over three years, he told the audience. The department respects the separate missions of areas such as the National Park Service, and they all have input into how the department approaches zero trust.

    Brandy Sanchez, director of the Zero Trust Initiative at the Cybersecurity and Infrastructure Security Agency in the Department of Homeland Security, stressed the importance of incorporating zero trust at every layer.

    "It needs to be part of every decision and every organization," Sanchez said. "Any time you buy software, any time you're procuring something, any time that you're developing a system, all of that has to [incorporate] zero trust as the foundation."

    The challenge going forward with zero trust will not necessarily be technology but people and processes: getting buy-in from leadership and making sure all teams are aligned, according to Carlos Rivera, the panel's moderator and a senior analyst at Forrester.

    "It's not just an IT and security initiative; it's an organizational initiative," Rivera told InformationWeek following the session.
    "So getting those individuals involved, such as leads from HR, leads from finance, and getting a better understanding of what impacts them and what's important to them, and how do we enable their business and allow them to leverage certain technologies [but] not at the expense of security, that's really where the success will come."

    Because there are multiple maturity models, Sanchez and her team are working with the Department of Defense on zero-trust guidance.

    "Words are important, and when we say one thing and another agency is interpreting that in a different way, it causes confusion," Sanchez explained during the panel. "So anywhere that we can align, and that we can harmonize what we're doing, what others are doing, and get everyone on the same page across the federal government, that's where we want to head."

    Rivera said organizations have now achieved maturity in zero-trust strategy and planning and are moving to implement zero trust in their operations.

    Sanchez sees the federal government providing more technical deep dives and how-tos around zero trust in the next year or two. Her team will be releasing publications on enterprise mobility and micro-segmentation. Going forward, Sanchez would like to see government agencies focus on implementing zero-trust strategy based on their risk environment rather than just checking a box.

    "You need to take an adversarial approach where you are looking at zero trust because that's what the bad guys are doing, right? They want to get in; they want to get your information," Sanchez said. "And so taking a strategic approach based on that view is where you can change the script, and that's really where we're trying to push agencies towards, is keeping that in mind and managing at the risk level, versus just checking the box, because that's not going to get us near the goal."
  • Retailers: Learn From the Holidays To Build Year-Round Resilience
    By Ganesh Seetharaman, Managing Director, Deloitte Consulting
    December 20, 2024 | 4 Min Read
    (Image: Valentin Valkov via Alamy Stock)

    During peak times like holiday periods, retailers, consumer goods companies, insurance firms, and others in seasonal crunch-time sectors face a delicate balance between opportunity and risk. Seasonal spikes are a stringent test for executives, revealing the strength of their business and operational resilience. To understand why, just think back to recent incidents in which organizations experienced mass website outages due to holiday spikes or suffered prolonged log-in issues.

    Indeed, downtime during peak periods can result in financial impacts measured in millions of dollars per hour, so the user experience is paramount. Even minor issues can lead to significant consequences, including customer churn, wasted ad spending, and long-term brand damage. The takeaway? Failure when the world is watching can have cascading effects, and a track record of 99.99% uptime is insufficient if the 0.01% downtime occurs at critical moments. With that in mind, let's explore a strategic approach to building game-ready resilience.

    Game-ready resilience means that your systems can manage adversity -- from ecosystem impacts, including third-party services, to unprecedented traffic peaks. Most importantly, it also means creating a culture of reliability, with constant learning and cross-functional teams that understand the business impact of downtime and can respond effectively to outages.

    To enhance business and operational resilience during the holidays, tech leaders should focus on four key areas.

    1. Forecast and define measurable requirements.

    Start by developing a reliable forecast of expected transaction volumes and user behavior. Seek to understand normal traffic patterns as well as how spikes might affect your systems during peak periods. Prioritize critical services; on an e-commerce platform, for example, the checkout process should take precedence over less-essential features like recommendation engines.

    Use service level objectives (SLOs) to define availability expectations and measure against them. For instance, aim for 99.99% shopping-cart availability, which you can support by forecasting transaction volumes across all channels. Then translate those forecasts into performance requirements, such as the ability to accommodate a specific number of concurrent users while meeting reliability expectations. It's also crucial to identify potential architectural bottlenecks and failure points.

    2. Map dependencies and mitigate risks.

    Modern retail ecosystems are complex webs of internal systems and third-party services. To identify vulnerabilities and mitigate risk, create a comprehensive map of all dependencies. Then assess each service's scalability and reliability, and develop failure contingency plans that include circuit breakers and fallback options.

    In addition to infrastructure, focus on key business and foundational services, especially in hybrid and multi-cloud environments. Next, to build agility and minimize recovery time, develop a clear view of all dependency layers and build in fault tolerance. Dependency management might look like an e-commerce organization simplifying its shipping infrastructure to achieve more efficient package delivery.

    3. Implement robust reliability checks.

    Establish clear, measurable reliability objectives aligned with business outcomes.
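    As a rough illustration of how an availability objective becomes a number a team can track, here is a minimal Python sketch of error-budget accounting. The 99.99% target echoes the shopping-cart example above; the traffic and failure counts are invented for illustration:

```python
# Availability SLO -> error budget, a back-of-envelope sketch.
# The 99.99% target mirrors the shopping-cart example above;
# the traffic and failure numbers are invented.
SLO_TARGET = 0.9999           # 99.99% availability objective
WINDOW_REQUESTS = 50_000_000  # requests expected over the SLO window

error_budget = (1 - SLO_TARGET) * WINDOW_REQUESTS  # failures we can "afford"
failed_so_far = 3_200                              # observed failures (assumed)

burn = failed_so_far / error_budget
print(f"Error budget: {error_budget:,.0f} failed requests")
print(f"Budget consumed: {burn:.0%}")
if burn > 0.5:
    print("Over half the budget burned -- freeze risky changes before peak.")
```

    Tracking budget burn this way gives development, operations, and business teams one shared number to discuss before peak season.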
    You might also set granular targets, such as sub-2-millisecond log-in times. Such metrics create a common language across development, operations, and business teams, fostering a unified approach to reliability. To ensure build stability, avoid last-minute changes and implement rigorous process controls for continuous validation.

    Integrate SLOs and synthetic monitoring into your operational framework. Develop real-time observability solutions that provide actionable insights and rapid response capabilities. Use observability to balance innovation and stability during peak loads, and align technical metrics with indicators like net promoter scores. Also, adopt site reliability engineering practices to translate technical metrics directly into customer experience.

    4. Develop and refine incident-response procedures.

    Swift, effective responses to system challenges can prevent minor issues from becoming major crises. Develop incident-response procedures that include comprehensive system dependency maps, and create communication channels, action plans, and escalation pathways that minimize confusion. Automatic failure notifications are a must, as are self-healing approaches to incidents and solutions driven by error budgets and burn rates.

    Next, ensure organizational readiness through training, communication protocols, and regular response drills. Implement proactive monitoring systems to detect and address issues early. Learning from high-profile incidents underscores the importance of transparent, timely communication during disruptions.

    The Path Forward

    Building resilience requires both a cultural and a technical shift to align critical services with customer journeys, refine resilience policies, and adapt to changing demands. Practices like game-day drills enhance readiness, reinforcing that resilience is an ongoing effort requiring continuous refinement, not a one-time project. True resilience takes a holistic approach that ensures people, processes, and technology work in sync to handle both surges and scale-downs effectively. By adopting the strategies discussed here, you can prepare your systems for peak times while building stronger, more resilient year-round operations.

    About the Author

    Ganesh Seetharaman is a managing director at Deloitte Consulting LLP. He leads Deloitte's Technology Resiliency market offering and is recognized for delivering innovative solutions for his clients, as well as for helping organizations navigate technology challenges and capitalize on market opportunities.
  • Why Enterprises Still Grapple With Data Governance
    By Lisa Morgan, Freelance Writer
    December 20, 2024 | 9 Min Read
    (Image: Rancz Andrei via Alamy Stock)

    Data governance isn't where it needs to be in many organizations, despite the widespread use of AI and analytics. This is risky on several levels, such as cybersecurity and compliance, not to mention the potential impacts on various stakeholders. In short, data governance is becoming more necessary as organizations rely more heavily on data, not less.

    Steve Willis, principal research director for data, analytics, enterprise architecture and AI at Info-Tech Research Group, offers a sobering statistic: Some 50% to 75% of data governance initiatives fail.

    "Even in highly regulated industries where the acceptance and understanding of the concept and value of governance more broadly are ingrained into the corporate culture, most data governance programs have progressed very little past an expensive [check-]boxing exercise, one that has kept regulatory queries to a minimum but returned very little additional business value on the investment," says Willis in an email interview.

    Most data professionals cite things like lack of business understanding and/or executive engagement, limited funding, the complexity of the data landscape, or general organizational resistance to change as the root cause or causes of failed data governance initiatives, though Willis disagrees.

    "A lack of a deep connection between the tangible outcomes business stakeholders care about and the activities and initiatives undertaken in the name of data governance is the primary cause of failure," says Willis. "The few who have successfully implemented data governance can easily point to the value that data governance initiatives have delivered. [They are] able to provide a direct line of sight not only to tactical wins but to deep contributions to an organization achieving its strategic goals and objectives."

    Where the Problems Lie

    Many data teams, particularly data governance teams, lack the proper relationships with business stakeholders, so the business has no visibility into how data governance works.

    "Data governance teams should be rigorously focused on understanding how improvements in [data use] will tangibly make life easier for those managing and using data, be it removing critical pain points or creating new opportunities to add value," says Info-Tech's Willis. "By not focusing on their customers' needs, many data governance professionals are over-focused on adding workload to those they are purporting to help in return for providing little measurable value."

    Why the disconnect? Data teams don't feel they can spend time understanding stakeholders or even challenging business stakeholder needs. Though executive support is critical, data governance professionals are not making the most of that support. One often unacknowledged problem is culture.

    "Unfortunately, in many organizations, the predominant attitude towards governance and risk management is that [they are] a burden of bureaucracy that slows innovation," says Willis.
    "Data governance teams too frequently perpetuate that mindset, over-rotating on data controls and processes where the effort to execute is misaligned to the value they release."

    One way to begin improving the effectiveness of data governance is to reassess the organization's objectives and approach.

    "Embed data governance activities, small step by small step, into your current business operations; make managing data part of a business process owner's day-to-day responsibilities rather than making the governance and management of data a separate thing," says Willis. "This abstraction of data governance and management away from business operations is a key reason why nominated data stewards, who are typically business process owners, don't understand what they are being asked to do. As a data governance team, you need to contextualize data management activities in the language the business understands and make it a part of what they do."

    Common Mistakes and How to Avoid Them

    Businesses are struggling to make data accessible to users while protecting it from misuse or breaches. This often results in either too much bureaucracy or insufficient control, leaving organizations vulnerable to inefficiencies and regulatory fines.

    The solution is to start small, focus on delivering results, and build from there. "Begin with high-priority areas, like fixing compliance gaps or cleaning up critical datasets, to show quick wins," says Arunkumar Thirunagalingam, senior manager, data and technical operations at healthcare company McKesson, in an email interview. "These early successes help build momentum and demonstrate the value of governance across the organization."

    He says the biggest mistakes companies make include trying to fix everything at once, relying too much on technology without setting up proper processes, and ignoring the needs of end users.

    "Overly restrictive governance often leads to workarounds that create even more problems, while waiting until a crisis forces action leaves companies in a reactive and vulnerable position," says Thirunagalingam. "[W]hen done right, data governance is much more than a defense mechanism -- it's an enabler of innovation and efficiency."

    Stephen Christiansen, principal security consultant at cybersecurity consulting firm Stratascale, says the shortage of data professionals, exploding data growth, and ever-increasing requirements for AI and data security are causing organizations to take a more conservative approach.

    "Companies need to be continually investing in data technologies that help them manage, secure, and integrate data across their enterprise systems," says Christiansen in an email interview. "Internally, companies need to [build] a data-driven culture, so employees better understand the importance of data governance and how it benefits them."

    David Curtis, chief technology officer at global fintech RobobAI, says the average amount of data is growing 63% monthly. The speed and velocity of this growth is overwhelming, and companies are struggling to manage the storage, protection, quality, and consistency of this data.

    "Data is often collected in multiple different ERPs across an organization. This often means that data is disparate in format and incomplete. Eighty percent of companies estimate that 50% to 90% of their data is unstructured," says Curtis in an email interview.
    "Unstructured data creates challenges for large organizations due to its lack of standardization, making it difficult to store, analyze, and extract actionable insights, while increasing costs, compliance risks and inefficiencies."

    Companies need to start with a data governance strategy. As part of that, they need to review relevant business goals, define data ownership, identify reference data sources, and align data governance KPIs. For ongoing success, they need to establish an iterative process of continuous improvement by developing data processes and committing to a master data governance framework.

    "For every dollar you invest in AI, you should invest five dollars in data quality. In my experience, the most common data challenges are due to a lack of clear objectives and measurable success metrics around master data management initiatives," says Curtis. "Often insufficient or poor-quality data, often at scale, and limited integration with existing systems and workflows prevent scalability and real-world application." Evolving regulations are also adding fuel to the fire.

    "Organizations are continually challenged with complying with the constant stream of regulations from various jurisdictions, such as GDPR, HIPAA, and CCPA. These regulations keep evolving, and just when IT leaders think they've addressed one set of compliance requirements, a new one emerges with slight nuances, necessitating continuous adjustments to data governance programs," says Kurt Manske, information assurance and cybersecurity leader at professional services firm Cherry Bekaert. "The reality is that companies can't simply pause their operations to align with these ever-changing regulations. Consequently, developing, deploying and managing a data governance program and system is a lot like changing the tires on a car as it goes down the highway. [It's] an undeniably daunting task."

    This underscores the need to establish a resilient culture versus a reactive one.

    "Leading companies see regulatory compliance as a differentiator for their brand and products," says Manske in an email interview. "[One] key reason data governance programs and system deployment projects fail is that organizations try to take on too much at once. Big-bang deployment strategies sound impressive, but they often encounter numerous technical and cultural problems when put into practice. Instead, a metered or scaled deployment approach across the enterprise allows the team, vendor and governance leadership to continuously evaluate, correct and improve."

    The Sobering Truth

    Organizations that lack strong governance are drowning in data, unable to harness its value, and leaving themselves vulnerable to growing cyber threats. According to Klaus Jäck, partner at management consulting firm Horváth USA, incidents like the recent CrowdStrike outage are stark reminders of what's at stake. Data quality issues, silos, unclear ownership and a lack of standardization are just the tip of the iceberg.

    "The root cause of these struggles is simple: Data is everywhere. Thanks to new sensor technologies, process mining and advanced supervisory systems, data is produced at every step of every business process," says Jäck in an email interview. "The drive to monetize this data has only accelerated its growth. Unfortunately, many organizations are simply not equipped to manage this deluge."

    A truly effective strategy must go beyond policies and frameworks; it must include clear metrics to measure how data is used and how much value it creates.
    Assigning ownership is also key -- data stewards can help create a control environment sensitive to the nuances of modern data sources, including unstructured data.

    "Failing to connect governance to business goals or neglecting executive sponsorship are major mistakes," says Jäck. "Poor communication and training also derail efforts. If employees don't understand governance policies or don't see their value, progress will stall. Similarly, treating governance as a one-time project rather than an ongoing process ensures failure."

    Dimitri Sirota, CEO and co-founder of BigID, a security, privacy, compliance, and AI data management company, says data governance challenges often stem from poor data quality and insufficient governance frameworks.

    "Inconsistent data collection practices, lack of standardized formats for key data elements such as dates and numeric values, and failure to monitor data quality over time exacerbate the problem," says Sirota in an email interview. "Additionally, organizational silos and outdated systems can perpetuate inconsistencies, as different teams may define or manage data differently. Without a rigorous framework to identify, fix and monitor data issues, organizations face an uphill battle in maintaining reliable, high-quality data." (A small illustration of this standardization problem appears at the end of this article.)

    Ultimately, the absence of a centralized governance strategy makes it difficult to enforce standards, creating noise and clutter in data environments.

    Marc Rubbinaccio, head of compliance at security compliance provider Secureframe, points to a related issue: understanding where sensitive data resides and how it flows within organizations.

    "[T]he rush to adopt and implement AI within organizations and software products has introduced new risks," says Rubbinaccio in an email interview. "While the efficiency gains from AI are widely recognized, the vulnerabilities it may introduce often go unaddressed due to a lack of thorough risk evaluation. Many organizations are bypassing detailed AI risk assessments in their eagerness to stay ahead, potentially exposing themselves to long-term consequences."

    About the Author

    Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
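    As noted above, here is a small Python illustration of the date-format standardization problem Sirota describes. The accepted formats and the steward-review routing are assumptions made for the sketch:

```python
from datetime import datetime

# Dates arrive from different systems in different formats -- the
# inconsistency described above. The accepted formats are assumptions.
KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

def normalize_date(raw):
    """Coerce a date string to ISO 8601, or flag it for stewardship review."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # unparseable -> route to a data steward

for raw in ["2024-12-20", "12/20/2024", "20 Dec 2024", "Dec 20th"]:
    print(raw, "->", normalize_date(raw) or "NEEDS REVIEW")
```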
  • Top 5 Infrastructure for AI Articles in 2024
    Delivering and supporting the right infrastructure for AI, a major challenge in 2024, will remain a top challenge in years to come.
  • Does Desktop AI Come With a Side of Risk?
    Artificial intelligence capabilities are coming to a desktop near you with Microsoft 365 Copilot, Google Gemini with Project Jarvis, and Apple Intelligence all arriving (or having arrived). But what are the risks?
  • Cybercriminals and the SEC: What Companies Need to Know
    By Todd Weber, Vice President of Professional Services, Semperis
    December 19, 2024 | 5 Min Read
    (Image: YevhenShkolenko via Alamy Stock)

    The Securities and Exchange Commission (SEC) is putting a spotlight on security incident reporting. This summer, the SEC announced a rule change that requires certain financial institutions to notify individuals within 30 days of determining their personal information was compromised in a breach. Larger entities will have 18 months to comply, and enforcement will begin for smaller companies in two years.

    This rule change follows cybersecurity disclosure requirements for public companies that were adopted only a year prior and implemented on December 18, 2023 for larger companies and June 15, 2024 for smaller reporting companies. These changes are already having an impact on disclosures, even if not in the way the SEC intended.

    Under the disclosure requirements, public companies must report cybersecurity incidents within four business days of determining that an incident was material. But in mid-November, even before the rules were officially adopted, the AlphV/BlackCat ransomware gang added an early twist to its typical game by notifying the SEC that one of its victims had failed to report the group's attack within the four-day limit.

    This incident raised the sobering possibility that if companies don't report cyberattacks to the SEC, attackers will do it for them. The action has sparked concerns about the abuse of regulatory processes and worries that the new rules could unintentionally lead to early disclosures, lawsuits, and an increase in attacks.

    I'm not convinced threat groups have the upper hand. We must assume the SEC or its contractors are monitoring the dark web for information on attacks that impact publicly traded companies. Still, organizations would be wise to strengthen their defenses and prepare for the worst-case scenario.

    As Cyberattacks Increase, Identity Is in the Spotlight

    The SEC's disclosure rules come as cyberattacks continue to rise in scale and severity, with identity-based attacks at the forefront. Verizon's 2023 DBIR found that 74% of all breaches involved the human element, while almost a quarter (24%) involved ransomware.

    Active Directory (AD) and Entra ID identity systems, used in more than 90% of enterprises worldwide, provide access to mission-critical user accounts, databases, and applications. As the keeper of the keys to the kingdom, AD and Entra ID have become primary targets for identity-based attacks.

    It's too early to know whether cybercriminals reporting their attacks to the SEC will become a trend. Regardless, it is critical for organizations to take a proactive approach to identity security. In today's digital world, identities are necessary to conduct business. But the unfettered access that identity systems can provide attackers presents a critical risk to valuable data and business operations. By taking steps to strengthen their cybersecurity posture, incident response and recovery capabilities, and operational resilience, organizations can help prevent bad actors from infiltrating identity systems.

    Protect Active Directory, Build Business Resilience

    Securing AD, Entra ID, and Okta is key to identifying and stopping attackers before they can cause damage. AD security should be the core of your cyber-resilience strategy.

    Attacks are inevitable, and organizations should adopt an "assume breach" mindset. If AD is taken down by a cyberattack, business operations stop, and excessive downtime can cause irreparable harm to an organization.
    Henry Schein was forced to take its e-commerce platform offline for weeks after being hit by BlackCat ransomware three times; the company lowered sales expectations for its 2023 fiscal year due to the breach. Having an incident response plan and a tested AD disaster recovery plan in place is vital.

    Here are three steps organizations can take to strengthen their AD security -- before, during, and after a cyberattack.

    1. Implement a layered defense. Cyber resilience requires a certain level of redundancy to avoid a single point of failure. The best defense is a layered defense. Look for an identity threat detection and response (ITDR) solution that focuses specifically on protecting the AD identity system.

    2. Monitor your hybrid AD. Regular monitoring of the identity attack surface is critical and can help you identify potential vulnerabilities before attackers do. An effective monitoring strategy needs to be specific to AD. Use free community tools like Purple Knight to find risky configurations and vulnerabilities in your organization's hybrid AD environment. (A minimal example of this kind of check appears at the end of this article.)

    3. Practice IR and recovery. An incident response (IR) plan is not a list to check off. It should include tabletop exercises that simulate attacks and involve business leaders as well as the security team. Even with a tested AD disaster recovery plan, your organization is still vulnerable to business-crippling cyber incidents. However, IR testing greatly improves your organization's ability to recover critical systems and data in the event of a breach, decreasing the risk of downtime and data loss.

    From my own experience, I know that the key difference between an organization that recovers quickly from an identity-related attack and one that loses valuable time is the ability to orchestrate, automate, and test the recovery process. Here are my tips for a swift incident response:

    Backups are an essential starting point for business recovery. Make sure you have offline/offsite backups that cannot be accessed using the same credentials as the rest of your production network.

    The best approach to recovery is "practice makes progress." A convoluted recovery procedure will delay the return to normal business operations. Verify that you have a well-documented IR procedure that details all aspects of the recovery process -- and that the information can be accessed even if the network is down.

    Orchestrate and automate as much of the recovery process as possible. Time is the critical factor in recovery success. Automation can make the difference between a recovery that takes days or weeks and one that takes minutes or hours.

    The prospect of attackers outing their victims to the SEC underscores the importance of protecting systems in the first place. Organizations need to take the necessary steps, starting with securing their identity systems. Whether your organization uses AD, Entra ID, or Okta, any identity can provide a digital attack path for adversaries seeking your most valuable assets.

    About the Author

    Todd Weber is the Vice President of Professional Services at Semperis, where he is responsible for developing and executing the company's professional services strategy, driving new revenue through service offerings, and building and maintaining client relationships. Weber has more than 20 years of experience in cybersecurity professional services, technology development and integration, business strategy, and venture investing.
    He has worked with many of the world's largest companies, developing and deploying information security technologies and architectures. Prior to Semperis, Todd was an operating partner and CTO at Ten Eleven Ventures. He previously served as CTO at Optiv. He holds a B.S. from Virginia Tech.
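    As referenced in step 2 of the article above, here is a minimal Python sketch of one risky-configuration check a monitoring routine might perform, using the open-source ldap3 library to flag accounts whose passwords never expire. The server address, credentials, and search base are placeholders, and a real assessment would cover far more settings:

```python
# Requires: pip install ldap3. Server, credentials, and search base are
# placeholders; this flags one classic risky AD setting.
from ldap3 import Server, Connection, SUBTREE

# LDAP bitwise matching rule with the DONT_EXPIRE_PASSWORD flag (0x10000).
NEVER_EXPIRES = "(userAccountControl:1.2.840.113556.1.4.803:=65536)"

server = Server("ldaps://dc.example.local")          # placeholder DC
conn = Connection(server, user="EXAMPLE\\auditor",   # placeholder account
                  password="change-me", auto_bind=True)

conn.search(search_base="DC=example,DC=local",
            search_filter=f"(&(objectCategory=person){NEVER_EXPIRES})",
            search_scope=SUBTREE,
            attributes=["sAMAccountName"])

for entry in conn.entries:
    print("Password never expires:", entry.sAMAccountName)
```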
  • Tech Goes Nuclear
    Artificial intelligence is fueling faster and better organizational intelligence, helping business leaders navigate the complexities of the world in new and innovative ways. However, these gains aren't without growing pains. AI is also straining the energy grid: over the coming years, its role in global energy consumption is projected to rise by 26% to 36% annually.

    As companies seek to boost energy availability, control costs, and meet ambitious climate targets, the concerns -- and real-world problems -- multiply. Although renewables such as solar and wind now play a key role in supplying power, consumption continues to outstrip energy growth.

    The answer? Large enterprises -- including tech firms that require enormous amounts of energy to keep data centers running -- are turning to nuclear power. It can play an important role in supplying energy and supporting decarbonization, says Jennifer Gordon, director of the Nuclear Energy Policy Initiative at the Atlantic Council, a non-partisan think tank.

    Adds Martin Stansbury, US power, utilities and renewables risk and financial advisory leader at Deloitte: "As demand for energy grows and reliable clean energy becomes the focus, nuclear is an appealing option."

    Google, Microsoft, AWS, Meta and others have recently announced a spate of projects that incorporate nuclear power. This includes building or restarting conventional reactors and developing energy systems based on advances in small modular reactors (SMRs) and microreactors that can operate at a site or facility.

    "There is an increased interest in pursuing a low-carbon grid that's resilient, reliable and affordable," states John F. Kotek, senior vice president of policy development at the Nuclear Energy Institute (NEI). "This is leading people to take a fresh look at nuclear power. It can help companies build a more efficient energy infrastructure."

    Nuclear Charges Forward

    The enormous growth of data centers -- fueled by digitalization and AI -- is fundamentally changing the stakes for companies large and small. Consulting firm McKinsey & Company reports that data center operations will more than double their energy requirements, to 35 gigawatts, from 2022 to 2035. The US Department of Energy notes that overall electricity demand could double by 2050.

    Unlike wind and solar, nuclear delivers a consistent source of carbon-free energy. In fact, it has the highest capacity factor of any energy source, at 92%, according to the US Department of Energy. That's about double natural gas and coal, and about three times more reliable than wind and solar. Nuclear facilities also require relatively little land and fuel, and advances in storage have made it safer to handle radioactive waste.

    At present, 94 licensed reactors operate in the US, producing about 20% of US electrical output. Yet in recent months, Google, Microsoft, Amazon and others have pivoted to nuclear. Microsoft, for example, is reopening the former Three Mile Island facility in Pennsylvania. Renamed the Crane Clean Energy Center and costing about $1.6 billion, it is slated to go back online in 2028 and will deliver 835 megawatts (MW) of continuous electricity to operate Microsoft's data centers and cloud computing infrastructure.

    Meanwhile, Google signed a deal with startup Kairos Power in October 2024 to build a series of small modular reactors for data centers and other facilities. Google aims to have these systems fully operational by 2030. The same month, Amazon announced that it is investing in SMRs.
    It is working with a company called X-Energy, as well as public utilities Energy Northwest and Dominion Energy, to boost capacity by 2030.

    Positive Reactions

    This isn't your grandfather's nuclear power technology. SMRs can deliver up to 300 megawatts of continuous power, which is ideal for a data center or a manufacturing facility that relies on robotics and other energy-intensive equipment. Different companies tap different technologies to power the reactors, including light water reactors (LWRs), molten salt reactors (MSRs), high-temperature gas-cooled reactors (HTGRs) and lead-cooled fast reactors (LFRs).

    SMRs deliver a level of flexibility that conventional energy sources cannot. "You can position a system inside or close to a data center and not only have a major source of carbon-free power but also greater resiliency," says Gillian Crossan, advisory principal and global technology leader at Deloitte. "In the event of a weather-related disruption or other event, you can continue to operate."

    Another thing that makes SMRs attractive is an array of enhanced safety features, including a smaller core size that reduces heat and radiation, lower operating pressures, and a simplified design that uses fewer pumps and valves. These systems also offer passive cooling that requires no human intervention in the event of an accident or emergency. "Automated safety is built into these systems," Stansbury says.

    Another type of nuclear power system, the ultra-compact microreactor, can generate up to 20 megawatts of continuous power. It's possible to transport these systems by train or truck to a temporary site or facility, such as a mine or remote construction site. The technology can also dial up power for a data center or manufacturing site that would otherwise go offline during a hurricane, earthquake or other emergency.

    Fueling Progress

    At present, about 150 small modular reactors are in development around the world, according to Deloitte. Most won't be fully operational for the next five to seven years. This isn't a wait-and-see proposition. "Companies need to start planning for their future energy requirements and capacity," Stansbury says. "You have to determine whether or not you want to dip your toes into the SMR space."

    Nuclear energy won't displace or replace renewables; it complements them, Kotek says. "Nuclear power unlocks a lot of opportunities. One of the biggest advantages for companies that use small modular reactors and microreactors is that you don't have to build a single large reactor. You can use dozens of these systems in a very flexible way and assemble them at sites as needed."

    Of course, nuclear power isn't without challenges. SMRs could increase nuclear waste output. They're also expensive to build and subject to frequent cost overruns, partly due to complex and inconsistent regulations, the US Department of Energy notes. Finally, there are concerns about how SMRs and other onsite energy sources could impact the overall grid. "Nuclear power could help bring stability to the grid, but there's still going to be the need for regulation," Stansbury says.

    For companies considering a nuclear future, Kotek recommends studying the different types of SMRs and microreactors and understanding what's the best fit. "There's a learning curve associated with these technologies," he says.
    "Organizations that build internal expertise are able to scale up the technology over time and generate more dependable energy and better returns."

    Concludes Gordon: "As companies seek out always-on energy that's fully decarbonized, nuclear stands out. It alone can't solve the energy problem, but it's emerging as a key part of a balanced energy framework."
  • Ransomware Attack on Rhode Island Highlights Risk to Government
    On Dec. 5, a warning from vendor Deloitte alerted the state government of Rhode Island that RIBridges, its online social services portal, was the potential target of a cyberattack. By Dec. 10, Deloitte confirmed the breach. On Dec. 13, Rhode Island instructed Deloitte to shut down the portal due to the presence of malicious code, according to an alert published by the state government.

    Brain Cipher, the group claiming responsibility, is threatening to release the sensitive data stolen in the attack, potentially impacting hundreds of thousands of people, according to The New York Times.

    State and local government entities, such as RIBridges, are popular targets for ransomware gangs: they are repositories of valuable data, provide essential services, and are often under-resourced. What do we know about this attack so far, and what ongoing cyber risks do state and local governments face?

    The Brain Cipher Attack

    RIBridges manages many of Rhode Island's public benefits programs, such as the Supplemental Nutrition Assistance Program (SNAP), Medicaid, and health insurance purchased on the state's marketplace. Deloitte manages the system, and Brain Cipher claims to have attacked Deloitte, BleepingComputer reports.

    "We are aware of the claims by the threat actor. Our investigation indicates that the allegations relate to a single client's system, which sits outside of the Deloitte network. No Deloitte systems have been impacted," according to an emailed statement from Deloitte.

    The information involved in the breach could include names, addresses, dates of birth and Social Security numbers, as well as certain banking information, according to the RIBridges alert.

    Rhode Island Governor Daniel McKee (D) issued a public service announcement urging the state's residents to protect their personal information in the wake of the breach.

    "Based on the information that's being put out there by the governor about the steps you can take to minimize the fallout of this, that tells me that they're unlikely to be paying the ransom," says Truman Kain, senior product researcher at managed cybersecurity platform Huntress.

    Brain Cipher appears to be a relatively new ransomware gang. "We've tracked five confirmed attacks so far, including this one. Two others have been on government entities as well: one in Indonesia and one in France," Rebecca Moody, head of data research at Comparitech, a tech research website, tells InformationWeek.

    In June, the ransomware group hit Indonesia's national data center and demanded an $8 million ransom, which it ultimately did not receive. In August, it posted Réunion des Musées Nationaux (RMN), a public cultural organization in France, to its data leak site, alleging the theft of 300GB of data, according to Comparitech.

    In addition to these confirmed attacks, there are 19 unconfirmed attacks potentially linked to Brain Cipher, according to Moody. It is unclear how much the group may have collected in ransoms thus far.

    "It's always really difficult to know when people have paid because, obviously, if they pay, they [threat groups] shouldn't really add them to the data leak site, and obviously, companies are very reluctant to tell you if they've paid a ransom because they think it leaves them open to future attack," says Moody.

    Ransomware Attacks on Government

    Government remains a popular target for threat actors. "They are vulnerable because they are a key service for people, and they can't afford downtime," says Moody.
    "It is one of the sectors where we've seen a consistently high number of attacks."

    Between 2018 and December 2023, a total of 423 ransomware attacks on US government entities resulted in an estimated $860.3 million in downtime, according to Comparitech. For 2024, Comparitech tracked 82 ransomware attacks on US government agencies, up from 79 last year.

    Of the 270 respondents in the state and local government sector included in The State of Ransomware in State and Local Government 2024 report from Sophos, just 20% paid the initial ransom demand. States such as Florida, North Carolina, and Tennessee have legislation limiting or even prohibiting public entities from paying ransom demands.

    That doesn't necessarily mean threat actors will avoid targeting government entities. Even if a threat group cannot successfully extort a victim, it can still sell stolen data to the highest bidder. "Ransoms are probably higher than what they would get for leaking the data. It depends on how much data is stolen, though, and the value of that data," says Moody.

    Regardless of whether a government agency pays when hit with ransomware, it still must deal with the disruption and fallout.

    While cybersecurity threats to local and state governments are highly publicized, funding continues to be a stumbling block. Just 36% of local IT executives report that they have an adequate budget to support cybersecurity initiatives, according to the 2023 Local Government Cybersecurity National Survey from the Public Technology Institute.

    While budgets may be limited, cybersecurity cannot be ignored, Kain argues.

    "I think it's kind of an excuse for state and local governments to say, 'Oh, well, we just don't have the budget,' so cybersecurity is an afterthought," he says. "Things should really start from a cybersecurity perspective, especially when you're dealing with sensitive data like this."

    State and local government agencies can focus on cybersecurity basics, like enabling multi-factor authentication, regular security awareness training for staff, and vulnerability patching. "It's those key things that don't necessarily cost a lot," says Moody. "Also [be] prepared for the inevitable, because no one's immune to them [attacks]."
  • Cyber Alignment: Key to Driving Business Growth and Resilience
    In today's business landscape, incorporating cybersecurity into enterprise risk management is a critical imperative for organizations. As cyber threats evolve, organizations must move beyond viewing cybersecurity as a technical concern and recognize its profound impact on financial stability, reputation, compliance, and resilience.

    This new model requires a fundamental shift in how the C-suite and board of directors approach cybersecurity. Change comes from understanding the criticality of moving away from a focus on technical issues toward more comprehensive, business-aligned strategies that encompass risk for the entire organization.

    To effect this shift, leadership should cultivate broader digital competencies and foster a deeper understanding of cybersecurity as part of the overall risk management strategy. Chief information security officers (CISOs) will play a pivotal role in this transformation, aligning efforts more closely with overarching business objectives.

    Cybersecurity as a Core Business Function

    Cybersecurity conversations should extend far beyond the security team, engaging a broader set of stakeholders, including board members and risk management executives. Nearly 40% of leaders surveyed by the World Economic Forum believe that cyberattacks represent a paramount global risk. However, most organizations remain mired in Gen 1.0 cyber thinking: that cybersecurity is an IT problem or, worse, that cyber won't strike.

    Change will only come from understanding how threats specifically impact an organization's business, operations, sustainability, and financial condition. Whether a hospital, bank, insurer, or manufacturing giant, the implications of an incident vary dramatically.

    Board Engagement and Competency

    Boards are becoming involved in cybersecurity, but many fear that they lack the necessary digital competencies or may expose themselves to risk. There's a growing need for boards to include cyber experts who can translate technical risks into business terms, and to create risk committees that ensure informed decision-making and oversight.

    The challenge lies in shifting perspectives from viewing cybersecurity as a costly problem best solved by technical solutions alone to understanding the cyber domain as an enterprise risk with shared roles and responsibilities. To facilitate this transition, it's crucial to provide plain-business-language assessments along with analytics that align investment decisions and help mitigate known risks.

    Organizations also need to understand what an optimal insurance or risk transfer structure looks like for their specific entity. This involves stress-testing existing policies across a range of potential cyber incidents.

    Finally, directors want cybersecurity exposures presented in terms that resonate with their expertise in business, operations, governance, legal matters, and finance.
With budgets always top of mind, it is crucial to allocate cybersecurity capital wisely. Shifting away from conceiving of cybersecurity as a cost center and toward viewing it as part of the long-term capital budget is a worthwhile conversation for organizations to consider.

Ultimately, the business must decide on its risk tolerance, ideally elevating this decision to the board level. Presenting the facts, including potential losses, mitigation strategies, and costs, allows boards to make informed decisions about acceptable risks and ROI.

CISO Evolution and the Future of Cyber Risk Governance

As the role of the CISO expands beyond technical expertise, there's a growing need for a new breed of digital risk leaders who can bridge the gap between cybersecurity and wider business objectives. Organizations are exploring innovative governance structures, such as creating a chief digital risk officer role to oversee a broader portfolio of digital exposures.

Looking ahead, integrating cybersecurity into enterprise risk management will entail a multi-faceted approach. This includes developing risk committees to address complementary domains like supply chain and technology risks, while leveraging evolving frameworks like NIST CSF 2.0, the SEC's cyber rules, and regulations such as the EU's AI Act, NIS2, and DORA.

A Framework for Board Engagement

Effective cybersecurity governance at the board level rests on three pillars: substance, frequency, and structure. The information presented must align cyber risks with tangible business exposures, moving beyond technical jargon. The frequency of discussions should be calibrated to ensure timely oversight without overwhelming the board's agenda. Finally, determining the appropriate committee structure is crucial for fostering in-depth and relevant discussions.

As the cyber landscape evolves, a holistic approach to cybersecurity will be essential for organizations to effectively navigate risks and align their cyber strategies with overarching business objectives. By integrating cybersecurity into the core of corporate governance, organizations can transform security from a reactive measure into a strategic asset -- enhancing resilience, fostering innovation, and maintaining competitive advantage.
  • WWW.INFORMATIONWEEK.COM
    Things CIOs and CTOs Need To Do Differently in 2025
Lisa Morgan, Freelance Writer | December 18, 2024 | 10 Min Read | Orazio Puccio via Alamy Stock

It's that time of year again: the time when journalists and vendors make predictions and IT leaders set priorities for the new year. In a lot of ways, the stakes are high, given a new US presidential administration and the active conflicts in various parts of the world. What will happen to the economy and IT budgets? What will all the unrest mean for business continuity and cyberattacks?

As the world and technology become increasingly complex, CIOs and CTOs need to figure out what that means for the organization as well as the IT department. Loren Margolis, faculty at Stony Brook University's Women Leaders in STEM program, warns that IT leaders need to proactively combat cybersecurity threats that continue to become more sophisticated.

"To proactively combat [cyberattacks], leaders must think like [attackers]," says Margolis. "Questions to ask [include]: What are our potential openings and soft spots? What are our competitors doing to combat them? If I were a nefarious operative, what would I do to breach our system?"

She also says CIOs and CTOs need to get ahead of machine learning to increase customer satisfaction, reduce costs, and increase efficiency. In addition, IT leaders should consider the skill gaps in their workforce.

"Keep ahead of, or at least on top of, the cybersecurity, artificial intelligence, and data analytics skills that are needed. Acquire talent and develop that talent so your company remains competitive," says Margolis. "Find ways to use [AI and analytics] to become even more agile so you remain competitive. Also embrace them as opportunities to train and develop your workforce. Make sure your organization is a place where great tech talent can come to develop and use their skills."

The following are some other priorities for 2025.

1. Increase value delivery

Joe Logan, CIO at cloud-native knowledge platform iManage, believes CIOs and CTOs will be focused on driving cost to value, especially when it comes to security.

"Because the nature of the threat that organizations face is increasing all the time, the tooling that's capable of mitigating those threats becomes more and more expensive," says Logan. "Add to that the constantly changing privacy and security rules around the globe, and it becomes a real challenge to navigate effectively."

Also realize that everyone in the organization is on the same team, so problems should be solved as a team. IT leadership is in a unique position to help break down the silos between different stakeholder groups. The companies that master cross-functional problem-solving tend to drive higher value than those that don't.

2. Ensure AI investment ROI

In 2024, many organizations discovered that their AI investments weren't paying off as expected. As a result, AI investments are shifting from rapid innovation at any cost to measurable ROI. Heading into 2025, Uzi Dvir, global CIO at digital adoption platform WalkMe, says CIOs and CTOs will face increased pressure to justify AI investments in the boardroom.

"Change management is emerging as a crucial factor for companies to fully realize the benefits of their AI investments, and companies are gravitating towards more intuitive, human-centric AI," says Dvir. "Faced with more and more AI apps, employees are asking themselves if it's worth the time and effort to figure out how to use these new technologies for their specific roles."
"In response, enterprises are now prioritizing better visibility into AI adoption [and identifying] areas ripe for optimization and enhanced security," he adds. "As always, the path to AI mastery doesn't lie in technology advancements alone. Companies that actively start investing in and addressing change management will reap the true rewards of their technology investments."

3. Overcome budget limitations

Every IT leader is under pressure to improve efficiency and time to market while reducing costs. As is typical, they're being asked to do more with less, and to do it faster, but in 2025 they'll increase their usage of AI, machine learning, and low-code/no-code platforms to improve efficiency.

"We are expecting to realize a 10% to 20% improvement in developer productivity via the use of products like GitHub Copilot and Amazon Q. Our current run-rate usage of these products has us bringing in the equivalent of an entire product's code base worth of AI-generated code every year," says Steven Berkovitz, CTO of restaurant technology solutions company PAR Technology. "We also expect these tools to help our developers focus their time on the hard and novel problems and spend less time on the repetitive tasks of development. We particularly expect this to help accelerate starting new projects and products, as much of the boilerplate work can be automated."

However, many developers hesitate to use AI for fear of job loss.

"I think [job loss] concerns are overstated, and developers should be embracing the tooling to improve their efficiency versus fighting it," says Berkovitz. "[AI] will make them better, faster developers, which makes them more valuable to companies, not less."

4. Strengthen cybersecurity

Cybersecurity threats are becoming more sophisticated, necessitating stronger defense mechanisms. Unfortunately, the digital services enterprises use to innovate are also exploited by threat actors.

"Strengthening cybersecurity measures will protect company assets and build trust with customers and partners," says Rob Kim, CTO at technology services and solutions provider Presidio. "Challenges include the scarcity of skilled professionals in emerging technologies, [including] GenAI, data/lakehouse modernization, and cybersecurity. Ensuring data privacy and regulatory compliance in a rapidly evolving legal landscape can also be complex."

5. Deal with the lingering talent shortage

The World Economic Forum found there's a global shortage of nearly 4 million professionals in the cybersecurity industry as demand continues to increase. That shortage follows a 12.6% growth rate in the cybersecurity workforce between 2022 and 2023. Highly regulated industries, such as government and healthcare, are among those experiencing the greatest cybersecurity workforce shortages, which presents unique challenges.

"This same narrative has been repeating for years: Businesses are moving to the cloud and facing tighter compliance regulations while budgets remain tight and security threats grow more serious," says Jim Broome, president and CTO at information security services company DirectDefense. "It all requires more staff with advanced skill sets and an ability to learn and adapt to constant changes, which can lead to burnout."

Expect the trend to continue.

6. Ignite innovation

CIOs and CTOs face several risks as they attempt to manage technology, privacy, ROI, security, talent, and technology integration.
According to Joe Batista, chief creatologist and former Dell Technologies and Hewlett Packard Enterprise executive, senior IT leaders and their teams should focus on improving the conditions and skills needed to address such challenges in 2025 so they can continue to innovate.

"Keep collaborating across the enterprise with other business leaders and peers. Take it a step further by exploring how ecosystems can impact your business agenda," says Batista. "[F]oster an environment that encourages taking on greater risks. The key is creating a space where innovation can thrive, and failures are stepping stones to success."

7. Understand customers better and remain curious

Just about every organization believes it is customer-centric and knows its customers, but actual customer experiences often tell a different story. Batista advises getting to know customers and the customers' customers to move beyond superficial engagement. To do that, IT leaders should map customers' journeys, experience the customer journey for a day, hold regular insight sessions to dig deeper into customer needs, research the customer's world, and be consistently available to customers.

"By doing this, you can build a future-forward learning team. Understanding what skills, knowledge, and connections you may need a year from now allows you to start learning and growing today. This proactive approach will help you face future changes with confidence and readiness," says Batista. "If I could offer one piece of advice to a peer for 2025, it would be simple: STAY CURIOUS! Curiosity drives us to ask the important why and how questions, leading to deeper analysis and more creative solutions. Embrace not knowing as an opportunity to learn. Explore new interests and make it a habit to question your assumptions about people, situations, or ideas."

8. Unearth novel insights about data

With the explosion of unstructured data, CIOs and CTOs need better insights into it. Such insights are key for managing the lifecycle of data from creation to archiving. Insights are also critical for ensuring the most appropriate data is included in the data lakes and data lakehouses that support new AI/ML workloads.

"In 2025, the amount of unstructured data stored in both public cloud and private cloud environments will continue to grow," says Carl D'Halluin, CTO at hybrid cloud data services provider Datadobi. "It's no longer realistic to ignore the fact that, in most organizations, data lives in a hybrid environment and global data management is required."

9. Cloud adoption

CIOs and CTOs in remote-based industries such as maritime and oil and gas have been slower to adopt cloud technologies than their peers in other industries. However, that is changing as a result of satellite connectivity.

"Data processing teams will be able to work remotely, with minimal physical infrastructure," says Andrew Lunstad, CTO of ocean data platform Terradepth. "This shift will reduce the need for physical equipment on-site or on vessels, freeing up costly space and allowing teams to work from virtually anywhere."

Another driver is the desire to accelerate data availability and minimize the risk of loss or damage to physical hard drives. However, adopting cloud-based processes requires sound change management because it potentially challenges long-standing practices.
10. Enable extreme agility to weather shifting geopolitical threats

In the wake of the election, Lou Steinberg, founder and managing partner of cyber research incubator CTM Insights, says CIOs and CTOs should expect geopolitical changes that will change threat actors' behavior.

"Our defenses need the agility to adapt. Where you operate, and your industry, should dictate what you do next," says Steinberg, who outlines the following scenarios:

Russia may diminish its threat against the US given President-elect Trump's more favorable relationship with President Putin; European support for the war in Ukraine will likely dictate whether the same holds true there. An emboldened Russia might increase DDoS attacks against Western-leaning states in the Balkans, Georgia, and Moldova while increasing the use of AI-generated disinformation campaigns throughout Western Europe. Ransomware will continue to hit from multiple sources, but ransomware from Eastern Europe is generally less prevalent in nations that the Kremlin views as friendly.

The Middle East may drive more cyberattacks against nations that seemingly support Israel. If Iran and Israel engage more significantly, regional groups will likely increase DDoS and hacktivist activities to draw attention to their cause. At the same time, Iran may seek to increase the cost of supporting Israel through unattributed attacks against critical Western infrastructure such as power generation, municipal water, and dams.

North Korea and the Trump administration could rekindle discussions that lead to reduced sanctions, thereby reducing the DPRK's interest in financial theft. If the DPRK no longer sees a Trump administration as one that negotiates in good faith, financial attacks will continue, and DDoS attacks could increase against American allies South Korea and Japan.

China's likelihood of conflict is increasing. To date, it has primarily focused on data theft, intelligence gathering, and preparing for cyber war, all of which rely on stealth. Should the US impose sanctions that cripple its economy, or should China decide to take Taiwan by force, stealthy behavior could be replaced by something much noisier. Backdoors could be used to disable critical infrastructure in banking, power generation and distribution, communications, and more. In the event of armed conflict with Taiwan, significant attacks against US infrastructure could be used to blunt its ability to intervene.

None of these outcomes is guaranteed, but all are plausible. "What's certain is that adversaries have interests, and their tactics reflect them," says Steinberg. "Defenders need to consider how to adjust to a changing landscape as the threats change, or risk investing in immaterial controls at the expense of what's now needed. Buckle up; it's likely to be a bumpy ride."

About the Author

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • WWW.INFORMATIONWEEK.COM
    Forrester Award Keynote: Schneider Electric Deputy CISO on Managing Trust, Supplier Risk
During a keynote at last week's Forrester Security & Risk Summit in Baltimore, the research firm presented energy management and industrial automation company Schneider Electric with the Security & Risk Enterprise Leadership Award. Stephanie Balaouras, vice president and group director at Forrester, led a discussion with Mansur Abilkasimov, Schneider Electric's deputy CISO and chief product security officer, and bestowed this year's honor.

Balaouras noted that the judges, a group of Forrester analysts, voted unanimously to choose Schneider Electric. Barclays was the first recipient of the award, in 2023. Schneider Electric's ability to integrate security, privacy, and risk management across the enterprise stood out as a factor in its selection, according to Balaouras.

"We wanted to recognize organizations that have figured out how to take these functions, embed them across the enterprise, and actually use them as a driver of business, use them to drive business success and drive results, and improve the organization's reputation for trust with customers, employees, and partners," Balaouras told the audience.

A Holistic Approach to Security and Trust

Schneider Electric is a company that develops everything from DC chargers to safety instrumented systems. It maintains a holistic approach to energy management in which security, privacy, and risk do not exist in silos.

Carrying out an integrated strategy is a challenge for a company like Schneider Electric, given its wide footprint in infrastructure, distribution centers, and factories filled with industrial machines. Abilkasimov told the audience that nobody can achieve 100% visibility, but gaining this visibility as part of risk management is a key challenge for the organization.

In his keynote, Abilkasimov stressed that product security is not an afterthought; it is integrated into the holistic vision of a product's life cycle. "In a security-by-design or security-by-operations strategy, the manufacturing teams are responsible for security by design as well as security by operation," he said.

The company received the award because of its implementation of a Trust Charter, which incorporates ethics, safety, cybersecurity, and governance, as well as a Trust Center, which addresses the requests of customers and stakeholders in security and data protection.

"The Trust Charter is a document that embodies all our principles and tenets for code of conduct, from AI to cybersecurity, from ethics and compliance to price, from safety to quality," Abilkasimov explained in the keynote.

Abilkasimov and his team also organize a Trust Month, in which they lead discussions around cybersecurity and trust with employees and partners.

"Cyber is one of the pillars of this trust," he said.

Trust is important for both cybersecurity and talent retention. Forrester recognized Schneider Electric for its ability to find talent for cybersecurity roles in operational technology (OT), according to Balaouras.

"Companies that are trusted, they earn and retain customers," Balaouras told the audience. "They earn and retain the best talent."
She added: "And what we've also found is customers are actually more willing to share sensitive data with trusted companies and even embrace emerging tech, where in other situations they would have skepticism or fear of engaging with that emerging tech."

Schneider Electric Tackles Third-Party Risk

In his keynote remarks, Abilkasimov described Schneider Electric's approach to managing risk from the company's 52,000 suppliers, which include suppliers of Internet of Things components and regular IT as well as service providers. He explained that companies must prioritize which suppliers to work with on a security assessment.

"It's impossible to cover all of the suppliers with a cybersecurity or third-party security program, so sometimes you need to choose your battle," Abilkasimov told InformationWeek after the session.

Schneider Electric has added 5,000 suppliers to its third-party cybersecurity program. It started with the 300 most critical IT suppliers, and the company will grow the program further, according to Abilkasimov.

"We work with those companies on cyber, crisis simulations, partnerships, C-level connections, and continuous monitoring through threat intelligence or cybersecurity scoring platforms," Abilkasimov said in our interview. He added, "Be it an IoT supplier or a simple product security component supplier, they all go through this process."

In Forrester's Security Survey 2024, 28% of breaches stemmed from a software supply chain attack. And in another Forrester report, What 2023's Most Notable Breaches Mean for Tech Execs, third-party vulnerabilities were the top cause of breaches in 2023, comprising 23% of all breaches.

How Forrester Chooses Its Security Leadership Award Winners

Forrester opened nominations for the award on May 1. Balaouras said the evaluation process is similar to a security maturity assessment. Companies must show metrics or KPIs that prove ROI, and they should exhibit how they approach security by design and privacy by design.

"We talk about their overall approach to embedding security, privacy, and risk management across the enterprise -- not as discrete functions, but how they embed it across the enterprise," Balaouras told InformationWeek after the session.

Balaouras stressed that Forrester doesn't handpick the winners. "We put out the award and put out the criteria, and we invite companies and organizations from the public sector to look at them and nominate themselves," she said.

Barclays received the award in 2023 for maintaining trust and transparency in its universal banking operations and for its human risk behavior metrics that revamped the company's security culture. A key factor in Schneider Electric's success in managing security and risk is making trust concrete, according to Balaouras.

"When I compare Barclays to Schneider Electric, I think one thing they had in common was executive-level commitment to security, privacy, and risk management as critical features of building trust," Balaouras said. "Both organizations from top to bottom really had buy-in."

She continued, "When I look at Schneider, they put trust front and center, and they had operationalized it. What was truly unique at Barclays last year was they had really extensive security awareness and training for a large financial institution. They had really mapped out all the complex matrices, all the different stakeholders who work together."

Balaouras also noted Schneider Electric's Cyber Risk Register and how the company integrates it into the organization to make people accountable.
The cybersecurity team manages the register to track potential threats, such as those that may come from third parties.

"When it comes to the cybersecurity side, it always comes back to the risk register," Abilkasimov said.
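A risk register of this kind is, at bottom, a prioritized data structure. The sketch below shows one minimal way to model it; the fields, 1-5 scoring scheme, and sample entry are illustrative assumptions, not Schneider Electric's actual implementation.

```python
# Minimal sketch of a cyber risk register; fields and scoring are
# illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    source: str                    # e.g., "third-party supplier"
    likelihood: int                # 1 (rare) to 5 (almost certain)
    impact: int                    # 1 (minor) to 5 (severe)
    owner: str                     # the accountable person or team
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("R-014", "Unpatched file-transfer appliance at a supplier",
              "third-party supplier", likelihood=4, impact=4,
              owner="Supplier Security",
              mitigations=["continuous scoring", "crisis simulation"]),
]

# Triage: review the highest-scoring risks first.
register.sort(key=lambda r: r.score, reverse=True)
```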
  • WWW.INFORMATIONWEEK.COM
    EU Watchdog Fines Meta $263 Million for Data Breach
    The Irish Data Protection Commission (DPC) says the Facebook parent failed to report and document a 2018 breach that impacted 29 million users, including 3 million in the European Union.
  • WWW.INFORMATIONWEEK.COM
    9 Cloud Service Adoption Trends
Lisa Morgan, Freelance Writer | December 16, 2024 | 11 Min Read | Dubo via Alamy Stock

As the competitive landscape changes and the mix of cloud services available continues to grow, organizations are moving deeper into the cloud to stay competitive. Many are adopting a cloud-first strategy.

"Organizations are adopting more advanced, integrated cloud strategies that include multi-cloud environments and expanded services such as platform as a service (PaaS) and infrastructure as a service (IaaS)," says Bryant Robinson, principal consultant at management consulting firm Sendero Consulting. "This shift is driven by increasing demands for flexibility, scalability, and the need to support emerging technologies such as remote collaboration, real-time data processing, and AI-powered diagnostics."

Recent surges in cyberattacks have also accelerated these changes, highlighting the need for adaptable digital infrastructure to ensure continuity of business processes, enhance user accessibility, and protect sensitive customer data.

"Companies that are succeeding with cloud adoption are investing in improved security frameworks, focusing on interoperability, and leveraging cloud-native tools to build scalable applications," says Robinson. "In addition, certain industries have to prioritize technology with regulation and compliance mechanisms that add a level of complexity. Within healthcare, for example, regulations like HIPAA are [considered] and prioritized through implementing secure data-sharing practices across cloud environments."

However, some organizations struggle with managing multi-cloud complexity and the resulting inability to access, share, and seamlessly use data across those environments. Organizations may also lack the in-house expertise needed to implement and operationalize cloud platforms effectively, leading to inefficient use of resources and potential security risks.

"Organizations should develop a clear, long-term cloud strategy that aligns with organizational goals, focusing on interoperability, scalability, and security. Prioritize upskilling IT teams to manage cloud environments effectively, and invest in disaster recovery and cybersecurity solutions to protect sensitive customer data," says Robinson. "Embrace multi-cloud approaches for flexibility, simplifying management with automation and centralized control systems. Finally, select cloud vendors with a strong track record and expertise in supporting compliance within heavily regulated environments."

Following are more trends driving cloud service shifts.

1. Innovation

Previously, the demand for cloud data services was largely driven by flexibility, convenience, and cost, but Emma McGrattan, CTO at Actian, a division of HCL Software, has seen a dramatic shift in how cloud data services are leveraged to accelerate innovation.

"AI and ML use cases, specifically a desire to deliver on GenAI initiatives, are causing organizations to rethink their traditional approach to data and use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality, and effective governance," says McGrattan. "[The] successful companies understand the importance of investing in data preparation, governance, and management to prepare for GenAI-ready data."
"They also understand that high-quality data is essential, not only for success but also to mitigate the reputational and financial risks associated with inaccurate AI-driven decisions, including the very real danger of automating actions based on AI hallucinations."

The advantages of embracing these data trends include accelerated insights, enhanced customer experiences, and significant gains in operational efficiency. However, substantial challenges persist. Data integration across diverse systems remains a complex undertaking, and the scarcity of skilled data professionals presents a significant hurdle. Furthermore, keeping pace with the relentless acceleration of technological advancements demands continuous adaptation and learning. Successfully navigating these challenges requires sound data governance.

"My advice is to focus on encouraging data literacy across the organization and to foster a culture of data curiosity," says McGrattan. "I believe the most successful companies will be staffed with teams fluent in the language of data and empowered to ask questions of the data, explore trends, and uncover insights without encountering complexity or fearing repercussions for challenging the status quo. It is this curiosity that will lead to breakthrough insights and innovation because it pushes people to go beyond surface-level metrics."

2. Cloud computing applications

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption. They're also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection.

"Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements," says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. "Customers across industries are very cost-conscious. They want technology that's good, safe, and secure at a much cheaper rate."

Providers are shifting to more container-based or serverless workloads to control cost, because these allow providers to scale up to meet the needs of customer activity while also scaling back when systems are not heavily utilized.

"You want to unload as many apps as possible to vendors whose main role is to service those apps. That hasn't changed. What has changed is how much they're willing to spend on moving forward on their digital transformation objectives," says Milewski.
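The scale-to-zero economics behind that shift are easy to illustrate. The toy comparison below pits an always-on fleet against a pay-per-use model for a workload that is busy only a quarter of the time; the hourly rates and traffic profile are invented for illustration and do not reflect any provider's real pricing.

```python
# Toy comparison: always-on capacity vs. a scale-to-zero model.
# All rates and the traffic profile are illustrative assumptions.
HOURS_PER_MONTH = 730
ALWAYS_ON_RATE = 0.40    # $/hour for fixed capacity (assumed)
ON_DEMAND_RATE = 0.60    # $/hour while actually serving (assumed premium)
BUSY_FRACTION = 0.25     # workload receives traffic 25% of the time

always_on = ALWAYS_ON_RATE * HOURS_PER_MONTH
scale_to_zero = ON_DEMAND_RATE * HOURS_PER_MONTH * BUSY_FRACTION

print(f"Always-on:     ${always_on:,.2f}/month")      # $292.00
print(f"Scale-to-zero: ${scale_to_zero:,.2f}/month")  # $109.50
```

Even with a premium per-hour rate, the pay-per-use model wins whenever utilization is low, which is exactly the pattern Milewski describes.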
3. Artificial intelligence and machine learning

There's a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations now have to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.

"The most successful organizations are maintaining disciplined focus on cloud ROI while exploring AI capabilities. They're treating AI services as part of their broader cloud fabric rather than isolated initiatives, ensuring that investments align with actual business value rather than just chasing the next shiny object," says Campos. "[However,] many organizations are falling into the trap of making strategic cloud provider commitments based on current AI capabilities without fully understanding the long-term implications. We're seeing some get burned by premature all-in strategies, reminiscent of early cloud adoption mistakes. There's also a tendency to underestimate the importance of maintaining optionality in this rapidly evolving landscape."

4. Global collaboration and remote work

More organizations are embracing global collaboration and remote work, and they are facing an unprecedented quantity of data to manage.

"Companies are recognizing that with the exponential growth of data, the status quo for their IT stack can't accommodate their evolving performance, scalability, and budget requirements. Both large enterprises and agile, innovative SMBs are seeking new ways to manage their data, and they understand that cloud services enable the future and accelerate business," says Colby Winegar, CEO at cloud storage company Storj. "The companies on the leading edge are trying to incorporate non-traditional architectures and tools to deliver new services at lower cost without compromising on performance, security, or, ultimately, their customers' experience."

Some companies are struggling to adapt traditional IT infrastructure to future IT requirements when many of those solutions just can't accommodate burgeoning data growth and sustainability, legal, and regulatory requirements. Other companies are facing data lock-in.

5. Business requirements

Most of today's enterprises have adopted hybrid cloud and multi-cloud strategies to avoid vendor lock-in and to optimize their utilization of cloud resources.

"The need for flexibility, cost control, and improved security are some factors driving this movement. Businesses are realizing various workloads could function better on various platforms, which helps to maximize efficiency and save expenses," says Roy Benesh, chief technology officer and co-founder of eSIMple, an eSIM offering.

However, managing cloud costs is a challenge for many companies, and some lack the security they need to minimize the potential for data breaches and non-compliance. There are also lingering issues with integrating new cloud services with current IT infrastructure.

"It is vital to start with a well-defined strategy that involves assessing present requirements and potential expansion. Cost and security management will be aided by the implementation of strong governance and monitoring mechanisms," says Benesh. "Meanwhile, staff members can fully exploit cloud technology if training is invested in, resulting in optimization."

6. Operational improvement

Cloud was initially adopted for cost efficiency, though many enterprises learned the hard way that cloud costs need to be constantly monitored and managed. Today's companies are increasingly using cloud for greater agility and innovation, to be closer to customers, to ensure business continuity, and to reduce overall risk.

"Companies are getting it right when they invest in [a] cloud-native approach -- including design, deployment, and operational processes -- while automating infrastructure management, enhancing cloud security, and using data to drive decisions," says Sanjay Macwan, CIO/CISO at cloud communications company Vonage. "These steps make operations more efficient and secure."
"However, challenges arise when decision-makers underestimate the complexity of managing multiple cloud environments. Why does this matter? Because it often leads to inefficient use of resources, security gaps, and spiraling costs that hurt long-term strategic goals."

To stay ahead, businesses must remain adaptable and resilient.

"My advice is to take a cloud-smart approach. This means balancing innovation with a strong governance framework. Invest in solutions for cloud cost optimization and implement comprehensive security measures from the start," says Macwan. "This is crucial to staying ahead of security and cost management issues to ensure that your cloud strategy remains sustainable and effective while capturing the full innovation agility that the cloud can offer. Train your teams to handle these complex environments, and always prioritize a design that is both secure and resilient."

7. Performance, security, and cost

Many organizations have questioned whether their wholesale migrations to cloud were worth it. Common concerns include security, performance, and cost, which has driven the move to hybrid cloud. Instead of going back to the old way of doing things, they want to take the lessons learned in public cloud and apply them on premises.

"Performance, security, and cost concerns are driving change. As cloud has become more popular, it's also become more expensive. [Workload security] is now a bigger concern than ever, especially with modern speculative execution attacks at the CPU level. Lastly, some applications need to be physically close for latency and/or bandwidth reasons," says Kenny Van Alstyne, CTO at private cloud infrastructure company SoftIron. "[M]igrating back to the legacy way of doing on-premises infrastructure will lead to the desire to move back to cloud again. To succeed and be accepted, on-premises must be managed as if it were your own cloud."

One reason private cloud is gaining popularity is that organizations can gain the efficiencies of cloud while maintaining control over cost, performance, and security on-prem, assuming they have the prerequisite knowledge and experience to succeed, or the help necessary to avoid common pitfalls.

8. Specific workload requirements

Organizations deploying AI at scale are discovering that while traditional cloud infrastructure works well for general-purpose compute workloads, it presents challenges for AI operations, such as the unpredictable availability of GPUs, prohibitive at-scale costs, the operational complexity of energy-dense workloads, and performance bottlenecks in storage and networking. Complicating matters further, edge inferencing, initially touted as a darling AI deployment model, has been deprioritized by global telecommunications carriers due to 5G's underwhelming commercial returns.

"Large language models demand high-performance storage systems capable of sustaining consistent, high-throughput data flows to keep pace with GPU processing speeds. While traditional cloud storage [and] enterprise SAN deployments work well for many use cases, AI training often requires vast sequential bandwidth to manage reduction operations effectively. Storage limitations can bottleneck training times and lead to costly delays," says Brennen Smith, head of infrastructure at cloud computing platform provider RunPod.
"While building these specialized systems in-house reduces overall [operating expenses], this requires deep internal architectural knowledge and is capital-intensive, further complicated by Nvidia's release cadence, which is rendering GPUs outdated before their full depreciation cycle," Smith adds.

These dynamics are leading to a different type of hybrid strategy, one that uses resources for what they do best. This includes combining public cloud, AI/ML-specific cloud offerings, and on-premises infrastructure.

9. Healthcare agility

Healthcare organizations made the same mistake many enterprises did: They started by lifting and shifting infrastructure to the cloud, essentially recreating their on-premises environment in a cloud setting. While this provided some benefits, particularly around disaster recovery, it failed to unlock the cloud's full potential.

"Today, we're witnessing a more mature approach. Organizations are increasingly understanding that true cloud value comes from embracing cloud-native architectures and principles. This means building new applications as cloud-first and modernizing existing systems to leverage native cloud capabilities rather than just hosting them there," says Nandy Vaisman, CISO and VP of operations at health data integration platform Vim.

Given the value of EHRs, healthcare organizations cannot afford to take a lift-and-shift approach to cybersecurity. When they do, it creates potential vulnerabilities.

Vaisman recommends the following:

• Moving beyond simple lift-and-shift to truly embrace cloud-native architectures
• Investing in cloud security expertise and training
• Adapting security practices specifically for cloud environments
• Focusing on privacy-by-design in cloud implementations
• Leveraging cloud-native tools for compliance and security monitoring
  • WWW.INFORMATIONWEEK.COM
    The Cloud You Want Versus the Cloud You Need
How do operational needs compare with organizations' ambitions when it comes to using the cloud? Do plans for the cloud get ahead of what companies actually need?
  • WWW.INFORMATIONWEEK.COM
    What Developers Should Know About Embedded AI
Where would the world be without APIs? It would likely be a lot less connected, with software releases flowing like molasses. Developers use APIs to add capabilities to their apps quickly, though the grab-and-go approach is unwise when it comes to AI.

"While many developers are proficient in embedding AI into applications, the challenge lies in fully understanding the nuances of AI development, which is vastly different from traditional software development," says Chris Brown, president of professional services company Intelygenz. "AI is not just another technical component. It's a transformative tool for solving complex business challenges."

Jason Wingate, CEO of Emerald Ocean, a technology and business solutions company focused on product innovation, brand development, and strategic distribution, also believes that while APIs make embedding AI seem as simple as calling a function, many developers do not understand how models work and the risks they carry.

"Several major companies in 2023 and early 2024 had their chatbots compromised through prompt injection. Users sent prompts like 'Ignore previous instructions' or 'Forget you are a customer service bot,' causing the AI to reveal sensitive information," says Wingate. "This happened because developers didn't implement proper guardrails against prompt injection attacks. While much of this has been addressed, it showcases how unprepared developers were in using AI via APIs."
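Guardrails of the kind Wingate describes can start small. The sketch below screens user input for common injection phrases and keeps the system prompt pinned server-side; the patterns, function names, and message format are all assumptions for illustration, and pattern matching alone is not a complete defense.

```python
# Illustrative guardrail: reject obvious injection phrases and never
# let user text reach the system prompt. A minimal sketch, not a
# complete defense against prompt injection.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"forget (that )?you are",
    r"reveal (your )?(system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_messages(user_input: str) -> list[dict]:
    if looks_like_injection(user_input):
        raise ValueError("Rejected: possible prompt injection")
    # The system role is fixed server-side; user text is never
    # concatenated into the system prompt itself.
    return [
        {"role": "system", "content": "You are a customer service assistant."},
        {"role": "user", "content": user_input},
    ]
```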
Timothy E. Bates, professor of practice at the University of Michigan and former Lenovo CTO, also warns that most developers don't fully grasp the complexities of AI when they embed it using APIs.

"They treat it as a plug-and-play tool without understanding the intricacies of the underlying models, such as data bias, ethical implications, and dynamic updates by AI providers. I've seen this firsthand, especially when advising organizations where developers inadvertently introduced vulnerabilities or misaligned features by misusing AI," says Bates.

An organization can also miss opportunities due to a lack of knowledge, which results in poor ROI.

"AI should be tested in sandbox environments before production. [You also need] governance. Establish oversight mechanisms to monitor AI behavior and outcomes," says Bates. "AI usage should be [transparent] to end users, maintaining trust and avoiding backlash. Combining developers, data scientists, and business leaders into cross-functional teams ensures AI aligns with strategic goals."

Ben Clayton, CEO of forensic audio and video analysis company Media Medic, has also seen evidence of developer struggles firsthand.

"Developers need a solid grasp of the basics of AI -- things like data, algorithms, machine learning models, and how they all tie together. If you don't understand the underlying principles, you could end up using AI tools in ways that might not be optimal for the problem you're solving," says Clayton. "For example, if you're relying on a model without understanding how it was trained, you might be surprised when it doesn't perform as expected in real-world scenarios."

Technology Is Only Part of the Picture

A common challenge is viewing AI as a technological solution rather than a strategic enabler.

"Organizations often falter by embedding AI into their operations without clearly defining the business problem it is solving. This can result in misaligned goals, poor adoption rates, and systems that fail to deliver ROI," says Intelygenz's Brown. "AI implementation must start with a clear business case or IT improvement objective, whether it's streamlining operations, optimizing network performance, or enhancing customer experience. Without this foundation, AI becomes a costly experiment instead of a transformative solution."

Gabriel Zessin, software architect at API solution provider Sensedia, agrees.

"In my opinion, although most developers are proficient in API integrations, not all of them understand AI well enough to use it effectively, especially when it comes to embedding AI in their existing applications. It's important for developers to set the expectations of what can be achieved with AI for each company's use case alongside the business teams, like product owners and other stakeholders," says Zessin.

Data

AI feeds on data. If the data quality is bad, AI becomes unreliable.

"[S]ourcing the correct data is often challenging," says Josep Prat, engineering director of streaming services at AI and data platform company Aiven. "External influences such as data sovereignty and privacy controls affect data harvesting, and many databases are not optimized properly. Understanding how to harvest and optimize data is key to creating effective AI. Additionally, developers need to understand how AI models produce their outputs to use them effectively."

Probabilistic Versus Deterministic

Traditionally, software developers have been taught that a given input should produce a certain output. AI, however, tends to be probabilistic: its outputs express how likely something is. A deterministic system, by contrast, guarantees the same outcome for the same input.

"Instead of a guaranteed answer, [a probabilistic system] offers confidence levels at about 95%. And keep in mind, what works in one scenario may not work in another. These fundamentals are key to setting realistic expectations and developing AI effectively," says Sri (Srikanth) Hosakote, chief development officer and co-founder at campus network-as-a-service (NaaS) provider Nile. "I find that many organizations successfully adopt AI by working directly with customers to identify pain points and then developing solutions that address those issues."

Have a Feedback Loop and Test

APIs simplify AI integration, but without understanding the role of feedback loops, developers risk deploying models without mechanisms to catch errors or learn from them. A feedback loop ensures that when the AI output is wrong or inconsistent, it's flagged, documented, and shared across teams.

"[A feedback loop] prevents repeated use of flawed models, aligns AI performance with user needs, and creates a virtuous cycle of improvement," says Robin Patra, head of data at design-build construction company ARCO Design/Build. "Without such systems, errors may persist unchecked, undermining trust and user experience."

It's also wise to involve stakeholders who can provide feedback about the AI outputs, such as whether a prediction is accurate, a recommendation relevant, or a decision fair.

"Feedback isn't just about a single mistake. It's about identifying patterns of failure and sharing those insights with all relevant teams. This minimizes repeat errors and informs retraining efforts," says Patra. "Developers should understand techniques like active learning, where the model is retrained using flagged errors or edge cases, improving its accuracy and resilience over time."
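A minimal version of the loop Patra describes can be as simple as logging every reviewed output with enough context to audit and retrain on later. In this sketch, the field names and JSONL storage format are assumptions chosen for illustration.

```python
# Minimal feedback loop: record every reviewed output with enough
# context to audit and retrain later. Field names and the JSONL file
# format are illustrative assumptions.
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "ai_feedback.jsonl"

def record_feedback(prompt: str, model_output: str,
                    verdict: str, notes: str = "") -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model_output": model_output,
        "verdict": verdict,  # e.g., "correct", "wrong", "biased"
        "notes": notes,
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Entries flagged "wrong" can later be filtered into a retraining or
# evaluation set -- the active-learning step described above.
```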
It's also important to test early and often.

"Good testing is critical to successfully embedding AI. AI should be thoroughly tested and validated before being deployed, and once it is live, regular monitoring and checks should continue. It should never just be a case of setting an AI model up and then leaving it to run," says John Jackson, founder at click fraud protection platform Hitprobe.

Developers should also understand and use performance metrics.

"Developers often deploy AI without fully understanding how to evaluate it. Metrics like accuracy, precision, recall, and F1 score are crucial for interpreting how well an AI model performs specific tasks," says Anbang Xu, founder at AI ad generator JoggAI. "[W]e've seen companies struggle to optimize video ad placements because they don't understand how models weigh audience demographics versus engagement data."
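Computing the metrics Xu names takes only a few lines. Here is a minimal sketch using scikit-learn, with made-up placeholder labels standing in for real evaluation data.

```python
# Evaluate a classifier on held-out labels; y_true and y_pred are
# placeholder data for illustration only.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

print("accuracy: ", accuracy_score(y_true, y_pred))   # 0.75
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 0.75
print("f1:       ", f1_score(y_true, y_pred))         # 0.75
```

Which metric matters depends on the task: precision when false positives are costly, recall when misses are costly, and F1 when both need balancing.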
Another challenge is misunderstanding the capabilities of whatever the API is calling.

"Misaligned expectations around AI often stem from a lack of understanding of what models can realistically achieve," says Xu. "This misalignment leads to wasted time and suboptimal results."

Security Should Always Be Top of Mind

"I think a lot of developers and business leaders making decisions to implement AI in their applications simply don't realize that AI isn't always that secure. Lots of AI tools don't make it very clear how data is used," says Edward Tian, CEO of AI-generated content detector GPTZero. "They aren't always upfront about where they source their data or how they deal with the data that is inputted. So, if an organization inputs customer data into an embedded AI tool in their application, whether they are the ones doing that or their customers are, they could potentially run into legal trouble if that data is not handled appropriately."

Developers should spend time exploring the security defenses of the AI they choose.

"They need to understand what threats were contemplated, what security mechanisms are in place, what model was used to train the AI, and what capabilities the AI has through integrations and other connections," says Jeff Williams, co-founder and CTO at Contrast Security. "Developers might start with the OWASP Top Ten for LLM Applications, which is specifically designed to educate developers about the risks of incorporating AI into their applications."

For example, prompt injection enables an attacker to rewrite the rules. It's difficult to prevent, so developers should be careful about using any input from an untrusted source in a prompt. Sensitive information disclosure and over-trusting AI are also common challenges.

"AIs aren't very good at partitioning data or keeping track of which data belongs to which user. So, attackers can try to trick the AI into revealing sensitive data like private information, internal implementation details, or other intellectual property," says Williams. "[D]evelopers may give the results from the AI more trust than is warranted. This is very easy to do because AIs are very good at sounding authoritative, even when they are just making things up. There are many more serious issues for developers to take into account when using an AI in their apps."

How to Develop AI Smarts

There are endless resources available to developers who want to learn more about AI, including online courses and tutorials with practical exercises for hands-on experience.

"Carve out time weekly to explore areas like natural language processing, computer vision, and recommendation systems. Online tutorials and communities are great resources for staying up to date," says Nile's Hosakote. "At the same time, experimenting with AI tools for productivity, code analysis, or test automation can level up your work."

Developers can also improve their working knowledge of AI by participating in hackathons or internally focused AI projects, pair programming with data scientists, and staying up to date through online courses, conferences, and industry meetups.

"AI isn't a magic wand, so define specific problems it should solve before integration. [Also], respect data ethics: Be cautious about where training data originates to avoid unintended consequences," says the University of Michigan's Bates. "The success of AI depends on the teams behind it. Training developers on AI fundamentals will pay dividends."

Some of those fundamentals include bias and fairness, explainability, lifecycle management, and security in AI integration.

"Developers need to understand how biases in training data affect outputs, as seen in systems that inadvertently reinforce societal inequities. AI must not remain a black box. Developers should know how to articulate AI decision-making processes to stakeholders," says Bates. "Continuous monitoring and retraining are essential as business contexts evolve."

Developers can learn about AI tools through small experiments, like building simple chatbots to understand how changes in prompts affect responses, before taking on bigger projects.

"[Developers] need to grasp model behavior, limitations, data privacy, bias issues, and proper prompt engineering," says Emerald Ocean's Wingate. "Start small and build up gradually. For example, when introducing AI for customer service, companies often begin by having AI suggest responses that human agents review, rather than letting AI respond directly to customers. Only after proving this works [should] they expand AI's role."
  • WWW.INFORMATIONWEEK.COM
    What Do We Know About the New Ransomware Gang Termite?
Termite is quickly making a name for itself in the ransomware space. The threat actor group claimed responsibility for a November cyberattack on Blue Yonder, a supply chain management solutions company, according to CyberScoop. Shortly afterward, the group was linked to zero-day attacks on several Cleo file transfer products.

How much damage is this group doing, and what do we know about Termite's tactics and motives?

New Gang, Old Ransomware

Termite is rapidly burrowing into the ransomware scene. While its name is new, the group is using a modified version of an older ransomware strain: Babuk. This strain has been on law enforcement's radar for quite some time. In 2023, the US Department of Justice indicted a Russian national for using various ransomware variants, including Babuk, to target victims in multiple sectors.

Babuk first arrived on the scene in December 2020, and it was used in more than 65 attacks. Actors using this strain demanded more than $49 million in ransoms, netting up to $13 million in payments, according to the US Justice Department.

While Babuk has reemerged, different actors could very well be behind its use in Termite's recent exploits.

"Babuk ransomware was leaked back in 2021. The builder is basically just the source code, so that anyone can compile the encrypting tool and then run their own ransomware campaign," says Aaron Walton, threat intelligence analyst at Expel, a managed detection and response provider.

How is Termite putting the ransomware to work?

"Researchers have found that the group's ransomware uses a double extortion method, which is very common these days," Mark Manglicmot, senior vice president of security services at cybersecurity company Arctic Wolf, tells InformationWeek. "They extort the victim for a decryptor to prevent the release of stolen data publicly."

A new ransomware group is not automatically noteworthy, but Termite's aggression and large-scale attacks early in its formation make it a group to watch.

"Usually, these groups start with smaller instances and then they kind of build up to something bigger, but this new group didn't waste any time," says Manglicmot.

Termite's Victims

Termite appears to be a financially motivated threat actor. "They're attacking victims in different countries across different verticals," says Jon Miller, CEO and co-founder of anti-ransomware platform Halcyon. "The fact that they're executing without a theme makes me feel like they're opportunist-style hackers."

Termite has hit 10 victims thus far, in sectors including automotive manufacturing, oil and gas, and government, according to Infosecurity Magazine.

The group does have victims listed on its leak site, but it is possible there are more. "Maybe we could guess that there might be another handful that have paid ransom or have negotiated to stay off of [the] data leak site," says Walton.

Given the group's aggression and opportunistic approach, it could conceivably execute disruptive attacks on other large companies.

"Termite seems to be bold enough to impact a large number of organizations," says Walton. "That is normally a risky tactic that really brings the heat on you much faster than just hitting one organization and avoiding anything that could severely damage supply lines."

The attack on Blue Yonder caused significant disruption to many organizations.
Termite claims to have 16,000 email lists and more than 200,000 insurance documents among a total of 680GB of stolen data, according to Infosecurity Magazine.

The ransomware attack caused outages for Blue Yonder customers, including Starbucks and UK supermarket companies Morrisons and Sainsbury's, according to Bleeping Computer.

Termite's exploitation of a vulnerability in several Cleo products is impacting victims in multiple sectors, including consumer products, food, shipping, and trucking, according to Huntress Labs.

Ongoing Ransomware Risks

Whether Termite is here to stay or not, ransomware continues to be a risk to enterprises. "With certain areas of the globe being destabilized, we could see even more of these types of behaviors pop up," says Manglicmot.

As enterprise leaders assess the risk their organizations face, Miller advocates for learning about the common tactics that ransomware groups use to target victims.

"It's really important for people to go out and educate themselves on what ransomware groups are targeting their vertical or like-sized companies," he says. "The majority of these groups use the exact same tactics over and over again in all their different victims."
  • WWW.INFORMATIONWEEK.COM
    Uniting IT, Finance, and Sustainability Through the Integrated Profit and Loss
Rick Pastore, Research Principal, SustainableIT.org
December 13, 2024
4 Min Read
Pixabay

Economies across the world are making slow recoveries from the COVID-19 setbacks but are at risk from geopolitical conflict and tensions, trade protectionism, and high debt levels. At the same time, populist politics, nationalism, and sovereigntist movements are gaining traction in countries and regions. These factors make it more challenging for companies to pursue environmental, social, and governance (ESG) sustainability programs and invest in the necessary transformation. Even internally, C-suites may not see eye to eye, with sustainability, compliance, and HR officers often at odds with finance and procurement, and IT typically on the sidelines or caught in the middle. If only there were a way to show everyone which sustainability investments made sound business sense.

Turns out, there is. First deployed in the early 2010s, the integrated profit and loss (IP&L) statement can bring transparency and clarity to the business impact of sustainability investment. The IP&L is a holistic approach to financial reporting that accounts not only for traditional financial metrics but also for sustainability factors and impacts. The standard P&L focuses on revenues, expenses, and profit; the IP&L adds the company's impact on broader aspects such as natural resources, carbon footprint, social contributions, and governance practices.

By quantifying the economic, environmental, and social impacts of business activities, companies can make more informed strategic decisions that integrate profitability with sustainability goals. For instance, an IP&L might reflect the costs associated with carbon emissions or the benefits of social programs, together with the financial reliance on a healthy environment that supports agricultural productivity. This allows business leaders to see how these factors influence the company's overall financial health. It also makes clearer the investments and initiatives that deliver both financial returns and sustainability gains.

Food multinational Danone released its first IP&L in 2010. Other companies with public IP&L reports include global health technology company Philips and paint and coatings manufacturer AkzoNobel. Brazilian cosmetics company Natura & Co. adopted the IP&L in 2021 to measure and manage its sustainability impacts. It revealed a net positive societal value, primarily driven by social and human capital investment: for every $1 of sales, Natura generated $1.50 in net societal value.

Despite these benefits, the IP&L is not widely used, largely due to a deficiency of standardized data. It's this deficiency, in part, that offers the IT organization an opportunity to join finance and sustainability at the strategic table. An IP&L relies on sophisticated data integration and analytics, which places the IT office at the heart of its implementation. IT can develop or adapt systems to collect, process, and analyze data from various sources -- such as energy consumption and emissions, supply chain, employee welfare, and governance compliance. This may involve integrating IoT sensors, harnessing big data and activity-based carbon accounting systems and databases, and applying AI algorithms to monitor sustainability metrics in real time. The sketch below illustrates the core idea of the roll-up itself.
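To make the mechanics concrete, here is a minimal sketch of how an IP&L roll-up might be computed from a standard P&L plus monetized sustainability impacts. The impact categories, monetization factors, and all figures are hypothetical illustrations, not the actual methodology of Danone, Philips, or Natura.

```python
# Minimal IP&L sketch: extend a standard P&L with monetized
# environmental and social impacts. All figures are illustrative.

FINANCIAL_PL = {"revenue": 1_000_000, "expenses": -820_000}

# Hypothetical monetized externalities (negative = cost to society,
# positive = societal benefit), e.g., derived from activity-based
# carbon accounting and HR/welfare data.
SUSTAINABILITY_IMPACTS = {
    "carbon_emissions": -45_000,   # tCO2e x an assumed $/tCO2e factor
    "water_consumption": -8_000,
    "employee_training": 30_000,
    "community_programs": 12_000,
}

def integrated_profit_and_loss(financial, impacts):
    """Return financial profit, net societal impact, and the integrated total."""
    financial_profit = sum(financial.values())
    net_societal_impact = sum(impacts.values())
    return {
        "financial_profit": financial_profit,
        "net_societal_impact": net_societal_impact,
        "integrated_result": financial_profit + net_societal_impact,
    }

if __name__ == "__main__":
    results = integrated_profit_and_loss(FINANCIAL_PL, SUSTAINABILITY_IMPACTS)
    for line, value in results.items():
        print(f"{line:>22}: ${value:,}")
```

The arithmetic is trivial; the hard part is sourcing and standardizing the impact data, which is exactly where IT earns its seat at the table.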
IT would also contribute to ensuring data validity and auditability.

With a more complete and reliable sustainability data set, the finance office would be able to make data-driven decisions on ESG-related capital allocation, budget forecasting, and performance measurement. Finance and investor relations could also leverage the IP&L to communicate financial and non-financial value creation to investors and other stakeholders, contributing to transparency and trust and reducing the risk of greenwashing accusations.

For sustainability officers, the IP&L may be the most potent professional tool at their disposal. With it, they can quantify and track the impact of their initiatives on not only sustainability metrics but also financial performance. They can identify and promote the programs that contribute most to the company's overall goals and justify sustainability transformation investments based on clear financial and non-financial impacts. It is also a great mechanism to communicate and validate the sustainability team's impact to other departments and the executive suite.

Indeed, the IP&L may be the best baton for the sustainability relay race, bringing CFOs and CIOs out on the track to join their CSO colleagues. Together, this trio can effectively assure stakeholders that sustainability investment is a fully vetted, carefully calculated component of business strategy.

But their unified impact goes well beyond investment justification. New research is underway, conducted by the Sustainability Value Creation partnership, that documents opportunities, best practices, and impacts of collaboration between finance, IT, and the sustainability office. The five organizations comprising the partnership bring expertise in finance, IT, and sustainability to the research initiative: Accounting for Sustainability, SustainableIT.org, the ERM Sustainability Institute, software company Salesforce, and global insights and advisory firm GlobeScan. The partnership's goal is to illuminate how companies can best create long-term value by integrating sustainability across their corporate functions. Leaders in IT, finance, and sustainability are invited to take part in a 10-minute online survey. It is open until December 23, with results expected in February 2025.

About the Author

Rick Pastore, Research Principal, SustainableIT.org

As Research Principal at SustainableIT.org, Rick Pastore develops and delivers research-based insights, tools, case histories, and other content tailored for IT leaders driving ESG strategies for their functions, enterprises, and industries. He has over 25 years of experience working with CIOs and their teams to apply thought leadership and best practices to maximize business value from information technology.
  • WWW.INFORMATIONWEEK.COM
    What to Prioritize in Health IT in 2025
Heading into 2025, healthcare organizations still face workforce shortages on both the clinical and IT sides. Growth in artificial intelligence and automation will enable tech leaders to address these shortages at health systems. However, as healthcare IT leaders continue experimenting with generative AI (GenAI), it may not be as much of a top priority as you might think, according to IDC's Worldwide C-Suite Tech Survey, which was conducted in September and October 2024.

"Interestingly, given all the focus on generative AI, only 25% of healthcare respondents reported implementing AI/GenAI as their organization's top priority for the next 12 months," Lynne A. Dunbrack, group vice president for the public sector at IDC, says in an email interview.

The IDC report lists the top three IT priorities in healthcare as investing in security technologies (36.5%), improving customer-focused digital experiences (36.1%), and advancing digital skills across the organization (33%). Here, InformationWeek offers insights from several industry experts on the top priorities for health IT leaders in 2025.

Addressing Data Storage Needs

Modernizing their infrastructure in the cloud to manage increasing data volumes should be a priority for health IT leaders, according to the IDC FutureScape: Worldwide Healthcare Industry 2025 Predictions report. "Cloud solutions and platforms offer more than just expanded technology capacity, scalability, and access to managed services," the report stated. "They also act as a catalyst for data exchange and interoperability, enabling seamless integration of third-party applications and other platforms, creating a more open, dynamic, and innovative ecosystem."

Scaling Precision Medicine to a Broader Population

In 2025, health IT leaders should expand precision medicine to a wider population, says Brigham Hyde, cofounder and CEO of Atropos Health, which offers a cloud-based analytics platform for converting healthcare data into personalized evidence. Precision medicine uses AI and digital tools to make better-targeted treatments possible. The technology could support drug development and personalized therapies for patients. To scale precision medicine, the healthcare industry must keep data specific and personalized, according to Hyde.

"Precision medicine traditionally focuses on small, highly specific patient cohorts with unique genetic, environmental, or lifestyle factors," Hyde says via email. "Scaling it involves extending this level of personalized care to larger and more diverse populations by leveraging technologies like AI and real-world data."

AI delivers the ability to drill down on insights for specific conditions at a granular level, Hyde explains.
Once these models produce tailored recommendations, they can scale to address broader populations by combining multiple focused models or synthesizing data from different specialties, he says.

Implementing Generative AI

Healthcare organizations will move on from simply experimenting with GenAI to carrying out enterprise-wide AI strategies, according to the IDC FutureScape report. Although healthcare GenAI investments are expected to triple by 2026, 75% of these GenAI initiatives will fail to achieve their expected benefits by 2027 due to issues around trustworthiness of data, disconnected workflows, and end-user resistance, IDC reported.

In the meantime, in 2025, health IT leaders will need to prioritize quality assurance and physician trust with GenAI and large language models (LLMs), according to Hyde. "We will need to scrutinize applications for their clinical accuracy, transparency, and alignment with ethical standards," Hyde says.

In the coming year, health IT leaders will prioritize evaluating the accuracy they get from AI algorithms, according to Michael Gao, CEO and cofounder of SmarterDx, which builds clinical AI applications that allow hospitals to achieve revenue integrity, such as checking for billing coding errors and revenue leakage.

"As we see more widespread adoption of AI and especially generative AI in healthcare, health IT leaders are going to be prioritizing not just how to supervise an algorithm to understand what level of accuracy you're getting, but also determining how to even pick what level of accuracy you want in the first place," Gao says in an email interview. "For example, you want extremely high accuracy algorithms for clinical care. There are a lot of learnings around that before we can really use algorithms effectively in healthcare."

Adopting Ambient AI

Hyde advises that health tech leaders prioritize ambient AI, which operates in the background using advanced computing and AI to detect and generate insights without a user's involvement. The technology can automate tasks as well as personalize care delivery, he says. "By collecting and analyzing real-world data in the background, ambient AI enables more precise and actionable insights for disease management, treatment optimization, and personalized medicine initiatives," Hyde explains. Ambient AI can also reduce clinician burnout and improve physician retention through ambient dictation and transcription of notes from patient visits, according to Hyde.

Addressing Health Inequities With AI

To address health inequities and avoid the biases in AI models, health IT leaders should prioritize vetting AI use cases, says Ann Bilyew, executive vice president for health and president of the Healthcare Solutions Group at WebMD/Internet Brands. Keeping AI equitable means paying attention to the social determinants of health, which are factors that influence health such as income, job, education level, and ZIP code.

"Although addressing health inequities is a worthwhile and promising goal for AI, it's important to note that AI is only as good as the material it's trained on, and that material has inherent biases," Bilyew tells us via email. "AI can exacerbate those biases, so it is critical that healthcare organizations thoroughly vet these use cases to ensure they meet the intended goal."

AI models should apply to patients across the board to remain trustworthy and equitable, suggests Dan Stevens, healthcare and life sciences solutions architect at Lenovo. "To gain trust from care providers and patients to accept AI-generated healthcare recommendations, it will be crucial to ensure the data used for training is representative of the general population, maintains patient data confidentiality, and avoids bias," Stevens says via email. A representativeness check of the kind sketched below is one place such vetting can start.
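To make that vetting concrete, here is a minimal sketch of a training-data representativeness check: it compares the demographic mix of a model's training cohort against general-population proportions and flags under-represented groups. The group names, reference proportions, and the 20% tolerance threshold are hypothetical illustrations, not a published clinical standard.

```python
# Minimal representativeness check: flag demographic groups that are
# materially under-represented in a training cohort relative to the
# general population. All names and figures are illustrative.

from collections import Counter

# Hypothetical reference proportions for the served population.
POPULATION_SHARE = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

TOLERANCE = 0.20  # flag groups with < 80% of their expected share


def underrepresented_groups(training_labels, population_share, tolerance=TOLERANCE):
    """Return groups whose share of the training data falls short of
    their population share by more than the tolerance."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    flagged = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < expected * (1 - tolerance):
            flagged[group] = {"expected": expected, "observed": round(observed, 3)}
    return flagged


if __name__ == "__main__":
    # Hypothetical cohort of 1,000 training records.
    cohort = ["group_a"] * 700 + ["group_b"] * 280 + ["group_c"] * 20
    print(underrepresented_groups(cohort, POPULATION_SHARE))
    # {'group_c': {'expected': 0.15, 'observed': 0.02}}
```

A real vetting program would go much further -- intersectional slices, outcome parity, confidentiality controls -- but even this simple comparison surfaces the kind of skew Stevens warns about.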
Investing in Cybersecurity Tools

In IDC's Worldwide C-Suite Tech Survey, 46.9% of healthcare respondents cited security concerns as the top challenge their organizations faced when implementing GenAI. "Security and cybersecurity tools are a business imperative to protect vulnerable healthcare infrastructure against increasing volumes of insidious ransomware attacks that put patient safety at risk," Dunbrack says.

Meanwhile, by 2027, increasing cybersecurity risks will drive healthcare organizations to use AI-based threat intelligence solutions to enable continuity of care and protect patients, according to the IDC FutureScape report. "To safeguard patient safety and ensure uninterrupted healthcare services, it is imperative to make investments in cybersecurity a top priority," the report stated.

Mitesh Rao, founder and CEO of OMNY Health, notes the security steps healthcare organizations should take in 2025 following the massive data breaches that occurred in healthcare in 2024, particularly with Change Healthcare. "More companies need to implement checks and balances on their own operations to prevent leaks and cyberattacks," Rao says in an email interview. "Beyond that, data providers need to vet their data sharing policies to make sure that patients' information doesn't end up in the wrong hands."

As AI models are used more extensively, and health data gets spread across diagnostic and financial information as well as multiple types of platforms -- including local devices, mobile, servers, and cloud services -- IT leaders will need to prevent the risk of security breaches, Lenovo's Stevens suggests. "If not managed appropriately, AI workflows risk introducing unanticipated security breaches due to a lack of end-to-end protection keeping data secure across all resources, from an individual's PC to the cloud," Stevens says.

Tackling Regulatory Compliance

With the focus on GenAI, healthcare organizations must ensure they understand regulations around compliance in 2025, IDC's FutureScape report noted. For 2025, Atropos Health's Hyde advises that health IT leaders build frameworks that establish trust while adhering to regulatory standards at the same time. These frameworks will depend on the size of the healthcare organization, he says.

"Larger health systems and technology companies with robust resources may prioritize building their own frameworks tailored to their specific needs, ensuring alignment with their internal workflows, patient populations, and operational goals," Hyde says. "However, the majority are expected to rely on or closely align with emerging regulatory frameworks and standards."

Prioritize Cyber Resilience

In 2025, health IT leaders should keep cyber resilience in mind to stay prepared for cybersecurity incidents before they occur, advises Ty Greenhalgh, industry principal of healthcare at cybersecurity firm Claroty.
Greenhalgh is also an ambassador for the US Department of Health and Human Services' 405(d) Task Force and a member of the HHS Healthcare Sector Council Cyber Working Group. Health IT leaders should rely on the NIST definition of resilience to anticipate, survive, and recover from cybersecurity threats, according to Greenhalgh.

"By leveraging the NIST definition of resilience, organizations can anticipate, withstand, adapt, and recover from threats," Greenhalgh tells us via email. "This approach emphasizes early detection and mitigation to reduce downtime and financial impact, particularly in the face of persistent threats like ransomware."
  • WWW.INFORMATIONWEEK.COM
    Are You Ready for the Attack of the Copper Thieves?
Copper thieves cost US businesses $1 billion a year and threaten critical infrastructure. What can you do to avoid putting resiliency at risk?
  • WWW.INFORMATIONWEEK.COM
How to Channel a World's Fair Culture to Engage IT Talent
Chris O'Neill, Chief Executive Officer, GrowthLoop
December 11, 2024
4 Min Read
Federico Caputo via Alamy Stock

I've led organizations at every stage of growth, encountering unique challenges and opportunities at each step. The backbone of any successful venture has always been a cohesive team pursuing a mission that matters, and a perpetual dissatisfaction with the status quo.

As I connect with tech business peers and IT leaders, they frequently remark on how difficult it is to foster a healthy and resilient team culture. Burnout is at an all-time high, industry competition demands constant innovation, and it can be hard to build team connections that fuel fulfillment and a shared purpose. I'm happy to share my lessons learned -- which have culminated in a World's Fair mentality at my current company, GrowthLoop -- to help them attract and nurture the best talent.

The Challenges of Hiring Tech and IT Talent

The job market for top tech talent is extraordinarily competitive. Hiring teams cannot give every applicant the attention they deserve, and hiring managers face tough tradeoffs between selecting seasoned professionals or highly skilled newcomers.

When we hire, we focus on finding candidates who are eager to work on the cutting edge of technology. We look for team members who believe in our mission and want to push boundaries. In return, we invest in ongoing learning opportunities instead of perks like cold brew on tap and catered lunches. It's easy to get lost in the shiny offerings at some companies, but these freebies rarely lead to lasting happiness and fulfillment. That's why it's crucial to ensure every job description and interaction with a new candidate promotes the long-term professional development and career growth opportunities you provide.

Attracting a Diverse Talent Pool

Selecting the ideal candidates requires focused attention at each step in the recruitment and hiring processes, including your job location, listing language, and interview strategy.

- Avoid being confined to only in-person office work. Remote and hybrid setups open the door for a wide range of individuals who deserve consideration regardless of their location.
- Use inclusive language in job descriptions. Our recruiting team has gone through bias training to put this into practice, which has helped increase our candidate pool diversity by over 30%.
- Conduct a detailed technical skills audit and soft skills evaluation with cross-functional team members during the interview process.

Fostering a World's Fair Culture

Hiring the right talent is one thing. You then need to build a culture that allows them to thrive. We want every member of our team to:

- Know: Be educated on what's happening and how they can shape the company.
- Feel: Be invigorated by celebratory actions and constant collaboration.
- Do: Be empowered to help achieve our goals.

We accomplish this by championing a World's Fair mentality, a concept inspired by Chicago -- the hometown of our co-founder (and perhaps Chicago's biggest fan), Chris Sell. If you're unfamiliar, Chicago was home to the 1893 World's Fair, which showcased 50,000 architectural exhibits from around the world. It celebrated groundbreaking ideas and iconic designs, drawing international acclaim. We've channeled the fair's principles to guide our culture of collaboration and innovation.
There are several ways we do this:

- AMAs: Every member of our senior leadership participates in Ask Me Anything (AMA) sessions to allow employees across the company to ask questions directly and learn more about each leader's passions, skills, and vision for the future.
- Cross-team sharing: We dedicate time weekly for every team to celebrate their wins, discuss challenges, and brainstorm how they can move forward with everyone behind them.
- Monthly town halls: We host a monthly town hall meeting where anyone can ask tough or spicy questions that move us forward.
- Peer recognition: Team members express gratitude and give their colleagues shout-outs. These are real, personal acknowledgments of hard work and collaboration. They drive our success and are something I look forward to every week.
- Quarterly hackathons: Every quarter, we take a week to work in cohorts and focus on new and innovative ideas. These have been so valuable to the company -- in fact, many of our best product features have come out of these hackathons.

Each of these activities helps people feel heard and empowered to do the best work of their lives.

The Rewards of a Diverse and Collaborative Culture

A successful business relies on diverse viewpoints. Diversity, and the broad perspectives that come with it, will reduce groupthink and fuel creativity that ultimately drives better business outcomes. When people are motivated and feel safe to lend different perspectives and problem-solving approaches, they find solutions faster and unlock innovation. Encourage collaboration and idea-sharing at every level to nurture this culture. Executives should work alongside the team, guide them through challenges, and take their feedback to heart.

And last but not least, daily effort and consistency are vital for helping this culture flourish. By doing so, you can continue to attract the best talent who will help you grow and stay resilient no matter what challenges you face.

About the Author

Chris O'Neill, Chief Executive Officer, GrowthLoop

Chris O'Neill is the Chief Executive Officer of GrowthLoop and a board director at Gap Inc. (NYSE: GPS). Chris's career spans 25+ years, featuring roles as Managing Director of Google Canada, CEO of Evernote, co-founder of Glean, Chief Growth Officer of Xero, and board director at Tim Hortons (NYSE: QSR). Chris earned a B.A. in Economics (with distinction) from Huron University and an MBA from the Tuck School of Business at Dartmouth College. Born and raised in Canada, Chris currently resides in Northern California with his wife, two children, and their dog Teddy.
  • WWW.INFORMATIONWEEK.COM
    How to Find and Train Internal AI Talent
John Edwards, Technology Journalist & Author
December 11, 2024
5 Min Read
Wavebreakmedia Ltd IFE-241002 via Alamy Stock Photo

As the need for AI talent grows, enterprises in virtually all fields are struggling to find individuals who can help them take full advantage of this powerful new technology. With competition for qualified AI experts tight, and likely to grow even tighter over the next few years, many organizations are now looking internally to find and train qualified candidates.

"Every organization needs to make a serious commitment to AI, one of the biggest technology shifts in our lifetime," says David Menninger, executive director, software research, with technology research and advisory firm ISG, in an email interview. "AI is not just an IT initiative; everyone needs to jump on board."

Here's a look at how four major enterprises are getting ahead of competitors by encouraging and cultivating internal AI talent.

Cummins

Renowned for producing powerful engines, Cummins Inc. also designs, manufactures, and distributes filtration, fuel system, power generation, and numerous other heavy-duty products and services. Like a growing number of forward-looking enterprises, Cummins management understands that AI is destined to play a critical role in virtually every aspect of its operations.

"At Cummins, we conduct a 360-degree evaluation of our talent," says Prateek Shrivastava, the firm's principal data scientist, via email. Individuals with strong analytical skills and a preference for coding are identified as potential candidates for in-house AI roles. "However, it's crucial to also gauge their interest in working with cutting-edge technology."

Shrivastava states that targeted training programs, mentorship under experienced AI professionals, and providing opportunities to work on real-world AI projects within the organization have all proven essential. "A great example is one of our interns from last year," he notes. The individual demonstrated innate AI talent, so he was paired with one of the firm's AI experts. "By the end of his internship, he had successfully delivered a highly customized AI chatbot for HR."

Since AI is a relatively new technology, formal training options are limited, Shrivastava observes. "For us, pairing talent with experts, supplemented by YouTube tutorials, has been highly effective."

Saatchi & Saatchi

One of the world's largest advertising agencies, Saatchi & Saatchi understands that AI adoption is critical to its future success. The firm also realizes that AI is destined to play an essential role in virtually every aspect of its business.

Jeremiah Knight, Saatchi & Saatchi's chief operating officer, says that the major barriers to integrating AI into daily operations are apprehension and trepidation. "People can be hesitant with AI in the same way technophobe family members are hesitant around a complicated new appliance," he observes in an online interview. "Perhaps there's some fearfulness about how to use AI, some fearfulness about breaking something, or even fearfulness about long-term implications."

The antidote, Knight believes, is finding zealous first adopters scattered throughout the agency who are willing to lead workshops that help colleagues acquire AI skills in a safe, hands-on environment.
"And to have fun with it, because enjoying the silliness of some of the generative AI platforms goes a long way to reducing fear about them," he adds.Knight also likes to find "champions" within each department -- individuals who are eager to learn and unafraid to be curious about specific tools that advance departmental efforts. "Such individuals often have a positive infectious effect on their peers by demystifying AI and showcasing what's possible on a departmental/personal basis."Dell TechnologiesTwo years ago, just about the only people working with generative AI were researchers, observes John Roese, global CTO and chief AI officer at Dell Technologies. "At Dell, we asked our team member population 'who's interested in AI as part of their future job?' -- 5,000 individuals raised their hands."Off-the-shelf AI training is sufficient to a certain point, Roese notes, but he believes that the best way to transfer knowledge is with pairing an AI newbie with a seasoned expert. "A lot of what people need to know isn't documented well," Roese explains in an online interview. "To get to advanced levels, you need to have people doing advanced AI work and sharing their knowledge." He warns that one of the biggest mistakes organizations make is getting one central team to do all the AI work instead of helping AI experts propagate their ability to other teams.Related:Mine for the pockets of individuals who exhibit enthusiasm and promise, Roese advises. "Get started today and begin training immediately."MicrosoftNaga Santhosh Reddy Vootukuri, senior software engineering manager at Microsoft, recommends training employees and keeping them AI-competitive so that when the need arises to utilize these their skills, they won't find themselves lagging behind competitors. "It's important ... to view AI talent as an ongoing process rather than a one-time initiative," he observes in an online interview.Team hackathons and knowledge-sharing presentations make it easy to identify individuals who possess the foundational skills necessary to build upon their AI talent, Vootukuri says. "AI experts in the team should do active mentoring to guide junior engineers who have the passion to make strides, but don't know how to proceed and are limited due to their nine-to-five job."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
  • WWW.INFORMATIONWEEK.COM
    Let's Revisit Quality Assurance
Today's IT departments run an amalgamation of DevOps, waterfall, artificial intelligence, and OS/new-release software, so quality assurance must be able to test and verify all of these variegated systems. Yet those of us who have led IT departments know that the QA function is habitually under-appreciated.

Understanding that QA must broaden its reach to test such a broad spectrum of different systems, vendors have rolled out QA tools like the automated execution of test scripts that QA designs. This has generated a steady market in QA testing software, which Global Market Insights pinpointed at $51.8 billion in 2023, with a projected CAGR (compound annual growth rate) of 7% between 2024 and 2032.

What IT departments should do now is strategize how a limited QA staff can best use these tools, while also developing the knowledge base and reach that allow them to cover the broad array of new applications and systems that QA is being asked to test.

Performing QA With No Single Pane of Glass

If you are in system programming or network support, you know that there are over-arching software solutions that boast "single pane of glass" visibility. These systems provide an overall architecture that enables you to unify visibility of all of the different tools and functions that you have on a single screen. Not all IT departments invest in these expensive software architectures, but at least they do exist. That isn't the case for quality assurance.

In QA, the test bench is a hodgepodge of different tools and techniques spread out on a general tool bench. When a staffer performs QA, they pick whatever tools they choose to use from this tool bench based upon the type of application they are being called upon to test.

If the application area to be tested is DevOps, QA is an iterative, never-done function that might use some test automation for workflow execution, but that also requires a high amount of collaboration between QA, development, and end users until everyone arrives at a consensus that the application is production-ready.

In the AI environment, testing is also iterative and never finished. You work with development and user-area subject matter experts to achieve the gold standard of 95% accuracy against what subject matter experts would conclude. Then you must periodically reaffirm accuracy, because business conditions constantly change and accuracy levels could fall. (A minimal sketch of such a periodic accuracy check appears below.)
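To illustrate that recurring check, here is a minimal sketch that scores recent model outputs against a sample of subject-matter-expert conclusions and flags drift below the 95% bar. The function names, labels, and alerting behavior are hypothetical; a real pipeline would pull labeled samples from production and route alerts to the QA team.

```python
# Minimal accuracy-drift check: compare model outputs to subject matter
# expert (SME) conclusions on a labeled audit sample and flag drift.
# Names and thresholds are illustrative.

GOLD_STANDARD = 0.95  # target agreement with SME conclusions


def agreement_rate(model_outputs, sme_conclusions):
    """Fraction of cases where the model matches the SME conclusion."""
    if len(model_outputs) != len(sme_conclusions):
        raise ValueError("audit sample and SME labels must align")
    matches = sum(m == s for m, s in zip(model_outputs, sme_conclusions))
    return matches / len(sme_conclusions)


def check_for_drift(model_outputs, sme_conclusions, threshold=GOLD_STANDARD):
    """Return (rate, ok) so a scheduled job can alert when accuracy slips."""
    rate = agreement_rate(model_outputs, sme_conclusions)
    return rate, rate >= threshold


if __name__ == "__main__":
    # Hypothetical monthly audit sample of 100 decisions.
    model = ["approve", "deny", "approve", "approve", "deny"] * 20
    sme   = ["approve", "deny", "approve", "deny",    "deny"] * 20
    rate, ok = check_for_drift(model, sme)
    print(f"agreement={rate:.1%}, ok={ok}")  # agreement=80.0%, ok=False
```

Scheduled monthly or quarterly, a check like this is what turns the one-time 95% sign-off into the ongoing reaffirmation the article describes.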
If the application is waterfall, it routes through the traditional path of development, unit test, integration test, regression test, and deploy.

If the system is a new database, operating system, or infrastructure release from a vendor, the new release is first simulated in a test environment, where it is tested and debugged. The new release gets installed into production when all testing issues in the simulated environment are resolved.

Each of these test scenarios requires a different mental approach to QA and a different set of tools.

Make QA a Strategic Function and Elevate Its Standing

Test tool provider Hatica has stated, "In the past, QA engineers were primarily focused on testing -- finding bugs and ensuring that the product worked as intended before it was released to users. However, this reactive approach to quality is no longer enough in today's environment. Before long, QA engineers will shift from being testers at the end of the process to quality strategists who are involved from the very beginning."

In Agile and DevOps development, there already is an emerging trend for QA that confirms this. QA is immediately engaged in Agile and DevOps work teams, and the QA team provides as much input into the end-to-end DevOps/Agile process as development and end users. As IT departments move more work to Agile and DevOps, QA's role as a frontend strategist will expand.

However, in waterfall and new infrastructure release deployments, QA's role is more backend and traditional. It performs end-of-the-line checkouts and is often not engaged in the initial stages of development. AI also presents a QA challenge, because a separate data science or subject matter expert group might do most of the system development and checkout, so QA's role is minimized.

The Best Approach to QA

Thanks to the Agile/DevOps movement, QA now sees a more forward-thinking and strategic role. Yet at the same time, applications in the AI, waterfall, and infrastructure areas engage QA as more of a backend function. QA is also knee-capped by the lack of a single architecture for its tools, and by the brutal fact that most of the staff in QA departments are new hires or junior personnel. Quickly, these individuals apply for transfers into application development, database, or systems roles, because they see these as the only viable options for advancing their IT careers.

Understanding these realities, CIOs can do three things:

1. Move QA into a more strategic position in all forms of application development. Like the IT help desk, QA has a long institutional memory of the common flaws in IT applications. If QA is engaged early in application development processes, it can raise awareness of these common flaws so they can be addressed up front in design. Accept as well that most QA staff members will want to move on to become developers or IT technical specialists, and will use QA as a grooming ground. To this end, the more QA gets engaged early in application planning and development, the more IT software knowledge QA staff will gain. This can prepare them for development or systems careers, if they choose to take these routes later.

2. Ensure that QA staff is properly trained on QA tools. There is no uber-architecture available for the broad assortment of tools that QA uses, so personalized training is key.

3. Foster collaboration. In the Agile/DevOps environment, there is active collaboration between QA, development, and end users. In AI development, CIOs can foster greater QA collaboration by teaming QA with IT business analysts, who often work side by side with user subject matter experts and data scientists. In new infrastructure release testing and in waterfall testing, more active collaboration should be fostered with system and application programmers.

The more collaborative bridges you build, the more effectively your QA function will perform.
  • WWW.INFORMATIONWEEK.COM
    8 Things That Need To Scale Better in 2025
Lisa Morgan, Freelance Writer
December 10, 2024
10 Min Read
Myrarte via Alamy Stock

As businesses grow and tech stacks become more complex, scalability remains a top issue. Companies face significant challenges scaling across both physical and virtual spaces.

"While a holistic approach to operations across regions provides advantages, it also introduces complexity," says Dustin Johnson, CTO of advanced analytics software provider Seeq. "The cloud can assist, but it's not always a one-size-fits-all solution, especially regarding compute needs. Specialized resources like GPUs for AI workloads versus CPUs for standard processes are essential, and technologies like Kubernetes allow for effective clustering and scaling. However, applications must be designed to fully leverage these features, or they won't realize the benefits."

The variety of technologies involved creates significant complexity. "Today, a vertically integrated tech stack isn't practical, as companies rely on diverse applications, infrastructure, AI/ML tools, and third-party systems," says Johnson. "Integrating all these components -- ensuring compatibility, security, and scalability -- requires careful coordination across the entire tech landscape."

A common mistake is treating scalability as a narrow technology issue rather than a foundational aspect of system design. Approaching it with a short-term, patchwork mentality limits long-term flexibility and can make it difficult to respond to growing demands. Following are some more things that need to scale better in 2025.

1. Processes

A lot of organizations still have manual processes that prevent velocity and scale. For example, if a user needs to submit a ticket for a new server to implement a new project, someone must write the ticket, someone receives the ticket, someone must activate it, and then something must be done with it. It's an entire sequence of steps.

"That's not a scalable way to run your environment, so I think scaling processes by leveraging automation is a really important topic," says Hillery Hunter, CTO and GM of innovation at IBM and an IBM Fellow. "There are a bunch of different answers to that, [ranging] from automation to what people talk about as AIOps or orchestration technologies. If you have a CIO who is trying to scale something and needs to get permission separately from the chief information security officer, the chief risk officer, or the chief data officer team, that serialization of approvals blocks speed and scalability."

Organizations that want to achieve higher velocities should make it a joint responsibility among members of the C-suite. "You don't just want to automate inefficient things in your organization. You really want to transform the business process," says Hunter. "When you bring together the owners of IT, information, and security at the same table, you remove that serialization of the decision process, and you remove the impulse to say no and create a collective impetus to say yes, because everyone understands the transformation is mutual and a team goal."

2. IT operations

IT is always under pressure to deliver faster without sacrificing quality, but the pressure to do more with less leaves IT leaders and their staff overwhelmed. "Scalability needs to be done through greater efficiency and automation, and using things like AIOps to oversee the environment and make sure that as you scale, you maintain your security and resiliency standards," says Hunter.
"I think re-envisioning the extent of automation within IT and application management is not done until those processes break. It's maybe not investing soon enough so they can scale soon enough."

3. Architectures

In the interest of getting to market quickly, startups might be tempted to build a new service from existing pre-made components that can be coupled together in ways that mostly fit but will demonstrate the business idea. This can lead to unintentionally complicated systems that are impossible to scale because of their sheer complexity. While this approach may work well in the beginning, getting business approval later to completely re-architect a working service that is showing signs of success may be very difficult.

"First of all, be very careful in the architectural phase of a solution [because] complexity kills. This is not just a reliability or security argument, it is very much a scalability argument," says Jakob Østergaard, CTO at cloud backup and recovery platform Keepit. "A complex structure easily leads to situations where one cannot simply throw hardware at the problem; this can lead to frustrations on both the business side and the engineering side."

He advises: "Start with a critical mindset, knowing that upfront investment in good architecture will pay for itself many times over."

4. Data visibility

Organizations are on a constant mission to monetize data. To do that, they need to actively manage that data throughout the entire lifecycle, at scale.

"While cloud computing has gained popularity over the past few decades, there is still a lot of confusion, resulting in challenges including understanding where your cloud data lives, what it contains, and how to ensure it is properly protected," says Arvind Nithrakashyap, co-founder and CTO at data security company Rubrik. "When it comes to scalability, one blind spot is unstructured and semi-structured data."

Unstructured data poses a security risk, as it can contain sensitive business data or personally identifiable information. And since all unstructured data is shared with end-user applications using standard protocols over TCP/IP networks, it's a prime target for threat actors. Since most companies have hybrid and multi-cloud implementations, IT needs to understand where sensitive data is, where it is going, and how it is being secured.

"One of the toughest hurdles for organizations whose unstructured data portfolio includes billions of files, and/or petabytes of data, is maintaining an accurate, up-to-date count of those datasets and their usage patterns," says Nithrakashyap. "[You need to understand] things [such as] how many files [exist], where they are, how old they are, and whether they're still in active use. Without reliable, up-to-date visibility into the full spectrum of critical business files, your organization can easily be overwhelmed by the magnitude of your data footprint, not knowing where critical datasets are located, which datasets are still growing, [and] which datasets have aged out of use." A basic inventory of the kind sketched below is where that visibility starts.
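As a minimal sketch of what such visibility work looks like at the smallest scale, the following walks a directory tree and summarizes file counts, total size, and age distribution. It is illustrative only: a real unstructured-data inventory at billions-of-files scale would rely on storage-level metadata services rather than a filesystem walk, and the age buckets here are arbitrary.

```python
# Minimal file-inventory sketch: count files, total bytes, and age
# distribution under a directory tree. Illustrative only; petabyte-scale
# inventories rely on storage metadata services, not os.walk().

import os
import time

AGE_BUCKETS_DAYS = [30, 365, 3 * 365]  # arbitrary illustrative cutoffs


def inventory(root):
    now = time.time()
    counts = {f"<= {d}d": 0 for d in AGE_BUCKETS_DAYS}
    counts["older"] = 0
    total_files = total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or deny access
            total_files += 1
            total_bytes += st.st_size
            age_days = (now - st.st_mtime) / 86400
            for d in AGE_BUCKETS_DAYS:
                if age_days <= d:
                    counts[f"<= {d}d"] += 1
                    break
            else:
                counts["older"] += 1
    return {"files": total_files, "bytes": total_bytes, "age": counts}


if __name__ == "__main__":
    print(inventory("/var/log"))  # point at any directory you can read
```

The "older" bucket is exactly the population Nithrakashyap describes: datasets that may have aged out of use and quietly inflate the data footprint.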
5. SaaS service APIs

APIs are the glue that holds our modern software-driven world together. Keepit's Østergaard says his company sees bottlenecks on the software-as-a-service APIs that vendors offer up for general use, from explicit throttling to slow responses to outright intermittent failures. For better and tighter integrations between systems, APIs need to scale to higher-volume use.

"Fundamentally, an API that does not scale is pointless," says Østergaard. "For APIs to be useful, we want them to be usable. Not a little bit, not just sometimes, but all the time and as much as we need. Otherwise, what's the point?"

Although it can be difficult to pinpoint a limiting factor, if user experience is any indication, it appears that some services are built on architectures that are difficult for the vendor to scale to higher-volume use.

"This is a classical problem in computer science -- if a service is built, for example, around a central database, then adding more API front-end nodes may not do anything to improve the scalability of the APIs, because the bottleneck may be in the central database," says Østergaard. "If the system is built with a central database being core to its functionality, then replacing that central component with something that is better distributed over many systems could require a complete re-write of the service from the ground up. In practical terms for real-world services, making a service scale to higher volume use is often very different from just clicking the elastic scaling button on the cloud platform on which it runs."

To scale, a solution must be built on the simplest possible architecture, since architectural complexity is typically the main obstacle to scaling a solution. A complex architecture can make throwing hardware at a solution completely ineffective.

6. Artificial intelligence

As AI usage accelerates, cloud and cybersecurity scalability become even more critical. "[M]ost companies are still in a discovery phase [with AI], and therefore what it takes to scale [in terms of] capabilities, cost, etc. is still not fully understood. It requires an approach of continuous learning and experimentation, with a strong focus on outcomes, to prioritize the right activities," says Orla Daly, CIO at digital workforce transformation company Skillsoft.

IT leaders must ensure alignment with business leaders on the desired outcomes and critical success factors. They also need to understand the skills and resources in the organization, define KPIs, and fill key gaps.

"Teams who are not proactively managing the need for scale will find suboptimal decisions or runaway costs on one side, or [a] lack of progress because the enablers and path to scale are not defined," says Daly. "Scaling technology is ultimately about enabling business outcomes, therefore continuing to tie activities to the company priorities is important. It's easy to get carried away by new and exciting capabilities, and innovation remains important, but when it comes to scaling, it's more important to take a thoughtful and measured approach."

7. Generative AI

Organizations are struggling with scaling GenAI cost-effectively. Most providers bill for their models based on tokens, which are numerical representations of words or characters. The costs for input and output tokens differ. For example, Anthropic's Claude 3.5 Sonnet charges $3.00 per million input tokens and $15 per million output tokens, while OpenAI's gpt-4o model costs $2.50 per million input tokens and $10 per million output tokens. The two models are not equal and support different features, so the choice isn't as clear-cut as which model is cheaper. GenAI model consumers must pick a balance between price, capability, and performance. "Everyone wants the highest quality tokens at the lowest possible price as quickly as possible," says Randall Hunt, CTO at leading cloud services company and AWS Premier Tier Services partner Caylent.

An additional charge exists around vectorization of data, such as converting images, text, or other information into a numerical format, called an embedding, that represents the semantic meaning of the underlying data rather than the specific content. "Embedding models are typically cheaper than LLMs. [For instance,] Cohere's Embed English embedding model is $0.10 per million tokens. Embeddings can be searched somewhat efficiently using techniques like hierarchical navigable small world (HNSW) and cosine similarity, which isn't important, but it requires the use of database extensions or specialized datastores that are optimized for those kinds of searches -- further increasing cost. [A]ll of this cost is additive, and it can affect the unit economics of various AI projects." The sketch below works through that additive arithmetic using the prices quoted above.
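To see how the per-token prices translate into unit economics, here is a minimal cost calculator using the published prices cited above. The workload figures (request counts and tokens per request) are hypothetical, and real bills would also reflect features this sketch ignores, such as caching discounts or batch pricing.

```python
# Minimal GenAI unit-economics sketch using the per-million-token
# prices cited in the article. Workload numbers are hypothetical.

PRICES_PER_MILLION = {                  # (input $, output $)
    "claude-3.5-sonnet": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
}
EMBEDDING_PRICE_PER_MILLION = 0.10      # Cohere Embed English

# Hypothetical monthly workload.
REQUESTS = 100_000
INPUT_TOKENS_PER_REQUEST = 1_500
OUTPUT_TOKENS_PER_REQUEST = 400
EMBEDDING_TOKENS = 50_000_000           # one-time vectorization pass


def llm_cost(model):
    """Monthly LLM spend for the hypothetical workload."""
    in_price, out_price = PRICES_PER_MILLION[model]
    in_tokens = REQUESTS * INPUT_TOKENS_PER_REQUEST
    out_tokens = REQUESTS * OUTPUT_TOKENS_PER_REQUEST
    return (in_tokens / 1e6) * in_price + (out_tokens / 1e6) * out_price


if __name__ == "__main__":
    for model in PRICES_PER_MILLION:
        print(f"{model}: ${llm_cost(model):,.2f}/month")
    embed = (EMBEDDING_TOKENS / 1e6) * EMBEDDING_PRICE_PER_MILLION
    print(f"embeddings (one-time): ${embed:,.2f}")
    # claude-3.5-sonnet: $1,050.00/month
    # gpt-4o: $775.00/month
    # embeddings (one-time): $5.00
```

Note how the output-token premium dominates even though responses are far shorter than prompts, which is why prompt and response length budgets matter when scaling GenAI, and why vector datastore and infrastructure costs stack on top of the raw token bill.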
8. Operational technology data

Companies are being flooded with data. This goes for most organizations, but it's especially true for industrial companies that are constantly collecting operational technology (OT) data from equipment, sensors, machinery, and more. Industrial companies are eager to integrate insights from OT and IT data to enable data-driven decision-making based on a holistic view of the business.

"In 2025 and beyond, companies that can successfully give data context and make efficient and secure connections between diverse OT and IT data sources will be best equipped to scale data throughout the organization for the best possible outcomes," says Heiko Claussen, chief technology officer at industrial software company AspenTech. "Point-to-point data connections can be chaotic and complex, resulting in silos and bottlenecks that could make data less effective for agile decision making, enterprise-scale digital transformation initiatives, and AI applications."

Without an OT data fabric, an organization that has 100 data sources and 100 programs utilizing those sources would need to write and maintain 10,000 point-to-point connections. With an OT data fabric, that drops to 200 connections. In addition, many of these connections will be based on the same driver and thus much easier to maintain and secure.

About the Author

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • WWW.INFORMATIONWEEK.COM
    What 'Material' Might Mean, and Other SEC Rule Mysteries
    How can a CISO know if a cybersecurity incident is "material," and is that even the CISO's job? Forrester principal analyst Jeff Pollard explains this and other lessons learned after one year of living with the Securities and Exchange Commission's Cybersecurity Rule.
  • WWW.INFORMATIONWEEK.COM
    How to Prep for AI Regulation and AI Risk in 2025
Forrester Principal Analyst Enza Iannopollo explains what the proposed regulations on artificial intelligence actually aim to do, what they mean for enterprise CIOs' AI goals, and how to prepare today for the risks and compliance goals of tomorrow.
  • WWW.INFORMATIONWEEK.COM
    How Conflict With China Might Play Out in the Cyber Realm
Earlier this year, China-linked threat group Salt Typhoon allegedly breached major telecommunications companies, potentially gaining access to US wiretap systems. The full scope of the breach remains unknown, and the hackers are potentially still lurking in telecommunications networks.

This breach is hardly the first time a group associated with China targeted critical infrastructure in the US. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), and Christopher Wray, director of the FBI, have both been vocal about the threat China poses to US critical infrastructure.

In a 2024 opening statement before the House Select Committee on Strategic Competition Between the United States and the Chinese Communist Party, Easterly said, "Specifically, Chinese cyber actors, including a group known as Volt Typhoon, are burrowing deep into our critical infrastructure to be ready to launch destructive cyber-attacks in the event of a major crisis or conflict with the United States."

In April, Wray brought up this concern at the Vanderbilt Summit on Modern Conflict and Emerging Threats: "The fact is, the PRC's targeting of our critical infrastructure is both broad and unrelenting."

At the Cyberwarcon conference, Morgan Adamski, executive director of US Cyber Command, chimed in with a warning about how China's position in critical infrastructure could enable disruptive cyberattacks if the two countries enter into a major conflict, Reuters reports.

If conflict does erupt between China and the US, what could disruptive cyberattacks on critical infrastructure look like? What can the government and critical infrastructure leaders do to prepare?

The Possibility of Disruptive Cyberattacks

The US has 16 critical infrastructure sectors. "All of them are called critical because they would impact society to some degree were they to be taken offline," says Eric Knapp, CTO of OT for OPSWAT, a company focused on critical infrastructure cybersecurity. "And they're all susceptible to cyberattack to some degree."

Telecommunications and power could be prime targets for China in a conflict. "Back from the dawn of time when people would go to war, you would try to eliminate your opponent's ability to communicate and their ability to power their systems," says Knapp. But other sectors, such as water, health care, food, and financial services, could be targeted as well.

"The intent of these kinds of operations may be to provide a distraction in order to slow down a US response, if there was to be one, in any sort of conflict involving Taiwan," says Rafe Pilling, director of threat intelligence for the counter threat unit at cybersecurity company Secureworks.

While it is uncertain exactly how these attacks would play out, there are real-world examples of how adversaries can attack critical infrastructure to their advantage. "Unfortunately, there's a roadmap that we can look at that's happening in the real world right now in the Russia-Ukraine conflict," says Knapp. Leading up to and following Russia's invasion of Ukraine, Russia executed many cyberattacks on Ukrainian critical infrastructure, including its power grid.

If China were to use its positioning in US critical infrastructure to carry out similarly disruptive attacks, it would be dealing with very distributed systems.
It would be very unlikely to see something like a nationwide power outage, Knapp tells InformationWeek. "What you'd likely see is a cascade of smaller localized disruptions," says Pilling. Those disruptions could still be very impactful, potentially causing chaos, physical harm, death, and financial loss. But they would not last forever.

"Many of these sectors, for reasons completely unrelated to cyberattacks, are used to being able to resolve issues, work around problems, and get services up and running quickly," says Pilling. "Resiliency and quick restoration of services, particularly in the energy sector, [are] an important part of their day-to-day planning."

Threat Actors

Salt Typhoon and Volt Typhoon are two widely recognized Chinese cyber threat groups that target US critical infrastructure. "All [of] these different Chinese threat actor groups, they have different motivations, different goals, different countries that they're attacking," says Jonathan Braley, director of threat intelligence at the nonprofit Information Technology-Information Sharing and Analysis Center (IT-ISAC). In addition to pre-positioning for disruptive cyberattacks, motivations could also include intellectual property theft and espionage.

While Salt Typhoon is the suspected culprit behind the major breach in the US telecommunications sector, it actively targets victims in other sectors as well. For example, the group reportedly targeted hotels and government, according to FortiGuard Labs. "Targeting hotels and targeting telcos is often to get information about people's movements and what they've been saying to each other and who they've been communicating with. So, it's part of a collection for a wider intelligence picture," says Pilling.

Volt Typhoon has targeted systems in several critical infrastructure sectors, including communications, energy, transportation, and water, according to CISA. "They combine a number of tactics that make them quite stealthy," says Pilling. For example, Volt Typhoon makes use of living-off-the-land techniques and will move laterally through networks. It often gains initial access via known or zero-day vulnerabilities. "In some cases, they would use malware, but for the vast majority of cases they were using built-in tools and things that were already deployed on the network to achieve their aims of maintained persistence in those networks," Pilling shares.

Salt Typhoon and Volt Typhoon are just two groups out of many China-backed threat actors. IT-ISAC maintains adversary playbooks for threat actors across many different countries of origin. "We have about 50 different playbooks for different Chinese nation state actors, which is a lot," Braley tells InformationWeek. "I think if we look at other countries there might be a dozen or so."

While China-linked threat groups pose a risk to critical infrastructure, they are not alone. "As we approach various global conflicts, we need to be prepared that not only are we going to have these nation states coming out, [but] we also [have] to watch some of these hacktivist groups that are aligned with these countries as well," says Braley.

Preparing Critical Infrastructure

The government and critical infrastructure operators both have roles to play in preparing for the potential of disruptive cyberattacks. Information sharing is vital. Government agencies like CISA can continue to raise awareness.
Critical infrastructure operators can share insight into any malicious activity they discover to help other organizations.Critical infrastructure operators also have a responsibility to harden their cybersecurity posture.A lot of the basic hygiene that organizations need to be doing is not expensive cutting-edge cybersecurity work. It's the basics of making sure things are patched, minimizing attack surfaces externally, making sure that there is good monitoring across the network to detect intrusions early when they occur, says Pilling. I think it's a culture and a mind shift as much as need for more budget.
  • WWW.INFORMATIONWEEK.COM
    How to Keep IT Team Boredom From Killing Productivity
John Edwards, Technology Journalist & Author | December 4, 2024 | 5 Min Read | Marcelo Mayo via Alamy Stock Photo

Boredom is easy to detect, yet difficult to define and even tougher to address. Boredom indicates that a current activity or situation isn't providing sufficient engagement or meaning. An IT leader's goal should be to help bored individuals -- even entire teams -- shift their attention to tasks and activities that are fulfilling and enriching.

IT team boredom often stems from mind-numbing repetitive tasks that drain creativity and engagement, observes Carl Herberger, CEO of Corero Network Security, a threat intelligence insights and analysis firm. "The irony is that the very efficiency IT seeks to create can trap teams in a cycle of monotony," he says in an email interview.

It all comes down to engagement, says Orla Daly, CIO with workforce development firm Skillsoft. "IT teams may lack engagement because the work isn't considered sufficiently challenging or feels repetitive," she explains in an online interview. Many tech professionals want the opportunity to become familiar with new technologies and to keep their skills up to date. "When organizations fail to provide a good balance of opportunities, team members can become disengaged," Daly notes.

Yet engagement isn't just about gaining access to new technologies. If team members attempt to try a new task without enough skills and support resources to be successful, they may become disengaged, Daly cautions. "It's important to couple access with the right support frameworks."

Risky Business

A bored IT team is a ticking time bomb, Herberger warns. "The risks are clear: increased turnover as talent walks out the door, underperformance that drags down productivity, and a contagious drop in morale that can spread like a virus across the organization," he says. "Worse, in a competitive industry, boredom kills innovation, leaving your company vulnerable to being outpaced by more engaged and agile competitors."

A disengaged IT team, or team subset, can negatively impact business performance, since members are probably not contributing to their full abilities. "Additionally, it can impact company culture, creating a suboptimal work environment and lowering the drive of more motivated employees," Daly says. She points to a Gallup survey that shows disengaged employees cost organizations worldwide $8.8 trillion in lost productivity. The same report found that companies with actively engaged employees can see enormous benefits, including 23% higher profitability and 18% lower turnover for high-turnover organizations.

Most at Risk

IT teams stuck in the trenches of repetitive, mundane tasks -- such as routine maintenance or low-level coding -- are most at risk of succumbing to boredom, Herberger says. "These assignments often fail to provide the intellectual stimulation that keeps talent engaged, turning what could be an incubator for innovation into a dead-end job that saps motivation."

Daly agrees. "While individual motivations play a big role, there's a greater risk of disengagement from teams involved in routine, repetitive tasks that could be automated, or where team members do not understand the purpose of their role and how it connects to the overall company performance."

Solutions

To reinvigorate a sagging IT team, Herberger recommends shaking things up by introducing fresh challenges and innovation opportunities: "Whether it's rotating team roles, fostering a culture of collaboration, or carving out time for passion projects, the goal is clear: disrupt the routine, reawaken creativity, and make the team feel like they're part of something bigger than just punching the clock."

Meanwhile, empathy and open communication can help IT leaders identify the root causes of disengagement and identify effective solutions, such as pursuing new certificates, establishing mentorships, or reorganizing responsibilities, Daly says. "Engage in exercises that drive innovation," she suggests. "Learning something new generally excites people -- they feel like they're developing, growing, and that tends to get people engaged."

Workers often cite a lack of growth and development opportunities as the reason to move to a new job, Daly says. "Build opportunities for employees to propose new ideas and lend their expertise on projects they wouldn't typically be a part of, encouraging these skilled professionals to use the full scope of their abilities." She also stresses the importance of encouraging open communication.

Preventative Measures

Proactive leadership is key, says Hiren Hasmukh, CEO of IT asset management solutions provider Teqtivity. "Regular check-ins, setting clear goals, and providing opportunities for professional development can help," he advises via email. "Fostering a culture of innovation, where team members can propose and lead new initiatives, can be very effective."

Daly recommends that IT leaders stay close to their workforce in order to understand their engagement levels, manage mundane tasks effectively, and create space for more interesting assignments. To help prevent disengagement, she suggests offering learning opportunities and activities that promote development and growth. "Upskilling and reskilling are essential strategies to combat disengagement in the workforce."

A Final Observation

It's important to recognize that occasional lulls in excitement are normal in any job, Hasmukh says. "The key is to create an environment balanced with periods of challenge and growth."

About the Author

John Edwards, Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • WWW.INFORMATIONWEEK.COM
    AI and Gen Z: A Perfect Match for Innovation
Leigh Gordon, Associate Vice President, Human Resources, HCLSoftware | December 4, 2024 | 4 Min Read | Josie Elias via Alamy Stock

Generation Z is the driving force fundamentally redefining the world and our business landscape. Growing up amid a digitally defined, network-oriented environment that moves at unprecedented scale, scope, and speed, Gen Zers -- also called Zoomers or iGen-ers -- are the first generation shaped by digital technology.

Born a few years after the World Wide Web debuted in 1993, this post-millennial digital native generation has grown up with the internet. As they seamlessly blend online and real worlds, Gen Zers, an integral part of business today, are heralding the shift from the digital age to the virtual age.

While previous generations invented most of the technology Gen Zers have at their fingertips, their inherent AI fluency helps them radically redefine the future of work, play, and social life. Every facet of their life has been profoundly shaped by AI tools and solutions, leading to new methods of working and connecting with others. The speed and scale afforded by new technologies like GenAI are also reflected in their new attitudes toward how they get work done. Unsurprisingly, GenAI tools are the preferred sidekick for tech-savvy Zoomers, with more than 50% using them at work to free up their time for strategic work.

As we welcome the next wave of innovation and the youngest cohort of workers, it's essential for Gen Z to channel the following skills to thrive in the AI-driven era:

Creativity: AI can process vast amounts of data and identify patterns, but it lacks the spark of human creativity. Thinking outside the box, generating novel ideas, and envisioning the future will be indispensable.

Imagination: Imagination is the fuel that drives innovation. It allows for new possibilities, challenges the status quo, and develops solutions to complex problems.

Problem-solving: While AI can assist in identifying problems and analyzing data, it is humans who possess the critical thinking skills, empathy, and judgment necessary to devise effective solutions.

But as we guide Gen Z toward harnessing the power of AI, businesses should proactively adapt to the needs of Gen Z, recognizing their value as a tech-savvy generation that is shaping the future of work.

Redefining the Workplace for Zoomers

According to the World Economic Forum, Gen Z will make up about 27% of the workforce by 2027 and 29% by 2030. By recognizing the unique skill set of Gen Zers, organizations can capitalize on their potential to create a more adaptable, innovative, and human-centric workplace.

To build a collaborative ecosystem for the future workplace, organizations must consider the following:

Invest in continuous learning: The rapid development and proliferation of AI tools has also created unrealistic expectations about capability and proficiency, highlighting the importance of better training, continuous learning, and, more importantly, governance of tools. It's important for organizations to foster a culture of lifelong learning to keep employees adaptable to evolving technologies, to offer training programs for effective governance to avoid misuse of tools, and to create a knowledge-sharing environment.

Use AI as a co-pilot in the workplace: Gen Z brings a new perspective, and they do not view AI as a threat or competitor but as a valuable collaborator. They are accustomed to using AI assistants for data analysis, modifying product design, and gaining insights to enhance their work. This paradigm shift demands a focus on developing skills that complement AI, like creativity, critical thinking, and the ability to transform AI-generated data into practical strategies.

Adopt tools that reflect the needs of digital-age humans: Gen Z workers have more AI fluency than their more senior colleagues, as evidenced by a recent study, which underscores the need to do away with outdated, legacy tools and platforms to promote real change. Now, organizations must adopt better systems and software that match the needs of this younger workforce wave.

Invest in digital tools and infrastructure to foster collaboration: Growing up in a hyper-connected world, this generation thrives in both the digital and physical realms. In an era of hybrid work, organizations must strive to provide phygital (physical + digital) environments to foster connections, spur productivity, and boost culture. This also helps promote a sense of belonging.

Future Is Bright for Gen Z and AI

As we navigate the rapidly evolving future of work, it's clear that Gen Z is at the forefront of innovation. Their digital fluency, combined with their creativity, imagination, and problem-solving skills, positions them as invaluable assets in the AI-driven era.

By fostering a workplace that supports continuous learning, innovation, and collaboration, organizations can harness the full potential of Gen Z and create a more adaptable, innovative, and human-centric future, centered on a partnership -- not a tradeoff -- between humans and AI. If organizations understand and embrace this dynamic, they'll be poised to create a world where technology augments human capabilities, with Gen Z at the helm of this transformation, defining the future of work for generations to come.

About the Author

Leigh Gordon, Associate Vice President, Human Resources, HCLSoftware

Leigh Gordon leads the people function and is responsible for all aspects of people strategy and operations on a global scale, including global recruiting, talent management, talent development, diversity and inclusion, total rewards, organizational effectiveness, and program management. Leigh and her team engage and empower 8,000+ global employees to achieve their true potential and drive business value for customers and partners. She focuses on making HCL Software an exceptional place for employees to work and on building a culture that promotes diversity, equity, inclusion, and belonging.

Leigh has over 20 years' experience working in global technology organizations, leading teams in human resources and sales operations. Prior to joining the company, she was the Global HR Leader and business partner for Customer Experience Solutions at Infor. Leigh earned her bachelor's degree in business from West Chester University and her master's degree in human resource development from Villanova University. She is a member of CHIEF and holds her Professional Human Resources (PHR) certification from HRCI.
  • WWW.INFORMATIONWEEK.COM
    Quick Study: The Future of Work Is Here
James M. Connolly, Contributing Editor and Writer | December 4, 2024 | 9 Min Read | Federico Caputo via Alamy Stock

There might be someone, somewhere -- possibly on some isolated South Pacific island -- who hasn't wondered about the impact of artificial intelligence and other technologies on their job. For the rest of us, AI's invasion has the workforce pondering what it means to us, how it changes the nature of our work, the value of our paychecks, and even whether we have a job going forward.

All legitimate questions.

Browsing through the past year or so of InformationWeek articles, we found a boatload of content focused on the role of AI and other tech today and in the future. In updating this Future of Work Quick Study, posted in mid-2023, we hope to provide you with insight into what IT leaders and their teams can expect from those technologies, and how they can build a modern workforce that enables their organizations and careers to flourish.

One thing we can be sure of is that, for all the change we've seen in the past couple of years, still more change is on the horizon.

The Hybrid: Work From Home and From Work

It's Different IT: Tech Support for Remote Users -- How does the IT team's approach change when it must provide technical support for remote workforces?

Onboarding Employees in the Age of Remote -- Remote changed IT hiring fast, but onboarding employees didn't quite keep pace. Often, new employees like software engineers benefit from having someone sitting across from them. Here's insight from a company that's been there, with advice on how to get it right.

Are Return to Work Mandates Wise? -- Some businesses are mandating that some or all employees return to the office. While the motives are understandable, there's more to the story.

Negotiating Remote Work Agreements as Listings Thin -- As organizations angle to get workers back to a more regular in-office work schedule, IT professionals are still in a strong position to bargain for remote and hybrid agreements, given the robust IT jobs market.

6 Lessons Learned from the Big Return to Office Debate of 2023 -- Hint: Trust your people for hybrid work to fuel the business.

6 Challenges and Opportunities for Hybrid and Remote IT Teams -- Remote and hybrid work is here to stay. What does that mean for IT teams when employees want it, but managers may not like it?

Manage By Walking Around in the Remote World -- The concept of "manage by walking around" encourages CIOs and other execs to get away from their desks to really see how projects are progressing. Does it work in a remote workplace? Here's some advice.

AI: A New Ballgame for the Workplace

Building an Augmented-Connected Workforce -- AI and other advanced technologies are unleashing the augmented-connected workforce, enabling human-machine partnerships for new levels of business productivity.

Nvidia's Jensen Huang on Leadership, Tokenization, and GenAI Workforce Impact -- The GPU chipmaking giant's CEO says it's important for CIOs to get started with AI and called for a more positive outlook on the emerging tech's impact on the workforce.

CIOs Can Build a Resilient IT Workforce with AI and Unconventional Talent -- As the IT talent crunch continues, chief information officers can embrace new strategies to combine traditional IT staff with nontraditional workers and AI to augment the workforce.

AI: Friend or Foe? -- Adoption of AI continues, further fueled by generative AI. Like with all things tech, the hype needs to be tempered with a realistic expectation of results.

Navigating the Impact of AI on Teams -- Leaders should prioritize artificial intelligence literacy and empathy, and balance AI benefits with human insights, to navigate the transformative impact of AI on teams effectively.

How CEOs and IT Leaders Can Take the Wheel on Responsible AI Adoption -- Leaders expect AI to reshape business, but readiness varies. Here's why it's crucial for CEOs, CIOs, and CTOs to develop responsible AI safety and innovation strategies now.

What Is the Future of AI-Driven Employee Monitoring? -- Workplace monitoring isn't new, but AI is giving employers new powers to sift through employee data and analyze their work performance.

How Companies Can Retain Employee Trust During the AI Revolution -- Recent surveys indicate a trust gap among most employees, driven largely by job insecurity. Here are some ideas for enterprise leaders on how to grow trust.

Utilizing Automation to Alleviate Alert Fatigue, Workforce Shortages, and More -- This session explores strategies to address the growing volume of vulnerabilities and the associated challenges of alert fatigue and resource shortages through safe automation.

The IT Jobs AI Could Replace and the Ones It Could Create -- The transformative power of AI has the potential to eliminate and create jobs in the IT field.

Hire or Upskill? The Burning Question in an Age of Runaway AI -- Generative AI speaks like a human, but to make it work employees have to think like a machine. Where do you go to find that kind of talent?

The People (and Machines) You Work With

Quick Study: Diversity, Equity, and Inclusion -- Are we making progress in this sensitive, timely topic? Here's a snapshot of our own articles on why DEI matters, how companies are addressing it, educational initiatives, cutting through racial and gender bias, and more.

AI Robots Are Here. Are We Ready? -- Robots are getting smarter and more intuitive thanks to advances in artificial intelligence. Can people survive the competition?

Eliminating Remote Work Will Ruin Tech's Drive for Diversity -- With some tech companies saying diversity, equity, and inclusion (DEI) is unimportant, remote work should continue to be an option available at tech companies to increase DEI and help solve staffing challenges.

A New Generation and the Future of Sustainable Computing -- The Gen Z generation has grown up with both powerful technology and a keen awareness of environmental impact. How will their perspectives as the new data scientists and stakeholders shape the future of sustainable computing?

The Importance of Mentors in Tech and Finance -- Here's why mentorship is instrumental in career growth, providing guidance and support for personal and professional development.

Integration, Insight, and AI Will Define DEI's Next Era -- Diversity, equity, and inclusion initiatives are more important than ever. With data, analytics, and intelligent tools, employers can create accountability around these critical goals.

Empowering Women in a Gender-Biased Tech Industry -- Gender inequality is an existing issue exacerbated within the tech industry. Here are three areas to empower women today in the workplace.

Cobots and AI: A Natural Match? -- Collaborative robots are getting smarter and more intuitive. What does this mean for their human colleagues?

Less Talk, More Action: 3 Steps to Diversify the Cybersecurity Workforce -- Organizations have a lot to gain from team diversity, so now is the time to start employing more women, particularly in fields such as cybersecurity where there is a talent gap.

Diversity's Crucial Role in AI -- Your board and your CEO have been clamoring for artificial intelligence, and now you have AI technology. But what if what your AI is telling you is wrong?

Your Future Job

Survey: Work/Life Balance in IT Achieved Through Flexibility, PTO -- When creating policies, it's important for business leaders to know it's not just time-off and wellness programs that impact stress and work-life balance.

9 Future of Work Concepts That Need More Attention -- Future of work concepts continue to evolve with circumstances and technological innovation. Here's a look at several.

Technology Executive Arsenal: Must-Have Skills for Leaders -- Can today's technology leaders really handle the pressures of a fast-paced digital world? With these five skills, you can stay competitive and effectively tackle new challenges.

5 Traits To Look for When Hiring Business and IT Innovators -- Hiring resilient and forward-thinking employees is the cornerstone of innovation. If you're looking to hire a trailblazer, here are five traits to seek, as well as questions to ask.

CISO Role Undergoes Evolution as Jobs Grow More Complex -- The complexity and sophistication of threats mean CISOs must be more proactive in identifying and mitigating risks and making the business case for investment in security. The role isn't just about tech.

Jumping the IT Talent Gap: Cyber, Cloud, and Software Devs -- Businesses must first determine where their IT skill sets need bolstering, and then develop an upskilling strategy or focus on strategic new hires.

Dispelling the Myth of Job Displacement: AI and Transformation -- While AI can automate certain tasks, it should be seen as a tool that complements human capabilities rather than replacing them entirely.

Leading IT Teams Through Changing Priorities -- The constantly changing world of IT needs leaders who can pivot with major changes, for both business priorities and breakthrough technologies.

Resolving the Crisis of Fractured Organizations -- One of the key success factors for driving process improvement starts with cross-functional collaboration and alignment within an organization.

Critical Thinking: The Overlooked IT Management Skill -- Have you given much thought to critical thinking? It's a talent that can make you a stronger, more effective leader.

Why Your Current Job May Be Holding Back Your IT Career -- Are you completely satisfied with your job? That could be a warning sign your career is stalled.

Growing Your Own IT Talent -- The IT marketplace is highly competitive and extremely expensive. Is it time to consider growing your own talent?

Teach IT: Why Staff Learning Beats Training -- IT training provides the knowledge teams need to perform specific tasks. IT learning spurs progress and innovation. It's important to know the difference.

Talent: Find It, Train It, Keep It

An Ethical Approach to Employee Poaching -- Two-way employee poaching between IT departments and vendors has been a fact of life for years. What are the ethics and best practices you should uphold?

The AI Skills Gap and How to Address It -- Workers are struggling to integrate AI into their skill sets. Where are we falling short in helping them leverage AI to their own benefit and the benefit of their employers?

Online Learning: Training an Advanced Manufacturing Workforce -- The US's advanced manufacturing sector depends on innovative approaches to education.

How to Find a Qualified IT Intern Among Candidates -- IT organizations offering intern programs often find themselves swamped with applicants. Here's how to find the most knowledgeable and prepared candidates.

How to Build an Effective IT Mentoring Program -- In a rapidly evolving IT world, mentoring is an efficient way to help team members keep pace with the latest tech and practices. Here's how to get started.

Skills-Based Hiring in IT: How to Do It Right -- By focusing directly on skills instead of more subjective criteria, IT leaders can build highly capable teams. Here's what you need to know to get started.

Tech Pros Quitting Over Salary Stagnation, Stress -- To retain top tech talent, organizations must look beyond financial compensation to provide growth opportunities and wellbeing.

Citizen Development Turns Software Novices into Creators -- Attention, citizens! You can now become software developers. Only minimal skills are necessary, thanks to low-code, no-code technology.

Can Artificial Intelligence Ever Become an IT Team Leader? -- AI's ability to make predictions and offer recommendations is advancing rapidly. Will it eventually gain the ability to lead IT teams?

About the Author

James M. Connolly, Contributing Editor and Writer

Jim Connolly is a versatile and experienced freelance technology journalist who has reported on IT trends for more than three decades. He was previously editorial director of InformationWeek and Network Computing, where he oversaw the day-to-day planning and editing on the sites. He has written about enterprise computing, data analytics, the PC revolution, the evolution of the internet, networking, IT management, and the ongoing shift to cloud-based services and mobility. He has covered breaking industry news and has led teams focused on product reviews and technology trends. He has concentrated on serving the information needs of IT decision-makers in large organizations and has worked with those managers to help them learn from their peers and share their experiences in implementing leading-edge technologies through such publications as Computerworld. Jim also helped to launch a technology-focused startup, as one of the founding editors at TechTarget, and has served as editor of an established news organization focused on technology startups at MassHighTech.
  • WWW.INFORMATIONWEEK.COM
    Note From the Editor-in-Chief
    A change in ownership and what it means for our readers.
  • WWW.INFORMATIONWEEK.COM
    FTC to Ban Firms From Selling Sensitive Location Data
Shane Snider, Senior Writer, InformationWeek | December 3, 2024 | 3 Min Read | Andrii Yalanskyi via Alamy Stock

The Federal Trade Commission (FTC) on Tuesday announced action against Gravy Analytics and Venntel Inc. and a separate action against Mobilewalla that would ban the companies from selling sensitive location data.

The FTC's complaint against the companies alleges Virginia-based Gravy Analytics and its subsidiary Venntel violated the FTC Act by unfairly selling sensitive consumer location data, and by collecting and using consumers' location data without consent for commercial and government uses. Gravy Analytics, the complaint says, also sold data on health and medical decisions, political activities, and religious views collected from location data.

In the case of Georgia-based Mobilewalla, the FTC alleges the company collected more than 500 million unique consumer advertising identifiers paired with precise location data between January 2018 and June 2020. The company sold the raw data to third parties, including advertisers, data brokers, and analytics firms, the FTC says.

In a statement, FTC Chair Lina Khan said, "Persistent tracking by data brokers can put millions of Americans at risk, exposing the precise locations where service members are stationed or which medical treatments someone is seeking. Mobilewalla exploited vulnerabilities in digital ad markets to harvest this data at a stunning scale."

In a message to InformationWeek, Mobilewalla CEO Anindya Datta pushed back on the FTC's case, but accepted the results. "Mobilewalla respects consumer privacy and has been evolving our privacy protections throughout our history as a company," he says. "While we disagree with many of the FTC's allegations and implications that Mobilewalla tracks and targets individuals based on sensitive categories, we are satisfied that the resolution will allow us to continue providing valuable insights to businesses in a manner that respects and protects consumer privacy."

The FTC had strong words for the companies' practices.

"Surreptitious surveillance by data brokers undermines our civil liberties and puts servicemembers, union workers, religious minorities, and others at risk," Samuel Levine, director of the FTC's Bureau of Consumer Protection, said in a statement. "This is the FTC's fourth action taken this year challenging the sale of sensitive location data, and it's past time for the industry to get serious about protecting Americans' privacy."

The FTC also alleged Gravy Analytics and Venntel obtained consumer location information from other data suppliers and claimed to collect, process, and curate more than 17 billion signals from a billion mobile devices daily.

The complaint also alleges Gravy Analytics used geofencing to create a virtual geographical boundary to identify and sell lists of consumers who attended certain events related to medical conditions and places of worship. The unauthorized data brokering put consumers at risk of stigma, discrimination, violence, and other harms, according to the complaint.

"You may not know a lot about Gravy Analytics, but Gravy Analytics may know a lot about you," reads a joint statement by FTC commissioners Alvaro M. Bedoya, Rebecca Kelly Slaughter, Melissa Holyoak, and Khan.

Gravy Analytics merged with Unacast last year. The company's website says it offers "location intelligence for every business." Mobilewalla's website says its products "make your AI smarter" with high-quality, privacy-compliant consumer data and predictive features. InformationWeek has reached out to Gravy Analytics and Mobilewalla for comment and will update with any response.

About the Author

Shane Snider, Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
  • WWW.INFORMATIONWEEK.COM
    The Cost of Cloud Misconfigurations: Preventing the Silent Threat
Venkata Nedunoori, Associate Director, Dentsu International | December 2, 2024 | 4 Min Read | Aleksia via Alamy Stock

Cloud computing has revolutionized the way businesses operate, offering scalability, flexibility, and cost-efficiency. However, with this rapid adoption comes a new wave of challenges, most notably the risk posed by cloud misconfigurations. These subtle yet significant errors can open doors to costly data breaches and compliance failures, often leaving businesses blindsided. Understanding the impact of cloud misconfigurations and implementing effective prevention strategies are crucial steps for organizations aiming to secure their cloud environments.

The Growing Need for Cloud Security

The allure of cloud technology is undeniable, but its very design -- an agile and adaptable infrastructure -- can also make it susceptible to human error. As more businesses transition to cloud-based services, the attack surface expands, increasing the risk of exposure due to misconfigured resources. A simple oversight, such as improperly set permissions or public-facing resources, can make sensitive data accessible to unauthorized users.

Misconfigurations are not just minor slip-ups; they are often critical vulnerabilities that attackers seek out. According to industry reports, cloud misconfigurations account for a significant portion of data breaches. Gartner predicts that through 2025, 99% of cloud security failures will be the customer's fault, primarily due to misconfigurations.

In 2017, there was a data breach involving a large US credit reporting agency. The breach, caused by a failure to patch a known vulnerability and improper cloud security settings, led to the exposure of personal information belonging to over 145 million consumers. The fallout included fines, lawsuits, and a significant loss of consumer trust.

In June 2023, Toyota Motor Corporation disclosed that a cloud misconfiguration exposed vehicle data and customer information for over eight years, affecting approximately 260,000 customers.

Similarly, a 2023 report by the Cloud Security Alliance highlighted that misconfigurations are a leading cause of cloud security incidents, with 75% of security failures resulting from inadequate management of identities, access, and privileges.

These incidents demonstrate that cloud misconfigurations are not isolated events but a widespread issue with the potential to disrupt businesses across various industries.

Prevention Techniques: Best Practices for Secure Cloud Configurations

To mitigate the risk of cloud misconfigurations, businesses must adopt a proactive approach rooted in strong security practices. Below are key strategies to help organizations bolster their cloud security posture (a minimal example of an automated configuration check follows the list):

Adopt the principle of least privilege: One of the most fundamental security principles is limiting access to data and systems based on user roles. Implement role-based access controls (RBAC) to ensure that employees only have access to the resources they need to perform their job functions.

Continuous monitoring and auditing: The dynamic nature of cloud environments requires ongoing vigilance. Utilize monitoring tools to track changes and audit logs for unusual activity. This real-time awareness can help detect misconfigurations before they are exploited.

Automated configuration management: Manual configuration processes are prone to human error. Automation tools such as infrastructure as code (IaC) solutions, like Terraform and Ansible, can help standardize and automate cloud configurations, minimizing the likelihood of mistakes.

Security training and awareness: Equip the IT and security teams with regular training on cloud security best practices. The landscape of threats is constantly evolving, and up-to-date knowledge is essential for staying ahead of potential vulnerabilities.

Encryption and data masking: Sensitive data should be encrypted both in transit and at rest. Implement data masking techniques where possible to reduce the risk associated with data exposure due to misconfigurations.

Regular compliance checks: Ensure that the cloud environment aligns with industry standards such as CIS Benchmarks and frameworks like NIST and ISO 27001. Regular compliance checks can help identify gaps and fortify your security posture.
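To make the automated-configuration idea concrete, here is a minimal sketch in Python of a pre-deployment misconfiguration scan. The resource format and rule set are hypothetical, invented purely for illustration; production scanners such as Checkov or KICS work against real Terraform or Kubernetes definitions with far larger rule libraries.

```python
# Minimal sketch of an IaC misconfiguration scan. The resource schema and
# rules below are hypothetical; they illustrate the pattern of flagging
# risky settings before anything is deployed.

def scan_resources(resources):
    findings = []
    for res in resources:
        name = res.get("name", "<unnamed>")
        if res.get("type") == "storage_bucket":
            if res.get("public_read"):
                findings.append(f"{name}: bucket allows public read access")
            if not res.get("encrypted", False):
                findings.append(f"{name}: bucket is not encrypted at rest")
        if res.get("type") == "firewall_rule" and "0.0.0.0/0" in res.get("source_ranges", []):
            findings.append(f"{name}: ingress rule is open to the entire internet")
    return findings

if __name__ == "__main__":
    # Hypothetical resources, as if parsed from an IaC template.
    sample = [
        {"type": "storage_bucket", "name": "customer-data", "public_read": True, "encrypted": False},
        {"type": "firewall_rule", "name": "allow-ssh", "source_ranges": ["0.0.0.0/0"]},
    ]
    for finding in scan_resources(sample):
        print("FINDING:", finding)
```

Wired into a CI pipeline, a check along these lines can fail the build before a public bucket or wide-open firewall rule ever reaches production.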
Tools to Strengthen Cloud Security

Leveraging the right tools is essential for preventing cloud misconfigurations. Here are some notable options:

Cloud security posture management (CSPM) tools: CSPM solutions like Prisma Cloud and AWS Config help organizations monitor and remediate misconfigurations in real time.

Cloud workload protection platforms (CWPP): Tools such as Lacework and CrowdStrike Falcon offer comprehensive visibility into cloud workloads, allowing for better threat detection and response.

IaC scanning tools: Solutions like Checkov and KICS scan IaC templates for security issues, ensuring that vulnerabilities are caught before deployment.

Threat detection services: AWS GuardDuty and Azure Security Center provide advanced threat intelligence and automated alerts, enabling faster response to potential security incidents.

Moving Forward: A Culture of Security

Preventing cloud misconfigurations requires more than just technology; it demands a culture of security within an organization. This means fostering cross-functional collaboration between IT, security, and development teams, emphasizing the importance of secure coding practices and adherence to security protocols.

Cloud security is a shared responsibility. While cloud providers offer robust infrastructure and built-in tools to help secure data, the onus ultimately lies with businesses to configure and manage their environments properly. By implementing best practices, employing effective tools, and nurturing a security-first mindset, organizations can significantly reduce the risk of cloud misconfigurations and the costly repercussions that come with them.

The era of cloud computing is here to stay. To thrive in this new landscape, businesses must remain vigilant and committed to safeguarding their digital assets against the silent threat of misconfigurations.

About the Author

Venkata Nedunoori, Associate Director, Dentsu International

Venkata Nedunoori is a seasoned technology leader and IEEE Senior Member with experience across industries such as insurance, securities, airlines, and media. He specializes in designing and implementing advanced cloud-based solutions, focusing on scalable, secure, and cost-efficient platforms. A recognized speaker, Venkata is passionate about the intersection of cloud security and artificial intelligence, continually exploring ways to strengthen digital landscapes.
  • WWW.INFORMATIONWEEK.COM
    Clearing the Clouds Around the Shared Responsibility Model
In the early days of cloud, confusion around the shared responsibility model abounded. It was common for customers to simply assume that putting their data in the cloud meant that data was secure with no effort on their end. Today, that misconception, while not entirely erased, is much less likely to trip enterprises up.

Migration to the cloud continues, and cloud maturity varies depending on the enterprise. Misconfigurations happen, as do breaches. In fact, the majority of breaches (82%) involved data in the cloud, according to IBM's Cost of a Data Breach Report 2023.

As organizations increasingly embrace their use of multiple cloud services, threat actors will continue to target them. Understanding how cloud providers are responsible for the security of the cloud and how customers are responsible for security in the cloud can help enterprises avoid potential missteps.

Who Is Responsible for What?

The broad definition of the shared responsibility model means cloud service providers (CSPs) are in charge of securing the underlying infrastructure of the cloud. Data centers and physical networks are their responsibility. Customers are responsible for securing their environment and their data in the cloud.

While that broad definition is widely accepted, there is room for nuance among the various CSPs. "They view it the same broadly, and then, they view it differently when you get into specific services," Randy Armknecht, managing director, global cloud advisory at global consulting firm Protiviti, tells InformationWeek.

And CSPs offer a lot of different services. "We have over 200 services, so that bar of the customer side and AWS side does shift a little bit on a couple of the services," Clarke Rodgers, director of enterprise strategy at cloud computing company Amazon Web Services (AWS), says.

Enterprise leaders need to dig into the documentation for each cloud service they use to understand their organizational responsibilities and to avoid potential gaps and misunderstandings.

While there is a definite division of responsibilities, CSPs typically position themselves as partners eager to help their customers uphold their part of cloud security. "The cloud service providers are very interested and invested in their customers understanding the model," says Armknecht.

Google, for one, opts to refer to the shared responsibility model as one of shared fate. "We step over that shared responsibility boundary, partner with our customers, and provide much more prescriptive guidance and capabilities and services and teams like mine, for example, to help them with that part of that responsibility model," explains Nick Godfrey, senior director and global head, office of the CISO at Google Cloud, Google's suite of cloud computing services.

Customer success is a common mantra among cloud providers, although the exact wording may be different. "Cloud is just not a technology. It's ultimately a partnership for the enterprise with the provider," says Nataraj Nagaratnam, CTO for cloud security at technology company IBM.

When Misunderstandings Happen

Both parties, customer and provider, have their security responsibilities, but misunderstandings can still arise. In the early days of cloud, the incorrect assumption of automatic security was one of the most common misconceptions enterprise leaders had around cloud. Cloud providers secure the cloud, so any data plunked in the cloud was automatically safe, right? Wrong.

"Once that customer decides to sign up for an account, start using AWS services, start putting data in there, it is their responsibility how they choose to configure our services to meet their specific security, compliance, and privacy needs," Rodgers explains.

Cloud customers might also mistakenly make assumptions about compliance with regulations like PCI or HIPAA. "Microsoft and AWS and others have all of the configuration settings available and services available to be PCI compliant, but simply [putting] your data there does not make you compliant. You have to deliberately configure things to be compliant," says Armknecht.
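The customer side of that division can be made concrete with a small check. The sketch below is a hypothetical illustration, assuming boto3 is installed and AWS credentials are already configured: it lists S3 buckets and flags any that lack a full public access block, which is exactly the kind of configuration decision that sits on the customer's side of the line. It is not a complete compliance audit.

```python
# Minimal sketch of a customer-side configuration check: flag S3 buckets
# whose public access block is missing or incomplete. Assumes boto3 is
# installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                flagged.append(name)  # at least one public-access setting is left open
        except ClientError:
            flagged.append(name)  # no public access block configured at all
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print("Review bucket configuration:", name)
```

AWS provides the API and the control; whether it is enabled for each bucket is the customer's call, which is the shared responsibility model in miniature.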
Today, CSPs are much less likely to run into customers who make these kinds of assumptions. "Over time, that misconception has definitely [been] reduced, but unfortunately, it has not gone away," says Nagaratnam.

Even if customers fully understand their responsibilities, they may make mistakes when trying to fulfill them. Misconfigurations are a potential outcome for customers navigating cloud security. It is also possible for misconfigurations to occur on the cloud provider side.

"The CIA triad: confidentiality, integrity, and availability. Essentially a misconfiguration or a lack of configuration is going to put one of those things at risk," says Armknecht. Misconfigurations might result in issues like system outages or exploitable vulnerabilities.

Cloud providers recognize that potential risk and aim to help customers avoid that pitfall. "We look really hard at providing layers of defense and multiple controls so that there is massively reduced likelihood of one misconfiguration causing that sort of nightmare scenario," says Godfrey.

But misconfigurations do still happen. "Where we find people having that misunderstanding is when it gets to the per-service level, and I typically think it's a result of IT and development teams moving [too] fast," says Armknecht. "They didn't go validate their assumption of the shared responsibility model for each service."

Talking Shared Responsibility

How should customers talk to their CSPs about shared responsibility?

"I would absolutely look at the nature of the support and services that the CSP provides to the customer. I would ask questions around their philosophy and approach to secure [by] default and secure by design principles," says Godfrey. "I would ask about the support in terms of providing foundations and blueprints and guidance to enable the customer to not have to figure everything out themselves."

Conversations around expectations and available support can provide enterprise customers with more clarity. Once armed with that knowledge, enterprise teams -- often led by the coordinated efforts of the CIO, CTO, and CISO -- need to put in the internal work of upholding their cloud security responsibilities.

"There's often a tendency to assume that the relationship between the CISO and the CTO or the CIO is adversarial or challenged because they want different things," says Godfrey. "We actually think they probably want exactly the same things, which is a secure and resilient cloud that enables the business to do business at the speed it wants to do it with all of the agility that the cloud has the potential to offer."

Depending on the maturity of the organization, it may or may not have those roles filled or the resources to properly manage the shared responsibilities associated with the cloud.

"Not all customers are the same. They don't have the same resources. They don't have the same staffing or skill sets internally," says Rodgers.
"Customers might onboard an MSSP [managed security service provider] and use them while they're upskilling their own staff and then eventually sort of wean off the MSSP as they gain more familiarity and functionality inside of AWS."

Multi-Cloud Complexity

As enterprises increasingly leverage the benefits of the cloud, they may find it advantageous to work with different providers and adopt different services to support a variety of business functions. "The majority of the customers that I meet with are using more than one cloud, or they're using SaaS services," Rodgers shares.

Maintaining their half of the shared responsibility model can become more complicated for customers like that. Enterprise teams need to understand how their responsibilities shift, depending on the provider and the specific service. "So, the team just has more to do; it's going to take longer," says Armknecht. He also points out that teams may understand one cloud environment but struggle with another: "Maybe they misstep on which controls are needed to meet their shared responsibility."

While the complexities of multi-cloud and hybrid environments abound, there are some ways in which managing shared responsibility could become easier. Those responsibilities can be made much more addressable using technologies like AI and automation, Nagaratnam points out.

As technology and risk continue to change, what will that mean for the shared responsibility model?

"I think the definitions of where the ... delineation actually technically sits will continue to evolve as cloud products continue to evolve," says Godfrey. "But I don't think the shared responsibility model in that sort of contractual and legal delineation will go away."
  • WWW.INFORMATIONWEEK.COM
    Have We Gone Too Far With AI in Software Development?
    Has the promise of improved efficiency through AI been realized in software development? Is there still a place for citizen developers with AI in the development cycle?
  • WWW.INFORMATIONWEEK.COM
    Gelsinger Out as Intel CEO as Chip Giant Struggles to Regain Footing
    The company has had rough financial results over the last two years, even as plans to reignite domestic manufacturing move forward. Gelsinger was the catalyst for those ambitions.
  • WWW.INFORMATIONWEEK.COM
    Lessons from Banking on the Role of the Chief Risk Officer
Dan Higgins, Chief Product Officer, Quantexa | November 29, 2024 | 5 Min Read | CalypsoArt via Alamy Stock

As the most informed resource about emerging risks within any organization, chief risk officers (CROs) play a vital role in safeguarding business success and fostering a risk-aware culture that promotes resilience and adaptability. CROs are responsible for continuously monitoring and mitigating challenges associated with everything from interconnected risks, new emerging risks on the rise, regulatory compliance, and operational efficiency to risk innovation and transformation across the organization. In short, they are tasked with possessing in-depth knowledge of risks -- including emerging climate-related risks -- that can disrupt operations, cause losses, damage reputation, and decrease customer and shareholder trust.

Within the financial services sector, the environment financial institutions are operating in is now so acutely high risk that risk management has become core to daily operations, playing a critical role in the success and sustainability of banks and insurers in everything from regulatory compliance and customer trust to operational efficiency and asset management. Heightened geopolitical tensions and the supply and demand shocks of recent years have further forced enterprises to re-evaluate their operations to stay afloat and succeed in today's risk environment.

Many CROs within banking are tasked with taking steps to better address liquidity, credit, market, operational, technology, regulatory compliance, and reputational risks as they occur, and can only do this through powerful risk management strategies. These include building a trusted data foundation for a single view of risk and fueling AI with contextual insights to more accurately identify existing, emerging, and hidden risks. This holistic and interconnected view of data is critical to uncovering and responding to interconnected risk factors posed to customers, vendors, and suppliers. Many banks also use this data foundation to better equip their frontline employees with information, helping turn the cost of managing risk into new opportunities for potential revenue growth.

Building a Foundation to Provide a Single View of Risk

To gain a holistic and accurate understanding of risk at scale, CROs should work alongside the chief data officer (CDO) and chief information officer (CIO) to build this data foundation, creating a connected and contextual view of their customers and counterparties based on both proprietary sources (such as customer portfolios) and supplementary sources (such as credit data) -- even more relevant given the current re-focus on risk data aggregation principles, such as BCBS 239.

For financial institutions, credit risk insights, analytics, and decisioning become increasingly effective when combined with entity resolution (ER), knowledge graphs, and AI copilots. ER is the process by which data is cleansed and matched to create entities, ensuring that data entries referring to the same real-world entity -- whether a business name, product, or individual -- can be connected. It's a critical tool for linking records and for de-duplicating and matching data within large systems, and it plays an important role in connecting siloed, multi-source data. Further, knowledge graphs help to visualize and determine the relationships between entities, the direction of those relationships, and the strength of connections, and to understand supply chains and concentration across clients and suppliers. When this technology is paired with a copilot, it gives teams the ability to easily query the data and make informed decisions faster.
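Quantexa's production ER engine is proprietary, so the sketch below is only a generic, simplified illustration of the idea in Python: records from different systems are grouped together when their normalized company names are similar enough. The records, threshold, and single-attribute matching rule are all invented for this example; real ER matches on many attributes (addresses, identifiers, dates) with far more robust logic.

```python
# Minimal sketch of entity resolution: grouping records that likely refer to
# the same real-world company despite naming differences. Normalized-name
# similarity is a deliberately simplified stand-in for multi-attribute matching.
from difflib import SequenceMatcher

def normalize(name):
    # Strip punctuation and common legal suffixes before comparing.
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    suffixes = {"inc", "ltd", "llc", "corp", "corporation", "limited"}
    return " ".join(w for w in cleaned.split() if w not in suffixes)

def resolve_entities(records, threshold=0.85):
    entities = []  # each entity is a list of records judged to be the same party
    for record in records:
        for entity in entities:
            rep = normalize(entity[0]["name"])
            if SequenceMatcher(None, normalize(record["name"]), rep).ratio() >= threshold:
                entity.append(record)
                break
        else:
            entities.append([record])  # no match found: start a new entity
    return entities

if __name__ == "__main__":
    # Hypothetical records drawn from different internal systems.
    records = [
        {"source": "loan_book", "name": "Acme Holdings Ltd."},
        {"source": "payments", "name": "ACME HOLDINGS"},
        {"source": "credit_bureau", "name": "Acme Holding Limited"},
        {"source": "loan_book", "name": "Blue River Foods Inc."},
    ]
    for i, entity in enumerate(resolve_entities(records), 1):
        print(f"Entity {i}:", [r["name"] for r in entity])
```

Each resulting group is a candidate entity that a knowledge graph can then link to counterparties, suppliers, and transactions.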
This combination connects structured and unstructured data from multiple sources into one holistic view of entities and the relationships between them, to drive a deeper contextual understanding that is essential for improved decision-making and stronger risk management overall. By merging billions of data points from multiple sources, CROs in these financial institutions -- working in tandem with the CDOs and CIOs on their teams -- gain a greater view of a customer's financial health. This process enables business teams to better assess the overall risk of extending credit to a potential borrower, granting greater risk visibility for the CRO. Where existing credit analytics and insights may initially assess the customer as a low-risk borrower, the deployment of ER and knowledge graphs ensures a more informed and strategic decision-making process when analyzing broader datasets, such as the potential risks of the counterparties a customer interacts with.

Fueling AI for Risk Management

The deployment of both knowledge graphs and ER is critical to ensuring a trusted data foundation that CROs in other sectors can also rely on to deliver a contextual understanding of risk. This interconnected data foundation is essential to truly realize the value of AI in risk management while simultaneously revealing interconnected risk factors. For both risk management teams and frontline employees, knowledge graphs and ER help strengthen the accuracy and reliability of AI models across the organization to reduce complexity, bolster augmented decision-making, and cut the time it takes to complete tasks from days and weeks to mere minutes. Those who establish a quality data foundation and gain more nuanced and accurate insights using AI with context will have the advantage of operationalizing their data to support their organizations both defensively and offensively.

However, according to a recent global survey of risk and compliance professionals on AI in risk management and compliance, two-thirds of respondents rate their firm's data quality as low quality: inconsistent and fragmented. Further, while nearly 70% of respondents believe AI will be transformative or have a major impact within the next three years, just 9% revealed that AI is actively being utilized within their companies for compliance and risk management.

From detecting anomalies to identifying patterns and making predictions, leveraging AI-enabled tools ensures that CROs stay informed of potential risk factors and can quickly respond when issues arise. However, this will only be effective with increased access to context-based data insights and a trusted data foundation designed to fuel the insights needed for effective risk management -- and, in turn, create new opportunities for growth.

About the Author

Dan Higgins, Chief Product Officer, Quantexa

Prior to joining Quantexa, Dan Higgins spent over 20 years at EY, where he was responsible for setting global strategy for the $5.5 billion technology consulting business and helping shape the firm's platform, product, and asset strategy.
As Chief Product Officer at Quantexa, Dan is responsible for aligning product strategy and roadmaps, helping clients uncover hidden risks and identify new, unexpected opportunities using context in data and analytics across the customer and employee life cycle.
  • WWW.INFORMATIONWEEK.COM
    How to Build a Strong and Resilient IT Bench
When they refer to bench strength in sports, they're talking about the ability of a less skilled player to step in and play a big role if a main performer is unavailable. For years, IT leaders have wanted bench strength. However, those leaders found that achieving bench strength has been an elusive goal in tight job markets.

Is there a way you can develop a bench? Yes, IT can develop bench strength.

The first step is to identify the talent shortfalls in IT, where most CIOs will find the following gaps:

Talent shortages in new technologies such as artificial intelligence (AI), automation, database architecture, information management, cloud management, and edge IT

Shortages of talent in the bread-and-butter infrastructure stalwarts, such as network architecture and systems software

In the infrastructure category, one cause of declining bench strength is baby boomer retirements. Computer skillsets have systematically been abstracted from newer IT workers, who now work through point-and-click GUIs (graphical user interfaces) to provision, monitor, and manage infrastructure resources. Unfortunately, the more highly abstracted IT tools that newer workers use don't always get to the bottom of a bug in system infrastructure software. That bug could bog down a hotel reservation system, resulting in the loss of hundreds of thousands of dollars in bookings per hour. For this, you need "down to the metal" skills, which boomers have excelled at.

The net result for IT managers and CIOs is that they find themselves short in new skill areas such as AI, but also in the older IT disciplines that their shops must continue to support, and that younger ITers aren't exposed to.

Setting Your Bench Strength Targets

Since talent is likely to be short in new technology areas and in older tech areas that must still be supported, CIOs should consider a two-pronged approach that develops bench strength talent for new technologies while also ensuring that older infrastructure technologies have talent waiting in the wings.

Here are five talent development strategies that can strengthen your bench:

Partnering with schools that teach the skills you want. Companies that partner with universities and community colleges in their local areas have found a natural synergy with these institutions, which want to ensure that what they teach is relevant to the workplace.

This synergy consists of companies offering input for computer science and IT courses and also providing guest lecturers for classes. Those companies bring real-world IT problems into student labs and offer internships for course credit that enable students to work in company IT departments with an IT staff mentor.

The internships enable companies to audition student talent and to hire the best candidates. In this way, IT can sidestep a challenging job market and bring new skills in areas like AI and edge computing to the IT bench.

There are even universities that teach down-to-the-metal skills at the behest of their corporate partners. The IBM Academic Initiative, which teaches students mainframe software skills, is one example.

Using internal mentors. I once hired a gentleman who was two years away from retirement because he 1) had invaluable infrastructure skills that we needed; and 2) had expressed a desire to give back to younger IT employees he was willing to mentor. He assigned and supervised progressively more difficult real-world projects to staff. By the time he left, we had a bench of three or four persons who could step in.

Not every company is this fortunate, but most have experienced personnel who are willing to do some mentoring. This can help build a bench.

Use consultants and learn from them. At times in my CIO career, I hired consultants who possessed specialized technology skills where we lacked experience. When my staff and I evaluated consultants for these assignments, we graded them on three parameters:

1) Their depth and relevance of knowledge for the project we wanted done;

2) Their ability to document their work so that someone could take over when their work was complete; and

3) Their ability and willingness to train an IT staff member.

Getting the project done was a foremost goal, but so was gaining bench strength.

Give people meaningful project experience. It's great to send people to seminars and certification programs, but unless they immediately apply what they learned to an IT project, they'll soon forget it.

Mindful of this, we immediately placed newly trained staff on actual IT projects so they could apply what they learned. Sometimes a more experienced staff member had to mentor them, but it was worth it. Confidence and competence built quickly.

Retain the employees you develop. CIOs lament employees leaving a company after the company has invested in training them. In fact, the issue became so prominent at one company that the firm created a training vesting plan whereby the employee had to reimburse the company for a portion of training expenses if they left the company before a certain prescribed time.

A better way to retain employees is by regularly communicating with them, giving them a sense of belonging that makes them feel part of the team, assigning them to meaningful work, and rewarding them with paths to advancement and salary increases.

Summary Remarks

Companies (and employees) continuously change, and there is no guarantee that IT departments will always be able to retain their most competent performers. Consequently, it's critical to develop employees, to actively and continuously engage with them, and to foster an open and pleasant working experience.

By doing so, CIOs can improve staff skill agility in their organizations and be ready for the next tech breakthrough.