InformationWeek
News and Analysis Tech Leaders Trust
Recent Updates
  • How Operating Models Need to Evolve in 2025
    www.informationweek.com
    Lisa Morgan, Freelance Writer · January 22, 2025 · 9 Min Read · Image: Siarhei Yurchanka via Alamy Stock

    IT operating models continue to evolve as new tech and business trends emerge. It's a constant state of change that needs to be well thought out and managed, with an eye toward fueling innovation.

    "In 2025, the IT operating model will inevitably undergo a profound transformation, especially in automation, intelligence, and flexibility," says Alex Li, founder of StudyX at AI education company StudyX.AI, in an email interview. "With the continuous advancement of technology, emerging technologies such as cloud computing, AI, and machine learning will be the key driving forces for upgrading the IT operating model."

    However, the core factors driving this transformation are not just the technology itself, he says. As consumer demands for personalization and high-quality services continue to rise, the IT operating model must be adaptable and responsive.

    Trevor Fry, founder and tech consultant at TreverFry.tech, says IT leaders are facing two big trends this year.

    "First, AI is maturing. It's growing beyond the 'wild west' phase and is becoming a useful, everyday tool. That's exciting, but it means we need to be more mindful about data security and ethical usage. We are also just barely starting to discover the ecological impacts of utilizing these tools," says Fry in an email interview. "Second, there's a workplace revolution happening."

    Specifically, employees are burned out and disillusioned about the cultures and flexible schedules they were promised, only to discover that company policies have changed. "Alternatively, they may be too scared to exercise the rights their employers give them -- such as flexible work hours -- for fear of being first on the cut list," Fry says. As organizations become leaner, employees are job-hoarding, doing anything they can to keep their position while managers are stretched thin.
    Since that model doesn't scale well, it's forcing IT to rethink how operating models should work.

    Alex Li, StudyX.AI

    Efrain Ruh, field CTO for continental Europe at Digitate, a SaaS-based provider of autonomous enterprise software for IT and business operations, foresees enterprises making heavy investments to reduce the complexity of their IT operating environments. Some companies will focus on moving to SaaS and PaaS platforms, but they will need to keep certain critical workloads running on legacy systems until they figure out the best way to migrate.

    "In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as AIOps. However, the use of AI will become so common in IT operations that we won't need to call it [that] explicitly," says Ruh in an email interview. "Instead, the term AIOps will become obsolete over the next two years as enterprises move toward the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy."

    All that will lead to a ticketless IT operating enterprise known as ZeroOps, he says. It won't happen overnight, and to achieve it, applications must be resilient by design. Attempting to apply ZeroOps to a complex existing environment requires an enormous amount of effort that may not be justifiable.

    "I see similarities between the auto industry struggl[ing] to provide a fully autonomous driving experience, and IT ops trying to deploy a fully autonomous solution for operations," says Ruh.
    "It is not that the technology is not available; it has [more] to do with liability: Who do we blame when an AI agent makes a mistake with catastrophic results?"

    Kent Langley, founder at strategic technology advisory firm Factual, says organizations must embrace agility, using AI as connective tissue to enable transparency, autonomy, and alignment across teams, but decentralization without structure risks redundancy and chaos.

    "The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation," says Langley in an email interview. "These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination, and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals."

    Challenges With Transformation

    Fry says IT leaders are facing the challenge of creating the right culture, since IT operating model evolution isn't just a tech issue.

    Trevor Fry

    "IT leaders need to get better at listening to the people doing the work. They're the ones who see the cracks and inefficiencies that leadership might miss," says Fry. "But -- and this is critical -- aligning that feedback with a strategic vision is key. We can't just hand over the reins [to the business], but we also can't succeed without their insights. We are seeing many legacy companies start to cycle out of the old ways and dip their toes into modern technologies as they fall behind their competitors and can no longer work around legacy tools and processes."

    Therefore, the IT operating model of 2025 needs to prioritize adaptability and focus.

    "It's about giving teams the tools and clarity they need to do great work while protecting them from unnecessary burdens," says Fry.
    "Technology, like AI and automation, can help streamline operations, but we can't lose sight of the human element. Success will come from leaders who actively support their teams, not just direct them."

    Factual's Langley believes decentralization will be challenging. Without clear structures, organizations risk redundant work, fragmented knowledge, and reduced cohesion.

    "IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment," says Langley. "In 2025, we anticipate leveraging AI-driven tools to evolve our IT model toward more autonomy, coordination, and resilience. By emphasizing these principles today, we're preparing to thrive in an agile, AI-empowered future."

    Raviraj Hegde, SVP of growth at Donorbox, a nonprofit online fundraising platform, believes the two major challenges continue to be keeping data secure and controlling costs.

    "The biggest challenge will be a balancing act between innovation and stability. IT teams will have to adopt new tools but also make sure systems stay reliable. Collaboration with other departments will be very important in understanding what the business really needs," says Hegde in an email interview. "[T]he IT model at Donorbox will be more integrated with AI and automation to serve nonprofits even better. Most [likely], we will be working on smart usage of data and enhancement of our systems to scale efficiently."

    Dan Merzlyak, senior vice president and global head of data, analytics, and AI at EnterpriseDB, a Postgres data and AI company, says the need for faster innovation, operational efficiency, and seamless customer experiences is pushing IT to the forefront. The rapid advancements in AI, like GenAI and predictive analytics, further amplify IT's role in enabling smarter decisions and faster outcomes.
    "In 2025, IT won't just facilitate better, faster outcomes; it will shape them, serving as the backbone for competitive, technology-driven business strategies," says Merzlyak in an email interview. "Corporate, personnel, and data security will remain top challenges. As new technologies -- whether traditional, cloud-based, or AI-driven -- become easier to adopt, the risk of exposing sensitive business assets grows. IT must lead the charge in modernizing operational strategies while maintaining a relentless focus on safeguarding the company's most critical data and systems. Balancing innovation with robust security will be the key to long-term success."

    In 2025, EnterpriseDB will continue exploring ways to enhance traditional IT practices through automation and AI so IT leaders can focus on high-value, high-impact initiatives that drive exponential growth. By leveraging automation and AI, IT will be in a better position to support the company's expansion while maintaining security and operational excellence.

    In the Industrial Sector, Composability Will Be Key

    Kevin Price, global head of enterprise asset management at IFS, an enterprise cloud and industrial AI software provider, says that in the industrial sector, asset lifecycle management is problematic because it's complicated and IT structures have a lot of problems.

    "It's [convoluted] because people in those application roles tend to focus on what that function does, and they try to extend it. [Or] they think about how specific that individual function should be," says Price. "[W]hat we lose when we do that is a focus on what matters, and what matters in the industry matters in the business."

    He sees composability being a major trend in 2025: instead of running everything with general industrial applications, individual functions can be combined as necessary for the specific industry and use case.

    "[M]id-stream oil and gas [is] pretty asset intensive and risk critical. When it fails, it's a big disaster."
    "People [and] the environment get hurt. There's just loads of concerns from a technology security perspective," says Price. "I should have models or agents, so I [can] have a system that composes for that need of mid-stream oil and gas, but I should have agents that I can selectively deploy."

    That way, organizations can avoid using a system designed for a mining operation in a manufacturing or oil and gas environment, for example, and instead use components that were designed with the specific use case in mind.

    Bottom Line

    As tech and business requirements change, so must the IT operating model. Because agility and speed remain top priorities, tech choices and IT operations need to align to make that happen.

    About the Author

    Lisa Morgan, Freelance Writer

    Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • Why Every Employee Will Need to Use AI in 2025
    www.informationweek.com
    Over the past year, we've seen organizations differ in their approaches to AI. Some have taken every opportunity to embed AI in their workflows; others have been more cautious, experimenting with limited proof-of-concept projects before committing to larger investments.

    But unlike past technology breakthroughs that were only relevant for specific employees, AI is a horizontal skill. Business leaders need to embrace this fact: Every single employee needs to become an AI employee.

    In 2025 and beyond, we will start to see the difference between companies that treat AI as a feature and those that view it as a transformation. Here's how business and learning leaders should think about AI adoption throughout their organization.

    Establishing an AI-Ready Skills Vision

    For businesses to develop an AI-ready workforce, they need to establish a skills vision that sets out which employees require which level of competency. This vision shouldn't be permanent; instead, it should evolve in response to technological advances and the needs of the business.

    There are two ways of structuring an AI skills vision. The first is simple: builders and users. A small portion -- roughly 5% -- of an organization's workforce will require the expertise to build AI systems, products, evaluation tools, and language models. The remaining 95% simply need to know how to use AI to augment and accelerate their existing workflows.

    For a more detailed framework, leaders can break down their workforce into four levels:

    Center of excellence: Synonymous with AI builders. Think of data scientists, machine learning engineers, and software engineers. Their entire role is to design, build, and refine AI tools for internal or external clients.

    AI + X: These are the subject matter experts whose roles can be reimagined with the addition of AI. Employees at this level could come from a wide range of backgrounds, from mechanical engineers to finance leaders.
    AI can help these employees build something truly meaningful in their specific area of expertise.

    Fluency: At the fluency level, you don't need to know how to use AI tools or apply them to your own workflows. Instead, fluency is the level required for employees who interact with a technical counterpart. For example, a marketer selling a highly technical AI product needs a certain level of understanding to be able to accurately and effectively market that product.

    Literacy: This is the basic level of AI skills needed for front-line workers and individual contributors. AI literacy could help these employees boost productivity, depending on their role and responsibilities. But it's equally important for these employees to be part of the broader cultural change. A company is in a better position to innovate when every employee has achieved a standard level of AI literacy.

    Avoiding Dangerous Amateurs

    For an organization to make the most of AI, it needs to know the precise skill levels of its employees and where they need to grow in the future.

    For example, a company's solutions will only ever be as good as its best contributors. Organizations must do everything they can to maximize the abilities of their center of excellence employees, because they set the bar for the rest of the organization. At one software company, I saw leaders transfer an expert in clean coding to a team struggling with code quality; improvements were evident across the organization within weeks, demonstrating the contagious nature of expertise.

    But while experts should be placed at the forefront and driven to achieve more, organizations must be careful not to give the same opportunities to those who overstate their abilities. My friend and collaborator Fernando Lucini refers to these employees as "dangerous amateurs," and they can slow down an organization's progress with AI.
    As companies transition from prototyping to productizing an AI solution, they may realize that the experts they were counting on don't have the skills needed to bring the product to market. Meanwhile, competitors with an accurate measure of employee skill levels will race ahead.

    Create the Foundation for Innovation

    For companies to innovate, they need to be able to adapt quickly to changing technologies and skills demands. In 2016, one of my most important tools was TensorFlow, a commonly used machine learning framework. Less than a decade later, TensorFlow has evolved so much that I can no longer use it effectively without retraining and updating my skills. Highly technical skills perish quickly.

    Employees must establish a strong foundation in durable skills in order to master the perishable, cutting-edge technical skills. OpenAI built ChatGPT using innovative, breakthrough technologies. However, it could only create ChatGPT by drawing on foundations in durable skills like mathematics, statistics, coding, and English. AI-ready companies will need to embrace a T-shaped approach to skills development, combining a broad base of horizontal skills with a narrow set of deep, vertical skills. Innovation breaks through as a result of perishable skills but sustains as a result of durable skills.

    Every company is becoming an AI company. Every employee will need to use AI. Those who don't embrace the change will inevitably fall behind.
  • Securing a Better Salary: Tips for IT Pros
    www.informationweek.com
    Nathan Eddy, Freelance Writer · January 22, 2025 · 5 Min Read · Image: Cagkan Sayin via Alamy Stock

    Negotiating a higher salary or better benefits can be daunting, but IT professionals can strengthen their case by aligning their contributions with organizational goals and adopting strategic approaches.

    The key to securing a raise lies in preparation, communication, and demonstrating measurable value to higher-ups. Quantifiable metrics are crucial during salary discussions, as they provide clear evidence of your impact. Key performance indicators (KPIs) to highlight include revenue generation, cost savings, productivity improvements, customer satisfaction, and security or risk mitigation. Demonstrating how your contributions align with these metrics makes a compelling case for your value to the organization.

    Scott Wheeler, cloud practice lead at Asperitas, says it's important to start raise negotiation preparations by understanding the organization's strategic and tactical goals.

    "Taking on projects that are both impactful and achievable shows alignment with the company's priorities. Identify work that aligns with those goals and has reasonable delivery timelines, preferably under a year," Wheeler says.

    He adds that building a productive rapport with managers is another cornerstone of effective salary negotiations. "Understand what your manager values and what they will be evaluated on," Wheeler says. "Align your work with their goals and share progress on your projects regularly."

    He says establishing a personal connection with higher-ups can also help.
    "Knowing what your manager values, both in and outside of work, creates a better partnership and makes communication easier," Wheeler explains.

    Megan Smith, head of HR at SAP North America, agrees: the more employees can master the art of communicating proactively with their manager, the greater the trust they can build.

    "This includes things like sharing the right level of information at the right time," she explains via email. "For example, providing a heads-up around possible risks in a project, and regularly sharing summary updates of what is being accomplished, helps the manager trust they have the right degree of visibility necessary for the overall success of the team."

    Salary as a Reflection of Performance

    Smith says having a conversation with your manager about your salary is really a conversation about how you are achieving your goals, because a salary increase reflects your performance.

    "Discuss your performance with your manager early and often, so that when you want to connect it to salary -- which can be done at any time, but ideally at least a couple of months prior to your company's salary review timeline -- this is a natural connection," she says.

    She recommends approaching salary conversations with curiosity, for example by asking your manager how they perceive your salary aligning with your contributions and impact.

    "Get educated on your own point of view," she adds.
    "Do you have any data from internal salary ranges to suggest if you are positioned low?"

    Smith says it's important not to make the conversation about asking for a raise; rather, make it an informed discussion about how your salary reflects your contributions, and whether that presents an opportunity for an increase in the next salary review cycle.

    IT as a Leadership Profession

    From the perspective of Mark Ralls, president at Auvik, the nature of IT work provides ample opportunities for IT pros to show leadership even if they are not in a formal managerial role.

    "Cross-functional or team-based project work allows IT pros to demonstrate the ability to manage through influence, where they help coordinate the efforts of others through relationship building and persuasion rather than formal authority," he says.

    Wheeler also emphasizes the importance of teamwork and collaboration in achieving goals.

    "Form partnerships, either internally or externally, that can help you deliver results," Wheeler says.
    "Most work requires a team effort, and sometimes moving to a different internal team may be necessary to produce the desired outcome."

    Documenting and showcasing these successes is critical to building a strong case during salary discussions. Success in salary negotiations also depends on effective communication and the ability to understand and address the motivations of various stakeholders to align everyone with a common objective.

    "Gaining buy-in and achieving desired outcomes by establishing credibility and trust is a key indicator that someone is ready for that next step to management, earning a raise and potentially a promotion in the process," Ralls says.

    A recent engineering career mobility report by SignalFire indicates specialization is a key way to turbocharge upward mobility -- and with it, salary bumps.

    Jarod Reyes, head of developer community at SignalFire, says that instead of focusing on a general KPI around developer productivity, he would focus on finding a project, or a place in the engineering organization, where one can become the specialist.

    "We can see in the data that specialization is the key to rapid upward mobility for engineers happy in their current role," he says.
    "We could see engineers who wanted to move into management roles would take paths that developed broader skill sets, expanding their surface area and sphere of influence."

    This includes finding ways to lead a project and looking for opportunities to improve the business or reduce costs -- what Reyes calls "sure-fire bets."

    He notes that engineers who wanted to move up a non-management path (a specialist path, such as principal or staff engineer) focused on narrowing their skill sets, taking roles where they were expected to be the directly responsible individual, such as a site reliability engineer or data architect.

    Drawing on his personal experience managing and building engineering teams over the last 13 years, Reyes says communicating often with the team about the values that are rewarded is very important.

    "Having direct conversations not just annually, but monthly, with your engineers is an important way of building trust and earning loyalty," he says. "I think, more important than upward mobility, I have found that engineers really enjoy working on a team that is crucial, efficient, and impact oriented."

    About the Author

    Nathan Eddy, Freelance Writer

    Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
  • Untangling Enterprise Reliance on Legacy Systems
    www.informationweek.com
    While the push for digital transformation has been underway for years, many enterprises still have legacy technology deeply ingrained in their tech stacks. In many cases, these systems are years or even decades old but remain integral to keeping a business operational. Simply ripping them out and replacing them is often not a plausible quick fix.

    "It's actually quite hard to fully demise previous versions of technology as we adopt new versions, and so you end up with the sort of layering of various ages of all the technologies," says Nick Godfrey, senior director and global head, office of the CISO at Google Cloud.

    Given that continued use of legacy systems comes with risk, why are legacy systems still so common today? How can enterprise leaders manage that risk and move forward?

    A Universal Challenge

    In 2019, the Government Accountability Office (GAO) identified 10 critical federal IT legacy systems. These systems were 8 to 51 years old and cost roughly $337 million to operate and maintain each year.

    Government is hardly the only sector that relies on outdated systems. The banking sector relies heavily on COBOL, a decades-old programming language. The health care industry is rife with examples of outdated electronic health record (EHR) systems and legacy hardware. One survey found that 74% of manufacturing and engineering companies use legacy systems and spreadsheets to operate.

    "If we talk about banking, manufacturing, and health care, you would find a big chunk of legacy systems are actually elements of the operational technology that it takes to operate that business," says Joel Burleson-Davis, senior vice president of worldwide engineering, cyber, at Imprivata, a digital identity security company.

    The cost of replacing these systems isn't simply the price tag that comes with the new technology.
    It's also the downtime that comes with making the change.

    "The hardest way to drive the car is when you're trying to change the tire at the same time," says Austin Allen, director of solutions architecture at Airlock Digital, an application control company. "You think about one hour of downtime -- you can be talking about millions of dollars depending on the company."

    A survey conducted by commercial software company SnapLogic found that organizations spent an average of $2.7 million to overhaul legacy tech in 2023. As expensive as it is to replace legacy technology, keeping it in place could prove to be more costly. Legacy systems are vulnerable to cyberattacks and data breaches. In 2024, the average cost of a data breach was $4.88 million, according to IBM's Cost of a Data Breach Report 2024.

    Evaluating the Tech Stack

    The first step to assessing the risk that legacy systems pose to an enterprise is understanding how they are being used. It sounds simple enough on the surface, but enterprise infrastructure is incredibly complicated.

    "Everybody wishes that they had all of their processes and all of their systems integrations documented, but they don't," says Jen Curry Hendrickson, senior vice president of managed services at DataBank, a data center solutions company.

    Once security and technology leaders conduct a thorough inventory of systems and understand how enterprise data is moving through those systems, they can assess the risks.

    "This technology was designed and installed many, many years ago when the threat profile was significantly different," says Godfrey. "It is creating an ever more complex surface area."

    What systems can be updated or patched? What systems are no longer supported by vendors? How could threat actors leverage access to a legacy system for lateral movement?

    Managing Legacy System Risk

    Once enterprise leaders have a clear picture of their organization's legacy systems and the risk they pose, they have a choice to make.
    Do they replace those systems, or do they keep them in place and manage those risks?

    "Businesses are fully entitled -- maybe they shouldn't [be] -- but they're fully entitled to say, 'No, I understand the risk, and that's not something we're going to address right now,'" says Burleson-Davis. "Industries that tend to have lower margins and be a little more resource-strapped are the likeliest to make some of those tradeoffs."

    If an enterprise cannot replace a legacy system, its security and technology leaders can still take steps to reduce the risk of it becoming a doorway for threat actors. Security teams can implement compensating controls to look for signs of compromise. They can implement zero-trust access and isolate legacy systems from the rest of the enterprise's network as much as possible.

    "Legacy systems really should be hardened from the operating system side. You should be turning off operating system features that do not have any business purpose in your environment by default," Allen emphasizes.

    Security leaders may even find relatively simple ways to reduce risk exposure related to legacy systems.

    "People will often find, 'Oh, I'm running 18 different versions of the same virtualization package. Why don't I go to one?'" Burleson-Davis shares. "We find people running into scenarios like that, where after doing a proper inventory [they] find that there was some low-hanging fruit that really solved some of that risk."

    Transitioning Away from Legacy Systems

    Enterprise leaders have to clear a number of hurdles in order to replace legacy systems successfully. The cost and the time are obvious challenges. Given the age of these systems, talent constraints come to the fore.
    Does the enterprise have people who understand how the legacy system works and how it can be replaced?

    "You end up with a very complex skills requirement inside of your organization to be able to manage very old types of technologies through to cutting-edge technologies," Godfrey points out.

    A change advisory board (CAB) can lead the charge on strategic planning. That group of people can help answer vital questions about the timeline for the transition, the potential downtime, and the people necessary to execute the change.

    "How does that affect anything downstream or upstream? Where is my data flowing? How are these systems connected? How do I keep them connected? What am I going to break?" asks Curry Hendrickson.

    Allen stresses the importance of planning for a way to roll back the implementation of new technology. "What's the strategy for rolling back if it goes wrong? Because that's arguably the most important piece of this, and many times it will go wrong," he says.

    To reduce the chance of the implementation failing, the transition team needs to consider how the new technology will interact within the IT or OT environments. How is that different compared to the legacy system?

    "[Understand] what it is that new system needs, [and put] some of those changes in place before you implement the new system. That way the new system has every opportunity to be successful," says Allen.

    After pouring resources into modernizing technology, some enterprises make a fundamental mistake by forgetting to include the end users in the process. If end users aren't prepared or willing to adopt new technology, that initiative's chances of success drop.

    "One good example [is] introducing almost anything into a clinical setting and not including doctors and nurses. It is the guaranteed, number one way to fail," says Burleson-Davis.

    Curry Hendrickson also warns of the potential for vendor lock-in as enterprises examine ways to adopt new technology.
    "You could get yourself into a scenario where you're so excited and you have this great environment, it is so flexible, and then all of a sudden you're using way too many of this vendor's tools, and now it's going to be a real problem to move out," she explains.

    This kind of technological transformation is often a multi-year project that requires the board, CISO, CIO, CTO, and other business leaders to agree on a strategy and consistently work toward it.

    "There are inevitably going to be short-term trade-offs that have to be made during that transformation, during the journey to that north star," says Godfrey. "The key to enabling that or unlocking the opportunity is thinking about it as a kind of organizational transformation as well as a technological transformation."
  • How to Persuade an AI-Reluctant Board to Embrace Critical Change
    www.informationweek.com
    Kip Havel, Chief Marketing Officer, Dexian · January 21, 2025 · 4 Min Read · Image: Rawpixel Ltd via Alamy Stock

    As an IT leader, you're no stranger to helping executives decipher and understand groundbreaking technology. The process usually takes persistence, careful abstraction, and a stockpile of success stories to make a persuasive business case. With luck, you eventually persuade the board of the value of your next significant IT initiative. But selling the board on AI implementation is another challenge altogether.

    It's not surprising that many boards are undecided about AI. A recent Deloitte study on AI governance found that board members rarely get involved with AI:

    • 14% discuss AI at every meeting
    • 25% discuss AI twice a year
    • 16% discuss AI once a year
    • 45% never discuss AI at all

    Only 2% of respondents considered board members highly knowledgeable or experienced in AI. These circumstances present a serious hurdle as IT teams not only try to implement AI solutions but also strive to build the appropriate guardrails into the AI strategy.

    Helping the board understand the power of black sky thinking can counteract some of their reservations about pursuing AI. Here's what you need to know.

    Black Sky Thinking Offers a New Approach to Innovation

    Artificial intelligence is taking enterprises to a place where no man has gone before. Even though the market is starting to define AI norms, establish regulations, determine the technology's shortcomings, and pinpoint when we need a human in the loop, we're collectively flying through unfamiliar skies. As a result, IT leaders need to persuade the board of directors to embrace a more transformative way of solving problems.
Enter black sky thinking.

The black sky thinking concept emerged during the 1960s space race and was popularized by author and futurist Rachel Armstrong at FutureFest in London in 2014, as she described the mentality necessary for humans to thrive on the cusp of unparalleled disruption. In a follow-up essay, she explains the difference between blue sky thinking (where we're at now) and black sky thinking this way:

Blue sky thinking is a way of innovating by pushing at the limits of possibility in existing practices.

Black sky thinking is more aspirational, producing new kinds of future that enable us to move into uncharted realms with creative confidence.

Rather than being constrained by current paradigms, organizations' boards and leaders need to envision the future they want and reverse engineer the steps necessary to reach the desired destination. It's like planning for oceanic voyages or trips to the moon, but at a societal level.

You might be saying, "That's great, but how does it apply to convincing the board to embrace AI use cases?" Before you can unlock the power of AI, you need board members to shift from blue sky to black sky thinking and embrace aspirational, limitless potential.

Leadership Is on Board with Black Sky Thinking: Now What?

Even when they're on board with black sky thinking, most board members are going to focus on mitigating risk and maximizing profits for shareholders and the corporation. That's a fine strategy if you're trying to maintain stasis, but not if you're attempting to break barriers and drive innovation. Your next goal is to convince the board that AI is an acceptable investment if they're going to achieve their black sky-driven goals.

Fortunately, you can increase the success of your petition by getting two key board members on your side: the CEO and general counsel. The CEO is often an easier sell. KPMG surveys indicate 64% of CEOs treat AI as a top investment priority.
Since your goals align, the CEO can be a co-champion, providing profiles on each board member and answering these key questions:

Which specific industry AI use cases will be the most persuasive?
Will AI examples from Fortune 500s carry the most weight?
Which biases will you need to combat in your argument?

When it comes to in-house counsel, you need to demonstrate a strong command of the legal and ethical implications of what you're proposing. General counsel and CFOs, being naturally risk-averse, require you to come prepared with your:

Recognition of potential risks
Awareness of pending legal cases
Commitment to ethical implementation

With your CEO and general counsel as AI champions, your next step is to demonstrate the ROI the board will need to see before approving investment in AI. Showcasing results from programs that have already yielded measurable success can reduce barriers to an AI-forward mentality. For example, in healthcare, Kaiser Permanente has demonstrated how AI can save clinicians an hour of documentation daily -- a powerful use case to highlight.

Ultimately, you'll need to show them that the risk of doing nothing at all can be just as catastrophic as taking a big gamble on emerging technology. Tailored pitches to board members, both individually and collectively, can embolden them to step out of their comfort zones. This approach encourages the embrace of unconventional -- or even unknown -- solutions to complex challenges. When everyone embraces black sky thinking, no horizon is completely out of reach.

About the Author

Kip Havel, Chief Marketing Officer, Dexian

Kip Havel is the chief marketing officer of Dexian, forging strategies that bridge the gap between the brand and its diverse audiences. Passionate about collaboration and black sky thinking, his vision and execution have strengthened company partnerships and grown Dexian's footprint in the market.
He led the creation of the Dexian brand and has earned honors such as the American Marketing Association's 4 Under 40 and PR Week's Rising Star. A University of Miami alumnus, Kip has held senior marketing roles at Aflac, Randstad US, Cross Country Healthcare, and SFN Group.
  • Mobile App Integration's Day Has Come
    www.informationweek.com
The mobile application market is projected to grow at a compound annual growth rate (CAGR) of 14.3% between now and 2030, and businesses are capitalizing by developing mobile applications for customers, business partners, and internal use.

In large part, the mobile app market is being driven by the explosive growth of mobile devices, which over 60% of the world's population use. Not all of this use is confined to social media, emails, phone calls, and texts. Accordingly, businesses have become involved with launching retail websites for mobile devices, as well as transactional engines for mobile payment processing, e-commerce, banking, and booking systems for use in a variety of smart mobile devices.

In the process, the key for IT has been the integration of these new mobile applications with enterprise systems. How do you ensure that a mobile app is tightly integrated into your existing business processes and your IT base, and how do you ensure that it will perform consistently well every time it is used? Is your security policy across mobile devices as robust as it is across other enterprise assets, such as mainframes, networks, and servers? Does the user interface across all mobile devices navigate equally well and with a certain degree of consistency, no matter which device is used?

In most cases, IT departments (and users and customers) will say that total mobile device integration is still a work in progress.

The Role of Mobile App Integration

In the past, the integration of mobile applications with other IT infrastructure was more or less confined to the IT assets that the mobile app minimally needed to perform its functions. If the app was there for placing an online order, access to the enterprise order entry, inventory, and fulfillment systems was needed, but maybe nothing else for the first installation.
If the app was designed for a warehouse worker to operate a series of robots to pick and place items in a warehouse, it was specifically developed just for that, and on first installment it might not have been integrated into inventory and warehouse management systems. However, now that tech companies are placing their R&D emphasis on smartphones and devices, IT needs to formulate a more inclusive integration strategy for mobile applications, so that these apps are more complete when they launch.

The Elements of Mobile App Integration

To achieve total integration with the rest of the enterprise IT portfolio, and possibly with third-party services, a mobile app must do the following:

Attain seamless data exchange across all systems, along with having the ability to invoke and use system-level infrastructure components such as storage or system-level routines to do its work.
Use application programming interfaces (APIs) so it can access other IT and/or vendor systems.
Conform to the same security and governance standards that other IT assets are subject to.
Provide users and customers with a simple and (as much as possible) uniform graphical user interface (GUI).
Be right-fitted into existing business and system workflows.

This isn't just good IT. It also makes major contributions to user productivity and customer satisfaction.

Workflow Integration

In late 2024, a health insurance company unveiled an automated online process for new customer registration. Unfortunately, the new app didn't include all the data elements needed for registration, and it actually froze in process. Users ended up calling the company and enduring long wait times until they could complete their registrations with a human agent. This was a case of workflow integration failure, because critical ingredients required for registration had been left out of the online mobile app.
How did this happen? The project might have been rushed through to meet a deadline, or signed off as a first (albeit incomplete) version of an app that would later be enhanced. Or, possibly, QA might have been skipped. But to an experienced IT eye, the app was clearly missing data, which suggested that integration with other enterprise systems, or data transfers via API with supporting vendor systems, had been missed.

The app's process flow was also a miss, because if the project team had tested the mobile app's process flow against the business workflow, they would have seen (as customers did) that key data elements were missing and that the workflow didn't work. The project team should also have verified that security and governance standards had been met, and that the mobile app user experience was consistent, whether the customer was using an iPhone or an Android.

Summary

Statista says that the mobile application market will reach $756 billion by 2027. In the US, 47% of mobile apps are being used for retail transactions, and another 19% are serving as portals, whether for customers, business partners, or employees. There is virtually no business that isn't developing mobile apps today for its customers, business partners, and/or employees, but what has lagged is the same level of discipline over mobile app development that IT expects for traditional enterprise app development. Central to this is mobile application integration.

It's no longer acceptable to let an app fly with just the basics, with many functions and data elements still missing. It's time for top-to-bottom mobile app integration, whether that integration requires complete data, a uniform user experience across all devices, or something else.
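The registration failure described above comes down to a missing-data check that is easy to automate before launch. Here is a minimal, hypothetical sketch of such a pre-launch validation; the field names and the `REQUIRED_REGISTRATION_FIELDS` set are illustrative assumptions, not drawn from any real insurer's system:

```python
# Hypothetical pre-launch check: verify that a mobile app's registration
# payload carries every data element the enterprise workflow requires.
# Field names here are invented for illustration.

REQUIRED_REGISTRATION_FIELDS = {
    "member_id", "full_name", "date_of_birth", "plan_code", "effective_date",
}

def missing_fields(payload: dict) -> set:
    """Return the required fields absent from an app-submitted payload."""
    return REQUIRED_REGISTRATION_FIELDS - payload.keys()

def validate_registration(payload: dict) -> None:
    """Reject the submission before it reaches downstream systems."""
    gaps = missing_fields(payload)
    if gaps:
        raise ValueError(f"Registration blocked; missing fields: {sorted(gaps)}")

# An incomplete submission is caught in QA instead of freezing in production.
incomplete = {"member_id": "A123", "full_name": "Jane Doe"}
print(sorted(missing_fields(incomplete)))
```

Running this same check against the real business workflow's required elements, as part of integration testing, is the kind of discipline the article argues has lagged in mobile app development.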
  • AI Risk Management: Is There an Easy Way?
    www.informationweek.com
When ChatGPT commercially launched in 2022, governments, industry sectors, regulators, and consumer advocacy groups began to discuss the need to regulate AI as well as to use it, and it is likely that new regulatory requirements will emerge for AI in the coming months.

The quandary for CIOs is that no one really knows what these new requirements will be. However, two things are clear: It makes sense to do some of your own thinking about what your company's internal guardrails should be for AI, and there is too much at stake for organizations to ignore thinking about AI risk. The annals of AI deployments are rife with examples of AI gone wrong, resulting in damage to corporate images and revenues. No CIO wants to be on the receiving end of such a gaffe.

That's why PwC says, "Businesses should also ask specific questions about what data will be used to design a particular piece of technology, what data the tech will consume, how it will be maintained and what impact this technology will have on others ... It is important to consider not just the users, but also anyone else who could potentially be impacted by the technology. Can we determine how individuals, communities and environments might be negatively affected? What metrics can be tracked?"

Identify a Short List of AI Risks

As AI grows and individuals and organizations of all stripes begin using it, new risks will develop, but these are the current AI risks that companies should consider as they embark on AI development and deployment:

Un-vetted data. Companies aren't likely to obtain all of the data for their AI projects from internal sources. They will need to source data from third parties. A molecular design research team in Europe used AI to scan and digest all of the worldwide information available from sources such as research papers, articles, and experiments on that molecule.
A healthcare institution wanted to use an AI system for cancer diagnosis, so it went out to procure data on a wide range of patients from many different countries. In both cases, the data needed to be vetted.

In the first case, the research team narrowed the lens of the data it was choosing to admit into its molecular data repository, opting to use only information that directly referred to the molecule it was studying. In the second case, the healthcare institution made sure that any data it procured from third parties was properly anonymized so that the privacy of individual patients was protected. By properly vetting the internal and external data that AI would be using, both organizations significantly reduced the risk of admitting bad data into their AI data repositories.

Imperfect algorithms. Humans are imperfect, and so are the products they produce. The faulty Amazon recruitment tool, powered by AI and outputting results that favored males over females in recruitment efforts, is an oft-cited example -- but it's not the only one. Imperfect algorithms pose risks because they tend to produce imperfect results that can lead businesses down the wrong strategic paths. That's why it's imperative to have a diverse AI team working on algorithm and query development. This staff diversity should be defined by a diverse set of business areas (along with IT and data scientists) working on the algorithmic premises that will drive the data. An equal amount of diversity should apply to the demographics of age, gender, and ethnic background. To the degree that a full range of diverse perspectives is incorporated into algorithmic development and data collection, organizations lower their risk, because fewer stones are left unturned.

Poor user and business process training. AI system users, as well as AI data and algorithms, should be vetted during AI development and deployment.
For example, a radiologist or a cancer specialist might have the chops to use an AI system designed specifically for cancer diagnosis, but a podiatrist might not. Equally important is ensuring that users of a new AI system understand where and how the system is to be used in their daily business processes. For instance, a loan underwriter in a bank might take a loan application, interview the applicant, and make an initial determination as to the kind of loan the applicant could qualify for, but the next step might be to run the application through an AI-powered loan decisioning system to see if the system agrees. If there is disagreement, the next step might be to take the application to the lending manager for review. The keys here, from both the AI development and deployment perspectives, are that the AI system must be easy to use, and that the users know how and when to use it.

Accuracy over time. AI systems are initially developed and tested until they acquire a degree of accuracy that meets or exceeds the accuracy of subject matter experts (SMEs). The gold standard for AI system accuracy is that the system is 95% accurate when compared against the conclusions of SMEs. However, over time, business conditions can change, or the machine learning that the system does on its own might begin to produce results that yield reduced levels of accuracy when compared to what is transpiring in the real world. Inaccuracy creates risk. The solution is to establish a metric for accuracy (e.g., 95%) and to measure this metric on a regular basis. As soon as AI results begin losing accuracy, data and algorithms should be reviewed, tuned, and tested until accuracy is restored.

Intellectual property risk. Earlier, we discussed how AI users should be vetted for their skill levels and job needs before using an AI system.
An additional level of vetting should be applied to those individuals who use the company's AI to develop proprietary intellectual property for the company. If you are an aerospace company, you don't want your chief engineer walking out the door with the AI-driven research for a new jet propulsion system. Intellectual property risks like this are usually handled by the legal staff and HR, with non-compete and non-disclosure agreements agreed to as a prerequisite of employment. However, if an AI system is being deployed for intellectual property purposes, it should be a bulleted checkpoint on the project list that everyone authorized to use the new system has the necessary clearance.
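The accuracy-over-time safeguard discussed above amounts to a periodic comparison of AI outputs against SME conclusions, with a trigger for review when the metric slips. A minimal sketch of that check (the 95% threshold comes from the article; the function names and sample labels are invented for illustration):

```python
ACCURACY_THRESHOLD = 0.95  # the article's example gold standard vs. SMEs

def accuracy(predictions, sme_conclusions):
    """Fraction of AI outputs that agree with subject matter experts."""
    matches = sum(p == s for p, s in zip(predictions, sme_conclusions))
    return matches / len(sme_conclusions)

def needs_review(predictions, sme_conclusions, threshold=ACCURACY_THRESHOLD):
    """True when accuracy has drifted below target and the data and
    algorithms should be reviewed, tuned, and retested."""
    return accuracy(predictions, sme_conclusions) < threshold

# Periodic check against a fresh SME-labeled sample (labels illustrative).
preds = ["benign", "malignant", "benign", "benign"]
sme = ["benign", "malignant", "malignant", "benign"]
print(needs_review(preds, sme))  # 0.75 accuracy -> True: trigger a review
```

In practice this check would run on a schedule against fresh, SME-labeled samples, so drift is caught as soon as it appears rather than after it has compounded.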
  • What Happens if AI No Longer Has Access to Good Data to Train On?
    www.informationweek.com
    As new policies on privacy take hold, they might change the availability of the data AI can train on.
  • A New Reality for High Tech Companies: The As-a-Service Advantage
    www.informationweek.com
Global IT spending continues to rise, and enterprises are increasingly moving budgets to services and software and away from hardware investments. This shift in spending directly influences the strategic, operational, and investment decisions of high-tech providers. To stay competitive, they must prioritize customer-centric strategies and align business goals with operations. To facilitate this, embracing as-a-service (AaS) models is vital to meet current demands and drive future growth. Yet most providers are not equipped to adequately address the demands associated with such an enterprise change.

The AaS Opportunity

Integration of AaS offerings will be crucial for companies' reinvention strategies, and a well-executed AaS strategy benefits both tech providers and their customers. Recent Accenture research found that executives recognize the flexibility, stability, and potential growth opportunities that come along with this. We found there is a shared optimism, with measurable confidence in generative AI's (GenAI) applications to support business transformation. In fact, 97% of executives believe that GenAI can help their companies accelerate the shift toward models that focus on annual recurring revenue (ARR) and AaS offerings, and 85% think that AaS offerings will add to their revenue stream, but at the expense of their current products or services.

Worryingly, 75% agree that legacy technology hardware companies will no longer exist unless they begin acting more like software companies. That underpins the urgency for high-tech companies to reinvent themselves immediately, not plan for it somewhere down the line. The benefits are twofold: for the customer, this shift provides continued and superior value year over year.
Additionally, providers have registered a positive impact on long-term revenue, customer retention, and overall customer lifetime value.

Addressing the Roadblocks to AaS Adoption

Despite the benefits of shifting to new models, which can bridge the gap between high-tech players and their customers, our findings point to a significant confidence split among respondents. Only 50% of executives believe they can meet their publicly stated ARR goals. Although high-tech companies have the ideal products and services that could benefit from a cloud-hosted, subscription-based model via AaS to generate recurring revenue, many face internal challenges, like grappling with legacy systems and tech debt.

While there's positivity around the opportunity that AaS can bring, there's also hesitation in the industry to adopt it, because many executives believe AaS models might cannibalize their existing offerings. They also believe that the success of implementing these models is heavily dependent on their sales force's readiness to adopt new ways of selling. This outlook calls into question the preparedness of high-tech companies to adapt to such a transformation.

However, to maintain a competitive advantage, high-tech companies need to implement a customer-centric strategy. This is especially critical given that enterprise customers are increasingly redirecting their IT budgets to prioritize services and software, with a notable focus on software as a service (SaaS).

Embracing AaS to Navigate Customer Demand and Retention

The primary benefits of shifting to an AaS model arm high-tech providers with the ability to address modern customer expectations and overcome the limitations of traditional product lifecycles, building lasting, value-driven relationships.
Here are the key customer-centric strategies that executives need to focus on to establish themselves as leaders in the AaS era:

Pivoting from transactional to relational customer engagement: With 98% of executives acknowledging that a company's products and services define its customer relationship, products need to serve more than just one transaction in their lifecycle and should be part of an ongoing relationship with the customer base. Therefore, companies should move from a product-focused to a subscription-based organization to create long-term revenue growth and higher customer retention.

Replacing legacy systems with modern IT: Modernizing IT infrastructure is centered on creating a strong digital core, which consists of a cloud infrastructure, data, and AI. This will help companies stay ahead of competitors, expedite growth, and guarantee operational security.

Shifting focus from product features to customer outcomes: Customer needs have evolved, and creating a dedicated customer success function will become a critical need for high-tech companies to enable AaS adoption. GenAI is an essential technology that can provide more detailed customer behavior analysis and will help identify new customer needs.

Recalibrating the sales force: Although most executives are confident in their sales force's ability to shift from transaction-based to outcome-based compensation, training talent to accelerate adoption and preparing them to sell under the new model is critical to enabling AaS across the organization.

A rapidly changing digital landscape and evolving market dynamics require high-tech companies to assume more agility. To that end, meeting their ARR goals will also require adopting an AaS model that prioritizes customer-centricity. By leveraging these strategies, which rely on GenAI integration and Total Enterprise Reinvention, providers can make a decided effort to future-proof their companies and ensure sustainable growth.
  • Demand and Supply Issues May Impact AI in 2025
    www.informationweek.com
Lisa Morgan, Freelance Writer | January 17, 2025 | 5 Min Read | Andriy Popov via Alamy Stock

This may well be a sobering year when it comes to AI adoption, use, and scaling. On the demand side, organizations will be pulling investments back prematurely because they're not seeing the value they expected. On the supply side, supply shortages, unmet expectations, and investor pressure have caused one big tech company to reduce AI infrastructure investments, and others will follow, according to Forrester.

To date, organizations have been investing heavily in AI and GenAI, not necessarily with a view toward ROI, though ROI can be difficult to quantify from a hard-dollar perspective, which senior executives and boards now want. The anticipated shortage of infrastructure will also likely have an impact.

What's Happening on the Demand Side

Organizations will not continue to increase investments in AI if they're not seeing the value they expect.

"[C]ompanies are scaling back on their AI investments or are too impatient in terms of ROI. They will [likely] scale back on their AI investment prematurely, which is not a good strategy," says Jayesh Chaurasia, analyst at Forrester. "The other factor that might be fueling this is the current economic climate. In the last three months, almost everyone is trying to cut back on any type of investment that is not generating a clear ROI, and not only the AI-related stuff."

Executives are asking for ROI numbers on analytics, data governance, and data quality programs, and they are demanding dollar values as opposed to improving customer experience or increasing operational efficiency. "In 2023 and this year too, we are seeing more focus on ROI related to generative AI," says Chaurasia.
"Almost every executive was talking about how generative AI is going to just change the world, but it's not as easy as just deploying a model or a generative AI function and then saying your job is done, because there is a foundational data analytics requirement that will eventually enable it, which means you need to have proper privacy and security protocols, [such as] access management and data governance. You also must supply better data quality [because] these models are trained on the entire data set from the internet."

The fact that people know the models are trained on internet data has inspired internet postings that are intentionally inaccurate or misleading, so the models won't work right.

"The better answer is, of course, to use your own industry enterprise data, which gives the AI model more information about your company," says Chaurasia. "You can very easily set up a connection with your data warehouse and get all the data into the model, but it's not that easy because privacy, security, and governance are not in place. So, you're not 100% sure whether you're sharing your data with the model or the entire world."

Organizations have expected quick returns but not realized them because the initial expectations were unrealistic. Later comes the realization that the proper foundation has not been put in place.

"Folks are saying they expect ROI in at least three years, and more than 30% or so are saying that it would take three to five years, when we've got two years of generative AI. [H]ow can you expect it to perform so quickly when you think it will take at least three years to realize the ROI? Some companies, some leadership, might be freaking out at this moment," says Chaurasia. "I think the majority of them have spent half a million on generative AI in the last two years and haven't gotten anything in return."
That's where the panic is setting in.

Explaining ROI in terms of dollars is difficult, because it's not as easy as multiplying time savings by individual salaries. Some companies are working to develop frameworks, however.

"Some managers are reaching out to every business unit to ask about the benefits that they have received, with a proper understanding of ownership, where the data exists, [and] the lineage of a particular data set. They are using custom surveys to reach out to all the employees in the organization for their suggestions as well as their metrics," says Chaurasia. "Unfortunately, there is no single framework that I would suggest works for every company."

Chaurasia is working on KPIs for the various domains -- quality, governance, MDM, data management, data storage, and everything else companies can track over time to see improvement -- but they're not connected to dollar value.

"What I'm recommending is find, at the tactical, managerial, and executive levels, what matters to them, [and have] KPIs for each of those different levels to maintain and calculate that ROI regularly, so that they can use those KPIs and metrics to show whether they have improved over time or not."

View From the Supply Side

If enterprises are reducing AI investments because the anticipated benefits aren't being realized, vendors will pull back.
Meanwhile, China has banned the export of critical materials required for semiconductors and other tech-related technologies in response to President-elect Donald Trump's planned tariffs -- not to mention the downstream impacts of the tariffs themselves: higher production costs, and therefore higher tech prices that IT departments will have to bear when budgets are already tight and may become tighter.

Bottom Line

Infrastructure shortages due to reduced AI investments on the demand side, combined with higher prices and a potential US chip shortage due to a lack of materials on the supply side, would in turn impact the calculus of AI ROI. There are also broader impacts of the incoming administration's policies, such as mass deportation, which could affect tech workers, including AI talent, and their employers.

About the Author

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • LA Wildfires Raise Burning Questions About AI's Data Center Water Drain
    www.informationweek.com
Shane Snider, Senior Writer, InformationWeek | January 17, 2025 | 4 Min Read | Tithi Luadthong via Alamy Stock

Apocalyptic scenes from the continuing wildfire devastation in greater Los Angeles, California, raise serious questions about IT's growing need for a resource in short supply: water. The explosion of power-hungry AI models is a growing strain on water resources, even as the industry makes strides in mitigation efforts.

Many factors -- from water shortages due to an ongoing drought to infrastructure constraints -- led to a shortage of water and water pressure in fire hydrants throughout Los Angeles County. The shortages fueled partisan finger-pointing over blame. Water is increasingly becoming a major stress point for governments as IT needs increase.

Three Democratic California lawmakers introduced four separate bills last week aimed at slowing AI and data center water consumption. One of the bills' authors, Assemblymember Diane Papan, told Politico, "Water's a limited resource. I'm trying to make it so we are prepared and ahead of the curve as we pursue new technology."

Providing a snapshot of increasing data center water use, the US Department of Energy's report on the country's data center energy use pegs total 2023 water use at 66 billion liters, up from 21.2 billion liters in 2014. And that's just direct consumption to cool the data centers themselves -- water needed to cool the power plants supplying electricity to data centers also adds to the total.

But transparency on water use is an issue. About 50% of organizations do not collect water usage data for data center operations, according to Statista. Data Center Map counts 286 data centers in California, including 69 in Los Angeles.

AI Proliferation Driving Increased Water Needs

Artificial intelligence drove about 20% of new data center demand over the last year, according to a report from global commercial real estate firm JLL.
The market for colocation data centers, which soak up some of the highest water use rates, doubled in size over the past four years, according to the report. Data creation is expected to increase at a compound annual growth rate of 32% through 2030. The arms race to develop AI tools in the enterprise has driven both excitement and fear about the emerging technology.

Much of the hype around AI ethics has revolved around potential existential threats. Energy consumption and water use may not be a topic quite as scintillating as impending robot doomsday scenarios, but experts say the environmental impacts may pose the most immediate threat.

"It's heartbreaking to witness the aftermath of the LA fires and how they've exposed critical water infrastructure challenges," says Manoj Saxena, CEO and founder of Responsible AI Institute and InformationWeek Insight Circle member. "While we often debate the existential threats of AI, the immediate reality is its growing environmental impact -- particularly on carbon emissions and water consumption."

Pointing to statistics from the World Economic Forum, Saxena says global AI demand could push annual water usage to an astonishing 1.7 trillion gallons. "The fact that 20% of these servers already rely on water from stressed watersheds is a wake-up call."

Water Saving Strategies: Can We Keep Up?

There are many water-saving techniques data centers are deploying, including immersion cooling (submerging servers in liquid), free cooling (using outside air in colder climates), direct-to-chip cooling, and more. But as more sustainable techniques come online, the need for much more powerful data center servers could cancel out those efforts. Older data centers cannot keep up with the computing needs of new AI systems. And upgrades mean more strain on water resources, so experts are pushing for initiatives to keep up with increasing demand.

Companies are racing to adopt more sustainable data center plans.
Microsoft, for example, is moving forward with new data center designs that use chip-level cooling to consume no water. This design will avoid the need for more than 125 million liters of water per year per datacenter, Steve Solomon, Microsofts vice president for datacenter infrastructure engineering, said in a blog post.Related:Considering Microsoft reported its cloud data centers had soaked up 6.4 million cubic meters of water in 2022 (a 34% increase from the year prior), canceling out water use would be a big win. But overall, tech companies have struggled to meet previously set sustainability goals as generative AI unexpectedly took off with the release of ChatGPT.But RAIs Saxena says more needs to be done -- and quickly.We need to act now to ensure AIs growth doesnt come at the cost of our planet, Saxena says. This means adopting water-efficient cooling technologies, capping water use in drought-prone regions, promoting closed-loop cooling systems, incentivizing renewable-powered AI operations, and fostering public-private partnerships to set sustainable infrastructure policies.About the AuthorShane SniderSenior Writer, InformationWeekShane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.See more from Shane SniderNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also LikeWebinarsMore WebinarsReportsMore Reports
  • Why Liberal Arts Grads Could Be the Best Programmers of the AI Era
    www.informationweek.com
In the world of programming, technical chops have always been the golden ticket. But over the years, some of the best programmers I've hired and worked with didn't come from computer science backgrounds. They came from the humanities -- music, philosophy, literature. These liberal arts grads brought a fresh perspective to programming, one that's not always easy to find.

And as generative AI changes the game, this edge will only become more valuable. With AI handling the ABCs of programming -- the line-by-line code writing -- what's left is the harder stuff: understanding problems deeply, communicating with stakeholders, and designing solutions that make sense in the real world.

Programming Isn't Just About Code

Programming has never been purely about logic. Sure, you need what used to be called left-brain skill -- the ability to translate technical specs into precise code. But a programmer's real value comes when they push beyond that: recognizing patterns, solving complex problems, and seeing connections that others miss.

I first noticed this long ago. A talented colleague used to entertain a roomful of fellow IT workers by playing and singing Eric Clapton tunes. He was also a gifted coder, capable of recognizing patterns and solving problems in a different way.

Programming is a creative process, not unlike music. The notes matter, but so does knowing when to riff, how to structure, and how to build something that's more than the sum of its parts. It's no coincidence that the best developer I ever worked with, period, was a music major.

Liberal arts majors don't come to work burdened with technical rigidity. They've spent their time dissecting ideas, making connections between concepts, and thinking critically. They've honed their writing and storytelling. Those skills are incredibly valuable, especially now.

GenAI Is Changing the Job

GenAI is fundamentally changing what it means to be a programmer. Tools like GitHub Copilot and Google's Gemini can write code, debug simple issues, and automate many of the tasks that used to take up time. But AI doesn't know how to ask the right questions, interpret user needs, or mold its output into something that makes sense in a broader context. That's still a human job.

The role of the programmer is evolving, possibly splitting into two paths. There will always be a place for the hardcore programmer with a computer science background, someone to make systems talk to one another. For others -- call them citizen programmers -- the work is no longer just about writing code line by line; it's about knowing how to work with AI, guiding it, and knowing when and where human input is most needed.

This is where that liberal arts mindset comes in -- being able to understand the nuances, think critically about user experience, explain things simply, and piece together ideas in new ways.

Preparing for the AI Future

So, what should businesses do with this insight? First, it's time to rethink talent and look for people who can adapt, think on their feet, and see the big picture. This outreach could start at the university level, where IT recruiters begin visiting leading liberal arts and music colleges in addition to the traditional technical schools on their lists.

We also need to recognize that the most valuable skills don't always show up on a resume. How do you measure the ability to see a new solution that nobody else considered? Or the capacity to understand what a user is really asking for, even if they can't quite articulate it? These are the skills that will matter most, even if they don't fit neatly into a job description.

And once these new minds are hired, there's a need to change how we approach development within our teams. AI isn't going to stop evolving, and neither can we. For the next few years, people will focus on learning how to use these new tools. But beyond that, it'll be about figuring out how to create with them. And that's going to require people who aren't afraid to question how things have always been done.

All this change isn't mere theory; it's happening right now. Instead of looking for people who tick all the technical boxes, I'm looking for those who bring a creative mindset to the table. Hiring cannot be merely about pulling in more STEM graduates. It must be about building an environment where people with different backgrounds can work together to solve problems.

The future of tech work will be shaped by those who can use AI to amplify their creativity, their empathy, and their ability to solve tough problems. In my experience, that's often the person with a background in the humanities.
  • What Does Biden's New Executive Order Mean for Cybersecurity?
    www.informationweek.com
Carrie Pallardy, Contributing Reporter
January 16, 2025 | 5 Min Read
President Joe Biden meets with White House staff in the Oval Office, 2022, to review remarks he will give at an executive order signing. (Official White House Photo by Adam Schultz) American Photo Archive via Alamy Stock Photo

On Jan. 16, just days before leaving office, President Biden issued an executive order on improving the nation's cybersecurity. The extensive order comes on the heels of the breaches of the US Treasury and US telecommunications providers perpetrated by China state-sponsored threat actors.

"Adversarial countries and criminals continue to conduct cyber campaigns targeting the United States and Americans, with the People's Republic of China presenting the most active and persistent cyber threat to United States Government, private sector, and critical infrastructure networks," the order states.

This new executive order, building on the one Biden issued in 2021, is extensive. It addresses issues ranging from third-party supply chain risks and AI to cybersecurity in space and the risks of quantum computers. Could this executive order shape the federal government's approach to cybersecurity? And how uncertain is its impact under the incoming Trump administration?

The Executive Order

The executive order outlines a broad set of initiatives to address nation-state threats, improve defense of the nation's digital infrastructure, drive accountability for software and cloud providers, and promote innovation in cybersecurity. Like the 2021 executive order, the newly released order emphasizes the importance of collaboration with the private sector.

"Since it's an executive order, it's mainly aimed at the federal government. It doesn't directly regulate the private sector," Jim Dempsey, managing director of the Cybersecurity Law Center at nonprofit International Association of Privacy Professionals (IAPP), tells InformationWeek. "It indirectly aims to impact private sector cybersecurity by using the government's procurement power."

For example, the order directs software vendors working with the federal government to submit machine-readable secure software development attestations through the Cybersecurity and Infrastructure Security Agency (CISA) Repository for Software Attestation and Artifacts (RSAA). If CISA finds that attestations are incomplete or artifacts are insufficient for validating the attestations, the Director of CISA shall notify the software provider and the contracting agency, according to the order.

The order also calls for the development of guidelines relating to the secure management of cloud service providers' access tokens and cryptographic keys. In 2023, a China-backed threat actor stole a cryptographic key, which led to the breach of several government agencies' Outlook email systems, Wired reports. A stolen key was behind the compromise of BeyondTrust that led to the recent US Treasury breach.

AI, unsurprisingly, doesn't go untouched by the order, which delves into establishing a program for leveraging AI models for cyber defense. The Biden administration also uses the executive order to call attention to cybersecurity threats that may loom larger in the future, pointing to the risks posed by quantum computers and space system cybersecurity concerns.

Biden's Cyber Legacy

The Biden administration made cybersecurity a priority. In addition to the 2021 executive order on cybersecurity, the administration released a National Cybersecurity Strategy and an implementation plan in 2023. The administration also took sector-specific actions to bolster cybersecurity. For example, Biden issued an executive order focused on maritime cybersecurity.

Kevin Orr, president of RSA Federal at RSA Security, a network security company, saw a positive response to the Biden administration's efforts to improve cybersecurity within the government. "I was surprised at how many agencies have leaned in the last 18 months, especially within the intelligence community, have really adopted basic identity proofing, coming forward with multifactor authentication, and really strengthening their defenses," Orr shares.

While the Biden administration has worked to further cybersecurity, there are questions about adoption of new policies and best practices. Some stakeholders call for more regulatory enforcement. "Much like any regulation, people are only going to follow it if there's some type of regulatory teeth to it," Joe Nicastro, field CTO at software security firm Legit Security, argues. Others argue that incentives are more likely to drive adoption of cybersecurity measures.

Cybersecurity is an ongoing national security concern, and the Biden administration is soon passing the torch. "I think this administration can leave extremely, extremely proud," says Dempsey. "Certainly, they are handing over the nation's cybersecurity to the incoming Trump administration in far better shape than it was four years ago."

A New Administration

While the order could mean big changes in the federal government's approach to cybersecurity, the timing makes its ultimate impact uncertain. Many of its directives for federal agencies have a long runway, months or years, for compliance. Will the Trump administration enforce the executive order?

Cybersecurity has largely been painted as a bipartisan issue, and there has been some continuity between the first Trump administration and the Biden administration when it comes to cyber policies. For example, the Justice Department recently issued a final rule on Biden's Executive Order 14117, "Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern." That order charges the Justice Department with establishing a regulatory program to prevent the sale of Americans' sensitive data to China, Russia, Iran, and other foreign adversaries. That order and the subsequent rule stem from an executive order signed by Trump in 2019.

Biden's 2025 cybersecurity executive order puts a spotlight on cyber threats from China, and President-Elect Trump has been vocal about his intention to crack down on those threats. But that does not preclude changes to or dismissal of provisions in Biden's final cybersecurity executive order. "There may be some things that the incoming administration will ignore or deprioritize. I'd be a little surprised if they repealed the order," says Dempsey.

CISA was a major player in the Biden administration's approach to cybersecurity, and it will continue to play a big role if this new executive order rolls out as outlined. But the federal agency has been criticized by several Republican lawmakers. Some have called to limit its power or even shut it down, AP News reports.

The incoming Trump administration is also expected to take a more hands-off approach to regulation in many areas. Critical infrastructure is consistently at the heart of national cybersecurity conversations, and the majority of critical infrastructure is owned by the private sector. "In terms of new regulation aimed at the private sector, I think we probably will not see anything out of the Trump administration," Dempsey predicts.

Cybersecurity policy could look different under the Trump administration, but it is likely to remain at the forefront of national security discussions. "I'm hoping that threat of what China is doing with their cybersecurity programs and how they're facilitating attacks against BeyondTrust and US Treasury, et cetera, will help continue the progress that we've made within cybersecurity," says Nicastro.

About the Author
Carrie Pallardy, Contributing Reporter
Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
  • How Tech Supports the Emergency Response to the LA County Wildfires
    www.informationweek.com
Joao-Pierre S. Ruth, Senior Editor
January 16, 2025 | 4 Min Read
Flame front of the Eaton Fire on the first night during the January 2025 California wildfires in Altadena and Los Angeles. Timothy Swope via Alamy Stock Photo

Satellite-based communication helped clear up some of the smoke and confusion that arose from the LA County wildfires that tore into Southern California. Firefighters, who came from across the country, Canada, and Mexico, contained a number of the devastating fires that began the first week of 2025, but some of the largest patches of flame continue to burn.

Rescue and recovery efforts require cohesive communication, for individuals and emergency responders, in such a widespread disaster. So far, the infernos across the region have collectively consumed more than 40,000 acres of land, destroyed entire communities, and claimed at least 24 lives.

Companies such as Intrado and Cisco offer resources that can help ensure clear lines of communication remain available during disasters that might disrupt standard means of staying connected. "What happens in these disaster situations is traditional networks may have impacts to them because of the nature of hurricanes or fires, and it knocks out the traditional communication we're all used to," says Josh Burch, vice president of product operations at Intrado. That might include cellular, landline, VoIP, or voice over Wi-Fi networks, he says. In such instances, Burch says, the use of satellite-based communication, including direct-to-device satellite communication, may come into play.

"Two to three years ago, something like this wouldn't even have been possible," Burch says. The ability for satellite constellations to communicate with wireless handsets is now possible in certain scenarios. That communication might be limited to text in certain circumstances, but it can still allow emergency messages to be transmitted by individuals who might otherwise lose contact and service on their phones, Burch says. Intrado has processed more than 2,000 TXT29-1-1 messages in LA County since the wildfires began.

The LA County wildfires saw Cisco called into action to provide support to agencies tasked with rescue and relief efforts. To avoid confusion in already challenging circumstances, Cisco Crisis Response Director Erin Connors says the Cisco team gets rolling only after they make contact with the emergency response agencies that focus on critical infrastructure, government continuity, aid delivery, and public safety. She says her team leverages Cisco resources, funding, technology, and expertise to connect vulnerable communities in crisis.

In the case of the LA fires, there has been some degradation to cellular infrastructure, but not extensively, Connors says. Her team is made up of emergency response network engineers who can deploy in the event of emergencies. "We don't self-deploy; it's always at the request of partner agencies that are on the ground that have an expressed need that we can meet."

For the LA County fires, Connors says her team received requests from state and local response agencies for such needs as connectivity support for command posts and incident management teams. "In a lot of cases, this is where they're setting up new offices," she says. "There might still be some backhaul or cellular connectivity, but if they're setting up new field offices to be able to coordinate and manage their relief activities, they need network infrastructure."

Cisco lent satellite backhaul where needed, Connors says, pairing it with Cisco Meraki security appliances to secure the network. All this goes toward managing and prioritizing network traffic to help response centers deliver services to affected citizens, she says. "You can block Netflix streaming, for example, so that doesn't actually take up all the bandwidth for really critical communications to manage response efforts."

Resources Cisco brings to bear for such disasters include equipment or remote guidance while the emergency response agencies look for longer-term solutions. Connors says Cisco's response team continues to provide remote support for the recovery from Hurricane Helene and also lent support in response to the 2023 fires in Maui. The team not only works with government agencies but also supports nonprofit organizations and provides community Wi-Fi and shelters, she says.

Though the LA County wildfires covered a vast breadth of geography and caused widespread damage and displacement, Connors says the situation has not necessarily affected the resources Cisco made available. That is despite the challenge of oversight and communication needed to coordinate support from across international borders. "There is just a lot more to manage for all of the agencies," she says. "That maybe is a little bit different."

The evolution of satellite technology such as Starlink, Connors says, has made these types of resources more accessible and affordable to put into play during such crises. "That's been a big game changer." In prior years, Connors says, when Cisco first offered crisis response, resources had to be deployed in person with skilled technicians to set up and manage the network. Nowadays, she says, with Cisco Meraki the equipment can be shipped to a location, is relatively easy to install, and features remote support. AI can also be used to scan for threats and troubleshoot the systems, making it easier to deliver support to responding agencies without always needing boots on the ground. "We don't necessarily need to fly in, deploy, set it up, and be there long term," Connors says.

About the Author
Joao-Pierre S. Ruth, Senior Editor
Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.
  • Microsoft Rings in 2025 With Record Security Update
    www.informationweek.com
TechTarget and Informa Tech's Digital Business Combine. Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.

Microsoft Rings in 2025 With Record Security Update
Company has issued patches for an unprecedented 159 CVEs, including eight zero-days, three of which attackers are already exploiting.
Dark Reading, Staff & Contributors
January 16, 2025 | 1 Min Read
Elena11 via Shutterstock

Microsoft's January update contains patches for a record 159 vulnerabilities, including eight zero-day bugs, three of which attackers are already actively exploiting. The update is Microsoft's largest ever and is notable also for including three bugs that the company said were discovered by an artificial intelligence (AI) platform. Microsoft assessed 10 of the vulnerabilities disclosed this week as being of critical severity and the remaining ones as important bugs to fix.

As always, the patches address vulnerabilities in a wide range of Microsoft technologies, including Windows OS, Microsoft Office, .NET, Azure, Kerberos, and Windows Hyper-V. They include more than 20 remote code execution (RCE) vulnerabilities, nearly the same number of elevation-of-privilege bugs, and an assortment of other denial-of-service flaws, security bypass issues, and spoofing and information disclosure vulnerabilities.

Read the Full Article on Dark Reading

About the Author
Dark Reading, Staff & Contributors
Dark Reading: Connecting The Information Security Community. Long one of the most widely read cybersecurity news sites on the Web, Dark Reading is also the most trusted online community for security professionals. Our community members include thought-leading security researchers, CISOs, and technology specialists, along with thousands of other security professionals.
  • 10 Unexpected, Under the Radar Predictions for 2025
    www.informationweek.com
10 Unexpected, Under the Radar Predictions for 2025
From looming energy shortages and forced AI confessions to the rising ranks of AI-faked employees and a glimmer of a new cyber iron curtain, here's what's happening that may require you to change your company's course.
Pam Baker, Contributing Writer
January 16, 2025 | 10 Slides
Bombaert Patrick via Alamy Stock

You've seen all the expected predictions for 2025 in all the usual places, but you know that there has to be more afoot. After all, 2025 is a year starting off with a bang as a politically loaded, globally tense, technologically lopsided, inflationary trippy, and myopically viewed period. There's bound to be lots of stuff brewing beneath the radar.

Scared yet? Excited instead? Yes, no, maybe? It doesn't matter; we all want to find the opportunities and dodge the risks, like we do every year. To that end, consider the following blips on some very special, somewhat obscure radar screens that may grow into the next thing that changes everything. Or maybe not.

Now, whether these barely noticed or unexpected insights slide us back toward normalcy or tip us overboard into chaos is a different story for a different day. For now, let's peer ahead and see what's lurking in the foggy future.

About the Author
Pam Baker, Contributing Writer
A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
  • 3 Strategies For a Seamless EU NIS2 Implementation
    www.informationweek.com
Businesses everywhere face pressure to enhance their security postures as cyberattacks across sectors rise. Even so, many organizations have been hesitant to invest in cybersecurity for a variety of reasons, such as budget constraints and operational issues. The EU's new Network and Information Security Directive (NIS2) confronts this hesitancy head-on by making it mandatory for companies in Europe, and those doing business with Europe, to invest in cybersecurity and prioritize it regardless of budgets and team structures.

What Is NIS2?

The first NIS Directive was implemented in 2016 as the EU's endeavor to unify cybersecurity strategies across member states. In 2023, the commission introduced the NIS2 Directive, a set of revisions to the original NIS. Each member state was required to implement the NIS2 recommendations into its own national legal system by October 17, 2024.

The original NIS focused on improving cybersecurity for several sectors, such as banking and finance, energy, and healthcare. NIS2 expands that scope to other entities, including digital services, such as domain name system (DNS) service providers, top-level domain (TLD) name registries, social networking platforms, and data centers, along with manufacturing of critical products, such as pharmaceuticals, medical devices, and chemicals; postal and courier services; and wastewater and waste management.

Organizations in these industries are now required to implement more robust cyber risk management practices like incident reporting, risk analysis and auditing, resilience/business continuity, and supply chain security. For example, member states must ensure TLD name registries and domain registration services collect accurate and complete registration data in a dedicated database. The new regulations also strengthen supervision and enforcement mechanisms, requiring national authorities to monitor compliance, investigate incidents, and impose penalties for non-compliance.

The goal of these new measures is to ensure the stability of society's infrastructure in the face of cyber threats. Entities in the EU will benefit from adopting these security measures over the long run, better preventing a devastating cyberattack. In doing so, they will also avoid the NIS2 penalties, which are significantly more punitive and clearly defined than those created under the original directive.

Impact on Organizations

Much like how the European Union's General Data Protection Regulation (GDPR) reset the standard for privacy globally, NIS2 sets clear requirements for businesses to establish stronger security defenses, but not without a cost. Failing to comply can lead to severe financial penalties and legal implications.

The official launch of NIS2 in October was met with mixed reactions. While some organizations could testify that they had been preparing all along, many others had left NIS2 on the back burner. In addition, as a result of the new sectors covered by NIS2, there were businesses that did not initially believe they would be impacted and therefore had not laid their own groundwork.

All this said, it will be interesting to see how penalty enforcement plays out in 2025. If organizations don't demonstrate compliance early in the new year, or at least show progress toward becoming compliant, I predict we will start to see consequences, though it may be too soon to tell which sectors will face them first.

To those still grappling with NIS2 implementation, it may understandably seem like a daunting task, but it does not have to be. Here are three actions organizations can take today to ensure a more seamless NIS2 implementation:

1. Evaluate your business partners. NIS2 is not just about strengthening one business's security; it also demands businesses thoroughly evaluate every entity they engage with in their supply chain. A chain is only as strong as its weakest link, and the same can be said for businesses and their partners' security postures. It is essential for organizations to audit their partners to ensure every entity they do business with meets NIS2 requirements. Evaluating any security gaps now can help avoid overlooked issues down the road.

2. Consolidate your domains. We have heard anecdotally that some businesses are not fully aware of their domain registrars or who is responsible for managing and securing the domains within their organization. This lapse in knowledge creates more than siloed work environments; it can cause major repercussions when it comes to secure domain management and NIS2 compliance. Taking a more consistent, consolidated approach to managing and securing domains helps strengthen an organization's overall domain security and checks one more task off the team's compliance checklist.

3. Stay security-minded, organization-wide. With new NIS2 requirements, businesses must report cybersecurity incidents within 24 hours. This demand requires an organization-wide culture shift to a more security-minded approach to the way they do business. For example, businesses may need to evaluate what cybersecurity protocols they have in place to secure the way they interact with their customers and their supply chain. Without security being top of mind, businesses may miss NIS2 requirements, which could lead to revenue loss, loss of customers, and even dents in their reputation. This shift doesn't happen overnight, but working with partners that are security-minded helps organizations stay a step ahead in their security.

As cybercriminals become more elusive in targeting reputable organizations, and as global geopolitical tensions leave many companies in the crossfire of nation-state attacks, adhering to NIS2 standards becomes all the more critical. These three strategies are guiding principles for organizations to contribute to a safer, more secure enterprise environment in Europe and around the world.
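As a simple illustration of the 24-hour reporting window described above, the deadline arithmetic can be sketched in a few lines. This is a minimal, illustrative sketch only: the directive's actual obligations are staged (early warning, incident notification, final report) and the details vary by member state.

```python
# Illustrative sketch of a NIS2-style 24-hour initial reporting window.
# Not a compliance tool: real obligations are staged and state-specific.

from datetime import datetime, timedelta, timezone

REPORT_WINDOW = timedelta(hours=24)

def report_deadline(detected_at: datetime) -> datetime:
    """When the initial incident report would be due."""
    return detected_at + REPORT_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True once the reporting window has elapsed without a report."""
    return now > report_deadline(detected_at)

detected = datetime(2025, 1, 16, 9, 30, tzinfo=timezone.utc)
print("Initial report due by:", report_deadline(detected).isoformat())
print("Overdue at 10:00 the next day?",
      is_overdue(detected, datetime(2025, 1, 17, 10, 0, tzinfo=timezone.utc)))
```

Wiring a check like this into incident-response tooling is one way to make the culture shift concrete: the clock starts at detection, not at triage.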
  • What Security Leaders Get Wrong About Zero-Trust Architecture
    www.informationweek.com
John Edwards, Technology Journalist & Author
January 15, 2025
5 Min Read
Alexander Yakimov via Alamy Stock Photo

Zero-trust architecture has emerged as the leading security method for organizations of all types and sizes. Zero trust shifts cyber defenses away from static, network-based perimeters to focus directly on protecting users, assets, and resources. Network segmentation and strong authentication methods give zero-trust adopters strong Layer 7 threat prevention. That's why a growing number of enterprises are embracing the approach. Unfortunately, many security leaders continue to deploy zero trust incorrectly, weakening its power and opening the door to all types of bad actors. To prevent the mistakes that many organizations make when planning a transition to zero-trust security, here's a look at six common misconceptions to avoid.

Mistake One: A single security vendor can supply everything

One vendor can't provide everything your organization needs to implement a zero-trust architecture strategy, warns Tim Morrow, situational awareness technical manager in the CERT division of Carnegie Mellon University's Software Engineering Institute. "It's dangerous to accept zero-trust architecture vendors' marketing material and product information without considering whether it will meet your organization's security priority needs and its capability to implement and maintain the architecture," Morrow says in an email interview.

Mistake Two: Zero trust is too costly to implement

Aside from the costs saved by reducing the risk of a breach, zero trust can help save long-term expenses by improving asset utilization and operational effectiveness and by reducing compliance costs, says Dimple Ahluwalia, vice president and managing partner, security consulting and systems integration at IBM, via email.

Mistake Three: Underestimating the technical challenges

IT and security leaders often overlook the need to implement and manage foundational security
practices before establishing a zero-trust architecture, says Craig Zeigler, an incident response senior manager at accounting and business advisory firm Crowe, in an online interview. They may also fail to identify potential gaps, such as vendor-related issues, and to ensure that the chosen solution is not only compatible with their specific needs but also equipped with the appropriate controls to provide equal or greater security. "In essence, without security leaders having a thorough understanding of their team and endpoints, implementing zero trust becomes a daunting task."

Mistake Four: Failing to align zero-trust architecture strategy with overall enterprise assets and needs

Cyberattacks are growing in number and severity. "A continuous vigil concerning the organization's security operations ... must be maintained," Morrow says. The zero-trust architecture must fully mesh with business operations and goals. Understand your organization's current assets -- data, applications, infrastructure, and workflows -- and set up a procedure to update this information periodically, Morrow advises. "Yearly updates of your organization's assets will definitely no longer be enough." Organizations also need to remember that their business and reputation are on the line each and every day, Morrow says. "Not doing your best to reduce your organization's risks to cyber threats can be very costly."

Mistake Five: Viewing zero trust as a solution rather than an ongoing strategy

It's essential for security leaders to understand that zero trust is not a static goal but a dynamic, evolving strategy, says Ricky Simpson, solutions director at Quorum Cyber, a Microsoft cybersecurity partner.
"Building a culture that prioritizes security at every level, from executive leadership to individual employees, is critical to the success of zero-trust initiatives," he notes via email. Simpson sees continuous education, regular assessments, and a willingness to adapt to new threats and technologies as key components of a sustainable zero-trust framework. "By fostering collaboration and maintaining a vigilant stance, security leaders can better protect their organizations in an increasingly complex and hostile digital environment."

Mistake Six: Believing that implementing zero trust is simply a one-and-done project

Zero trust is actually a holistic and strategic approach to security that requires ongoing evaluations of trust and threats. "It's not a quick fix but a long-term shift in strategy," says Shane O'Donnell, vice president of Centric Consulting's cybersecurity practice. Underestimating zero-trust implementation poses two major risks, notes O'Donnell in an email interview. First, unrealistic timelines and expectations can derail project planning, exhaust budgets, and drain resources. Second, hasty or flawed execution can actually create new security vulnerabilities, defeating the very purpose of a zero-trust architecture. O'Donnell says this misconception can be addressed through continuous education and understanding. "It's vital for security leaders to realize that transitioning to a zero-trust architecture means substantial technological and organizational changes," he says. "This strategy should be treated as an ongoing commitment that lasts way beyond the initial set-up stage."

About the Author

John Edwards, Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design.
He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • How CISOs Can Build a Disaster Recovery Skillset
    www.informationweek.com
You hear this mantra in cybersecurity over and over again: It's not if, it's when. With data breaches, ransomware attacks, and all manner of incidents abounding, it seems like disaster lurks around every corner. The prevalence of these incidents has shifted the CISO's emphasis from prevention to resilience. Yes, even the most prepared enterprises can still get hit. What matters is how they bounce back. Today's CISO role has disaster recovery baked into the job description. How can CISOs cultivate that skillset and use it to guide their organizations through the fallout of a major cybersecurity incident?

Defining Critical Disaster Recovery Skills

Disaster recovery has become an essential part of the CISO role. "In cybersecurity, we live in the world of incidents, whether it's someone clicking on a phish or someone plugging in a USB drive, or someone who's conducted fraud against your company," Ross Young, CISO in residence at venture capital fund Team8, tells InformationWeek. Incident response and disaster recovery go hand in hand. "Some of the best CISOs are some of the best understanders of disaster recovery efforts and apply those in their own security response plans," says Matt Hillary, CISO at compliance automation platform Drata. Effective disaster recovery requires both technical skills and human skills. On the technical side, CISOs must understand how each part of the technology stack is used in their organizations and how that technology impacts the CIA triad: confidentiality, integrity, and availability. "A lot of that technical work is going to be driven down to the engineering level. Ideally, the CISO will have done the right work to bring in the right talent and drive the technical remediation," says Marshall Erwin, CISO at Fastly, a cloud computing services company. CISOs also need to be able to put themselves in the mindset of attackers to understand their goals and what they could be doing once inside the network.
"You can say, 'Team, here's where we need to be looking, here's where we need to point our lens and our forensic skills to identify what an attacker did,' to be able to make sure that we kicked them out and have cleaned up our internal network," says Erwin. But human skills are equally important. CISOs need to be able to communicate effectively across multiple teams and with C-suite peers to lead an effective response. "What you feel you need to do from a security investigative perspective might be the opposite [of what] business resilience folks want to take," says Mandy Andress, CISO at Elastic, an AI search company. "How do you navigate, communicate, and find the compromises?" A lot of that work is best done in advance of an actual incident. CISOs can add their voice to disaster recovery plans to ensure the security perspective is in place before an attacker gets inside. In the heat of a cybersecurity disaster, CISOs also have a responsibility to their team. They need skills to get them through the incident response process. "It seems like every incident I've ever seen, it always happens on a Saturday when everybody's at their kid's baseball game or something else. It's the most inconvenient time possible. How do you keep the positive morale?" says Young. Remaining calm and decisive in the midst of a stressful situation that can last days, weeks, or even months is necessary and not without its challenges. "I think there is a lot of bravado sometimes in the security community," says Hillary. "I don't know if it's a mask or if it's something else that leads us to not being as human as we need to be."
"And so, just to continue to be humble, teachable, and learn throughout that incident."

Cultivating Disaster Recovery Skills

While people may have different career paths that lead them to the CISO role, they've most likely worked through cybersecurity incidents along the way. "Incidents are frequent enough that you're going to have that experience at some point in your career and develop that expertise organically," says Erwin. While trial by fire is an excellent teacher, there are other ways that CISOs can shore up their disaster response and recovery toolboxes. Industry conferences, for example, can offer valuable training. "When I was the CISO of Caterpillar Financial, I went to FS-ISAC [Financial Services-Information Sharing and Analysis Center], and they had a CISO conference where they did tabletop exercises simulating an insider threat," Young shares. CISOs can lead their own tabletop exercises at their enterprises to better understand the holes in their incident response plans and areas where they need to strengthen their own skills. Other leaders within an organization can be valuable resources for CISOs looking to cultivate these skills. "One of my closest peers that I usually go to is someone who's over on the infrastructure team," says Hillary. "Any kind of disaster impact or availability incident that they experience on their end, they have a plan for. They have a really good, well-exercised muscle within the organization to recover." CISOs can also look outside of their organizations for ways to sharpen their skills. Hillary shares that he always looks at other breaches and outages. "I usually ask myself two questions: How do I know that this same vector isn't being used against my company right now? How do I know this same incident that this other company is experiencing can't happen to us?" he says.
"So, it helps drive a lot of preventative measures."

Navigating Disaster

In a world of third-party risk, human error, and motivated threat actors, even the best-prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? "It is an opportunity for the CISO to step in and lead," says Erwin. "That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable of doing that or doesn't show up and shape the response, well, that's an indication of a problem." CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. "I don't see a world where CISOs don't get some blame when an incident happens," says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. "Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?" Young asks. "I also see this big push of not only CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly." Andress shares that she is seeing CISOs be replaced less often. "More often it's a recognition of underinvestment. And so, what I see more of is an increasing investment in the security program after an event or incident occurs," she says. After each incident, CISOs have the opportunity to learn about the strengths and weaknesses in the enterprise's security and incident response plan, as well as in their own skillsets. For Andress, one of the biggest lessons has been to focus on the people involved in incident response. "Everyone's looking at the technology. Everyone's looking at communication plans, but there are people working a lot of hours. How do we make sure that they're taking breaks? Getting rest? Getting fed?" she says. "If you want to have a strong and successful response, make sure that you're focusing on not just the technology and the process aspects but really focusing on the people as well."
  • Technology Leadership: The Sky Isn't the Limit
    www.informationweek.com
    Here are six lessons that flying taught me about being a leader.
  • Why Enterprises Are Prioritizing Employee Experience Again
    www.informationweek.com
Lisa Morgan, Freelance Writer
January 14, 2025
7 Min Read
Aleksandr Davydov via Alamy Stock

It's a tough time for organizations trying to hire and retain tech talent. Big Tech is poaching smaller companies' IT workers, and a lot of organizations can't compete with the compensation packages. However, what they can do is prioritize employee experience, so candidates are more willing to say yes and employees are more likely to stay. Employee experience is particularly important to younger generations. "While organizations have long placed varying degrees of importance on employee experience, it is now re-emerging as a differentiator for many," Nikita McClain, founder of Hayes Street Consulting, a management consulting firm for HR and organizational development strategies, says in an email interview. "The pandemic fundamentally shifted workplace expectations, with employees increasingly prioritizing flexibility, values alignment, and work-life balance. Additionally, skills gaps and talent shortages in critical sectors have given trained workers more leverage in demanding better experiences." Organizations focused on designing positive employee experiences can follow a four-step process: commit through strategy, communicate through feedback loops, connect through analysis, and improve continuously. "Making employee experience a strategic initiative ensures it can receive the ongoing leadership evaluation and support needed to design, adapt, and sustain initiatives that resonate highly with employees," says McClain. Implementing accessible, real-time feedback channels can help proactively identify what matters most to employees. Organizations can promote ongoing feedback by demonstrating responsiveness and transparency in acting upon and communicating changes that result from received feedback.

Nikita McClain, Hayes Street Consulting

To evaluate what does and does not work, organizations should identify KPIs that connect employees' sentiments about experience to overall business outcomes.
This becomes the business case for employee experience, validating its value amid competing organizational priorities and budget constraints. Using insights from employee feedback and data analysis can help organizations refine employee experience over time, keeping it relevant. Finally, continuous improvement enables organizations to evolve with the times. One challenge for organizations embarking on improving employee experience is elevating it from a function of HR to a strategic initiative that defines the organization's employee value proposition, says McClain. Efforts to strengthen communication and accountability between HR and executive leadership may be needed to ensure all are working toward the same goal. Middle managers play a vital role in employee perceptions of the workplace. Organizations should engage and educate middle management on how to effectively address matters such as when employee and operational needs do not align. Notably, not all employees prioritize and value the same things. Among differing work styles, generations, and life stages, it can be difficult to identify one-size-fits-all initiatives that work well for everyone. "When possible, be flexible and consider how employee experience can be tailored to account for diversity of need among employees," says McClain. "Above all, recognize that employee experience is a journey. The most effective initiatives are those that remain flexible and responsive to changing workforce needs while maintaining clear alignment with organizational goals and values."

The Talent Shortage Is a Major Factor

One big deal breaker is a return-to-office (RTO) policy. Organizations clinging to pre-pandemic business as usual are finding that some candidates won't compromise and employees will complain, if not quit outright. The talent shortage just exacerbates the problem. Younger generations have many options and choose workplaces that meet their needs for pay, growth, and flexibility.
"The current dissatisfaction with RTO policies is revealing," says Justina Raskauskiene, human resource team lead at omnichannel marketing platform Omnisend, in an email interview. "At Omnisend, we see the value of in-person collaboration for creativity and teamwork but also recognize that flexibility is now non-negotiable for many employees. If RTO is implemented, the focus should be on making people want to come to the office by emphasizing the benefits of in-person interaction."

Justina Raskauskiene, Omnisend

To ensure a great employee experience, she says her company makes a point of listening to employees and acting on their feedback. The company also holds regular one-on-one meetings, so employees have a mechanism to share concerns. "The biggest challenge for organizations, I think, is overcoming resistance to change. Many companies operate on a 'this is how we've always done it' mindset, which makes implementing flexibility look daunting," says Raskauskiene. "As flexibility varies for everyone, companies may struggle balancing individual needs and team goals. Equally important, employee experience isn't just about perks; it's about ensuring the work itself is meaningful and engaging."

Times Have Changed

In the past, a good employee experience was synonymous with ping pong tables and free snacks and drinks -- a stark departure from the button-down Corporate America of yesteryear. Later, the pandemic reshaped expectations, leading employees to seek work-life balance, joy, and purpose. Katie Roland, chief human resources officer at KCSA Strategic Communications, says her company is trying to enable work-life balance with tailored PTO programs and wealth-building opportunities. However, along the way, it became clear that a one-size-fits-all benefits approach no longer worked. The pandemic brought into hyperfocus that each employee faced different personal challenges at home.
"To address diverse employee needs, we adopted Overalls LifeConcierge, a service offering expert assistance with time-consuming personal tasks like finding healthcare specialists, navigating Medicare for aging parents, or scheduling home repairs," says Roland in an email interview. "Overalls isn't just a perk; it's a game-changing solution for helping employees achieve true work-life balance. Organizations should provide meaningful, flexible support that addresses the whole employee." Toward that end, KCSA focuses on initiatives that reduce stress, save time, and address individuals' needs. They include flexible work schedules, mental health days, a wide choice of benefits through a professional employer organization, and partnering with providers. "They aren't just workers; they're parents, caregivers, partners, and more. Companies that acknowledge this and take steps to help employees succeed in every aspect of their lives will stand out in today's competitive talent market," says Roland. Creating a great employee experience starts with listening -- through surveys and conversations -- and responding with meaningful support to build trust. At KCSA, insights from employees led to initiatives like Overalls LifeConcierge and No Zoom Fridays, addressing both personal and professional needs. Apparently, KCSA's approach is working. In 2024, Newsweek ranked the company No. 34 on its Most Loved Places to Work list.

The Tricky Part

One challenge is to balance employer and employee interests in a way that benefits both. For example, most workers view RTO as beneficial for the company and management, but not necessarily the employee. The best benefits are those that directly address employee pain points. Transparency and education also matter. Communicate why you're offering certain programs and how they support your team's overall well-being. When real-life issues come up that a benefit you offer can assist with, make sure to help the employee utilize it.
"Authenticity builds trust, and trust builds loyalty," says Roland. "We've seen how investing in innovative, practical benefits fosters a culture of care and empowerment. When employees feel supported, they're not only more engaged, they're also more likely to stay and thrive. In today's workplace, that's not just an advantage, it's a necessity."

A Job Versus an Experience

John Jackson, founder at click fraud protection platform Hitprobe, says today's workers are not just looking for a job to pay the rent; they are looking for an experience. "This particularly applies to the younger generations, and their way of looking at the world is influencing their older colleagues," says Jackson in an email interview. "While this change was already happening, it has been sped up by the Great Resignation and the ongoing debates around hybrid working and organizations changing policies on this post-COVID. I believe that the key to creating a great employee experience centers around authenticity and adaptability. Implementing trendy perks is easy, but to make yourself the employer of choice you [must] genuinely understand what your team needs and respond to it." Hitprobe implements anonymous feedback on a regular basis to find out what employees want. However, what they want often differs from initial assumptions. Also important is a commitment to act on the feedback. "If the workforce knows we will listen, they will continue to talk to us and stay with us. Flexibility is key," says Jackson. "What works for one team member may not work for another, and diverse teams have varying needs and expectations. Open communication channels, transparency, public acknowledgement and recognition, and providing clear growth opportunities all go a long way toward building a team that is committed and productive."

About the Author

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek.
She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • Are We Ready for Artificial General Intelligence?
    www.informationweek.com
The artificial intelligence evolution is well underway. AI technology is changing how we communicate, do business, manage our energy grid, and even diagnose and treat illnesses. And it is evolving more rapidly than we could have predicted. Both the companies that produce the models driving AI and the governments attempting to regulate this frontier environment have struggled to institute appropriate guardrails. In part, this is because of how poorly we understand how AI actually functions. Its decision-making is notoriously opaque and difficult to analyze. Thus, regulating its operations in a meaningful way presents a unique challenge: How do we steer a technology away from making potentially harmful decisions when we don't exactly understand how it makes its decisions in the first place? This is becoming an increasingly pressing problem as artificial general intelligence (AGI) and its successor, artificial superintelligence (ASI), loom on the horizon. AGI is AI with intelligence equivalent to that of humans; ASI is AI that exceeds human intelligence entirely. Until recently, AGI was believed to be a distant possibility, if it was achievable at all. Now, an increasing number of experts believe that it may be only a matter of years until AGI systems are operational. As we grapple with the unintended consequences of current AI applications -- understood to be less intelligent than humans because of their typically narrow and limited functions -- we must simultaneously attempt to anticipate and obviate the potential dangers of AI that might match or outstrip our capabilities. AI companies are approaching the issue with varying degrees of seriousness -- sometimes leading to internal conflicts. National governments and international bodies are attempting to impose some order on the digital Wild West, with limited success. So, how ready are we for AGI?
Are we ready at all? InformationWeek investigates these questions with insights from Tracy Jones, associate director of digital consultancy Guidehouse's data and AI practice; May Habib, CEO and co-founder of generative AI company Writer; and Alexander De Ridder, chief technology officer of AI developer SmythOS.

What Is AGI and How Do We Prepare Ourselves?

The boundaries between narrow AI, which performs a specified set of functions, and true AGI, which is capable of broader cognition in the same way that humans are, remain blurry. As Miles Brundage, whose recent departure as senior advisor of OpenAI's AGI Readiness team has spurred further discussion of how to prepare for the phenomenon, says, AGI is "an overloaded phrase." "AGI has many definitions, but regardless of what you call it, it is the next generation of enterprise AI," Habib says. "Current AI technologies function within pre-determined parameters, but AGI can handle much more complex tasks that require a deeper, contextual understanding. In the future, AI will be capable of learning, reasoning, and adapting across any task or work domain, not just those pre-programmed or trained into it." AGI will also be capable of creative thinking and action independent of its creators. It will be able to operate in multiple realms, completing numerous types of tasks. It is possible that AGI may, in its general effect, be a person. There is some suggestion that personality qualities may be successfully encoded into a hypothetical AGI system, leading it to act in ways that align with certain sorts of people, with particular personality qualities that influence their decision-making. However it is defined, AGI appears to be a distinct possibility in the near future. We simply do not know what it will look like. AGI is still technically theoretical. "How do you get ready for something that big?" Jones asks.
"If you can't even get ready for the basics -- you can't tie your shoe -- how do you control the environment when it's 1,000 times more complicated?" Such a system, which will approach sentience, may be capable of human failings through simple malfunction, through misdirection after hacking events, or even through intentional disobedience of its own. If any human personality traits are encoded, intentionally or not, they ought to be benign or at least beneficial -- a highly subjective and difficult determination to make. AGI needs to be designed with the idea that it can ultimately be trusted with its own intelligence -- that it will act with the interests of its designers and users in mind. Its goals and values must be closely aligned with our own. "AI guardrails are and will continue to come down to self-regulation in the enterprise," Habib says. "While LLMs can be unreliable, we can get nondeterministic systems to do mostly deterministic things when we're specific with the outcomes we want from our generative AI applications. Innovation and safety are a balancing act. Self-regulation will continue to be key for AI's journey."

Disbandment of OpenAI's AGI Readiness Team

Brundage's departure from OpenAI in late October, following the disbandment of its AGI Readiness team, sent shockwaves through the AI community. He joined the company in 2018 as a researcher and had led its policy research since 2021, serving as a key watchdog for potential issues created by the company's rapidly advancing products. The dissolution of his team and his departure followed on the heels of the implosion of its Superalignment team in May, which had served a similar oversight purpose. Brundage said that he would either join a nonprofit focused on monitoring AI concerns or start his own. While both he and OpenAI claimed that the split was amicable, observers have read between the lines, speculating that his concerns had not been taken seriously by the company.
The members of the team who stayed with the company have been shuffled to other departments. Other significant figures at the company have also left in the past year. Though the Substack post in which he extensively described his reasons for leaving and his concerns about AGI was largely diplomatic, Brundage stated that no one is ready for AGI -- fueling the hypothesis that OpenAI and other AI companies are disregarding the guardrails their own employees are attempting to establish. A June 2024 open letter from employees of OpenAI and other AI companies warns of exactly that. Brundage's exit is seen as a sign that the old guard of AI has been sent to the hinterlands -- and that unbridled excess may follow in their absence.

Potential Risks of AGI

As with the risks of narrow AI, those posed by AGI range from the mundane to the catastrophic. "One underappreciated reason there are so few generative AI use cases at scale in the enterprise is fear -- but it's fear of job displacement, loss of control, privacy erosion, and cultural adjustments -- not the end of mankind," Habib notes. "The biggest ethical concerns right now are data privacy, transparency, and algorithmic bias." "You don't just build a super-intelligent system and hope it behaves; you have to account for all sorts of unintended consequences, like AI following instructions too literally without understanding human intent," De Ridder adds. "We're still figuring out how to handle that. There's just not enough emphasis on these problems yet. A lot of the research is still missing." An AGI system with negative personality traits, encoded by its designer intentionally or unintentionally, would likely amplify those traits in its actions.
For example, the Big Five personality model characterizes human personalities according to openness, conscientiousness, extraversion, agreeableness, and neuroticism. If a model is particularly disagreeable, it might act against the interests of the humans it is meant to serve if it decides that is the best course of action. If it is highly neurotic, it might dither over issues that are ultimately inconsequential. There is also concern that AGI models may consciously evade attempts to modify their actions -- essentially, being dishonest with their designers and users.

Such traits could have very consequential effects in moral and ethical decision-making, with which AGI systems might conceivably be entrusted. Biases and unfair decisions might have massive consequences if these systems are given large-scale decision-making power.

Decisions based on inferences about individuals may lead to dangerous effects, essentially stereotyping people on the basis of data -- some of which may have originally been harvested for entirely different purposes. Further, data harvesting itself could increase exponentially if the system decides it is useful. This intersects with privacy concerns: data fed into or harvested by these models may not have been collected with consent, and the consequences could unfairly affect certain individuals or groups.

Untrammeled AGI might also have society-wide effects. Because AGI will have human capabilities, it could wipe out entire employment sectors, leaving people with certain skill sets without a means of gainful employment and leading to social unrest and economic instability.

"AGI would greatly increase the magnitude of cyber-attacks and have the potential to be able to take out infrastructure," Jones adds.
"If you have a bunch of AI bots that are emotionally intelligent and that are talking with people constantly, the ability to spread disinformation increases dramatically. Weaponization becomes a big issue -- the ability to control your systems." Large-scale cyber-attacks that target infrastructure or government databases, or the launch of massive misinformation campaigns, could be devastating.

Tracy Jones, Guidehouse

The autonomy of these systems is particularly concerning. Such events might unfold without any human oversight if the AGI is not properly designed to consult with or respond to its human controllers. The ability of malicious human actors to infiltrate an AGI system and redirect its power is of equal concern. It has even been proposed that AGI might assist in the production of bioweapons.

The 2024 International Scientific Report on the Safety of Advanced AI articulates a host of other potential effects -- and there are almost certainly others that have not yet been anticipated.

What Companies Need To Do To Be Ready

There are a number of steps companies can take to ensure that they are at least marginally ready for the advent of AGI.

"The industry needs to shift its focus toward foundational safety research, not just faster innovation. I believe in designing AGI systems that evolve with constraints -- think of them having lifespans or offspring models -- so we can avoid long-term compounding misalignment," De Ridder advises.

Above all, rigorous testing is necessary to prevent the development of dangerous capabilities and vulnerabilities prior to deployment. Ensuring that the model is amenable to correction is also essential: if it resists efforts to redirect its actions while still in the development phase, it will likely become even more resistant as its capabilities advance. It is also important to build models whose actions can be understood -- already a challenge in narrow AI.
Tracing the origins of erroneous reasoning is crucial if it is to be effectively modified.

Limiting an AGI's curiosity to specific domains may prevent it from taking autonomous action in areas where it may not understand the unintended consequences -- detonating weapons, for example, or cutting off the supply of essential resources if those actions seem like possible solutions to a problem. Models can be coded to detect when a course of action is too dangerous and to stop before executing such tasks.

Ensuring that products are resistant to penetration by outside adversaries during their development is also imperative. If an AGI technology proves susceptible to external manipulation, it is not safe to release into the wild. Any data used in the creation of an AGI must be harvested ethically and protected from potential breaches.

Human oversight must be built into the system from the start: while the goal is to facilitate autonomy, it must be limited and targeted. Coding for conformal procedures, which request human input when more than one solution is suggested, may help to rein in potentially damaging decisions and train models to understand when they are out of line. Such procedures are one instance of a system designed so that humans know when to intervene. There must also be mechanisms that allow humans to step in and stop a potentially dangerous course of action -- variously referred to as kill switches and failsafes.

Ultimately, AI systems must be aligned to human values in a meaningful way. If they are encoded to perform actions that do not align with fundamental ethical norms, they will almost certainly act against human interests.

Engaging with the public on their concerns about the trajectory of these technologies may be a significant step toward establishing a good-faith relationship with those who will inevitably be affected.
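The conformal-style procedure described above -- deferring to a human whenever the system cannot settle on a single, sufficiently safe action -- can be sketched in a few lines. Everything here is an illustrative assumption (the candidate-action tuples, the risk scores, the 0.7 threshold), not any vendor's real safety API:

```python
# Hedged sketch: a human-in-the-loop gate that blocks autonomous execution
# when a planner proposes more than one candidate action, or any action
# whose estimated risk exceeds a threshold. All names and scores are invented.

RISK_THRESHOLD = 0.7  # above this, never act without a person in the loop

def requires_human_review(candidates, risk_threshold=RISK_THRESHOLD):
    """Return True if the proposed actions must be escalated to a human.

    candidates: list of (action_name, risk_score) tuples from an upstream
    planner. Escalate when the planner is ambiguous (multiple candidates)
    or the single candidate is too risky.
    """
    if len(candidates) != 1:
        return True  # ambiguity -> ask a human, per the conformal procedure
    _, risk = candidates[0]
    return risk > risk_threshold

def execute(candidates, approve):
    """Run the sole safe candidate, or defer to the `approve` callback."""
    if requires_human_review(candidates):
        return approve(candidates)  # human picks, edits, or vetoes
    action, _ = candidates[0]
    return f"executed:{action}"

# One unambiguous low-risk plan runs on its own; anything else is deferred.
print(execute([("rotate_logs", 0.1)], approve=lambda c: "deferred"))
print(execute([("shut_valve", 0.2), ("vent_line", 0.4)], approve=lambda c: "deferred"))
```

The `approve` callback is where a kill switch or failsafe would live in a real system; the point of the sketch is only that the escalation rule itself is simple and auditable.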
So too, transparency about where AGI is headed and what it might be capable of could build trust in the companies developing its precursors. Some have suggested that open-source code would allow for peer review and critique. Ultimately, anyone designing systems that may result in AGI needs to plan for a multitude of outcomes and be able to manage each one if it arises.

How Ready Are AI Companies?

Whether the developers of the technology leading to AGI are actually ready to manage its effects is, at this point, anyone's guess. The larger AI companies -- OpenAI, DeepMind, Meta, Adobe, and upstart Anthropic, which focuses on safe AI -- have all made public commitments to maintaining safeguards. Their statements and policies range from vague gestures toward AI safety to elaborate theses on the obligation to develop thoughtful, safe AI technology. DeepMind, Anthropic and OpenAI have released detailed frameworks for how they plan to align their AI models with human values.

One survey found that 98% of respondents from AI labs agreed that labs should conduct pre-deployment risk assessments, dangerous-capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.

Even in their public statements, it is clear that these organizations are struggling to balance rapid advancement with responsible alignment, development of models whose actions can be interpreted, and monitoring of potentially dangerous capabilities.

Alexander De Ridder, SmythOS

Right now, companies are falling short when it comes to monitoring the broader implications of AI, particularly AGI.
"Most of them are spending only 1-5% of their compute budgets on safety research, when they should be investing closer to 20-40%," says De Ridder. They do not seem to know whether debiasing their models or subjecting them to human feedback is actually sufficient to mitigate the risks those models might pose down the line.

Other organizations have not even gotten that far. "A lot of organizations that are not AI companies -- companies that offer other products and services that utilize AI -- do not have AI security teams yet," Jones says. "They haven't matured to that place."

However, she thinks that is changing. "We're starting to see a big uptick across companies and government in general in focusing on security," she observes, adding that in addition to dedicated safety and security teams, there is a movement to embed safety monitoring throughout the organization. "A year ago, a lot of people were just playing with AI without that, and now people are reaching out. They want to understand AI readiness and they're talking about AI security."

This suggests a growing realization among both AI developers and their customers that serious consequences are a near inevitability. "I've seen organizations sharing information -- there's an understanding that we all have to move forward and that we can all learn from each other," Jones claims.

Whether the leadership and the actual developers behind the technology are taking the recommendations of any of these teams seriously is a separate question. The exodus of multiple OpenAI staffers -- and the letter of warning they signed earlier this year -- suggests that at least in some cases, safety monitoring is being ignored or downplayed.

"It highlights the tension that is going to be there between really fast innovation and ensuring that it is responsible," Jones adds.
  • AI's on Duty, But I'm the One Staying Late
    www.informationweek.com
Asaff Zamir, VP of Global Customer Success, Solution Architecture, and Business Operations at AI21
January 14, 2025 | 4 Min Read
Brain light via Alamy Stock

Workplace efficiency is a significant challenge in today's fast-paced business environment. With new technologies entering our workplaces daily, employees spend more time upskilling and adapting to new tools than ever before.

While these innovations are intended to enhance productivity and efficiency, employees often report the opposite experience. An Upwork study (July 2023) reveals that while 96% of C-suite leaders expect AI to boost worker productivity, 77% of employees report that AI has increased their workload instead. This disconnect suggests a gap between the potential of AI technologies and their current implementation in workplaces.

In a survey I conducted earlier this year within a professional community of 5,000 people in customer success and other go-to-market functions, several clear challenges emerged -- challenges that are echoed in multiple research studies and provide a deeper look into these gaps.

Adoption of AI Workflows in Enterprise Environments

I believe two distinct phases illustrate the progression of AI adoption in enterprise workflows: first, the implementation of large language models (LLMs) for specific, high-impact use cases with a clear return on investment (ROI); second, the adoption of intelligent, proactive personal assistants that will revolutionize the way employees engage with their work.

Phase 1: Adoption of LLMs, with or without retrieval-augmented generation (RAG), for a specific use case with a clear and easily achieved ROI. The technology is already here.
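Since this phase centers on LLM-plus-RAG deployments, a minimal sketch of the pattern may be useful. The document snippets, the crude word-overlap scorer (standing in for a real embedding index), and all function names below are illustrative assumptions, not a production design:

```python
# Hedged sketch of the RAG pattern behind a grounded chatbot: retrieve the
# most relevant snippets for a query, then build a prompt that instructs the
# LLM to answer only from them. The toy word-overlap scorer stands in for a
# real vector store; all snippets are invented banking-policy examples.

DOCS = [
    "Wire transfers above $10,000 require a second approval.",
    "Savings accounts accrue interest monthly.",
    "Lost cards can be frozen instantly in the mobile app.",
]

def retrieve(query, docs, k=2):
    """Rank docs by crude word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(build_prompt("freeze my lost card", DOCS))
```

In a real deployment the retrieval step would query an embedding index over the institution's documents, and the assembled prompt would be sent to the LLM; the grounding instruction is what keeps answers tied to approved content.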
The primary challenge is not the technology itself but identifying use cases that generate significant value relatively easily, while addressing concerns over data privacy, data structuring, and availability. Hence, use cases like auto-text generation or personalized, grounded chat solutions built on LLMs enhanced by RAG are seeing growing demand -- and far less resistance from employees -- in industries where significant time is invested in research and in environments that require a large amount of repetitive manual work.

Here are a few use cases we have already seen, which I believe will spearhead this phase:

Boosting customer service. Banks, for example, have relied on chatbots for 24/7 customer support, but generic chatbots struggle to provide the personalized, specific answers customers expect. As a result, banks spend too much time on inquiries and too little on customer engagement and experience. By integrating LLMs with a RAG engine, banks can offer personalized, grounded, real-time assistance. According to a study by Salesforce, customer expectations for personalization have increased, with 81% of customers now expecting more personalized experiences than in the past. After implementing a chat-plus-RAG solution, one leading bank reported a 40% reduction in support tickets, freeing human agents' time for strategic, proactive conversations. This is a win for both customer satisfaction and employee fulfillment.

Empowering research- and medical-oriented workflows.
We learned that doctors in several healthcare institutions, for example, often search manually for medical research documentation and guidelines -- a tedious and time-consuming process. Oncologists face significant time burdens when searching for and managing medical literature, including time spent navigating electronic health records and managing documentation, rather than engaging directly in patient care. In a field where time equals life, inefficiencies can have serious consequences. A chatbot-plus-RAG system that lets doctors interact with medical guidelines and literature gives them accurate answers in real time, eliminating the need to wade through multiple articles.

Enhancing e-commerce efficiency. The e-commerce sector has seen phenomenal growth in recent years, but many sellers struggle to provide high-quality product information. Incomplete descriptions, missing specifications, and poor-quality images often result in customer dissatisfaction, increased returns, and eroded trust. A Syndigo (2024) report highlighted that 65% of product returns are due to incomplete or inaccurate product descriptions. Additionally, 83% of global respondents said they would abandon a website if they couldn't find sufficient product information, and 73% of shoppers think less of a brand if they find inconsistent or incorrect product details. These issues not only hurt sales but damage brand trust and lead to higher return rates. LLM-powered product description generation can automate the creation of detailed, engaging, and accurate descriptions, enabling sellers to offer a richer shopping experience at scale.

Phase 2: We will witness the rise of personal genius assistants that go beyond expertise in a specific use case and seamlessly integrate into workplace ecosystems.
These assistants will not only automate repetitive tasks and answer simple questions but also proactively suggest relevant context and resources before important milestones. Instead of merely responding to prompts, they will anticipate needs, provide actionable insights, and help employees stay one step ahead.

Imagine an assistant that can research complex questions, providing fast, curated information and recommendations. This will allow employees to focus on higher-order, creative tasks while feeling more fulfilled at work. The assistant will act as an intelligent collaborator, enhancing productivity and fostering a sense of accomplishment among employees.

The journey toward fulfilling employees' real potential lies in making AI work for us, not the other way around. By adopting technologies like LLMs with RAG and developing personal genius assistants, enterprises can transform workflows, enhance productivity, and, most importantly, allow employees to focus on meaningful, value-generating tasks.

About the Author

Asaff Zamir is VP of Global Customer Success, Solution Architecture, and Business Operations at AI21. A recognized thought leader in customer success and operations, he was named one of the Top 100 Customer Success Strategists from 2020 to 2023. Eight years ago, Asaff founded the Israel CS community and continues to contribute to its growth.
He is also a guest lecturer at several academic institutions and developed Israel's first-ever customer success academic course as part of the MBA program at MTA (The Academic College of Tel Aviv-Yaffo), which launched in 2022. Previously, Asaff served as COO at Zencity and, before that, built and led customer success teams at Siemplify (acquired by Google) and Mobilogy (Cellebrite).
  • Why Your Business May Want to Shift to an Industry Cloud Platform
    www.informationweek.com
John Edwards, Technology Journalist & Author
January 13, 2025 | 5 Min Read
Wavebreakmedia Ltd via Alamy Stock Photo

Unlike their generic cloud counterparts, industry cloud platforms provide specialized services tailored to meet the needs of businesses in specific industries, such as healthcare, finance, or manufacturing.

Industry clouds can be best understood as industry-specific solutions, says Brian Campbell, a principal at Deloitte Consulting. He notes in an email interview that all cloud providers have evolved significantly over the past few years. "Initially, they offered infrastructure as a service (IaaS), then moved to platform as a service (PaaS), and now we see the emergence of business outcomes as a service."

A growing number of cloud service providers now address business challenges unique to specific industries. "These problems are deeply embedded in ... value chains and require tailored solutions to achieve desired business outcomes," Campbell says. He observes that the number of industry cloud solutions is expanding rapidly, driven largely by sophisticated technology advancements, such as GenAI. "This growth allows businesses to solve industry-specific challenges more effectively and efficiently."

Industry cloud services typically embed the data model, processes, templates, accelerators, security constructs, and governance controls required by the adopter's industry, says Shriram Natarajan, a director at technology research and advisory firm ISG, in an online interview. "This [approach] allows faster development of new functionality, better security and governance, and an enhanced user/stakeholder experience."

Industry cloud platforms are pre-configured with industry-specific features, integrations, and workflows that cater to the unique regulatory, operational, and customer needs of a particular sector, says Herb Hogue, CTO at systems integrator Myriad360, in an online interview.
"Examples include Epic Cloud for healthcare, Siemens' Insights Hub for industrial IoT, SAP for inventory and workflow management, Oracle for ERP and financial services, and CoreWeave, which provides a cloud infrastructure optimized for AI and high-performance computing."

Multiple Benefits

Campbell observes that moving to an industry cloud has already helped many enterprises connect with customers and suppliers in highly compelling ways. He notes that adopters generally obtain the greatest benefit when they tie their use of an industry cloud to their business strategy, business outcomes, and return on investment. Other significant benefits include faster innovation, modularity (as new technologies or approaches become available), increased efficiency, more effective business processes, and greater employee engagement.

Enterprises spanning many industries can benefit significantly from moving to an industry cloud platform, Campbell says. "Businesses that are faced with many regulations and operational requirements can especially benefit from the specialized services [of] industry cloud platforms," he notes, adding that many industry cloud platforms are preconfigured to meet specific needs, which can help accelerate the time to value realized.

Many enterprises take a blinkered view of verticalized solutions, Natarajan says. "They tend to see the platforms they already have in-house and look for solutions that these platforms provide." He believes that enterprise IT and business teams can both benefit from surveying the landscape of verticalized industry cloud platforms.

Cloud platforms are continuously evolving and expanding in scope, offering new capabilities that make them attractive to businesses looking to scale rapidly within their industry. "However, businesses must weigh the benefits of speed and functionality against long-term costs and the potential for vendor lock-in," Hogue warns.
"While these platforms often provide faster implementation and industry-specific capabilities at a lower initial cost compared to custom-built solutions, ongoing costs such as subscription fees and upgrades can accumulate over time." He advises potential adopters to carefully evaluate a platform's total cost, its ability to match or exceed long-term business goals, and its potential for continuity and adaptability.

Getting Started

Enterprises that are ready to transition to an industry cloud platform should begin by taking a holistic approach to vendor selection. "The transformation should be supportive of your business strategy and ... driven by where to differentiate in order to best meet the needs of customers, employees, and other stakeholders," Campbell says. He also recommends following the fastest possible path to value. "Numerous providers offer industry cloud solutions, and existing relationships and platform preferences may facilitate an easier integration."

Campbell suggests identifying the specific business requirements and regulatory needs the industry cloud solution will address. He recommends evaluating providers by comparing their features, compliance capabilities, and pricing. "Align use of the solutions to your business strategy and then create a detailed implementation plan that includes goals, timelines, and key performance indicators (KPIs)." Team training is also important: "Help them understand and utilize the new platform effectively."

Finally, consider data sharing and security requirements when evaluating an industry cloud platform. Prioritize flexibility and the capacity for innovation, Campbell advises. "The market is evolving quickly, and modular implementations are replacing monolithic ones, offering user-friendly building blocks that are continuously enhanced."

About the Author

John Edwards is a veteran business technology journalist.
His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • How Do Companies Know if They Overspend on AI and Then Recover?
    www.informationweek.com
    Pouring money into a strategy can be more of a problem than a solution, especially when companies explore new technology.
  • Ensure Your Organizations Cloud Is Ready for AI Innovation
    www.informationweek.com
Sekhar Koduri, Senior Director, Enterprise Offerings, DMI
January 13, 2025 | 5 Min Read
Aleksia via Alamy Stock

Global spending on public cloud services will reach $805 billion this year and double by 2028. This rapid growth is being driven, in part, by a growing interdependence between artificial intelligence innovation and cloud infrastructure.

AI systems demand enormous amounts of computational power and data. Meeting those demands in an on-premises data center is extremely expensive and impractical for most organizations. Conversely, the cloud provides a scalable and adaptable environment for AI to thrive. For instance, cloud platforms provide on-demand, fixed, and ephemeral compute resources for advanced AI processing -- resources that are crucial for rapid prototyping and experimentation in AI innovation.

However, unexpected costs, security issues, and regulatory concerns prevent cloud investments from reaching their full potential. Without a solid, sustainable cloud infrastructure, a strong data foundation, and enterprise governance, organizations will not be able to reap the many benefits AI has to offer. Fortunately, the prevalent challenges surrounding data management, cybersecurity, enterprise governance, cost containment, and change management are solvable.

Strong Data Foundation Drives Business Outcomes

AI applications demand vast amounts of data for training, testing, and validation, necessitating robust data storage solutions. As such, organizations must strengthen their data governance, integration, preparation, scalability, and financial policies to prepare a cloud environment for AI innovation.

Cloud platforms provide scalable computing power for AI workloads, eliminating the need for physical servers. With cloud-native object storage and distributed frameworks, organizations can efficiently process and store large datasets to ensure scalability and optimize performance.
Also, modern semantic databases can provide the data foundation for ever-growing generative AI workloads.

Organizations can use the pay-as-you-go model to eliminate upfront costs and dynamically scale resources based on demand. Additionally, multi-tier storage options optimize costs by placing data according to access patterns, and serverless platforms can provide cost-effective ways to store and analyze petabytes of data. A robust data foundation not only supports scalability and cost-efficiency but also ensures ethical alignment and operational excellence.

Security at the Core

For organizations looking to incorporate AI-powered tools into their cloud environments, it's imperative to secure the environment beforehand. Concerningly, 80% of data breaches in 2023 involved data stored in the cloud. In dynamic environments like the cloud, a zero-trust philosophy is a necessity.

Several key components of zero trust can fortify an organization's cloud security. For instance, asset discovery and misconfiguration monitoring help organizations maintain visibility into their cloud environment. Further, cloud identity and entitlement management ensures users can access only the minimum resources and permissions necessary to perform their tasks.

No cyber defense is foolproof -- even with these strategies in place, threat detection and incident response tools remain critical in case a malicious actor does breach the network. These should include continuous monitoring, vulnerability scanning, and guided remediation across cloud assets, workloads, and identities.
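The misconfiguration monitoring and least-privilege checks described above can be illustrated with a toy scan over an asset inventory. The inventory schema, field names, and rules here are invented for illustration; a real scanner would pull this data from a provider's asset APIs and apply far richer policies:

```python
# Hedged sketch of zero-trust hygiene checks: scan a cloud asset inventory
# for publicly readable buckets and wildcard IAM grants that violate least
# privilege. All field names and the inventory itself are illustrative.

def find_misconfigurations(assets):
    """Return (asset_id, issue) pairs for violations of two simple rules."""
    findings = []
    for a in assets:
        if a.get("type") == "bucket" and a.get("public_read"):
            findings.append((a["id"], "bucket allows public read"))
        for grant in a.get("iam_grants", []):
            if grant.get("actions") == ["*"]:  # wildcard breaks least privilege
                findings.append((a["id"], f"wildcard grant to {grant['principal']}"))
    return findings

inventory = [
    {"id": "logs-bucket", "type": "bucket", "public_read": True, "iam_grants": []},
    {"id": "build-vm", "type": "vm",
     "iam_grants": [{"principal": "ci-bot", "actions": ["*"]}]},
    {"id": "data-bucket", "type": "bucket", "public_read": False,
     "iam_grants": [{"principal": "etl", "actions": ["read"]}]},
]

for asset_id, issue in find_misconfigurations(inventory):
    print(asset_id, "->", issue)
```

Run continuously against a live inventory, checks like these are what turn "asset discovery" from a one-off audit into ongoing visibility.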
Once security is assured, organizations can focus on other pressing challenges.

Robust Enterprise Governance Framework Is Essential

In response to the rapidly evolving field of AI, including generative AI, organizations should establish a multidisciplinary team dedicated to integrating AI within rigorous regulatory frameworks, such as the NIST AI Risk Management Framework. Organizations should adhere to best practices around scalability, data management, and automation to create a secure, ethical, and efficient environment for AI deployment. Helpfully, cloud providers offer native capabilities for navigating compliance requirements; for instance, some providers offer model-bias and explainability features and generative AI safeguards. Overcoming governance challenges through structured frameworks ensures that AI systems align with organizational goals and societal values, paving the way for responsible AI innovation.

FinOps Keeps Ballooning Cloud Costs Under Control

One of the most prevalent cloud migration and management concerns is cost. According to a McKinsey report, the average company spends 14% more than intended on cloud migration each year, and 75% of organizations exceed their planned budgets. Cloud financial operations, or FinOps, provides both a technological and an organizational solution.

The technological side enables cost observability through dashboards, regular reporting, and alerts for cost overruns. These tools provide visibility into current and future costs and enable proactive management, so organizations aren't surprised by cloud invoices. Additional FinOps procedures and policies include approval processes for resource changes that affect costs and ongoing cloud cost forecasts. Implementing FinOps solutions and procedures drives financial accountability, efficiency, and overall cost control in cloud environments.
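The cost-overrun alerting at the heart of FinOps tooling reduces, at its simplest, to projecting month-end spend from the run rate and comparing it to budget. The figures, thresholds, and function names below are invented for illustration; real FinOps platforms use far more sophisticated forecasting:

```python
# Hedged sketch of FinOps overrun alerting: project month-end spend from a
# naive linear run rate and alert when the projection crosses the budget.
# All dollar figures and the 90% warning threshold are illustrative.

def projected_month_end(spend_to_date, day_of_month, days_in_month=30):
    """Linear run-rate projection: assumes spend continues at today's pace."""
    return spend_to_date / day_of_month * days_in_month

def budget_alerts(spend_to_date, day_of_month, budget, warn_ratio=0.9):
    """Classify the current trajectory as OK, a warning, or an overrun."""
    projection = projected_month_end(spend_to_date, day_of_month)
    if projection > budget:
        return f"OVERRUN: projected ${projection:,.0f} vs budget ${budget:,.0f}"
    if projection > warn_ratio * budget:
        return f"WARN: projection within 10% of budget (${projection:,.0f})"
    return "OK"

# $56k spent by day 12 against an $80k monthly budget projects to $140k.
print(budget_alerts(56_000, 12, 80_000))
```

The value of even this crude check is timing: it fires mid-month, while there is still time to rein in spend, rather than when the invoice arrives.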
As a result, organizations can optimize resources and investments.

OCM Unites People, Processes & Technology

Any technology or procedure is only as effective as the people using it. Organizations tend to underestimate the role of their workforce in ensuring a successful, sustainable cloud deployment. Leadership must focus on the employee perspective and experience to prevent delays and ensure successful cloud deployments. Organizational change management, or OCM, is a fundamental part of the transformational journey to the cloud.

Leadership can ensure a smooth transition to the cloud through effective change communication, stakeholder collaboration, transparency, and thorough education. Organizations must account for the triumvirate of people, processes, and technology throughout cloud migration, deployment, and management. When the three work in harmony, organizations can significantly improve their operations, maximize the value of their cloud infrastructure, and capitalize on the boundless potential of AI.

About the Author

Sekhar Koduri is Senior Director, Enterprise Offerings, at DMI. As DMI's lead for the data and analytics practice under the DMI CTO/EO office, he spearheads transformative initiatives by leveraging advanced analytics, data science, and AI for DMI's customers. He designs and implements robust data-driven strategies and AI workloads, enhancing operational efficiency and service delivery across sectors while ensuring AI safety and transparency. He is currently engaged in initiatives exploring practical applications of generative AI and large language models in the rapidly evolving AI landscape, integrating GenAI capabilities into customers' operations to unlock business utility and transformative capabilities.
  • Addressing the Security Risks of AI in the Cloud
    www.informationweek.com
Carrie Pallardy, Contributing Reporter
January 13, 2025 | 7 Min Read
Kittipong Jirasukhanont via Alamy Stock Photo

The majority of organizations -- 89% of them, according to the 2024 State of the Cloud Report from Flexera -- have adopted a multicloud strategy. Now they are riding the wave of the next big technology: AI. The opportunities seem boundless: chatbots, AI-assisted development, cognitive cloud computing, and the list goes on. But the power of AI in the cloud is not without risk.

While enterprises are eager to put AI to use, many still grapple with data governance as they accumulate more and more information. AI has the potential to amplify existing enterprise risks and introduce entirely new ones. How can enterprise leaders define these risks, both internal and external, and safeguard their organizations while capturing the benefits of cloud and AI?

Defining the Risks

Data is the lifeblood of cloud computing and AI. And where there is data, there is security risk and privacy risk. Misconfigurations, insider threats, external threat actors, compliance requirements, and third parties are among the pressing concerns enterprise leaders must address.

Risk assessment is not a new concept for enterprise leadership teams, and many of the same strategies apply when evaluating the risks associated with AI. "You do threat modeling and your planning phase and risk assessment. You do security requirement definitions [and] policy enforcement," says Rick Clark, global head of cloud advisory at UST, a digital transformation solutions company.

As AI tools flood the market and various business functions clamor to adopt them, the risk of exposing sensitive data grows and the attack surface expands. For many enterprises, it makes sense to consolidate data to take advantage of internal AI, but that is not without risk.
"Whether it's for security or development or anything, [you're] going to have to start consolidating data, and once you start consolidating data you create a single attack point," Clark points out.

And those are just the risks security leaders can more easily identify. The abundance of cheap and even free GenAI tools available to employees adds another layer of complexity. "It's [like] how we used to have the shadow IT. It's repeating again with this," says Amrit Jassal, CTO at Egnyte, an enterprise content management company.

AI comes with novel risks as well. "Poisoning of the LLMs, that I think is one of my biggest concerns right now," Clark shares with InformationWeek. "Enterprises aren't watching them carefully as they're starting to build these language models." How can enterprises ensure the data feeding the LLMs they use hasn't been manipulated?

This early in the AI game, enterprise teams are faced with the challenge of managing the behavior of, and testing, systems and tools that they may not yet fully understand. "What's new and difficult and challenging in some ways for our industry is that the systems have a kind of nondeterministic behavior," Mark Ryland, director of the Office of the CISO for cloud computing services company Amazon Web Services (AWS), explains. "You can't comprehensively test a system because it's designed in part to be creative, meaning that the very same input doesn't result in the same output."

The risks of AI and cloud can multiply with the complexity of an enterprise's tech stack. With a multicloud strategy and an often growing supply chain, security teams have to think about a sprawling attack surface and myriad points of risk. "As an example, we have had to take a close look at least-privilege things, not just for our customers but for our own employees as well. And then that has to be extended not to just one provider but to multiple providers," says Jassal.
"It definitely becomes much more complex."

AI Against the Cloud

Widely available AI tools will be leveraged not only by enterprises but also by the attackers that target them. At this point, the threat of AI-fueled attacks on cloud environments is moderately low, according to IBM's X-Force Cloud Threat Landscape Report 2024. But the escalation of that threat is easy to imagine. AI could exponentially increase threat actors' capabilities via coding assistance, increasingly sophisticated campaigns, and automated attacks.

"We're going to start seeing that AI can gather information to start making personalized phishing attacks," says Clark. "There's going to be adversarial AI attacks, where they exploit weaknesses in your AI models even by feeding data to bypass security systems."

AI model developers will, naturally, attempt to curtail this activity, but potential victims cannot assume this risk goes away. "The providers of GenAI systems obviously have capabilities in place to try to detect abusive use of their systems, and I'm sure those controls are reasonably effective but not perfect," says Ryland.

Even if enterprises opt to eschew AI for now, threat actors are going to use that technology against them. "AI is going to be used in attacks against you. You're going to need AI to combat it, but you need to secure your AI. It's a bit of a vicious circle," says Clark.

The Role of Cloud Providers

Enterprises still have responsibility for their data in the cloud, while cloud providers play their part by securing the infrastructure of the cloud. "The shared responsibility still stays," says Jassal. "Ultimately if something happens, a breach etcetera, in Egnyte's systems, Egnyte is responsible for it, whether it was due to a Google problem or an Amazon problem. The customer doesn't really care."

While that fundamental shared responsibility model remains, does AI change the conversation at all? Model providers are now part of the equation.
"Model providers have a distinct set of responsibilities," says Ryland. "Those entities [take] on some responsibility to ensure that the models are behaving according to the commitments that are made around responsible AI."

While different parties -- users, cloud providers, and model providers -- have different responsibilities, AI is giving them new ways to meet those responsibilities. AI-driven security, for example, is going to be essential for enterprises to protect their data in the cloud, for cloud providers to protect their infrastructure, and for AI companies to protect their models.

Clark sees cloud providers playing a pivotal role here. "The hyperscalers are the only ones that are going to have enough GPUs to actually automate processing threat models and the attacks. I think that they're going to have to provide services for their clients to use," he says. "They're not going to give you these things for free. So, these are other services they're going to sell you."

AWS, Microsoft, and Google each offer a host of tools designed to help customers secure GenAI applications. And more of those tools are likely to come. "We're definitely interested in increasing the capabilities that we provide for customers for risk management, risk mitigation, things like more powerful automated testing tools," Ryland shares.

Managing Risk

While the risks of AI and cloud are complex, enterprises are not without resources to manage them. Security best practices that existed before the explosion of GenAI are still relevant today.
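One such long-standing practice, least-privilege access to the data feeding a GenAI system, can be sketched as a simple permission check. This is a hypothetical illustration: the role names, dataset names, and the `can_send_to_model` function are invented here, and a real deployment would rely on a cloud provider's IAM rather than an in-application table.

```python
# Hypothetical sketch: gate which datasets a caller's role may expose
# to an internal GenAI tool. Names are invented for illustration only.

# Map each role to the datasets it is permitted to include in prompts.
ROLE_PERMISSIONS = {
    "support_agent": {"tickets"},
    "data_scientist": {"tickets", "usage_metrics"},
}

def can_send_to_model(role: str, dataset: str) -> bool:
    """Allow a prompt to include a dataset only if the caller's role permits it."""
    return dataset in ROLE_PERMISSIONS.get(role, set())

print(can_send_to_model("support_agent", "tickets"))        # permitted
print(can_send_to_model("support_agent", "usage_metrics"))  # denied
```

The point of the sketch is that the same allow-list discipline applied to databases for years transfers directly to prompts: data the role could not read before should not reach the model now.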
"Building and operating an IT system with the right kinds of access controls, least privilege, making sure that the data's carefully guarded -- all these things that we would have done traditionally, we can now apply to a GenAI system," says Ryland.

Governance policies, and controls that ensure those policies are followed, will also be an important strategy for managing risk, particularly as it relates to employee use of this technology. "The smart CISOs [don't] try to completely block that activity but rather quickly create the right policies around that," says Ryland. "Make sure employees are informed and can use the systems when appropriate, but also get proper warnings and guardrails around using external systems."

And experts are developing tools specific to the use of AI. "There're a lot of good frameworks in the industry, things like the OWASP top 10 risks for LLMs, that have significant adoption," Ryland adds. "Security and governance teams now have some good industry practices codified with input from a lot of experts, which help them to have a set of concepts and a set of practices that help them to define and manage the risks that arise from a new technology."

The AI industry is maturing, but it is still relatively nascent and quickly evolving. There is going to be a learning curve for enterprises using cloud and AI technology. "I don't see how it can be avoided. There will be data leakages," says Jassal. Enterprise teams will have to work through this learning curve, and its accompanying growing pains, with continuous risk assessment and management and new tools built to help them.

About the Author
Carrie Pallardy, Contributing Reporter
Carrie Pallardy is a freelance writer and editor living in Chicago.
She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
  • Why So Many Customer Experiences Are Mediocre at Best
    www.informationweek.com
Lisa Morgan, Freelance Writer | January 10, 2025 | 8 Min Read
designer491 via Alamy Stock

Sometimes, it may seem that customers are never satisfied. They abuse call center staff, rage online or, worse, abandon the brand. Functional gaps within organizations, siloed technology, a lack of accountability, and erroneous assumptions are the main reasons companies don't understand how customers really feel.

"Not looking at customer journeys holistically means broken journeys. There's a lot of focus on initial interactions, such as initial calls into the contact center, but many times there's a lack of focus on fulfilling customers' intent," says Jay Patel, SVP and GM, Webex Customer Experience Solutions at Cisco. Additionally, a lack of data-driven insights leads many businesses to fail to tailor their services to meet individual customer needs. Disconnected communication channels often lead to inconsistent experiences, as customers may receive varying levels of service depending on the platform they use.

Many companies rely on legacy systems that are unable to support the dynamic needs of modern consumers. Additionally, the lack of a unified data strategy hampers a company's ability to gain a holistic view of the customer journey, resulting in fragmented interactions. Similarly, siloed organizations have led to disparate and separate decision-making, processes, technology, and more across IT and lines of business. And because call centers are swamped, improving customer experience starts with improving employee experience, Patel says.

To deliver exceptional customer experiences, organizations must recognize the direct connection between employee satisfaction and customer satisfaction. "A positive employee experience translates into better customer interactions," says Patel. Striking the right balance between automation and human touch is also critical. While virtual agents can handle routine tasks, customers should always have access to human support when needed.
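The balance Patel describes, automation for routine tasks with a guaranteed path to a person, might look like this simple routing rule. This is a hypothetical sketch: the intent names, the `route` function, and the 0.8 confidence threshold are all invented for illustration, not taken from any vendor's product.

```python
# Hypothetical sketch: route a contact-center interaction to a bot or
# a human. Intents and the threshold are invented for illustration.

def route(intent: str, confidence: float, user_asked_for_human: bool) -> str:
    """Return 'bot' or 'human' for an incoming customer interaction."""
    routine = {"reset_password", "check_order_status"}
    if user_asked_for_human:
        return "human"  # never trap the customer in the bot
    if intent in routine and confidence >= 0.8:
        return "bot"    # routine request, confidently understood
    return "human"      # ambiguous or non-routine requests escalate

print(route("reset_password", 0.95, False))  # bot
print(route("cancel_account", 0.95, False))  # human
```

The design choice the sketch encodes is the one Patel makes explicit: the human path is the default, and automation must earn the interaction rather than the reverse.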
Leveraging data analytics to understand customer preferences and behaviors enables more personalized and effective service. This data can also be used to map customer journeys end-to-end to ensure seamless and consistent experiences across all channels where customers are engaged. Additionally, implementing AI should be approached with intention, focusing on enhancing customer service without compromising the personalized, human-centric elements that drive loyalty and trust.

"It's important to have a strategy and roadmap for transforming CX. Over time, it'll be important to innovate and adapt -- making this not a one-time activity," says Patel. "Continuous measurement and reviews are needed to ensure the organization continues to innovate and improve experiences."

Customer Satisfaction Less Important Than Cost?

Melissa Copeland, founder and principal at customer experience consultancy Blue Orbit Consulting, says customer experience design should include three different perspectives: the customer, the company, and the employee. From a customer perspective, it's helpful to understand what they expect, why they call, and the outcome they anticipate. From a company perspective, consider how to deliver a consistent brand image, drive loyalty, and maintain or grow revenue in a manner that improves share of wallet. Then, from an employee perspective, understand how they receive the customer, what they are incented to do, and how much support they have.

"The magic is when all three [perspectives] are knit together in an ecosystem supported by processes and technology that make all these things happen quickly and synchronized," says Copeland. Savvy organizations start with an understanding of the customer and sample personas or profiles. From there, an organization can create the desired customer experience and build the systems and processes to deliver it.
The best experiences are often driven by looking at things from a customer perspective. Customer experience is a choice companies make. For example, have they gone overboard with automation, or do they want to make it easy for the customer to reach a human? Is cost more important than customer satisfaction?

"When a company creates [an experience] based on what is cost-effective or easy for the company rather than the customer, we get into a mismatch [of] expectations, language, and often outcomes," says Copeland. "The other common challenge is [technology implementation]. Often self-service or chatbots [are built] without spending the time and resources to be sure they line up with that desired customer experience."

John Rossman, founder and managing partner at strategy and management advisory firm Rossman Partners, says a lack of clear metrics, an absence of accountability, and short-term optimization over long-term loyalty are issues.

The Right Things Aren't Being Measured

Many organizations fail to establish insightful, actionable metrics that truly reflect the customer experience. "Without clarity on what defines excellence, it's nearly impossible to achieve it. Metrics must measure what matters most to customers, not just what's easy to track," says Rossman. Turning these metrics into service level agreements (SLAs) that hold teams accountable for delivering an exceptional customer experience is another gap. SLAs are a commitment to excellence -- a pledge that customers can count on and that serves as a forcing function for accountability and improvement.

The real reason, however, lies in prioritizing short-term financial results at the expense of long-term customer loyalty. Organizations often sacrifice sustainable growth, which is built on trust and exceptional experiences, for immediate financial returns. Transforming customer experiences isn't just about reacting to complaints or tweaking processes.
"It's about creating clarity, aligning teams on a bold vision of customer delight, and maintaining velocity in execution," says Rossman. "Organizations that prioritize customer loyalty, deeply understand customer pain points, and apply resources to fix the root causes are the ones that thrive. This is Amazon's approach."

Customer Data Is a Mess

Brands have a hard time meeting this expectation mostly because they lack a full, detailed understanding of their customers. With data siloed across channels and processes, one functional team might know what the customer is doing on one channel, but not another. The customer might receive what the brand considers an exceptional CX on that one channel, but from the customer's perspective it is an uneven experience because it does not consider or reflect the entirety of the customer's journey with the brand.

"[W]hile breaking down silos is a great starting point, many brands fail to prioritize data quality. To truly understand a customer, all incoming data must be cleansed, normalized, enriched, and precisely matched so that brands can accurately distinguish one customer from another, and even understand a customer in the dynamics of various relationships, such as in the context of a household or business," says John Nash, chief marketing and strategy officer at strategy and management advisory firm Redpoint Global. "It is impossible to deliver a real-time, personalized experience when the company is still trying to figure out which customer is visiting the website, dialing the call center, engaging with a chatbot, etc."

Organizations should have unified customer profiles and make those profiles available and accessible across the enterprise.
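The cleanse/normalize/match steps Nash describes can be illustrated with a minimal sketch. This is hypothetical: the field names and matching rules are invented for illustration, and production identity resolution uses far richer signals (postal address, device IDs, household graphs) than a two-field comparison.

```python
# Hypothetical sketch of identity resolution: normalize raw records,
# then match on canonical identifiers. Fields and rules are invented.

def normalize(record: dict) -> dict:
    """Cleanse and normalize the identifiers used for matching."""
    return {
        "email": record.get("email", "").strip().lower(),
        "phone": "".join(ch for ch in record.get("phone", "") if ch.isdigit()),
    }

def same_customer(a: dict, b: dict) -> bool:
    """Decide whether two records refer to one customer, post-normalization."""
    a, b = normalize(a), normalize(b)
    return bool((a["email"] and a["email"] == b["email"])
                or (a["phone"] and a["phone"] == b["phone"]))

web_visit = {"email": "Pat@Example.com ", "phone": ""}
call_center = {"email": "pat@example.com", "phone": "555-0100"}
print(same_customer(web_visit, call_center))  # True: same email once normalized
```

Without the normalization step, the trailing space and capitalization in the web record would make the same person look like two customers, which is exactly the fragmented-profile problem Nash describes.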
A unified profile should ingest customer data from all possible sources, continuously perform data quality processes as data is ingested, and apply advanced identity resolution functionality to accurately distinguish one customer from another.

Watch Out for Confirmation Bias

Dennis Lenard, CEO at Creative Navy UX Agency, says he's observed that confirmation bias can undermine the creation of truly exceptional customer experiences. Many subpar experiences persist not because teams lack skills or resources but because they're unknowingly trapped in patterns of organizational blindness. This happens when the very structures creating the problems prevent teams from identifying and addressing those issues objectively. Cognitive dissonance often compounds the issue, as individuals struggle to reconcile conflicting information that challenges their existing beliefs or assumptions.

"One of the biggest challenges I've observed is how flawed thinking can skew the problem-solving process. For instance, teams often claim they are validating solutions internally. However, this approach is inherently flawed," says Lenard. "Based on how learning processes and logical reasoning work, we can only invalidate assumptions through testing and experimentation. Confirmation bias makes it tempting to interpret any evidence as supporting a hypothesis, even when it doesn't."

Joe Crawford, global head of technology at AI-fueled customer intelligence solutions provider Glassbox, agrees. Too often, companies assume they know what customers want and then pour their resources into those ideas.
"[W]hen you consider all the sources that suggest customers think CX has gotten worse in recent years, it highlights an obvious misunderstanding between what customers want versus what organizations think customers want."

Customer Experience Is Everyone's Job

Toya Del Valle, chief customer officer at workforce agility platform Cornerstone OnDemand, says poor customer experiences are caused by misalignment across business functions and customer expectations. "When customer perspectives are not incorporated into all business processes, it heightens the risk of a substandard experience substantially. To ensure customer satisfaction, customer success goals must be reflected across the entire company, not just those directly engaging with customers," says Del Valle.

For example, if chief customer officers partner with HR, they can introduce shared metrics that tie employee engagement to customer satisfaction and find a rhythm to engage employees with training, support, and metric tracking against this shared goal. That said, all business leaders must drive experience within their organizations.

"Ultimately, delivering exceptional customer experiences should be a company-wide priority driven by executive leadership. However, if a unified company-wide focus is not yet achievable, it is crucial to ensure strong collaboration between marketing, IT, and customer success teams," says Glassbox's Crawford. "[M]arketing and customer success teams are on the ground thinking of new ways to reach current and prospective customers daily, [but] IT has the clearest view into online behavior, [such as] what areas of a website customers are interacting with, points of customer friction, and the data behind the behavior."

Bottom Line

Customer experiences continue to suffer because organizations aren't prioritizing, measuring, or doing the right things.
With all the internal and technological disconnects, coupled with the need to deliver omnichannel experiences the way customers want them, it's a difficult problem to solve. The worst part is that companies don't understand their customers well enough to design the right kind of experiences in the first place. Moreover, delivering a great customer experience is a moving target and therefore a journey, not an event. It's also everyone's responsibility. And ultimately, companies need to run their operations with a customer-first mentality.

About the Author
Lisa Morgan, Freelance Writer
Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • What Could Less Regulation Mean for AI?
    www.informationweek.com
President-elect Trump has been vocal about plans to repeal the AI executive order signed by President Biden. A second Trump administration could mean a lot of change for oversight in the AI space, but what exactly that change will look like remains uncertain.

"I think the question is then what incoming President Trump puts in its place," says Doug Calidas, senior vice president of government affairs for Americans for Responsible Innovation (ARI), a nonprofit focused on policy advocacy for emerging technologies. "The second question is the extent to which the actions the Biden administration and the federal agencies have already taken pursuant to the Biden executive order [remain in place]. What happens to those?"

InformationWeek spoke to Calidas and three other leaders tuned into the AI sector to cast an eye to the future and consider what a hands-off approach to regulation could mean for the companies in this booming technology space.

A Move to Deregulation?

Experts anticipate a more relaxed approach to AI regulation from the Trump administration. "Obviously, one of Trump's biggest supporters is Elon Musk, who owns an AI company. And so that, coupled with the statement that Trump is interested in pulling back the AI executive order, suggests that we're heading into a space of deregulation," says Betsy Cooper, founding director at Aspen Tech Policy Hub, a policy incubator focused on tech policy entrepreneurs.

Billionaire Musk, along with entrepreneur Vivek Ramaswamy, is set to lead Trump's Department of Government Efficiency (DOGE), which is expected to lead the charge on significantly cutting back regulation.
While conflict-of-interest questions swirl around his appointment, it seems likely that Musk's voice will be heard in this administration. "He famously came out in support of California SB 1047, which would require testing and reporting for the cutting-edge systems and impose liability for truly catastrophic events, and I think he's going to push for that at the federal level," says Calidas. "That's not to take away from his view that he wants to cut regulations generally."

We can look to Trump's and Musk's comments to get an idea of what this administration's approach to AI regulation could be, but there are mixed messages to decipher. Andrew Ferguson, Trump's selection to lead the US Federal Trade Commission (FTC), raises questions. He aims to regulate big tech while remaining hands-off when it comes to AI, Reuters reports.

"Of course, big tech is AI tech these days. So, Google, Amazon, all these companies are working on AI as a key element of their business," Cooper points out. "So, I think now we're seeing mixed messages. On the one hand, moving towards deregulation of AI, but if you're regulating big tech then it's not entirely clear which way this is going to go."

More Innovation?

Innovation and the ability to compete in the AI space are two big factors in the argument for less regulation. But repealing the AI executive order alone is unlikely to be a major catalyst for innovation. "The idea that if some of those requirements were to go away you would unleash innovation, I don't think really makes any sense at all."
"There's really very little regulation to be cut in the AI space," says Calidas.

If the Trump administration does take that hands-off approach, opting not to introduce AI regulation, companies may move faster when it comes to developing and releasing products. "Ultimately, mid-market to large enterprises, their innovation is being chilled if they feel like there's maybe undefined regulatory risk or a very large regulatory burden that's looming," says Casey Bleeker, CEO and cofounder of SurePath AI, a GenAI security firm.

Does more innovation mean more power to compete with other countries, like China? Bleeker argues regulation is not the biggest influence. "If the actual political objective was to be competitive with China, nothing's more important than having access to silicon and GPU resources for that. It's probably not the regulatory framework," he says.

Giving the US a lead in the global AI market could also be a question of research and resources. Most research institutions do not have the resources of large commercial entities, which can use those resources to attract more talent. "[If] we're trying to increase our competitiveness and velocity and innovation, putting funding behind research institutions and education institutions and open-source projects, that's actually another way to advocate or accelerate," says Bleeker.

Safety Concerns?

Safety has been one of the biggest reasons that supporters of AI regulation cite. If the Trump administration chooses not to address AI safety at a federal level, what could we expect? You may see companies making decisions to release products more quickly if AI safety is deprioritized, says Cooper.

That doesn't necessarily mean AI companies can ignore safety completely. Existing consumer protections address some issues, such as discrimination. "You're not allowed to use discriminatory aspects when you make consumer-impacting decisions."
"That doesn't change if it's a manual process or if it's AI, or if you've intentionally done it or by accident," says Bleeker. "[There] are all still civil liabilities and criminal liabilities that are in the existing frameworks."

Beyond regulatory compliance, companies developing, selling, and using AI tools have their reputations at stake. If their products or use of AI harms customers, they stand to lose business. In some cases, reputation may not be as big of a concern. "A lot of smaller developers who don't have a reputation to protect probably won't care as much and will release models that may well be based on biased data and have outcomes that are undesirable," says Calidas.

It is unclear what the new administration could mean for the AI Safety Institute, a part of the National Institute of Standards and Technology (NIST), but Cooper considers it a key player to watch. "Hopefully that institute will continue to be able to do important work on AI safety and continue business as usual," she says.

The potential for biased data, discriminatory outcomes, and consumer privacy violations are chief among the potential current harms of AI models. But there is also much discussion of speculative harm relating to artificial general intelligence (AGI). Will any regulation be put in place to address those concerns in the near future? The answer is unclear, but there is an argument to be made that these potential harms should be addressed at a policy level. "People have different views about how likely they are ... but they're certainly well within the mainstream of things that we should be thinking about and crafting policy to consider," Calidas argues.

State and International Regulations?

Even if the Trump administration opts for less regulation, companies will still have to contend with state and international regulations.
Several states have already passed legislation addressing AI, and other bills are up for consideration. "When you look at big states like California, that can have huge implications," says Cooper.

International regulation, such as the EU AI Act, has bearing on large companies that conduct business around the world. But it does not negate the importance of legislation being passed in the US. "When the US Congress considers action, it's still very hotly contested, because US law very much matters for US companies even if the EU is doing something different," says Calidas.

State-level regulations are likely to tackle a broad range of issues relating to AI, including energy use. "I've spent my time talking to legislators from Virginia, from Tennessee, from Louisiana, from Alaska, Colorado, and beyond, and what's been really clear to me is that in every conversation about AI, there is also a conversation happening around energy," Aya Saed, director of AI policy and strategy at Scope3, a company focused on supply chain emissions data, tells InformationWeek.

AI models require a massive amount of energy to train. The question of energy use and sustainability is a big one in the AI space, particularly when it comes to remaining competitive. "There's the framing of energy and sustainability actually as a national security imperative," says Saed.

As more states tackle AI issues and pass legislation, complaints of a regulatory patchwork are likely to increase. Whether that leads to a more cohesive regulatory framework at the federal level remains to be seen.

The Outlook for AI Companies

The first 100 days of the new administration could shed more light on what to expect in the realm of AI regulation, or the lack thereof. "Do they pass any executive orders on this topic? If so, what do they look like? What do the new appointees take on? How especially does the antitrust division of both the FTC and the Department of Justice approach these questions?" asks Cooper.
"Those would be some of the things I'd be watching."

Calidas notes that this term will not be Trump's first time taking action relating to AI. The American AI Initiative executive order of 2019 addressed several issues, including research investment, computing and data resources, and technical standards. "By and large, that order was preserved by the Biden administration. And we think that that's a starting point for considering what the Trump administration may do," says Calidas.
  • The Network Metrics That Really Matter
    www.informationweek.com
    Every network leader seeks fast and reliable performance. Network metrics provide the insights necessary to achieve those goals.