InformationWeek
News and Analysis Tech Leaders Trust
Recent Updates
  • CIOs Confront Cloud Budget Overruns With Smarter Cost Management
    www.informationweek.com
    Nathan Eddy, Freelance Writer | March 24, 2025 | 5 Min Read
    (Image: Wavebreakmedia Ltd via Alamy Stock)

    Cloud storage was once hailed as a cost-effective solution for businesses, but hidden fees and unpredictable costs are causing widespread financial strain. More than half of businesses globally have experienced IT or business delays due to unexpected cloud storage expenses, and 62% of organizations exceeded their cloud budgets last year, according to a report from Wasabi.

    As chief information officers and IT leaders reassess cloud spending, many are looking for new strategies to prevent waste, improve forecasting, and better manage their data storage policies.

    Soumya Gangopadhyay, technology strategist at EY, points to a lack of financial transparency and poor forecasting as key reasons why cloud costs spiral out of control. Problems arise when organizations don't track IT costs in a way that allows expenses to be broken out for analysis or forecasting, he says. Data egress fees, complex storage tiering, and sudden spikes in data processing all contribute to budget overruns. He cautions that without clear visibility into usage and cost structures, companies will struggle to predict expenses, leading to unforeseen financial burdens.

    Egress Fees, Over-Provisioning Drive Up Costs

    One of the biggest financial pitfalls in cloud storage is egress fees: the costs incurred when transferring data out of a cloud provider's ecosystem. These fees, often overlooked in budgeting, can add up quickly and disrupt IT operations. Will Milewski, senior vice president of cloud infrastructure and operations at Hyland, notes that businesses frequently underestimate the impact of egress fees.
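    As a rough illustration of why egress bills surprise teams, the cost is a simple function of data moved, but the inputs (per-GB rate, free allowance) vary by provider and region. The rates below are hypothetical placeholders, not any provider's published pricing:

    ```python
    # Rough egress-cost estimator. The rate and free tier are HYPOTHETICAL
    # placeholders -- real providers publish tiered, region-dependent prices.
    def egress_cost(gb_transferred: float, rate_per_gb: float,
                    free_tier_gb: float = 100.0) -> float:
        """Cost of moving data out of a provider, after a free allowance."""
        billable = max(0.0, gb_transferred - free_tier_gb)
        return billable * rate_per_gb

    # A team that "only" syncs 5 TB out per month at a placeholder $0.09/GB:
    monthly = egress_cost(5_000, 0.09)   # (5000 - 100) * 0.09 = 441.0
    print(f"${monthly:,.2f} per month")  # $441.00 per month
    ```

    Run against a year of planned transfers, even a toy model like this makes the "overlooked in budgeting" line item visible before the invoice arrives.
    
    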
    "With regulatory shifts like the European Data Act prompting major providers to adjust these fees, organizations are still challenged by unanticipated usage that drives up costs," he says via email. He explains that IT leaders can mitigate these impacts by consolidating data within a single ecosystem, employing intelligent tiering strategies, and using data compression or deduplication techniques.

    Beyond egress fees, companies are also over-provisioning cloud resources, paying for storage they don't fully utilize. Many organizations, eager to embrace cloud agility, end up spending more than necessary due to a lack of integrated visibility across their data assets. "Cost overruns often stem from over-provisioning, unpredictable data growth, and the complexity of managing diverse data workloads," Milewski says. "By leveraging unified platforms, companies can streamline workflows, improve forecasting, and right-size storage needs."

    The Challenge of Cloud Cost Transparency

    While cloud providers offer cost management tools, many organizations find pricing models too complex to navigate effectively. Gangopadhyay explains that some cloud providers obscure costs through complicated pricing structures, making it difficult for IT teams to plan accordingly. "Not all providers offer robust tools for forecasting costs based on usage patterns, which is another factor organizations should consider when working with a cloud provider," he says.

    Milewski echoes this concern, pointing out that cloud providers are offering more AI-driven cost management tools but that expertise is required to use them effectively. "We're seeing cloud providers introduce reserved pricing models, savings plans, and AI-driven cost dashboards," he says.
    "However, many pricing structures remain complex, requiring organizations to build in-house expertise or partner with specialized vendors." Without dedicated cost management teams or external partners, businesses often struggle to fully optimize cloud spending.

    IT Leaders Take Control of Cloud Costs

    CIOs and IT leaders can take several proactive measures as they look to regain control of their cloud budgets. Gangopadhyay suggests implementing real-time monitoring tools, resource tagging taxonomies, and predictive analytics to improve cost forecasting. "Organizations need to have a clear understanding of, and adherence to, existing capabilities and performance -- without it, engineering workload performance can be a challenge," he says. By leveraging historical data and automating governance policies, businesses can eliminate waste and prevent unexpected cost spikes.

    Milewski advises companies to audit their storage policies and shift to a more strategic, tiered approach. "Optimizing storage begins with aligning data policies to actual usage," he says.
    "Prioritizing high-performance tiers for critical content while shifting less-accessed data to lower-cost solutions ensures cost efficiency without compromising performance or compliance." He also highlights automation and AI-driven insights as key tools for identifying redundancies and reducing expenses.

    Another crucial step is building a chargeback model that aligns IT costs with business strategy. Gangopadhyay believes organizations should implement chargeback mechanisms that assign storage costs to individual business units, making cloud expenses more transparent. "Developing an enterprise chargeback strategy ensures that cloud spending is directly tied to business objectives," he says. By making business units accountable for their storage usage, companies can drive more responsible cloud consumption.

    The Future of Cloud Cost Management

    As cloud storage pricing evolves, IT leaders must stay ahead of emerging trends to keep costs under control. Gangopadhyay expects increased competition among cloud providers, which could lead to more dynamic pricing models. "We can expect to see more providers adopting real-time usage-based pricing and offering incentives for eco-friendly storage options," he says. Companies that embrace flexible budgeting practices and sustainable cloud solutions will be better positioned to navigate shifting cost structures.

    Milewski predicts that AI and automation will play a bigger role in optimizing cloud spending. "The cloud storage landscape is evolving toward more dynamic, consumption-based pricing models," he says.
    "Businesses will need to embrace FinOps practices, leveraging advanced analytics and automated tools, to adapt to these trends." FinOps, or cloud financial management, is becoming increasingly critical for organizations aiming to turn unpredictable expenses into predictable, manageable investments.

    Gangopadhyay stresses that the key to reducing waste is aligning cloud costs with business goals. "Reducing cloud expenses comes down to aligning business goals with business costs," he says. "Organizations can better identify and eliminate unnecessary or redundant data by implementing automated policies, conducting regular audits, and establishing clear retention guidelines."

    Milewski underscores the importance of staying ahead of pricing trends and investing in cost optimization strategies. By leveraging automation, real-time monitoring, and AI-driven insights, companies can ensure that their cloud investments remain both strategic and cost-efficient. "Businesses that combine modern infrastructure with intelligent cost management can empower themselves to navigate future challenges effectively," he says.

    About the Author
    Nathan Eddy, Freelance Writer
    Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
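    The tag-based chargeback mechanism discussed in the article reduces to a small roll-up: given per-resource costs and an owning business unit tag, total costs per unit, with untagged spend surfaced explicitly. The tag taxonomy and figures below are illustrative, not drawn from the article; a real pipeline would read them from a billing export:

    ```python
    from collections import defaultdict

    # Each billing record: (resource_id, monthly_cost_usd, tags).
    # The "business_unit" tag key and the amounts are hypothetical examples.
    records = [
        ("vol-001", 320.0, {"business_unit": "marketing", "env": "prod"}),
        ("vol-002", 1250.0, {"business_unit": "analytics", "env": "prod"}),
        ("vol-003", 75.0, {"business_unit": "analytics", "env": "dev"}),
        ("vol-004", 410.0, {}),  # untagged -- a common source of opaque spend
    ]

    def chargeback(records):
        """Roll storage costs up to the business unit named in each resource's tags."""
        totals = defaultdict(float)
        for _, cost, tags in records:
            totals[tags.get("business_unit", "UNALLOCATED")] += cost
        return dict(totals)

    print(chargeback(records))
    # {'marketing': 320.0, 'analytics': 1325.0, 'UNALLOCATED': 410.0}
    ```

    The UNALLOCATED bucket is the point of the exercise: anything landing there has no accountable owner, which is exactly the opacity a chargeback strategy is meant to eliminate.
    
    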
  • Five Years of Cloud Innovation: 2020 to 2025
    www.informationweek.com
    John Edwards, Technology Journalist & Author | March 24, 2025 | 5 Min Read
    (Image: Tetra Images, LLC via Alamy Stock Photo)

    The cloud has come a long way over the past five years. The technology has undergone a succession of radical upgrades and transformations that have surprised even many of its strongest advocates. As cloud service providers and adopters move into the next half-decade, here's a look at five important ways the cloud has advanced since 2020.

    1. Zero-trust architecture emerges

    As organizations move more workloads to the cloud, often in response to the existential demands of a high-velocity digital economy, traditional perimeter-based security models have failed to keep pace with the dynamic, distributed nature of modern digital architectures, says Nigel Gibbons, a director and senior advisor at cybersecurity services firm NCC Group, in an email interview. "Amid these challenges, the concept of zero trust emerged alongside the secure-by-design cornerstone principle, fundamentally reappraising identity, access, and trust within cloud environments."

    Previously, security strategies relied on guarding a static network perimeter. Once inside the corporate network, users, devices, and services were often trusted by default. Zero trust, by contrast, assumes no inherent trust and evaluates each request as if it comes from an untrusted network. "In cloud settings, where applications, data, and users reside across numerous remote endpoints, zero trust ensures that each interaction is strictly verified, regardless of location or prior access," Gibbons states.

    Gibbons observes that zero trust has also accelerated improvements in identity and access management solutions, such as multifactor authentication, single sign-on, and just-in-time access, with adaptive access policies built on continuous adaptive risk and trust assessment principles becoming standard practice.
    2. FinOps standardizes cloud spending

    The FinOps organization and the implementation of FinOps standards across cloud providers have been the most impactful development of the last five years, states Allen Brokken, head of customer engineering at Google, in an online interview. This has fundamentally transformed how organizations understand the business value of their cloud deployments, he states. "Standardization has enabled better comparisons between cloud providers and created a common language for technical teams, business unit owners, and CFOs to discuss cloud operations."

    The FinOps framework helps organizations understand exactly what they're spending, how they're spending it, and where they're spending it. "This enables better demand shaping, whether through moving workloads to spot instances or improving committed-use management," Brokken says.

    3. Public cloud adoption democratizes access

    Widespread adoption of public cloud architecture has been one of the most important developments of the past five years, says Lloyd Adams, president of enterprise application software firm SAP North America. The public cloud has democratized access to technology and increased accessibility for organizations across industries that have faced intense volatility and change in the past five years, Adams observes via email. "This innovation has facilitated a new level of co-innovation and enabled new business models that allow companies to realize future opportunities with ease."

    Public cloud platforms offer adopters immense benefits, Adams says. "With the public cloud, businesses can scale IT infrastructure on demand without significant upfront investment." This flexibility comes with a reduced total cost of ownership, since public cloud solutions often lead to lower costs for hardware, software, and maintenance. Public cloud adopters also reap the benefit of immediate access to cutting-edge technologies, such as artificial intelligence, machine learning, and analytics.
    "The cloud's flexibility and speed have enhanced agility and innovation, enabling companies to experiment with new ideas and bring products to market faster," Adams says.

    4. Security as code arrives

    Security as code, in the form of DevSecOps, leverages a collection of cloud-native technologies and methods. "This has not only shifted security into the delivery team, it has allowed security to scale as an embedded consideration, rather than an external force that development and infrastructure teams feel they need to resist or operate around," says Travis Runty, CTO of public cloud at Rackspace Technology, in an online interview.

    Security as code is an example of hyper-converging skill sets and teams, further enabling natural awareness and ownership, Runty states. "It's a great example of a technology creating velocity, changing the way teams are structured, and ultimately reducing overall business risk."

    Having the ability to incorporate security into core and real-time infrastructure deployments has allowed teams to leverage security as a strength and enforce it without fail, Runty says. "This enforcement can include general security best practices, compliance considerations, protecting credentials or other sensitive information -- even managing internal design and architectural standards."

    5. Serverless computing arrives

    Serverless computing has emerged as a key cloud innovation, helping organizations become more agile while accelerating time-to-market, minimizing infrastructure overhead, and optimizing cloud costs, says Farid Roshan, global head of AI at data and digital engineering solutions firm Altimetrik. In response to specific events, serverless platforms execute small, stateless code segments known as functions.
    "These functions simplify scaling and reduce the complexity of computing resources, which are allocated only for the function's execution duration, eliminating the need for pre-provisioned infrastructure," Roshan says in an email interview.

    With serverless computing, engineering teams are freed from managing servers, operating environments, and scaling mechanisms. "This allows engineers to focus on innovation, building scalable, cost-efficient applications by shifting operational overhead to cloud providers," Roshan concludes.

    About the Author
    John Edwards, Technology Journalist & Author
    John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
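    To make the serverless model described above concrete: a "function" is just a stateless handler the platform invokes per event and bills per execution. A minimal sketch follows; the (event, context) signature mirrors the common FaaS convention popularized by AWS Lambda, but the event fields are illustrative, not any real service's schema:

    ```python
    import json

    # A stateless, event-driven function in the style the article describes.
    # The (event, context) signature follows the common FaaS convention;
    # the event fields below are illustrative placeholders.
    def handler(event, context=None):
        """Thumbnail-request handler: resources exist only while it runs."""
        width = int(event.get("width", 128))
        height = int(event.get("height", 128))
        return {
            "statusCode": 200,
            "body": json.dumps({"thumbnail": f"{width}x{height}",
                                "key": event.get("key")}),
        }

    # Locally, the platform's invocation can be simulated with a plain call:
    resp = handler({"key": "uploads/cat.png", "width": 256, "height": 256})
    print(resp["statusCode"])  # 200
    ```

    The operational point is what is absent: no server provisioning, no scaling logic, no idle capacity; the platform allocates compute only for the handler's execution duration.
    
    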
  • Are CIO Plans for AI and the Cloud Permanently Joined Together?
    www.informationweek.com
    Will the development of these resources proceed in tandem at enterprises, or must CIOs evolve these technologies with separate strategies?

    Joao-Pierre S. Ruth, Senior Editor | March 24, 2025

    As substantial plans to invest in AI and the cloud take shape, do CIOs at enterprises want to develop both resources together to maximize the potential they offer? Is it necessary for the technologies to operate on dual or shared paths of evolution in order to deliver? Should CIOs favor one technology's use and deployment over the other?

    Leadership at enterprises may have hard choices to make about the resources they put toward their technology implementations. What if a CIO is caught in a circumstance where they only have the means -- whether constrained by personnel, time, or money -- to truly invest in cloud or AI, but not both? Where should they put their energies?

    Luiz Domingos, CTO for Mitel; Jon Kuhn, senior vice president of product for Delinea; Anshu Jain, co-founder and CTO with Outmarket AI; and Steve Williams, CISO, NTT DATA, tackled those and other questions in this episode of DOS Won't Hunt.

    Where does the conversation start in the C-suite on how to balance the development of and investment in the cloud and AI? What is at stake if CIOs cannot steer the IT strategy to support cloud and AI?
    How far behind could a company fall if it doesn't pursue both technologies vigorously? Listen to the full episode here.

    About the Author
    Joao-Pierre S. Ruth, Senior Editor
    Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.
  • Building Trust with Conversational AI: How to Avoid Common Pitfalls
    www.informationweek.com
    Kathryn Murphy, Senior Vice President of Product, Twilio | March 21, 2025 | 4 Min Read
    (Image: hirun laowisit via Alamy Stock)

    Trust is the foundation of any relationship, whether between individuals or between businesses and their customers. Philosopher Friedrich Nietzsche once said, "I'm not upset that you lied to me, I'm upset that from now on, I can't believe you." While his words may evoke thoughts of interpersonal relationships, they resonate equally in the business world, where trust in technology plays an increasingly vital role.

    The rise of conversational AI -- spanning chatbots and LLM-powered virtual agents -- is reimagining how people interact with businesses. This isn't just a fleeting trend; it's a transformative shift. The market, valued at $5.8 billion in 2023, is projected to soar to $31.9 billion by 2028, according to IDC. That growth underscores the pivotal role this technology will play in redefining customer engagement for every business.

    But here's the catch: Trust is everything. One poor interaction can unravel months of goodwill, sowing seeds of doubt and eroding confidence. As Nietzsche cautioned, a single misstep can resonate deeply, and businesses can ill afford to lose the faith of their customers. The secondary challenge -- and what many businesses learned over the course of last year -- is that scaling a flashy conversational AI demo to meet the needs of a live customer environment is far from easy.

    Below are some actionable tips for businesses looking to build trust through their conversational AI customer engagement.

    Establish Clear, Customer-Centric Goals

    When deploying conversational AI, even small missteps can lead to significant consequences, tarnishing a brand's reputation and eroding customer trust. A strong foundation for implementing any AI solution begins with clear goal setting.
    Before rolling out their initiatives, businesses must prioritize the customer and recognize that AI is a tool for enhancing the customer experience, not a solution in itself.

    Identify Potential Pain Points

    One of the most frequent sources of customer frustration is a poor human-to-AI handoff. When escalations lead to a loss of context or require customers to repeat information, their experience can quickly sour. To avoid this, businesses should establish clear protocols for transitioning conversations to live agents, ensuring all relevant information is seamlessly carried over. Without this, frustrations may escalate into doubts about the reliability of the service, jeopardizing trust altogether.

    Continuously Monitor to Improve Experiences

    Equally important is the practice of ongoing monitoring and optimization. By consistently collecting feedback, organizations can refine their conversational AI implementation, improving results and growing customer satisfaction. These efforts signal a commitment to continuous improvement, a cornerstone of building and maintaining trust.

    Feedback loops play a vital role in enhancing large language model (LLM) performance over time. Actively building and testing these loops, alongside robust escalation workflows, ensures customer concerns are addressed. A common misstep organizations make is deploying AI systems that lack empathetic conversation management. Integrating AI-driven sentiment analysis can bridge this gap, allowing models to guide interactions with greater sensitivity.

    Minimize Bias Through Personalization

    To provide a positive customer experience -- one that increases engagement and brand affinity -- businesses also need to ensure conversational AI solutions deliver consistent, unbiased, and personalized support.
    With increasing scrutiny of large language models and how their information is sourced, bias can be minimized by leveraging a customer data platform with unified profiles for a personalized experience. For example, bias may surface if an AI agent provides differing responses based on perceived gender or cultural background, such as assuming certain tasks or preferences are linked to one gender. Regular audits are essential to identify and mitigate such issues, especially while this technology is still in its early stages. Adopting a test-and-learn approach can further refine these systems and create more authentic, human-like interactions.

    Lead With Transparency

    Transparency is another cornerstone of building trust. Customers should always know when they are engaging with an AI agent. Clearly labeling these interactions not only prevents confusion but also aligns with ethical best practices, reinforcing the integrity of the customer experience.

    Should an organization face a scenario where its AI systems fail to meet customer expectations, honesty is the best policy. Be truthful about the limitations or errors of AI and provide quick resolutions through escalation to live agents. Nobody wants to scream "REPRESENTATIVE!" into the ether when looking for a solution to their concerns.

    Closing Thoughts

    Trust, once broken, is challenging to regain. As Nietzsche reminds us, the erosion of trust leaves behind doubt, making it harder to rebuild relationships. For conversational AI, this means every interaction is an opportunity to strengthen -- or weaken -- customer confidence.
    By avoiding common pitfalls, prioritizing transparency, and continuously optimizing AI systems, businesses can build lasting trust and foster meaningful customer relationships. The call to action is clear: Businesses should begin by auditing their current conversational AI solutions, identifying gaps in trust-building measures, and implementing best practices that foster confidence and engagement from the very first interaction.

    About the Author
    Kathryn Murphy, Senior Vice President of Product, Twilio
    Kathryn Murphy has over 20 years of experience in product management, design, and engineering, with deep domain expertise in retail, commerce, payments, customer data platforms, and multichannel marketing. Kathryn's focus has always been on using technology to improve the customer experience. As the SVP of Product and Design at Twilio, she leads the team focused on accelerating Twilio's communications and data capabilities.
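    The handoff and sentiment-escalation practices described in the article can be sketched in miniature: carry the full transcript into the escalation so the customer never repeats themselves, and route on a sentiment signal. The keyword lexicon below is a toy stand-in for a real sentiment model, and all names are illustrative:

    ```python
    from dataclasses import dataclass, field

    NEGATIVE_WORDS = {"angry", "useless", "cancel", "representative"}  # toy lexicon

    @dataclass
    class Conversation:
        customer_id: str
        transcript: list = field(default_factory=list)  # full context to carry over
        def add(self, speaker, text):
            self.transcript.append((speaker, text))

    def sentiment_score(text):
        """Toy stand-in for an AI sentiment model: fraction of negative words."""
        words = text.lower().split()
        return -sum(w.strip("!.,?") in NEGATIVE_WORDS for w in words) / max(len(words), 1)

    def maybe_escalate(conv, threshold=-0.2):
        """Hand off to a live agent WITH the transcript, so nothing is repeated."""
        latest = conv.transcript[-1][1]
        if sentiment_score(latest) <= threshold:
            return {"route": "live_agent", "context": list(conv.transcript)}
        return {"route": "ai_agent"}

    conv = Conversation("cust-42")
    conv.add("customer", "This bot is useless, get me a representative!")
    decision = maybe_escalate(conv)
    print(decision["route"])  # live_agent
    ```

    The design point is the `context` field on the escalation: the live agent receives everything the AI already heard, which is exactly the context loss the article warns against.
    
    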
  • What's New in Augmented Reality and Virtual Reality?
    www.informationweek.com
    The boom in AI has virtually eclipsed technologies like augmented reality (AR) and virtual reality (VR). Nevertheless, there are still good reasons to keep AR and VR on the IT strategic roadmap.

    AR is so named because it embellishes the physical world with digital artifacts. VR goes one step further by immersing participants in an alternate world of virtual experience. A number of AR/VR use cases are already working in business. The retail and real estate industries use AR and VR to show customers how a household item would look in their home, or to give would-be buyers a virtual walkthrough of a vacation home thousands of miles away that they are considering purchasing. Building engineers and inspectors use AR glasses that superimpose blueprints of electrical wiring on a finished wall; the military uses VR to simulate battlefield scenes for trainees; and baseball players use AR/VR to improve the mechanics of their swings.

    The CIO's Position on AR/VR Today

    For CIOs, AR/VR is taking a backseat to artificial intelligence, which Statista projects will exceed $1.8 trillion in business investment by 2030. Consequently, there is little left in most IT budgets for anything else. CIOs also know that most AR/VR investments can't be done on the cheap. AR/VR implementations often require significant customization to achieve the right fit for specific business cases, and they can require expensive investments in headgear, workstations, and other hardware. Finally, it's not always easy to justify an AR/VR investment.
    While an AR/VR investment might be close to mandatory when the military is training personnel to disarm bombs on a battlefield, it's not as easy to justify AR/VR simulations for more mundane use cases. Collectively, these circumstances have put AR/VR on the IT back burner, but that doesn't mean the technologies don't deserve a spot on IT's strategic roadmap.

    Where AR and VR Could Play in a Business

    Adecco, a corporate recruiter, reported in 2023 that 92% of executives think American workers aren't as skilled as they need to be, and the World Economic Forum expects that 39% of skills will be outdated by 2030. At the same time, younger employees entering the workforce are less likely to learn by reading manuals, and more likely to further their learning through AR, VR, and other visual technologies. This makes workforce education a prime area for AR/VR utilization.

    In addition, many of the skills that employees must learn across a wide swath of industries are somewhat generic (for example, the basics of lending for a financial institution, or the fundamentals of waste management and collection for sanitation workers). So it is possible to use more generic, cost-effective AR/VR offerings without much need for company-specific customization. Schools are already integrating AR/VR into their curricula, and there is no reason companies can't do the same to help address their employee skills shortages.

    Another successful AR/VR use case is retail sales, where the technology can simulate product experiences in a virtual environment. With AR/VR, a prospect can experience what a trip to Belize would be like or do a visual walkthrough of a beach home in Miami.
    A customer can try on a sweater virtually, or see how a new dining room table would look in their home. All these examples are already in play and generating revenue in e-commerce markets, where it is important for customers to experience what it would be like to own something they can't physically see or touch. The value proposition for using AR/VR in retail is further sweetened because companies don't have to invest in special hardware; customers can use AR/VR on ordinary home computers and mobile devices.

    New product development is one more area where companies are adopting AR/VR. Constructing physical prototypes of new products that may not work is expensive and time-consuming. If new product designs and simulations can be generated with 3D modeling and AR/VR, the technology investment may be worth it.

    AR/VR Trends

    Looking forward, it is reasonable to expect that AR/VR use will expand in the areas where it is already gaining a footing: education/training, retail sales, and product development. There are also three AR/VR trends that CIOs should note:

    Cloud-based AR/VR. A user can put on a wireless headset and use AR/VR from the cloud if the computing requirements for the app aren't overly intensive. Education and training AR/VR would work in this scenario in most cases, although there might be a need to invest in more bandwidth.

    Better ergonomic experiences for users. AR/VR headgear is clunky and uncomfortable. Vendors know this and are working on more wearable, tetherless headsets that deliver a better ergonomic experience to users. That lighter, more agile hardware could also lead to lower costs.

    A focus on security and governance.
    AR/VR vendors haven't paid much attention to security and governance in the past, but they will in the future, because enterprise customers will demand it and the enterprise market is too big to ignore.

    Wrap-Up

    While AR/VR technology isn't front and center in technology discussions today, it could emerge as a way for companies to streamline education and training, improve new product development and time to market, grow retail revenues, and even simulate operational scenarios such as a disaster recovery failover. AR/VR are not today's hot technologies, but they should nonetheless be listed in IT strategic plans, because they are logical extensions of broader corporate virtualization, and they can address several of the persistent pain points that companies continue to grapple with.
  • Why Your Business Needs an AI Innovation Unit
    www.informationweek.com
    John Edwards, Technology Journalist & Author | March 21, 2025 | 6 Min Read
    (Image: tanit boonruen via Alamy Stock Photo)

    Just about everybody agrees that AI is an essential business tool. This means it's now time to give the technology the status it deserves by creating a business unit that's completely dedicated to deploying innovative AI applications across the enterprise.

    An AI innovation unit serves as an organizational hub for designing and deploying AI solutions, as a catalyst for adopting and integrating AI, and as a focal point for AI business exploration and experimentation, says Paul McDonagh-Smith, a senior lecturer in information technology and executive education at the MIT Sloan School of Management. "By spinning up an AI innovation unit, your company can accelerate its digital transformation, sustain competitiveness, and create a culture of innovation," he explains in an online interview.

    McDonagh-Smith believes an AI innovation unit can help convert AI's potential into enhanced product offerings and customer experiences, unlocking new revenue streams and creating a competitive advantage. "Your AI innovation unit will also provide a space and a place to combine AI research and responsible application of AI to help you minimize risks while maximizing benefits."

    Mission Goals

    An AI innovation unit's mission should be to coordinate, plan, and prioritize efforts across the enterprise, says Steven Hall, chief AI officer at technology research and advisory firm ISG. "This can include ensuring the right data assets are used to train models and that proper guardrails are established to manage risks," he recommends in an email interview. Hall adds that unit leaders should also work toward keeping relevant individuals in the loop while prioritizing use cases and experiments.

    An AI innovation unit should always support sustainable and strategic organizational growth through the ethical and impactful application and integration of AI, McDonagh-Smith says.
"Achieving this mission involves identifying and deploying AI technologies to solve complex and simple business problems, improving efficiency, cultivating innovation, and creating measurable new organizational value."

A successful unit, McDonagh-Smith states, prioritizes aligning AI initiatives with the enterprise's long-term vision, ensuring transparency, fairness, and accountability in its AI applications. "An effective AI innovation unit also increases the flow of AI-enhanced policies, processes, and products through existing and emerging organizational networks."

Carolyn Nash, chief operations officer for open-source software products provider Red Hat, says her firm recently established an AI innovation unit when enterprise leaders recognized that AI had become a top IT strategy priority. "This newly formed team is now focusing on putting the appropriate infrastructure foundations in place for AI to be developed at scale, and in a cost-efficient manner," she explains in an online interview. Part of that work, Nash notes, includes identifying and creating productivity use cases.

Leadership Requirements

An AI innovation unit leader is foremost a business leader and visionary, responsible for helping the enterprise embrace and effectively use AI in an ethical and responsible manner, Hall says. "The leader needs to understand the risks and concerns, but also AI governance and frameworks." He adds that the leader should also be realistic and inspiring, with an understanding of the hype curve and the technology's potential.

The unit should be led by a chief AI officer (CAIO), or an equivalent senior executive with expertise in both AI technology and strategic business management, McDonagh-Smith advises. "While this leader possesses a strong understanding of data science, machine learning, and innovation strategy, alongside finely tuned leadership skills, this individual also needs to be adept at bridging technical and non-technical teams to ensure that AI initiatives are practical, scalable, and personalized to business goals."

Team Building

McDonagh-Smith recommends staffing the AI unit with a multidisciplinary team that combines the capabilities of data scientists, machine learning engineers, and software engineers, as well as AI ethicists, HR experts, UX/UI designers, and change management specialists. "This will provide the diversity of perspective and expertise necessary to fuel and drive your AI innovation unit forward."

Nash observes that there will also be times when it becomes necessary to seek advice and support from other enterprise stakeholders, particularly when collaborating on projects with elements that lie beyond the main team's skills and knowledge. She adds that the unit should focus on addressing existing business issues, not seeking new problems to solve. "Proactively capturing requirements from strategic leaders across the business -- HR, marketing, finance, products, legal, sales -- is critical to ensuring the AI unit is correctly focused."

Reporting

McDonagh-Smith recommends that the AI innovation unit's leader report directly to the enterprise C-suite, ideally to the CEO or chief digital officer (CDO). "This reporting structure ensures that AI initiatives remain a visible strategic priority and are seamlessly integrated with broader business goals," he says. "It also allows for clear communication between the unit and top-level leadership, helping to secure the necessary support for scaling successful AI-forward projects across the organization."

A Collaborative Culture

An AI innovation unit requires a collaborative culture that bridges silos within the organization and commits to continuous reflection and learning, McDonagh-Smith says.
"The unit needs to establish practical partnerships with academic institutions, tech startups, and AI thought leadership groups to create flows of innovation, intelligence, and business insights."

McDonagh-Smith believes that the unit should be complemented by a strong governance framework that allows it to manage AI risks, uphold ethical standards, and ensure that AI deployments align with enterprise values and societal responsibilities. "By introducing regular impact assessments and transparent reporting on AI initiatives, you'll build trust both internally and externally ... and establish your team as a leader in evolving business practices."

About the Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • Asia's Top Integrated Security Exhibition Is Underway
    www.informationweek.com
SECON & eGISEC 2025 is underway, showcasing a large and diverse array of both physical security and cybersecurity innovations and products. While there is a comprehensive display of advancements in traditional physical security measures and cybersecurity products, it's the integration in the converged security realm that's arguably gaining the most attention, particularly in critical sectors. By all accounts, the depth of information on the latest security developments at the exhibition is complemented by the breadth of diversity in products and innovations.

In particular, we plan to focus on AI-driven privacy protection, enhancing security in cloud environments, and the latest trends in pseudonymization and anonymization technologies. Additionally, understanding how privacy protection solutions are applied across various industries is a key objective, as real-world case studies provide valuable practical insights, said Lee Hyejun, associate at EASYCERTI, a provider of AI, big data, and cloud-based privacy protection and privacy data solutions and an exhibitor at the event.

Beyond its booth at the exhibition, EASYCERTI's senior researcher, Seunghoon Yeom, will be delivering presentations at the conference held within the event. The first, on March 20, covers the latest trends and countermeasures in privacy protection. The second, on the following day, covers standards pertaining to securing personal information and verification processes.

There are over 400 exhibitors taking part, and the displays are enticing and informative. This year's new product offerings address a wide range of security issues, including known vulnerabilities that previously had no countermeasures.

There are no known solutions to prevent paper document leaks around the world. Through exhibitions like this, we hope to show people that such a solution exists and that an increasing number of companies and organizations are using our solution, docuBLOCK, said Myungshin Lee, CEO of ANYSELL Co., Ltd. and an exhibitor at the event.

The event spans more than 28,000 square meters of space and is expecting more than 30,000 visitors from around the world. The products and innovations on display cover the gamut of security sectors, including edge devices with on-device AI, converged security, cloud and IoT security, smart city security, automotive security, and maritime security, among others.

One of the key solutions we will be presenting at SECON & eGISEC 2025 is real-time log and file encryption. This technology encrypts data the moment it is generated, making it essential for industries such as finance, the public sector, and medical fields, where both security and real-time processing are critical, said Haeun JI at iNeb Inc., a provider of encryption and data security solutions and an exhibitor.

A comprehensive conference and seminar program is taking place within SECON & eGISEC 2025. The program was developed in collaboration with prominent institutions and industry leaders. It features over 100 sessions across 30+ tracks.
Attendees can join discussions on critical topics such as industrial security, advanced CCTV management, aviation protection, counterterrorism tactics, personal data privacy, and other pressing security concerns.

This year's key security issues featured at the exhibition include:

- Edge devices with on-device AI (local AI)
- Convergence of cybersecurity and physical security
- The evolving zero-trust security model
- Intensifying software supply chain security threats
- Cyber fraud as a service (Qshing)
- Cybercrime targeting youth and social media restrictions
- Concerns over whether cloud security platforms and cloud service platforms can continue to coexist independently
- Hidden risks in old "new" technologies

Event organizers cited examples of hidden risks: Cloud services have suffered from human errors, leading to unintended data leaks. Similarly, ChatGPT has raised serious concerns, as users often unintentionally expose sensitive information through interactions with the AI. These risks have prompted ChatGPT bans in several countries.

However, risks are growing in other areas, too, even across entire industries. For example, the financial industry is intensely attractive to thieves and fraudsters.

YH Database Co., Ltd. has introduced newly released products to buyers every year since 2013, mainly financial security and informatization solutions.

This year, the company is showcasing AI-specialized products, including y-SmartChat, y-SmartData, and y-MobileMonitorSDK3.0, said Kim JungWon, senior executive director of YH Database.

For example, y-SmartData can be used not only as an abnormal transaction detection system (FDS) that can prevent financial accidents through voice phishing and fake bank accounts, but also as an internal audit control system (ADS) and as an anti-money-laundering (AML) system that can detect and prevent illegal money laundering, Kim added.

Attendees appear universally eager to check out possible solutions for these and the other top security issues of the day. Exhibitors are just as eager to demonstrate their technological breakthroughs and check out the competition.

Aircode expects many customers to look for an alternative solution in response to the relaxation of network separation regulations, said Yunsang Kim, presales vice president at Aircode and an exhibitor at the event.

AirCode will be launching a browser-based virtualized web isolation product (AirRBI) that we believe is competitive in terms of functionality and efficiency compared to other solutions. Aircode also wants to check and learn what web isolation solutions are available on the market and what features they have, added Yunsang. Aircode is also presenting a talk on Secure Web Browsing in Network-Separated Environments at the conference program within the exhibition.

It can be difficult to choose which exhibits to visit and which presentations and keynotes to attend, given the sheer diversity of security topics and products.

At SECON & eGISEC 2025, AhnLab will showcase its latest security solutions built upon 30 years of comprehensive security expertise.
Additionally, we are hosting booths for our subsidiaries -- NAONWORKS, Jason, and AhnLab CloudMate -- where participants can explore each company's specialized technologies in OT/ICS (industrial control systems), AI, and MSP (managed service provider) offerings, as well as their synergy with AhnLab, said Junghyun Kim, marketing director at AhnLab, Inc. and an exhibitor at the event.

The exhibition runs from March 19 to 21 in Halls 3-5 at Kintex, Korea.
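The encrypt-at-generation pattern iNeb describes can be sketched in a few lines: the logger encrypts each record the moment it is emitted, so plaintext never reaches storage. This is an illustrative outline only, not iNeb's implementation; the class and function names are invented, and a toy HMAC-based keystream stands in for a production cipher such as AES-GCM.

```python
import hashlib
import hmac
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (a stand-in for a real cipher like AES-GCM):
    XORs data against an HMAC-SHA256 counter-mode keystream. Symmetric,
    so the same call both encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class EncryptedLogWriter:
    """Encrypts each log line at the moment it is generated, before it
    is handed to the sink (a file, socket, or any object with write())."""
    def __init__(self, key: bytes, sink):
        self._key = key
        self._sink = sink

    def log(self, message: str) -> None:
        nonce = os.urandom(16)  # fresh nonce per record
        ct = _keystream_xor(self._key, nonce, message.encode("utf-8"))
        self._sink.write(nonce.hex() + ":" + ct.hex() + "\n")

    def decrypt_line(self, line: str) -> str:
        nonce_hex, ct_hex = line.strip().split(":")
        pt = _keystream_xor(self._key, bytes.fromhex(nonce_hex),
                            bytes.fromhex(ct_hex))
        return pt.decode("utf-8")
```

The trade-off this illustrates: a leaked log file exposes only ciphertext, but searching or auditing logs then requires access to the key.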
  • Lessons on Attack Attribution for CIOs and CISOs
    www.informationweek.com
Attribution can be a tricky process. In the case of a DDoS attack, threat actors often employ botnets to direct a high volume of traffic at a target, overwhelming that network and disrupting its service.

After outages at X, allegedly caused by a DDoS attack, plenty of people asked who was responsible. Elon Musk cast blame on Ukraine, Politico reports. Cybersecurity experts pushed back against that assertion. Meanwhile, Dark Storm, a pro-Palestinian group, claimed responsibility, further muddling attempts at attribution.

A botnet is generally a network of compromised computers. In essence, they [a victim] are being hit from different IP addresses, different systems. So, you really can't actually pinpoint that it came from this specific location, which makes it difficult to identify root cause, explains Vishal Grover, CIO at apexanalytix, a supplier onboarding, risk management, and recovery solutions company.

How should CIOs and CISOs be thinking about attribution, and what should their own approach be when navigating the aftermath of a cyberattack?

The Importance of Attribution

Attribution is important, but it isn't necessarily the first priority during incident response.

The concern that I probably would have as a CISO is addressing the vulnerability that allowed them in the door in the first place, says Randolph Barr, CISO at Cequence Security, an API and bot management company.

Once an incident response team addresses the vulnerability and ensures threat actors aren't lingering in any systems, they can dig into attribution. Who executed the attack? What was the motivation? Getting the answers to those questions can help security teams mitigate the risk of future attacks from the same group, or from other groups that leverage similar tactics.

Of course, the larger the company and the more widespread the disruption, the louder the calls for attribution tend to be.
When you have a large organization like X, there's going to be a lot of people asking questions. When other folks get involved, then attribution becomes important, says Barr.

For smaller organizations, attribution may be a lower priority as they leverage more limited resources to work through remediation first.

How to Tackle Attribution

In some cases, attribution may be quite simple. For example, a ransomware gang is likely to be forthright about its identity and its financial motivations.

But threat actors that step into the limelight aren't always the true culprits. Sometimes people claim publicly that they did it, but you can't really necessarily confirm that they actually did it. They just may want the eyes on them, Barr points out.

Attribution tends to be a complicated process that takes a significant amount of time and resources: both technical tools and threat intelligence. Whether done internally or with the help of outside experts, the attribution process typically culminates in a report that details the attack and names the responsible party, with varying degrees of confidence.

Sometimes you might not get a definitive answer. There are times when you won't be able to determine the root cause, says Grover.

Attribution and Information Sharing

Attribution can help an individual enterprise shore up its security posture and incident response plan, but it also has value to the wider security community.

That's one of the primary reasons that you go and attend a security conference or security meeting. You definitely want to share your experiences, learn from their experiences, and understand everybody's perspective, says Grover.

Threat intelligence and security teams can collaborate with one another and share information about the groups that target their organizations. Threat intel teams might also pick up information about planned attacks on the dark web.
Sharing that information with potential targets is valuable.

We build those relationships so that we know that we can trust each other to say, Hey, if our name comes up, please let us know, says Barr.

Not all companies have a culture that facilitates that kind of information sharing. Cyberattacks come with a lot of baggage. There's liability to worry about. Brand damage. Lost revenue. And just plain embarrassment. Any one of those factors, or a combination thereof, could push enterprises to err on the side of silence.

We're still trying to figure out, as security professionals, what it is that would allow us to have that conversation with other security professionals and not worry about exposing the business, says Barr.
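Grover's point about why botnets frustrate attribution -- traffic arrives from many unrelated addresses, so no single source stands out -- can be made concrete with a toy calculation. The function and the traffic samples below are hypothetical illustrations, not a real attribution tool or real attack data.

```python
from collections import Counter

def top_source_share(requests):
    """Fraction of total traffic contributed by the busiest single source IP."""
    counts = Counter(ip for ip, _ in requests)
    busiest = counts.most_common(1)[0][1]
    return busiest / len(requests)

# Toy traffic samples (hypothetical IPs from documentation ranges).
single_source = [("203.0.113.9", "/login")] * 1000
botnet = [(f"198.51.100.{i % 250}", "/login") for i in range(1000)]

print(top_source_share(single_source))  # 1.0   -> one source: easy to block and trace
print(top_source_share(botnet))         # 0.004 -> no dominant source to pinpoint
```

With a single attacking host, blocking or tracing one address resolves the incident; with a botnet, the busiest address accounts for a fraction of a percent of traffic, which is why IP-level evidence alone rarely identifies the actor behind the attack.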
  • 3 Myths Creating an Inflated Sense of Cybersecurity
    www.informationweek.com
Andy Lunsford, Matt Hartley
March 20, 2025 | 4 Min Read
syahrir maulana via Alamy Stock

The reality of the current cyber environment is harsh, and no matter how well-funded or skilled a security team may be, there's a good chance they're not quite as prepared as they think.

Verizon's most recent Data Breach Investigations Report found that more than 10,000 breaches were reported last year, exposing over 8.2 billion records. With an average cost of nearly $5 million per breach, you can imagine the toll of the mega-breaches that are making global headlines. The true financial and reputational toll of a breach is incalculable.

While it's tempting to think that experience and planning can shield an organization from an attack, the simple fact is that incidents happen. No matter an organization's size, malicious actors target networks for financial gain or strategic advantage. Cybercriminals and nation-states are relentless, skilled, and constantly evolving. For most companies, it's not a matter of if they will face a breach but when. Despite best intentions, no company is prepared for the moment that "when" turns to "now."

There are several misconceptions fueling an inflated sense of security. Only by acknowledging these limitations can organizations begin to effectively address the challenges when it's their turn under the gun.

Our Plan Will Guide Us Safely Through a Crisis

Incident response (IR) plans have been an essential component of most companies' cybersecurity strategies for a long time. But when an attack takes place and the rubber meets the road, many IR plans tend to be overly strategic and somewhat theoretical, lacking real value for security teams on the ground who are trying to mitigate the impact.
In practice, they often fall short because the plan does not include the detailed information necessary to address the chaotic, real-world nature of a cyberattack and the high-stress decision-making that takes place when an attack occurs.

When talking with firms specializing in cybersecurity, we hear the same thing almost without exception: We've never once used a company's IR plan as part of our process. These plans often are too high-level, updated once a year at best, and predominantly focused on broad, strategic directives. When an attack occurs, the immediate need is for clear, actionable steps that reflect the dynamic, evolving nature of the breach, not just an outline of who should be informed and when.

We Nailed Our Tabletop Exercise, So We're Ready

While tabletop exercises are valuable tools for familiarizing teams (and especially leadership) with incident scenarios, they fall short when it comes to executing in the face of the complexities of a real-world attack.

It's hard enough to gather multiple departments -- legal, compliance, IT, public relations, and senior leadership, to name a few -- each with their own priorities and spread out across multiple locations and time zones, during times of real crisis. Now imagine trying to get a half-day block onto the calendars of the employees needed for the tabletop to be effective -- employees who are likely to write it off as an inconsequential training exercise. To maximize participation and secure critical buy-in from across departments, organizations should consider hybrid or staggered exercises that mimic the complexity of live incidents.

When the time comes, most internal teams -- no matter how recently they've had their last training -- will default to what they know. In times of crisis, people will inevitably drop everything and start executing. That often means they do it without planning or following existing procedures, if those even exist.
Worst Case: We Break Glass and Experts Come to the Rescue

Many organizations fall prey to the heroic expertise fallacy: the belief that if something catastrophic happens, expert third parties -- external incident response teams, lawyers, and consultants -- will swoop in and save the day. While third-party experts are certainly skilled at what they do, it takes costly time for them to develop the understanding that will allow them to be effective.

Additionally, during large-scale cyber incidents, your company is not the only one calling for help. If multiple organizations are affected, external IR teams and law firms may be overwhelmed, with larger companies -- often with bigger budgets -- taking precedence. It's a harsh reality: Expert help is often in high demand, and when everyone faces the same crisis, response times can be slower than anticipated, even if you're paying through the nose for it.

Building Cyber Resilience in an Unpredictable Landscape

No organization is truly prepared for a cyber incident. Attacks are unpredictable, messy, and fast-moving, and no amount of planning can fully eliminate the risks. That said, proactive planning is critical in reducing the potential impact of an incident. Successful organizations recognize the inherent uncertainties and complexities of a breach, even a small one, and take steps to prepare much more thoroughly.

The goal isn't to achieve perfect preparation. That's impossible. Rather, it's to build resilience, flexibility, and the organizational muscle memory to respond effectively when the inevitable occurs.

About the Authors

Andy Lunsford, Chief Executive Officer and Co-Founder, BreachRx

Andy Lunsford is CEO and co-founder of BreachRx, provider of the first intelligent incident response platform designed for the entire enterprise. Prior to founding BreachRx, Andy spent 15 years in privacy law and large-scale commercial litigation.
Andy co-founded BreachRx to transform incident response and reporting into a routine operational business process while shielding C-level executives from personal liability. Andy has a BA from Washington and Lee University, a JD from the University of Arkansas, and an MBA from the Wharton School of the University of Pennsylvania.

Matt Hartley, Chief Product Officer and Co-Founder, BreachRx

Matt Hartley is co-founder and chief product officer of BreachRx. He is a 20+ year innovator in cybersecurity, threat intelligence, cyber warfare, and information operations. Prior to BreachRx, he was a senior vice president of engineering at FireEye and vice president of product at iSIGHT Partners, where he held a variety of other leadership roles. Matt previously served in the US Air Force in the Air Intelligence Agency and Air Force Information Warfare Center. After leaving the military, he led research and development teams creating disruptive, next-generation cyber and information security, cyber warfare, and information operations technologies at Sytex Inc. and Lockheed Martin's Advanced Technology Labs. Matt holds a CISSP and a bachelor's and master's in computer & systems engineering from Rensselaer Polytechnic Institute.
  • What CIOs Should Know About Post-Election Winners and Losers
    www.informationweek.com
The Trump Administration is making big changes faster than any other modern president. The movement of many kinds of levers -- tariffs, buyout packages, government layoffs, and more -- is disrupting the status quo, and the effects will reach tech companies and enterprise chief information officers.

As during the pandemic, the extreme and rapid changes require extreme organizational agility, Monte Carlo simulations, an open mind, and a CIO unafraid to lead. As with all major changes, organizations need to have a strong vision and the ability to execute it within the context of changing circumstances.

Organizations Will Depend More on MSPs

Jonathan Lerner, president and CEO at MSP InterVision Systems, believes deregulation will create a lot of confusion.

It's about your customers, the people you work with every day, who are going to have a lot of questions right now, says Lerner. Small business owners, CIOs, and people just trying to keep their systems running smoothly will be asking their MSPs, How does this affect my data, network security, and software updates? How will I continue to innovate and better serve in a period of uncertainty? In this business, our job is to simply provide stable, reliable business solutions, and these kinds of rapid changes make that harder.

InterVision Systems will focus on strengthening its security and compliance expertise to stay a step ahead. Lerner says his company will be spending a lot of time helping its customers understand the new rules and how to adapt to them.

Jonathan Lerner, InterVision Systems

We're going to have to be flexible, focusing on sustaining strong relationships, listening to our customers, and providing clear, practical advice focused on outcomes to drive their strategy, says Lerner. I hope this leads to less red tape so that businesses can thrive.
But, in the meantime, my advice to anyone in our field, especially MSPs, is to be prepared to become a guide.

Businesses Will Also Turn to Consultants

Jenny Rae Le Roux, CEO at consulting industry news publisher and business skills training company Management Consulted, says that while there are always economic winners and losers, the consulting industry is the canary in the coal mine for who is who.

Outsized demand for services from one sector [or] function usually indicates robust growth or big challenges in the broader economy, says Le Roux. The sectors that will grow in 2025 [are] supply chain, healthcare, and cloud services. The losers [will be] businesses that focus on DEI, ESG, and federal government consulting work.

There is also a strategic shift occurring among clients that is driving demand for more consulting assistance.

Jenny Rae Le Roux, Management Consulted

2025 is bringing with it a reordering of traditional business cycles. Typically, firms think about macro business cycles in eight-year increments. The first four years are focused on growth, and the second four years are focused on cost optimization, says Le Roux. As AI and trade policy transform the way the world does business, clients are now asking firms to help them deliver on a dual mandate: drive growth and optimization simultaneously. This is a meaningful driver of increased demand for consulting services.

Data Security Will Remain a Priority

Arnaud Treps, CISO at Salesforce data security platform Odaseva, expects that some companies will find themselves unprepared for policy changes and will be forced to scramble to catch up.

Organizations that aren't proactive or are incapable of rapidly pivoting in the face of shifting regulatory environments will suffer, says Treps. Regardless of changes in regulations, policy, or administrations, the underlying security challenges remain.
Even if there are fewer regulatory requirements, security threats don't just disappear if the regulations do. As a result, data security investments will be driven more by business needs than by compliance alone.

Odaseva is encouraging its customers to implement the strongest security and management capabilities, so that regulatory or policy changes don't require exponential or rapid scaling up of data security and management approaches.

We will carefully monitor policy changes and identify trends, while continuing to offer products and services to our customers that allow them to independently secure their data at the highest level and achieve agility so they can pivot as necessary based on policy and/or geopolitical changes, says Treps.

Of course, it's unclear where regulations, policy, and geopolitics are headed in the short term and the long term.

Arnaud Treps, Odaseva

Securing and managing SaaS data is the most important thing you can do in the face of regulatory uncertainty, says Treps. Understanding your data model and how employees, third parties, fourth parties, and customers all interact with it puts you in the best possible position to navigate regulatory changes as they emerge.

Accessibility Will Suffer

Josh Miller, co-CEO at media accessibility company 3Play Media, says that with the Trump administration's focus on abolishing DEI and a suit filed by 17 states against Section 504 of the Rehabilitation Act, there is obvious risk to the accessibility space.

As a vendor that provides accessibility services, we are close to the impact on people with disabilities, vendors that provide accessibility services, [and] businesses that have prioritized accessibility and are now questioning whether they still should or need to, says Miller. Ultimately, it would be naive to think that the current political climate won't have a negative impact on accessibility.
Some businesses will deprioritize making their websites, products, and spaces accessible.

Federal enforcement of accessibility law may wane, but individual or independent litigation and state enforcement will persist. People with disabilities will continue to fight for their right to access, and many organizations will continue to support them.

Miller says 3Play Media began pushing into the video localization space to expand its footprint beyond accessibility. In the accessibility space, the company plans to support Canadian and upcoming EU accessibility regulations under the European Accessibility Act (EAA), where enforcement is still a priority. It will also continue to support the many U.S. customers that prioritize compliance with accessibility laws, making sure their services are accessible to the millions of people in the U.S. with disabilities.

Energy Companies Will See Uneven Impacts

Chris Black, CEO at GridX, says most of what happens in the electric and gas utility space is decided state by state. It's both a blessing and a curse when selling to utility companies, because they answer to different state regulators with different objectives. This means his company is more subject to state-level decisions than national ones.

Companies like GridX that work in the back-of-house operations of utilities are treated more like infrastructure investments than anything else. Utilities make decisions on 20-year cycles, not four-year political terms, says Black. When utilities invest in grid infrastructure, they're planning for decades, which gives us some insulation from the political winds of the moment.

However, not everyone in the utility tech space is so fortunate.
Players in certain areas, such as residential energy-efficiency credits and offshore wind, should have a plan B in place because they rely on federal support and are frequently tied to green messaging that may be less favored in the current climate.

For companies traditionally emphasizing environmental benefits, you can still achieve the same positive impacts while shifting your communications to highlight cost savings and reliability, says Black. Getting people to shift their energy usage away from peak times aligns with clean energy and renewables goals, even if you're not leading with an environmental message.

The biggest challenge will be prioritization. For example, a utility may have requests for $30 billion in infrastructure projects but can only fund $8 billion of them.

Those tough choices will become even more critical, and companies that can demonstrate immediate economic value will be at an advantage. For those in areas that feel more vulnerable to administrative changes, now's the time to focus on complementary offerings or pivot to aspects of your business that are less dependent on federal priorities, says Black. The fundamentals haven't changed: Utilities must deliver reliable service and modernize aging infrastructure. Companies that help them do this more efficiently will continue to find opportunities, regardless of who's in Washington. Just be prepared to frame your value in terms that resonate with the moment while staying true to your core mission.

Bottom Line

Some CIOs will be harder hit than others by the changes the current administration is making. This has been true of every administration. However, the speed of change this time around is unprecedented, so organizations need to focus on organizational agility, partnering with consultants and vendors that can help them weather the shifts while meeting the demands of customers.
  • Why Every Company Needs a Tech Translator -- And How to Be One
    www.informationweek.com
Ebrahim Alareqi, Principal Machine Learning Engineer, Incorta -- March 19, 2025 -- 4 Min Read -- Cagkan Sayin via Alamy Stock

Technology is transforming every industry, but the biggest roadblock isn't the technology itself -- it's communication. Engineers and executives often don't speak the same technical language, which means big ideas get lost in translation, budgets get wasted, and projects fail before they even get off the ground. That's where what I like to call "tech translators" come in. They connect the dots, helping technical teams and leadership stay aligned.

I know this struggle all too well. At my previous job at Volvo, I led AI initiatives and saw firsthand how technical ideas were misunderstood in executive meetings, leading to misalignment and missed opportunities. This experience highlighted the need for better communication between technical and business teams. Recognizing this challenge, I decided to deepen my understanding of business by pursuing an MBA, which gave me the tools to communicate technology's impact in a way that resonates with executives. That decision completely changed my approach, and it's why I believe tech translators are critical to every company investing in data and technology. If you want to future-proof your career and make a bigger impact, here's how you can become an indispensable tech translator.

1. Learn the business basics

Too often, tech teams build amazing solutions that never get adopted because they don't tie back to business priorities. Understanding core business principles -- finance, operations, and strategy -- helps you connect the dots between technology and real-world results. At Volvo, my team worked on an AI-powered recommendation system for the online car configurator. While we focused on accuracy and relevance from a technical perspective, business leaders cared more about cost savings and efficiency.
Once we demonstrated how much the system could reduce stockouts of cars, it became a priority.

My tip: A few online business courses, some reading on corporate finance, or even an MBA can go a long way in strengthening your ability to bridge the gap between technology and business. In this case, knowledge is power -- and connection.

2. Be a bridge between tech and strategy

Being a tech translator means having a foot in both worlds. You need to understand the business objectives while also keeping up with technical developments. That means showing up to both business strategy meetings and technical standups. At my current company, I work with product managers, engineers, and marketing teams. I help marketing craft messaging that's both engaging and accurate while ensuring our data and automation strategies align with business needs.

My tip: Sitting in on meetings outside your core team -- whether it's product roadmap discussions, business reviews, or shadowing sales calls -- can help you understand customer pain points and the bigger business picture. Offer to explain technical projects to business leaders and vice versa.

3. Make technology understandable through storytelling

Even the most technical discussions benefit from storytelling. Humans remember stories more than raw data, and in a room filled with both technical engineers and business leaders, finding the right balance is key. For example, rather than simply stating, "We reduced processing time from weeks to days," tell the story of a customer who struggled with inefficiencies, how a specific integration challenge was solved, and what that meant for their business. This approach maintains technical depth while making a tangible impact.

My tip: Structure technical discussions as narratives. Whether it's a case study, an engineering challenge, or a breakthrough, frame the details within a story that connects the dots for everyone in the room.

4. Stay ahead of tech and business trends

Technology is evolving fast, and staying relevant means keeping up with both technical advances and business trends. I balance deep dives into research with staying plugged into industry conversations to make sure I see the full picture. I run internal knowledge-sharing sessions where we break down new trends and discuss how they apply to our business.

My tip: Staying ahead means keeping up with research papers, following business news, and participating in industry forums. Engaging with different perspectives can provide valuable insights into emerging trends. Consider exploring a mix of sources -- hackathons where young developers adopt zeitgeist technologies, technical blogs, leading tech publications like MIT Technology Review, and innovation-focused communities. Following influential thinkers on social media and monitoring leaderboards on platforms like Hugging Face and Hacker News can also help you stay ahead. The more perspectives you have, the better.

The most successful professionals in the data-driven future won't just build systems. They'll be the ones who can explain them, align them with business goals, and push them into real-world use. Becoming a tech translator isn't just a nice skill -- it's a game-changer for your career. For companies, the message is clear: If you don't have tech translators, you're wasting your technology investment. For individuals, the opportunity is huge. Master these skills, and you'll be indispensable in any data-driven organization.

About the Author

Ebrahim Alareqi, Principal Machine Learning Engineer, Incorta

Ebrahim Alareqi is a principal machine learning engineer at Incorta. With a PhD in Computational and Data-Enabled Science and Engineering and an MBA, he specializes in making data science and business strategy work together.
Connect with him on LinkedIn.
  • 8 Ways Generative AI Can Help You Land a New Job After a Layoff
    www.informationweek.com
Pam Baker, Contributing Writer -- March 19, 2025 -- 7 Min Read -- AndyS via Alamy Stock

It's hard to survive a layoff or a firing. It's tougher now than ever, given the long hiring cycles and the growing number of ghost jobs. Adding to the unemployment chaos are the massive White House and DOGE firings, as well as expected upticks in layoffs correlating with a spike in tariffs. The future looks bleak for the unemployed. "Layoffs aren't just numbers but a form of economic trauma," says Lars Nyman, CMO of CUDO Compute, a platform that powers many AI programs. "Right now, the job market feels like an obstacle course rigged against the people running it. Generative AI isn't a magic bullet and actually takes many jobs, but if you use it right, it can tilt the odds back in your favor," Nyman adds. But how do you use AI right to get another job? Here are eight ways to apply GenAI to your advantage.

1. AI as exorcist: expelling ghost jobs from your job hunt

Ghost jobs aren't a fluke. They are an actual business strategy for companies seeking to accomplish goals that often have nothing to do with hiring anyone now. "A whopping 40% to 50% of job postings are ghost jobs, meaning roles companies never intend to fill but leave up to look busy or fish for future talent. Generative AI tools can analyze patterns in job postings to flag likely fakes. Red flags would be vague descriptions, recycled listings, and positions staying open forever. If you're seeing those, move on," says Nyman. Use general generative AI tools like ChatGPT, Claude, Gemini, Copilot, or Perplexity AI to weed out ghost jobs. A sample prompt for GenAI tools connected to the internet: A ghost job is a job opening announcement which the company does not actually plan to fill now or maybe ever. The following are indicators that a job announcement is a ghost job: vague descriptions, recycled listings, and positions staying open (listed or relisted) for longer than 3 months in a one-year period.
Analyze the following job announcement to determine whether it is a legitimate job opening or a ghost job: [copy and paste job description here].

2. AI as scam buster: identifying scam job postings

Desperate times call all the predators forth to feast on desperate people. Watch out for fake jobs. You might apply, be interviewed, get hired, and fill out the onboarding forms with your personal data, including your Social Security number and bank details. After that, there's only crickets. You've been scammed. The job didn't exist. AI can help identify fake jobs posted by scammers. But do research job listings further, too; AI can give you a false red or green flag. "Feed the job posting text into AI, asking it to flag suspicious phrases or inconsistent requirements," says Sam Wright, head of operations and partnerships at Huntr. You can add flags to the prompt, too, like asking AI to check the email or web address in the job announcement against the web address and emails of the advertised company. Add any other red flags as well, so AI can make a better assessment.

3. Use AI to automatically fill out job application forms

Autofill helps fill out some fields on online forms, but not very many. That leaves you to fill out the rest of each job application form yourself. If you're smart and playing the numbers, you're filling out a lot of job applications. That means entering the same data over and over and over again. AI can do that for you, and much faster. "I used an extension called Simplify Jobs. It allowed me to fill out most job applications within one minute, which allowed me to apply to 30+ jobs within an hour. Generally, it takes 15-30 minutes for each application if you do it manually," says Devansh Agarwal, senior machine learning engineer at Amazon Web Services. "I set up LinkedIn alerts for the companies and job types that I was interested in, and every day I would receive emails with job posting alerts. Then I would use Simplify Jobs to apply to these roles," Agarwal adds.
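The sample prompt in tip 1 can be templated once and reused across postings. A minimal sketch in Python: the criteria wording mirrors the article's sample prompt, while the function name, structure, and example posting are hypothetical.

```python
# Reusable builder for the ghost-job screening prompt described in tip 1.
# The criteria text mirrors the article's sample prompt; the names and
# structure here are illustrative, not taken from any specific tool.

GHOST_JOB_CRITERIA = (
    "A ghost job is a job opening announcement which the company does not "
    "actually plan to fill now or maybe ever. The following are indicators "
    "that a job announcement is a ghost job: vague descriptions, recycled "
    "listings, and positions staying open (listed or relisted) for longer "
    "than 3 months in a one-year period."
)

def build_ghost_job_prompt(job_posting: str) -> str:
    """Combine the screening criteria with one pasted job posting."""
    return (
        GHOST_JOB_CRITERIA
        + "\n\nAnalyze the following job announcement to determine whether "
        "it is a legitimate job opening or a ghost job:\n\n"
        + job_posting
    )

# The result is pasted into whichever assistant you prefer.
prompt = build_ghost_job_prompt("Senior Widget Engineer. Fast-paced team.")
```

No provider-specific API is assumed here; the finished string can simply be pasted into ChatGPT, Claude, Gemini, Copilot, or Perplexity AI, per the article's suggestion.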
Agarwal also says that all the views he expressed in this article are his own and do not reflect those of his current or previous employers.

4. Use AI to tailor resumes and cover letters

You can use a generative AI tool to create your resume. "When creating your resume, you need to be precise, as recruiters spend less than 10 seconds looking at one. You need to write your experiences in the correct format; it takes time and is complicated to do yourself. ChatGPT can help you rephrase things and write them in the proper format," says Agarwal. But then be sure to use AI to repurpose your resume and cover letter to fit the exact requirements of each job application. Go the extra mile and prompt the GenAI tool to use keywords that will lead the AI on the other side to conclude an exact candidate match. You can automate job applications without looking like a bot. "There are tools like LazyApply that mass-send applications, but the value is in using AI to customize your resume and cover letter -- producing quality, accurate material at scale," says Nyman. "AI can tweak tone, highlight specific achievements, and adjust language to better suit company culture."

5. Use AI to boost your LinkedIn reach so more employers see you

Posting often on LinkedIn is a great way to build your following and profile views, which can be helpful to your job search. "During a job search there isn't enough time to focus on posting content on LinkedIn, but since it is the primary platform for job search, it is important to increase your network on it. People use GenAI to post things on LinkedIn and increase their reach. By using GenAI, it barely takes any time to create the posts, so you can continue to focus on job search and interview preparation," says Agarwal.

6. Use AI for practice in mock interviews

It can be hard to think of a good answer to an interview question on the spot.
Some people find it helpful to prepare ahead of time by interacting with GenAI tools in mock interviews or as career coaches. "Tools like ChatGPT or Gemini can simulate interview Q&As, giving you feedback and potential follow-up questions -- helping you feel more confident and prepared," says Agarwal. Don't forget to use AI to do your homework on the company before the interview, too. "Sometimes you need to review certain topics just before the interview. Searching for notes online is time-consuming, and oftentimes they are not concise. In this situation, you can ask LLMs to explain the topic and provide the important concepts with examples. This is extremely useful and can save hours of time," says Agarwal.

7. An extra AI tip for government employees and their supporters

Mass government layoffs have led to mass despair, with little recourse other than the courts. While you're waiting for your case to be heard, it may be helpful to connect with your federal representatives. You can use AI tools to write a series of emails that you, your family, your church, your friends, and community supporters can send daily to government representatives and hotlines. Changing the message to reflect the feelings and concerns of each person sending it, or simply changing the wording to keep the message fresh over time, is a good use of AI. Who knows, maybe a congressman will help you get your job back. It's worth a try.

8. Use AI to automate everything you can in the job search process

You don't have to limit your use of AI to the things on this list. Take notice of the things you're doing manually in the job search -- especially if you're doing the same thing repeatedly. Whatever those steps are, the odds are that you can use AI tools to do them automatically for you.
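In the spirit of that automate-everything advice, some of the red flags from tips 1 and 2 can even be scripted without an LLM at all. A minimal pre-screen sketch, assuming you know the company's legitimate domain; the regex, the 90-day threshold, and the sample posting are all illustrative, not from any real tool:

```python
import re

def contact_domain_mismatch(posting_text: str, company_domain: str) -> list:
    """Tip 2: flag contact-email domains that differ from the company's."""
    domains = re.findall(r"[\w.+-]+@([\w-]+\.[\w.-]+)", posting_text)
    return [d for d in domains if d.lower() != company_domain.lower()]

def is_stale(days_listed: int, threshold_days: int = 90) -> bool:
    """Tip 1's rule of thumb: open or relisted longer than ~3 months."""
    return days_listed > threshold_days

posting = "Contact recruiting@gmail.com to apply for this exciting role."
print(contact_domain_mismatch(posting, "example.com"))  # ['gmail.com']
print(is_stale(days_listed=120))                        # True
```

A posting that trips either check still deserves a human look; as the article warns, both AI and simple heuristics can raise false red or green flags.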
Give it a shot and see where it leads. "We've seen job seekers slash hours off their application process by harnessing generative AI for resumes, cover letters, and mock interviews, leaving them more time to handle the real-life challenges that come with a layoff," says Wright.

About the Author

Pam Baker, Contributing Writer

A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
  • Key Attributes That Lead to an Ethical IT Department
    www.informationweek.com
John Edwards, Technology Journalist & Author -- March 19, 2025 -- 5 Min Read -- Denis Putilov via Alamy Stock Photo

Artificial intelligence, video surveillance, facial recognition: Today's IT leaders must grapple with an increasing number of ethical dilemmas. While innovation supports business growth, it also creates opportunities for potential abuse. "IT leaders lead because they already have an important combination of procedural knowledge and ethics expertise," states Jonathan Beever, an associate professor of ethics and digital culture at the University of Central Florida. "IT leaders benefit, like we all do, from continued literacy building as new technologies and techniques challenge ethical understanding," he adds in an email interview. An ethical IT department operates with transparency, integrity, and accountability while balancing the needs of the business and its customers, says Mike Lebron, senior IT director at photography and imaging firm Canon USA. "This involves not only adhering to regulatory standards, but also proactively addressing ethical considerations that may arise from the use of technology," he notes via email. "By fostering an environment where ethical conduct is prioritized, IT departments can help build trust both internally within the organization and externally with customers and partners."

First Steps

An important first step is embracing the classical adage of knowing thyself, Beever says. "What values guide you personally?" He explains that values shape decisions implicitly, and making values explicit helps leaders understand their own actions and decisions. Beever, who is also the director and co-founder of the UCF Center for Ethics, advises IT leaders to question the values that guide their department. "Are these clear and transparent to all stakeholders?" Also consider what conflicts might arise between individual values and department commitments.
"Finally, what ethical decision-making strategies can help navigate those possible conflicts?" Codes of ethics provide guidance at the organizational level. Yet broader strategies, such as principlism, suggest key ethics principles -- beneficence, nonmaleficence, respect for autonomy, and justice -- that cut across departments, cultures, and disciplines, Beever says. "Since interdisciplinary work is essential for IT departments, maybe now more than ever shared ethics principles can help communication about values across boundaries." Success in the digital era hinges on trust, and an ethical approach to all aspects of IT operations fosters that trust, Lebron says. "Trust builds a virtuous cycle that enhances collaboration and strengthens relationships," he explains. When stakeholders -- including employees, customers, and partners -- feel confident that an organization's IT operations are guided by strong ethical principles, they're more likely to engage positively and collaborate effectively, potentially creating a stable and sustainable path forward. Trust is also the foundation of customer loyalty, and an ethical IT approach is key to maintaining and strengthening that foundation, Lebron advises. "Organizations that embrace ethical practices may experience quicker decision-making, resilience, and long-term sustainability."

Leadership Values

Ethically literate individuals are necessary to build ethical cultures, Beever says. "There seems to be a traditional corporate move to train top-down, as if regulations and rules could govern ethical behavior," he observes. Beever notes that professional ethics codes, such as the one created by the Association for Computing Machinery, push against this trend by directing responsible individuals. "But what opportunities do IT departments give their workers to develop the skills required to analyze, understand, and implement the principles of those codes?" he asks.
"An ethical IT department would couple procedural literacy to ethics literacy, in support of an ethical culture." Ethical considerations should be factored into every aspect of digital projects, from data privacy and cybersecurity to AI and automation, Lebron says. "Ethical IT practices help ensure that technology is used responsibly and that unintended consequences that could negatively impact customers are avoided," he notes. "By doing so, organizations can mitigate risks, enhance their reputation, and drive more meaningful innovation." Lebron believes the trust built from ethical IT practices can move the needle in all aspects of an organization, creating a competitive edge -- a true force multiplier. Responsibility and accountability for technology outcomes -- including failures -- are key to building trust between stakeholders and IT, Lebron says. "Ethical vendor selection means you choose partners who align with your organization's ethical standards," he explains. "Accessibility and inclusivity in technology allow you to create products and services that consider people with disabilities so that everyone benefits."

Ethics Success

Ethical practices should not come solely from within the IT department, Lebron advises. "They should also be shaped by those whom IT serves and supports." Engaging with a diverse set of stakeholders -- including employees, customers, partners, and community members -- helps ensure that ethical standards reflect a wide range of perspectives and needs. Inclusivity not only builds trust but also helps create more comprehensive and relevant ethical guidelines, Lebron says. Furthermore, open communication channels allow the continuous exchange of ideas, fostering a culture of transparency and mutual respect.
"By embracing diverse inclusion and active communication, IT departments can ensure that their transformation efforts are well-informed, equitable, and truly supportive of all stakeholders."

About the Author

John Edwards, Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • From AI Fling to the Real Thing
    www.informationweek.com
Lindsay Phillips, COO and Co-Founder, SkyPhi Studios -- March 18, 2025 -- 4 Min Read -- Li Ding via Alamy Stock

When AI is successfully implemented, it fundamentally changes your team. Much like a romantic relationship, a new partnership is formed -- greater than the sum of its parts. You must approach AI not as a side piece but as a full-fledged partner, ready to work differently and, ultimately, better together. The question isn't whether to adopt AI, but how to ensure it leads to meaningful use. Just like picking a mate, companies must evaluate tools carefully and integrate them thoughtfully in order for the relationship to work. Think of AI adoption in terms of relationship stages: honeymoon, conflict, commitment, and thriving.

Phase 1: The Honeymoon

You've identified a need, purchased an AI tool, and are excited to get started -- swoon. You're daydreaming about what this new team member will bring to the table.

Emotions at this stage: Excitement runs high, but engagement is sporadic as the team adjusts to the new tool. Optimism might blind you to the inevitable complexities of long-term integration.

Risks at this stage:

Choosing the wrong (adoption) partner, or going forth without an adoption plan at all.

Not having crucial conversations, or setting unreasonable expectations for the tool or your team and overwhelming both.

Action to be taken at this stage:

Define business objectives and how the tool should support them. Define clear goals.

Create an adoption plan or find an adoption partner. How do you expect people to change to use the tool, and are your expectations realistic?

Phase 2: Conflict Arises

A heart-sinking moment; you've had your first fight. As you start working with the tool, conflicts emerge: misaligned workflows, unclear responsibilities, or differing interpretations of the tool's value.

Emotions at this stage: Frustration and confusion dominate.
The excitement of the honeymoon gives way to chaos as the team struggles to integrate the tool into daily operations.

Risks at this stage: Disengagement can tank adoption, leading to distrust in leadership and abandonment of the tool altogether.

Action to be taken at this stage:

Clarify roles and responsibilities. Identify which tasks AI will take over and how your team must adjust to make room for this.

Redesign workflows. Map how data flows through the system. Define who handles each step and how the AI's outputs are utilized.

Set expectations for both team and tool. Training is important, but it's more critical to align on when and why to use the tool than how.

Phase 3: Commitment to Working Through the Kinks

Now comes the commitment phase. You've decided to put in the effort to make the relationship work. This is where your team begins to norm -- finding ways to resolve conflicts, clarify roles, and build trust.

Emotions at this stage: Calm and determined. The team is less reactive, focused on solving problems, and unified in working toward shared goals.

Risks at this stage: Complacency can derail momentum, pushing your team back into conflict or leading to abandonment if vigilance wanes.

Action to be taken at this stage:

Assign owners and incentives. Designate individuals responsible for AI implementation and incentivize their success.

Hold regular check-ins. Create opportunities to address challenges and refine processes.

Celebrate wins. Acknowledge progress to keep morale high and reinforce positive behaviors.

Phase 4: Thriving Together

Your team and AI tool are in sync, working seamlessly together. The partnership has matured into something greater than the sum of its parts -- a thriving relationship. You're no longer focused on making it work; you're discovering new ways to grow together and achieve shared goals.

Emotions at this stage: Excitement and pride.
Your team feels empowered by what you've built together and evangelizes the success -- confident in its ability to work, evolve, and last.

Risks at this stage: Even in a thriving relationship, there's a risk of falling into complacency. If you stop nurturing the partnership, you may achieve some success, but you'll miss out on its full potential. Staying curious and engaged ensures your (AI) partnership continues to grow stronger and more meaningful.

Action to be taken at this stage:

Expand responsibilities. Just as in a strong relationship, trust allows you to take on new challenges together. Build on initial success by exploring new use cases for the tool.

Stay curious. Keep the spark alive by asking: What else can this tool do? What's next for us?

Foster a community of practice. Identify super-users who act as ambassadors, sharing insights and helping others deepen their connection with the tool.

In Perfect Harmony

Adopting AI is not a one-and-done affair. It's a process that requires intentionality, flexibility, and commitment at every stage. By treating AI as a valued partner -- one that requires clear communication, defined roles, and ongoing support -- you can move beyond the initial honeymoon phase and build a lasting, thriving relationship. With the right approach, AI can transform your organization, allowing your team to achieve more. The key is ensuring that both sides -- human and machine -- are willing to work differently to work better together.

About the Author

Lindsay Phillips, COO and Co-Founder, SkyPhi Studios

Lindsay Phillips is the co-founder and chief operating officer of SkyPhi Studios, a change firm that delivers transformative success by empowering organizations to realize the full value of their digital investments. She specializes in guiding organizations through change, fostering collaboration, and enhancing engagement.
Her expertise in leadership coaching, sales process support, and culture change initiatives helps organizations not just adopt new tools but embrace a holistic approach to transformation.
  • Implementing an IT-User Exchange Program
    www.informationweek.com
Like foreign student exchange programs, a regular exchange program between the IT team and end-user departments -- in which an IT business analyst spends six weeks in an end-user area doing end-user work, and a person from the end-user area spends six weeks in IT -- can build bench strength and collaborative relationships between IT analysts and business users. Yet many who have tried this idea have exited with mixed results. What are the pitfalls, and is there a way to run an employee exchange program that delivers quality outcomes for everyone?

First, Why Do It?

Cross-disciplinary team building and the development of empathy and understanding of the business and IT across departments are the driving forces behind user-IT employee exchanges. You can't teach practical company business acumen to IT staff with textbooks and college courses. IT needs boots-on-the-ground experience in user departments, where business analysts directly experience the day-to-day process problems and pain points that users do. End users who take a tour of duty in IT have a chance to see the other side, which must plan carefully about how to integrate and secure software while users complain that application deployments are taking too long. On paper, there is virtually no one in user-department or IT management who thinks that employee exchange is a bad idea. So, why haven't these exchanges been widely embraced?

Pitfalls

There are several reasons why employee exchanges between users and IT have faltered:

1. The time commitment

Whether you're in IT or end-user management, exchanging an employee who is fully trained in your department for another employee who will be, at best, a trainee is not an easy sacrifice to make. There are projects and daily work to accomplish. Can your department afford an employee exchange that could compromise productivity when you might already be running lean?

2. Lack of management commitment

The user-IT employee exchange starts out strong, with both user and IT management highly enthusiastic about the idea. Then an unexpected priority comes up on either the user or IT side, and the affected manager says, "I'm sorry. I'm going to have to pull back my employee from the exchange because we have this important project to get out." I've seen this scenario happen. Employees get pulled out of the exchange program, and in good faith their managers try to reengage them in the exchange once the crisis has been resolved, but the continuity of the exchange has been interrupted and much of the initial effort is lost.

3. Failure to set attainable goals

Often, users and IT will agree to an employee exchange with a loose goal of immersing employees in different departments so they can gain a better understanding of the company. The employees, and those they work with in their new departments, aren't really sure what they should be focusing on. When the exchange period ends, no one is exactly sure what knowledge has been gained, and they can't explain it to upper management, either.

4. Lack of follow-up

Did the employees in the exchange come back with value-added knowledge that is aiding them in the new projects they are doing? Most managers I speak with who have done these exchanges tell me that they're not sure. One way to be sure is to check in with employees after they complete exchanges to see what they've learned and how they're applying this new knowledge to their work. For example, if an IT employee goes to accounting to learn about risk management and works six weeks with the risk group, does the employee come back with new knowledge that helps them develop more insightful analytics reports for that group?

5. Lack of practical know-how

Lack of know-how in running employee exchanges goes hand in hand with the failure to set attainable goals or to follow up. The managers who are best in these areas are individuals who have backgrounds in teaching and education, but not everybody does. When you exchange employees for purposes of knowledge transfer and growth of business understanding, setting goals, staying with the process, and following up are fundamental to execution. Unfortunately, many managers who try exchanges lack skills in these areas.

6. Employee transfer requests

Many managers fear that the employees they send to other departments might like the work so well that they request a permanent transfer. This is a major fear.

Doing an Employee Exchange

Given the pitfalls, it's small wonder that employee exchange programs aren't aggressively pursued, but that doesn't mean they don't work. Where do they work?

1. Companies that want to improve their employee retention

Several years ago, a major appliance manufacturer offered an internal program in which employees could sign up for projects outside of their regular business areas and get time to work on those projects. Other companies have followed suit. This outside-the-department work unlocked employee creativity and career growth opportunities. It improved employee morale, which in turn reduced employee churn. In 2024, overall employee churn at US companies was 20%, or one in five employees. With a tight job market, companies want to reduce churn, and expanding employee work experiences and knowledge is one way to do it.

2. Organizations that require cross-training

The military is a prime example of this. Recruits are trained in a variety of different functional areas to determine where they best excel.

3. Not-for-profit entities

Credit unions and other not-for-profit entities have historically been great proving grounds for employee exchange programs because of their people orientation.
Upper and middle managers are genuinely committed to the idea of employee growth through cross-training. The not-for-profit culture also promotes resource sharing, so managers are less resistant to the idea that they could lose a valuable employee to another department because the employee likes working there.

4. When clear objectives are set, and follow-up is done

An employee exchange requires clear objectives to succeed at an optimal level. For example, you don't send an IT staffer over to accounting to learn the clerical processes of closing the month-end financials and reporting them to management. If it's taking finance three days to do the month-end close, you send an IT employee over to learn the process and its obstacles, and to determine why the close takes finance three days instead of one. The hope is that the employee returns to IT and works on the tech side of the process so the month-end close can be done in one day. That's a clear business win.

Summary

For managers who are uncomfortable with employee exchanges, it might be best not to attempt them. But for those who can see the benefits of these exchanges, and who can answer a solid yes to their commitment levels, employee exchanges can work extraordinarily well for everyone involved.
  • Toxic Cybersecurity Workplaces: How to Identify Them and Fix Them
    www.informationweek.com
Toxic workplaces have been a prevailing theme in the zeitgeist for decades -- the phrase was first used in a 1989 nursing leadership guide. Discussion of workplace dissatisfaction reached a fever pitch with the advent of social media. Disgruntled workers took to the web, sharing their experiences of abusive managers, unrealistic expectations, grueling hours -- and a plethora of more minor complaints as well. Thus, it might be argued, the meaning of the term has been diluted. Surely, there are differences between being regularly berated by a supervisor for insignificant infractions, or refusals to acknowledge an employee's personal commitments, and the occasional request for overtime or expectations of inconvenient social conventions.

Even if the intended meaning has drifted, the discourse on workplace toxicity has identified a range of prevailing tendencies that have severe consequences both for employees and the organizations they work for. Cybersecurity is no exception -- and toxicity appears to be particularly pernicious in this profession for a variety of reasons. It is likely exacerbated by the cybersecurity shortage -- small teams are expected to carry heavy workloads, and their managers bear the brunt of the consequences for any failures that occur. This zero-failure mentality results from a siloed structure in which cybersecurity professionals are isolated from other parts of an organization and expected to carry the entire burden of protection from attacks without any assistance. Individuals are blamed for events that in reality result from institutional failures -- and those failures are never addressed. This is exacerbated by a general lack of people skills among managers and poorly executed communication.
These factors lead to a bullying managerial culture, demoralized staff, burnout, high turnover rates -- and ultimately, a greater likelihood of breaches. Here, InformationWeek looks at the factors contributing to toxic cybersecurity environments and the steps that CISOs and other IT leaders should take to correct them, with insights from Rob Lee, chief of research at cybersecurity training company SANS Institute, and Chloé Messdaghi, founder of responsible AI and cybersecurity consultancy SustainCyber.

Tech Over People

One of the first organizational mistakes that can lead to toxicity in the cybersecurity workforce is an emphasis on packaged solutions. Slick marketing and fast-talking salespeople can easily lead anxious executives to purchase supposedly comprehensive cybersecurity packages that offer assurances of protection from outside attackers with very little work or additional investment. But even the most well-designed package requires maintenance by cybersecurity professionals. "Ninety percent of the cybersecurity market is product based," Lee says. "You can have an amazing Boeing strike fighter, but you still need a pilot to run it." The failure to understand the demands of this work can lead to underfunded and understaffed departments expected to keep up with unrealistic expectations. CISOs are thus compelled to pressure their employees to perform beyond their capabilities, and toxicity soon results.

Siloed Security

Even in cases where cybersecurity teams are reasonably funded and given a degree of agency in an organization's approach to protecting its assets, their efficacy is limited when the entire burden falls to them. If an organization does not implement top-down practices such as multi-factor authentication and education on phishing scams, it regularly falls to the cyber team to clean up preventable messes. This can shift focus from other proactive measures. "There are conflicts when the organization is trying to enable innovation and freedom," Lee says.
"Security still has to do monitoring and restrict access." Silos develop within cyber teams themselves, too. Teams focused on compliance, risk assessment, and operations may have very different priorities. If they are not in regular communication, those priorities cannot be reconciled. This leads to further conflict and inefficiency.

Resources Versus Reality

The availability of both staff and funding can negatively affect a cybersecurity work environment. Tiny teams faced with massive defense tasks are likely to feel overburdened and underappreciated, even under the best management. Understaffed cyber teams are frequently the result of underfunding.

Chloé Messdaghi, SustainCyber

"When you go to, like, the board or the executive team, they'll say, 'No, it's not needed. We don't need more funds,'" Messdaghi relates. "They don't understand why security is important. They see it as setting money on fire." One study found that cybersecurity budgets were only expected to increase by 11% from 2023 to 2025 despite the exponential rise in threats, putting the onus on already strained cybersecurity teams to make up the difference. These unrealistic expectations are likely to lead to employees being burned out. But that is not the whole picture: Burnout also comes from bad leadership. "Burnout is not caused by the amount of work you have. It's about leadership and a lack of communication," Messdaghi argues.

Toxic Personalities in Management

Toxicity trickles down -- from management to the most junior of employees, no matter the industry. This appears to be particularly true in cybersecurity. One of the worst traits in upper management appears to be apathy -- simply not caring much about cybersecurity at all. This can lead directly to underfunding or band-aid solutions that leave teams scrambling to compensate.
These types of executives dismiss admonitions to implement password security procedures and phishing tests across the organization, considering them to be meaningless exercises. When cyber teams do raise relevant issues with management, they may be dismissed or treated as irritations rather than as people who are attempting to do their jobs. Further, when errors do occur, they are pinned squarely on these underfunded and understaffed teams.

Cybersecurity team leaders themselves can contribute to toxic environments, even if upper management is supporting solid practices. Micromanaging employees, publicly or privately abusing them with demeaning or profane language, and refusing to listen to their concerns can lead to disengagement, adversarial relationships, and decreased performance. Research has identified such managers as petty tyrants, so involved with their own sense of importance in the organizational scheme that they feel entitled to these behaviors. Their behaviors may more directly affect their subordinates due to the small size of many cyber teams -- their toxicity is not diffused across many employees, and their handful of subordinates bear the brunt. These behaviors may be further exacerbated by the shortage of skilled cybersecurity employees -- someone who is able to manage a team on a technical level remains valuable even if they lack people skills and manage in an abusive fashion. And some leadership toxicity may simply be the result of managers not being enabled to do their jobs. "CISO burnout is extremely real," Lee says. "There are a lot of people saying, 'I'm never doing this job again.'" When good managers leave due to toxicity from their superiors, the effects can be devastating for the entire organization.
"They'll take half the team with them," Lee says.

Toxic Tendencies in Cyber Teams

As poisonous as the behaviors of executives and managers can be, some of the toxicity in cybersecurity workforces can come from within the teams themselves. A prevailing toxic tendency is the so-called hero complex -- highly skilled employees shoulder enormous workloads. This can lead to resentments on both sides of the equation. The hero may resent what they perceive to be an unfair burden, carrying the weight of less-invested employees. And other employees may resent the comparison to heroes, whose work ethic they feel unequipped to match. Some heroes may become bullies, feeling entitled to push others out of their way in an effort to get their work done, and others may feel bullied themselves, forced to shoulder the consequences of the incompetence of their colleagues. This personality type may be prevalent in cybersecurity teams due to the history of competition in the industry, beginning with early hackers. Hierarchies based on achievements -- such as medals -- have been reinforced by the entry of ex-military members into the workforce. The prevalence of these personality types has, likely unintentionally, led organizations to feel comfortable with understaffed cybersecurity departments because the work does ultimately get done, even if only by a few people working under unsustainable pressures. But it also creates single points of failure: When one hero finally slips up, the whole enterprise comes crashing down.

Blaming and Shaming

Blaming individuals for security events is a hallmark of toxic cybersecurity culture. While events can often be traced to a single action by an employee, those actions are typically the result of a defective system whose failure cannot be attributed to one person. The zero-intrusion mindset that prevails among executives who do not understand the cybersecurity landscape can exacerbate the blame game.
Intrusions are a near inevitability, even in scrupulously maintained environments. Coming down on the people who are responsible for containing these events, rather than congratulating their effective work at containing them, is going to result in resentment and anger.

Rob Lee, SANS Institute

"There's this assumption that someone did something wrong," Lee says. "There are no medals awarded for stopping the intrusion before it does something devastating." This type of behavior can have even further consequences. Employees who know they will be excoriated if they make a mistake, or who have been faulted for the mistakes of others, are likely to conceal an error rather than bring it to the attention of their superiors, which is likely to make a potential breach even worse. "There are always going to be people who are curious and want to work on improving themselves," Messdaghi observes. "And then you're going to have people who are going to blame others for their wrongdoings."

Effects on Employees

Toxic cybersecurity environments can have substantial effects on the physical and mental health of employees. Stress and anxiety are common, in some cases leading to more severe consequences such as suicidality. One study of the industry found that over half of respondents had been prescribed medication for their mental health. Conflicts, infighting, and bullying can increase in a vicious feedback loop, according to research by Forrester. These factors can result in apathy toward the job, leaving the team, and eventual exit from the industry entirely. Nearly half of cyber leaders are expected to change jobs this year, according to a 2023 Gartner report. Simultaneously, unrealistic performance expectations lead to further staffing problems.
There may be little interest in entry-level employees due to their perceived lack of skills, even as more experienced staff head for the door. And stress is only growing -- 66% of cybersecurity professionals said their job was more stressful than it was five years ago, according to a 2024 survey.

Risks Created by Toxicity

According to a study by Bridewell, 64% of respondents to a survey of cybersecurity professionals working in national security infrastructure saw declines in productivity due to stress. The apathy, annoyance, stress, and eventual burnout that result from toxic cybersecurity workplaces create prime conditions for breaches. Errors increase. Team members become less invested in protecting organizations that do not care about their well-being. Rapid turnover ensues, decreasing team stability and the institutional knowledge that comes with it. A 2024 Forrester report found that teams that were emotionally disengaged from their work experienced almost three times as many internal incidents. And those that lived in fear of retribution for errors experienced nearly four times as many internal incidents. These conditions exacerbated the risk of external attacks as well.

Fixing the Problem

Addressing toxicity in cybersecurity is a tricky proposition -- not least due to the vagueness of the term. Distinguishing toxicity from acceptable workplace pressures is highly subjective. CISOs and IT leaders can institute a number of practices to ensure that cyber teams are getting the resources and support they need. Regular meetings with superiors, anonymous surveys, and open conversations can elicit useful feedback -- and if that feedback is actually implemented, it can create more positive and productive conditions. Even the best cyber managers can only do so much to address unrealistic pressures and failures across the organization that result in risk.
If resources and time are not allocated appropriately, toxicity is likely to fester despite the best efforts of everyone involved. "People who are open and good communicators -- these are the best qualities I see," Messdaghi says. "They don't need to be super technical. They just need to be there to support the employees and get them what they need."
  • Build Sustainable Data Centers in the Age of GenAI
    www.informationweek.com
Simon Ninan, Senior Vice President of Business Strategy, Hitachi Vantara
March 17, 2025
4 Min Read
Cagkan Sayin via Alamy Stock

Generative AI has incredible potential to improve productivity and drive innovation across domains and sectors. But the challenge to enterprises is twofold, as cost and carbon footprint complicate the path forward. GenAI models require growing volumes of data and storage, and the graphics processing units (GPUs) on which GenAI relies to move and process data require an enormous amount of energy to run. The complexities of price and power drain that result from the massive amounts of data and GPUs are a huge hurdle for enterprises seeking to create a greener, more sustainable future. This has important implications for hyperscalers and other enterprises that operate data centers, as well as for businesses, communities, and all living things across the planet. Despite barriers to early adoption, GenAI is here to stay. Software developers, support staff, and consumers are already using GenAI, which will only surge with greater applications and adoption, thus becoming as common as mobile devices. That said, sustainable data centers will be critical for GenAI expansion and the world. I predict that sustainable data centers will become mandatory in the next five years. Here are five steps to help your enterprise build a sustainable data center:

1. Address data management

Data is the lifeblood of GenAI applications, so it's important for enterprises to have data management systems and strategies that provide access to the right data at the right time. Application data may reside on premises, in the cloud, or at the edge. Ensure that your data management strategy enables you to access data from whichever platform suits your needs, at the optimal cost and in a secure manner.
This will contain costs, limit your risk, and help you differentiate your business. To address the rapidly growing data volume of GenAI, you should also explore compression technology, which can store the same data with 60% less footprint, decreasing your need for power and lowering your carbon emissions. Distributed, dark data is another challenge. With so much data in so many places, you may not know where all of your data resides. Address that by using data cataloging, compliance, e-discovery, and governance solutions to understand what data you have, where it sits, and how to integrate data to reduce waste and use your data to drive business results. Data governance and compliance can be tricky given the rapidly intensifying complexity. Many organizations lack the skills, industry knowledge, and expertise to tackle this alone. Engage with a partner with deep industry knowledge that has baked that knowledge into its tools.

2. Explore your data store

Optimal data management will help you strike the right balance between the data storage and processing you need to support your GenAI applications and your sustainability commitments. Don't store multiple copies of data. Only hang on to the data that your business really needs. Keep the most important information in hot storage. Put the data for which you don't have an immediate need in cold storage, which will reduce your costs and lower your carbon footprint. Select a data storage solution that is highly engineered for performance and sustainability. Validate the solution by reviewing what independent third parties say about its sustainability.

3. Process data where it lives

Data movement consumes time and power, so avoid unnecessarily moving data. Ensure you have the technology to process data close to the source, and use metadata management to access what you need and transport only the data you need to move.

4. Implement liquid cooling systems

GenAI attracted 100 million users in less than two months after launch, and an Enterprise Strategy Group study sponsored by Hitachi Vantara indicates that 97% of enterprises see GenAI as a top-five priority. But the massive GPU infrastructure needed to power GenAI applications generates a lot of heat in data centers. Over the years, companies have tried to cool equipment by doing everything from submerging data centers in the ocean to locating data facilities in remote parts of the world like Iceland. However, you don't need to dive deep or travel far to cool your data center. Instead, you can use liquid cooling, which can remove heat from dense GPU racks far more efficiently than air.

5. Take a holistic approach

Rapidly growing data, GPU requirements, and data centers -- and the complexity, risk, and environmental impacts they entail -- make sustainability more important than ever. The green data center of the future will be simple, smart, secure, self-healing, scalable, and sustainable. It calls for a holistic approach that addresses server, software, and storage efficiency, employs the right mix of sustainable energy sources, and uncovers opportunities to use innovations and eco-friendly solutions to make the best use of data and limit carbon footprint. When companies and data centers are more sustainable, everybody benefits. The world becomes a better place, customers enjoy better outcomes, and businesses grow stronger.

About the Author

Simon Ninan, Senior Vice President of Business Strategy, Hitachi Vantara

Simon Ninan is senior vice president of business strategy, responsible for developing and driving aligned execution of Hitachi Vantara's business strategy for the short and long term, with the goal of maximizing customer and stakeholder value while driving growth, innovation, and market leadership. Simon holds a Bachelor of Engineering degree in Information Science and an MBA in Strategy and International Business.
He resides in the Bay Area, where he enjoys reading, writing, hiking, and the occasional 3D jigsaw puzzle. He also volunteers with Kontagious, a non-profit organization that provides mentorship to immigrant students.
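The hot/cold tiering in step 2 and the deduplication and compression ideas in step 1 can be sketched in a few lines of Python. This is an illustrative sketch only: the 30-day threshold, the SHA-256 content-hash deduplication, and zlib compression are assumptions standing in for whatever policies and storage features a real enterprise platform would provide.

```python
import hashlib
import zlib

HOT_THRESHOLD_DAYS = 30  # illustrative cutoff: data untouched longer goes to cold storage

def plan_storage(objects):
    """objects: list of (name, days_since_last_access, payload_bytes).
    Returns a per-object plan plus raw vs. stored byte counts."""
    seen = {}                 # content hash -> canonical name (deduplication)
    plan, raw, stored = {}, 0, 0
    for name, age_days, payload in objects:
        raw += len(payload)
        digest = hashlib.sha256(payload).hexdigest()
        if digest in seen:
            # Identical content is already stored once; keep only a reference.
            plan[name] = {"tier": "dedup", "alias_of": seen[digest]}
            continue
        seen[digest] = name
        tier = "hot" if age_days <= HOT_THRESHOLD_DAYS else "cold"
        compressed = zlib.compress(payload)  # shrink the footprint before storing
        stored += len(compressed)
        plan[name] = {"tier": tier, "bytes": len(compressed)}
    return plan, raw, stored

plan, raw, stored = plan_storage([
    ("report.csv", 2, b"colA,colB\n" * 1000),   # recently used -> hot
    ("backup.csv", 90, b"colA,colB\n" * 1000),  # duplicate content -> dedup reference
    ("archive.log", 400, b"old entry\n" * 500), # stale -> cold
])
print(plan["report.csv"]["tier"], plan["backup.csv"]["tier"], plan["archive.log"]["tier"])
print(f"{raw} raw bytes -> {stored} stored bytes")
```

On repetitive data like this toy example, deduplication plus compression cuts stored bytes dramatically; actual savings depend entirely on the data, so treat the numbers as illustrative rather than as the 60% figure cited above.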
  • Red Hat CIO Marco Bill on Space Mission, AI Goals, and Tech Outlook
    www.informationweek.com
Shane Snider, Senior Writer, InformationWeek
March 17, 2025
4 Min Read
imageBROKER.com / Alamy Stock (inset profile photo of Red Hat CIO Marco Bill provided by the company)

Just one month into his new role as chief information officer, Red Hat's Marco Bill is already helping the company reach for the stars -- literally. From a unique space collaboration, to helping businesses navigate their AI ambitions, to rolling with quickly emerging technologies, Bill is forging ahead in his new role. The Raleigh, N.C.-based open-source software giant recently announced a new collaboration with Axiom Space to run a data center on the International Space Station. The mission will launch this spring, and Red Hat's Device Edge will power Data Center Unit-1, enabling hybrid cloud applications and cloud-native workloads -- in outer space. Axiom says the effort will allow data center customers to have access to satellite data closer to the source, making transmission quicker and more efficient. Bill says the collaboration was an opportunity for Red Hat to innovate in a new space. "It was a mutual interest," Bill says of the space project. "We don't really have a space mission at Red Hat, but it's obviously a use case that fits very well with us and what we do. It's very intriguing. For us at Red Hat, it's good to be exposed to these new environments. We always learn, and we can improve our products." Axiom says its Orbital Space Center (OBC) will have tangible benefits, including reducing delays by utilizing cloud storage and edge processing infrastructure, allowing for faster and more secure connections in orbit.
Reducing latency in space will allow quicker access to orbital data sources for terrestrial users, the company says. (Editor's Note: Be sure to check out this week's DOS Won't Hunt podcast, which features a panel discussion about data centers in exotic locations, including space.)

Earthly AI Ambitions

Back on Earth, Red Hat is facing more terrestrial issues, like the sudden AI arms race sparked by booming enterprise interest in generative AI (GenAI). Like any company, Red Hat is balancing increasing AI infrastructure costs. "The development of AI is definitely our big mission," Bill says. "We want to be a leader there, and that's where the budget goes from a company perspective. I have to provide infrastructure there -- the data is important as well, so I've got to follow that. I have to provide an environment with the right GPUs, right?" CIOs struggling to balance budgets with priorities can learn from Red Hat's process, Bill says. "I do spend quite a bit of money on the whole transformation of data, because that's where we were lagging. So, we cleaned this up over the last two years. And then there's not much budget left, right? So, you really have to work with the business and identify the priorities." CIOs need to place a high priority on AI, Bill says. "The biggest advice I would give to other CIOs is not to ignore AI or to find excuses why AI doesn't work in their environment. Don't ignore this. [AI] is bigger than the internet when it came around, and companies who ignored the internet aren't around anymore. Don't find excuses; really double down and find ways to experiment. Finding that right use case is important, but this is not hype."

Securing Open Source

Many IT leaders may struggle with the option of open-source solutions as they grapple with increasing cybersecurity threats. They may see open-source software as a risky proposition, despite benefits in cost and innovation.
Bill says CIOs can take advantage of the open-source value proposition and maintain a strong security stance. "We have a whole cyber team engaged globally 24/7, and they're engaged in the communities," he says. "When you have a good team of people, you can mix open source. In our culture, if you have a lot of open-source engineers, they want to have some freedom. I cannot give them a Windows laptop and lock it down -- you've got to give them environments they can actually work with in the open-source community. But you still need to control it. That's one of the biggest challenges."

Red Hat and the Future of Tech

For Bill, the next several years of tech will bring more diversity in cloud infrastructure and placement. "You will have some applications running on the ground, you will have some in the public cloud, and you'll have data centers in space. You'll have to be on different footprints, and that can be for geopolitical reasons or because of cost. So being on a hybrid-cloud infrastructure is really important." And that infrastructure will usher in a new era of AI, where companies can begin reaping benefits and seeing a return on investment. "There is so much we can do with AI," Bill says. "With Red Hat, our infrastructure is important. Linux is still important to us. That's our foundation with open source and having the Kubernetes platform. How do those work together? How do they work on a hybrid cloud and enable AI? There will be a lot of evolution with the large language models. That's the future that we see."

About the Author

Shane Snider, Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN.
He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
  • Strange Data Centers in a Strange Land: Data Hubs in Exotic Places
    www.informationweek.com
Is putting a data center under the ocean or in orbit just a novelty or a future-forward idea?

Joao-Pierre S. Ruth, Shane Snider
March 17, 2025

For a CIO, CTO, or CISO, does it matter if the data centers their organizations rely on are located in exotic, remote locations? Data centers have been installed undersea and now in space -- do these exotic locations seem more like a novelty than a benefit to enterprise operations? One benefit is access to natural sources of cooling, either from the freezing temperatures of space or the cold waters of the sea. Dmitry Zakharchenko, chief software officer for Blaize, and Alvin Nguyen, senior analyst with Forrester, discussed these topics, with Shane Snider, senior writer with InformationWeek, joining in. Does it raise reliability concerns if the data centers organizations need are in hard-to-reach locations that technicians cannot access immediately when an issue requires hands-on service? What kind of safeguards or guarantees would CIOs, CTOs, and CISOs want regarding the reliability and security of such remote data centers? Would failover backups in traditional locations be essential? What are the other potential benefits of exploring remote, nontraditional sites for data centers? Listen to the full episode here.

About the Authors

Joao-Pierre S. Ruth, Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud and edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

Shane Snider, Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
  • Navigating Tech's Next Frontier: AI, Efficiency, Regulatory
    www.informationweek.com
As we step into 2025, the tech sector stands at an important juncture, balancing immense opportunity with mounting complexity. Tech disruptions like generative AI and IoT are driving innovation, enhancing productivity, and transforming industries. Meanwhile, the semiconductor industry remains the backbone of these advancements, powering breakthroughs that can redefine how businesses operate and deliver value. However, with progress come challenges, and chief information officers and senior tech leaders must navigate evolving business models, competitive pressures, and shifting regulatory landscapes to succeed. Thriving in this dynamic era requires more than adaptation: It demands reinvention. Companies that can act decisively on investments in AI and adopt transformative technologies, such as AI agents, will likely emerge as leaders. The ability to rethink operations, modernize data to better leverage AI, and streamline processes while staying attuned to geopolitical risks and cyber threats can set the stage for success.

Reinvent Your Business Model With AI

AI-driven innovation is redefining business models across the tech sector in 2025, offering a powerful opportunity for reinvention. GenAI, alongside the Internet of Things and semiconductor advancements, is enabling companies to unlock new value streams, streamline operations, and gain a competitive edge. From creating personalized fan experiences in smart venues to building virtual worlds for gamers and manufacturers, these technologies are reshaping industries and setting the pace for innovation. Challenges in the AI era require balancing innovation with trust and transparency. While accessing and leveraging data is now an important value driver, 46% of TMT (technology, media, and telecom) companies identify data monetization as a major hurdle. Mergers and acquisitions (M&A) are increasingly viewed as a way to help bolster capabilities, accelerate reinvention, and address these challenges.
AI-driven investments and early-year megadeals fueled a surge in tech deal activity in 2024, but the evolving regulatory environment and geopolitical uncertainty highlight the need for deliberate, innovative approaches to partnerships and business strategies that include forging alliances with key ecosystem players. These efforts can have big payoffs: PwC analysis reveals that TMT ecosystem-driven companies make higher profits -- 50% to 60% margins -- compared to 30% to 35% for those selling standalone products. Tech companies that are building AI infrastructure and capabilities are seeing substantial valuation increases.

Cut Costs and Boost Output
In 2025, GenAI is poised to play a pivotal role in driving operational efficiency and cost reduction across the tech sector. With 45% of tech and telecom leaders expecting GenAI to achieve more savings in the coming months, many companies are now turning their attention to the power of AI agents to take on tasks, improve workflows, and enhance productivity. This trend could reshape global delivery models by reducing deployment times and resource needs, enabling businesses to streamline operations. The foundation of effective AI implementation lies in modernized data systems that are fed the right data. With 80% of TMT executives having already modernized or planning to modernize their data within the next 12 months, companies are making sure that GenAI models can process high-quality, well-organized data to help drive better decision-making and business outcomes. By combining cloud-based systems, advanced analytics, and AI-driven insights, businesses can enhance flexibility, improve resource management, and scale more efficiently.

Navigate Regulatory and Geopolitical Challenges
AI is at the center of evolving regulatory and geopolitical challenges, creating both risks and opportunities for tech companies.
As AI continues to grow in prominence, governments worldwide are intensifying their focus on its development and use. AI could benefit from deregulation, with faster approvals for large projects and streamlined rules for innovation and deployment. With states now handling AI and privacy laws, compliance will grow more complex, while diverging US-EU regulations may force companies into regional strategies, limiting global competition. While these changes can bring compliance hurdles, they also present a chance for tech companies to build resilience, gain consumer trust, and redefine their market positions by adopting responsible AI practices. Geopolitical tensions are also top of mind for many tech leaders, adding complexity to AI investment and innovation. The US Department of the Treasury has introduced restrictions on investments in China's AI sector, highlighting growing concerns over national security and technological dominance. These restrictions, alongside the "rip and replace" program targeting Chinese telecommunications infrastructure, underscore regulatory pressures on supply chain security. For tech companies navigating this uncertain environment, adapting strategies to meet new regulatory requirements and mitigate risks can be an integral component of maintaining a competitive edge.

Looking Ahead
From reinventing business models and driving operational efficiencies to navigating complex regulatory and geopolitical landscapes, the opportunities for tech in 2025 are immense, but the challenges can be just as significant. The companies that thrive will be those that embrace innovation, prioritize trust through responsible AI practices, and adapt swiftly to regulatory and geopolitical shifts.
  • Breaking Through the AI Bottlenecks
    As chief information officers race to adopt and deploy artificial intelligence, they eventually encounter an uncomfortable truth: Their IT infrastructure isn't ready for AI. From widespread GPU shortages and latency-prone networks to rapidly spiking energy demands, they encounter bottlenecks that undermine performance and boost costs. An inefficient AI framework can greatly diminish the value of AI, says Sid Nag, vice president of research at Gartner. Adds Teresa Tung, global data capability lead at Accenture: The scarcity of high-end GPUs is an issue, but there are other factors -- including power, cooling, and data center design and capacity -- that impact results. The takeaway? Demanding and resource-intensive AI workloads require IT leaders to rethink how they design networks, allocate resources, and manage power consumption. Those who ignore these challenges risk falling behind in the AI arms race -- and undercutting business performance.

Breaking Points
The most glaring and widely reported problem is a scarcity of the high-end GPUs required for inferencing and operating AI models. For example, highly coveted Nvidia Blackwell GPUs, officially known as GB200 NVL-72, have been nearly impossible to find for months, as major companies like Amazon, Google, Meta, and Microsoft scoop them up. Yet, even if a business can obtain these units, a fully configured server can cost around $3 million. A less expensive version, the NVL36 server, runs about $1.8 million. While this can affect an enterprise directly, the shortage of GPUs also impacts major cloud providers like AWS, Google, and Microsoft. They increasingly ration resources and capacity, Nag says. For businesses, the repercussions are palpable. Lacking the adequate hardware infrastructure that's required to build AI models, training a model can become slow and unfeasible.
It can also lead to data bottlenecks that undermine performance, he notes. GPU shortages are just a piece of the overall puzzle, however. As organizations look to plug in AI tools for specialized purposes such as computer vision, robotics, or chatbots, they discover that there's a need for fast and efficient infrastructure optimized for AI, Tung explains. Network latency can prove particularly challenging. Even small delays in processing AI queries can trip up an initiative. GPU clusters require high-speed interconnects to communicate at maximum speed. Many networks continue to rely on legacy copper, which significantly slows data transfers, according to Terry Thorn, vice president of commercial operations for Ayar Labs, a vendor that specializes in AI-optimized infrastructure. Still another potential problem is data center space and energy consumption. AI workloads -- particularly those running on high-density GPU clusters -- draw vast amounts of power. As deployment scales, CIOs may scramble to add servers, hardware, and advanced technologies like liquid cooling. Inefficient hardware, network infrastructure, and AI models exacerbate the problem, Nag says. Making matters worse, upgrading power and cooling infrastructure is complicated and time-consuming. Nag points out that these upgrades may require a year or longer to complete, thus creating additional short-term bottlenecks.

Scaling Smart
Optimizing AI is inherently complicated because the technology impacts areas as diverse as data management, computational resources, and user interfaces. Consequently, CIOs must decide how to approach various AI projects based on the use case, AI model, and organizational requirements. This includes balancing on-premises GPU clusters with different mixes of chips and cloud-based AI services. Organizations must consider how, when, and where cloud services and specialty AI providers make sense, Tung says.
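The build-versus-buy question Tung raises ultimately comes down to arithmetic. As a rough illustration -- the $3 million server figure is the one cited above, while the depreciation window, utilization rate, and cloud hourly rate are illustrative assumptions, not quoted numbers -- a back-of-envelope comparison might look like this:

```python
# Back-of-envelope: amortized on-prem GPU server vs. renting cloud capacity.
# The $3M server price comes from the article; lifetime, utilization, and the
# cloud hourly rate below are illustrative assumptions, not quoted figures.
SERVER_PRICE = 3_000_000            # fully configured NVL72-class server
LIFETIME_HOURS = 3 * 365 * 24       # assume a 3-year depreciation window
UTILIZATION = 0.60                  # assume the cluster is busy 60% of the time
CLOUD_RATE = 400.0                  # assumed $/hour for comparable rented capacity

# Cost of each hour the hardware actually does useful work
onprem_per_useful_hour = SERVER_PRICE / (LIFETIME_HOURS * UTILIZATION)

# How many cloud hours the same capital outlay would buy
breakeven_hours = SERVER_PRICE / CLOUD_RATE

print(f"on-prem cost per utilized hour: ${onprem_per_useful_hour:,.2f}")
print(f"cloud hours purchasable for the same outlay: {breakeven_hours:,.0f}")
```

The point is not the specific numbers but the shape of the decision: at low utilization, the amortized on-prem cost per useful hour climbs quickly, which is why rationed cloud capacity can still win for bursty workloads.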
If building a GPU cluster internally is either undesirable or out of reach, then it's critical to find a suitable service provider. You have to understand the vendor's relationships with GPU providers, what types of alternative chips they offer, and what exactly you are gaining access to, she says. In some cases, AWS, Google, or Microsoft may offer a solution through specific products and services. However, an array of niche and specialty AI service companies also exists, and some consulting companies -- Accenture and Deloitte are two of them -- have direct partnerships with Nvidia and other GPU vendors. In some cases, Tung says, you can get data flowing through these custom models and frameworks. You can lean into these relationships to get the GPUs you need. For those running GPU clusters, maximizing network performance is paramount. As workloads scale, systems struggle with data transfer limitations. One of the critical choke points is copper. Ayar Labs, for example, replaces these interconnects with high-speed optical interconnects that reduce latency, power consumption, and heat generation. The result is not only better GPU utilization but also more efficient model processing, particularly for large-scale deployments. In fact, Ayar Labs claims 10x lower latency and up to 10x more bandwidth over traditional interconnects, along with a 4x to 8x reduction in power. No longer are chips waiting for data rather than computing, Thorn states. The problem can become particularly severe as organizations adopt complex large language models. Increasing the size of the pipe boosts utilization and reduces CapEx, he adds. Still another piece of the puzzle is model efficiency and distillation. By specifically adapting a model for a laptop or smartphone, for example, it's often possible to use different combinations of GPUs and CPUs. The result can be a model that runs faster, better, and cheaper, Tung says.

Power Plays
Addressing AI's power requirements is also essential.
An overarching energy strategy can help avoid short-term performance bottlenecks as well as long-term chokepoints. Energy consumption is going to be a problem, if it is not already a problem for many companies, Nag says. Without adequate supply, power can become a barrier to success. It can also undermine sustainability efforts and invite greenwashing accusations. He suggests that CIOs view AI in a broad and holistic way, including identifying ways to reduce reliance on GPUs. Establishing clear policies and a governance framework around the use of AI can minimize the risk of non-technical business users misusing tools or inadvertently creating bottlenecks. The risk is greater when these users turn to hyperscalers like AWS, Google, and Microsoft. Without some guidance and direction, it can be like walking into a candy store and not knowing what to pick, Nag points out. In the end, an enterprise AI framework must bridge both strategy and IT infrastructure. The objective, Tung explains, is ensuring your company controls its destiny in an AI-driven world.
  • Why AI Model Management Is So Important
    Lisa Morgan, Freelance Writer | March 14, 2025 | 8 Min Read | Dragos Condrea via Alamy Stock
    Many organizations have learned that AI models need to be monitored, fine-tuned, and eventually retired. This is as true of large language models (LLMs) as it is of other AI models, but the pace of generative AI innovation has been so fast that some organizations are not yet managing their models as they should. Senthil Padmanabhan, VP of platform and infrastructure at global commerce company eBay, says enterprises are wise to establish a centralized gateway and a unified portal for all model management tasks, as his company has done. EBay essentially created an internal version of Hugging Face, implemented as a centralized system. Our AI platform serves as a common gateway for all AI-related API calls, encompassing inference, fine-tuning, and post-training tasks. It supports a blend of closed models (acting as a proxy), open models (hosted in-house), and foundational models built entirely from the ground up, says Padmanabhan in an email interview. Enterprises should keep in mind four essential functionalities when approaching model management: dataset preparation, model training, model deployment and inferencing, and a continuous evaluation pipeline. By consolidating these functionalities, we've achieved consistency and efficiency in our model management processes. Previously, the lack of a unified system led to fragmented efforts and operational chaos. Rather than building the platform first during its initial exploration of GenAI, the company focused on identifying impactful use cases. As the technology matured and generative AI applications expanded across various domains, the need for a centralized system became apparent, says Padmanabhan.
Today, the AI platform is instrumental in managing the complexity of AI model development and deployment at scale.
Senthil Padmanabhan, eBay
Phoenix Children's Hospital has been managing machine learning models for some time because predictive models can drift. We've had a model that predicts malnutrition in patients [and] a no-show model predicting when people are not going to show up [for appointments], says David Higginson, executive vice president and chief innovation officer at Phoenix Children's Hospital. Especially the no-show model changes over time, so you have to be very, very conscious about, is this model still any good? Is it still predicting correctly? We've had to build a little bit of a governance process around that over the years, before large language models, but I will tell you, with large language models, it is a learning [experience], because different models are used for different use cases. Meanwhile, LLM providers, including OpenAI and Google, are rapidly adding new models and turning off old ones, which means that something Phoenix Children's Hospital built a year ago might suddenly disappear from Azure. It's not only that the technical part of it is just keeping up with what's being added and what's being removed. There's also the bigger question of the large language models. If you're using it for ambient listening and you've been through a vetting process, and everybody's been using a certain model, and then tomorrow there's a better model, people will want to use it, says Higginson. We're finding there are a lot of questions, [such as], is this actually a better model for my use case? What's the expense of this model? Have we tested it?

How to Approach Model Management
EBay's Padmanabhan says any approach to model management will intrinsically establish a lifecycle, as with any other complex system.
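The "is this model still any good?" question Higginson raises can be made mechanical. One common approach -- a generic sketch, not the hospital's actual process -- is to compare the distribution of a model's recent scores against the distribution it produced at validation time, using a metric such as the population stability index:

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of scores in [0, 1].
    Common rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 retrain."""
    def smoothed_hist(sample):
        counts = [0] * bins
        for s in sample:
            counts[min(int(s * bins), bins - 1)] += 1
        # add-one smoothing so an empty bin doesn't blow up the log term
        return [(c + 1) / (len(sample) + bins) for c in counts]
    b, c = smoothed_hist(baseline), smoothed_hist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# e.g., a no-show model whose score distribution has shifted upward
validation_scores = [i / 1000 for i in range(1000)]
recent_scores = [min(s + 0.4, 0.999) for s in validation_scores]
print(f"PSI: {psi(validation_scores, recent_scores):.2f}")  # well above 0.25
```

A scheduled job that computes a statistic like this per model, and alerts when it crosses a threshold, is the minimal version of the governance process described above.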
EBay already follows a structured lifecycle, encompassing stages from dataset preparation to evaluation. To complete the cycle, we also include model depreciation, where newer models replace existing ones and older models are systematically phased out, says Padmanabhan. This process follows semantic versioning to maintain clarity and consistency during transitions. Without such a lifecycle approach, managing models effectively becomes increasingly challenging as systems grow in complexity. EBay's approach is iterative, shaped by constant feedback from developers, product use cases, and the rapidly evolving AI landscape. This iterative process allowed eBay to make steady progress. With each iteration of the AI platform, we locked in a step of value, which gave us momentum for the next step. By repeating this process relentlessly, we've been able to adapt to surprises -- whether they were new constraints or emerging opportunities -- while continuing to make progress, says eBay's Padmanabhan. While this approach may not be the most efficient or optimized path to building an AI platform, it has proven highly effective for us. We accepted that some effort might be wasted, but we'll do it in a safe way that continuously unlocks more value. To start, he recommends setting up a common gateway for all model API calls. This gateway helps you keep track of all the different use cases for AI models and gives you insights into traffic patterns, which are super useful for operations and SRE teams to ensure everything runs smoothly, says Padmanabhan. It's also a big win for your InfoSec and compliance teams. With a centralized gateway, you can apply policies in one place and easily block any bad patterns, making security and compliance much simpler. After that, one can use the traffic data from the gateway to build a unified portal.
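The gateway-first advice can be illustrated with a toy sketch. This is not eBay's platform -- the class, model names, and policy hooks here are invented for illustration -- but it shows the core idea: one choke point that routes every model call, records traffic for operations teams, and enforces block policies in a single place.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ModelGateway:
    """Single entry point for model API calls: routes by model name,
    records traffic for ops/SRE, and enforces block policies for InfoSec."""
    backends: dict                          # model name -> callable(prompt) -> str
    blocked: set = field(default_factory=set)
    traffic_log: list = field(default_factory=list)

    def call(self, model: str, prompt: str) -> str:
        if model in self.blocked:
            raise PermissionError(f"model {model!r} is blocked by policy")
        if model not in self.backends:
            raise KeyError(f"unknown model {model!r}")
        started = time.monotonic()
        result = self.backends[model](prompt)
        # every call leaves a traffic record -- the raw material for a portal
        self.traffic_log.append({"model": model,
                                 "latency_s": time.monotonic() - started,
                                 "prompt_chars": len(prompt)})
        return result

# usage: one in-house model registered; a deprecated version blocked centrally
gw = ModelGateway(backends={"summarizer-v2": lambda p: p[:20]},
                  blocked={"summarizer-v1"})
print(gw.call("summarizer-v2", "quarterly traffic report"))
```

Because all calls flow through one object, deprecating a model is a one-line policy change rather than a hunt through every application that embeds an API key.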
This portal will let you manage a model's entire lifecycle, from deployment to phasing it out, making the whole process more organized and efficient as you scale. Phoenix Children's Hospital's Higginson says it's wise to keep an eye on the industry because it's changing so fast.
David Higginson, Phoenix Children's Hospital
When a new model comes out, we try to think about it in terms of solving a problem, but we've stopped chasing the [latest] model, as GPT-4 does most of what we need. I think what we've learned over time is don't chase the new model, because we're not quite sure what it is, or you're limited on how much you can use it in a day, says Higginson. Now, we're focusing more on models that have been deprecated or removed, because we get no notice of that. It's also important for stakeholders to have a baseline knowledge of AI so there are fewer obstacles to progress. Phoenix Children's Hospital began its governance processes with AI 101 training for stakeholders, including information about how the models work. This training was done during the group's first three meetings. Otherwise, you can leave people behind, says Higginson. People have important things to say, [but] they just don't know how to say them in an AI world. So, I think that's the best way to get started. You also tend to find out that some people have an aptitude or an interest, and you can keep them on the team, and people who don't want to be part of it can exit. Jacob Anderson, owner of Beyond Ordinary Software Solutions, says a model is no different from a software product that's released to the masses. If you have lifecycle management on your product rollouts, then you should also implement the same in your model stewardship, says Anderson. You will need to have a defined retirement plan for models and have a policy in place to destroy the models. These models are just amalgamations of the data that went into training them.
You need to treat models with the same care as you would the training data.

Sage Advice
EBay's Padmanabhan recommends that organizations still in the early stages of exploring GenAI refrain from building a complex platform to start, which is exactly what eBay did. At eBay, we initially focused on identifying impactful use cases rather than investing in a platform. Once the technology matured and applications expanded across different domains, we saw the need for a centralized system, says Padmanabhan. Today, our AI platform helps us manage the complexity of AI development and deployment at scale -- but we built it when the timing was right. He also thinks it wise not to become overwhelmed by the rapid changes in this field. It's easy to get caught up in trying to create a system that supports every type of model out there. Instead, take a step back and focus on what will truly make a difference for your organization. Tailor your model management system to meet your specific needs, not just what the industry is buzzing about, says Padmanabhan. Lastly, from our experience we see that the quality of the dataset is what really matters. Quality trumps quantity. It is better to have 10,000 highly curated, high-quality rows than 100,000 average rows. Phoenix Children's Hospital's Higginson recommends experimenting with guardrails so people can learn. Have a warning that says, don't put PII in there, and use the output carefully, but absolutely use it, says Higginson. Don't believe everything it says, but other than that, don't be scared. The use cases coming from our staff, employees, and physicians are way more creative than I would have ever thought of, or any committee would have thought of. Beyond Ordinary's Anderson recommends understanding the legal obligations of the jurisdictions in which the models are operating, because they vary. Take care to understand those differences and how your obligations bleed into those regulatory theatres.
Then you need to have a well-defined operational plan for model stewardship, says Anderson. This is very much akin to your data stewardship plan, so if you don't have one of those, then it's time to slow the bus and fix that flat tire. He also recommends against putting hobbyist AI practitioners in charge of models. Find qualified professionals to help you with the policy frameworks and setting up a stewardship plan, says Anderson. Cybersecurity credentials play into the stewardship of AI models because the models are just data. Your cyber people don't need to know how to train or evaluate an AI model. They just need to know what data went into training and how the model is going to be used in a real-world scenario.

About the Author
Lisa Morgan, Freelance Writer. Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
  • How AI is Transforming the Music Industry
    John Edwards, Technology Journalist & Author | March 14, 2025 | 5 Min Read | Wavebreakmedia Ltd IFE-250126 via Alamy Stock Photo
    The music industry is always evolving. Artists, trends, labels, and media platforms emerge and depart with startling regularity. Yet performers, recording firms, concert promoters, and other industry players may now be facing their biggest transformation challenge yet -- artificial intelligence. Even at this relatively early stage, there's no area of the business that's unaffected, says Daniel Abowd, president of music publishing company The Royalty Network. "On the creation side, AI-powered tools are being used to enhance and synthesize performance, editing, production, post-production, and post-release content," he explains in an email interview. "On the consumption side, AI is powering listener and playlisting algorithms and other tools that deliver listeners to content." There's already been an incredible number of AI-supported use cases, says Andrew Sanchez, co-founder of Udio, which offers a generative AI model that produces music based on simple text prompts. He observes, via email, that The Beatles' "Now and Then," which was restored with the help of AI, was recently nominated for two Grammys, in the Record of the Year and Best Rock Performance categories. There's always been a distance between music creators and listeners, Sanchez states. He notes that AI is helping to reduce that gap by allowing a more direct dialogue between artists and their fans. "When artists release music that fans can then remix, extend, distort, or otherwise interact with through AI, it opens up an entirely new revenue stream for artists and means of engagement." GenAI, in particular, opens a new way to explore musical creativity, inviting people who might otherwise never engage with music, says Mike Clem, CEO of musical equipment retailer Sweetwater.
"It takes patience and grit to learn an instrument, and AI lowers the bar on the talent required to sound good," he explains in an online interview. As a result, there's now a new wave of music makers experimenting with AI, who then learn to play a "real" instrument. AI-generated music tools are also helping artists accelerate their creative processes, allowing them to generate hits that match the pace of pop culture innovation, Sanchez says. He notes that comedian Willonius Hatcher, known as King Willonius, used Udio to create an AI-assisted song called "BBL Drizzy." "The song made waves in pop culture when Metro Boomin sampled it," Sanchez says, "marking the first time an AI-generated song was sampled by a major producer."

A Generational Transformation
Unlike their predecessors, many modern musicians have no desire to appear live on stage or even record an album, Clem says. He believes there's now a transition from 'musicians' to 'creators,' fueled in part by AI. "It's about creating content that connects with their audiences to build and grow their following," he explains. Music has evolved throughout history, thanks to artists who aren't afraid to push the status quo, Sanchez says. "The transformation in AI is really being led by artists who understand how AI-generated music tools can enhance their creative processes." Some industry observers view AI as a potential replacement for human artists. But Sanchez disagrees. "In reality, we believe that human creativity will never be cut out of the process," he says. "The songs that rise to the top have the confluence of the creative spark and the understanding of what people actually want to listen to."

Both Sides Now
AI-powered tools can enhance, empower, and inspire human creativity, Abowd says. They can simplify many creative tasks, such as editing out breaths from a vocal track.
With consent, AI technology can also enhance or simulate the vocal sound of a singer who's no longer able to perform as they did years ago, as well as inspire songwriters with a foundational sound concept they can build upon. On the downside, there's the possible existential threat posed by AI models that use unlicensed human-authored music to create new works that will compete in the same marketplace, potentially at a lower price point, Abowd says. "Reasonable people can disagree on the magnitude of that threat, but it's certainly a conversation on the tip of many people's tongues."

A Golden Opportunity
Sanchez believes that blending AI with art presents a golden opportunity to create a powerful, transformative creativity technology that will open new revenue options for artists. Fans will benefit, too. "It's clear from recent music tour successes ... that consumers are interested in immersive experiences that put them at the helm of the storyline." There's something very innately human and beautiful about expressing yourself musically, Clem observes. "AI may displace some commercial music production -- for example, in commercials and video game soundtracks -- but we're in no danger of computers replacing our desire to express ourselves creatively, or our desire to experience live music and all its attached emotions and nostalgia," he notes. "There's something about music that resonates in our souls in ways that we cannot explain."

About the Author
John Edwards, Technology Journalist & Author. John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics.
His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
  • How Big of a Threat Is AI Voice Cloning to the Enterprise?
    In March, several YouTube content creators seemed to receive a private video from the platform's CEO, Neal Mohan. It turns out that it was not Mohan in the video, but rather an AI-generated version of him created by scammers out to steal credentials and install malware. This may stir memories of other recent, high-profile AI-powered scams. Last year, robocalls featuring the voice of President Joe Biden urged people not to vote in the primaries. The calls made use of AI to mimic Biden's voice, AP News reports. Examples of these kinds of deepfakes -- video and audio -- are popping up in the news frequently. The nonprofit Consumer Reports reviewed six voice cloning apps and reports that four of those apps have no significant guardrails preventing users from cloning someone's voice without their consent. Executives are often the public faces and voices of their companies; audio and video of CEOs, CIOs, and other C-suite members are readily available online. How concerned should CIOs and other enterprise tech leaders be about voice cloning and other deepfakes?

A Lack of Guardrails
ElevenLabs, Lovo, PlayHT, and Speechify -- four of the apps Consumer Reports evaluated -- ask users to check a box confirming that they have the legal right to go ahead with their voice cloning capabilities. Descript and Resemble AI take consent a step further by asking users to read and record a consent statement, according to Consumer Reports. Barriers to prevent misuse of these apps are quite low.
Even the apps that require users to read a statement could potentially be manipulated with audio created by a non-consensual voice clone on another platform, the Consumer Reports review notes. Not only can users employ many readily available apps to clone someone's voice without their consent, they don't need technical skills to do so. No CS background, no master's degree, no need to program; literally go on to your app store on your phone or to Google and type in voice clone or deepfake face generator, and there are thousands of tools for fraudsters to cause harm, says Ben Colman, co-founder and CEO of deepfake detection company Reality Defender. Colman also notes that compute costs have dramatically dropped within the past few months. A year ago you needed cloud compute. Now, you can do it on a commodity laptop or phone, he adds. The issue of AI regulation is still very much up in the air. Could there be more guardrails for these kinds of apps in the future? Colman is confident that there will be. He gave testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on the dangers of election deepfakes. The challenges and risks created by generative AI are a truly bipartisan concern, Colman tells InformationWeek. We're very optimistic about near-term guardrails.

The Risks of Voice Cloning
While more guardrails may be forthcoming, whether via regulation or another impetus, enterprise leaders have to contend with the risks of voice cloning and other deepfakes today. The barrier to entry is so low right now that AI voices could essentially bypass outdated authentication systems, and that's going to leave you with multiple risks, whether data breaches, reputational concerns, or financial fraud, says Justice Erolin, CTO of BairesDev, a software outsourcing company.
And because there are no industry safeguards, it leaves most companies at risk.

Safeguarding Against Fraud
The obvious frontline defense against voice cloning is to limit sharing personal data, like your voice print. The harder it is to find audio featuring your voice, the harder it is to clone it. They should not share either personal data or voice or face, but it's challenging for CEOs. For example, I'm on YouTube. I'm on the news. It's just a cost of doing business, says Colman. CIOs must operate in the realities of the digital world, knowing that enterprise leaders are going to have publicly available audio that scammers can attempt to voice clone and use for nefarious ends. AI voice cloning is not a futuristic risk. It's a risk that's here today. I would treat it like any other cyber threat: with robust authentication, says Erolin. Given the risks of voice cloning, relying on audio alone for authentication is risky. Adopting multifactor authentication can mitigate that risk. Enabling passwords, PINs, or biometrics along with audio can help ensure you are speaking to the person you think you are, not someone who has cloned their voice or likeness.

The Outlook for Detection
Detection is an essential tool in the fight against voice cloning. Colman likens the development of deepfake detection tools to the development of antivirus scanning, which is done locally, in real time, on devices. I'd say deepfake detection [has] the exact same growth story, Colman explains. Last year, it was pick files you want to scan; this year, it's pick a certain location, scan everything. And we're expecting that within the next year, we will move completely on-device. Detection tools could be integrated onto devices, like phones and computers, and into video conferencing platforms to detect when audio and video have been generated or manipulated by AI.
Reality Defender is working on pilots of its tool with banks, for example, initially integrating with call centers and interactive voice response (IVR) technology. "I think we're going to look back on this period in a few years, just like antivirus, and say, 'Can you imagine a world where we didn't check for generative AI?'" says Colman.

Like any other cybersecurity concern, there will be a tug of war between escalating deepfake capabilities in the hands of threat actors and detection capabilities in the hands of defenders. CIOs and other security leaders will be challenged to implement safeguards and evaluate those capabilities against those of fraudsters.
  • Compliance in the Age of AI
    www.informationweek.com
Raghav K.A., Global Head of Engineering, IoT and Blockchain, Infosys | March 13, 2025 | 4 Min Read | Andriy Popov via Alamy Stock

According to a 2024 survey, 97% of US business leaders whose companies had invested in AI confirmed positive returns. A third of those with existing investments are planning to top that off with US $10 million or more this year.

While AI adoption is on a roll, public trust in the technology is declining rapidly amid rising threats such as phishing, deepfakes, and ransomware. A global online survey of trust and credibility found that people's trust in AI organizations fell eight percentage points between 2019 and 2024. In the United States, there was a precipitous fall -- from 50% to 35% -- signaling US consumers' concerns around AI.

Regulators have responded to the growing perils of digitization by evolving compliance mandates to govern the use of data and digital technologies. For example, from 2023 to 2025, different administrations added the G7 AI Principles, the EU AI Act, new OECD AI Guidelines, and an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in the US to the list of AI regulations. The US also has a separate law, the US IoT Cybersecurity Improvement Act of 2020, to address the security of specific types of IoT devices.

As products and services turn increasingly digital, industry standards are changing to align with the transformation. Think HIPAA, PCI DSS, ISO 27001, and the US National Institute of Standards and Technology (NIST) framework, which extended its scope of guidance from critical infrastructure to organizations of all sizes in 2024.

These entities are working toward essential goals, such as ensuring safety, protecting fundamental rights, and promoting ethical development and use of digital technologies. However, amid a growing sprawl of regulations across sectors, it is becoming challenging for enterprises to remain compliant. 
Large organizations must continually perform compliance checks to meet the requirements of mandates, at significant cost. The task becomes harder when checks involve departments operating in silos. At the same time, businesses must adopt technologies to innovate and stay relevant. By aligning technology and regulatory objectives, they can ensure that innovation and compliance do not work at cross-purposes. In addition, they should take a systematic approach to compliance by doing the following:

Reassessing existing compliance practices: Regular review of compliance measures, including data governance policies, access and security protocols, and breach response mechanisms, can help organizations identify gaps and vulnerabilities, prioritize areas of maximum risk, and proactively strengthen compliance processes.

Adopting robust information security: As data and data regulations proliferate, a solid information security management framework becomes essential for ensuring data security and privacy in line with regulations such as GDPR, COPPA, HIPAA, and SEC/FINRA rules. Besides recommending policies, controls, and best practices for mitigating various information security risks, a framework facilitates continuous improvement by guiding enterprises to periodically examine and update controls, thereby fostering a security culture.

Laying down data policies and procedures: Procedures and policies enforce compliance with evolving regulations by detailing the rules and responsibilities for collecting, storing, accessing, or disposing of data. Involving stakeholders from different functions in policy formulation builds a compliance mindset among employees.

Implementing comprehensive data protection: Data protection measures, including data governance, mitigate digital transformation risks and improve compliance. While data governance stipulates the guidelines for handling data, data management covers the tools and steps required to implement governance across the enterprise. 
A privacy-by-design approach helps embed data privacy in systems right from the start, rather than bolting it on later, which is less effective.

Performing periodic internal data audits: Regular audits of data policies, practices, and assets help organizations better understand their data and how it's being used, as well as align data management practices with compliance expectations. Advantages include increased customer trust, more efficient data management, improved data quality, and a stronger security posture.

Adopting a compliance-first approach: Enterprises have adopted mobile-first, cloud-first, secure-first, and AI-first approaches for their enterprise architecture and business functions. The same thinking should be extended with a compliance-first approach: frameworks governing enterprise IT architecture should include compliance checklists.

The explosion in generative AI has brought ethical implications to the forefront, stressing the need for transparency, traceability, accountability, fairness, and privacy in AI development. Responsible AI (RAI) combines technology and governance to help organizations pursue their AI ambitions without compromising customer interest or stakeholder trust. RAI emphasizes fairness in AI models to prevent the perpetuation of bias and demands accountability from organizations for AI usage. It addresses concerns around AI's lack of transparency by providing insights into data inputs, algorithmic models, and decision-making criteria. It also improves explainability and reproducibility, allowing organizations to use AI confidently and safeguard data privacy rights. However, organizations should always keep a human in the loop on top of RAI governance to ensure complete compliance and trust.

Read more about: Regulation

About the Author
Raghav K.A.
Global Head of Engineering, IoT and Blockchain, Infosys
Raghav K.A. 
is SVP and global head of engineering, blockchain & sustainability services at Infosys, a global leader in next-generation digital services and consulting. At Infosys, Raghav is responsible for overseeing and growing client engagements in core product development and next-generation engineering technologies, including digital thread, generative design, and AI/generative AI, across all industry verticals. He is an advisor to CTOs and CDOs in defining and implementing product strategy and digital transformation initiatives across the product value stream.
  • AI Hallucinations Can Prove Costly
    www.informationweek.com
Samuel Greengard, Contributing Reporter | March 13, 2025 | 5 Min Read | David Kashakhi via Alamy Stock

Large language models (LLMs) and generative AI are fundamentally changing the way businesses operate -- and how they manage and use information. They're ushering in efficiency gains and qualitative improvements that would have been unimaginable only a few years ago.

But all this progress comes with a caveat. Generative AI models sometimes hallucinate: they fabricate facts, deliver inaccurate assertions, and misrepresent reality. The resulting errors can lead to flawed assessments, poor decision-making, automation errors, and ill will among partners, customers, and employees.

"Large language models are fundamentally pattern recognition and pattern generation engines," points out Van L. Baker, research vice president at Gartner. "They have zero understanding of the content they produce."

Adds Mark Blankenship, director of risk at Willis A&E: "Nobody is going to establish guardrails for you. It's critical that humans verify content from an AI system. A lack of oversight can lead to breakdowns with real-world repercussions."

False Promises

Already, 92% of Fortune 500 companies use ChatGPT. As GenAI tools become embedded across business operations -- from chatbots and research tools to content generation engines -- the risks associated with the technology multiply.

There are several reasons why hallucinations occur, including mathematical errors, outdated knowledge or training data, and models' inability to reason symbolically, explains Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania. 
For instance, a model might treat satirical content as factual or misinterpret a word that can have different contexts.

Regardless of the root cause, AI hallucinations can lead to financial harm, legal problems, regulatory sanctions, and damage to trust and reputation that ripples out to partners and customers. In 2023, a New York City lawyer using ChatGPT filed a lawsuit that contained egregious errors, including fabricated legal citations and cases. The judge later sanctioned the attorney and imposed a $5,000 fine. In 2024, Air Canada lost a lawsuit when it failed to honor the price its chatbot quoted to a customer. The case resulted in minor damages and bad publicity.

At the center of the problem is the fact that LLMs and GenAI models are autoregressive, meaning they arrange words and pixels logically with no inherent understanding of what they are creating. AI hallucinations, most associated with GenAI, differ from traditional software bugs and human errors because they generate false yet plausible information rather than failing in predictable ways, says Jenn Kosar, US AI assurance leader at PwC.

The problem can be especially glaring in widely used public models like ChatGPT, Gemini, and Copilot. "The largest models have been trained on publicly available text from the Internet," Baker says. As a result, some of the information ingested into the model is incorrect or biased. The errors become numeric arrays that represent words in the vector database, and the model pulls words that seem to make sense in the specific context.

Internal LLM models are at risk of hallucinations as well. AI-generated errors in trading models or risk assessments can lead to misinterpretation of market trends, inaccurate predictions, inefficient resource allocation, or failure to account for rare but impactful events, Kosar explains. 
These errors can disrupt inventory forecasting and demand planning by producing unrealistic predictions, misinterpreting trends, or generating false supply constraints, she notes.

Smarter AI

Although there's no simple fix for AI hallucinations, experts say that business and IT leaders can take steps to keep the risks in check. The way to avoid problems is to implement safeguards surrounding things like model validation, real-time monitoring, human oversight, and stress testing for anomalies, Kosar says.

Training models with only relevant and accurate data is crucial. In some cases, it's wise to plug in only domain-specific data and construct a more specialized GenAI system, Kosar says. In some cases, a small language model (SLM) can pay dividends. "For example, AI that's fine-tuned with tax policies and company data will handle a wide range of tax-related questions on your organization more accurately," she explains.

Identifying vulnerable situations is also paramount. This includes areas where AI is more likely to trigger problems or fail outright. Kosar suggests reviewing and analyzing processes and workflows that intersect with AI. For instance, "a customer service chatbot might deliver incorrect answers if someone asks about technical details of a product that was not part of its training data. Recognizing these weak spots helps prevent hallucinations," she says.

Specific guardrails are also essential, Baker says. This includes establishing rules and limitations for AI systems and conducting audits using AI-augmented testing tools. It also centers on fact-checking and failsafe mechanisms such as retrieval-augmented generation (RAG), which combs the Internet or trusted databases for additional information. Including humans in the loop and providing citations that verify the accuracy of a statement or claim can also help.

Finally, users must understand the limits of AI, and an organization must set expectations accordingly. 
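The retrieval-augmented generation pattern mentioned above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: retrieval here is naive keyword overlap over an in-memory store standing in for a real vector database, the `generate()` step is omitted because it would be whatever LLM the organization uses, and all names are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the model's
# answer in retrieved passages so its claims can be checked against a source.

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[tuple[str, str]]:
    """Rank trusted documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: dict[str, str]) -> str:
    """Assemble a prompt that cites sources by id, so answers are verifiable."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below and cite them by id. "
        "If the sources do not contain the answer, say so.\n"
        f"{context}\nQuestion: {query}"
    )
```

In production the retriever would be an embedding search over vetted internal documents, but the contract is the same: the model answers from supplied, citable context rather than its parametric memory, which makes fabricated claims far easier to spot and to fact-check.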
Teaching people how to refine their prompts can help them get better results and avoid some hallucination risks, Kosar explains. In addition, she suggests that organizations include feedback tools so that users can flag mistakes and unusual AI responses. This information can help teams improve an AI model as well as the delivery mechanism, such as a chatbot.

Truth and Consequences

Equally important is tracking the rapidly evolving LLM and GenAI spaces and understanding performance results across different models. At present, nearly two dozen major LLMs exist, including ChatGPT, Gemini, Copilot, LLaMA, Claude, Mistral, Grok, and DeepSeek. Hundreds of smaller niche programs have also flooded the app marketplace. Regardless of the approach an organization takes, "in early stages of adoption, greater human oversight may make sense while teams are upskilling and understanding risks," Kosar says.

Fortunately, organizations are becoming savvier about how and where they use AI, and many are constructing more robust frameworks that reduce the frequency and severity of hallucinations. At the same time, vendor software and open-source projects are maturing. Concludes Blankenship: "AI can create risks and mitigate risks. It's up to organizations to design frameworks that use it safely and effectively."

About the Author
Samuel Greengard
Contributing Reporter
Samuel Greengard writes about business, technology, and cybersecurity for numerous magazines and websites. He is author of the books "The Internet of Things" and "Virtual Reality" (MIT Press).
  • Asia's Top Integrated Security Exhibition Starts Soon
    www.informationweek.com
TechTarget and Informa Tech's Digital Business Combine. Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.

The 24th annual SECON & eGISEC 2025 in Korea will cover the latest in physical and cybersecurity innovations, trends, products, and techniques. The exhibition features over 400 exhibitors from more than 10 countries.

Pam Baker, Contributing Writer | March 13, 2025 | 2 Min Read | Mirko Vitali via Alamy Stock

This year marks the 24th annual show of the SECON & eGISEC 2025 exhibition. As Asia's only integrated security exhibition covering both the physical and cybersecurity fields, expectations are running high for both exhibitors and attendees.

"As cybersecurity threats become more sophisticated, we are eager to discover new technological approaches in AI-driven security, cloud security, and security automation," said Junghyun Kim, marketing director at AhnLab, an endpoint and network security provider and an exhibitor at the event. "Additionally, networking with security professionals will provide a valuable opportunity to share experiences and discuss potential collaborations," Junghyun added.

The event spreads over 28,000+ square meters of space, has over 400 exhibitors signed up, and is expecting more than 30,000 visitors from around the world. While security is of significant concern to companies and governments everywhere, global interest is reaching a fever pitch given current political issues and the emergence of China's AI model DeepSeek, according to event organizers.

SECON & eGISEC 2025 is being held in Korea given its key role 
as a testbed for validating and demonstrating new technologies in the security market. The exhibition will be held from March 19 to 21 in Halls 3-5 at Kintex, Korea.

"We are excited about the opportunity to engage with not only industry experts but also international buyers. Understanding global market demands and security trends while establishing networks with overseas companies and buyers will be invaluable in expanding the international reach of iNeb's security solutions," said Haeun Ji of iNeb Inc., a provider of encryption and data security and an exhibitor.

Two of the most intriguing pre-show glimpses into the future are the expansion of edge devices with on-device AI and the rise of converged security. Comprehensive, integrated protection for the physical and cyber elements of the same entity is crucial for complex systems like smart cities, smart cars, and other interconnected environments. But converged security also applies to virtualized environments with complex systems and both real-world and virtual-world components.

"In particular, we are eager to observe how cybersecurity is evolving in response to advancements in AI, big data, and the metaverse, and how these technologies are shaping the future of the industry," said Haeun.

Other areas of prime concern to be covered at the exhibition include cloud and IoT security, smart city security, automotive security, and maritime security, among others.

Beyond the exhibits, SECON & eGISEC 2025 will feature an extensive conference and seminar program. Produced in collaboration with leading institutions and industry experts, the program presents 30+ tracks and 100+ sessions. Topics range from industrial security and integrated CCTV control to aviation security, counterterrorism strategies, personal data protection, and beyond.

Pre-registering for SECON & eGISEC 2025 will get you quick access to the event via fast badge issuance and faster entry. 
Attendees will also save big, as they won't have to pay the 15,000 KRW on-site registration fee. The deadline for preregistration is 6:00 p.m. (KST) on March 18.

About the Author
Pam Baker
Contributing Writer
A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which are "Decision Intelligence for Dummies" and "ChatGPT For Dummies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
  • Unmanaged Devices: The Overlooked Threat CISOs Must Confront
    www.informationweek.com
No matter the strategy, companies must approach securing unmanaged devices with sensitivity and respect for employee privacy.

Dark Reading, Staff & Contributors | March 13, 2025 | 1 Min Read | MBI via Alamy Stock

One of my favorite things about working in security, and tech in general, is the shared attitude that no problem is unsolvable. We transitioned virtually the entire Internet from "http" to "https" in the name of security. Clearly, we're not afraid of a challenge. But there's one problem that many companies haven't even tried to solve, and its very name seems to communicate a kind of surrender: unmanaged devices.

By "unmanaged devices," we're talking about laptops, tablets, and phones that employees use at work but that aren't covered by a mobile device management (MDM) solution, and so are outside the visibility and control of security or IT, often because the company has no effective way to prevent personal devices from authenticating. These devices might belong to contractors, Linux users, or employees using personal devices under a bring-your-own-device (BYOD) policy. A 2022 Kolide study found that 47% of companies allow unmanaged devices to access company resources. 
That means nearly half let sensitive data disappear onto devices with no safeguards.

Read the Full Article on Dark Reading

About the Author
Dark Reading
Staff & Contributors
Dark Reading: Connecting the Information Security Community. Long one of the most widely read cybersecurity news sites on the Web, Dark Reading is also the most trusted online community for security professionals. Our community members include thought-leading security researchers, CISOs, and technology specialists, along with thousands of other security professionals.
  • 3 Tech Deep Dives That CIOs Must Absolutely Make
    www.informationweek.com
Mary E. Shacklett, President of Transworld Data | March 12, 2025 | 7 Min Read | imageBROKER.com via Alamy Stock

When I was a junior programmer/analyst on my first IT job, I was working with a programmer-mentor named Bob who was teaching me to code subroutines. The day's conversation got around to the CIO, and Bob unexpectedly said, "That guy's nothing more than a pencil pusher. He doesn't have a clue about what we're doing!"

Bob's words stuck with me, especially after I became a CIO. I kept thinking about the side conversations that happen in cubicles. I determined that although it wasn't my business as a CIO to code, I would make it my business to stay atop technology details so I could actively interact with my technical staff members in a value-added way. I decided to also learn how to communicate about technology at a plain-English top level with other executives and board members.

Staying on top of technology at a detailed level isn't easy for CIOs, who have a broad range of responsibilities to fulfill. Meanwhile, it's crucial to be able to articulate complicated tech in plain English to superiors who lack a tech background, even when your own strength might be in science and engineering, not public speaking. Nevertheless, it's absolutely essential for CIOs to do both, or they risk losing the respect of their superiors and their staff.

Here are three tech deep dives that CIOs must make in 2025 so they can meet the technology expectations of their superiors and staffs:

Security

Security worries corporate boards. It's a key IT responsibility, and as cyberattacks grow more sophisticated, preventing them is becoming more than just monitoring the periphery of the network and conducting security audits. 
Using traditional security analysts who are generalized in their knowledge also might not suffice. Enter technologies like network and system observability, which can probe beyond monitoring, drilling down to the root causes of security threats and interpreting events based upon the relationships between data points and access points. You'll have to break down the concept of observability, and possibly the evolution of new tech roles in security, for the board and executives who will be asked to fund them.

On the IT staff side, implementing observability will be a topic of technical discussion. There may also be a need to discuss new security roles and positions. For instance, in sensitive industries like finance, law enforcement, healthcare, or aerospace, you may need a cyberthreat hunter who seeks out malware that may be dormant and embedded in systems, waiting to be activated. Or it may be time for a security forensics specialist who can get to the bottom of a breach to identify the perpetrator. These positions are more specialized than security analyst. You may have to develop the skill sets for cyberhunting or forensics internally or seek them outside. Adding these roles could force a realignment of duties on the IT security staff, and it will be important for you to work closely with your staff.

Generative and Agentive AI

Companies are flocking to invest in AI, with boards and CEOs wanting to know about it, and the data science and IT departments wanting direction on it. Generative AI is the most common AI used, but how many boards know what GenAI is and how it works? Meanwhile, agentive AI, in which AI not only makes decisions but acts upon them, is coming into view.

Both forms of AI can dramatically impact business strategies, customer relationships, business processes, and employee headcount. CEOs and boards need to know about these forms of AI: what they are capable of doing, where the risks are, and what the impact could be. 
They will come to the CIO for information. They don't need to know about every nut and bolt, but they do need enough working knowledge to understand the technology at a conceptual business level.

On the IT and data science staff side, generative AI engines must operate on quality data from a variety of external and internal feeds that must be vetted. In some cases, ETL (extract, transform, load) software must be used to clean and normalize the data. The technical approach to doing this needs to be discussed and implemented. It is a plus for everyone if the CIO partakes in some of these meetings. With agentive AI, there should be discussions about technology readiness and about ethical guardrails on just how much autonomous work AI should be allowed to perform on its own. For all AI, security and refresh cycles for data need to be defined and executed, and the algorithms operating on the data must be trialed and tuned. Collectively, these activities require project approval and budget allotments, so it is in the staff's and the CIO's best interests that they get discussed technically, so the nature of the work, its challenges, and its opportunities are clearly understood by all.

NaaS

We've heard of IaaS (infrastructure as a service), SaaS (software as a service), and PaaS (platform as a service), and now there is NaaS (network as a service). What they have in common is that they are all cloud services. The intent is to shift IT functions to the cloud so you have less direct responsibility for managing them in-house. Boards and C-level executives are attracted to cloud services because they perceive the cloud as being less expensive, easier to manage, and a way to avoid investing in technology that will be obsolete three years later. But now there is NaaS, which most of them haven't heard about. Just what is NaaS (network outsourcing), and what does it do for the company? 
They will ask the CIO to explain it. On the IT side, if you're discussing NaaS, there are decisions to be made as to how much (if any) of the network you're willing to outsource. Also, if you did outsource, what would be the impact on cost, management, security, bandwidth, and application-integration service levels? The discussion can get into the weeds of the technology, and the CIO should be prepared to go there.

The Quandary for the CIO

The quandary the CIO faces is that he or she can't be all things to all people but is often expected to be. It's why, once over lunch, the CFO of my company told me, "I'm sure glad I'm not doing your job. It seems impossible!"

There were days when I thought so, too! There were days when I spent the majority of my time doing what my old mentor Bob complained about: pencil pushing, for budget justifications, headcount increases, security and compliance reporting, and vendor negotiations. There were also days spent in meetings with other C-level managers to explain new technologies so the path could be smoothed for IT project work with a minimum of user resistance. All of these CIO tasks are necessary, but the IT staff doesn't see them. I understood this, and I also understood that my own staff had expectations. One of them was that I kept my technology chops sharp so I could engage with them in a manner Bob would approve. "This is work!" I thought to myself. "But you must do both."

About the Author
Mary E. Shacklett
President of Transworld Data
Mary E. Shacklett is an internationally recognized technology commentator and president of Transworld Data, a marketing and technology services firm. Prior to founding her own company, she was vice president of product research and software development for Summit Information Systems, a computer software company, and vice president of strategic planning and technology at FSI International, a multinational manufacturer in the semiconductor industry. Mary has business experience in Europe, Japan, and the Pacific Rim. 
She has a BS degree from the University of Wisconsin and an MA from the University of Southern California, where she taught for several years. She is listed in Who's Who Worldwide and in Who's Who in the Computer Industry.
  • How to Turn Developer Team Friction Into a Positive Force
    www.informationweek.com
John Edwards, Technology Journalist & Author | March 12, 2025 | 5 Min Read | Dragos Condrea via Alamy Stock Photo

Teams occasionally generate a certain amount of internal friction, and development staffs are no exception. Yet, when managed properly, team friction can actually be turned into a motivating force.

Developer team friction can become a positive driving force when it encourages diverse perspectives, promotes critical thinking, fosters innovation, and improves communication skills, observes JB McGinnis, a principal with Deloitte Consulting. "Constructive disagreements can lead to more robust solutions, continuous improvement, and stronger team cohesion," he explains in an email interview. "By tapping into and exploring this friction positively, teams can enhance performance and drive innovation."

Friction can be a fantastic driver for positive change, states Andy Miears, a director with technology research and advisory firm ISG. "When members of a development team are at odds with each other, it often indicates some degree of inefficiency, lack of work product quality, a poor working environment, or unclear roles and responsibilities," he says via email. "Using friction as a compelling way to identify, prioritize, and address pain points is a healthy behavior for any high-performing team."

Multiple Benefits

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills."

Team friction often occurs when a developer passionately advocates a new approach or solution. That's generally a good thing, notes Stew Beck, director of engineering at work product management solutions provider iManage. 
    "When team members have conflicting ideas, you naturally end up with some friction -- it's something you want to have on every team," he says via email. If team members aren't advocating their own ideas, there's a risk they're not fully engaged in the problem. "Without friction, teams could be missing out on a way to make the product better."

    Allowing team friction in a controlled and safe way helps everyone. "Team members can challenge ideas and ways of accomplishing a task, encourage better results, and hold each other accountable to shared objectives, standards, and processes," Miears says.

    Team seniority and status shouldn't matter. "The best ideas don't always come from the most senior person in the room," Beck observes. Yet failing to encourage open discussion, regardless of rank, risks overlooking something important that could cost the team, and the entire enterprise, later.

    Channeling Friction

    To channel friction into positive results, the team leader should encourage balanced, constructive feedback. "Additionally, the leader should commit to creating an environment that's open to a wide set of opinions, where teammates are encouraged to share their thoughts," McGinnis advises.

    The team leader should schedule regular meetings with the development team to identify what's currently working and, more importantly, what may be failing. "In a mature Agile development framework, retrospectives should take place at the end of every sprint," Miears recommends. Larger retrospectives, meanwhile, should be scheduled at the end of releases or program increments. "These sessions should be used to create new, better, or more efficient value for users, stakeholders, and the overall team."

    Maintaining Control

    Team leaders should set clear expectations and goals for all members. "These objectives should be defined for both the team as a whole and for individual members," McGinnis says. Leading by example is also critical.
    "As a leader, you are a reflection of your team, so demonstrating the handling of conflicts with a professional demeanor, while showing empathy, goes a long way."

    Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team should always be the No. 1 priority," Miears says. "Make it a safe space."

    As a leader, it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do, it's important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." Staying in tune with these differences and quirks helps to foster a healthy discussion environment.

    Parting Thought

    Team building is a great way to ensure the team feels safe when friction arises, Miears says. "Celebrate successes and individual accomplishments together," he recommends. "Do the work to build a safe and inclusive culture in which the team can thrive."

    About the Author

    John Edwards, Technology Journalist & Author

    John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and '90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services.
    His "Behind the Screens" commentaries made him the world's first known professional blogger.